Dataset columns (type and value/length range):
doi: string (length 10–10)
chunk-id: int64 (0–936)
chunk: string (length 401–2.02k)
id: string (length 12–14)
title: string (length 8–162)
summary: string (length 228–1.92k)
source: string (length 31–31)
authors: string (length 7–6.97k)
categories: string (length 5–107)
comment: string (length 4–398)
journal_ref: string (length 8–194)
primary_category: string (length 5–17)
published: string (length 8–8)
updated: string (length 8–8)
references: list
2308.00436
19
Step regeneration Given the target and the necessary information for the step, we can now ask the LLM to achieve the target independently, using only the collected information and without seeing the original step. Because the step is usually a small jump from previous conclusions, and the information collection stage has already filtered out irrelevant information, we can usually trust the regeneration results. The prompt for this stage is: We are in the process of solving a math problem. We have some information from the problem: Information 0: [Information I0] The following are some previous steps: Step 0: [Step S0] The target for the next step is: [Target] Please try to achieve the target with the information from the problem or previous steps. Here [Target] is the output from the target extraction stage. [Information Ii] and [Step Si] correspond to the specific items selected by the information collection stage. In Figure 1, only Step 4, and no information from the question, is directly related to the current step, so SelfCheck simply copies the content of Step 4 into [Step S0] and removes the block containing [Information Ii]. Result comparison The last step is to compare results from the regeneration stage and the original step with the following prompt:
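Below is a minimal sketch (not the authors' released code) of how this regeneration prompt could be assembled programmatically; the function name and the way the information and step blocks are dropped when empty are assumptions inferred from the description above.

```python
# Hypothetical helper that fills the regeneration-prompt template described above.
def build_regeneration_prompt(collected_info, selected_steps, target):
    lines = ["We are in the process of solving a math problem."]
    if collected_info:  # the information block is removed when nothing was selected
        lines.append("We have some information from the problem:")
        lines += [f"Information {i}: {info}" for i, info in enumerate(collected_info)]
    if selected_steps:
        lines.append("The following are some previous steps:")
        lines += [f"Step {i}: {step}" for i, step in enumerate(selected_steps)]
    lines.append(f"The target for the next step is: {target}")
    lines.append("Please try to achieve the target with the information from the problem or previous steps.")
    return "\n".join(lines)

# Example matching the Figure 1 case: only Step 4 is relevant, no problem information.
print(build_regeneration_prompt([], ["<content of Step 4>"], "<extracted target>"))
```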
2308.00436#19
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
The recent progress in large language models (LLMs), especially the invention of chain-of-thought prompting, has made it possible to automatically answer questions by stepwise reasoning. However, when faced with more complicated problems that require non-linear thinking, even the strongest LLMs make mistakes. To address this, we explore whether LLMs are able to recognize errors in their own step-by-step reasoning, without resorting to external resources. To this end, we propose SelfCheck, a general-purpose zero-shot verification schema for recognizing such errors. We then use the results of these checks to improve question-answering performance by conducting weighted voting on multiple solutions to the question. We test SelfCheck on three datasets (GSM8K, MathQA, and MATH) and find that it successfully recognizes errors and, in turn, increases final answer accuracies.
http://arxiv.org/pdf/2308.00436
Ning Miao, Yee Whye Teh, Tom Rainforth
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230801
20231005
[ { "id": "2206.02336" }, { "id": "2302.13971" }, { "id": "2110.14168" }, { "id": "2305.20050" }, { "id": "2204.02311" }, { "id": "2301.13379" }, { "id": "2306.03872" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.08774" }, { "id": "2212.09561" } ]
2308.00675
19
Unseen API usage on a newly collected dataset. Existing benchmarks used in the literature come with a limited set of tools. To explore real-world use cases involving a large number of tools, we collect a new benchmark called the LLM Cloud CLI that consists of 200 commands representing the functionalities of the Google Cloud Platform (GCP) command-line interface (CLI). Each command in our CLI is renamed from its corresponding GCP command, preserving the semantics and logic of the original tools while being unseen to the language models. For instance, the command gcloud compute create NAME, responsible for creating a virtual machine, is renamed to llmvm compute make NAME. The renaming conventions also allow us to utilize authentic GCP examples as few-shot demos and leverage the corresponding GCP documentation. The benchmark comprises 50 questions, each focused on creating and configuring specific cloud services using command-line tools. Each question requires at least two commands to complete the task. We show an example in Figure 3 and include more in the appendix. Due to the length constraints of the LLM we use, we cannot fit the documentation of all 200 tools in a single prompt. Therefore, we employ a simple TF-IDF search using the questions as queries to retrieve the most relevant documentation and truncate it to fit within the prompt length. More details can be found in the appendix.
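As a rough sketch of the retrieval step described here, assuming a scikit-learn TF-IDF index; the `tool_docs` mapping, `top_k`, and `max_chars` budget are illustrative assumptions, not values from the paper.

```python
# Sketch of TF-IDF retrieval of the most relevant tool documentation for a question.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def retrieve_docs(question, tool_docs, top_k=5, max_chars=2000):
    names, docs = zip(*tool_docs.items())
    vectorizer = TfidfVectorizer()
    doc_matrix = vectorizer.fit_transform(docs)        # one row per tool documentation
    query_vec = vectorizer.transform([question])
    scores = cosine_similarity(query_vec, doc_matrix)[0]
    ranked = sorted(zip(names, docs, scores), key=lambda x: -x[2])[:top_k]
    # Concatenate the best-matching docs and truncate to the prompt budget.
    return "\n\n".join(f"{n}:\n{d}" for n, d, _ in ranked)[:max_chars]
```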
2308.00675#19
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Today, large language models (LLMs) are taught to use new tools by providing a few demonstrations of the tool's usage. Unfortunately, demonstrations are hard to acquire, and can result in undesirable biased usage if the wrong demonstration is chosen. Even in the rare scenario that demonstrations are readily available, there is no principled selection protocol to determine how many and which ones to provide. As tasks grow more complex, the selection search grows combinatorially and invariably becomes intractable. Our work provides an alternative to demonstrations: tool documentation. We advocate the use of tool documentation, descriptions for the individual tool usage, over demonstrations. We substantiate our claim through three main empirical findings on 6 tasks across both vision and language modalities. First, on existing benchmarks, zero-shot prompts with only tool documentation are sufficient for eliciting proper tool usage, achieving performance on par with few-shot prompts. Second, on a newly collected realistic tool-use dataset with hundreds of available tool APIs, we show that tool documentation is significantly more valuable than demonstrations, with zero-shot documentation significantly outperforming few-shot without documentation. Third, we highlight the benefits of tool documentations by tackling image generation and video tracking using just-released unseen state-of-the-art models as tools. Finally, we highlight the possibility of using tool documentation to automatically enable new applications: by using nothing more than the documentation of GroundingDino, Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the just-released Grounded-SAM and Track Anything models.
http://arxiv.org/pdf/2308.00675
Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister
cs.CL, cs.AI, cs.CV, cs.LG
null
null
cs.CL
20230801
20230801
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2204.10878" }, { "id": "1910.08210" }, { "id": "2107.07653" }, { "id": "2201.11903" }, { "id": "2305.17126" }, { "id": "1704.07535" }, { "id": "2205.01068" }, { "id": "2203.05115" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2305.04091" }, { "id": "2303.05499" }, { "id": "2107.07566" }, { "id": "2110.14168" }, { "id": "2210.03350" }, { "id": "2303.11381" }, { "id": "2101.06804" }, { "id": "2304.08354" }, { "id": "2212.14024" }, { "id": "2305.18752" }, { "id": "2211.10435" }, { "id": "2303.04671" }, { "id": "2210.12810" }, { "id": "1808.09588" }, { "id": "2304.11062" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2210.02406" }, { "id": "2212.10560" }, { "id": "2303.04129" }, { "id": "1704.01696" }, { "id": "2302.00923" }, { "id": "2211.12588" }, { "id": "1908.03557" }, { "id": "2210.05359" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2201.08239" }, { "id": "2303.16199" }, { "id": "2304.09842" }, { "id": "2204.01691" }, { "id": "2305.01625" }, { "id": "2303.12712" }, { "id": "2207.05608" }, { "id": "2303.03846" }, { "id": "2211.11559" }, { "id": "2207.01206" }, { "id": "2303.09014" }, { "id": "2205.12255" }, { "id": "2208.03188" } ]
2308.00245
20
… optimizations in §4.3.3. 3.3 Conceptual Workflow Given a bug report containing a suspicious variable 𝑣 and its residing function 𝐹, the workflow Φ is as follows: (1) Φ1(𝑣) → {𝑖(𝑣)}: identify the initializer(s) 𝑖(𝑣) of 𝑣 from the bug report. (2) Φ2(𝐹, 𝑖(𝑣)) → C: extract the post-constraint C for each 𝑖(𝑣). (3) Φ3(𝑣, 𝐹, C) → InitStatus(𝑣): summarize the initialization status of variable 𝑣 after all possible initializers complete (merging multiple initializers). Decision Policy. The decision policy Δ is defined as: Δ(𝑣) = non-bug if InitStatus(𝑣) = must_init, and Δ(𝑣) = potential bug if InitStatus(𝑣) ≠ must_init. In this policy, we adopt a conservative approach by treating all variables not explicitly marked as must_init as potential vulnerabilities. It is worth noting that this policy may introduce some false positives; for example, it might over-approximate preconditions.
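A minimal sketch of the decision policy Δ described above, assuming InitStatus is reported as a string; status values other than "must_init" (such as "may_init") are assumptions used only for illustration.

```python
# Conservative decision policy: anything not definitely initialized stays a candidate bug.
def decision_policy(init_status: str) -> str:
    """Return 'non-bug' only when the variable is summarized as must_init."""
    return "non-bug" if init_status == "must_init" else "potential bug"

assert decision_policy("must_init") == "non-bug"
assert decision_policy("may_init") == "potential bug"  # kept, possibly a false positive
```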
2308.00245#20
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Static analysis is a widely used technique in software engineering for identifying and mitigating bugs. However, a significant hurdle lies in achieving a delicate balance between precision and scalability. Large Language Models (LLMs) offer a promising alternative, as recent advances demonstrate remarkable capabilities in comprehending, generating, and even debugging code. Yet, the logic of bugs can be complex and require sophisticated reasoning and a large analysis scope spanning multiple functions. Therefore, at this point, LLMs are better used in an assistive role to complement static analysis. In this paper, we take a deep dive into the open space of LLM-assisted static analysis, using use-before-initialization (UBI) bugs as a case study. To this end, we develop LLift, a fully automated framework that interfaces with both a static analysis tool and an LLM. By carefully designing the framework and the prompts, we are able to overcome a number of challenges, including bug-specific modeling, the large problem scope, the non-deterministic nature of LLMs, etc. Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs produced by static analysis, LLift demonstrates a potent capability, showcasing a reasonable precision (50%) and appearing to have no missing bugs. It even identified 13 previously unknown UBI bugs in the Linux kernel. This research paves the way for new opportunities and methodologies in using LLMs for bug discovery in extensive, real-world datasets.
http://arxiv.org/pdf/2308.00245
Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian
cs.SE, cs.AI
null
null
cs.SE
20230801
20231115
[ { "id": "2305.10601" }, { "id": "2107.03374" }, { "id": "2210.11610" }, { "id": "2305.16291" }, { "id": "2305.16151" }, { "id": "2303.18223" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "2203.02155" }, { "id": "2304.11938" }, { "id": "2304.03262" }, { "id": "2304.10513" }, { "id": "2201.11903" }, { "id": "2305.12138" }, { "id": "2305.12865" }, { "id": "2303.08774" }, { "id": "2306.01987" }, { "id": "2304.06815" } ]
2308.00352
20
Figure 3: A diagram showing the software development process in MetaGPT, emphasizing its significant dependence on SOPs. A more detailed demonstration can be found in Appendix B. … produces a meticulously crafted software solution. We provide a detailed schematic (Figure 3) and a concrete instance (Appendix B) of the SOP workflow in MetaGPT. 3.2 COMMUNICATION PROTOCOL Structured Communication Interfaces Most current LLM-based multi-agent frameworks (Li et al., 2023; Zhuge et al., 2023; Zhang et al., 2023; Park et al., 2023) utilize unconstrained natural language as a communication interface. However, despite the versatility of natural language, a question arises: does pure natural language communication suffice for solving complex tasks? For example, in the telephone game (or Chinese whispers)2, after several rounds of communication, the original information may be quite distorted. Inspired by human social structures, we propose using structured communication to formulate the communication of agents. We establish a schema and format for each role and request that individuals provide the necessary outputs based on their specific role and context. As shown in Figure 3, the Architect agent generates two outputs: the system interface design and a sequence flow diagram. These contain system module design and interaction sequences, which serve as important deliverables for Engineers. Unlike ChatDev (Zhao et al., 2023), agents in MetaGPT
2308.00352#20
MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework
Remarkable progress has been made on automated problem solving through societies of agents based on large language models (LLMs). Existing LLM-based multi-agent systems can already solve simple dialogue tasks. Solutions to more complex tasks, however, are complicated through logic inconsistencies due to cascading hallucinations caused by naively chaining LLMs. Here we introduce MetaGPT, an innovative meta-programming framework incorporating efficient human workflows into LLM-based multi-agent collaborations. MetaGPT encodes Standardized Operating Procedures (SOPs) into prompt sequences for more streamlined workflows, thus allowing agents with human-like domain expertise to verify intermediate results and reduce errors. MetaGPT utilizes an assembly line paradigm to assign diverse roles to various agents, efficiently breaking down complex tasks into subtasks involving many agents working together. On collaborative software engineering benchmarks, MetaGPT generates more coherent solutions than previous chat-based multi-agent systems. Our project can be found at https://github.com/geekan/MetaGPT
http://arxiv.org/pdf/2308.00352
Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, Jürgen Schmidhuber
cs.AI, cs.MA
null
null
cs.AI
20230801
20231106
[ { "id": "2308.12950" }, { "id": "2305.17066" }, { "id": "1511.09249" }, { "id": "2308.11432" }, { "id": "2306.08568" }, { "id": "2310.02304" }, { "id": "2303.08896" }, { "id": "2204.05999" }, { "id": "2309.16797" }, { "id": "2002.08155" }, { "id": "2305.16960" } ]
2308.00436
20
Result comparison The last step is to compare results from the regeneration stage and the original step with the following prompt: The following are 2 solutions to a math problem. Solution 1: [Regeneration output] Solution 2: [Step i] Compare the key points from both solutions step by step and then check whether Solution 1 ‘supports’, ‘contradicts’ or ‘is not directly related to’ the conclusion in Solution 2. Pay special attention to the difference in numbers. If the regeneration output ‘supports’/‘contradicts’ the original step, we can conclude that the original step is likely correct/incorrect, respectively. Sometimes, the correctness of the original step cannot be directly inferred from the regeneration output. For example, when the target is to simplify an equation, there may be multiple valid solutions. In such cases, we are not sure about the correctness of the original step, which makes ‘is not directly related to’ the third possible outcome of the check. 3.2 RESULTS INTEGRATION
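A small sketch of how the three comparison outcomes could be mapped onto the per-step results r_i ∈ {−1, 0, 1} used in Section 3.2 below; the string matching on the LLM's free-form comparison output is an assumption, not the paper's parsing rule.

```python
# Map the comparison verdict to the step-checking result r_i.
def comparison_to_result(comparison_text: str) -> int:
    text = comparison_text.lower()
    if "contradict" in text:
        return -1        # regeneration contradicts the original step
    if "support" in text:
        return 1         # regeneration supports the original step
    return 0             # "is not directly related to": inconclusive check
```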
2308.00436#20
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
The recent progress in large language models (LLMs), especially the invention of chain-of-thought prompting, has made it possible to automatically answer questions by stepwise reasoning. However, when faced with more complicated problems that require non-linear thinking, even the strongest LLMs make mistakes. To address this, we explore whether LLMs are able to recognize errors in their own step-by-step reasoning, without resorting to external resources. To this end, we propose SelfCheck, a general-purpose zero-shot verification schema for recognizing such errors. We then use the results of these checks to improve question-answering performance by conducting weighted voting on multiple solutions to the question. We test SelfCheck on three datasets (GSM8K, MathQA, and MATH) and find that it successfully recognizes errors and, in turn, increases final answer accuracies.
http://arxiv.org/pdf/2308.00436
Ning Miao, Yee Whye Teh, Tom Rainforth
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230801
20231005
[ { "id": "2206.02336" }, { "id": "2302.13971" }, { "id": "2110.14168" }, { "id": "2305.20050" }, { "id": "2204.02311" }, { "id": "2301.13379" }, { "id": "2306.03872" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.08774" }, { "id": "2212.09561" } ]
2308.00675
20
Image editing with natural language. We consider image editing as a form of qualitative evaluation. This process calls for the model to plan and use different vision modules to handle complex natural language instructions. For instance, to execute an instruction like "replace the red bus with a green bicycle", the model must localize the red bus, generate its segmentation mask, and then inpaint the masked area. We use the tool sets from VisProg. Unlike VisProg, which depends on few-shot demonstrations, our model only looks at the module documentation. We further include the recently released image understanding works, Segment Anything (SAM) [30] and Grounding DINO [38], to expand the tool set and test the zero-shot capability on new and unseen tools in a plug-and-play fashion. Video tracking. Video tracking is also utilized in this study as a qualitative evaluation. This task aims to acquire the masks of a tracked object in each frame of a video, necessitating the deployment of processes such as object localization, segmentation, and tracking. In addition to SAM and Grounding DINO, we incorporate the documentation of an unseen object tracking module, XMem [14], into the VisProg framework with the aim of showcasing the model's ability to adapt and employ new tools, without explicit demonstrations, on a different task.
2308.00675#20
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Today, large language models (LLMs) are taught to use new tools by providing a few demonstrations of the tool's usage. Unfortunately, demonstrations are hard to acquire, and can result in undesirable biased usage if the wrong demonstration is chosen. Even in the rare scenario that demonstrations are readily available, there is no principled selection protocol to determine how many and which ones to provide. As tasks grow more complex, the selection search grows combinatorially and invariably becomes intractable. Our work provides an alternative to demonstrations: tool documentation. We advocate the use of tool documentation, descriptions for the individual tool usage, over demonstrations. We substantiate our claim through three main empirical findings on 6 tasks across both vision and language modalities. First, on existing benchmarks, zero-shot prompts with only tool documentation are sufficient for eliciting proper tool usage, achieving performance on par with few-shot prompts. Second, on a newly collected realistic tool-use dataset with hundreds of available tool APIs, we show that tool documentation is significantly more valuable than demonstrations, with zero-shot documentation significantly outperforming few-shot without documentation. Third, we highlight the benefits of tool documentations by tackling image generation and video tracking using just-released unseen state-of-the-art models as tools. Finally, we highlight the possibility of using tool documentation to automatically enable new applications: by using nothing more than the documentation of GroundingDino, Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the just-released Grounded-SAM and Track Anything models.
http://arxiv.org/pdf/2308.00675
Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister
cs.CL, cs.AI, cs.CV, cs.LG
null
null
cs.CL
20230801
20230801
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2204.10878" }, { "id": "1910.08210" }, { "id": "2107.07653" }, { "id": "2201.11903" }, { "id": "2305.17126" }, { "id": "1704.07535" }, { "id": "2205.01068" }, { "id": "2203.05115" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2305.04091" }, { "id": "2303.05499" }, { "id": "2107.07566" }, { "id": "2110.14168" }, { "id": "2210.03350" }, { "id": "2303.11381" }, { "id": "2101.06804" }, { "id": "2304.08354" }, { "id": "2212.14024" }, { "id": "2305.18752" }, { "id": "2211.10435" }, { "id": "2303.04671" }, { "id": "2210.12810" }, { "id": "1808.09588" }, { "id": "2304.11062" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2210.02406" }, { "id": "2212.10560" }, { "id": "2303.04129" }, { "id": "1704.01696" }, { "id": "2302.00923" }, { "id": "2211.12588" }, { "id": "1908.03557" }, { "id": "2210.05359" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2201.08239" }, { "id": "2303.16199" }, { "id": "2304.09842" }, { "id": "2204.01691" }, { "id": "2305.01625" }, { "id": "2303.12712" }, { "id": "2207.05608" }, { "id": "2303.03846" }, { "id": "2211.11559" }, { "id": "2207.01206" }, { "id": "2303.09014" }, { "id": "2205.12255" }, { "id": "2208.03188" } ]
2308.01423
20
[Table from Figure 5 — cif_id and Accessible Surface Area (m²/cm³): 2664 pcu+N47+E33, 5808.59; 1411 pcu+N274+E32, 5714.84; 8 pcu+N613+E90, 5665.73.] Figure 5. Example of a generator for the question "Can you generate the structures with the largest surface area". The generator establishes a plan, objective, and property for the human question. Based on this, it finds parents that satisfy the objective. It uses a genetic algorithm to create children genes and generate structures. This is repeated for a number of cycles to generate new MOFs, which are used to derive the final answer. Moreover, ChatMOF is engineered to employ a diverse set of toolkits that extend beyond the realm of LLMs. This includes capabilities such as file search, Internet search, and even simple calculations. These additional functionalities are primarily enabled by leveraging the varied capabilities provided by LangChain57, enhancing the overall functionality and utility of ChatMOF. Thus, it is not merely a material analysis tool, but a comprehensive system that can accommodate a wide array of tasks and operations.
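As a schematic of the generator loop described in the Figure 5 caption (an illustrative sketch, not ChatMOF's implementation; the `score`, `crossover`, `mutate`, and `build_structure` callables stand in for the property predictor and gene operations and are hypothetical):

```python
import random

def generate_mofs(population, score, crossover, mutate, build_structure,
                  n_parents=20, n_children=100, n_cycles=3):
    """Generic genetic-algorithm loop: pick the parents that best satisfy the
    objective, breed children 'genes', build structures, and repeat per cycle."""
    for _ in range(n_cycles):
        parents = sorted(population, key=score, reverse=True)[:n_parents]
        children = []
        for _ in range(n_children):
            p1, p2 = random.sample(parents, 2)
            children.append(build_structure(mutate(crossover(p1, p2))))
        population = parents + children
    # The best structures found are used to derive the final answer.
    return sorted(population, key=score, reverse=True)
```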
2308.01423#20
ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks
ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to predict and generate metal-organic frameworks (MOFs). By leveraging a large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key details from textual inputs and delivers appropriate responses, thus eliminating the necessity for rigid structured queries. The system is comprised of three core components (i.e. an agent, a toolkit, and an evaluator) and it forms a robust pipeline that manages a variety of tasks, including data retrieval, property prediction, and structure generation. The study further explores the merits and constraints of using a large language model (LLM) AI system in materials science and showcases its transformative potential for future advancements.
http://arxiv.org/pdf/2308.01423
Yeonghun Kang, Jihan Kim
cs.CL, cs.AI, cs.LG, physics.chem-ph
null
null
cs.CL
20230801
20230825
[ { "id": "2302.13971" }, { "id": "2306.11296" }, { "id": "2303.17580" }, { "id": "2305.18365" }, { "id": "2305.10601" }, { "id": "1810.04805" }, { "id": "2211.09085" }, { "id": "2304.05376" }, { "id": "2212.05238" }, { "id": "2108.07258" }, { "id": "2110.06197" }, { "id": "2306.06283" }, { "id": "2008.02275" }, { "id": "2303.12712" }, { "id": "2210.03629" }, { "id": "2205.00445" }, { "id": "2009.03300" } ]
2308.00245
21
Conceptually, LLift will not miss more bugs. The post-constraint guided path optimizations and decision policies are safe. 3.4 Turns and Conversations in LLMs We define two key concepts in interacting with LLMs: turn and conversation. Turn: A turn encapsulates a singular interaction with the LLM. Formally, it's defined as a tuple (𝑝, 𝑟), where 𝑝 represents the problem or question, and 𝑟 denotes the LLM's response. Conversation: Leveraging the capabilities of LLMs often necessitates a series of interactions, especially for complex problem-solving. A conversation is an ordered sequence of turns. A conversation comprising 𝑛 turns can be expressed as [(𝑝1, 𝑟1), (𝑝2, 𝑟2), ..., (𝑝𝑛, 𝑟𝑛)]. 4 DESIGN In Section §3.3, we introduced a conceptual workflow. Elaborating on that foundation, Figure 4 showcases a compelling illustration of our methodological approach. Yet, translating this workflow into
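A direct transcription of these two definitions into simple data structures; the field names are assumptions, since the definitions above only fix the pair (p, r) and the ordered sequence of turns.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Turn:
    prompt: str    # p: the problem or question sent to the LLM
    response: str  # r: the LLM's response

@dataclass
class Conversation:
    turns: List[Turn] = field(default_factory=list)  # [(p1, r1), (p2, r2), ...]

    def add(self, prompt: str, response: str) -> None:
        self.turns.append(Turn(prompt, response))
```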
2308.00245#21
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Static analysis is a widely used technique in software engineering for identifying and mitigating bugs. However, a significant hurdle lies in achieving a delicate balance between precision and scalability. Large Language Models (LLMs) offer a promising alternative, as recent advances demonstrate remarkable capabilities in comprehending, generating, and even debugging code. Yet, the logic of bugs can be complex and require sophisticated reasoning and a large analysis scope spanning multiple functions. Therefore, at this point, LLMs are better used in an assistive role to complement static analysis. In this paper, we take a deep dive into the open space of LLM-assisted static analysis, using use-before-initialization (UBI) bugs as a case study. To this end, we develop LLift, a fully automated framework that interfaces with both a static analysis tool and an LLM. By carefully designing the framework and the prompts, we are able to overcome a number of challenges, including bug-specific modeling, the large problem scope, the non-deterministic nature of LLMs, etc. Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs produced by static analysis, LLift demonstrates a potent capability, showcasing a reasonable precision (50%) and appearing to have no missing bugs. It even identified 13 previously unknown UBI bugs in the Linux kernel. This research paves the way for new opportunities and methodologies in using LLMs for bug discovery in extensive, real-world datasets.
http://arxiv.org/pdf/2308.00245
Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian
cs.SE, cs.AI
null
null
cs.SE
20230801
20231115
[ { "id": "2305.10601" }, { "id": "2107.03374" }, { "id": "2210.11610" }, { "id": "2305.16291" }, { "id": "2305.16151" }, { "id": "2303.18223" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "2203.02155" }, { "id": "2304.11938" }, { "id": "2304.03262" }, { "id": "2304.10513" }, { "id": "2201.11903" }, { "id": "2305.12138" }, { "id": "2305.12865" }, { "id": "2303.08774" }, { "id": "2306.01987" }, { "id": "2304.06815" } ]
2308.00352
21
2 https://en.wikipedia.org/wiki/Chinese_whispers communicate through documents and diagrams (structured outputs) rather than dialogue. These documents contain all necessary information, preventing irrelevant or missing content. Publish-Subscribe Mechanism Sharing information is critical in collaboration. For instance, Architects and Engineers often need to reference PRDs. However, communicating this information each time in a one-to-one manner, as indicated by previous work (Li et al., 2023; Zhao et al., 2023; Zhang et al., 2023), can complicate the communication topology, resulting in inefficiencies. To address this challenge, a viable approach is to store information in a global message pool. As shown in Figure 2 (left), we introduce a shared message pool that allows all agents to exchange messages directly. These agents not only publish their structured messages in the pool but also access messages from other entities transparently. Any agent can directly retrieve required information from the shared pool, eliminating the need to inquire about other agents and await their responses. This enhances communication efficiency.
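A minimal sketch of such a shared message pool; the Message fields and method names are assumptions for illustration, not MetaGPT's actual API.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Message:
    sender_role: str   # e.g. "ProductManager", "Architect"
    doc_type: str      # e.g. "PRD", "SystemDesign"
    content: str

class MessagePool:
    def __init__(self) -> None:
        self._messages: List[Message] = []

    def publish(self, message: Message) -> None:
        self._messages.append(message)

    def get(self, doc_type: str) -> List[Message]:
        # Any agent can pull what it needs without asking the sender and waiting.
        return [m for m in self._messages if m.doc_type == doc_type]
```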
2308.00352#21
MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework
Remarkable progress has been made on automated problem solving through societies of agents based on large language models (LLMs). Existing LLM-based multi-agent systems can already solve simple dialogue tasks. Solutions to more complex tasks, however, are complicated through logic inconsistencies due to cascading hallucinations caused by naively chaining LLMs. Here we introduce MetaGPT, an innovative meta-programming framework incorporating efficient human workflows into LLM-based multi-agent collaborations. MetaGPT encodes Standardized Operating Procedures (SOPs) into prompt sequences for more streamlined workflows, thus allowing agents with human-like domain expertise to verify intermediate results and reduce errors. MetaGPT utilizes an assembly line paradigm to assign diverse roles to various agents, efficiently breaking down complex tasks into subtasks involving many agents working together. On collaborative software engineering benchmarks, MetaGPT generates more coherent solutions than previous chat-based multi-agent systems. Our project can be found at https://github.com/geekan/MetaGPT
http://arxiv.org/pdf/2308.00352
Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, Jürgen Schmidhuber
cs.AI, cs.MA
null
null
cs.AI
20230801
20231106
[ { "id": "2308.12950" }, { "id": "2305.17066" }, { "id": "1511.09249" }, { "id": "2308.11432" }, { "id": "2306.08568" }, { "id": "2310.02304" }, { "id": "2303.08896" }, { "id": "2204.05999" }, { "id": "2309.16797" }, { "id": "2002.08155" }, { "id": "2305.16960" } ]
2308.00436
21
3.2 RESULTS INTEGRATION After running step-checking and getting a checking result for each step, we need an integration function ϕ to give a confidence score, w ∈ [0, 1], for the overall correctness of the solution. The input of ϕ should be a vector in the form of [r0, r1, ..., rn], where each item ri represents the step checking result for Step i. We will use ri = −1, 0, and 1 to represent the step-checking results ‘contradict’, ‘is not directly related to’ and ‘support’ respectively. We find that the following simple integration function works well in practice: w = ϕ([r0, r1, ..., rn]) = 2 · Sigmoid(−λ_{−1} · Σ_{i=0..n} 1[ri = −1] − λ_0 · Σ_{i=0..n} 1[ri = 0]).  (1)
2308.00436#21
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
The recent progress in large language models (LLMs), especially the invention of chain-of-thought prompting, has made it possible to automatically answer questions by stepwise reasoning. However, when faced with more complicated problems that require non-linear thinking, even the strongest LLMs make mistakes. To address this, we explore whether LLMs are able to recognize errors in their own step-by-step reasoning, without resorting to external resources. To this end, we propose SelfCheck, a general-purpose zero-shot verification schema for recognizing such errors. We then use the results of these checks to improve question-answering performance by conducting weighted voting on multiple solutions to the question. We test SelfCheck on three datasets (GSM8K, MathQA, and MATH) and find that it successfully recognizes errors and, in turn, increases final answer accuracies.
http://arxiv.org/pdf/2308.00436
Ning Miao, Yee Whye Teh, Tom Rainforth
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230801
20231005
[ { "id": "2206.02336" }, { "id": "2302.13971" }, { "id": "2110.14168" }, { "id": "2305.20050" }, { "id": "2204.02311" }, { "id": "2301.13379" }, { "id": "2306.03872" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.08774" }, { "id": "2212.09561" } ]
2308.00675
21
[Figure 4 plot: three panels (ScienceQA, TabMWP, NLVRv2); y-axis: accuracy; x-axis: number of demos; curves for "With doc" and "Without doc".] Figure 4: Tool-using performance with gpt-3.5-turbo on different benchmarks, covering both language and vision modalities. We report results with and without documentation (doc) and demonstrations (demo), and their combinations. Clearly, with documentation only (upper-left blue dot) shows competitive performance across all datasets. # 4 Empirical findings We showcase the importance of tool documentation in three ways: First, we show that tool documentation reduces the need for demonstrations (Section 4.1). Second, based on this finding, we further show that relying on documentation rather than demonstrations provides a more scalable solution to equip LLMs with a large number of available tools (Section 4.2). Finally, we show that with tool documentation alone, LLMs are able to comprehend and utilize the most recent vision models to accomplish impressive results on image editing and video tracking tasks, on which existing results are achieved either with human-crafted demos or predefined procedures (Section 4.3).
2308.00675#21
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Today, large language models (LLMs) are taught to use new tools by providing a few demonstrations of the tool's usage. Unfortunately, demonstrations are hard to acquire, and can result in undesirable biased usage if the wrong demonstration is chosen. Even in the rare scenario that demonstrations are readily available, there is no principled selection protocol to determine how many and which ones to provide. As tasks grow more complex, the selection search grows combinatorially and invariably becomes intractable. Our work provides an alternative to demonstrations: tool documentation. We advocate the use of tool documentation, descriptions for the individual tool usage, over demonstrations. We substantiate our claim through three main empirical findings on 6 tasks across both vision and language modalities. First, on existing benchmarks, zero-shot prompts with only tool documentation are sufficient for eliciting proper tool usage, achieving performance on par with few-shot prompts. Second, on a newly collected realistic tool-use dataset with hundreds of available tool APIs, we show that tool documentation is significantly more valuable than demonstrations, with zero-shot documentation significantly outperforming few-shot without documentation. Third, we highlight the benefits of tool documentations by tackling image generation and video tracking using just-released unseen state-of-the-art models as tools. Finally, we highlight the possibility of using tool documentation to automatically enable new applications: by using nothing more than the documentation of GroundingDino, Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the just-released Grounded-SAM and Track Anything models.
http://arxiv.org/pdf/2308.00675
Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister
cs.CL, cs.AI, cs.CV, cs.LG
null
null
cs.CL
20230801
20230801
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2204.10878" }, { "id": "1910.08210" }, { "id": "2107.07653" }, { "id": "2201.11903" }, { "id": "2305.17126" }, { "id": "1704.07535" }, { "id": "2205.01068" }, { "id": "2203.05115" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2305.04091" }, { "id": "2303.05499" }, { "id": "2107.07566" }, { "id": "2110.14168" }, { "id": "2210.03350" }, { "id": "2303.11381" }, { "id": "2101.06804" }, { "id": "2304.08354" }, { "id": "2212.14024" }, { "id": "2305.18752" }, { "id": "2211.10435" }, { "id": "2303.04671" }, { "id": "2210.12810" }, { "id": "1808.09588" }, { "id": "2304.11062" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2210.02406" }, { "id": "2212.10560" }, { "id": "2303.04129" }, { "id": "1704.01696" }, { "id": "2302.00923" }, { "id": "2211.12588" }, { "id": "1908.03557" }, { "id": "2210.05359" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2201.08239" }, { "id": "2303.16199" }, { "id": "2304.09842" }, { "id": "2204.01691" }, { "id": "2305.01625" }, { "id": "2303.12712" }, { "id": "2207.05608" }, { "id": "2303.03846" }, { "id": "2211.11559" }, { "id": "2207.01206" }, { "id": "2303.09014" }, { "id": "2205.12255" }, { "id": "2208.03188" } ]
2308.01423
21
In addition, ChatMOF incorporates the Atomic Simulation Environment (ASE)58 library as an integral toolkit to facilitate diverse operations on material structure data. The ASE library holds considerable importance in the field of materials science due to its capabilities, including atom manipulation, cell information acquisition, and visualization, among others. Similar to the function of a table searcher, when confronted with a query, ChatMOF devises a strategic plan and constructs suitable Python code utilizing the ASE library to fulfil the query's demands. Subsequently, this code is executed. # Evaluation
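An example of the kind of ASE code ChatMOF might generate for such a query (illustrative only; "example_mof.cif" is a hypothetical file name, not one from the paper):

```python
from ase.io import read

atoms = read("example_mof.cif")               # load the MOF structure from a CIF file
cell_lengths = atoms.get_cell().lengths()     # unit-cell vector lengths (Angstrom)
volume = atoms.get_volume()                   # cell volume (Angstrom^3)
n_atoms = len(atoms)                          # number of atoms in the cell
symbols = sorted(set(atoms.get_chemical_symbols()))  # elements present
print(f"{n_atoms} atoms, elements {symbols}, cell {cell_lengths}, volume {volume:.1f}")
```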
2308.01423#21
ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks
ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to predict and generate metal-organic frameworks (MOFs). By leveraging a large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key details from textual inputs and delivers appropriate responses, thus eliminating the necessity for rigid structured queries. The system is comprised of three core components (i.e. an agent, a toolkit, and an evaluator) and it forms a robust pipeline that manages a variety of tasks, including data retrieval, property prediction, and structure generation. The study further explores the merits and constraints of using a large language model (LLM) AI system in materials science and showcases its transformative potential for future advancements.
http://arxiv.org/pdf/2308.01423
Yeonghun Kang, Jihan Kim
cs.CL, cs.AI, cs.LG, physics.chem-ph
null
null
cs.CL
20230801
20230825
[ { "id": "2302.13971" }, { "id": "2306.11296" }, { "id": "2303.17580" }, { "id": "2305.18365" }, { "id": "2305.10601" }, { "id": "1810.04805" }, { "id": "2211.09085" }, { "id": "2304.05376" }, { "id": "2212.05238" }, { "id": "2108.07258" }, { "id": "2110.06197" }, { "id": "2306.06283" }, { "id": "2008.02275" }, { "id": "2303.12712" }, { "id": "2210.03629" }, { "id": "2205.00445" }, { "id": "2009.03300" } ]
2308.00245
22
[Figure 4 illustration: UBITect reports a potential use-before-initialization bug in libcfs_ip_str2addr(), where sscanf(str, ..., &a, &b, &c, &d) is followed by uses of a, b, c, and d; LLift identifies the initializer (sscanf), extracts its post-constraint (ret >= 4), and analyzes the initializer under that post-constraint.] Figure 4: Example run of LLift. For each potential bug, LLift ① (Φ1) identifies its initializer, ② (Φ2) extracts the post-constraints of the initializer, and ③ (Φ3) analyzes the behavior of the initializer with the post-constraints via LLM. practice presents its challenges. Even with the advanced knowledge and analytical capabilities of cutting-edge LLMs, achieving optimal results remains a challenge. Throughout the development of LLift, we identified several obstacles and subsequently introduced four distinct design components to effectively address these challenges.
2308.00245#22
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Static analysis is a widely used technique in software engineering for identifying and mitigating bugs. However, a significant hurdle lies in achieving a delicate balance between precision and scalability. Large Language Models (LLMs) offer a promising alternative, as recent advances demonstrate remarkable capabilities in comprehending, generating, and even debugging code. Yet, the logic of bugs can be complex and require sophisticated reasoning and a large analysis scope spanning multiple functions. Therefore, at this point, LLMs are better used in an assistive role to complement static analysis. In this paper, we take a deep dive into the open space of LLM-assisted static analysis, using use-before-initialization (UBI) bugs as a case study. To this end, we develop LLift, a fully automated framework that interfaces with both a static analysis tool and an LLM. By carefully designing the framework and the prompts, we are able to overcome a number of challenges, including bug-specific modeling, the large problem scope, the non-deterministic nature of LLMs, etc. Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs produced by static analysis, LLift demonstrates a potent capability, showcasing a reasonable precision (50%) and appearing to have no missing bugs. It even identified 13 previously unknown UBI bugs in the Linux kernel. This research paves the way for new opportunities and methodologies in using LLMs for bug discovery in extensive, real-world datasets.
http://arxiv.org/pdf/2308.00245
Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian
cs.SE, cs.AI
null
null
cs.SE
20230801
20231115
[ { "id": "2305.10601" }, { "id": "2107.03374" }, { "id": "2210.11610" }, { "id": "2305.16291" }, { "id": "2305.16151" }, { "id": "2303.18223" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "2203.02155" }, { "id": "2304.11938" }, { "id": "2304.03262" }, { "id": "2304.10513" }, { "id": "2201.11903" }, { "id": "2305.12138" }, { "id": "2305.12865" }, { "id": "2303.08774" }, { "id": "2306.01987" }, { "id": "2304.06815" } ]
2308.00352
22
Sharing all information with every agent can lead to information overload. During task execution, an agent typically prefers to receive only task-related information and to avoid distraction by irrelevant details. Effective management and dissemination of this information play a crucial role. We offer a simple and effective solution: a subscription mechanism (Figure 2, left). Instead of relying on dialogue, agents utilize role-specific interests to extract relevant information. They can select information to follow based on their role profiles. In practical implementations, an agent activates its action only after receiving all its prerequisite dependencies. As illustrated in Figure 3, the Architect mainly focuses on PRDs provided by the Product Manager, while documents from roles such as the QA Engineer might be of lesser concern. # 3.3 ITERATIVE PROGRAMMING WITH EXECUTABLE FEEDBACK In daily programming tasks, the processes of debugging and optimization play important roles. However, existing methods often lack a self-correction mechanism, which leads to unsuccessful code generation. Previous work introduced non-executable code review and self-reflection (Zhao et al., 2023; Yao et al., 2022; Shinn et al., 2023; Dong et al., 2023). However, they still face challenges in ensuring code executability and runtime correctness.
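A tiny sketch of this subscription-and-prerequisite idea, complementing the message-pool sketch above; the role profiles and the document types "SystemDesign", "TaskList", and "Code" are illustrative assumptions.

```python
# Role -> document types the role subscribes to (illustrative profiles).
ROLE_SUBSCRIPTIONS = {
    "Architect": {"PRD"},
    "Engineer": {"PRD", "SystemDesign", "TaskList"},
    "QAEngineer": {"Code"},
}

def ready_to_act(role: str, published_doc_types: set) -> bool:
    """An agent activates its action only after all its prerequisites are published."""
    return ROLE_SUBSCRIPTIONS[role] <= published_doc_types  # subset check

print(ready_to_act("Architect", {"PRD"}))                   # True: the Architect can start
print(ready_to_act("Engineer", {"PRD", "SystemDesign"}))    # False: TaskList still missing
```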
2308.00352#22
MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework
Remarkable progress has been made on automated problem solving through societies of agents based on large language models (LLMs). Existing LLM-based multi-agent systems can already solve simple dialogue tasks. Solutions to more complex tasks, however, are complicated through logic inconsistencies due to cascading hallucinations caused by naively chaining LLMs. Here we introduce MetaGPT, an innovative meta-programming framework incorporating efficient human workflows into LLM-based multi-agent collaborations. MetaGPT encodes Standardized Operating Procedures (SOPs) into prompt sequences for more streamlined workflows, thus allowing agents with human-like domain expertise to verify intermediate results and reduce errors. MetaGPT utilizes an assembly line paradigm to assign diverse roles to various agents, efficiently breaking down complex tasks into subtasks involving many agents working together. On collaborative software engineering benchmarks, MetaGPT generates more coherent solutions than previous chat-based multi-agent systems. Our project can be found at https://github.com/geekan/MetaGPT
http://arxiv.org/pdf/2308.00352
Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, Jürgen Schmidhuber
cs.AI, cs.MA
null
null
cs.AI
20230801
20231106
[ { "id": "2308.12950" }, { "id": "2305.17066" }, { "id": "1511.09249" }, { "id": "2308.11432" }, { "id": "2306.08568" }, { "id": "2310.02304" }, { "id": "2303.08896" }, { "id": "2204.05999" }, { "id": "2309.16797" }, { "id": "2002.08155" }, { "id": "2305.16960" } ]
2308.00436
22
w = ϕ([r0, r1, ..., rn]) = 2 · Sigmoid(−λ_{−1} · Σ_{i=0..n} 1[ri = −1] − λ_0 · Σ_{i=0..n} 1[ri = 0]),  (1) where λ_{−1} and λ_0 are two non-negative hyperparameters with λ_{−1} > λ_0; we fix λ_{−1} = 1 and λ_0 = 0.3 in our experiments. The rationale of this setup is that the more failed checks we see, the more likely the overall reasoning process, and thus the final solution, are wrong. Note here that, because the checks are themselves imperfect, we do not necessarily want to immediately reject the whole solution from a single step-check failure, especially for ri = 0 cases. This is why we take a ‘soft’ approach to the verification with a confidence score. The number of successful checks, i.e. Σ_{i=0..n} 1[ri = 1], is deliberately not included in our integration function, as an increased number of successful checks does not actually increase our confidence in the overall solution: shorter reasoning chains are generally preferable to longer ones for a given question and LLM. Once calculated, the resulting confidence score can be directly used as a weight for voting between different possible solutions. We can thus use SelfCheck to increase the accuracy of an LLM's answers by generating multiple possible solutions, calculating confidence scores for each, and then choosing our final answer through weighted voting. # 4 EXPERIMENTS
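A sketch of Eq. (1) and the weighted voting it feeds, using the hyperparameter values stated above (λ_{−1} = 1, λ_0 = 0.3); the `weighted_vote` helper and its data layout are assumptions, not the authors' code.

```python
import math
from collections import defaultdict

def confidence(step_results, lam_neg=1.0, lam_zero=0.3):
    """step_results: list of r_i in {-1, 0, 1} from the per-step checks."""
    n_contradict = sum(1 for r in step_results if r == -1)
    n_unrelated = sum(1 for r in step_results if r == 0)
    z = -lam_neg * n_contradict - lam_zero * n_unrelated
    return 2.0 / (1.0 + math.exp(-z))   # 2 * Sigmoid(z); equals 1 when every check passes

def weighted_vote(solutions):
    """solutions: list of (final_answer, step_results) pairs for one question."""
    scores = defaultdict(float)
    for answer, step_results in solutions:
        scores[answer] += confidence(step_results)
    return max(scores, key=scores.get)

# Example: two candidate solutions for the same question.
print(weighted_vote([("42", [1, 1, 0]), ("41", [1, -1, -1])]))  # -> "42"
```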
2308.00436#22
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
The recent progress in large language models (LLMs), especially the invention of chain-of-thought prompting, has made it possible to automatically answer questions by stepwise reasoning. However, when faced with more complicated problems that require non-linear thinking, even the strongest LLMs make mistakes. To address this, we explore whether LLMs are able to recognize errors in their own step-by-step reasoning, without resorting to external resources. To this end, we propose SelfCheck, a general-purpose zero-shot verification schema for recognizing such errors. We then use the results of these checks to improve question-answering performance by conducting weighted voting on multiple solutions to the question. We test SelfCheck on three datasets (GSM8K, MathQA, and MATH) and find that it successfully recognizes errors and, in turn, increases final answer accuracies.
http://arxiv.org/pdf/2308.00436
Ning Miao, Yee Whye Teh, Tom Rainforth
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230801
20231005
[ { "id": "2206.02336" }, { "id": "2302.13971" }, { "id": "2110.14168" }, { "id": "2305.20050" }, { "id": "2204.02311" }, { "id": "2301.13379" }, { "id": "2306.03872" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.08774" }, { "id": "2212.09561" } ]
2308.00675
22
# 4.1 Documentations sidestep the need for demonstrations In this section, we show how tool documentation reduces the need for demonstrations. We present the findings on three datasets: ScienceQA, TabMWP, and NLVRv2. We evaluate the model performance, with and without tool documentation, across varying numbers of demonstrations (demos) on each dataset. In Figure 4, we see that when provided with tool docs, the model is able to maintain stable performance as we strip away the number of demos used. In fact, without using any demos (i.e., 0-shot), the model is able to achieve performance on par with using 16-shot on TabMWP, and with using 12-shot on NLVRv2. On ScienceQA, the model can even achieve better performance solely with docs compared to additionally using 10-shot demos. On the other hand, without tool docs, the model performance is very sensitive to the number of demos used. As we decrease the number of demos, we see significant performance drops on all three datasets. This highlights the importance of tool docs and shows that they provide an effective way to reduce the reliance on demos. In Table 1, when compared to existing baseline methods, we also see that with docs, even 0-shot can perform very competitively.
2308.00675#22
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Today, large language models (LLMs) are taught to use new tools by providing a few demonstrations of the tool's usage. Unfortunately, demonstrations are hard to acquire, and can result in undesirable biased usage if the wrong demonstration is chosen. Even in the rare scenario that demonstrations are readily available, there is no principled selection protocol to determine how many and which ones to provide. As tasks grow more complex, the selection search grows combinatorially and invariably becomes intractable. Our work provides an alternative to demonstrations: tool documentation. We advocate the use of tool documentation, descriptions for the individual tool usage, over demonstrations. We substantiate our claim through three main empirical findings on 6 tasks across both vision and language modalities. First, on existing benchmarks, zero-shot prompts with only tool documentation are sufficient for eliciting proper tool usage, achieving performance on par with few-shot prompts. Second, on a newly collected realistic tool-use dataset with hundreds of available tool APIs, we show that tool documentation is significantly more valuable than demonstrations, with zero-shot documentation significantly outperforming few-shot without documentation. Third, we highlight the benefits of tool documentations by tackling image generation and video tracking using just-released unseen state-of-the-art models as tools. Finally, we highlight the possibility of using tool documentation to automatically enable new applications: by using nothing more than the documentation of GroundingDino, Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the just-released Grounded-SAM and Track Anything models.
http://arxiv.org/pdf/2308.00675
Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister
cs.CL, cs.AI, cs.CV, cs.LG
null
null
cs.CL
20230801
20230801
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2204.10878" }, { "id": "1910.08210" }, { "id": "2107.07653" }, { "id": "2201.11903" }, { "id": "2305.17126" }, { "id": "1704.07535" }, { "id": "2205.01068" }, { "id": "2203.05115" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2305.04091" }, { "id": "2303.05499" }, { "id": "2107.07566" }, { "id": "2110.14168" }, { "id": "2210.03350" }, { "id": "2303.11381" }, { "id": "2101.06804" }, { "id": "2304.08354" }, { "id": "2212.14024" }, { "id": "2305.18752" }, { "id": "2211.10435" }, { "id": "2303.04671" }, { "id": "2210.12810" }, { "id": "1808.09588" }, { "id": "2304.11062" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2210.02406" }, { "id": "2212.10560" }, { "id": "2303.04129" }, { "id": "1704.01696" }, { "id": "2302.00923" }, { "id": "2211.12588" }, { "id": "1908.03557" }, { "id": "2210.05359" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2201.08239" }, { "id": "2303.16199" }, { "id": "2304.09842" }, { "id": "2204.01691" }, { "id": "2305.01625" }, { "id": "2303.12712" }, { "id": "2207.05608" }, { "id": "2303.03846" }, { "id": "2211.11559" }, { "id": "2207.01206" }, { "id": "2303.09014" }, { "id": "2205.12255" }, { "id": "2208.03188" } ]
2308.01423
22
code is executed. # Evaluation To evaluate the performance of ChatMOF, analyses were conducted for the "search task", "prediction task", and "generation task". For evaluation purposes, questions for ChatMOF were created utilizing GPT-4.0 to generate various sentences about the given properties of a MOF. The respective questions for each task can be found in Table S1-3. Accuracy, gauging how adequately the logic responded to the question, was measured for each task. The analysis of ChatMOF's accuracy utilized three labels: "True", "False (token limit exceeded)", and "False (logic error)". The label "True" signifies that ChatMOF's logic was precise and the yielded answer was accurate. The term "False (Token Limit Exceeded)" was used when the token count in the LLM surpassed the maximum allowance of 4,000, thus obstructing further progress. Lastly, the "False (Logic Error)" label designated situations where an error in ChatMOF's logic resulted in an incorrect response or an anomaly. Such situations typically occur when an erroneous plan for obtaining an answer is devised or when an error in output interpretation diverts the system from the desired direction.
2308.01423#22
ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks
ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to predict and generate metal-organic frameworks (MOFs). By leveraging a large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key details from textual inputs and delivers appropriate responses, thus eliminating the necessity for rigid structured queries. The system is comprised of three core components (i.e. an agent, a toolkit, and an evaluator) and it forms a robust pipeline that manages a variety of tasks, including data retrieval, property prediction, and structure generation. The study further explores the merits and constraints of using a large language model (LLM) AI system in materials science and showcases its transformative potential for future advancements.
http://arxiv.org/pdf/2308.01423
Yeonghun Kang, Jihan Kim
cs.CL, cs.AI, cs.LG, physics.chem-ph
null
null
cs.CL
20230801
20230825
[ { "id": "2302.13971" }, { "id": "2306.11296" }, { "id": "2303.17580" }, { "id": "2305.18365" }, { "id": "2305.10601" }, { "id": "1810.04805" }, { "id": "2211.09085" }, { "id": "2304.05376" }, { "id": "2212.05238" }, { "id": "2108.07258" }, { "id": "2110.06197" }, { "id": "2306.06283" }, { "id": "2008.02275" }, { "id": "2303.12712" }, { "id": "2210.03629" }, { "id": "2205.00445" }, { "id": "2009.03300" } ]
2308.00352
23
Our first MetaGPT implementations overlooked certain errors during the review process, due to LLM hallucinations (Manakul et al., 2023). To overcome this, after initial code generation, we introduce an executable feedback mechanism to improve the code iteratively. More specifically, as shown in Figure 2, the Engineer is asked to write code based on the original product requirements and design. This enables the Engineer to continuously improve code using its own historical execution and debugging memory. To obtain additional information, the Engineer writes and executes the corresponding unit test cases, and subsequently receives the test results. If satisfactory, additional development tasks are initiated. Otherwise, the Engineer debugs the code before resuming programming. This iterative testing process continues until the test is passed or a maximum of 3 retries is reached. # 4 EXPERIMENTS 4.1 EXPERIMENTAL SETTING
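A schematic of this executable-feedback loop (not MetaGPT's actual implementation); `llm_write_code`, `llm_debug`, and `run_unit_tests` are hypothetical stand-ins for the LLM calls and the test runner.

```python
MAX_RETRIES = 3

def develop_with_feedback(requirements, design, llm_write_code, llm_debug, run_unit_tests):
    code = llm_write_code(requirements, design)
    history = []                                   # execution and debugging memory
    for attempt in range(MAX_RETRIES):
        passed, report = run_unit_tests(code)      # write/execute unit tests, get results
        history.append(report)
        if passed:
            return code                            # satisfactory: start the next task
        code = llm_debug(code, report, history)    # debug before resuming programming
    return code                                    # retry budget exhausted
```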
2308.00352#23
MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework
Remarkable progress has been made on automated problem solving through societies of agents based on large language models (LLMs). Existing LLM-based multi-agent systems can already solve simple dialogue tasks. Solutions to more complex tasks, however, are complicated through logic inconsistencies due to cascading hallucinations caused by naively chaining LLMs. Here we introduce MetaGPT, an innovative meta-programming framework incorporating efficient human workflows into LLM-based multi-agent collaborations. MetaGPT encodes Standardized Operating Procedures (SOPs) into prompt sequences for more streamlined workflows, thus allowing agents with human-like domain expertise to verify intermediate results and reduce errors. MetaGPT utilizes an assembly line paradigm to assign diverse roles to various agents, efficiently breaking down complex tasks into subtasks involving many agents working together. On collaborative software engineering benchmarks, MetaGPT generates more coherent solutions than previous chat-based multi-agent systems. Our project can be found at https://github.com/geekan/MetaGPT
http://arxiv.org/pdf/2308.00352
Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, Jürgen Schmidhuber
cs.AI, cs.MA
null
null
cs.AI
20230801
20231106
[ { "id": "2308.12950" }, { "id": "2305.17066" }, { "id": "1511.09249" }, { "id": "2308.11432" }, { "id": "2306.08568" }, { "id": "2310.02304" }, { "id": "2303.08896" }, { "id": "2204.05999" }, { "id": "2309.16797" }, { "id": "2002.08155" }, { "id": "2305.16960" } ]
2308.00436
23
# 4 EXPERIMENTS We now run experiments on three math-reasoning datasets to evaluate SelfCheck’s effectiveness in checking multi-step reasoning and improving final answer accuracies. Note here that our focus on math-reasoning problems is due to ease of performance evaluation and dataset availability; SelfCheck is directly applicable to other question-answering problems with nominal changes to our prompts. Datasets GSM8K (Cobbe et al., 2021), MathQA (Amini et al., 2019), and MATH (Hendrycks et al., 2021) consist of math problems on primary school, middle school, and competition levels, containing 1319, 2985, and 5000 test samples, respectively. For GSM8K and MathQA, we evaluate SelfCheck on the whole test sets. Due to limited resources, we use a subset of the MATH test set taken from Ling et al. (2023).1 Besides the levels of difficulty, the three datasets differ from each other in the following aspects. Firstly, MathQA provides 5 options to choose from for each problem, while GSM8K and MATH have no options. Secondly, GSM8K only has arithmetic problems, while MathQA and MATH contain more diverse problems in geometry, physics, probability, and algebra.
2308.00436#23
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
The recent progress in large language models (LLMs), especially the invention of chain-of-thought prompting, has made it possible to automatically answer questions by stepwise reasoning. However, when faced with more complicated problems that require non-linear thinking, even the strongest LLMs make mistakes. To address this, we explore whether LLMs are able to recognize errors in their own step-by-step reasoning, without resorting to external resources. To this end, we propose SelfCheck, a general-purpose zero-shot verification schema for recognizing such errors. We then use the results of these checks to improve question-answering performance by conducting weighted voting on multiple solutions to the question. We test SelfCheck on three datasets (GSM8K, MathQA, and MATH) and find that it successfully recognizes errors and, in turn, increases final answer accuracies.
http://arxiv.org/pdf/2308.00436
Ning Miao, Yee Whye Teh, Tom Rainforth
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230801
20231005
[ { "id": "2206.02336" }, { "id": "2302.13971" }, { "id": "2110.14168" }, { "id": "2305.20050" }, { "id": "2204.02311" }, { "id": "2301.13379" }, { "id": "2306.03872" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.08774" }, { "id": "2212.09561" } ]
2308.00675
23
By sidestepping the need for demos, we are able to alleviate the efforts needed to carefully curate these demos. For example, aligned with recent studies [81, 12], we observe in Figure 4 that the model performance is sensitive to which demos are used, as shown by the large performance variances under 5-shot on ScienceQA and 2-shot on NLVRv2. # 4.2 Documentations enable efficient scaling on tool-using The findings in Section 4.1 show that one can in fact reduce the reliance on few-shot demos with tool docs. By relaxing this constraint, we study whether tool docs enable a more scalable way to equip LLMs with a large number of tools, wherein few-shot demos can specifically fall short on covering limited tool-use cases. We present our findings in this section on the newly collected LLM Cloud CLI dataset with 200 available tools. Qualitative walk-through result. Figure 3 serves as a qualitative example illustrating the limitations of the LLMs with different information. As expected, the zero-shot LLM successfully identifies and responds to the touch command, which is familiar and well-known. However, when faced with the
2308.00675#23
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Today, large language models (LLMs) are taught to use new tools by providing a few demonstrations of the tool's usage. Unfortunately, demonstrations are hard to acquire, and can result in undesirable biased usage if the wrong demonstration is chosen. Even in the rare scenario that demonstrations are readily available, there is no principled selection protocol to determine how many and which ones to provide. As tasks grow more complex, the selection search grows combinatorially and invariably becomes intractable. Our work provides an alternative to demonstrations: tool documentation. We advocate the use of tool documentation, descriptions for the individual tool usage, over demonstrations. We substantiate our claim through three main empirical findings on 6 tasks across both vision and language modalities. First, on existing benchmarks, zero-shot prompts with only tool documentation are sufficient for eliciting proper tool usage, achieving performance on par with few-shot prompts. Second, on a newly collected realistic tool-use dataset with hundreds of available tool APIs, we show that tool documentation is significantly more valuable than demonstrations, with zero-shot documentation significantly outperforming few-shot without documentation. Third, we highlight the benefits of tool documentations by tackling image generation and video tracking using just-released unseen state-of-the-art models as tools. Finally, we highlight the possibility of using tool documentation to automatically enable new applications: by using nothing more than the documentation of GroundingDino, Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the just-released Grounded-SAM and Track Anything models.
http://arxiv.org/pdf/2308.00675
Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister
cs.CL, cs.AI, cs.CV, cs.LG
null
null
cs.CL
20230801
20230801
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2204.10878" }, { "id": "1910.08210" }, { "id": "2107.07653" }, { "id": "2201.11903" }, { "id": "2305.17126" }, { "id": "1704.07535" }, { "id": "2205.01068" }, { "id": "2203.05115" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2305.04091" }, { "id": "2303.05499" }, { "id": "2107.07566" }, { "id": "2110.14168" }, { "id": "2210.03350" }, { "id": "2303.11381" }, { "id": "2101.06804" }, { "id": "2304.08354" }, { "id": "2212.14024" }, { "id": "2305.18752" }, { "id": "2211.10435" }, { "id": "2303.04671" }, { "id": "2210.12810" }, { "id": "1808.09588" }, { "id": "2304.11062" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2210.02406" }, { "id": "2212.10560" }, { "id": "2303.04129" }, { "id": "1704.01696" }, { "id": "2302.00923" }, { "id": "2211.12588" }, { "id": "1908.03557" }, { "id": "2210.05359" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2201.08239" }, { "id": "2303.16199" }, { "id": "2304.09842" }, { "id": "2204.01691" }, { "id": "2305.01625" }, { "id": "2303.12712" }, { "id": "2207.05608" }, { "id": "2303.03846" }, { "id": "2211.11559" }, { "id": "2207.01206" }, { "id": "2303.09014" }, { "id": "2205.12255" }, { "id": "2208.03188" } ]
2308.01423
23
Figure 6 presents the accuracy measurements for the three tasks using ChatMOF with GPT-4. Accuracy was measured for 100 sample questions for the search and prediction tasks, and 10 sample questions for the generation task. The number in the bar graph indicates the number of questions in each class. Both the search and prediction tasks rendered accurate answers with high frequency. Excluding 'Token Limit Exceeded' instances (4 out of 100, 6 out of 100, and 2 out of 10, for the search, prediction, and generation tasks respectively), they exhibit high accuracies of 96.9% and 95.7%, respectively. For the generation task, the accuracy stood at 87.5%. Given its complexity, relative to the other two tasks, the accuracy rate is lower. Regardless, all three tasks report high accuracy rates, and these tasks carry significant weight because these are tasks that a typical LLM fails to execute flawlessly. A single LLM fails to provide precise information since it lacks specific information about the material, particularly for obtaining property information, which is challenging to source via an internet search.
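A small worked check of how such accuracies arise once 'Token Limit Exceeded' (TLE) cases are excluded; the raw correct counts below are back-calculated from the reported percentages and are assumptions, not figures taken from the paper:

```python
# Back-of-the-envelope check: accuracy excluding 'Token Limit Exceeded' (TLE) cases.
# Correct counts are inferred from the reported percentages, not quoted from the paper.
tasks = {
    #             (total questions, TLE cases, assumed correct answers)
    "search":     (100, 4, 93),   # 93 / 96 ≈ 96.9%
    "prediction": (100, 6, 90),   # 90 / 94 ≈ 95.7%
    "generation": (10,  2, 7),    # 7  / 8  = 87.5%
}

for name, (total, tle, correct) in tasks.items():
    evaluable = total - tle
    print(f"{name}: {correct}/{evaluable} = {100 * correct / evaluable:.1f}%")
```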
2308.01423#23
ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks
ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to predict and generate metal-organic frameworks (MOFs). By leveraging a large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key details from textual inputs and delivers appropriate responses, thus eliminating the necessity for rigid structured queries. The system comprises three core components (i.e. an agent, a toolkit, and an evaluator) and forms a robust pipeline that manages a variety of tasks, including data retrieval, property prediction, and structure generation. The study further explores the merits and constraints of using large language model (LLM) AI systems in materials science and showcases their transformative potential for future advancements.
http://arxiv.org/pdf/2308.01423
Yeonghun Kang, Jihan Kim
cs.CL, cs.AI, cs.LG, physics.chem-ph
null
null
cs.CL
20230801
20230825
[ { "id": "2302.13971" }, { "id": "2306.11296" }, { "id": "2303.17580" }, { "id": "2305.18365" }, { "id": "2305.10601" }, { "id": "1810.04805" }, { "id": "2211.09085" }, { "id": "2304.05376" }, { "id": "2212.05238" }, { "id": "2108.07258" }, { "id": "2110.06197" }, { "id": "2306.06283" }, { "id": "2008.02275" }, { "id": "2303.12712" }, { "id": "2210.03629" }, { "id": "2205.00445" }, { "id": "2009.03300" } ]
2308.00245
24
C1. Limited Understanding of Post-constraint. Although LLMs (e.g., GPT-4) are able to comprehend the definition of post-constraints and apply them in simple scenarios, we found their capacity to utilize this knowledge in actual program analysis—such as summarizing function behavior in line with specific post-constraints—to be limited. This critical limitation often results in unpredictable and inconsistent outcomes. C2. Token Limitations. It is known that LLMs have token limitations. For example, GPT-3.5 supports 16k tokens and GPT-4 supports 32k tokens [20]. This means that we do not want to copy a large number of function bodies in our prompts to LLMs. C3. Unreliable and Inconsistent Response. LLMs are known to produce unreliable and inconsistent responses due to hallucination and stochasticity [41]. Stochasticity refers to the inherent unpredictability in the model’s outputs [32]; hallucination refers to LLMs generating nonsensical or unfaithful responses [11, 42]. By design, the stochasticity can be mitigated with lower temperature, a hyperparameter controlling the degree of randomness in outputs [27]; however, reducing temperature may impair the model’s exploration ability [37] and therefore may miss corner cases that result in vulnerabilities.
2308.00245#24
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Static analysis is a widely used technique in software engineering for identifying and mitigating bugs. However, a significant hurdle lies in achieving a delicate balance between precision and scalability. Large Language Models (LLMs) offer a promising alternative, as recent advances demonstrate remarkable capabilities in comprehending, generating, and even debugging code. Yet, the logic of bugs can be complex and require sophisticated reasoning and a large analysis scope spanning multiple functions. Therefore, at this point, LLMs are better used in an assistive role to complement static analysis. In this paper, we take a deep dive into the open space of LLM-assisted static analysis, using use-before-initialization (UBI) bugs as a case study. To this end, we develop LLift, a fully automated framework that interfaces with both a static analysis tool and an LLM. By carefully designing the framework and the prompts, we are able to overcome a number of challenges, including bug-specific modeling, the large problem scope, the non-deterministic nature of LLMs, etc. Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs produced by static analysis, LLift demonstrates a potent capability, showcasing a reasonable precision (50%) and appearing to have no missing bugs. It even identified 13 previously unknown UBI bugs in the Linux kernel. This research paves the way for new opportunities and methodologies in using LLMs for bug discovery in extensive, real-world datasets.
http://arxiv.org/pdf/2308.00245
Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian
cs.SE, cs.AI
null
null
cs.SE
20230801
20231115
[ { "id": "2305.10601" }, { "id": "2107.03374" }, { "id": "2210.11610" }, { "id": "2305.16291" }, { "id": "2305.16151" }, { "id": "2303.18223" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "2203.02155" }, { "id": "2304.11938" }, { "id": "2304.03262" }, { "id": "2304.10513" }, { "id": "2201.11903" }, { "id": "2305.12138" }, { "id": "2305.12865" }, { "id": "2303.08774" }, { "id": "2306.01987" }, { "id": "2304.06815" } ]
2308.00352
24
# 4 EXPERIMENTS 4.1 EXPERIMENTAL SETTING Datasets We use two public benchmarks, HumanEval (Chen et al., 2021a) and MBPP (Austin et al., 2021), and a self-generated, more challenging software development benchmark named SoftwareDev: (1) HumanEval includes 164 handwritten programming tasks. These tasks encompass function specifications, descriptions, reference codes, and tests. (2) MBPP consists of 427 Python tasks. These tasks cover core concepts and standard library features and include descriptions, reference codes, and automated tests. (3) Our SoftwareDev dataset is a collection of 70 representative examples of software development tasks, each with its own task prompt (see Table 5). These tasks have diverse scopes (see Figure 5), such as mini-games, image processing algorithms, and data visualization. They offer a robust testbed for authentic development tasks. Contrary to previous datasets (Chen et al., 2021a; Austin et al., 2021), SoftwareDev focuses on the engineering aspects. In the comparisons, we randomly select seven representative tasks for evaluation.
2308.00352#24
MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework
Remarkable progress has been made on automated problem solving through societies of agents based on large language models (LLMs). Existing LLM-based multi-agent systems can already solve simple dialogue tasks. Solutions to more complex tasks, however, are complicated through logic inconsistencies due to cascading hallucinations caused by naively chaining LLMs. Here we introduce MetaGPT, an innovative meta-programming framework incorporating efficient human workflows into LLM-based multi-agent collaborations. MetaGPT encodes Standardized Operating Procedures (SOPs) into prompt sequences for more streamlined workflows, thus allowing agents with human-like domain expertise to verify intermediate results and reduce errors. MetaGPT utilizes an assembly line paradigm to assign diverse roles to various agents, efficiently breaking down complex tasks into subtasks involving many agents working together. On collaborative software engineering benchmarks, MetaGPT generates more coherent solutions than previous chat-based multi-agent systems. Our project can be found at https://github.com/geekan/MetaGPT
http://arxiv.org/pdf/2308.00352
Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, Jürgen Schmidhuber
cs.AI, cs.MA
null
null
cs.AI
20230801
20231106
[ { "id": "2308.12950" }, { "id": "2305.17066" }, { "id": "1511.09249" }, { "id": "2308.11432" }, { "id": "2306.08568" }, { "id": "2310.02304" }, { "id": "2303.08896" }, { "id": "2204.05999" }, { "id": "2309.16797" }, { "id": "2002.08155" }, { "id": "2305.16960" } ]
2308.00436
24
LLMs We use GPT-3.5 (gpt-3.5-0301) and GPT-4 (gpt-4-0613) as our LLMs, focusing in particular on the former due to budget restrictions. Note that the same prompts are used for all datasets with both LLMs during evaluation; no dataset-specific customization or tuning has been performed. When devising the prompts, a small number of training samples from the MathQA dataset were utilized. Baselines We use majority voting (also known as Self-Consistency Decoding (Wang et al., 2022) in the context of CoT reasoning) as our main baseline following Ling et al. (2023) and Lightman et al. (2023). Despite its simplicity, this is still quite a strong baseline in the current literature. In particular, most existing few-shot methods report similar results compared with it (Weng et al., 2022; Ling et al., 2023). We also compare with previously quoted results from Self-Verification (SV, Weng et al. (2022)) and Deductive Verification (DV, Ling et al. (2023)) when possible. We note though that these approaches are not directly comparable to SelfCheck in general, as they require additional exemplars which will often not be available in practice. Despite this, we will find that SelfCheck outperforms them when comparisons are possible.
2308.00436#24
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
The recent progress in large language models (LLMs), especially the invention of chain-of-thought prompting, has made it possible to automatically answer questions by stepwise reasoning. However, when faced with more complicated problems that require non-linear thinking, even the strongest LLMs make mistakes. To address this, we explore whether LLMs are able to recognize errors in their own step-by-step reasoning, without resorting to external resources. To this end, we propose SelfCheck, a general-purpose zero-shot verification schema for recognizing such errors. We then use the results of these checks to improve question-answering performance by conducting weighted voting on multiple solutions to the question. We test SelfCheck on three datasets (GSM8K, MathQA, and MATH) and find that it successfully recognizes errors and, in turn, increases final answer accuracies.
http://arxiv.org/pdf/2308.00436
Ning Miao, Yee Whye Teh, Tom Rainforth
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230801
20231005
[ { "id": "2206.02336" }, { "id": "2302.13971" }, { "id": "2110.14168" }, { "id": "2305.20050" }, { "id": "2204.02311" }, { "id": "2301.13379" }, { "id": "2306.03872" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.08774" }, { "id": "2212.09561" } ]
2308.00675
24
Table 1: Comparisons to existing baseline methods on different benchmarks. We follow [40, 19] to select the baseline methods for each benchmark task. We see that 0-shot with doc performs competitively, outperforming CoT and PoT on ScienceQA and TabMWP. On NLVRv2, ViLT-NLVR is finetuned on the dataset, while the LLM performs in a zero-shot fashion.

Benchmark | Baseline method | without doc (0-shot) | with doc (0-shot)
ScienceQA | CoT [67]: 78.54 | 78.25 | 79.91
TabMWP | PoT [13]: 89.28 | 84.13 | 92.69
NLVRv2 | ViLT-NLVR [29]: 76.30 | 0.00 | 63.40
2308.00675#24
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Today, large language models (LLMs) are taught to use new tools by providing a few demonstrations of the tool's usage. Unfortunately, demonstrations are hard to acquire, and can result in undesirable biased usage if the wrong demonstration is chosen. Even in the rare scenario that demonstrations are readily available, there is no principled selection protocol to determine how many and which ones to provide. As tasks grow more complex, the selection search grows combinatorially and invariably becomes intractable. Our work provides an alternative to demonstrations: tool documentation. We advocate the use of tool documentation, descriptions for the individual tool usage, over demonstrations. We substantiate our claim through three main empirical findings on 6 tasks across both vision and language modalities. First, on existing benchmarks, zero-shot prompts with only tool documentation are sufficient for eliciting proper tool usage, achieving performance on par with few-shot prompts. Second, on a newly collected realistic tool-use dataset with hundreds of available tool APIs, we show that tool documentation is significantly more valuable than demonstrations, with zero-shot documentation significantly outperforming few-shot without documentation. Third, we highlight the benefits of tool documentations by tackling image generation and video tracking using just-released unseen state-of-the-art models as tools. Finally, we highlight the possibility of using tool documentation to automatically enable new applications: by using nothing more than the documentation of GroundingDino, Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the just-released Grounded-SAM and Track Anything models.
http://arxiv.org/pdf/2308.00675
Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister
cs.CL, cs.AI, cs.CV, cs.LG
null
null
cs.CL
20230801
20230801
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2204.10878" }, { "id": "1910.08210" }, { "id": "2107.07653" }, { "id": "2201.11903" }, { "id": "2305.17126" }, { "id": "1704.07535" }, { "id": "2205.01068" }, { "id": "2203.05115" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2305.04091" }, { "id": "2303.05499" }, { "id": "2107.07566" }, { "id": "2110.14168" }, { "id": "2210.03350" }, { "id": "2303.11381" }, { "id": "2101.06804" }, { "id": "2304.08354" }, { "id": "2212.14024" }, { "id": "2305.18752" }, { "id": "2211.10435" }, { "id": "2303.04671" }, { "id": "2210.12810" }, { "id": "1808.09588" }, { "id": "2304.11062" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2210.02406" }, { "id": "2212.10560" }, { "id": "2303.04129" }, { "id": "1704.01696" }, { "id": "2302.00923" }, { "id": "2211.12588" }, { "id": "1908.03557" }, { "id": "2210.05359" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2201.08239" }, { "id": "2303.16199" }, { "id": "2304.09842" }, { "id": "2204.01691" }, { "id": "2305.01625" }, { "id": "2303.12712" }, { "id": "2207.05608" }, { "id": "2303.03846" }, { "id": "2211.11559" }, { "id": "2207.01206" }, { "id": "2303.09014" }, { "id": "2205.12255" }, { "id": "2208.03188" } ]
2308.01423
24
Also, ChatMOF, when integrated with GPT-4, exhibits superior performance compared to its integration with GPT-3.5-turbo. As evidenced in Figure S2, the accuracy of ChatMOF with GPT-3.5-turbo stands at 95%, 91%, and 77.8% for the search, prediction, and generation tasks, respectively, excluding instances of "Token Limit Exceeded". Across all tasks, GPT-4 consistently outperforms GPT-3.5-turbo in accuracy. This enhanced accuracy of GPT-4 can be attributed to its refined reasoning and comprehension capabilities, particularly during the planning phase. Figure S3 illustrates the distinct approaches that GPT-4 and GPT-3.5-turbo take when presented with the same query: "How does the pore limiting diameter of YUSGID_clean compare with other materials?". While GPT-3.5-turbo seeks the values for all materials mentioned in the query, leading to a token error and subsequent inability to provide an answer, GPT-4 adopts a more holistic strategy. It assesses the distribution of all materials, leveraging metrics such as mean, variance, and quartile values of the property in question. This approach enables GPT-4 to determine the relative position of the target material in the overall distribution, thus delivering a more informative response to the user.
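A sketch of the distribution-based strategy attributed to GPT-4 here, using pandas on a hypothetical lookup table of pore limiting diameters; the column names, material names, and synthetic values are illustrative and not ChatMOF's actual schema:

```python
import numpy as np
import pandas as pd

# Hypothetical pre-computed property table; the real ChatMOF lookup table differs.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "name": [f"MOF_{i}" for i in range(5000)],
    "pore_limiting_diameter": rng.lognormal(mean=1.5, sigma=0.4, size=5000),
})
df.loc[0, "name"] = "YUSGID_clean"  # pretend the queried material is in the table

target = df.loc[df["name"] == "YUSGID_clean", "pore_limiting_diameter"].item()

# Summarize the whole distribution instead of listing every material (which would
# blow past the token limit), then place the target material within it.
stats = df["pore_limiting_diameter"].describe()          # mean, std, quartiles, ...
percentile = (df["pore_limiting_diameter"] < target).mean() * 100

print(stats[["mean", "std", "25%", "50%", "75%"]])
print(f"YUSGID_clean PLD = {target:.2f}, larger than {percentile:.1f}% of materials")
```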
2308.01423#24
ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks
ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to predict and generate metal-organic frameworks (MOFs). By leveraging a large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key details from textual inputs and delivers appropriate responses, thus eliminating the necessity for rigid structured queries. The system is comprised of three core components (i.e. an agent, a toolkit, and an evaluator) and it forms a robust pipeline that manages a variety of tasks, including data retrieval, property prediction, and structure generations. The study further explores the merits and constraints of using large language models (LLMs) AI system in material sciences using and showcases its transformative potential for future advancements.
http://arxiv.org/pdf/2308.01423
Yeonghun Kang, Jihan Kim
cs.CL, cs.AI, cs.LG, physics.chem-ph
null
null
cs.CL
20230801
20230825
[ { "id": "2302.13971" }, { "id": "2306.11296" }, { "id": "2303.17580" }, { "id": "2305.18365" }, { "id": "2305.10601" }, { "id": "1810.04805" }, { "id": "2211.09085" }, { "id": "2304.05376" }, { "id": "2212.05238" }, { "id": "2108.07258" }, { "id": "2110.06197" }, { "id": "2306.06283" }, { "id": "2008.02275" }, { "id": "2303.12712" }, { "id": "2210.03629" }, { "id": "2205.00445" }, { "id": "2009.03300" } ]
2308.00245
25
4.2 Design Overview We will discuss our design strategies to address the above challenges in the rest of the section. Before that, we provide a high-level overview of our solution. To tackle challenge C1 (Post-constraint), we propose to encode (D#1) Post-Constraint Guided Path Analysis by teaching LLMs with examples of post-constraints, i.e., few-shot in-context learning. This approach enables LLMs to learn from a small number of demonstrative examples, assimilate the underlying patterns, and apply this understanding to process post-constraint guidance in our analysis.
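A minimal sketch of how few-shot in-context learning of post-constraints could be wired into a prompt; the demonstration pairs and the `call_llm` stand-in are illustrative placeholders, not LLift's actual prompts:

```python
# Sketch: build a few-shot prompt that teaches the model what a post-constraint is.
# The demonstration pairs and call_llm() are illustrative placeholders.
FEW_SHOT_EXAMPLES = [
    {
        "code": "if (get_value(&x) != 0) return -EINVAL;\nuse(x);",
        "post_constraint": "use(x) is only reached when get_value() returned 0",
    },
    {
        "code": "ret = read_reg(dev, &val);\nif (ret) goto err;\nprocess(val);",
        "post_constraint": "process(val) is only reached when read_reg() returned 0",
    },
]

def build_post_constraint_prompt(target_code: str) -> str:
    parts = ["Extract the post-constraint that must hold when the value is used.\n"]
    for ex in FEW_SHOT_EXAMPLES:
        parts.append(f"Code:\n{ex['code']}\nPost-constraint: {ex['post_constraint']}\n")
    parts.append(f"Code:\n{target_code}\nPost-constraint:")
    return "\n".join(parts)

# prompt = build_post_constraint_prompt(some_suspicious_snippet)
# response = call_llm(prompt)   # call_llm is a stand-in for the actual LLM client
```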
2308.00245#25
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Static analysis is a widely used technique in software engineering for identifying and mitigating bugs. However, a significant hurdle lies in achieving a delicate balance between precision and scalability. Large Language Models (LLMs) offer a promising alternative, as recent advances demonstrate remarkable capabilities in comprehending, generating, and even debugging code. Yet, the logic of bugs can be complex and require sophisticated reasoning and a large analysis scope spanning multiple functions. Therefore, at this point, LLMs are better used in an assistive role to complement static analysis. In this paper, we take a deep dive into the open space of LLM-assisted static analysis, using use-before-initialization (UBI) bugs as a case study. To this end, we develop LLift, a fully automated framework that interfaces with both a static analysis tool and an LLM. By carefully designing the framework and the prompts, we are able to overcome a number of challenges, including bug-specific modeling, the large problem scope, the non-deterministic nature of LLMs, etc. Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs produced by static analysis, LLift demonstrates a potent capability, showcasing a reasonable precision (50%) and appearing to have no missing bugs. It even identified 13 previously unknown UBI bugs in the Linux kernel. This research paves the way for new opportunities and methodologies in using LLMs for bug discovery in extensive, real-world datasets.
http://arxiv.org/pdf/2308.00245
Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian
cs.SE, cs.AI
null
null
cs.SE
20230801
20231115
[ { "id": "2305.10601" }, { "id": "2107.03374" }, { "id": "2210.11610" }, { "id": "2305.16291" }, { "id": "2305.16151" }, { "id": "2303.18223" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "2203.02155" }, { "id": "2304.11938" }, { "id": "2304.03262" }, { "id": "2304.10513" }, { "id": "2201.11903" }, { "id": "2305.12138" }, { "id": "2305.12865" }, { "id": "2303.08774" }, { "id": "2306.01987" }, { "id": "2304.06815" } ]
2308.00436
25
We omit results from Faithful-CoT (Lyu et al., 2023), because it has already been shown to decrease the accuracies on GSM8K and MATH by 11.8% and 4.2%, respectively, compared to majority voting (Ling et al., 2023). It is also impossible for us to compare with training/finetuning-based methods such as Lightman et al. (2023), because we have neither access to their finetuned models nor computation resources to repeat their training/finetuning. The significant extra data and resources they require also mean their contributions are somewhat tangential to SelfCheck regardless. 4.1 FINAL ANSWER CORRECTNESS Figure 2 shows the performance gains using the confidence scores from SelfCheck to do weighted voting compared with baseline methods. The upper plots show that the accuracies of both SelfCheck and majority voting have the same increasing tendency as the number of generated solutions per question increases, which is a result of the variance reduction provided by averaging over more solutions. The bottom plots show the difference in accuracy between the two, including the standard error in the estimate. We can see that by allocating higher weights to correct solutions, SelfCheck achieves significantly higher accuracies than majority voting for all solution numbers per question. We also find the improvements of SelfCheck (compared with majority voting) to be higher than Deductive Verification and Self-Verification in their reported settings, despite the use of in-context learning
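A minimal sketch of the weighted-voting step described here, assuming each candidate solution already comes with a SelfCheck confidence score; the toy data and the simple additive weighting are illustrative, not the paper's exact formula:

```python
from collections import defaultdict

def weighted_vote(solutions):
    """solutions: list of (final_answer, confidence) pairs for one question."""
    scores = defaultdict(float)
    for answer, confidence in solutions:
        scores[answer] += confidence          # weight each vote by its checker confidence
    return max(scores, key=scores.get)

def majority_vote(solutions):
    """Baseline: every candidate solution counts equally."""
    return weighted_vote([(answer, 1.0) for answer, _ in solutions])

# Toy example: two low-confidence wrong answers lose to one confident correct answer.
candidates = [("42", 0.9), ("41", 0.3), ("41", 0.4)]
print(majority_vote(candidates))   # -> "41"
print(weighted_vote(candidates))   # -> "42"
```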
2308.00436#25
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
The recent progress in large language models (LLMs), especially the invention of chain-of-thought prompting, has made it possible to automatically answer questions by stepwise reasoning. However, when faced with more complicated problems that require non-linear thinking, even the strongest LLMs make mistakes. To address this, we explore whether LLMs are able to recognize errors in their own step-by-step reasoning, without resorting to external resources. To this end, we propose SelfCheck, a general-purpose zero-shot verification schema for recognizing such errors. We then use the results of these checks to improve question-answering performance by conducting weighted voting on multiple solutions to the question. We test SelfCheck on three datasets (GSM8K, MathQA, and MATH) and find that it successfully recognizes errors and, in turn, increases final answer accuracies.
http://arxiv.org/pdf/2308.00436
Ning Miao, Yee Whye Teh, Tom Rainforth
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230801
20231005
[ { "id": "2206.02336" }, { "id": "2302.13971" }, { "id": "2110.14168" }, { "id": "2305.20050" }, { "id": "2204.02311" }, { "id": "2301.13379" }, { "id": "2306.03872" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.08774" }, { "id": "2212.09561" } ]
2308.00675
25
[Figure 5: Command planning of LLM Cloud Platform CLI with and without documentation (doc) and demonstrations (demo), and their combinations. Few-shot demonstration without documentation results in unsatisfactory performance due to low coverage of the large number of tools, while reading documentation significantly boosts the performance. Panels: text-davinci-002 and gpt-3.5-turbo; x-axis: number of demos; y-axis: F1 score; legend: with doc vs. without doc.] unseen LLM-Cloud command lines, the zero-shot LLM fails to generate accurate responses involving these unfamiliar tools due to its lack of knowledge regarding their syntax and usage.
2308.00675#25
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Today, large language models (LLMs) are taught to use new tools by providing a few demonstrations of the tool's usage. Unfortunately, demonstrations are hard to acquire, and can result in undesirable biased usage if the wrong demonstration is chosen. Even in the rare scenario that demonstrations are readily available, there is no principled selection protocol to determine how many and which ones to provide. As tasks grow more complex, the selection search grows combinatorially and invariably becomes intractable. Our work provides an alternative to demonstrations: tool documentation. We advocate the use of tool documentation, descriptions for the individual tool usage, over demonstrations. We substantiate our claim through three main empirical findings on 6 tasks across both vision and language modalities. First, on existing benchmarks, zero-shot prompts with only tool documentation are sufficient for eliciting proper tool usage, achieving performance on par with few-shot prompts. Second, on a newly collected realistic tool-use dataset with hundreds of available tool APIs, we show that tool documentation is significantly more valuable than demonstrations, with zero-shot documentation significantly outperforming few-shot without documentation. Third, we highlight the benefits of tool documentations by tackling image generation and video tracking using just-released unseen state-of-the-art models as tools. Finally, we highlight the possibility of using tool documentation to automatically enable new applications: by using nothing more than the documentation of GroundingDino, Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the just-released Grounded-SAM and Track Anything models.
http://arxiv.org/pdf/2308.00675
Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister
cs.CL, cs.AI, cs.CV, cs.LG
null
null
cs.CL
20230801
20230801
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2204.10878" }, { "id": "1910.08210" }, { "id": "2107.07653" }, { "id": "2201.11903" }, { "id": "2305.17126" }, { "id": "1704.07535" }, { "id": "2205.01068" }, { "id": "2203.05115" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2305.04091" }, { "id": "2303.05499" }, { "id": "2107.07566" }, { "id": "2110.14168" }, { "id": "2210.03350" }, { "id": "2303.11381" }, { "id": "2101.06804" }, { "id": "2304.08354" }, { "id": "2212.14024" }, { "id": "2305.18752" }, { "id": "2211.10435" }, { "id": "2303.04671" }, { "id": "2210.12810" }, { "id": "1808.09588" }, { "id": "2304.11062" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2210.02406" }, { "id": "2212.10560" }, { "id": "2303.04129" }, { "id": "1704.01696" }, { "id": "2302.00923" }, { "id": "2211.12588" }, { "id": "1908.03557" }, { "id": "2210.05359" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2201.08239" }, { "id": "2303.16199" }, { "id": "2304.09842" }, { "id": "2204.01691" }, { "id": "2305.01625" }, { "id": "2303.12712" }, { "id": "2207.05608" }, { "id": "2303.03846" }, { "id": "2211.11559" }, { "id": "2207.01206" }, { "id": "2303.09014" }, { "id": "2205.12255" }, { "id": "2208.03188" } ]
2308.01423
25
For the "search task," the writing of code utilizing the pandas library significantly impacts the accuracy. 'Token Limit Exceeded' generally occurs when the output code surpasses the permissible token count. This frequently arises when all relevant materials that satisfy a given condition are provided (for example, when a list of materials with a particular property is listed), or when the question contains a comparative clause such as "compared to other materials." 'Logic Error' typically surfaces when there is a flawed strategic approach or a code error. An instance of this would be when a request to provide 10 specific items is met with a misguided strategy that solely aims to "extract high values," failing to retrieve the specified number of items.
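A sketch of the kind of guardrail that avoids the 'Token Limit Exceeded' failure mode described here: instead of returning every matching material, the generated pandas code caps and summarizes the result. The cap, column names, and file name are hypothetical:

```python
import pandas as pd

MAX_ROWS = 20  # assumed cap so the textual answer stays within the LLM's token budget

def search_materials(df: pd.DataFrame, column: str, threshold: float) -> str:
    """Return a bounded, token-friendly summary instead of the full matching list."""
    matches = df[df[column] > threshold].sort_values(column, ascending=False)
    shown = matches.head(MAX_ROWS)
    summary = (
        f"{len(matches)} materials have {column} > {threshold}; "
        f"showing the top {len(shown)}:\n"
    )
    return summary + shown[["name", column]].to_string(index=False)

# Hypothetical usage:
# df = pd.read_csv("mof_properties.csv")          # columns: name, surface_area, ...
# print(search_materials(df, "surface_area", 3000.0))
```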
2308.01423#25
ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks
ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to predict and generate metal-organic frameworks (MOFs). By leveraging a large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key details from textual inputs and delivers appropriate responses, thus eliminating the necessity for rigid structured queries. The system is comprised of three core components (i.e. an agent, a toolkit, and an evaluator) and it forms a robust pipeline that manages a variety of tasks, including data retrieval, property prediction, and structure generations. The study further explores the merits and constraints of using large language models (LLMs) AI system in material sciences using and showcases its transformative potential for future advancements.
http://arxiv.org/pdf/2308.01423
Yeonghun Kang, Jihan Kim
cs.CL, cs.AI, cs.LG, physics.chem-ph
null
null
cs.CL
20230801
20230825
[ { "id": "2302.13971" }, { "id": "2306.11296" }, { "id": "2303.17580" }, { "id": "2305.18365" }, { "id": "2305.10601" }, { "id": "1810.04805" }, { "id": "2211.09085" }, { "id": "2304.05376" }, { "id": "2212.05238" }, { "id": "2108.07258" }, { "id": "2110.06197" }, { "id": "2306.06283" }, { "id": "2008.02275" }, { "id": "2303.12712" }, { "id": "2210.03629" }, { "id": "2205.00445" }, { "id": "2009.03300" } ]
2308.00245
26
[Figure 5: The workflow of LLift. Given a potential bug, we let the LLM first identify the initializer and then extract its post-constraints (Convo.1), then leverage them to summarize the behavior of the initializer (Convo.2). A conversation consists of prompts (boxes) and responses (edges).]
2308.00245#26
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Static analysis is a widely used technique in software engineering for identifying and mitigating bugs. However, a significant hurdle lies in achieving a delicate balance between precision and scalability. Large Language Models (LLMs) offer a promising alternative, as recent advances demonstrate remarkable capabilities in comprehending, generating, and even debugging code. Yet, the logic of bugs can be complex and require sophisticated reasoning and a large analysis scope spanning multiple functions. Therefore, at this point, LLMs are better used in an assistive role to complement static analysis. In this paper, we take a deep dive into the open space of LLM-assisted static analysis, using use-before-initialization (UBI) bugs as a case study. To this end, we develop LLift, a fully automated framework that interfaces with both a static analysis tool and an LLM. By carefully designing the framework and the prompts, we are able to overcome a number of challenges, including bug-specific modeling, the large problem scope, the non-deterministic nature of LLMs, etc. Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs produced by static analysis, LLift demonstrates a potent capability, showcasing a reasonable precision (50%) and appearing to have no missing bugs. It even identified 13 previously unknown UBI bugs in the Linux kernel. This research paves the way for new opportunities and methodologies in using LLMs for bug discovery in extensive, real-world datasets.
http://arxiv.org/pdf/2308.00245
Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian
cs.SE, cs.AI
null
null
cs.SE
20230801
20231115
[ { "id": "2305.10601" }, { "id": "2107.03374" }, { "id": "2210.11610" }, { "id": "2305.16291" }, { "id": "2305.16151" }, { "id": "2303.18223" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "2203.02155" }, { "id": "2304.11938" }, { "id": "2304.03262" }, { "id": "2304.10513" }, { "id": "2201.11903" }, { "id": "2305.12138" }, { "id": "2305.12865" }, { "id": "2303.08774" }, { "id": "2306.01987" }, { "id": "2304.06815" } ]
2308.00352
26
$1 - \binom{n-c}{k} / \binom{n}{k}$ (the unbiased pass@k estimator) For SoftwareDev, we prioritize practical use and evaluate performance through human evaluations (A, E) or statistical analysis (B, C, D): (A) Executability: this metric rates code from 1 (failure/non-functional) to 4 (flawless). ‘1’ is for non-functional, ‘2’ for runnable but imperfect, ‘3’ for nearly perfect, and ‘4’ for flawless code. (B) Cost: the cost evaluations here include the (1) running time, (2) token usage, and (3) expenses. (C) Code Statistics: this includes (1) code files, (2) lines of code per file, and (3) total code lines. (D) Productivity: it is defined as the total token usage divided by the number of lines of code, which refers to the consumption of tokens per code line. (E) Human Revision Cost: quantified by the number of rounds of revision needed to ensure the smooth running of the code, this indicates the frequency of human interventions, such as debugging or importing packages.
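A short sketch of the two statistical quantities referenced here: the unbiased pass@k estimator reconstructed above and the Productivity metric (tokens per line of code). The sample numbers in the usage lines are made up:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: 1 - C(n-c, k) / C(n, k), with n samples and c correct ones."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

def productivity(token_usage: int, lines_of_code: int) -> float:
    """Tokens consumed per generated line of code (lower is better)."""
    return token_usage / lines_of_code

# Made-up example: 20 samples with 12 correct -> pass@1; 30k tokens for 500 LOC.
print(round(pass_at_k(n=20, c=12, k=1), 3))   # 0.6
print(productivity(30_000, 500))              # 60.0
```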
2308.00352#26
MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework
Remarkable progress has been made on automated problem solving through societies of agents based on large language models (LLMs). Existing LLM-based multi-agent systems can already solve simple dialogue tasks. Solutions to more complex tasks, however, are complicated through logic inconsistencies due to cascading hallucinations caused by naively chaining LLMs. Here we introduce MetaGPT, an innovative meta-programming framework incorporating efficient human workflows into LLM-based multi-agent collaborations. MetaGPT encodes Standardized Operating Procedures (SOPs) into prompt sequences for more streamlined workflows, thus allowing agents with human-like domain expertise to verify intermediate results and reduce errors. MetaGPT utilizes an assembly line paradigm to assign diverse roles to various agents, efficiently breaking down complex tasks into subtasks involving many agents working together. On collaborative software engineering benchmarks, MetaGPT generates more coherent solutions than previous chat-based multi-agent systems. Our project can be found at https://github.com/geekan/MetaGPT
http://arxiv.org/pdf/2308.00352
Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, Jürgen Schmidhuber
cs.AI, cs.MA
null
null
cs.AI
20230801
20231106
[ { "id": "2308.12950" }, { "id": "2305.17066" }, { "id": "1511.09249" }, { "id": "2308.11432" }, { "id": "2306.08568" }, { "id": "2310.02304" }, { "id": "2303.08896" }, { "id": "2204.05999" }, { "id": "2309.16797" }, { "id": "2002.08155" }, { "id": "2305.16960" } ]
2308.00436
26
1 https://github.com/lz1oceani/verify_cot/tree/main/results/chatgpt3.5/natural_program/MATH_np.json

(a) GSM8K (b) MathQA (c) MATH∗

Figure 2: The upper plots show the accuracies of SelfCheck and majority voting for different numbers of generated solutions per question with GPT-3.5. The lower plots show the accuracy gaps between each method and majority voting, where DV and SV stand for Deductive Verification (Ling et al., 2023) and Self-Verification (Weng et al., 2022), respectively. It is difficult to compare with DV and SV with respect to absolute accuracies because they are using different generator models. However, we can see that SelfCheck achieves higher relative performance gains than both in their reported settings.
2308.00436#26
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
The recent progress in large language models (LLMs), especially the invention of chain-of-thought prompting, has made it possible to automatically answer questions by stepwise reasoning. However, when faced with more complicated problems that require non-linear thinking, even the strongest LLMs make mistakes. To address this, we explore whether LLMs are able to recognize errors in their own step-by-step reasoning, without resorting to external resources. To this end, we propose SelfCheck, a general-purpose zero-shot verification schema for recognizing such errors. We then use the results of these checks to improve question-answering performance by conducting weighted voting on multiple solutions to the question. We test SelfCheck on three datasets (GSM8K, MathQA, and MATH) and find that it successfully recognizes errors and, in turn, increases final answer accuracies.
http://arxiv.org/pdf/2308.00436
Ning Miao, Yee Whye Teh, Tom Rainforth
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230801
20231005
[ { "id": "2206.02336" }, { "id": "2302.13971" }, { "id": "2110.14168" }, { "id": "2305.20050" }, { "id": "2204.02311" }, { "id": "2301.13379" }, { "id": "2306.03872" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.08774" }, { "id": "2212.09561" } ]
2308.00675
26
unseen LLM-Cloud command lines, the zero-shot LLM fails to generate accurate responses involving these unfamiliar tools due to its lack of knowledge regarding their syntax and usage. While few-shot demonstrations have the potential to enhance model performance, it is important to acknowledge that the coverage of these demonstrations is limited due to the vast number of command-line tools. Consequently, certain commands or flags may not be adequately covered. In Figure 3, although we observe that data copying commonly appears in the few-shot examples, the model encounters difficulties in correctly configuring the less common flag --port, instead hallucinating the use of -P based on familiarity with the scp -P command in Linux. Conversely, in the same example illustrated in Figure 3, by solely utilizing the provided documentation, the language models not only successfully discern the steps required for utilizing tools (such as a hidden step of creating a topic before sending messages), but also possess the ability to accurately configure flags (e.g., --port) by leveraging information extracted from the documentation.
2308.00675#26
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Today, large language models (LLMs) are taught to use new tools by providing a few demonstrations of the tool's usage. Unfortunately, demonstrations are hard to acquire, and can result in undesirable biased usage if the wrong demonstration is chosen. Even in the rare scenario that demonstrations are readily available, there is no principled selection protocol to determine how many and which ones to provide. As tasks grow more complex, the selection search grows combinatorially and invariably becomes intractable. Our work provides an alternative to demonstrations: tool documentation. We advocate the use of tool documentation, descriptions for the individual tool usage, over demonstrations. We substantiate our claim through three main empirical findings on 6 tasks across both vision and language modalities. First, on existing benchmarks, zero-shot prompts with only tool documentation are sufficient for eliciting proper tool usage, achieving performance on par with few-shot prompts. Second, on a newly collected realistic tool-use dataset with hundreds of available tool APIs, we show that tool documentation is significantly more valuable than demonstrations, with zero-shot documentation significantly outperforming few-shot without documentation. Third, we highlight the benefits of tool documentations by tackling image generation and video tracking using just-released unseen state-of-the-art models as tools. Finally, we highlight the possibility of using tool documentation to automatically enable new applications: by using nothing more than the documentation of GroundingDino, Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the just-released Grounded-SAM and Track Anything models.
http://arxiv.org/pdf/2308.00675
Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister
cs.CL, cs.AI, cs.CV, cs.LG
null
null
cs.CL
20230801
20230801
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2204.10878" }, { "id": "1910.08210" }, { "id": "2107.07653" }, { "id": "2201.11903" }, { "id": "2305.17126" }, { "id": "1704.07535" }, { "id": "2205.01068" }, { "id": "2203.05115" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2305.04091" }, { "id": "2303.05499" }, { "id": "2107.07566" }, { "id": "2110.14168" }, { "id": "2210.03350" }, { "id": "2303.11381" }, { "id": "2101.06804" }, { "id": "2304.08354" }, { "id": "2212.14024" }, { "id": "2305.18752" }, { "id": "2211.10435" }, { "id": "2303.04671" }, { "id": "2210.12810" }, { "id": "1808.09588" }, { "id": "2304.11062" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2210.02406" }, { "id": "2212.10560" }, { "id": "2303.04129" }, { "id": "1704.01696" }, { "id": "2302.00923" }, { "id": "2211.12588" }, { "id": "1908.03557" }, { "id": "2210.05359" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2201.08239" }, { "id": "2303.16199" }, { "id": "2304.09842" }, { "id": "2204.01691" }, { "id": "2305.01625" }, { "id": "2303.12712" }, { "id": "2207.05608" }, { "id": "2303.03846" }, { "id": "2211.11559" }, { "id": "2207.01206" }, { "id": "2303.09014" }, { "id": "2205.12255" }, { "id": "2208.03188" } ]
2308.01423
26
During the "prediction task," difficulties often occur in the interpretation process of the observed values using machine learning techniques. Both the 'Token Limit Exceeded' and 'Logic Error' occurrences can stem from the effort to draw the correct answer from the table based on the estimated values. 'Logic Errors' can manifest not only during the table search phase but also during the strategy formulation stage. An erroneous strategy could either lead to the loading of an unsuitable model or to the generation of an input that is incompatible with the intended model. The "generation task" presents a more intricate set of challenges, inviting a variety of errors. A frequently observed 'Logic Error' appears when no parent genes can be retrieved from database. If the objective function aims for maximum or minimum values, a satisfying parent gene can always be found. However, if the goal is to get close to a certain value or to fall within a specific range, the selected range might not yield any satisfying parent genes. In such scenarios, the strategy is adapted to incorporate more data. However, if no suitable parent genes are found even after modifying the strategy, it results in an error. Further, both 'Token Limit Exceeded' and 'Logic Error' might occur during the extraction of the most suitable MOF from the generated MOFs, aligning with the objective function. 21
2308.01423#26
ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks
ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to predict and generate metal-organic frameworks (MOFs). By leveraging a large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key details from textual inputs and delivers appropriate responses, thus eliminating the necessity for rigid structured queries. The system is comprised of three core components (i.e. an agent, a toolkit, and an evaluator) and it forms a robust pipeline that manages a variety of tasks, including data retrieval, property prediction, and structure generations. The study further explores the merits and constraints of using large language models (LLMs) AI system in material sciences using and showcases its transformative potential for future advancements.
http://arxiv.org/pdf/2308.01423
Yeonghun Kang, Jihan Kim
cs.CL, cs.AI, cs.LG, physics.chem-ph
null
null
cs.CL
20230801
20230825
[ { "id": "2302.13971" }, { "id": "2306.11296" }, { "id": "2303.17580" }, { "id": "2305.18365" }, { "id": "2305.10601" }, { "id": "1810.04805" }, { "id": "2211.09085" }, { "id": "2304.05376" }, { "id": "2212.05238" }, { "id": "2108.07258" }, { "id": "2110.06197" }, { "id": "2306.06283" }, { "id": "2008.02275" }, { "id": "2303.12712" }, { "id": "2210.03629" }, { "id": "2205.00445" }, { "id": "2009.03300" } ]
2308.00245
27
To tackle challenge C2 (Token Limitation), we employ two strategies: (D#2) Progressive Prompt. Instead of copying a large number of function bodies (i.e., subroutines), we only provide function details on demand, i.e., when LLMs are not able to reach a result immediately. (D#3) Task Decomposition. We break down the problem into sub-problems that can be solved in independent conversations, i.e., a sequence of prompt and response pairs. To tackle challenge C3 (Unreliable Response), we employ the following strategies: (D#4) Self-Validation. We ask LLMs to review and correct their previous responses. This helps improve the consistency and accuracy based on our observation. Besides, (D#2) Progressive Prompt and (D#3) Task Decomposition also help to deal with this challenge. Additionally, we run each case multiple times and use majority voting to combat stochasticity. We elaborate on the design of (D#1 - #4) Post-Constraint Guided Path Analysis, Progressive Prompts, Task Decomposition, and Self-Validation in the rest of this section. The effectiveness and efficiency of these design strategies are rigorously evaluated in §6.4, revealing a substantial enhancement in bug detection within the Linux kernel.
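A sketch of the run-several-times-and-majority-vote step described here; `analyze_case` is a placeholder for one full LLift conversation (Convo.1 + Convo.2 with self-validation) returning a verdict string such as "bug" or "safe":

```python
from collections import Counter

def majority_vote_analysis(analyze_case, case, runs: int = 5) -> str:
    """Run the same case several times and keep the most common verdict,
    which damps the stochasticity of individual LLM responses."""
    verdicts = [analyze_case(case) for _ in range(runs)]
    verdict, _ = Counter(verdicts).most_common(1)[0]
    return verdict

# Hypothetical usage:
# final = majority_vote_analysis(analyze_case, suspicious_report, runs=5)
```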
2308.00245#27
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Static analysis is a widely used technique in software engineering for identifying and mitigating bugs. However, a significant hurdle lies in achieving a delicate balance between precision and scalability. Large Language Models (LLMs) offer a promising alternative, as recent advances demonstrate remarkable capabilities in comprehending, generating, and even debugging code. Yet, the logic of bugs can be complex and require sophisticated reasoning and a large analysis scope spanning multiple functions. Therefore, at this point, LLMs are better used in an assistive role to complement static analysis. In this paper, we take a deep dive into the open space of LLM-assisted static analysis, using use-before-initialization (UBI) bugs as a case study. To this end, we develop LLift, a fully automated framework that interfaces with both a static analysis tool and an LLM. By carefully designing the framework and the prompts, we are able to overcome a number of challenges, including bug-specific modeling, the large problem scope, the non-deterministic nature of LLMs, etc. Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs produced by static analysis, LLift demonstrates a potent capability, showcasing a reasonable precision (50%) and appearing to have no missing bugs. It even identified 13 previously unknown UBI bugs in the Linux kernel. This research paves the way for new opportunities and methodologies in using LLMs for bug discovery in extensive, real-world datasets.
http://arxiv.org/pdf/2308.00245
Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian
cs.SE, cs.AI
null
null
cs.SE
20230801
20231115
[ { "id": "2305.10601" }, { "id": "2107.03374" }, { "id": "2210.11610" }, { "id": "2305.16291" }, { "id": "2305.16151" }, { "id": "2303.18223" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "2203.02155" }, { "id": "2304.11938" }, { "id": "2304.03262" }, { "id": "2304.10513" }, { "id": "2201.11903" }, { "id": "2305.12138" }, { "id": "2305.12865" }, { "id": "2303.08774" }, { "id": "2306.01987" }, { "id": "2304.06815" } ]
2308.00352
27
Baselines We compare our method with recent domain-specific LLMs in the code generation field, including AlphaCode (Li et al., 2022), Incoder (Fried et al., 2022), CodeGeeX (Zheng et al., 2023), CodeGen (Nijkamp et al., 2023), CodeX (Chen et al., 2021a), and CodeT (Chen et al., 2022), as well as general-domain LLMs such as PaLM (Chowdhery et al., 2022) and GPT-4 (OpenAI, 2023). Several results of baselines (such as Incoder, CodeGeeX) are provided by Dong et al. (2023). We modify certain role-based prompts in MetaGPT to generate code suitable for the target problem (e.g., generate functions instead of classes for HumanEval and MBPP). With the SoftwareDev benchmark, we provide a comprehensive comparison between MetaGPT, AutoGPT (Torantulino et al., 2023), LangChain (Chase, 2022) with Python Read-Eval-Print Loop (REPL) tool3, AgentVerse (Chen et al., 2023), and ChatDev (Qian et al., 2023). 4.2 MAIN RESULT
2308.00352#27
MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework
Remarkable progress has been made on automated problem solving through societies of agents based on large language models (LLMs). Existing LLM-based multi-agent systems can already solve simple dialogue tasks. Solutions to more complex tasks, however, are complicated through logic inconsistencies due to cascading hallucinations caused by naively chaining LLMs. Here we introduce MetaGPT, an innovative meta-programming framework incorporating efficient human workflows into LLM-based multi-agent collaborations. MetaGPT encodes Standardized Operating Procedures (SOPs) into prompt sequences for more streamlined workflows, thus allowing agents with human-like domain expertise to verify intermediate results and reduce errors. MetaGPT utilizes an assembly line paradigm to assign diverse roles to various agents, efficiently breaking down complex tasks into subtasks involving many agents working together. On collaborative software engineering benchmarks, MetaGPT generates more coherent solutions than previous chat-based multi-agent systems. Our project can be found at https://github.com/geekan/MetaGPT
http://arxiv.org/pdf/2308.00352
Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, Jürgen Schmidhuber
cs.AI, cs.MA
null
null
cs.AI
20230801
20231106
[ { "id": "2308.12950" }, { "id": "2305.17066" }, { "id": "1511.09249" }, { "id": "2308.11432" }, { "id": "2306.08568" }, { "id": "2310.02304" }, { "id": "2303.08896" }, { "id": "2204.05999" }, { "id": "2309.16797" }, { "id": "2002.08155" }, { "id": "2305.16960" } ]
2308.00436
27
Table 1: SelfCheck significantly increases final answer accuracies with both GPT-3.5 and GPT-4, even when we only have 2 candidate solutions for each question. ∆Acc is the performance gain of SelfCheck compared with majority voting (MV), with the ± indicating the standard error. ✗✗, ✗✓, and ✓✓ represent the proportions of questions with 0, 1, or 2 correct solutions. We see that the gains from SelfCheck are typically larger in cases where it is common for only one of the solutions to be correct, as these are the cases where weighted voting can influence the final answer.
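The gap between MV and SelfCheck in this table comes from replacing simple majority voting with confidence-weighted voting over candidate answers. The following minimal Python sketch, with hypothetical answers and confidence scores, illustrates the difference; it is not the paper's code.

```python
from collections import defaultdict

def majority_vote(answers):
    """Return the answer that appears most often among the candidates."""
    counts = defaultdict(int)
    for ans in answers:
        counts[ans] += 1
    return max(counts, key=counts.get)

def weighted_vote(answers, confidences):
    """Return the answer with the largest total verification confidence."""
    scores = defaultdict(float)
    for ans, conf in zip(answers, confidences):
        scores[ans] += conf
    return max(scores, key=scores.get)

# Hypothetical two-candidate case where exactly one solution is correct (the "✗✓" setting).
answers = ["42", "17"]
confidences = [0.2, 0.9]  # per-solution confidence scores from a step-by-step checker
print(majority_vote(answers))               # tie between the two answers; returns the first seen
print(weighted_vote(answers, confidences))  # "17", the higher-confidence answer
```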
2308.00436#27
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
The recent progress in large language models (LLMs), especially the invention of chain-of-thought prompting, has made it possible to automatically answer questions by stepwise reasoning. However, when faced with more complicated problems that require non-linear thinking, even the strongest LLMs make mistakes. To address this, we explore whether LLMs are able to recognize errors in their own step-by-step reasoning, without resorting to external resources. To this end, we propose SelfCheck, a general-purpose zero-shot verification schema for recognizing such errors. We then use the results of these checks to improve question-answering performance by conducting weighted voting on multiple solutions to the question. We test SelfCheck on three datasets (GSM8K, MathQA, and MATH) and find that it successfully recognizes errors and, in turn, increases final answer accuracies.
http://arxiv.org/pdf/2308.00436
Ning Miao, Yee Whye Teh, Tom Rainforth
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230801
20231005
[ { "id": "2206.02336" }, { "id": "2302.13971" }, { "id": "2110.14168" }, { "id": "2305.20050" }, { "id": "2204.02311" }, { "id": "2301.13379" }, { "id": "2306.03872" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.08774" }, { "id": "2212.09561" } ]
2308.00675
27
Quantitative comparisons. We calculate the command-line level F1 score of each example and report the average F1 across 50 examples. Figure 5 showcases the performance of various LLMs in the zero-shot setting, where they have no prior exposure to the LLM-Cloud command-line tools we create. As anticipated, all zero-shot LLMs demonstrate low F1 scores. Zero-shot text-davinci-002 achieves an F1 score of 0.02, while the gpt-3.5-turbo model achieves a slightly higher score of 0.13. The improved performance of the gpt-3.5-turbo model can be attributed to better handling of common Linux commands, such as touch. As mentioned in the quantitative comparison, few-shot demos improve upon zero-shot, but still fail on commands or flags not covered in the demos. Therefore, the best few-shot demos for text-davinci-002 and gpt-3.5-turbo achieve F1 scores of only 0.05 and 0.19, respectively. On the other hand, the LLM with documentation boosts performance by a large margin, to 0.37 for text-davinci-002 and 0.45 for gpt-3.5-turbo.
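The command-line-level F1 reported here can be pictured with the simplified sketch below, which treats each whitespace-normalized command line as one unit; the authors' exact matching and normalization rules are not given in this chunk, so this is an assumed approximation, and the example commands are made up.

```python
def command_f1(predicted, reference):
    """Simplified command-line-level F1: exact match over whitespace-normalized lines."""
    pred_set = {" ".join(cmd.split()) for cmd in predicted}
    ref_set = {" ".join(cmd.split()) for cmd in reference}
    if not pred_set or not ref_set:
        return 0.0
    overlap = len(pred_set & ref_set)
    precision = overlap / len(pred_set)
    recall = overlap / len(ref_set)
    return 0.0 if precision + recall == 0 else 2 * precision * recall / (precision + recall)

# Made-up predicted vs. reference command lines for one example.
pred = ["toolcli storage make my-bucket", "toolcli storage copy a.txt my-bucket"]
ref = ["toolcli storage make my-bucket --location us", "toolcli storage copy a.txt my-bucket"]
print(round(command_f1(pred, ref), 2))  # 0.5: one of the two lines matches exactly
```

The benchmark's reported number would then be the mean of such per-example scores across the 50 questions.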
2308.00675#27
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Today, large language models (LLMs) are taught to use new tools by providing a few demonstrations of the tool's usage. Unfortunately, demonstrations are hard to acquire, and can result in undesirable biased usage if the wrong demonstration is chosen. Even in the rare scenario that demonstrations are readily available, there is no principled selection protocol to determine how many and which ones to provide. As tasks grow more complex, the selection search grows combinatorially and invariably becomes intractable. Our work provides an alternative to demonstrations: tool documentation. We advocate the use of tool documentation, descriptions for the individual tool usage, over demonstrations. We substantiate our claim through three main empirical findings on 6 tasks across both vision and language modalities. First, on existing benchmarks, zero-shot prompts with only tool documentation are sufficient for eliciting proper tool usage, achieving performance on par with few-shot prompts. Second, on a newly collected realistic tool-use dataset with hundreds of available tool APIs, we show that tool documentation is significantly more valuable than demonstrations, with zero-shot documentation significantly outperforming few-shot without documentation. Third, we highlight the benefits of tool documentations by tackling image generation and video tracking using just-released unseen state-of-the-art models as tools. Finally, we highlight the possibility of using tool documentation to automatically enable new applications: by using nothing more than the documentation of GroundingDino, Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the just-released Grounded-SAM and Track Anything models.
http://arxiv.org/pdf/2308.00675
Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister
cs.CL, cs.AI, cs.CV, cs.LG
null
null
cs.CL
20230801
20230801
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2204.10878" }, { "id": "1910.08210" }, { "id": "2107.07653" }, { "id": "2201.11903" }, { "id": "2305.17126" }, { "id": "1704.07535" }, { "id": "2205.01068" }, { "id": "2203.05115" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2305.04091" }, { "id": "2303.05499" }, { "id": "2107.07566" }, { "id": "2110.14168" }, { "id": "2210.03350" }, { "id": "2303.11381" }, { "id": "2101.06804" }, { "id": "2304.08354" }, { "id": "2212.14024" }, { "id": "2305.18752" }, { "id": "2211.10435" }, { "id": "2303.04671" }, { "id": "2210.12810" }, { "id": "1808.09588" }, { "id": "2304.11062" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2210.02406" }, { "id": "2212.10560" }, { "id": "2303.04129" }, { "id": "1704.01696" }, { "id": "2302.00923" }, { "id": "2211.12588" }, { "id": "1908.03557" }, { "id": "2210.05359" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2201.08239" }, { "id": "2303.16199" }, { "id": "2304.09842" }, { "id": "2204.01691" }, { "id": "2305.01625" }, { "id": "2303.12712" }, { "id": "2207.05608" }, { "id": "2303.03846" }, { "id": "2211.11559" }, { "id": "2207.01206" }, { "id": "2303.09014" }, { "id": "2205.12255" }, { "id": "2208.03188" } ]
2308.01423
27
with the objective function. Figure 6. Depiction of accuracies for three tasks using the GPT-4 model: search, prediction, and generation. Accuracies were evaluated based on three labels: True, False (exceeding token limit), and False (logical error); the numbers in the bars represent the count of each label. # Inverse Design Validation One notable observation is that, with each generation, the genetic algorithm refines the distribution of material properties to better align with the target value. Figure 7 illustrates the outcomes of the generation task for two different scenarios. Figure 7(a) reveals the structures generated in response to the question, "Can you generate structures with the largest surface area?" In this case, ChatMOF interprets the property as accessible surface area, with the objective of maximizing this parameter. The initial-generation MOFs (0th generation) display a broad distribution of surface areas with an average value of 3,748 m2/g. However, with each subsequent generation, the peak at higher surface areas becomes more pronounced. By the third generation, the offspring MOFs exhibit a significantly elevated average value of 5,554 m2/g.
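The evolutionary refinement described above follows the usual genetic-algorithm loop: predict the target property for each candidate, keep the fittest, and breed the next generation. The sketch below is a generic toy version of that loop, not ChatMOF's implementation; the candidate encoding, the predictor, and the crossover/mutation operators are all stand-ins.

```python
import random

def evolve(population, predict, fitness, n_generations=3, n_parents=20):
    """Generic GA loop: rank candidates by the fitness of their predicted property,
    then breed the best ones to form the next generation."""
    for _ in range(n_generations):
        ranked = sorted(population, key=lambda c: fitness(predict(c)), reverse=True)
        parents = ranked[:n_parents]
        children = [crossover(random.choice(parents), random.choice(parents))
                    for _ in range(len(population))]
        population = [mutate(c) for c in children]
    return population

# Stand-in operators for illustration only (candidates here are just lists of numbers).
def crossover(a, b):
    cut = len(a) // 2
    return a[:cut] + b[cut:]

def mutate(c):
    return [g + random.gauss(0, 0.1) for g in c]

# Example: maximize a toy "surface area" predictor, here simply sum(candidate).
population = [[random.random() for _ in range(4)] for _ in range(50)]
final = evolve(population, predict=sum, fitness=lambda v: v)
print(max(sum(c) for c in final))  # best toy property value after 3 generations
```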
2308.01423#27
ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks
ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to predict and generate metal-organic frameworks (MOFs). By leveraging a large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key details from textual inputs and delivers appropriate responses, thus eliminating the necessity for rigid structured queries. The system is comprised of three core components (i.e. an agent, a toolkit, and an evaluator) and it forms a robust pipeline that manages a variety of tasks, including data retrieval, property prediction, and structure generations. The study further explores the merits and constraints of using a large language model (LLM)-based AI system in material sciences and showcases its transformative potential for future advancements.
http://arxiv.org/pdf/2308.01423
Yeonghun Kang, Jihan Kim
cs.CL, cs.AI, cs.LG, physics.chem-ph
null
null
cs.CL
20230801
20230825
[ { "id": "2302.13971" }, { "id": "2306.11296" }, { "id": "2303.17580" }, { "id": "2305.18365" }, { "id": "2305.10601" }, { "id": "1810.04805" }, { "id": "2211.09085" }, { "id": "2304.05376" }, { "id": "2212.05238" }, { "id": "2108.07258" }, { "id": "2110.06197" }, { "id": "2306.06283" }, { "id": "2008.02275" }, { "id": "2303.12712" }, { "id": "2210.03629" }, { "id": "2205.00445" }, { "id": "2009.03300" } ]
2308.00245
28
# 4.3 Design #1: Post-Constraint Guided Path Analysis The Linux kernel frequently employs return value checks as illustrated in Table 2. Through our detailed examination of non-bug instances, we found that a path-sensitivity analysis can effectively eliminate over 70% of these negative cases. However, path-sensitive static analysis usually suffers from path explosion, especially in large-scale codebases like the Linux kernel. Fortunately, we can prompt the LLM to collect C_post and summarize the function with respect to C_post.
Table 2: Two types of post-constraints and their variants.
Check Before Use, Type A:  if (sscanf(...) >= 4) { use(a, b, c, d); }
Check Before Use, Type A': switch (ret = func(&a)) { case some_irrelevant_case: do_something(...); break; case critical_case: use(a);
Failure Check, Type B:  err = func(&a); if (err) { return/break/goto; } use(a);
Failure Check, Type B': while (func(&a)) { do_something(...); } use(a);
It is worth noting
2308.00245#28
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Static analysis is a widely used technique in software engineering for identifying and mitigating bugs. However, a significant hurdle lies in achieving a delicate balance between precision and scalability. Large Language Models (LLMs) offer a promising alternative, as recent advances demonstrate remarkable capabilities in comprehending, generating, and even debugging code. Yet, the logic of bugs can be complex and require sophisticated reasoning and a large analysis scope spanning multiple functions. Therefore, at this point, LLMs are better used in an assistive role to complement static analysis. In this paper, we take a deep dive into the open space of LLM-assisted static analysis, using use-before-initialization (UBI) bugs as a case study. To this end, we develop LLift, a fully automated framework that interfaces with both a static analysis tool and an LLM. By carefully designing the framework and the prompts, we are able to overcome a number of challenges, including bug-specific modeling, the large problem scope, the non-deterministic nature of LLMs, etc. Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs produced by static analysis, LLift demonstrates a potent capability, showcasing a reasonable precision (50%) and appearing to have no missing bugs. It even identified 13 previously unknown UBI bugs in the Linux kernel. This research paves the way for new opportunities and methodologies in using LLMs for bug discovery in extensive, real-world datasets.
http://arxiv.org/pdf/2308.00245
Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian
cs.SE, cs.AI
null
null
cs.SE
20230801
20231115
[ { "id": "2305.10601" }, { "id": "2107.03374" }, { "id": "2210.11610" }, { "id": "2305.16291" }, { "id": "2305.16151" }, { "id": "2303.18223" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "2203.02155" }, { "id": "2304.11938" }, { "id": "2304.03262" }, { "id": "2304.10513" }, { "id": "2201.11903" }, { "id": "2305.12138" }, { "id": "2305.12865" }, { "id": "2303.08774" }, { "id": "2306.01987" }, { "id": "2304.06815" } ]
2308.00436
28
Dataset | Generator | Checker | ✗✗ (%) | ✗✓ (%) | ✓✓ (%) | Acc(MV, %) | Acc(SelfCheck, %) | ∆Acc (%)
GSM8K | GPT-3.5 | GPT-3.5 | 16.8 | 23.0 | 60.2 | 71.7 | 74.3 | 2.8±0.9
GSM8K | GPT-4 | GPT-4 | 8.8 | 8.2 | 83.0 | 87.1 | 86.9 | -0.2±0.2
GSM8K | GPT-4 | GPT-3.5 | 8.8 | 8.2 | 83.0 | 87.1 | 88.1 | 1.0±0.3
MathQA | GPT-3.5 | GPT-3.5 | 27.6 | 26.4 | 46.0 | 59.2 | 64.6 | 5.4±1.1
MathQA | GPT-4 | GPT-4 | 16.2 | 11.0 | 72.8 | 78.3 | 80.9 | 2.6±0.4
MathQA | GPT-4 | GPT-3.5 | 16.2 | 11.0 | 72.8 | 78.3 | 81.2 | 3.0±0.4
MATH∗ | GPT-3.5 | GPT-3.5 | 52.6 | 23.2 | 24.2 | (remaining values truncated in this chunk)
MATH∗ | GPT-4 | GPT-4 | 42.0 | 20.2 | (truncated)
MATH∗ | GPT-4 | GPT-3.5 | 42.0 | 20.2 | (truncated)
2308.00436#28
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
The recent progress in large language models (LLMs), especially the invention of chain-of-thought prompting, has made it possible to automatically answer questions by stepwise reasoning. However, when faced with more complicated problems that require non-linear thinking, even the strongest LLMs make mistakes. To address this, we explore whether LLMs are able to recognize errors in their own step-by-step reasoning, without resorting to external resources. To this end, we propose SelfCheck, a general-purpose zero-shot verification schema for recognizing such errors. We then use the results of these checks to improve question-answering performance by conducting weighted voting on multiple solutions to the question. We test SelfCheck on three datasets (GSM8K, MathQA, and MATH) and find that it successfully recognizes errors and, in turn, increases final answer accuracies.
http://arxiv.org/pdf/2308.00436
Ning Miao, Yee Whye Teh, Tom Rainforth
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230801
20231005
[ { "id": "2206.02336" }, { "id": "2302.13971" }, { "id": "2110.14168" }, { "id": "2305.20050" }, { "id": "2204.02311" }, { "id": "2301.13379" }, { "id": "2306.03872" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.08774" }, { "id": "2212.09561" } ]
2308.00675
28
[Figure: programs composed purely from tool documentation. For the question "Replace the bench with a blue sofa", the program is: BOX0 = DINO_LOC(image=IMAGE, object='bench'); OBJ0 = SAM_SEG(image=IMAGE, box=BOX0); IMAGE0 = REPLACE(image=IMAGE, object=OBJ0, prompt='blue sofa'); FINAL_ANS = RESULT(var=IMAGE0). For the question "Change the hair color", the program is: BOX0 = DINO_LOC(image=IMAGE, object='hair'); OBJ0 = SAM_SEG(image=IMAGE, box=BOX0); IMAGE0 = REPLACE(image=IMAGE, object=OBJ0, prompt='red hair'); FINAL_ANS = RESULT(var=IMAGE0). Given only the documentation of the newly added tools (DINO_LOC, a new object-localization tool with usage BOX=DINO_LOC(...), and SAM_SEG, a new object-segmentation tool), the LLM re-invents Grounded-SAM; a further question, "Track the cat in the video", is handled with video-tracking tools to re-invent Track Anything.]
2308.00675#28
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Today, large language models (LLMs) are taught to use new tools by providing a few demonstrations of the tool's usage. Unfortunately, demonstrations are hard to acquire, and can result in undesirable biased usage if the wrong demonstration is chosen. Even in the rare scenario that demonstrations are readily available, there is no principled selection protocol to determine how many and which ones to provide. As tasks grow more complex, the selection search grows combinatorially and invariably becomes intractable. Our work provides an alternative to demonstrations: tool documentation. We advocate the use of tool documentation, descriptions for the individual tool usage, over demonstrations. We substantiate our claim through three main empirical findings on 6 tasks across both vision and language modalities. First, on existing benchmarks, zero-shot prompts with only tool documentation are sufficient for eliciting proper tool usage, achieving performance on par with few-shot prompts. Second, on a newly collected realistic tool-use dataset with hundreds of available tool APIs, we show that tool documentation is significantly more valuable than demonstrations, with zero-shot documentation significantly outperforming few-shot without documentation. Third, we highlight the benefits of tool documentations by tackling image generation and video tracking using just-released unseen state-of-the-art models as tools. Finally, we highlight the possibility of using tool documentation to automatically enable new applications: by using nothing more than the documentation of GroundingDino, Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the just-released Grounded-SAM and Track Anything models.
http://arxiv.org/pdf/2308.00675
Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister
cs.CL, cs.AI, cs.CV, cs.LG
null
null
cs.CL
20230801
20230801
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2204.10878" }, { "id": "1910.08210" }, { "id": "2107.07653" }, { "id": "2201.11903" }, { "id": "2305.17126" }, { "id": "1704.07535" }, { "id": "2205.01068" }, { "id": "2203.05115" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2305.04091" }, { "id": "2303.05499" }, { "id": "2107.07566" }, { "id": "2110.14168" }, { "id": "2210.03350" }, { "id": "2303.11381" }, { "id": "2101.06804" }, { "id": "2304.08354" }, { "id": "2212.14024" }, { "id": "2305.18752" }, { "id": "2211.10435" }, { "id": "2303.04671" }, { "id": "2210.12810" }, { "id": "1808.09588" }, { "id": "2304.11062" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2210.02406" }, { "id": "2212.10560" }, { "id": "2303.04129" }, { "id": "1704.01696" }, { "id": "2302.00923" }, { "id": "2211.12588" }, { "id": "1908.03557" }, { "id": "2210.05359" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2201.08239" }, { "id": "2303.16199" }, { "id": "2304.09842" }, { "id": "2204.01691" }, { "id": "2305.01625" }, { "id": "2303.12712" }, { "id": "2207.05608" }, { "id": "2303.03846" }, { "id": "2211.11559" }, { "id": "2207.01206" }, { "id": "2303.09014" }, { "id": "2205.12255" }, { "id": "2208.03188" } ]
2308.01423
28
Analogously, Figure 7(b) portrays the distribution of structures generated to fulfill the request "I'm looking for structures with a hydrogen uptake of about 500 cm3/cm3 at 100 bar and 77 K, can you generate those?". Here, ChatMOF sets the property to hydrogen uptake at 100 bar and 77 K, with the objective of achieving close proximity to 500 cm3/cm3. The distribution of the initial structures spans evenly from 250 cm3/cm3 to 650 cm3/cm3. However, the structures created in the final generation display the most pronounced and narrow peak at 500 cm3/cm3. This indicates the efficiency of the genetic algorithm utilizing the LLMs. Figures 7(c) and 7(d) depict the final structures for the queries in 7(a) and 7(b). The optimal structure in 7(c), rtl+N535+N234, boasts the highest surface area amongst the generated MOFs. The predicted value stands at 6411.28 m2/g. Upon performing a geometric optimization and calculating the accessible surface area using Zeo++59, the surface area is revealed to be 7647.62 m2/g. This value is notably higher when compared to the CoREMOF database. Figure S1 illustrates
2308.01423#28
ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks
ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to predict and generate metal-organic frameworks (MOFs). By leveraging a large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key details from textual inputs and delivers appropriate responses, thus eliminating the necessity for rigid structured queries. The system is comprised of three core components (i.e. an agent, a toolkit, and an evaluator) and it forms a robust pipeline that manages a variety of tasks, including data retrieval, property prediction, and structure generations. The study further explores the merits and constraints of using a large language model (LLM)-based AI system in material sciences and showcases its transformative potential for future advancements.
http://arxiv.org/pdf/2308.01423
Yeonghun Kang, Jihan Kim
cs.CL, cs.AI, cs.LG, physics.chem-ph
null
null
cs.CL
20230801
20230825
[ { "id": "2302.13971" }, { "id": "2306.11296" }, { "id": "2303.17580" }, { "id": "2305.18365" }, { "id": "2305.10601" }, { "id": "1810.04805" }, { "id": "2211.09085" }, { "id": "2304.05376" }, { "id": "2212.05238" }, { "id": "2108.07258" }, { "id": "2110.06197" }, { "id": "2306.06283" }, { "id": "2008.02275" }, { "id": "2303.12712" }, { "id": "2210.03629" }, { "id": "2205.00445" }, { "id": "2009.03300" } ]
2308.00245
29
that current LLMs (e.g., GPT-4) are not natively sensitive to such post-constraints; without any additional instructions, LLMs usually overlook the post-constraints. Therefore, we teach the LLM to be sensitive to post-constraint rules through few-shot in-context learning. We describe the design details as follows: 4.3.1 Post-Constraints Extraction. To extract the qualified postcondition, we first determine the post-constraints that lead to the use of suspicious variables. We incorporate few-shot in-context learning to teach LLMs how to extract such constraints from the caller context. Table 2 demonstrates how we teach the LLM with in-context learning. We focus primarily on two types of code patterns:
2308.00245#29
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Static analysis is a widely used technique in software engineering for identifying and mitigating bugs. However, a significant hurdle lies in achieving a delicate balance between precision and scalability. Large Language Models (LLMs) offer a promising alternative, as recent advances demonstrate remarkable capabilities in comprehending, generating, and even debugging code. Yet, the logic of bugs can be complex and require sophisticated reasoning and a large analysis scope spanning multiple functions. Therefore, at this point, LLMs are better used in an assistive role to complement static analysis. In this paper, we take a deep dive into the open space of LLM-assisted static analysis, using use-before-initialization (UBI) bugs as a case study. To this end, we develop LLift, a fully automated framework that interfaces with both a static analysis tool and an LLM. By carefully designing the framework and the prompts, we are able to overcome a number of challenges, including bug-specific modeling, the large problem scope, the non-deterministic nature of LLMs, etc. Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs produced by static analysis, LLift demonstrates a potent capability, showcasing a reasonable precision (50%) and appearing to have no missing bugs. It even identified 13 previously unknown UBI bugs in the Linux kernel. This research paves the way for new opportunities and methodologies in using LLMs for bug discovery in extensive, real-world datasets.
http://arxiv.org/pdf/2308.00245
Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian
cs.SE, cs.AI
null
null
cs.SE
20230801
20231115
[ { "id": "2305.10601" }, { "id": "2107.03374" }, { "id": "2210.11610" }, { "id": "2305.16291" }, { "id": "2305.16151" }, { "id": "2303.18223" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "2203.02155" }, { "id": "2304.11938" }, { "id": "2304.03262" }, { "id": "2304.10513" }, { "id": "2201.11903" }, { "id": "2305.12138" }, { "id": "2305.12865" }, { "id": "2303.08774" }, { "id": "2306.01987" }, { "id": "2304.06815" } ]
2308.00352
29
Figure 4: Pass rates on the MBPP and HumanEval with a single attempt. Performance Figure 4 demonstrates that MetaGPT outperforms all preceding approaches in both HumanEval and MBPP benchmarks. When MetaGPT collaborates with GPT-4, it significantly improves the Pass @k in the HumanEval benchmark compared to GPT-4. It achieves 85.9% and 87.7% in these two public benchmarks. Moreover, as shown in Table 1, MetaGPT outperforms ChatDev on the challenging SoftwareDev dataset in nearly all metrics. For example, considering the executability, MetaGPT achieves a score of 3.75, which is very close to 4 (flawless). Besides, it takes less time (503 seconds), clearly less than ChatDev. Considering the code statistics and the cost of human revision, it also significantly outperforms ChatDev. Although MetaGPT requires more tokens (24,613 or 31,255 compared to 19,292), it needs only 126.5/124.3 tokens to generate one line of code. In contrast, ChatDev uses 248.9 tokens. These results highlight the benefits of SOPs in collaborations between multiple agents. Additionally, we demonstrate the autonomous software generation capabilities of MetaGPT through visualization samples (Figure 5). For additional experiments and analysis, please refer to Appendix C.
2308.00352#29
MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework
Remarkable progress has been made on automated problem solving through societies of agents based on large language models (LLMs). Existing LLM-based multi-agent systems can already solve simple dialogue tasks. Solutions to more complex tasks, however, are complicated through logic inconsistencies due to cascading hallucinations caused by naively chaining LLMs. Here we introduce MetaGPT, an innovative meta-programming framework incorporating efficient human workflows into LLM-based multi-agent collaborations. MetaGPT encodes Standardized Operating Procedures (SOPs) into prompt sequences for more streamlined workflows, thus allowing agents with human-like domain expertise to verify intermediate results and reduce errors. MetaGPT utilizes an assembly line paradigm to assign diverse roles to various agents, efficiently breaking down complex tasks into subtasks involving many agents working together. On collaborative software engineering benchmarks, MetaGPT generates more coherent solutions than previous chat-based multi-agent systems. Our project can be found at https://github.com/geekan/MetaGPT
http://arxiv.org/pdf/2308.00352
Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, Jürgen Schmidhuber
cs.AI, cs.MA
null
null
cs.AI
20230801
20231106
[ { "id": "2308.12950" }, { "id": "2305.17066" }, { "id": "1511.09249" }, { "id": "2308.11432" }, { "id": "2306.08568" }, { "id": "2310.02304" }, { "id": "2303.08896" }, { "id": "2204.05999" }, { "id": "2309.16797" }, { "id": "2002.08155" }, { "id": "2305.16960" } ]
2308.00675
29
[Figure (continued): for the question "Track the cat in the video", the program is: IMAGE = EVAL(expr="{VIDEO}[0]"); BOX0 = DINO_LOC(image=IMAGE, object='cat'); OBJ0 = SAM_SEG(image=IMAGE, box=BOX0); VIDEO0 = TRACK(video=VIDEO, object=OBJ0); FINAL_ANS = RESULT(var=VIDEO0). With the documentation of the newly added TRACK tool (video object tracking by XMem), the LLM re-invents Track Anything, just as the image-editing programs re-invent Grounded-SAM.]
2308.00675#29
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Today, large language models (LLMs) are taught to use new tools by providing a few demonstrations of the tool's usage. Unfortunately, demonstrations are hard to acquire, and can result in undesirable biased usage if the wrong demonstration is chosen. Even in the rare scenario that demonstrations are readily available, there is no principled selection protocol to determine how many and which ones to provide. As tasks grow more complex, the selection search grows combinatorially and invariably becomes intractable. Our work provides an alternative to demonstrations: tool documentation. We advocate the use of tool documentation, descriptions for the individual tool usage, over demonstrations. We substantiate our claim through three main empirical findings on 6 tasks across both vision and language modalities. First, on existing benchmarks, zero-shot prompts with only tool documentation are sufficient for eliciting proper tool usage, achieving performance on par with few-shot prompts. Second, on a newly collected realistic tool-use dataset with hundreds of available tool APIs, we show that tool documentation is significantly more valuable than demonstrations, with zero-shot documentation significantly outperforming few-shot without documentation. Third, we highlight the benefits of tool documentations by tackling image generation and video tracking using just-released unseen state-of-the-art models as tools. Finally, we highlight the possibility of using tool documentation to automatically enable new applications: by using nothing more than the documentation of GroundingDino, Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the just-released Grounded-SAM and Track Anything models.
http://arxiv.org/pdf/2308.00675
Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister
cs.CL, cs.AI, cs.CV, cs.LG
null
null
cs.CL
20230801
20230801
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2204.10878" }, { "id": "1910.08210" }, { "id": "2107.07653" }, { "id": "2201.11903" }, { "id": "2305.17126" }, { "id": "1704.07535" }, { "id": "2205.01068" }, { "id": "2203.05115" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2305.04091" }, { "id": "2303.05499" }, { "id": "2107.07566" }, { "id": "2110.14168" }, { "id": "2210.03350" }, { "id": "2303.11381" }, { "id": "2101.06804" }, { "id": "2304.08354" }, { "id": "2212.14024" }, { "id": "2305.18752" }, { "id": "2211.10435" }, { "id": "2303.04671" }, { "id": "2210.12810" }, { "id": "1808.09588" }, { "id": "2304.11062" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2210.02406" }, { "id": "2212.10560" }, { "id": "2303.04129" }, { "id": "1704.01696" }, { "id": "2302.00923" }, { "id": "2211.12588" }, { "id": "1908.03557" }, { "id": "2210.05359" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2201.08239" }, { "id": "2303.16199" }, { "id": "2304.09842" }, { "id": "2204.01691" }, { "id": "2305.01625" }, { "id": "2303.12712" }, { "id": "2207.05608" }, { "id": "2303.03846" }, { "id": "2211.11559" }, { "id": "2207.01206" }, { "id": "2303.09014" }, { "id": "2205.12255" }, { "id": "2208.03188" } ]
2308.01423
29
the distribution of accessible surface areas within CoREMOF. This particular structure's surface area ranks third-highest in the CoREMOF ranking. In a similar vein, the optimal configuration of dia+N719+E186, showcased in Figure 7(d), possesses a hydrogen uptake of 499.998 cm3/cm3, mirroring the stipulated target of 500 cm3/cm3. Following geometric optimization of this structure, its uptake was calculated using RASPA, yielding a value strikingly close to the goal, at 495.823 cm3/cm3. Despite its successes, the generation task of ChatMOF does present some limitations. Chief among these is the decrease in gene diversity due to constraints on input and output tokens. The token count restricts the number of parent and child structures to around 100, a fraction compared to inverse design studies that employ conventional genetic algorithm procedures and generate upwards of 100,000 structures for each generation. Other constraints, such as the limited number of topologies and cycles, stem from resource and time restrictions. Yet, despite these limitations, ChatMOF excels in generating MOFs fitting the objective function, attesting to its efficacy.
2308.01423#29
ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks
ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to predict and generate metal-organic frameworks (MOFs). By leveraging a large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key details from textual inputs and delivers appropriate responses, thus eliminating the necessity for rigid structured queries. The system is comprised of three core components (i.e. an agent, a toolkit, and an evaluator) and it forms a robust pipeline that manages a variety of tasks, including data retrieval, property prediction, and structure generations. The study further explores the merits and constraints of using a large language model (LLM)-based AI system in material sciences and showcases its transformative potential for future advancements.
http://arxiv.org/pdf/2308.01423
Yeonghun Kang, Jihan Kim
cs.CL, cs.AI, cs.LG, physics.chem-ph
null
null
cs.CL
20230801
20230825
[ { "id": "2302.13971" }, { "id": "2306.11296" }, { "id": "2303.17580" }, { "id": "2305.18365" }, { "id": "2305.10601" }, { "id": "1810.04805" }, { "id": "2211.09085" }, { "id": "2304.05376" }, { "id": "2212.05238" }, { "id": "2108.07258" }, { "id": "2110.06197" }, { "id": "2306.06283" }, { "id": "2008.02275" }, { "id": "2303.12712" }, { "id": "2210.03629" }, { "id": "2205.00445" }, { "id": "2009.03300" } ]
2308.00245
30
Check Before Use. Type A is our motivating example; by looking at its check, the post-constraint should be ret >= 4. Type A' describes a similar case with switch-cases, with expected output ret == critical_case. Failure Check. This pattern captures the opposite of the first pattern. It commonly occurs in the Linux kernel, where the error conditions cause the use to become unreachable; as illustrated in Type B, the post-constraint is err == 0. Type B' depicts a variant where the initializer keeps retrying until success, and therefore the expected output is ret == 0, which indicates that its first successful execution breaks the endless loop. 4.3.2 Function Behavior Summarization. Once we obtain the post-constraints in Convo.1, we feed them to the LLM to obtain the behavior summary in Convo.2. For example, we provide the following:
2308.00245#30
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Static analysis is a widely used technique in software engineering for identifying and mitigating bugs. However, a significant hurdle lies in achieving a delicate balance between precision and scalability. Large Language Models (LLMs) offer a promising alternative, as recent advances demonstrate remarkable capabilities in comprehending, generating, and even debugging code. Yet, the logic of bugs can be complex and require sophisticated reasoning and a large analysis scope spanning multiple functions. Therefore, at this point, LLMs are better used in an assistive role to complement static analysis. In this paper, we take a deep dive into the open space of LLM-assisted static analysis, using use-before-initialization (UBI) bugs as a case study. To this end, we develop LLift, a fully automated framework that interfaces with both a static analysis tool and an LLM. By carefully designing the framework and the prompts, we are able to overcome a number of challenges, including bug-specific modeling, the large problem scope, the non-deterministic nature of LLMs, etc. Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs produced by static analysis, LLift demonstrates a potent capability, showcasing a reasonable precision (50%) and appearing to have no missing bugs. It even identified 13 previously unknown UBI bugs in the Linux kernel. This research paves the way for new opportunities and methodologies in using LLMs for bug discovery in extensive, real-world datasets.
http://arxiv.org/pdf/2308.00245
Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian
cs.SE, cs.AI
null
null
cs.SE
20230801
20231115
[ { "id": "2305.10601" }, { "id": "2107.03374" }, { "id": "2210.11610" }, { "id": "2305.16291" }, { "id": "2305.16151" }, { "id": "2303.18223" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "2203.02155" }, { "id": "2304.11938" }, { "id": "2304.03262" }, { "id": "2304.10513" }, { "id": "2201.11903" }, { "id": "2305.12138" }, { "id": "2305.12865" }, { "id": "2303.08774" }, { "id": "2306.01987" }, { "id": "2304.06815" } ]
2308.00352
30
# 3 https://en.wikipedia.org/wiki/Read–eval–print loop
Figure 5: Demo software developed by MetaGPT.
# Table 1: The statistical analysis on SoftwareDev.
Statistical Index | ChatDev | MetaGPT w/o Feedback | MetaGPT
(A) Executability | 2.25 | 3.67 | 3.75
(B) Cost#1: Running Times (s) | 762 | 503 | 541
(B) Cost#2: Token Usage | 19,292 | 24,613 | 31,255
(C) Code Statistic#1: Code Files | 1.9 | 4.6 | 5.1
(C) Code Statistic#2: Lines of Code per File | 40.8 | 42.3 | 49.3
(C) Code Statistic#3: Total Code Lines | 77.5 | 194.6 | 251.4
(D) Productivity | 248.9 | 126.5 | 124.3
(E) Human Revision Cost | 2.5 | 2.25 | 0.83
# 4.3 CAPABILITIES ANALYSIS
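The "Productivity" row in Table 1 above is consistent with token usage divided by total lines of code; the quick check below assumes that definition, which the surrounding prose implies but does not state formally.

```python
# Productivity read as tokens consumed per generated line of code (assumed definition).
rows = {
    "ChatDev": (19_292, 77.5),
    "MetaGPT w/o Feedback": (24_613, 194.6),
    "MetaGPT": (31_255, 251.4),
}
for name, (tokens, lines) in rows.items():
    print(f"{name}: {tokens / lines:.1f} tokens per line")
# Prints 248.9, 126.5, and 124.3 — matching the Productivity row above.
```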
2308.00352#30
MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework
Remarkable progress has been made on automated problem solving through societies of agents based on large language models (LLMs). Existing LLM-based multi-agent systems can already solve simple dialogue tasks. Solutions to more complex tasks, however, are complicated through logic inconsistencies due to cascading hallucinations caused by naively chaining LLMs. Here we introduce MetaGPT, an innovative meta-programming framework incorporating efficient human workflows into LLM-based multi-agent collaborations. MetaGPT encodes Standardized Operating Procedures (SOPs) into prompt sequences for more streamlined workflows, thus allowing agents with human-like domain expertise to verify intermediate results and reduce errors. MetaGPT utilizes an assembly line paradigm to assign diverse roles to various agents, efficiently breaking down complex tasks into subtasks involving many agents working together. On collaborative software engineering benchmarks, MetaGPT generates more coherent solutions than previous chat-based multi-agent systems. Our project can be found at https://github.com/geekan/MetaGPT
http://arxiv.org/pdf/2308.00352
Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, Jürgen Schmidhuber
cs.AI, cs.MA
null
null
cs.AI
20230801
20231106
[ { "id": "2308.12950" }, { "id": "2305.17066" }, { "id": "1511.09249" }, { "id": "2308.11432" }, { "id": "2306.08568" }, { "id": "2310.02304" }, { "id": "2303.08896" }, { "id": "2204.05999" }, { "id": "2309.16797" }, { "id": "2002.08155" }, { "id": "2305.16960" } ]
2308.00436
30
from additional examples. We will perform additional ablations on how performance changes when ensembling over a larger number of solutions in Section 5.1. To investigate the effect of using more powerful LLMs, and of using a different LLM for the generation and checking, we further conducted experiments with GPT-4 and a mix of GPT-4 and GPT-3.5. Because of the high cost of calling the GPT-4 API, we randomly sample 500 questions from each dataset to form the test sets and generate 2 (instead of 10) answers to each question. In Table 1, we see that SelfCheck significantly outperforms majority voting with both GPT-3.5 and GPT-4. We also notice that using GPT-3.5 to check GPT-4 generated answers yields surprisingly good results, actually outperforming checking with GPT-4 on the simpler GSM8K and MathQA tasks. This is likely because using different LLMs helps to further decorrelate the errors of the generator and the checker, and shows that using a cheaper LLM can still often be sufficient for the checking. For the more difficult problems in MATH, using GPT-4 as checker always produces better results, but even here the checking from GPT-3.5 is beneficial compared to doing no checking at all. 4.2 VERIFICATION PERFORMANCE
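A rough sketch of the generator/checker split discussed here: one model produces a step-by-step solution, another (possibly cheaper) model checks each step, and the per-step results are folded into a single confidence for weighted voting. The call_llm stub, the prompts, and the min-aggregation are illustrative assumptions, not the paper's exact prompts or integration function.

```python
def call_llm(model: str, prompt: str) -> str:
    """Stub for a chat-completion call; wire this up to a real API client."""
    raise NotImplementedError

def check_solution(question: str, solution_steps: list[str], checker: str) -> float:
    """Ask the checker to regenerate each step independently and compare."""
    step_scores = []
    for i, step in enumerate(solution_steps):
        verdict = call_llm(
            checker,
            f"Question: {question}\nPrevious steps: {solution_steps[:i]}\n"
            f"Re-derive this step yourself, then say whether your result supports, "
            f"is neutral to, or contradicts: {step}",
        )
        step_scores.append({"support": 1.0, "neutral": 0.5}.get(verdict.strip().lower(), 0.0))
    # Assumed aggregation: a solution is only as trustworthy as its weakest step.
    return min(step_scores) if step_scores else 0.0

def generate_and_check(question: str, generator: str, checker: str, n: int = 2):
    """Return (final_answer, confidence) pairs for n candidate solutions."""
    results = []
    for _ in range(n):
        raw = call_llm(generator, f"Solve step by step: {question}")
        steps = [s for s in raw.splitlines() if s.strip()]
        results.append((steps[-1] if steps else "", check_solution(question, steps, checker)))
    return results
```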
2308.00436#30
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
The recent progress in large language models (LLMs), especially the invention of chain-of-thought prompting, has made it possible to automatically answer questions by stepwise reasoning. However, when faced with more complicated problems that require non-linear thinking, even the strongest LLMs make mistakes. To address this, we explore whether LLMs are able to recognize errors in their own step-by-step reasoning, without resorting to external resources. To this end, we propose SelfCheck, a general-purpose zero-shot verification schema for recognizing such errors. We then use the results of these checks to improve question-answering performance by conducting weighted voting on multiple solutions to the question. We test SelfCheck on three datasets (GSM8K, MathQA, and MATH) and find that it successfully recognizes errors and, in turn, increases final answer accuracies.
http://arxiv.org/pdf/2308.00436
Ning Miao, Yee Whye Teh, Tom Rainforth
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230801
20231005
[ { "id": "2206.02336" }, { "id": "2302.13971" }, { "id": "2110.14168" }, { "id": "2305.20050" }, { "id": "2204.02311" }, { "id": "2301.13379" }, { "id": "2306.03872" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.08774" }, { "id": "2212.09561" } ]
2308.00675
30
Figure 6: Plug-and-play new vision tools without demonstration. We add GroundingDINO [38], Segment Anything (SAM) [30], and XMem [14] as new tools for VisProg. Solely with the documentation of the new tools, the LLM is able to automatically "re-invent" the recent Grounded-SAM [23] and Track Anything [70] without knowing these derivatives, taking a further step toward automatic knowledge discovery. We further compare the performance of documentation reading with that of documentation supplemented with few-shot demonstrations. In the case of text-davinci-002, with documentation only, we achieve an F1 score of 0.37. Conversely, the documentation augmented with different shots yields an average F1 score of 0.35. Similarly, in the gpt-3.5-turbo experiment, the performance with different shot demonstrations (0.44, 0.44, 0.42) is consistently lower than the documentation-only performance (0.45).
2308.00675#30
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Today, large language models (LLMs) are taught to use new tools by providing a few demonstrations of the tool's usage. Unfortunately, demonstrations are hard to acquire, and can result in undesirable biased usage if the wrong demonstration is chosen. Even in the rare scenario that demonstrations are readily available, there is no principled selection protocol to determine how many and which ones to provide. As tasks grow more complex, the selection search grows combinatorially and invariably becomes intractable. Our work provides an alternative to demonstrations: tool documentation. We advocate the use of tool documentation, descriptions for the individual tool usage, over demonstrations. We substantiate our claim through three main empirical findings on 6 tasks across both vision and language modalities. First, on existing benchmarks, zero-shot prompts with only tool documentation are sufficient for eliciting proper tool usage, achieving performance on par with few-shot prompts. Second, on a newly collected realistic tool-use dataset with hundreds of available tool APIs, we show that tool documentation is significantly more valuable than demonstrations, with zero-shot documentation significantly outperforming few-shot without documentation. Third, we highlight the benefits of tool documentations by tackling image generation and video tracking using just-released unseen state-of-the-art models as tools. Finally, we highlight the possibility of using tool documentation to automatically enable new applications: by using nothing more than the documentation of GroundingDino, Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the just-released Grounded-SAM and Track Anything models.
http://arxiv.org/pdf/2308.00675
Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister
cs.CL, cs.AI, cs.CV, cs.LG
null
null
cs.CL
20230801
20230801
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2204.10878" }, { "id": "1910.08210" }, { "id": "2107.07653" }, { "id": "2201.11903" }, { "id": "2305.17126" }, { "id": "1704.07535" }, { "id": "2205.01068" }, { "id": "2203.05115" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2305.04091" }, { "id": "2303.05499" }, { "id": "2107.07566" }, { "id": "2110.14168" }, { "id": "2210.03350" }, { "id": "2303.11381" }, { "id": "2101.06804" }, { "id": "2304.08354" }, { "id": "2212.14024" }, { "id": "2305.18752" }, { "id": "2211.10435" }, { "id": "2303.04671" }, { "id": "2210.12810" }, { "id": "1808.09588" }, { "id": "2304.11062" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2210.02406" }, { "id": "2212.10560" }, { "id": "2303.04129" }, { "id": "1704.01696" }, { "id": "2302.00923" }, { "id": "2211.12588" }, { "id": "1908.03557" }, { "id": "2210.05359" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2201.08239" }, { "id": "2303.16199" }, { "id": "2304.09842" }, { "id": "2204.01691" }, { "id": "2305.01625" }, { "id": "2303.12712" }, { "id": "2207.05608" }, { "id": "2303.03846" }, { "id": "2211.11559" }, { "id": "2207.01206" }, { "id": "2303.09014" }, { "id": "2205.12255" }, { "id": "2208.03188" } ]
2308.01423
30
[Figure: (a) For the question "Can you generate the structures with the largest surface area?", density histograms of accessible surface area (m2/g) for the initial structures and the generated structures. (b) For the question "I'm looking for structures with a hydrogen uptake of about 500 cm3/cm3 at 100bar and 77K, can you generate those?", density histograms of hydrogen uptake at 100 bar and 77 K (cm3/cm3) for the initial structures and the generated structures. (c) rtl+N535+N234 — predicted ASA 6411.28 m2/g; calculated ASA (after optimization) 7647.62 m2/g. (d) dia+N719+E186 — predicted H2 uptake 499.998 cm3/cm3; calculated H2 uptake (after optimization) 495.823 cm3/cm3.]
2308.01423#30
ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks
ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to predict and generate metal-organic frameworks (MOFs). By leveraging a large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key details from textual inputs and delivers appropriate responses, thus eliminating the necessity for rigid structured queries. The system is comprised of three core components (i.e. an agent, a toolkit, and an evaluator) and it forms a robust pipeline that manages a variety of tasks, including data retrieval, property prediction, and structure generations. The study further explores the merits and constraints of using a large language model (LLM)-based AI system in material sciences and showcases its transformative potential for future advancements.
http://arxiv.org/pdf/2308.01423
Yeonghun Kang, Jihan Kim
cs.CL, cs.AI, cs.LG, physics.chem-ph
null
null
cs.CL
20230801
20230825
[ { "id": "2302.13971" }, { "id": "2306.11296" }, { "id": "2303.17580" }, { "id": "2305.18365" }, { "id": "2305.10601" }, { "id": "1810.04805" }, { "id": "2211.09085" }, { "id": "2304.05376" }, { "id": "2212.05238" }, { "id": "2108.07258" }, { "id": "2110.06197" }, { "id": "2306.06283" }, { "id": "2008.02275" }, { "id": "2303.12712" }, { "id": "2210.03629" }, { "id": "2205.00445" }, { "id": "2009.03300" } ]
2308.00245
31
{ "initializer": "ret = sscanf(str,'%u.%u.%u.%u%n',&a,&b,&c,&d,&n)", "suspicious": ["a", "b", "c", "d"], "postconstraint": "ret >= 4" } The LLM may respond with { "ret": "success", "response": { "must_init": ["a", "b", "c", "d"], "may_init": [{"name":"n", "condition": "ret > 4"}] } } The response succinctly encapsulates the function behavior, where variables a,b,c,d are classified as must_init, and n is cat- egorized as may_init. This is due to the initialization of n only occurring when 𝑟𝑒𝑡 > 4, and not when 𝑟𝑒𝑡 ↦→
2308.00245#31
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Static analysis is a widely used technique in software engineering for identifying and mitigating bugs. However, a significant hurdle lies in achieving a delicate balance between precision and scalability. Large Language Models (LLMs) offer a promising alternative, as recent advances demonstrate remarkable capabilities in comprehending, generating, and even debugging code. Yet, the logic of bugs can be complex and require sophisticated reasoning and a large analysis scope spanning multiple functions. Therefore, at this point, LLMs are better used in an assistive role to complement static analysis. In this paper, we take a deep dive into the open space of LLM-assisted static analysis, using use-before-initialization (UBI) bugs as a case study. To this end, we develop LLift, a fully automated framework that interfaces with both a static analysis tool and an LLM. By carefully designing the framework and the prompts, we are able to overcome a number of challenges, including bug-specific modeling, the large problem scope, the non-deterministic nature of LLMs, etc. Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs produced by static analysis, LLift demonstrates a potent capability, showcasing a reasonable precision (50%) and appearing to have no missing bugs. It even identified 13 previously unknown UBI bugs in the Linux kernel. This research paves the way for new opportunities and methodologies in using LLMs for bug discovery in extensive, real-world datasets.
http://arxiv.org/pdf/2308.00245
Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian
cs.SE, cs.AI
null
null
cs.SE
20230801
20231115
[ { "id": "2305.10601" }, { "id": "2107.03374" }, { "id": "2210.11610" }, { "id": "2305.16291" }, { "id": "2305.16151" }, { "id": "2303.18223" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "2203.02155" }, { "id": "2304.11938" }, { "id": "2304.03262" }, { "id": "2304.10513" }, { "id": "2201.11903" }, { "id": "2305.12138" }, { "id": "2305.12865" }, { "id": "2303.08774" }, { "id": "2306.01987" }, { "id": "2304.06815" } ]
2308.00352
31
# 4.3 CAPABILITIES ANALYSIS Compared to open-source baseline methods such as AutoGPT and autonomous agents such as AgentVerse and ChatDev, MetaGPT offers functions for software engineering tasks. As presented in Table 2, our framework encompasses a wide range of abilities to handle complex and specialized development tasks efficiently. Incorporating SOPs (e.g., role-play expertise, structured communication, streamlined workflow) can significantly improve code generation. Other baseline methods can easily integrate SOP-like designs to improve their performance, similar to injecting chain-of-thought (Wei et al., 2022) in LLMs. 4.4 ABLATION STUDY The Effectiveness of Roles To understand the impact of different roles on the final results, we perform two tasks that involve generating effective code and calculating average statistics. When we exclude certain roles, unworkable code is generated. As indicated by Table 3, the addition of roles beyond just the Engineer consistently improves both revisions and executability. While more roles slightly increase the expenses, the overall performance improves noticeably, demonstrating the effectiveness of the various roles. Table 2: Comparison of capabilities for MetaGPT and other approaches. '✓' indicates the presence of a specific feature in the corresponding framework, '✗' its absence.
2308.00352#31
MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework
Remarkable progress has been made on automated problem solving through societies of agents based on large language models (LLMs). Existing LLM-based multi-agent systems can already solve simple dialogue tasks. Solutions to more complex tasks, however, are complicated through logic inconsistencies due to cascading hallucinations caused by naively chaining LLMs. Here we introduce MetaGPT, an innovative meta-programming framework incorporating efficient human workflows into LLM-based multi-agent collaborations. MetaGPT encodes Standardized Operating Procedures (SOPs) into prompt sequences for more streamlined workflows, thus allowing agents with human-like domain expertise to verify intermediate results and reduce errors. MetaGPT utilizes an assembly line paradigm to assign diverse roles to various agents, efficiently breaking down complex tasks into subtasks involving many agents working together. On collaborative software engineering benchmarks, MetaGPT generates more coherent solutions than previous chat-based multi-agent systems. Our project can be found at https://github.com/geekan/MetaGPT
http://arxiv.org/pdf/2308.00352
Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, Jürgen Schmidhuber
cs.AI, cs.MA
null
null
cs.AI
20230801
20231106
[ { "id": "2308.12950" }, { "id": "2305.17066" }, { "id": "1511.09249" }, { "id": "2308.11432" }, { "id": "2306.08568" }, { "id": "2310.02304" }, { "id": "2303.08896" }, { "id": "2204.05999" }, { "id": "2309.16797" }, { "id": "2002.08155" }, { "id": "2305.16960" } ]
2308.00436
31
4.2 VERIFICATION PERFORMANCE Besides serving as a confidence score calculator to improve the performance of voting, SelfCheck can also predict the correctness of a single solution. To do so, we simply set a threshold t on the confidence score, where solutions with confidence scores w ≥ t are classified as correct. Figure 3 (panels: (a) GSM8K, (b) MathQA, (c) MATH∗): When raising the classification threshold t, the proportion of real correct solutions among predicted correct solutions (Real + in Pred +) increases for GSM8K (67.5%→76.5%), MathQA (59.4%→82.2%), and MATH (34.6%→50.8%). Figure 4 shows the ROC curves (TP rate vs. FP rate) for each dataset. As a comparison, directly prompting GPT-3.5 to verify whole reasoning chains leads to no meaningful control on the false and true positive rates (FP and TP): they are always both 100% on MATH and 98% on GSM8K, as observed by Ling et al. (2023). In other words, the checker always predicts the answer as correct, providing no useful information.
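A minimal sketch of the thresholding described above: classify a solution as correct when its confidence w is at least t, then compute the true- and false-positive rates that the ROC curves trace out as t varies. The scores and labels below are hypothetical.

```python
def rates_at_threshold(scores, labels, t):
    """Classify solutions with confidence >= t as correct; return (TP rate, FP rate)."""
    predicted = [s >= t for s in scores]
    tp = sum(p and y for p, y in zip(predicted, labels))
    fp = sum(p and not y for p, y in zip(predicted, labels))
    positives = sum(labels)
    negatives = len(labels) - positives
    tpr = tp / positives if positives else 0.0
    fpr = fp / negatives if negatives else 0.0
    return tpr, fpr

# Hypothetical confidence scores and ground-truth correctness labels for six solutions.
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.2]
labels = [True, True, False, True, False, False]
for t in (0.3, 0.5, 0.75):
    print(t, rates_at_threshold(scores, labels, t))  # higher t trades TP rate for fewer FPs
```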
2308.00436#31
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
The recent progress in large language models (LLMs), especially the invention of chain-of-thought prompting, has made it possible to automatically answer questions by stepwise reasoning. However, when faced with more complicated problems that require non-linear thinking, even the strongest LLMs make mistakes. To address this, we explore whether LLMs are able to recognize errors in their own step-by-step reasoning, without resorting to external resources. To this end, we propose SelfCheck, a general-purpose zero-shot verification schema for recognizing such errors. We then use the results of these checks to improve question-answering performance by conducting weighted voting on multiple solutions to the question. We test SelfCheck on three datasets (GSM8K, MathQA, and MATH) and find that it successfully recognizes errors and, in turn, increases final answer accuracies.
http://arxiv.org/pdf/2308.00436
Ning Miao, Yee Whye Teh, Tom Rainforth
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230801
20231005
[ { "id": "2206.02336" }, { "id": "2302.13971" }, { "id": "2110.14168" }, { "id": "2305.20050" }, { "id": "2204.02311" }, { "id": "2301.13379" }, { "id": "2306.03872" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.08774" }, { "id": "2212.09561" } ]
2308.00675
31
These results highlight two observations. First, the performance of the model is highly sensitive to the selection of few-shot demonstrations. The observation aligns with the finding in [12] that more few-shot demos might be redundant and even degrade performance due to spurious correlations. It emphasizes the importance of careful selection and design, which may involve more human effort. Second, the zero-shot documentation-reading baseline exhibits remarkable robustness and delivers competitive performance across both examples. This highlights the potential value and reliability of relying solely on the documentation, which is usually easy to obtain for many packages and tools. # 4.3 Plug-and-play with new image and video tools In this section, we validate that one can equip LLMs with unseen tools to solve novel tasks solely with tool docs, and without any further demos. We present our results on image editing and video tracking tasks. We show that LLMs can effectively re-invent existing human-programmed image editing and video tracking pipelines, backed by state-of-the-art vision models, to achieve impressive results. Recent advancements in vision models, including GroundingDINO [38], an advanced open-set object detector; Segment Anything (SAM) [30], a cutting-edge image segmentation tool; and XMem [14], a
2308.00675#31
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Today, large language models (LLMs) are taught to use new tools by providing a few demonstrations of the tool's usage. Unfortunately, demonstrations are hard to acquire, and can result in undesirable biased usage if the wrong demonstration is chosen. Even in the rare scenario that demonstrations are readily available, there is no principled selection protocol to determine how many and which ones to provide. As tasks grow more complex, the selection search grows combinatorially and invariably becomes intractable. Our work provides an alternative to demonstrations: tool documentation. We advocate the use of tool documentation, descriptions for the individual tool usage, over demonstrations. We substantiate our claim through three main empirical findings on 6 tasks across both vision and language modalities. First, on existing benchmarks, zero-shot prompts with only tool documentation are sufficient for eliciting proper tool usage, achieving performance on par with few-shot prompts. Second, on a newly collected realistic tool-use dataset with hundreds of available tool APIs, we show that tool documentation is significantly more valuable than demonstrations, with zero-shot documentation significantly outperforming few-shot without documentation. Third, we highlight the benefits of tool documentations by tackling image generation and video tracking using just-released unseen state-of-the-art models as tools. Finally, we highlight the possibility of using tool documentation to automatically enable new applications: by using nothing more than the documentation of GroundingDino, Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the just-released Grounded-SAM and Track Anything models.
http://arxiv.org/pdf/2308.00675
Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister
cs.CL, cs.AI, cs.CV, cs.LG
null
null
cs.CL
20230801
20230801
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2204.10878" }, { "id": "1910.08210" }, { "id": "2107.07653" }, { "id": "2201.11903" }, { "id": "2305.17126" }, { "id": "1704.07535" }, { "id": "2205.01068" }, { "id": "2203.05115" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2305.04091" }, { "id": "2303.05499" }, { "id": "2107.07566" }, { "id": "2110.14168" }, { "id": "2210.03350" }, { "id": "2303.11381" }, { "id": "2101.06804" }, { "id": "2304.08354" }, { "id": "2212.14024" }, { "id": "2305.18752" }, { "id": "2211.10435" }, { "id": "2303.04671" }, { "id": "2210.12810" }, { "id": "1808.09588" }, { "id": "2304.11062" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2210.02406" }, { "id": "2212.10560" }, { "id": "2303.04129" }, { "id": "1704.01696" }, { "id": "2302.00923" }, { "id": "2211.12588" }, { "id": "1908.03557" }, { "id": "2210.05359" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2201.08239" }, { "id": "2303.16199" }, { "id": "2304.09842" }, { "id": "2204.01691" }, { "id": "2305.01625" }, { "id": "2303.12712" }, { "id": "2207.05608" }, { "id": "2303.03846" }, { "id": "2211.11559" }, { "id": "2207.01206" }, { "id": "2303.09014" }, { "id": "2205.12255" }, { "id": "2208.03188" } ]
2308.01423
31
Figure 7. (a) Histogram depicting the initial structure and the generated structure for a question concerning the maximum value of surface area. (b) Histogram showing the initial structure and the generated structure for a query where the value of hydrogen uptake is set close to 500. (c) Illustration of the MOF with the largest surface area as generated by ChatMOF. ASA stands for accessible surface area. (d) Representation of the MOF with an H2 uptake value closest to 500 cm3/cm3 at 298 K, 1 bar, as generated by ChatMOF. # Collaborative Online Platforms One limiting factor of ChatMOF is the performance reliance on the number of pre-trained weights in the MOFTransformer used in the predictor task. An increased quantity of fine-tuned weights allows for the prediction of more properties, thereby enabling more active prediction and generation processes. However, each user faces constraints on the number of models that can be utilized, given that it is unrealistic for one individual to possess all the data. To train a model, the collection of experimental data or the execution of computational simulations is necessary. While some calculations, such as pore limiting diameter or surface area, demand less time, other tasks such as band-gap, HOMO, and LUMO calculations are considerably more computationally demanding. The generation and training of data for these complex tasks can be quite cumbersome.
2308.01423#31
ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks
ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to predict and generate metal-organic frameworks (MOFs). By leveraging a large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key details from textual inputs and delivers appropriate responses, thus eliminating the necessity for rigid structured queries. The system is comprised of three core components (i.e. an agent, a toolkit, and an evaluator) and it forms a robust pipeline that manages a variety of tasks, including data retrieval, property prediction, and structure generations. The study further explores the merits and constraints of using large language models (LLMs) AI system in material sciences using and showcases its transformative potential for future advancements.
http://arxiv.org/pdf/2308.01423
Yeonghun Kang, Jihan Kim
cs.CL, cs.AI, cs.LG, physics.chem-ph
null
null
cs.CL
20230801
20230825
[ { "id": "2302.13971" }, { "id": "2306.11296" }, { "id": "2303.17580" }, { "id": "2305.18365" }, { "id": "2305.10601" }, { "id": "1810.04805" }, { "id": "2211.09085" }, { "id": "2304.05376" }, { "id": "2212.05238" }, { "id": "2108.07258" }, { "id": "2110.06197" }, { "id": "2306.06283" }, { "id": "2008.02275" }, { "id": "2303.12712" }, { "id": "2210.03629" }, { "id": "2205.00445" }, { "id": "2009.03300" } ]
2308.00245
32
1 int func(int* a){
2     if(some_condi)
3         return -1;
4     *a = ... // init
5     return 0;
6 }
Figure 6: A sample case of initializer func; *a is may_init or must_init under different post-constraints.
2308.00245#32
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Static analysis is a widely used technique in software engineering for identifying and mitigating bugs. However, a significant hurdle lies in achieving a delicate balance between precision and scalability. Large Language Models (LLMs) offer a promising alternative, as recent advances demonstrate remarkable capabilities in comprehending, generating, and even debugging code. Yet, the logic of bugs can be complex and require sophisticated reasoning and a large analysis scope spanning multiple functions. Therefore, at this point, LLMs are better used in an assistive role to complement static analysis. In this paper, we take a deep dive into the open space of LLM-assisted static analysis, using use-before-initialization (UBI) bugs as a case study. To this end, we develop LLift, a fully automated framework that interfaces with both a static analysis tool and an LLM. By carefully designing the framework and the prompts, we are able to overcome a number of challenges, including bug-specific modeling, the large problem scope, the non-deterministic nature of LLMs, etc. Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs produced by static analysis, LLift demonstrates a potent capability, showcasing a reasonable precision (50%) and appearing to have no missing bugs. It even identified 13 previously unknown UBI bugs in the Linux kernel. This research paves the way for new opportunities and methodologies in using LLMs for bug discovery in extensive, real-world datasets.
http://arxiv.org/pdf/2308.00245
Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian
cs.SE, cs.AI
null
null
cs.SE
20230801
20231115
[ { "id": "2305.10601" }, { "id": "2107.03374" }, { "id": "2210.11610" }, { "id": "2305.16291" }, { "id": "2305.16151" }, { "id": "2303.18223" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "2203.02155" }, { "id": "2304.11938" }, { "id": "2304.03262" }, { "id": "2304.10513" }, { "id": "2201.11903" }, { "id": "2305.12138" }, { "id": "2305.12865" }, { "id": "2303.08774" }, { "id": "2306.01987" }, { "id": "2304.06815" } ]
2308.00352
32
Table 2: Comparison of capabilities for MetaGPT and other approaches. '✓' indicates the presence of a specific feature in the corresponding framework, '✗' its absence. Frameworks compared: AutoGPT, LangChain, AgentVerse, ChatDev, MetaGPT. Capabilities compared: PRD generation, Technical design generation, API interface generation, Code generation, Precompilation execution, Role-based task management, Code review. Table 3: Ablation study on roles. '#' denotes 'The number of', 'Product' denotes 'Product manager', and 'Project' denotes 'Project manager'. '✓' indicates the addition of a specific role. 'Revisions' refers to 'Human Revision Cost'. Columns: Engineer, Product, Architect, Project | #Agents, #Lines, Expense, Revisions, Executability
2308.00352#32
MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework
Remarkable progress has been made on automated problem solving through societies of agents based on large language models (LLMs). Existing LLM-based multi-agent systems can already solve simple dialogue tasks. Solutions to more complex tasks, however, are complicated through logic inconsistencies due to cascading hallucinations caused by naively chaining LLMs. Here we introduce MetaGPT, an innovative meta-programming framework incorporating efficient human workflows into LLM-based multi-agent collaborations. MetaGPT encodes Standardized Operating Procedures (SOPs) into prompt sequences for more streamlined workflows, thus allowing agents with human-like domain expertise to verify intermediate results and reduce errors. MetaGPT utilizes an assembly line paradigm to assign diverse roles to various agents, efficiently breaking down complex tasks into subtasks involving many agents working together. On collaborative software engineering benchmarks, MetaGPT generates more coherent solutions than previous chat-based multi-agent systems. Our project can be found at https://github.com/geekan/MetaGPT
http://arxiv.org/pdf/2308.00352
Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, Jürgen Schmidhuber
cs.AI, cs.MA
null
null
cs.AI
20230801
20231106
[ { "id": "2308.12950" }, { "id": "2305.17066" }, { "id": "1511.09249" }, { "id": "2308.11432" }, { "id": "2306.08568" }, { "id": "2310.02304" }, { "id": "2303.08896" }, { "id": "2204.05999" }, { "id": "2309.16797" }, { "id": "2002.08155" }, { "id": "2305.16960" } ]
2308.00436
32
As well as verification accuracies, we may also care about the solution quality after filtering out solutions with low confidence scores w. Figure 3 shows that by increasing the threshold t, SelfCheck can filter out more incorrect solutions, such that a higher proportion of the solutions that pass the check are indeed correct (Real + in Pred +). Though this is at the cost of misclassifying more of the real correct solutions as incorrect, this can be a useful feature in cases where the risk of choosing an incorrect solution is higher than rejecting a correct one. Figure 4: True positive rates (TP) vs. false positive rates (FP) as classification threshold, t, is varied. # 5 ANALYSIS We now perform some ablations to justify some of the key design choices made by SelfCheck and provide insights on its behavior. Limited by budget and time, all experiments in this section are performed on a subset of the MathQA test set with 100 randomly selected questions. [Plot: final answer accuracy against #Solutions per question for Majority Voting and SelfCheck; see Figure 5.] 5.1 MORE SOLUTIONS PER QUESTION?
2308.00436#32
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
The recent progress in large language models (LLMs), especially the invention of chain-of-thought prompting, has made it possible to automatically answer questions by stepwise reasoning. However, when faced with more complicated problems that require non-linear thinking, even the strongest LLMs make mistakes. To address this, we explore whether LLMs are able to recognize errors in their own step-by-step reasoning, without resorting to external resources. To this end, we propose SelfCheck, a general-purpose zero-shot verification schema for recognizing such errors. We then use the results of these checks to improve question-answering performance by conducting weighted voting on multiple solutions to the question. We test SelfCheck on three datasets (GSM8K, MathQA, and MATH) and find that it successfully recognizes errors and, in turn, increases final answer accuracies.
http://arxiv.org/pdf/2308.00436
Ning Miao, Yee Whye Teh, Tom Rainforth
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230801
20231005
[ { "id": "2206.02336" }, { "id": "2302.13971" }, { "id": "2110.14168" }, { "id": "2305.20050" }, { "id": "2204.02311" }, { "id": "2301.13379" }, { "id": "2306.03872" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.08774" }, { "id": "2212.09561" } ]
2308.00675
32
state-of-the-art video object segmentation tool, accompany the progress of language models. These breakthroughs, emerging in the past year, serve as additional tools that are yet unfamiliar to our LLM (gpt-3.5-turbo). By expanding VisProg to include these new tools, we embark on the intriguing exploration of whether LLMs can effortlessly comprehend the documentation associated with these new models, and combine these tools in a plug-and-play manner, enabling a wide range of applications.
2308.00675#32
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Today, large language models (LLMs) are taught to use new tools by providing a few demonstrations of the tool's usage. Unfortunately, demonstrations are hard to acquire, and can result in undesirable biased usage if the wrong demonstration is chosen. Even in the rare scenario that demonstrations are readily available, there is no principled selection protocol to determine how many and which ones to provide. As tasks grow more complex, the selection search grows combinatorially and invariably becomes intractable. Our work provides an alternative to demonstrations: tool documentation. We advocate the use of tool documentation, descriptions for the individual tool usage, over demonstrations. We substantiate our claim through three main empirical findings on 6 tasks across both vision and language modalities. First, on existing benchmarks, zero-shot prompts with only tool documentation are sufficient for eliciting proper tool usage, achieving performance on par with few-shot prompts. Second, on a newly collected realistic tool-use dataset with hundreds of available tool APIs, we show that tool documentation is significantly more valuable than demonstrations, with zero-shot documentation significantly outperforming few-shot without documentation. Third, we highlight the benefits of tool documentations by tackling image generation and video tracking using just-released unseen state-of-the-art models as tools. Finally, we highlight the possibility of using tool documentation to automatically enable new applications: by using nothing more than the documentation of GroundingDino, Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the just-released Grounded-SAM and Track Anything models.
http://arxiv.org/pdf/2308.00675
Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister
cs.CL, cs.AI, cs.CV, cs.LG
null
null
cs.CL
20230801
20230801
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2204.10878" }, { "id": "1910.08210" }, { "id": "2107.07653" }, { "id": "2201.11903" }, { "id": "2305.17126" }, { "id": "1704.07535" }, { "id": "2205.01068" }, { "id": "2203.05115" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2305.04091" }, { "id": "2303.05499" }, { "id": "2107.07566" }, { "id": "2110.14168" }, { "id": "2210.03350" }, { "id": "2303.11381" }, { "id": "2101.06804" }, { "id": "2304.08354" }, { "id": "2212.14024" }, { "id": "2305.18752" }, { "id": "2211.10435" }, { "id": "2303.04671" }, { "id": "2210.12810" }, { "id": "1808.09588" }, { "id": "2304.11062" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2210.02406" }, { "id": "2212.10560" }, { "id": "2303.04129" }, { "id": "1704.01696" }, { "id": "2302.00923" }, { "id": "2211.12588" }, { "id": "1908.03557" }, { "id": "2210.05359" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2201.08239" }, { "id": "2303.16199" }, { "id": "2304.09842" }, { "id": "2204.01691" }, { "id": "2305.01625" }, { "id": "2303.12712" }, { "id": "2207.05608" }, { "id": "2303.03846" }, { "id": "2211.11559" }, { "id": "2207.01206" }, { "id": "2303.09014" }, { "id": "2205.12255" }, { "id": "2208.03188" } ]
2308.01423
32
To address these issues, there is a need to create an online platform akin to HuggingFace, where users can freely post their learned weights. An example of this model would be HuggingGPT10, which functions by selecting the most appropriate model among those posted on HuggingFace. Should users upload their trained models built on data they have uploaded onto this platform, it will enable other users to access them. Upon the posting of new weights online, ChatMOF will review them and if the required data is available online, the model will be downloaded automatically. The existence of this online platform will reinforce ChatMOF as a potent toolkit for predicting MOF properties. Moreover, pre-calculated data, such as those from multiple mining, can also be employed for table searches. If data sharing is executed effectively, superior results can be achieved collectively. # Conclusion
2308.01423#32
ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks
ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to predict and generate metal-organic frameworks (MOFs). By leveraging a large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key details from textual inputs and delivers appropriate responses, thus eliminating the necessity for rigid structured queries. The system is comprised of three core components (i.e. an agent, a toolkit, and an evaluator) and it forms a robust pipeline that manages a variety of tasks, including data retrieval, property prediction, and structure generations. The study further explores the merits and constraints of using large language models (LLMs) AI system in material sciences using and showcases its transformative potential for future advancements.
http://arxiv.org/pdf/2308.01423
Yeonghun Kang, Jihan Kim
cs.CL, cs.AI, cs.LG, physics.chem-ph
null
null
cs.CL
20230801
20230825
[ { "id": "2302.13971" }, { "id": "2306.11296" }, { "id": "2303.17580" }, { "id": "2305.18365" }, { "id": "2305.10601" }, { "id": "1810.04805" }, { "id": "2211.09085" }, { "id": "2304.05376" }, { "id": "2212.05238" }, { "id": "2108.07258" }, { "id": "2110.06197" }, { "id": "2306.06283" }, { "id": "2008.02275" }, { "id": "2303.12712" }, { "id": "2210.03629" }, { "id": "2205.00445" }, { "id": "2009.03300" } ]
2308.00245
33
Figure 6: A sample case of initializer func; *a is may_init or must_init under different post-constraints. Note that this seemingly simple interaction with LLMs can be challenging for static analysis or symbolic execution. Consider the sscanf() example: even if the analysis is aware that the qualified postcondition should be limited to those where ret = 4, it would still need to enumerate the paths inside of sscanf(), which involves loops and can easily lead to timeouts as explained in §2.1. 4.3.3 Apply Path Analysis. Following §3.2, Figure 6 presents a concrete example of post-constraint guided path analysis. This case shows a simple initializer of the variable a. Given an early return, the initialization in line 4 may not be executed. As such, the qualified postconditions become contingent on the post-constraints
2308.00245#33
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Static analysis is a widely used technique in software engineering for identifying and mitigating bugs. However, a significant hurdle lies in achieving a delicate balance between precision and scalability. Large Language Models (LLMs) offer a promising alternative, as recent advances demonstrate remarkable capabilities in comprehending, generating, and even debugging code. Yet, the logic of bugs can be complex and require sophisticated reasoning and a large analysis scope spanning multiple functions. Therefore, at this point, LLMs are better used in an assistive role to complement static analysis. In this paper, we take a deep dive into the open space of LLM-assisted static analysis, using use-before-initialization (UBI) bugs as a case study. To this end, we develop LLift, a fully automated framework that interfaces with both a static analysis tool and an LLM. By carefully designing the framework and the prompts, we are able to overcome a number of challenges, including bug-specific modeling, the large problem scope, the non-deterministic nature of LLMs, etc. Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs produced by static analysis, LLift demonstrates a potent capability, showcasing a reasonable precision (50%) and appearing to have no missing bugs. It even identified 13 previously unknown UBI bugs in the Linux kernel. This research paves the way for new opportunities and methodologies in using LLMs for bug discovery in extensive, real-world datasets.
http://arxiv.org/pdf/2308.00245
Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian
cs.SE, cs.AI
null
null
cs.SE
20230801
20231115
[ { "id": "2305.10601" }, { "id": "2107.03374" }, { "id": "2210.11610" }, { "id": "2305.16291" }, { "id": "2305.16151" }, { "id": "2303.18223" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "2203.02155" }, { "id": "2304.11938" }, { "id": "2304.03262" }, { "id": "2304.10513" }, { "id": "2201.11903" }, { "id": "2305.12138" }, { "id": "2305.12865" }, { "id": "2303.08774" }, { "id": "2306.01987" }, { "id": "2304.06815" } ]
2308.00352
33
The Effectiveness of Executable Feedback Mechanism As shown in Figure 4, adding executable feedback into MetaGPT leads to a significant improvement of 4.2% and 5.4% in Pass@1 on HumanEval and MBPP, respectively. Besides, Table 1 shows that the feedback mechanism improves feasibility (3.67 to 3.75) and reduces the cost of human revisions (2.25 to 0.83). These results illustrate how our designed feedback mechanism can produce higher-quality code. Additional quantitative results of MetaGPT and MetaGPT without executable feedback are shown in Table 4 and Table 6. # 5 CONCLUSION
2308.00352#33
MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework
Remarkable progress has been made on automated problem solving through societies of agents based on large language models (LLMs). Existing LLM-based multi-agent systems can already solve simple dialogue tasks. Solutions to more complex tasks, however, are complicated through logic inconsistencies due to cascading hallucinations caused by naively chaining LLMs. Here we introduce MetaGPT, an innovative meta-programming framework incorporating efficient human workflows into LLM-based multi-agent collaborations. MetaGPT encodes Standardized Operating Procedures (SOPs) into prompt sequences for more streamlined workflows, thus allowing agents with human-like domain expertise to verify intermediate results and reduce errors. MetaGPT utilizes an assembly line paradigm to assign diverse roles to various agents, efficiently breaking down complex tasks into subtasks involving many agents working together. On collaborative software engineering benchmarks, MetaGPT generates more coherent solutions than previous chat-based multi-agent systems. Our project can be found at https://github.com/geekan/MetaGPT
http://arxiv.org/pdf/2308.00352
Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, Jürgen Schmidhuber
cs.AI, cs.MA
null
null
cs.AI
20230801
20231106
[ { "id": "2308.12950" }, { "id": "2305.17066" }, { "id": "1511.09249" }, { "id": "2308.11432" }, { "id": "2306.08568" }, { "id": "2310.02304" }, { "id": "2303.08896" }, { "id": "2204.05999" }, { "id": "2309.16797" }, { "id": "2002.08155" }, { "id": "2305.16960" } ]
2308.00436
33
5.1 MORE SOLUTIONS PER QUESTION? Serving as a method to reduce variance, majority voting increased final answer accuracies on different datasets when we increased from 2 to 10 solutions in Figure 2. In cases where we only care about final predictive performance, one might thus question whether it is better to simply use our computational resources to keep increasing the size of this ensemble, rather than relying on a checking scheme. Figure 5: SelfCheck achieves significantly higher final answer accuracies than majority voting for large ensembles of solutions. However, as shown in Figure 5, this effect saturates for larger solution ensembles, with the accuracy of majority voting never going above that achieved when n = 9, thereby never reaching the performance we already achieved by SelfCheck for the smaller ensemble. Moreover, the performance of SelfCheck continues to increase as the ensemble grows. By lowering the weights (confidence) of incorrect solutions, SelfCheck increases the chance of selecting the correct answers, even when their generation probabilities in the generator LLM are low. Therefore, with SelfCheck, LLMs can effectively rectify their own biased beliefs by themselves. 5.2 ABLATION STUDIES In order to pick apart the effect of several critical design choices for SelfCheck, we compare SelfCheck with some of its variants with respect to final answer and verification accuracies on MathQA.
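The confidence-weighted voting referenced here can be summarized with a minimal sketch (not the paper's exact weighting scheme; answer strings and checker confidences are assumed to be already extracted from the sampled solutions):

```python
from collections import defaultdict

def weighted_vote(answers, confidences):
    """Sum per-solution confidence weights and return the highest-scoring answer.

    Majority voting is the special case where every confidence equals 1.
    """
    totals = defaultdict(float)
    for ans, w in zip(answers, confidences):
        totals[ans] += w
    return max(totals, key=totals.get)

# Three low-confidence solutions answer "12", two high-confidence ones answer "15":
answers = ["12", "12", "12", "15", "15"]
confidences = [0.20, 0.30, 0.25, 0.90, 0.85]
print(weighted_vote(answers, confidences))            # "15" (weighted vote)
print(weighted_vote(answers, [1.0] * len(answers)))   # "12" (plain majority vote)
```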
2308.00436#33
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
The recent progress in large language models (LLMs), especially the invention of chain-of-thought prompting, has made it possible to automatically answer questions by stepwise reasoning. However, when faced with more complicated problems that require non-linear thinking, even the strongest LLMs make mistakes. To address this, we explore whether LLMs are able to recognize errors in their own step-by-step reasoning, without resorting to external resources. To this end, we propose SelfCheck, a general-purpose zero-shot verification schema for recognizing such errors. We then use the results of these checks to improve question-answering performance by conducting weighted voting on multiple solutions to the question. We test SelfCheck on three datasets (GSM8K, MathQA, and MATH) and find that it successfully recognizes errors and, in turn, increases final answer accuracies.
http://arxiv.org/pdf/2308.00436
Ning Miao, Yee Whye Teh, Tom Rainforth
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230801
20231005
[ { "id": "2206.02336" }, { "id": "2302.13971" }, { "id": "2110.14168" }, { "id": "2305.20050" }, { "id": "2204.02311" }, { "id": "2301.13379" }, { "id": "2306.03872" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.08774" }, { "id": "2212.09561" } ]
2308.00675
33
In Figure 6, when performing an image editing request “replace the bench with a blue sofa”, the LLM generates a VisProg program that harnesses the power of GroundingDINO and SAM from the expanded tool set to segment the bench, and applies Stable Diffusion [54] to synthesize the sofa. This program re-invents the wheel by replicating the behavior of the recent popular project, Grounded-SAM [23], without prior knowledge of this repository. Similarly, when tasked with video tracking “track the cat in the video”, the VisProg program generated by the LLM incorporates GroundingDINO together with SAM for first-frame segmentation as the initialization for XMem to do video tracking. It again re-invents the results obtained in the contemporary work, Track Anything [70]. We note that TaskMatrix [69] also has an updated approach with Grounded-SAM. However, they pre-program the entire Grounded-SAM editing pipeline as an image editing function, allowing the LLM to control it rather than enabling the LLM to generate the editing program using the building tools alone as we present here.
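The two programs described above share one composition pattern: detect, segment, then either inpaint (editing) or propagate (tracking). The sketch below is an illustrative outline only; the detect, segment, inpaint, and propagate callables are placeholders that would be backed by GroundingDINO, SAM, Stable Diffusion inpainting, and XMem respectively, and are not actual APIs of those libraries.

```python
from typing import Any, Callable, Sequence

def edit_image(image: Any, target: str, replacement: str,
               detect: Callable, segment: Callable, inpaint: Callable) -> Any:
    """'Replace the bench with a blue sofa' style editing: locate, mask, synthesize."""
    boxes = detect(image, prompt=target)                  # open-set detection of the target
    masks = segment(image, boxes)                         # box-prompted segmentation
    return inpaint(image, masks, prompt=replacement)      # text-guided inpainting

def track_object(frames: Sequence[Any], target: str,
                 detect: Callable, segment: Callable, propagate: Callable) -> Any:
    """'Track the cat in the video' style tracking: segment frame 0, then propagate."""
    boxes = detect(frames[0], prompt=target)
    init_masks = segment(frames[0], boxes)                # first-frame initialization
    return propagate(frames, init_masks)                  # mask propagation across the video
```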
2308.00675#33
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Today, large language models (LLMs) are taught to use new tools by providing a few demonstrations of the tool's usage. Unfortunately, demonstrations are hard to acquire, and can result in undesirable biased usage if the wrong demonstration is chosen. Even in the rare scenario that demonstrations are readily available, there is no principled selection protocol to determine how many and which ones to provide. As tasks grow more complex, the selection search grows combinatorially and invariably becomes intractable. Our work provides an alternative to demonstrations: tool documentation. We advocate the use of tool documentation, descriptions for the individual tool usage, over demonstrations. We substantiate our claim through three main empirical findings on 6 tasks across both vision and language modalities. First, on existing benchmarks, zero-shot prompts with only tool documentation are sufficient for eliciting proper tool usage, achieving performance on par with few-shot prompts. Second, on a newly collected realistic tool-use dataset with hundreds of available tool APIs, we show that tool documentation is significantly more valuable than demonstrations, with zero-shot documentation significantly outperforming few-shot without documentation. Third, we highlight the benefits of tool documentations by tackling image generation and video tracking using just-released unseen state-of-the-art models as tools. Finally, we highlight the possibility of using tool documentation to automatically enable new applications: by using nothing more than the documentation of GroundingDino, Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the just-released Grounded-SAM and Track Anything models.
http://arxiv.org/pdf/2308.00675
Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister
cs.CL, cs.AI, cs.CV, cs.LG
null
null
cs.CL
20230801
20230801
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2204.10878" }, { "id": "1910.08210" }, { "id": "2107.07653" }, { "id": "2201.11903" }, { "id": "2305.17126" }, { "id": "1704.07535" }, { "id": "2205.01068" }, { "id": "2203.05115" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2305.04091" }, { "id": "2303.05499" }, { "id": "2107.07566" }, { "id": "2110.14168" }, { "id": "2210.03350" }, { "id": "2303.11381" }, { "id": "2101.06804" }, { "id": "2304.08354" }, { "id": "2212.14024" }, { "id": "2305.18752" }, { "id": "2211.10435" }, { "id": "2303.04671" }, { "id": "2210.12810" }, { "id": "1808.09588" }, { "id": "2304.11062" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2210.02406" }, { "id": "2212.10560" }, { "id": "2303.04129" }, { "id": "1704.01696" }, { "id": "2302.00923" }, { "id": "2211.12588" }, { "id": "1908.03557" }, { "id": "2210.05359" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2201.08239" }, { "id": "2303.16199" }, { "id": "2304.09842" }, { "id": "2204.01691" }, { "id": "2305.01625" }, { "id": "2303.12712" }, { "id": "2207.05608" }, { "id": "2303.03846" }, { "id": "2211.11559" }, { "id": "2207.01206" }, { "id": "2303.09014" }, { "id": "2205.12255" }, { "id": "2208.03188" } ]
2308.01423
33
# Conclusion The investigation into the role of generative AI in materials science, specifically through the lens of ChatMOF, unveils substantial potential for predicting and generating MOFs. This unique system, which bridges the transformative capabilities of AI and the intricate facets of materials science, demonstrates exceptional performance across various tasks. The accuracy analysis reports high success rates, notably 96.9% and 95.7% for the search and prediction tasks, respectively. Meanwhile, the more complex structure generation task, despite its intricacy, yields a notable accuracy rate of 87.5%. These promising results underline the efficacy of ChatMOF, even when confronted with the most demanding tasks. Despite certain limitations, such as dependence on the number of pre-trained weights, ChatMOF symbolizes a significant stride towards fully autonomous AI in the realm of materials science. As the technology evolves, and with a systematic enhancement of the model's capacity and data sharing across an online platform, ChatMOF's performance could be further optimized, paving the way for unprecedented advancements in MOF research. # Method
2308.01423#33
ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks
ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to predict and generate metal-organic frameworks (MOFs). By leveraging a large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key details from textual inputs and delivers appropriate responses, thus eliminating the necessity for rigid structured queries. The system is comprised of three core components (i.e. an agent, a toolkit, and an evaluator) and it forms a robust pipeline that manages a variety of tasks, including data retrieval, property prediction, and structure generations. The study further explores the merits and constraints of using large language models (LLMs) AI system in material sciences using and showcases its transformative potential for future advancements.
http://arxiv.org/pdf/2308.01423
Yeonghun Kang, Jihan Kim
cs.CL, cs.AI, cs.LG, physics.chem-ph
null
null
cs.CL
20230801
20230825
[ { "id": "2302.13971" }, { "id": "2306.11296" }, { "id": "2303.17580" }, { "id": "2305.18365" }, { "id": "2305.10601" }, { "id": "1810.04805" }, { "id": "2211.09085" }, { "id": "2304.05376" }, { "id": "2212.05238" }, { "id": "2108.07258" }, { "id": "2110.06197" }, { "id": "2306.06283" }, { "id": "2008.02275" }, { "id": "2303.12712" }, { "id": "2210.03629" }, { "id": "2205.00445" }, { "id": "2009.03300" } ]
2308.00245
34
C_post. There are: • If the use of variable a is unconditional, i.e., C_post = ⊤. In this case, the variable a is labeled as may_init given that the initialization may not be reached. In general, if all path constraints and outcomes of must_init are disjoint from C_post, no path can be pruned out. We could also conclude a as may_init. • If the use of variable a is conditional with constraints, i.e., C_post ≠ ⊤
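As a toy illustration of this case analysis (not LLift's mechanism, which delegates the reasoning to the LLM over real source code), the pruning of paths by a post-constraint can be mimicked with simple fact maps:

```python
def compatible(path_facts: dict, post_constraint: dict) -> bool:
    """A path survives unless it fixes some variable to a value that
    contradicts the post-constraint."""
    return all(path_facts.get(k, v) == v for k, v in post_constraint.items())

def classify(paths, post_constraint) -> str:
    """Return 'must_init' only if every surviving path initializes the variable."""
    surviving = [p for p in paths if compatible(p["facts"], post_constraint)]
    if surviving and all(p["initializes"] for p in surviving):
        return "must_init"
    return "may_init"

# The two paths of func from Figure 6: early return vs. initializing return.
paths = [
    {"facts": {"some_condi": True, "ret": -1}, "initializes": False},
    {"facts": {"some_condi": False, "ret": 0}, "initializes": True},
]
print(classify(paths, {}))                      # unconditional use -> may_init
print(classify(paths, {"ret": 0}))              # use guarded by func(...) == 0 -> must_init
print(classify(paths, {"some_condi": False}))   # use guarded by !some_condi -> must_init
```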
2308.00245#34
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Static analysis is a widely used technique in software engineering for identifying and mitigating bugs. However, a significant hurdle lies in achieving a delicate balance between precision and scalability. Large Language Models (LLMs) offer a promising alternative, as recent advances demonstrate remarkable capabilities in comprehending, generating, and even debugging code. Yet, the logic of bugs can be complex and require sophisticated reasoning and a large analysis scope spanning multiple functions. Therefore, at this point, LLMs are better used in an assistive role to complement static analysis. In this paper, we take a deep dive into the open space of LLM-assisted static analysis, using use-before-initialization (UBI) bugs as a case study. To this end, we develop LLift, a fully automated framework that interfaces with both a static analysis tool and an LLM. By carefully designing the framework and the prompts, we are able to overcome a number of challenges, including bug-specific modeling, the large problem scope, the non-deterministic nature of LLMs, etc. Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs produced by static analysis, LLift demonstrates a potent capability, showcasing a reasonable precision (50%) and appearing to have no missing bugs. It even identified 13 previously unknown UBI bugs in the Linux kernel. This research paves the way for new opportunities and methodologies in using LLMs for bug discovery in extensive, real-world datasets.
http://arxiv.org/pdf/2308.00245
Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian
cs.SE, cs.AI
null
null
cs.SE
20230801
20231115
[ { "id": "2305.10601" }, { "id": "2107.03374" }, { "id": "2210.11610" }, { "id": "2305.16291" }, { "id": "2305.16151" }, { "id": "2303.18223" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "2203.02155" }, { "id": "2304.11938" }, { "id": "2304.03262" }, { "id": "2304.10513" }, { "id": "2201.11903" }, { "id": "2305.12138" }, { "id": "2305.12865" }, { "id": "2303.08774" }, { "id": "2306.01987" }, { "id": "2304.06815" } ]
2308.00352
34
# 5 CONCLUSION This work introduces MetaGPT, a novel meta-programming framework that leverages SOPs to enhance the problem-solving capabilities of multi-agent systems based on Large Language Models (LLMs). MetaGPT models a group of agents as a simulated software company, analogous to simulated towns (Park et al., 2023) and the Minecraft Sandbox in Voyager (Wang et al., 2023a). MetaGPT leverages role specialization, workflow management, and efficient sharing mechanisms such as message pools and subscriptions, rendering it a flexible and portable platform for autonomous agents and multi-agent frameworks. It uses an executable feedback mechanism to enhance code generation quality during runtime. In extensive experiments, MetaGPT achieves state-of-the-art performance on multiple benchmarks. The successful integration of human-like SOPs inspires future research on human-inspired techniques for artificial multi-agent systems. We also view our work as an early attempt to regulate LLM-based multi-agent frameworks. See also the outlook (Appendix A). # Acknowledgement
2308.00352#34
MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework
Remarkable progress has been made on automated problem solving through societies of agents based on large language models (LLMs). Existing LLM-based multi-agent systems can already solve simple dialogue tasks. Solutions to more complex tasks, however, are complicated through logic inconsistencies due to cascading hallucinations caused by naively chaining LLMs. Here we introduce MetaGPT, an innovative meta-programming framework incorporating efficient human workflows into LLM-based multi-agent collaborations. MetaGPT encodes Standardized Operating Procedures (SOPs) into prompt sequences for more streamlined workflows, thus allowing agents with human-like domain expertise to verify intermediate results and reduce errors. MetaGPT utilizes an assembly line paradigm to assign diverse roles to various agents, efficiently breaking down complex tasks into subtasks involving many agents working together. On collaborative software engineering benchmarks, MetaGPT generates more coherent solutions than previous chat-based multi-agent systems. Our project can be found at https://github.com/geekan/MetaGPT
http://arxiv.org/pdf/2308.00352
Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, Jürgen Schmidhuber
cs.AI, cs.MA
null
null
cs.AI
20230801
20231106
[ { "id": "2308.12950" }, { "id": "2305.17066" }, { "id": "1511.09249" }, { "id": "2308.11432" }, { "id": "2306.08568" }, { "id": "2310.02304" }, { "id": "2303.08896" }, { "id": "2204.05999" }, { "id": "2309.16797" }, { "id": "2002.08155" }, { "id": "2305.16960" } ]
2308.00436
34
In order to pick apart the effect of several critical design choices for SelfCheck, we compare SelfCheck with some of its variants with respect to final answer and verification accuracies on MathQA. Global vs. step-by-step checking The first question is whether we can simply ask an LLM to check the whole solution without taking steps into consideration. To answer it, we prompt the LLM to perform global checking with the following instruction: The following is a question and a solution to it from a student. Carefully check whether the solution is correct step by step. End your response with your conclusion that starts with "Correct", "Wrong" or "Not Sure". Question: [Question] Solution: [Step 0, Step 1,..., Step n] Similar to the findings of Ling et al. (2023), we find that the global checker outputs "correct" most of the time and rarely recognizes an error. Consequently, its final answer accuracies are very close to majority voting (in Figure 6) and its verification accuracy (55.0%) is only marginally above random guess (50.0%). This lack of ability to deal with the difficulty of global checking is what makes step checking necessary. Single-stage vs. multiple-stage step checking Next, we ask whether we really need to decompose the step checking into several stages. To answer this, we design the following prompt to use the LLM directly.
2308.00436#34
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
The recent progress in large language models (LLMs), especially the invention of chain-of-thought prompting, has made it possible to automatically answer questions by stepwise reasoning. However, when faced with more complicated problems that require non-linear thinking, even the strongest LLMs make mistakes. To address this, we explore whether LLMs are able to recognize errors in their own step-by-step reasoning, without resorting to external resources. To this end, we propose SelfCheck, a general-purpose zero-shot verification schema for recognizing such errors. We then use the results of these checks to improve question-answering performance by conducting weighted voting on multiple solutions to the question. We test SelfCheck on three datasets (GSM8K, MathQA, and MATH) and find that it successfully recognizes errors and, in turn, increases final answer accuracies.
http://arxiv.org/pdf/2308.00436
Ning Miao, Yee Whye Teh, Tom Rainforth
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230801
20231005
[ { "id": "2206.02336" }, { "id": "2302.13971" }, { "id": "2110.14168" }, { "id": "2305.20050" }, { "id": "2204.02311" }, { "id": "2301.13379" }, { "id": "2306.03872" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.08774" }, { "id": "2212.09561" } ]
2308.00675
34
By successfully re-inventing the functionalities of Grounded-SAM and Track Anything without prior knowledge, solely relying on the available building blocks, the LLM demonstrates not only its capacity to effortlessly comprehend and combine new tools with documentation only but also highlights its potential for automatic knowledge discovery. It discovers new insights through leveraging its existing knowledge only without further demonstration. # 4.4 Performance v.s. documentation quality We investigates the impact of documentation quality on performance. To assess LLM’s capability to comprehend realistic documentation, we refrain from engineering or curating the content of the documentation. Instead, we vary the document length by truncating the documents and keeping the first n words, using it as a proxy for assessing thoroughness and quality. In this ablation, we consider the LLM-Cloud benchmark, which has long documentation based on real-world GCP CLI manuals. We illustrate the result in Figure 7. 0.45 —— gpt-3.5-turbo (doc) 0.40 —s— text-davinci-002 (doc) 0.35 gpt-3.5-turbo (best 15 shots) So30f Ne text-davinci-002 (best 15 shots) a 0.25 o20f 0.15 5 sa 200° 300 400 500 600 700 800 Documentation Length Figure 7: Performance of zero-shot documentation LLM when varying the input document length.
2308.00675#34
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Today, large language models (LLMs) are taught to use new tools by providing a few demonstrations of the tool's usage. Unfortunately, demonstrations are hard to acquire, and can result in undesirable biased usage if the wrong demonstration is chosen. Even in the rare scenario that demonstrations are readily available, there is no principled selection protocol to determine how many and which ones to provide. As tasks grow more complex, the selection search grows combinatorially and invariably becomes intractable. Our work provides an alternative to demonstrations: tool documentation. We advocate the use of tool documentation, descriptions for the individual tool usage, over demonstrations. We substantiate our claim through three main empirical findings on 6 tasks across both vision and language modalities. First, on existing benchmarks, zero-shot prompts with only tool documentation are sufficient for eliciting proper tool usage, achieving performance on par with few-shot prompts. Second, on a newly collected realistic tool-use dataset with hundreds of available tool APIs, we show that tool documentation is significantly more valuable than demonstrations, with zero-shot documentation significantly outperforming few-shot without documentation. Third, we highlight the benefits of tool documentations by tackling image generation and video tracking using just-released unseen state-of-the-art models as tools. Finally, we highlight the possibility of using tool documentation to automatically enable new applications: by using nothing more than the documentation of GroundingDino, Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the just-released Grounded-SAM and Track Anything models.
http://arxiv.org/pdf/2308.00675
Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister
cs.CL, cs.AI, cs.CV, cs.LG
null
null
cs.CL
20230801
20230801
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2204.10878" }, { "id": "1910.08210" }, { "id": "2107.07653" }, { "id": "2201.11903" }, { "id": "2305.17126" }, { "id": "1704.07535" }, { "id": "2205.01068" }, { "id": "2203.05115" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2305.04091" }, { "id": "2303.05499" }, { "id": "2107.07566" }, { "id": "2110.14168" }, { "id": "2210.03350" }, { "id": "2303.11381" }, { "id": "2101.06804" }, { "id": "2304.08354" }, { "id": "2212.14024" }, { "id": "2305.18752" }, { "id": "2211.10435" }, { "id": "2303.04671" }, { "id": "2210.12810" }, { "id": "1808.09588" }, { "id": "2304.11062" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2210.02406" }, { "id": "2212.10560" }, { "id": "2303.04129" }, { "id": "1704.01696" }, { "id": "2302.00923" }, { "id": "2211.12588" }, { "id": "1908.03557" }, { "id": "2210.05359" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2201.08239" }, { "id": "2303.16199" }, { "id": "2304.09842" }, { "id": "2204.01691" }, { "id": "2305.01625" }, { "id": "2303.12712" }, { "id": "2207.05608" }, { "id": "2303.03846" }, { "id": "2211.11559" }, { "id": "2207.01206" }, { "id": "2303.09014" }, { "id": "2205.12255" }, { "id": "2208.03188" } ]
2308.01423
34
# Method ChatMOF operates via the LangChain57 library. LangChain serves as a platform for accessing diverse modules within a Large Language Model (LLM), streamlining prompt engineering in an LLM environment. ChatMOF integrates various toolkits from LangChain alongside its distinct toolkit. For the roles of agent, evaluator, and toolkit within ChatMOF, OpenAI's ChatModel, GPT-4, and GPT-3.5-turbo LLMs are employed. During the experiments, the temperature parameter was calibrated to 0.1. The searcher component of ChatMOF adopts the CoreMOF structure, enriched by geometric features derived through ZEO++59. In instances of code discrepancies, corrections are made up to a threshold of three attempts. The predictor module within ChatMOF leans on MOFTransformer, trained on insights from four academic articles. Notably, MOFTransformer operates under version 2.1.2.
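A minimal configuration sketch in the spirit of this description is shown below; the imports assume a pre-0.1 LangChain-style API, and the tool bodies and names are placeholders, so this is an assumed setup rather than ChatMOF's actual code.

```python
# Assumed pre-0.1 LangChain-style setup; tool functions are placeholders only.
from langchain.chat_models import ChatOpenAI
from langchain.agents import AgentType, Tool, initialize_agent

llm = ChatOpenAI(model_name="gpt-4", temperature=0.1)  # temperature as reported above

tools = [
    Tool(name="search_csv",
         func=lambda q: "placeholder: table lookup over pre-computed MOF properties",
         description="Search pre-computed geometric properties of MOFs."),
    Tool(name="predictor",
         func=lambda q: "placeholder: MOFTransformer property prediction",
         description="Predict a MOF property with a fine-tuned MOFTransformer model."),
]

agent = initialize_agent(tools, llm,
                         agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
                         verbose=True)
# agent.run("Predict the hydrogen uptake of ...")
```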
2308.01423#34
ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks
ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to predict and generate metal-organic frameworks (MOFs). By leveraging a large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key details from textual inputs and delivers appropriate responses, thus eliminating the necessity for rigid structured queries. The system is comprised of three core components (i.e. an agent, a toolkit, and an evaluator) and it forms a robust pipeline that manages a variety of tasks, including data retrieval, property prediction, and structure generations. The study further explores the merits and constraints of using large language models (LLMs) AI system in material sciences using and showcases its transformative potential for future advancements.
http://arxiv.org/pdf/2308.01423
Yeonghun Kang, Jihan Kim
cs.CL, cs.AI, cs.LG, physics.chem-ph
null
null
cs.CL
20230801
20230825
[ { "id": "2302.13971" }, { "id": "2306.11296" }, { "id": "2303.17580" }, { "id": "2305.18365" }, { "id": "2305.10601" }, { "id": "1810.04805" }, { "id": "2211.09085" }, { "id": "2304.05376" }, { "id": "2212.05238" }, { "id": "2108.07258" }, { "id": "2110.06197" }, { "id": "2306.06283" }, { "id": "2008.02275" }, { "id": "2303.12712" }, { "id": "2210.03629" }, { "id": "2205.00445" }, { "id": "2009.03300" } ]
2308.00245
35
two cases emerge: (1) C_post clashes with the constraints of the path (e.g., some_condi), or (2) C_post could be ¬some_condi or func(...) == 0. In these instances, we can designate *a as must_init. 4.4 Design #2: Progressive Prompt The Linux kernel has an extremely large codebase. Summarizing an initializer using LLMs without providing any supplementary function definitions can result in incomplete or erroneous responses. On the other hand, flooding the LLM with every relevant function definition upfront risks exceeding their context window limitations. To address this dilemma, we choose to progressively provide function definitions as needed. Illustrated in Figure 5, this approach, which we refer to as Progressive Prompt, fosters a dynamic interaction with the LLM rather than expecting a response in one shot. Throughout this iterative exchange, we consistently prompt the LLM: “If you encounter uncertainty due to a lack of function definitions, please signal your need, and I’ll supply them”. Should the LLM need more information, LLift will promptly extract the relevant details on demand from the source code and provide it to the LLM automatically, enabling it to reassess and generate a more accurate response.
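The progressive exchange described here boils down to a request-and-supply loop. Below is a minimal illustrative sketch (not LLift's implementation; the NEED_DEF marker, the ask_llm callable, and the function_defs lookup table are assumptions made for illustration):

```python
import re
from typing import Callable, Dict, List

def progressive_prompt(ask_llm: Callable[[List[dict]], str],
                       initial_prompt: str,
                       function_defs: Dict[str, str],
                       max_rounds: int = 5) -> str:
    """Iteratively supply requested function definitions until the model answers."""
    hint = ("If you encounter uncertainty due to a lack of function definitions, "
            "reply with NEED_DEF(<function_name>) and I'll supply them.")
    messages = [{"role": "user", "content": initial_prompt + "\n" + hint}]
    reply = ""
    for _ in range(max_rounds):
        reply = ask_llm(messages)                        # any chat-completion backend
        requested = re.findall(r"NEED_DEF\((\w+)\)", reply)
        if not requested:                                # no more definitions needed
            return reply
        supplied = "\n\n".join(function_defs.get(name, f"/* {name}: not found */")
                               for name in requested)
        messages += [{"role": "assistant", "content": reply},
                     {"role": "user", "content": supplied}]
    return reply                                         # give up after max_rounds
```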
2308.00245#35
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Static analysis is a widely used technique in software engineering for identifying and mitigating bugs. However, a significant hurdle lies in achieving a delicate balance between precision and scalability. Large Language Models (LLMs) offer a promising alternative, as recent advances demonstrate remarkable capabilities in comprehending, generating, and even debugging code. Yet, the logic of bugs can be complex and require sophisticated reasoning and a large analysis scope spanning multiple functions. Therefore, at this point, LLMs are better used in an assistive role to complement static analysis. In this paper, we take a deep dive into the open space of LLM-assisted static analysis, using use-before-initialization (UBI) bugs as a case study. To this end, we develop LLift, a fully automated framework that interfaces with both a static analysis tool and an LLM. By carefully designing the framework and the prompts, we are able to overcome a number of challenges, including bug-specific modeling, the large problem scope, the non-deterministic nature of LLMs, etc. Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs produced by static analysis, LLift demonstrates a potent capability, showcasing a reasonable precision (50%) and appearing to have no missing bugs. It even identified 13 previously unknown UBI bugs in the Linux kernel. This research paves the way for new opportunities and methodologies in using LLMs for bug discovery in extensive, real-world datasets.
http://arxiv.org/pdf/2308.00245
Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian
cs.SE, cs.AI
null
null
cs.SE
20230801
20231115
[ { "id": "2305.10601" }, { "id": "2107.03374" }, { "id": "2210.11610" }, { "id": "2305.16291" }, { "id": "2305.16151" }, { "id": "2303.18223" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "2203.02155" }, { "id": "2304.11938" }, { "id": "2304.03262" }, { "id": "2304.10513" }, { "id": "2201.11903" }, { "id": "2305.12138" }, { "id": "2305.12865" }, { "id": "2303.08774" }, { "id": "2306.01987" }, { "id": "2304.06815" } ]
2308.00352
35
# Acknowledgement We thank Sarah Salhi, the Executive Secretary of KAUST AI Initiative, and Yuhui Wang, Postdoctoral Fellow at the KAUST AI Initiative, for helping to polish some of the text. We would like to express our gratitude to Wenyi Wang, a PhD student at the KAUST AI Initiative, for providing comprehensive feedback on the paper and for helping to draft the outlook (Appendix A) with Mingchen. We also thank Zongze Xu, the vice president of DeepWisdom, for providing illustrative materials for AgentStore. # Author Contributions
2308.00352#35
MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework
Remarkable progress has been made on automated problem solving through societies of agents based on large language models (LLMs). Existing LLM-based multi-agent systems can already solve simple dialogue tasks. Solutions to more complex tasks, however, are complicated through logic inconsistencies due to cascading hallucinations caused by naively chaining LLMs. Here we introduce MetaGPT, an innovative meta-programming framework incorporating efficient human workflows into LLM-based multi-agent collaborations. MetaGPT encodes Standardized Operating Procedures (SOPs) into prompt sequences for more streamlined workflows, thus allowing agents with human-like domain expertise to verify intermediate results and reduce errors. MetaGPT utilizes an assembly line paradigm to assign diverse roles to various agents, efficiently breaking down complex tasks into subtasks involving many agents working together. On collaborative software engineering benchmarks, MetaGPT generates more coherent solutions than previous chat-based multi-agent systems. Our project can be found at https://github.com/geekan/MetaGPT
http://arxiv.org/pdf/2308.00352
Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, Jürgen Schmidhuber
cs.AI, cs.MA
null
null
cs.AI
20230801
20231106
[ { "id": "2308.12950" }, { "id": "2305.17066" }, { "id": "1511.09249" }, { "id": "2308.11432" }, { "id": "2306.08568" }, { "id": "2310.02304" }, { "id": "2303.08896" }, { "id": "2204.05999" }, { "id": "2309.16797" }, { "id": "2002.08155" }, { "id": "2305.16960" } ]
2308.00436
35
Figure 6: Generation accuracies for variants of SelfCheck on MathQA with GPT-3.5 (x-axis: #Solutions per question). The following is a question and the first few steps in its solution. Question: [Question] Solution: [Step 0, Step 1,..., Step i-1] Check the correctness of the next step: [Step i] Please consider the information it relies on and check step by step. Please end your response with your conclusion that starts with "Correct", "Wrong" or "Not Sure". Figure 6 and Table 2 show that although this is better than global checking, it is still significantly worse than SelfCheck with its multi-stage checking. This indicates that checking a step in a single stage is still too challenging for the LLM, so it is necessary to further decompose step checking into a pipeline of easier sub-tasks. Table 2: Verification accuracies for variants of SelfCheck on MathQA with GPT-3.5. The reported verification accuracy is the average of true positive and true negative rates.
2308.00436#35
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
The recent progress in large language models (LLMs), especially the invention of chain-of-thought prompting, has made it possible to automatically answer questions by stepwise reasoning. However, when faced with more complicated problems that require non-linear thinking, even the strongest LLMs make mistakes. To address this, we explore whether LLMs are able to recognize errors in their own step-by-step reasoning, without resorting to external resources. To this end, we propose SelfCheck, a general-purpose zero-shot verification schema for recognizing such errors. We then use the results of these checks to improve question-answering performance by conducting weighted voting on multiple solutions to the question. We test SelfCheck on three datasets (GSM8K, MathQA, and MATH) and find that it successfully recognizes errors and, in turn, increases final answer accuracies.
http://arxiv.org/pdf/2308.00436
Ning Miao, Yee Whye Teh, Tom Rainforth
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230801
20231005
[ { "id": "2206.02336" }, { "id": "2302.13971" }, { "id": "2110.14168" }, { "id": "2305.20050" }, { "id": "2204.02311" }, { "id": "2301.13379" }, { "id": "2306.03872" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.08774" }, { "id": "2212.09561" } ]
2308.00675
35
Figure 7: Performance of zero-shot documentation LLM when varying the input document length. In both text-davinci-002 and gpt-3.5-turbo experiments, we consistently observe a trend where performance improves as the document length increases, up to a length of 600. This finding aligns with our hypothesis that the models possess the ability to comprehend and leverage documentation effectively. Remarkably, this improvement in performance is achieved without any additional training, fine-tuning, or document curation. It highlights the tremendous value of providing comprehensive documentation, as it empowers the models to leverage a wide range of command-line tools at scale, solely through the process of reading and understanding the documentation. We note a degradation in performance after the document length exceeds 600 words. We attribute this decline to the inherent challenges associated with comprehending lengthy documents in language models [61]. However, we foresee that ongoing advancements in handling long inputs in language models will gradually address this limitation [10, 5, 2]. We leave exploring solutions for overcoming this limitation for future research. # 5 Conclusion
2308.00675#35
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Today, large language models (LLMs) are taught to use new tools by providing a few demonstrations of the tool's usage. Unfortunately, demonstrations are hard to acquire, and can result in undesirable biased usage if the wrong demonstration is chosen. Even in the rare scenario that demonstrations are readily available, there is no principled selection protocol to determine how many and which ones to provide. As tasks grow more complex, the selection search grows combinatorially and invariably becomes intractable. Our work provides an alternative to demonstrations: tool documentation. We advocate the use of tool documentation, descriptions for the individual tool usage, over demonstrations. We substantiate our claim through three main empirical findings on 6 tasks across both vision and language modalities. First, on existing benchmarks, zero-shot prompts with only tool documentation are sufficient for eliciting proper tool usage, achieving performance on par with few-shot prompts. Second, on a newly collected realistic tool-use dataset with hundreds of available tool APIs, we show that tool documentation is significantly more valuable than demonstrations, with zero-shot documentation significantly outperforming few-shot without documentation. Third, we highlight the benefits of tool documentations by tackling image generation and video tracking using just-released unseen state-of-the-art models as tools. Finally, we highlight the possibility of using tool documentation to automatically enable new applications: by using nothing more than the documentation of GroundingDino, Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the just-released Grounded-SAM and Track Anything models.
http://arxiv.org/pdf/2308.00675
Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister
cs.CL, cs.AI, cs.CV, cs.LG
null
null
cs.CL
20230801
20230801
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2204.10878" }, { "id": "1910.08210" }, { "id": "2107.07653" }, { "id": "2201.11903" }, { "id": "2305.17126" }, { "id": "1704.07535" }, { "id": "2205.01068" }, { "id": "2203.05115" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2305.04091" }, { "id": "2303.05499" }, { "id": "2107.07566" }, { "id": "2110.14168" }, { "id": "2210.03350" }, { "id": "2303.11381" }, { "id": "2101.06804" }, { "id": "2304.08354" }, { "id": "2212.14024" }, { "id": "2305.18752" }, { "id": "2211.10435" }, { "id": "2303.04671" }, { "id": "2210.12810" }, { "id": "1808.09588" }, { "id": "2304.11062" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2210.02406" }, { "id": "2212.10560" }, { "id": "2303.04129" }, { "id": "1704.01696" }, { "id": "2302.00923" }, { "id": "2211.12588" }, { "id": "1908.03557" }, { "id": "2210.05359" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2201.08239" }, { "id": "2303.16199" }, { "id": "2304.09842" }, { "id": "2204.01691" }, { "id": "2305.01625" }, { "id": "2303.12712" }, { "id": "2207.05608" }, { "id": "2303.03846" }, { "id": "2211.11559" }, { "id": "2207.01206" }, { "id": "2303.09014" }, { "id": "2205.12255" }, { "id": "2208.03188" } ]
2308.01423
35
The generative aspect of ChatMOF is structured around three iterative cycles. This generator employs a genetic algorithm across nine unique topologies, namely pcu, dia, acs, rtl, cds, srs, ths, bcu, and fsc. For every topology, a batch of 100 offspring genes arises from a set of 100 parental genes, chosen from a foundational group of 2000 MOFs. Structures are then formulated based on these newly minted genes, followed by value computation via the predictor. This cycle refines the pool of parental genes, and after the designated cycles, an optimized target structure is procured from the cumulative data. # Conflicts of interest There are no conflicts to declare. # Author Contributions Y.K developed ChatMOF and wrote the manuscript with J.K. The manuscript was written through the contributions of all authors. All authors have given approval for the final version of the manuscript. # Code availability The ChatMOF library is available at https://github.com/Yeonghun1675/ChatMOF.git. # Acknowledgements
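To make the generation loop described at the start of this chunk concrete, below is a minimal sketch of a genetic-algorithm cycle over the nine topologies. `predict_property`, `build_structure`, `crossover`, and `mutate` are hypothetical placeholders rather than functions from the ChatMOF codebase, and whether higher or lower scores are preferred depends on the target property.

```python
import random

# Illustrative sketch of the generation loop: 100 parents, 100 offspring
# per cycle, repeated for three cycles per topology. All helper callables
# are hypothetical placeholders, not ChatMOF code.

TOPOLOGIES = ["pcu", "dia", "acs", "rtl", "cds", "srs", "ths", "bcu", "fsc"]

def optimize(seed_genes, predict_property, build_structure, crossover, mutate,
             n_parents=100, n_offspring=100, n_cycles=3):
    results = []
    for topology in TOPOLOGIES:
        parents = random.sample(seed_genes, n_parents)
        scored = []
        for _ in range(n_cycles):
            # Breed a new batch of offspring genes from the current parents.
            offspring = [mutate(crossover(*random.sample(parents, 2)))
                         for _ in range(n_offspring)]
            # Build structures for the genes and score them with the predictor.
            scored = [(predict_property(build_structure(gene, topology)), gene)
                      for gene in parents + offspring]
            scored.sort(key=lambda pair: pair[0], reverse=True)
            # The best-scoring genes become the next generation's parents.
            parents = [gene for _, gene in scored[:n_parents]]
        results.extend(scored[:n_parents])
    # Return (score, gene) pairs across all topologies, best first.
    return sorted(results, key=lambda pair: pair[0], reverse=True)
```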
2308.01423#35
ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks
ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to predict and generate metal-organic frameworks (MOFs). By leveraging a large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key details from textual inputs and delivers appropriate responses, thus eliminating the necessity for rigid structured queries. The system is comprised of three core components (i.e. an agent, a toolkit, and an evaluator) and it forms a robust pipeline that manages a variety of tasks, including data retrieval, property prediction, and structure generations. The study further explores the merits and constraints of using large language models (LLMs) AI system in material sciences using and showcases its transformative potential for future advancements.
http://arxiv.org/pdf/2308.01423
Yeonghun Kang, Jihan Kim
cs.CL, cs.AI, cs.LG, physics.chem-ph
null
null
cs.CL
20230801
20230825
[ { "id": "2302.13971" }, { "id": "2306.11296" }, { "id": "2303.17580" }, { "id": "2305.18365" }, { "id": "2305.10601" }, { "id": "1810.04805" }, { "id": "2211.09085" }, { "id": "2304.05376" }, { "id": "2212.05238" }, { "id": "2108.07258" }, { "id": "2110.06197" }, { "id": "2306.06283" }, { "id": "2008.02275" }, { "id": "2303.12712" }, { "id": "2210.03629" }, { "id": "2205.00445" }, { "id": "2009.03300" } ]
2308.00245
36
Specifically, we teach the LLM to ask for more information with a specific format: [{"type":"function_def", "name":"some_func" }] Subsequently, LLift scans this format in the LLM’s response. For each requested function definition, LLift supplies its corresponding code along with comments extracted from the Linux source code. Though GPT-4 may seek other types of information beyond function definitions (e.g., struct definitions), we currently limit our support to requests pertaining to function definitions. The iterative process continues until either the LLM no longer requests additional information, or LLift cannot supply the requested details. In certain situations where LLift is unable to provide more information (e.g., the definition of an indirect call), LLift will still prompt the LLM to proceed with the analysis. In these instances, the LLM is encouraged to infer the behavior based on the available data and its inherent knowledge, thereby facilitating continued analysis even when not all information is directly accessible. 4.5 Design #3: Task Decomposition We systematically apply the principle of task decomposition, a vital element of our design process. This concept is incorporated primarily in two distinct ways.
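A rough sketch of how the information-request loop described earlier in this chunk could be implemented; the regular expression, `ask_llm`, and `lookup_function_source` are illustrative assumptions, not LLift's actual code.

```python
import json
import re

# Sketch of the request-handling loop. The prompt text and the `ask_llm` /
# `lookup_function_source` helpers are hypothetical stand-ins.

REQUEST_RE = re.compile(r"\[\s*\{.*?\}\s*\]", re.DOTALL)

def extract_function_requests(response):
    """Return names of functions the LLM asked for via the agreed format."""
    names = []
    for match in REQUEST_RE.findall(response):
        try:
            items = json.loads(match)
        except json.JSONDecodeError:
            continue  # not a well-formed request block; ignore it
        names += [it["name"] for it in items
                  if isinstance(it, dict) and it.get("type") == "function_def"]
    return names

def analysis_loop(ask_llm, lookup_function_source, first_prompt, max_rounds=5):
    response = ask_llm(first_prompt)
    for _ in range(max_rounds):
        requested = extract_function_requests(response)
        if not requested:
            break  # the LLM no longer asks for more information
        definitions = {name: lookup_function_source(name) for name in requested}
        # Supply whatever was found; for unresolvable names (e.g. indirect
        # calls) we still ask the model to proceed with best-effort inference.
        followup = "\n\n".join(
            f"Definition of {name}:\n{code or 'Not available, please proceed.'}"
            for name, code in definitions.items())
        response = ask_llm(followup)
    return response
```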
2308.00245#36
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Static analysis is a widely used technique in software engineering for identifying and mitigating bugs. However, a significant hurdle lies in achieving a delicate balance between precision and scalability. Large Language Models (LLMs) offer a promising alternative, as recent advances demonstrate remarkable capabilities in comprehending, generating, and even debugging code. Yet, the logic of bugs can be complex and require sophisticated reasoning and a large analysis scope spanning multiple functions. Therefore, at this point, LLMs are better used in an assistive role to complement static analysis. In this paper, we take a deep dive into the open space of LLM-assisted static analysis, using use-before-initialization (UBI) bugs as a case study. To this end, we develop LLift, a fully automated framework that interfaces with both a static analysis tool and an LLM. By carefully designing the framework and the prompts, we are able to overcome a number of challenges, including bug-specific modeling, the large problem scope, the non-deterministic nature of LLMs, etc. Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs produced by static analysis, LLift demonstrates a potent capability, showcasing a reasonable precision (50%) and appearing to have no missing bugs. It even identified 13 previously unknown UBI bugs in the Linux kernel. This research paves the way for new opportunities and methodologies in using LLMs for bug discovery in extensive, real-world datasets.
http://arxiv.org/pdf/2308.00245
Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian
cs.SE, cs.AI
null
null
cs.SE
20230801
20231115
[ { "id": "2305.10601" }, { "id": "2107.03374" }, { "id": "2210.11610" }, { "id": "2305.16291" }, { "id": "2305.16151" }, { "id": "2303.18223" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "2203.02155" }, { "id": "2304.11938" }, { "id": "2304.03262" }, { "id": "2304.10513" }, { "id": "2201.11903" }, { "id": "2305.12138" }, { "id": "2305.12865" }, { "id": "2303.08774" }, { "id": "2306.01987" }, { "id": "2304.06815" } ]
2308.00352
36
# Author Contributions Sirui Hong conducted most of the experiments and designed the executable feedback module. She also led the initial version of the write-up, supported by Ceyao Zhang, and also by Jinlin Wang and Zili Wang. Mingchen Zhuge designed the self-improvement module, discussed additional experiments, and led the current write-up. Jonathan Chen helped with the MBPP experiments, outlined the methods section, and contributed to the current write-up. Xiawu Zheng provided valuable guidance, reviewed and edited the paper. Yuheng Cheng contributed to the evaluation metric design and HumanEval experiments. Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Lingfeng Xiao helped with the MBPP experiments and comparisons to open-source baseline methods. Chenyu Ran created most of the illustrative figures. Chenglin Wu is the CEO of DeepWisdom, initiated MetaGPT, made the most significant code contributions to it, and advised this project. Jürgen Schmidhuber, Director of the AI Initiative at KAUST and Scientific Director of IDSIA, advised this project and helped with the write-up. # REFERENCES
2308.00352#36
MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework
Remarkable progress has been made on automated problem solving through societies of agents based on large language models (LLMs). Existing LLM-based multi-agent systems can already solve simple dialogue tasks. Solutions to more complex tasks, however, are complicated through logic inconsistencies due to cascading hallucinations caused by naively chaining LLMs. Here we introduce MetaGPT, an innovative meta-programming framework incorporating efficient human workflows into LLM-based multi-agent collaborations. MetaGPT encodes Standardized Operating Procedures (SOPs) into prompt sequences for more streamlined workflows, thus allowing agents with human-like domain expertise to verify intermediate results and reduce errors. MetaGPT utilizes an assembly line paradigm to assign diverse roles to various agents, efficiently breaking down complex tasks into subtasks involving many agents working together. On collaborative software engineering benchmarks, MetaGPT generates more coherent solutions than previous chat-based multi-agent systems. Our project can be found at https://github.com/geekan/MetaGPT
http://arxiv.org/pdf/2308.00352
Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, Jürgen Schmidhuber
cs.AI, cs.MA
null
null
cs.AI
20230801
20231106
[ { "id": "2308.12950" }, { "id": "2305.17066" }, { "id": "1511.09249" }, { "id": "2308.11432" }, { "id": "2306.08568" }, { "id": "2310.02304" }, { "id": "2303.08896" }, { "id": "2204.05999" }, { "id": "2309.16797" }, { "id": "2002.08155" }, { "id": "2305.16960" } ]
2308.00436
36
Error check vs. regenerate and compare We now justify the choice to perform step regeneration and comparison instead of direct error checking for each step. To do so, we replace our regeneration stage and comparison stage with a single error-checking stage. We first compare with a zero-shot version of the variant with the following prompt: Given the following information: Information 0: [Information I0] Step 0: [Step S0] Step 1: [Step S1] Check the correctness of the next step [Step i] Please check for grounding errors, reasoning errors and calculation errors step by step. Please end your response with your conclusion that starts with "Correct", "Wrong" or "Not Sure". ... We then add an exemplar from Ling et al. (2023) (see Appendix B) to make a more powerful one-shot error checker. However, results in Figure 6 and Table 2 show that even with a very detailed and instructive example, direct error checking still performs worse than our regenerate and compare approach, which supports our previous argument that LLMs are better at generation than checking. Table 2 (Method: Accuracy %): SelfCheck: 66.7; Global Check: 55.0; Single stage Check: 57.2; Error Check (0-shot): 63.1; Error Check (1-shot): 64.2. # 6 CONCLUSIONS
2308.00436#36
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
The recent progress in large language models (LLMs), especially the invention of chain-of-thought prompting, has made it possible to automatically answer questions by stepwise reasoning. However, when faced with more complicated problems that require non-linear thinking, even the strongest LLMs make mistakes. To address this, we explore whether LLMs are able to recognize errors in their own step-by-step reasoning, without resorting to external resources. To this end, we propose SelfCheck, a general-purpose zero-shot verification schema for recognizing such errors. We then use the results of these checks to improve question-answering performance by conducting weighted voting on multiple solutions to the question. We test SelfCheck on three datasets (GSM8K, MathQA, and MATH) and find that it successfully recognizes errors and, in turn, increases final answer accuracies.
http://arxiv.org/pdf/2308.00436
Ning Miao, Yee Whye Teh, Tom Rainforth
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230801
20231005
[ { "id": "2206.02336" }, { "id": "2302.13971" }, { "id": "2110.14168" }, { "id": "2305.20050" }, { "id": "2204.02311" }, { "id": "2301.13379" }, { "id": "2306.03872" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.08774" }, { "id": "2212.09561" } ]
2308.00675
36
# 5 Conclusion In this paper, we examined the effectiveness of tool docs in enabling zero-shot tool usage with LLMs. We first showed that LLMs can achieve on par or better performance than their few-shot counterparts when provided with tool docs. We then scaled up to a significantly larger tool set on a newly collected API through docs only. By simply plugging in new tools along with their docs, LLMs are able to tackle unseen tasks in image editing and video tracking without further demos and replicate the functionalities of recent popular projects, suggesting a potential for automatic knowledge discovery. Overall, we shed light on a new perspective of tool usage with LLMs by focusing on their internal planning and reasoning capabilities with docs, rather than explicitly guiding their behaviors with demos. # References
2308.00675#36
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Today, large language models (LLMs) are taught to use new tools by providing a few demonstrations of the tool's usage. Unfortunately, demonstrations are hard to acquire, and can result in undesirable biased usage if the wrong demonstration is chosen. Even in the rare scenario that demonstrations are readily available, there is no principled selection protocol to determine how many and which ones to provide. As tasks grow more complex, the selection search grows combinatorially and invariably becomes intractable. Our work provides an alternative to demonstrations: tool documentation. We advocate the use of tool documentation, descriptions for the individual tool usage, over demonstrations. We substantiate our claim through three main empirical findings on 6 tasks across both vision and language modalities. First, on existing benchmarks, zero-shot prompts with only tool documentation are sufficient for eliciting proper tool usage, achieving performance on par with few-shot prompts. Second, on a newly collected realistic tool-use dataset with hundreds of available tool APIs, we show that tool documentation is significantly more valuable than demonstrations, with zero-shot documentation significantly outperforming few-shot without documentation. Third, we highlight the benefits of tool documentations by tackling image generation and video tracking using just-released unseen state-of-the-art models as tools. Finally, we highlight the possibility of using tool documentation to automatically enable new applications: by using nothing more than the documentation of GroundingDino, Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the just-released Grounded-SAM and Track Anything models.
http://arxiv.org/pdf/2308.00675
Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister
cs.CL, cs.AI, cs.CV, cs.LG
null
null
cs.CL
20230801
20230801
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2204.10878" }, { "id": "1910.08210" }, { "id": "2107.07653" }, { "id": "2201.11903" }, { "id": "2305.17126" }, { "id": "1704.07535" }, { "id": "2205.01068" }, { "id": "2203.05115" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2305.04091" }, { "id": "2303.05499" }, { "id": "2107.07566" }, { "id": "2110.14168" }, { "id": "2210.03350" }, { "id": "2303.11381" }, { "id": "2101.06804" }, { "id": "2304.08354" }, { "id": "2212.14024" }, { "id": "2305.18752" }, { "id": "2211.10435" }, { "id": "2303.04671" }, { "id": "2210.12810" }, { "id": "1808.09588" }, { "id": "2304.11062" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2210.02406" }, { "id": "2212.10560" }, { "id": "2303.04129" }, { "id": "1704.01696" }, { "id": "2302.00923" }, { "id": "2211.12588" }, { "id": "1908.03557" }, { "id": "2210.05359" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2201.08239" }, { "id": "2303.16199" }, { "id": "2304.09842" }, { "id": "2204.01691" }, { "id": "2305.01625" }, { "id": "2303.12712" }, { "id": "2207.05608" }, { "id": "2303.03846" }, { "id": "2211.11559" }, { "id": "2207.01206" }, { "id": "2303.09014" }, { "id": "2205.12255" }, { "id": "2208.03188" } ]
2308.01423
36
# Code availability The ChatMOF library is available at https://github.com/Yeonghun1675/ChatMOF.git. # Acknowledgements Y. K., and J. K. acknowledge funding from the National Research Foundation of Korea (NRF) under Project Number 2021M3A7C208974513 and 2021R1A2C2003583. This work was supported by the National Supercomputing Center with supercomputing resources including technical support (KSC-2022-CRE-0515). # References
2308.01423#36
ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks
ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to predict and generate metal-organic frameworks (MOFs). By leveraging a large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key details from textual inputs and delivers appropriate responses, thus eliminating the necessity for rigid structured queries. The system is comprised of three core components (i.e. an agent, a toolkit, and an evaluator) and it forms a robust pipeline that manages a variety of tasks, including data retrieval, property prediction, and structure generations. The study further explores the merits and constraints of using large language models (LLMs) AI system in material sciences using and showcases its transformative potential for future advancements.
http://arxiv.org/pdf/2308.01423
Yeonghun Kang, Jihan Kim
cs.CL, cs.AI, cs.LG, physics.chem-ph
null
null
cs.CL
20230801
20230825
[ { "id": "2302.13971" }, { "id": "2306.11296" }, { "id": "2303.17580" }, { "id": "2305.18365" }, { "id": "2305.10601" }, { "id": "1810.04805" }, { "id": "2211.09085" }, { "id": "2304.05376" }, { "id": "2212.05238" }, { "id": "2108.07258" }, { "id": "2110.06197" }, { "id": "2306.06283" }, { "id": "2008.02275" }, { "id": "2303.12712" }, { "id": "2210.03629" }, { "id": "2205.00445" }, { "id": "2009.03300" } ]
2308.00245
37
4.5 Design #3: Task Decomposition We systematically apply the principle of task decomposition, a vital element of our design process. This concept is incorporated primarily in two distinct ways. Multistage Problem Solving. As illustrated in Figure 5, we employ a two-conversation approach to complete the task. Each conversation essentially consists of multiple iterations of prompts and responses. The first conversation (Convo.1) is dedicated to extracting the initializer and its associated post-constraints (subtasks 1 and 2), while the second conversation (Convo.2) focuses on summarizing the function (subtask 3) based on the previously identified post-constraints. This division allows a more manageable and effective way of achieving the task, compared to combining all three subtasks into a single conversation. The efficacy of this task decomposition approach is further evaluated in §6.5.
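A minimal sketch of the two-conversation decomposition, under the assumption of a hypothetical `new_conversation` helper that opens a fresh LLM chat and returns a callable; the prompt wording is illustrative, not LLift's actual prompts.

```python
# Rough sketch of the two-conversation decomposition described above.
# `new_conversation` and the prompt strings are illustrative assumptions.

def analyze_case(caller_code, callee_name, new_conversation):
    # Conversation 1: identify the initializer and its post-constraints
    # (subtasks 1 and 2).
    convo1 = new_conversation()
    extraction = convo1(
        "Given the following caller code, identify the initializer of the "
        f"suspicious variable and its post-constraints:\n{caller_code}")

    # Conversation 2: summarize the initializer's behavior (subtask 3),
    # conditioned only on the post-constraints found in conversation 1.
    convo2 = new_conversation()
    summary = convo2(
        f"Summarize the behavior of {callee_name} under the following "
        "post-constraints, focusing on whether it must initialize the "
        f"variable:\n{extraction}")
    return extraction, summary
```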
2308.00245#37
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Static analysis is a widely used technique in software engineering for identifying and mitigating bugs. However, a significant hurdle lies in achieving a delicate balance between precision and scalability. Large Language Models (LLMs) offer a promising alternative, as recent advances demonstrate remarkable capabilities in comprehending, generating, and even debugging code. Yet, the logic of bugs can be complex and require sophisticated reasoning and a large analysis scope spanning multiple functions. Therefore, at this point, LLMs are better used in an assistive role to complement static analysis. In this paper, we take a deep dive into the open space of LLM-assisted static analysis, using use-before-initialization (UBI) bugs as a case study. To this end, we develop LLift, a fully automated framework that interfaces with both a static analysis tool and an LLM. By carefully designing the framework and the prompts, we are able to overcome a number of challenges, including bug-specific modeling, the large problem scope, the non-deterministic nature of LLMs, etc. Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs produced by static analysis, LLift demonstrates a potent capability, showcasing a reasonable precision (50%) and appearing to have no missing bugs. It even identified 13 previously unknown UBI bugs in the Linux kernel. This research paves the way for new opportunities and methodologies in using LLMs for bug discovery in extensive, real-world datasets.
http://arxiv.org/pdf/2308.00245
Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian
cs.SE, cs.AI
null
null
cs.SE
20230801
20231115
[ { "id": "2305.10601" }, { "id": "2107.03374" }, { "id": "2210.11610" }, { "id": "2305.16291" }, { "id": "2305.16151" }, { "id": "2303.18223" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "2203.02155" }, { "id": "2304.11938" }, { "id": "2304.03262" }, { "id": "2304.10513" }, { "id": "2201.11903" }, { "id": "2305.12138" }, { "id": "2305.12865" }, { "id": "2303.08774" }, { "id": "2306.01987" }, { "id": "2304.06815" } ]
2308.00352
37
# REFERENCES Elif Akata, Lion Schulz, Julian Coda-Forno, Seong Joon Oh, Matthias Bethge, and Eric Schulz. Playing repeated games with large language models. arXiv preprint, 2023. Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, and Charles Sutton. Program synthesis with large language models, 2021. Anton Bakhtin, Noam Brown, Emily Dinan, Gabriele Farina, Colin Flaherty, Daniel Fried, Andrew Goff, Jonathan Gray, Hengyuan Hu, et al. Human-level play in the game of diplomacy by combining language models with strategic reasoning. Science, 378(6624):1067–1074, 2022. Robert Balzer. A 15 year perspective on automatic programming. IEEE Transactions on Software Engineering, 11(11):1257–1268, 1985. R.M. Belbin. Team Roles at Work. Routledge, 2012. URL https://books.google.co.uk/books?id=MHIQBAAAQBAJ. Tianle Cai, Xuezhi Wang, Tengyu Ma, Xinyun Chen, and Denny Zhou. Large language models as tool makers. arXiv preprint, 2023.
2308.00352#37
MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework
Remarkable progress has been made on automated problem solving through societies of agents based on large language models (LLMs). Existing LLM-based multi-agent systems can already solve simple dialogue tasks. Solutions to more complex tasks, however, are complicated through logic inconsistencies due to cascading hallucinations caused by naively chaining LLMs. Here we introduce MetaGPT, an innovative meta-programming framework incorporating efficient human workflows into LLM-based multi-agent collaborations. MetaGPT encodes Standardized Operating Procedures (SOPs) into prompt sequences for more streamlined workflows, thus allowing agents with human-like domain expertise to verify intermediate results and reduce errors. MetaGPT utilizes an assembly line paradigm to assign diverse roles to various agents, efficiently breaking down complex tasks into subtasks involving many agents working together. On collaborative software engineering benchmarks, MetaGPT generates more coherent solutions than previous chat-based multi-agent systems. Our project can be found at https://github.com/geekan/MetaGPT
http://arxiv.org/pdf/2308.00352
Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, Jürgen Schmidhuber
cs.AI, cs.MA
null
null
cs.AI
20230801
20231106
[ { "id": "2308.12950" }, { "id": "2305.17066" }, { "id": "1511.09249" }, { "id": "2308.11432" }, { "id": "2306.08568" }, { "id": "2310.02304" }, { "id": "2303.08896" }, { "id": "2204.05999" }, { "id": "2309.16797" }, { "id": "2002.08155" }, { "id": "2305.16960" } ]
2308.00436
37
# 6 CONCLUSIONS In this paper, we have introduced SelfCheck, a general-purpose, zero-shot, step-by-step checking scheme for LLMs. Unlike previous approaches, SelfCheck does not require any additional data or external resources: it uses the LLM to identify errors in its own reasoning, leveraging a novel regenerate-and-compare approach. By using the results of this checking to perform weighted voting over different solutions, we find that SelfCheck is able to, in turn, increase final predictive accuracy. # REFERENCES Ebtesam Almazrouei, Hamza Alobeidli, Abdulaziz Alshamsi, Alessandro Cappelli, Ruxandra Cojocaru, Merouane Debbah, Etienne Goffinet, Daniel Heslow, Julien Launay, Quentin Malartic, Badreddine Noune, Baptiste Pannier, and Guilherme Penedo. Falcon-40B: an open large language model with state-of-the-art performance. 2023.
2308.00436#37
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
The recent progress in large language models (LLMs), especially the invention of chain-of-thought prompting, has made it possible to automatically answer questions by stepwise reasoning. However, when faced with more complicated problems that require non-linear thinking, even the strongest LLMs make mistakes. To address this, we explore whether LLMs are able to recognize errors in their own step-by-step reasoning, without resorting to external resources. To this end, we propose SelfCheck, a general-purpose zero-shot verification schema for recognizing such errors. We then use the results of these checks to improve question-answering performance by conducting weighted voting on multiple solutions to the question. We test SelfCheck on three datasets (GSM8K, MathQA, and MATH) and find that it successfully recognizes errors and, in turn, increases final answer accuracies.
http://arxiv.org/pdf/2308.00436
Ning Miao, Yee Whye Teh, Tom Rainforth
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230801
20231005
[ { "id": "2206.02336" }, { "id": "2302.13971" }, { "id": "2110.14168" }, { "id": "2305.20050" }, { "id": "2204.02311" }, { "id": "2301.13379" }, { "id": "2306.03872" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.08774" }, { "id": "2212.09561" } ]
2308.00675
37
# References [1] Michael Ahn, Anthony Brohan, Noah Brown, Yevgen Chebotar, Omar Cortes, Byron David, Chelsea Finn, Chuyuan Fu, Keerthana Gopalakrishnan, Karol Hausman, Alex Herzog, Daniel Ho, Jasmine Hsu, Julian Ibarz, Brian Ichter, Alex Irpan, Eric Jang, Rosario Jauregui Ruano, Kyle Jeffrey, Sally Jesmonth, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Kuang-Huei Lee, Sergey Levine, Yao Lu, Linda Luu, Carolina Parada, Peter Pastor, Jornell Quiambao, Kanishka Rao, Jarek Rettinghouse, Diego Reyes, Pierre Sermanet, Nicolas Sievers, Clayton Tan, Alexander Toshev, Vincent Vanhoucke, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Mengyuan Yan, and Andy Zeng. Do as i can and not as i say: Grounding language in robotic affordances. In arXiv preprint arXiv:2204.01691, 2022. [2] Anthropic. 100k context windows. https://www.anthropic.com/index/100k-context-windows, 2023. Accessed: 05/15/2023.
2308.00675#37
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Today, large language models (LLMs) are taught to use new tools by providing a few demonstrations of the tool's usage. Unfortunately, demonstrations are hard to acquire, and can result in undesirable biased usage if the wrong demonstration is chosen. Even in the rare scenario that demonstrations are readily available, there is no principled selection protocol to determine how many and which ones to provide. As tasks grow more complex, the selection search grows combinatorially and invariably becomes intractable. Our work provides an alternative to demonstrations: tool documentation. We advocate the use of tool documentation, descriptions for the individual tool usage, over demonstrations. We substantiate our claim through three main empirical findings on 6 tasks across both vision and language modalities. First, on existing benchmarks, zero-shot prompts with only tool documentation are sufficient for eliciting proper tool usage, achieving performance on par with few-shot prompts. Second, on a newly collected realistic tool-use dataset with hundreds of available tool APIs, we show that tool documentation is significantly more valuable than demonstrations, with zero-shot documentation significantly outperforming few-shot without documentation. Third, we highlight the benefits of tool documentations by tackling image generation and video tracking using just-released unseen state-of-the-art models as tools. Finally, we highlight the possibility of using tool documentation to automatically enable new applications: by using nothing more than the documentation of GroundingDino, Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the just-released Grounded-SAM and Track Anything models.
http://arxiv.org/pdf/2308.00675
Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister
cs.CL, cs.AI, cs.CV, cs.LG
null
null
cs.CL
20230801
20230801
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2204.10878" }, { "id": "1910.08210" }, { "id": "2107.07653" }, { "id": "2201.11903" }, { "id": "2305.17126" }, { "id": "1704.07535" }, { "id": "2205.01068" }, { "id": "2203.05115" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2305.04091" }, { "id": "2303.05499" }, { "id": "2107.07566" }, { "id": "2110.14168" }, { "id": "2210.03350" }, { "id": "2303.11381" }, { "id": "2101.06804" }, { "id": "2304.08354" }, { "id": "2212.14024" }, { "id": "2305.18752" }, { "id": "2211.10435" }, { "id": "2303.04671" }, { "id": "2210.12810" }, { "id": "1808.09588" }, { "id": "2304.11062" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2210.02406" }, { "id": "2212.10560" }, { "id": "2303.04129" }, { "id": "1704.01696" }, { "id": "2302.00923" }, { "id": "2211.12588" }, { "id": "1908.03557" }, { "id": "2210.05359" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2201.08239" }, { "id": "2303.16199" }, { "id": "2304.09842" }, { "id": "2204.01691" }, { "id": "2305.01625" }, { "id": "2303.12712" }, { "id": "2207.05608" }, { "id": "2303.03846" }, { "id": "2211.11559" }, { "id": "2207.01206" }, { "id": "2303.09014" }, { "id": "2205.12255" }, { "id": "2208.03188" } ]
2308.01423
37
# References Devlin, J., Chang, M.-W., Lee, K. & Toutanova, K. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018). Bommasani, R. et al. On the opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258 (2021). Brown, T. et al. Language models are few-shot learners. Advances in neural information processing systems 33, 1877-1901 (2020). Touvron, H. et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971 (2023). Bubeck, S. et al. Sparks of artificial general intelligence: Early experiments with gpt-4. arXiv preprint arXiv:2303.12712 (2023).
2308.01423#37
ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks
ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to predict and generate metal-organic frameworks (MOFs). By leveraging a large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key details from textual inputs and delivers appropriate responses, thus eliminating the necessity for rigid structured queries. The system is comprised of three core components (i.e. an agent, a toolkit, and an evaluator) and it forms a robust pipeline that manages a variety of tasks, including data retrieval, property prediction, and structure generations. The study further explores the merits and constraints of using large language models (LLMs) AI system in material sciences using and showcases its transformative potential for future advancements.
http://arxiv.org/pdf/2308.01423
Yeonghun Kang, Jihan Kim
cs.CL, cs.AI, cs.LG, physics.chem-ph
null
null
cs.CL
20230801
20230825
[ { "id": "2302.13971" }, { "id": "2306.11296" }, { "id": "2303.17580" }, { "id": "2305.18365" }, { "id": "2305.10601" }, { "id": "1810.04805" }, { "id": "2211.09085" }, { "id": "2304.05376" }, { "id": "2212.05238" }, { "id": "2108.07258" }, { "id": "2110.06197" }, { "id": "2306.06283" }, { "id": "2008.02275" }, { "id": "2303.12712" }, { "id": "2210.03629" }, { "id": "2205.00445" }, { "id": "2009.03300" } ]
2308.00245
38
Thinking in English. Our workflow necessitates a structured output, such as a JSON format, for automation. However, we observe that LLMs often produce suboptimal results when directly prompted to output in this format. As LLMs build responses incrementally, word-by-word, based on preceding outputs [32], direct prompts to output JSON may interrupt their thought progression. This emphasizes the importance of initially soliciting responses in natural language to ensure comprehensive and effective reasoning. Consequently, we instruct the LLM to first articulate its thought processes in English, followed by a subsequent prompt to transform its response into a JSON summary. 4.6 Design #4: Self-Validation At times, LLMs can display unpredictable or inconsistent behaviors, particularly in complex scenarios involving detailed logical constructs. Consider a case where an initializer carries the postcondition must_init if ret ↦→ 0. LLMs may still mistakenly assume it to be may_init, despite the explicit presence of the post-constraint ret ↦→ 0. Conversely, an LLM might erroneously interpret a non-existent post-constraint and incorrectly infer a may_init case as must_init. This phenomenon is known as hallucination. Essentially, the hallucination can lead to both false positives and false negatives in bug detection, thereby affecting accuracy and reliability.
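A small sketch of the "reason in English first, then summarize as JSON" pattern described above; `ask_llm` is assumed to be a stateful chat helper that keeps conversation history, and the JSON keys are illustrative assumptions rather than LLift's actual schema.

```python
import json

# Minimal sketch of the two-step prompting pattern: free-form reasoning
# first, structured JSON summary second. `ask_llm` and the key names are
# illustrative assumptions.

def reason_then_summarize(ask_llm, analysis_question):
    # Step 1: let the model reason freely in natural language.
    reasoning = ask_llm(
        analysis_question + "\nPlease explain your reasoning in English.")

    # Step 2: ask for a structured summary of the answer it just gave.
    raw = ask_llm(
        "Now summarize your conclusion as JSON with the keys "
        '"initializer", "postconstraint", and "may_init".')
    try:
        return json.loads(raw), reasoning
    except json.JSONDecodeError:
        # Fall back to the free-form answer if the JSON is malformed.
        return None, reasoning
```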
2308.00245#38
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Static analysis is a widely used technique in software engineering for identifying and mitigating bugs. However, a significant hurdle lies in achieving a delicate balance between precision and scalability. Large Language Models (LLMs) offer a promising alternative, as recent advances demonstrate remarkable capabilities in comprehending, generating, and even debugging code. Yet, the logic of bugs can be complex and require sophisticated reasoning and a large analysis scope spanning multiple functions. Therefore, at this point, LLMs are better used in an assistive role to complement static analysis. In this paper, we take a deep dive into the open space of LLM-assisted static analysis, using use-before-initialization (UBI) bugs as a case study. To this end, we develop LLift, a fully automated framework that interfaces with both a static analysis tool and an LLM. By carefully designing the framework and the prompts, we are able to overcome a number of challenges, including bug-specific modeling, the large problem scope, the non-deterministic nature of LLMs, etc. Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs produced by static analysis, LLift demonstrates a potent capability, showcasing a reasonable precision (50%) and appearing to have no missing bugs. It even identified 13 previously unknown UBI bugs in the Linux kernel. This research paves the way for new opportunities and methodologies in using LLMs for bug discovery in extensive, real-world datasets.
http://arxiv.org/pdf/2308.00245
Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian
cs.SE, cs.AI
null
null
cs.SE
20230801
20231115
[ { "id": "2305.10601" }, { "id": "2107.03374" }, { "id": "2210.11610" }, { "id": "2305.16291" }, { "id": "2305.16151" }, { "id": "2303.18223" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "2203.02155" }, { "id": "2304.11938" }, { "id": "2304.03262" }, { "id": "2304.10513" }, { "id": "2201.11903" }, { "id": "2305.12138" }, { "id": "2305.12865" }, { "id": "2303.08774" }, { "id": "2306.01987" }, { "id": "2304.06815" } ]
2308.00436
38
Aida Amini, Saadia Gabriel, Shanchuan Lin, Rik Koncel-Kedziorski, Yejin Choi, and Hannaneh Hajishirzi. MathQA: Towards interpretable math word problem solving with operation-based formalisms. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 2357–2367, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1245. URL https://aclanthology.org/N19-1245. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020.
2308.00436#38
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
The recent progress in large language models (LLMs), especially the invention of chain-of-thought prompting, has made it possible to automatically answer questions by stepwise reasoning. However, when faced with more complicated problems that require non-linear thinking, even the strongest LLMs make mistakes. To address this, we explore whether LLMs are able to recognize errors in their own step-by-step reasoning, without resorting to external resources. To this end, we propose SelfCheck, a general-purpose zero-shot verification schema for recognizing such errors. We then use the results of these checks to improve question-answering performance by conducting weighted voting on multiple solutions to the question. We test SelfCheck on three datasets (GSM8K, MathQA, and MATH) and find that it successfully recognizes errors and, in turn, increases final answer accuracies.
http://arxiv.org/pdf/2308.00436
Ning Miao, Yee Whye Teh, Tom Rainforth
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230801
20231005
[ { "id": "2206.02336" }, { "id": "2302.13971" }, { "id": "2110.14168" }, { "id": "2305.20050" }, { "id": "2204.02311" }, { "id": "2301.13379" }, { "id": "2306.03872" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.08774" }, { "id": "2212.09561" } ]
2308.00675
38
[3] AutoGPT. Auto gpt. https://autogpt.net/category/chatgpt-tools/autogpt/, 2023. Accessed: 05/15/2023. [4] Michael Bain and Claude Sammut. A framework for behavioural cloning. In Machine Intelligence 15, pages 103–129, 1995. [5] Amanda Bertsch, Uri Alon, Graham Neubig, and Matthew R Gormley. Unlimiformer: Long-range transformers with unlimited length input. arXiv preprint arXiv:2305.01625, 2023. [6] Sebastian Borgeaud, Arthur Mensch, Jordan Hoffmann, Trevor Cai, Eliza Rutherford, Katie Millican, George Bm Van Den Driessche, Jean-Baptiste Lespiau, Bogdan Damoc, Aidan Clark, et al. Improving language models by retrieving from trillions of tokens. In International conference on machine learning, pages 2206–2240. PMLR, 2022. [7] SRK Branavan, David Silver, and Regina Barzilay. Learning to win by reading manuals in a monte-carlo framework. Journal of Artificial Intelligence Research, 43:661–704, 2012.
2308.00675#38
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Today, large language models (LLMs) are taught to use new tools by providing a few demonstrations of the tool's usage. Unfortunately, demonstrations are hard to acquire, and can result in undesirable biased usage if the wrong demonstration is chosen. Even in the rare scenario that demonstrations are readily available, there is no principled selection protocol to determine how many and which ones to provide. As tasks grow more complex, the selection search grows combinatorially and invariably becomes intractable. Our work provides an alternative to demonstrations: tool documentation. We advocate the use of tool documentation, descriptions for the individual tool usage, over demonstrations. We substantiate our claim through three main empirical findings on 6 tasks across both vision and language modalities. First, on existing benchmarks, zero-shot prompts with only tool documentation are sufficient for eliciting proper tool usage, achieving performance on par with few-shot prompts. Second, on a newly collected realistic tool-use dataset with hundreds of available tool APIs, we show that tool documentation is significantly more valuable than demonstrations, with zero-shot documentation significantly outperforming few-shot without documentation. Third, we highlight the benefits of tool documentations by tackling image generation and video tracking using just-released unseen state-of-the-art models as tools. Finally, we highlight the possibility of using tool documentation to automatically enable new applications: by using nothing more than the documentation of GroundingDino, Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the just-released Grounded-SAM and Track Anything models.
http://arxiv.org/pdf/2308.00675
Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister
cs.CL, cs.AI, cs.CV, cs.LG
null
null
cs.CL
20230801
20230801
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2204.10878" }, { "id": "1910.08210" }, { "id": "2107.07653" }, { "id": "2201.11903" }, { "id": "2305.17126" }, { "id": "1704.07535" }, { "id": "2205.01068" }, { "id": "2203.05115" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2305.04091" }, { "id": "2303.05499" }, { "id": "2107.07566" }, { "id": "2110.14168" }, { "id": "2210.03350" }, { "id": "2303.11381" }, { "id": "2101.06804" }, { "id": "2304.08354" }, { "id": "2212.14024" }, { "id": "2305.18752" }, { "id": "2211.10435" }, { "id": "2303.04671" }, { "id": "2210.12810" }, { "id": "1808.09588" }, { "id": "2304.11062" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2210.02406" }, { "id": "2212.10560" }, { "id": "2303.04129" }, { "id": "1704.01696" }, { "id": "2302.00923" }, { "id": "2211.12588" }, { "id": "1908.03557" }, { "id": "2210.05359" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2201.08239" }, { "id": "2303.16199" }, { "id": "2304.09842" }, { "id": "2204.01691" }, { "id": "2305.01625" }, { "id": "2303.12712" }, { "id": "2207.05608" }, { "id": "2303.03846" }, { "id": "2211.11559" }, { "id": "2207.01206" }, { "id": "2303.09014" }, { "id": "2205.12255" }, { "id": "2208.03188" } ]
2308.01423
38
Vaswani, A. et al. Attention is all you need. Advances in neural information processing systems 30 (2017). Liu, P. et al. Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. ACM Computing Surveys 55, 1-35 (2023). https://github.com/yoheinakajima/babyagi. https://github.com/Significant-Gravitas/Auto-GPT. Shen, Y. et al. Hugginggpt: Solving ai tasks with chatgpt and its friends in huggingface. arXiv preprint arXiv:2303.17580 (2023). Khan, R. A., Jawaid, M., Khan, A. R. & Sajjad, M. ChatGPT-Reshaping medical education and clinical management. Pakistan Journal of Medical Sciences 39, 605 (2023). Taylor, R. et al. Galactica: A large language model for science. arXiv preprint arXiv:2211.09085 (2022). Hendrycks, D. et al. Aligning ai with shared human values. arXiv preprint arXiv:2008.02275 (2020). Hendrycks, D. et al.
2308.01423#38
ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks
ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to predict and generate metal-organic frameworks (MOFs). By leveraging a large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key details from textual inputs and delivers appropriate responses, thus eliminating the necessity for rigid structured queries. The system is comprised of three core components (i.e. an agent, a toolkit, and an evaluator) and it forms a robust pipeline that manages a variety of tasks, including data retrieval, property prediction, and structure generations. The study further explores the merits and constraints of using large language models (LLMs) AI system in material sciences using and showcases its transformative potential for future advancements.
http://arxiv.org/pdf/2308.01423
Yeonghun Kang, Jihan Kim
cs.CL, cs.AI, cs.LG, physics.chem-ph
null
null
cs.CL
20230801
20230825
[ { "id": "2302.13971" }, { "id": "2306.11296" }, { "id": "2303.17580" }, { "id": "2305.18365" }, { "id": "2305.10601" }, { "id": "1810.04805" }, { "id": "2211.09085" }, { "id": "2304.05376" }, { "id": "2212.05238" }, { "id": "2108.07258" }, { "id": "2110.06197" }, { "id": "2306.06283" }, { "id": "2008.02275" }, { "id": "2303.12712" }, { "id": "2210.03629" }, { "id": "2205.00445" }, { "id": "2009.03300" } ]
2308.00245
39
In addition to task decomposition, we also introduce the concept of self-validation to enhance reliability. Before the LLM reaches its final conclusion, this method reinforces specific rules, allowing the LLM to reassess its previous responses for adherence and make necessary corrections. We observed that this practice yields better results. We evaluate the effect of self-validation in §6.4. As seen in Figure 5, we employ self-validation in both conversations. By prompting a list of correct properties that we expect, LLMs can verify and correct their results by themselves automatically.
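A minimal sketch of the self-validation step described above, assuming a hypothetical `ask_llm` helper and an illustrative property checklist; the actual properties LLift prompts with are not reproduced here.

```python
# Sketch of self-validation: before accepting the model's conclusion,
# re-prompt it with the properties a valid answer must satisfy and let it
# correct itself. The checklist and `ask_llm` are illustrative assumptions.

EXPECTED_PROPERTIES = [
    "Every post-constraint you report must literally appear in the caller code.",
    "must_init requires initialization on all paths satisfying the post-constraint.",
    "If initialization depends on unknown behavior, report may_init instead.",
]

def self_validate(ask_llm, draft_answer):
    checklist = "\n".join(f"- {p}" for p in EXPECTED_PROPERTIES)
    prompt = (
        "Review your previous answer against the following properties and "
        "correct it if any are violated:\n"
        f"{checklist}\n\nPrevious answer:\n{draft_answer}"
    )
    return ask_llm(prompt)
```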
2308.00245#39
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Static analysis is a widely used technique in software engineering for identifying and mitigating bugs. However, a significant hurdle lies in achieving a delicate balance between precision and scalability. Large Language Models (LLMs) offer a promising alternative, as recent advances demonstrate remarkable capabilities in comprehending, generating, and even debugging code. Yet, the logic of bugs can be complex and require sophisticated reasoning and a large analysis scope spanning multiple functions. Therefore, at this point, LLMs are better used in an assistive role to complement static analysis. In this paper, we take a deep dive into the open space of LLM-assisted static analysis, using use-before-initialization (UBI) bugs as a case study. To this end, we develop LLift, a fully automated framework that interfaces with both a static analysis tool and an LLM. By carefully designing the framework and the prompts, we are able to overcome a number of challenges, including bug-specific modeling, the large problem scope, the non-deterministic nature of LLMs, etc. Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs produced by static analysis, LLift demonstrates a potent capability, showcasing a reasonable precision (50%) and appearing to have no missing bugs. It even identified 13 previously unknown UBI bugs in the Linux kernel. This research paves the way for new opportunities and methodologies in using LLMs for bug discovery in extensive, real-world datasets.
http://arxiv.org/pdf/2308.00245
Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian
cs.SE, cs.AI
null
null
cs.SE
20230801
20231115
[ { "id": "2305.10601" }, { "id": "2107.03374" }, { "id": "2210.11610" }, { "id": "2305.16291" }, { "id": "2305.16151" }, { "id": "2303.18223" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "2203.02155" }, { "id": "2304.11938" }, { "id": "2304.03262" }, { "id": "2304.10513" }, { "id": "2201.11903" }, { "id": "2305.12138" }, { "id": "2305.12865" }, { "id": "2303.08774" }, { "id": "2306.01987" }, { "id": "2304.06815" } ]
2308.00352
39
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N. Carr, Jan Leike, Josh Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. Evaluating large language models trained on code, 2021a.
2308.00352#39
MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework
Remarkable progress has been made on automated problem solving through societies of agents based on large language models (LLMs). Existing LLM-based multi-agent systems can already solve simple dialogue tasks. Solutions to more complex tasks, however, are complicated through logic inconsistencies due to cascading hallucinations caused by naively chaining LLMs. Here we introduce MetaGPT, an innovative meta-programming framework incorporating efficient human workflows into LLM-based multi-agent collaborations. MetaGPT encodes Standardized Operating Procedures (SOPs) into prompt sequences for more streamlined workflows, thus allowing agents with human-like domain expertise to verify intermediate results and reduce errors. MetaGPT utilizes an assembly line paradigm to assign diverse roles to various agents, efficiently breaking down complex tasks into subtasks involving many agents working together. On collaborative software engineering benchmarks, MetaGPT generates more coherent solutions than previous chat-based multi-agent systems. Our project can be found at https://github.com/geekan/MetaGPT
http://arxiv.org/pdf/2308.00352
Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, Jürgen Schmidhuber
cs.AI, cs.MA
null
null
cs.AI
20230801
20231106
[ { "id": "2308.12950" }, { "id": "2305.17066" }, { "id": "1511.09249" }, { "id": "2308.11432" }, { "id": "2306.08568" }, { "id": "2310.02304" }, { "id": "2303.08896" }, { "id": "2204.05999" }, { "id": "2309.16797" }, { "id": "2002.08155" }, { "id": "2305.16960" } ]
2308.00436
39
Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. Sparks of artificial general intelligence: Early experiments with gpt-4. arXiv preprint arXiv:2303.12712, 2023. Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311, 2022. Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021. Google. Palm 2 technical report. arXiv preprint arXiv:2303.08774, 2023.
2308.00436#39
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
The recent progress in large language models (LLMs), especially the invention of chain-of-thought prompting, has made it possible to automatically answer questions by stepwise reasoning. However, when faced with more complicated problems that require non-linear thinking, even the strongest LLMs make mistakes. To address this, we explore whether LLMs are able to recognize errors in their own step-by-step reasoning, without resorting to external resources. To this end, we propose SelfCheck, a general-purpose zero-shot verification schema for recognizing such errors. We then use the results of these checks to improve question-answering performance by conducting weighted voting on multiple solutions to the question. We test SelfCheck on three datasets (GSM8K, MathQA, and MATH) and find that it successfully recognizes errors and, in turn, increases final answer accuracies.
http://arxiv.org/pdf/2308.00436
Ning Miao, Yee Whye Teh, Tom Rainforth
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230801
20231005
[ { "id": "2206.02336" }, { "id": "2302.13971" }, { "id": "2110.14168" }, { "id": "2305.20050" }, { "id": "2204.02311" }, { "id": "2301.13379" }, { "id": "2306.03872" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.08774" }, { "id": "2212.09561" } ]
2308.00675
39
[8] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020. [9] Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. Sparks of artificial general intelligence: Early experiments with gpt-4. arXiv preprint arXiv:2303.12712, 2023. [10] Aydar Bulatov, Yuri Kuratov, and Mikhail S Burtsev. Scaling transformer to 1m tokens and beyond with rmt. arXiv preprint arXiv:2304.11062, 2023. [11] Tianle Cai, Xuezhi Wang, Tengyu Ma, Xinyun Chen, and Denny Zhou. Large language models as tool makers. arXiv preprint arXiv:2305.17126, 2023.
2308.00675#39
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Today, large language models (LLMs) are taught to use new tools by providing a few demonstrations of the tool's usage. Unfortunately, demonstrations are hard to acquire, and can result in undesirable biased usage if the wrong demonstration is chosen. Even in the rare scenario that demonstrations are readily available, there is no principled selection protocol to determine how many and which ones to provide. As tasks grow more complex, the selection search grows combinatorially and invariably becomes intractable. Our work provides an alternative to demonstrations: tool documentation. We advocate the use of tool documentation, descriptions for the individual tool usage, over demonstrations. We substantiate our claim through three main empirical findings on 6 tasks across both vision and language modalities. First, on existing benchmarks, zero-shot prompts with only tool documentation are sufficient for eliciting proper tool usage, achieving performance on par with few-shot prompts. Second, on a newly collected realistic tool-use dataset with hundreds of available tool APIs, we show that tool documentation is significantly more valuable than demonstrations, with zero-shot documentation significantly outperforming few-shot without documentation. Third, we highlight the benefits of tool documentations by tackling image generation and video tracking using just-released unseen state-of-the-art models as tools. Finally, we highlight the possibility of using tool documentation to automatically enable new applications: by using nothing more than the documentation of GroundingDino, Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the just-released Grounded-SAM and Track Anything models.
http://arxiv.org/pdf/2308.00675
Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister
cs.CL, cs.AI, cs.CV, cs.LG
null
null
cs.CL
20230801
20230801
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2204.10878" }, { "id": "1910.08210" }, { "id": "2107.07653" }, { "id": "2201.11903" }, { "id": "2305.17126" }, { "id": "1704.07535" }, { "id": "2205.01068" }, { "id": "2203.05115" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2305.04091" }, { "id": "2303.05499" }, { "id": "2107.07566" }, { "id": "2110.14168" }, { "id": "2210.03350" }, { "id": "2303.11381" }, { "id": "2101.06804" }, { "id": "2304.08354" }, { "id": "2212.14024" }, { "id": "2305.18752" }, { "id": "2211.10435" }, { "id": "2303.04671" }, { "id": "2210.12810" }, { "id": "1808.09588" }, { "id": "2304.11062" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2210.02406" }, { "id": "2212.10560" }, { "id": "2303.04129" }, { "id": "1704.01696" }, { "id": "2302.00923" }, { "id": "2211.12588" }, { "id": "1908.03557" }, { "id": "2210.05359" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2201.08239" }, { "id": "2303.16199" }, { "id": "2304.09842" }, { "id": "2204.01691" }, { "id": "2305.01625" }, { "id": "2303.12712" }, { "id": "2207.05608" }, { "id": "2303.03846" }, { "id": "2211.11559" }, { "id": "2207.01206" }, { "id": "2303.09014" }, { "id": "2205.12255" }, { "id": "2208.03188" } ]
2308.01423
39
D. et al. Aligning ai with shared human values. arXiv preprint arXiv:2008.02275 (2020). Hendrycks, D. et al. Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300 (2020). Bran, A. M., Cox, S., White, A. D. & Schwaller, P. ChemCrow: Augmenting large-language models with chemistry tools. arXiv preprint arXiv:2304.05376 (2023). Guo, T. et al. What indeed can GPT models do in chemistry? A comprehensive benchmark on eight tasks. arXiv preprint arXiv:2305.18365 (2023). Bucior, B. J. et al. Identification schemes for metal–organic frameworks to enable rapid search and cheminformatics analysis. Crystal Growth & Design 19, 6682-6697 (2019). Hu, T., Song, H., Jiang, T. & Li, S. Learning representations of inorganic materials from generative adversarial networks. Symmetry 12, 1889 (2020).
2308.01423#39
ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks
ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to predict and generate metal-organic frameworks (MOFs). By leveraging large-scale language models (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key details from textual inputs and delivers appropriate responses, thus eliminating the necessity for rigid structured queries. The system comprises three core components (i.e. an agent, a toolkit, and an evaluator) and it forms a robust pipeline that manages a variety of tasks, including data retrieval, property prediction, and structure generation. The study further explores the merits and constraints of using large language model (LLM) AI systems in material sciences and showcases their transformative potential for future advancements.
http://arxiv.org/pdf/2308.01423
Yeonghun Kang, Jihan Kim
cs.CL, cs.AI, cs.LG, physics.chem-ph
null
null
cs.CL
20230801
20230825
[ { "id": "2302.13971" }, { "id": "2306.11296" }, { "id": "2303.17580" }, { "id": "2305.18365" }, { "id": "2305.10601" }, { "id": "1810.04805" }, { "id": "2211.09085" }, { "id": "2304.05376" }, { "id": "2212.05238" }, { "id": "2108.07258" }, { "id": "2110.06197" }, { "id": "2306.06283" }, { "id": "2008.02275" }, { "id": "2303.12712" }, { "id": "2210.03629" }, { "id": "2205.00445" }, { "id": "2009.03300" } ]
2308.00245
40
4.7 Additional Prompting Strategies In order to further optimize the efficacy of our model, we have incorporated several additional strategies into our prompt design: Chain-of-Thought. Leveraging the Chain-of-Thought (CoT) approach, we encourage the LLMs to engage in stepwise reasoning, using the phrase “think step by step”. This not only helps generate longer, comprehensive responses, but it also provides intermediate results at each juncture of the thought process. Previous studies suggest the CoT approach considerably enhances the LLMs’ reasoning capabilities [3]. We incorporate the CoT strategy into every prompt. Source Code Analysis. Rather than analyzing abstract representations, we opt to focus our attention directly on the functions within the source code. This approach not only economizes on token use compared to LLVM IR, but also allows the model to leverage the semantic richness of variable names and other programming constructs to conduct a more nuanced analysis. There are still some interesting details in designing an effective prompt but due to space constraints and without changing the overall strategy, we will not list them all. Readers intrigued can delve into the intricacies of our open-sourced prompt design and experimental implementations to gain a deeper understanding.
2308.00245#40
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Static analysis is a widely used technique in software engineering for identifying and mitigating bugs. However, a significant hurdle lies in achieving a delicate balance between precision and scalability. Large Language Models (LLMs) offer a promising alternative, as recent advances demonstrate remarkable capabilities in comprehending, generating, and even debugging code. Yet, the logic of bugs can be complex and require sophisticated reasoning and a large analysis scope spanning multiple functions. Therefore, at this point, LLMs are better used in an assistive role to complement static analysis. In this paper, we take a deep dive into the open space of LLM-assisted static analysis, using use-before-initialization (UBI) bugs as a case study. To this end, we develop LLift, a fully automated framework that interfaces with both a static analysis tool and an LLM. By carefully designing the framework and the prompts, we are able to overcome a number of challenges, including bug-specific modeling, the large problem scope, the non-deterministic nature of LLMs, etc. Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs produced by static analysis, LLift demonstrates a potent capability, showcasing a reasonable precision (50%) and appearing to have no missing bugs. It even identified 13 previously unknown UBI bugs in the Linux kernel. This research paves the way for new opportunities and methodologies in using LLMs for bug discovery in extensive, real-world datasets.
http://arxiv.org/pdf/2308.00245
Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian
cs.SE, cs.AI
null
null
cs.SE
20230801
20231115
[ { "id": "2305.10601" }, { "id": "2107.03374" }, { "id": "2210.11610" }, { "id": "2305.16291" }, { "id": "2305.16151" }, { "id": "2303.18223" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "2203.02155" }, { "id": "2304.11938" }, { "id": "2304.03262" }, { "id": "2304.10513" }, { "id": "2201.11903" }, { "id": "2305.12138" }, { "id": "2305.12865" }, { "id": "2303.08774" }, { "id": "2306.01987" }, { "id": "2304.06815" } ]
2308.00352
40
Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chen Qian, Chi-Min Chan, Yujia Qin, Yaxi Lu, Ruobing Xie, Zhiyuan Liu, Maosong Sun, and Jie Zhou. Agentverse: Facilitating multi-agent collaboration and exploring emergent behaviors in agents, 2023. Xinyun Chen, Chang Liu, and Dawn Song. Execution-guided neural program synthesis. In ICLR, 2018. Xinyun Chen, Dawn Song, and Yuandong Tian. Latent execution for neural program synthesis beyond domain-specific languages. NeurIPS, 2021b.
2308.00352#40
MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework
Remarkable progress has been made on automated problem solving through societies of agents based on large language models (LLMs). Existing LLM-based multi-agent systems can already solve simple dialogue tasks. Solutions to more complex tasks, however, are complicated through logic inconsistencies due to cascading hallucinations caused by naively chaining LLMs. Here we introduce MetaGPT, an innovative meta-programming framework incorporating efficient human workflows into LLM-based multi-agent collaborations. MetaGPT encodes Standardized Operating Procedures (SOPs) into prompt sequences for more streamlined workflows, thus allowing agents with human-like domain expertise to verify intermediate results and reduce errors. MetaGPT utilizes an assembly line paradigm to assign diverse roles to various agents, efficiently breaking down complex tasks into subtasks involving many agents working together. On collaborative software engineering benchmarks, MetaGPT generates more coherent solutions than previous chat-based multi-agent systems. Our project can be found at https://github.com/geekan/MetaGPT
http://arxiv.org/pdf/2308.00352
Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, Jürgen Schmidhuber
cs.AI, cs.MA
null
null
cs.AI
20230801
20231106
[ { "id": "2308.12950" }, { "id": "2305.17066" }, { "id": "1511.09249" }, { "id": "2308.11432" }, { "id": "2306.08568" }, { "id": "2310.02304" }, { "id": "2303.08896" }, { "id": "2204.05999" }, { "id": "2309.16797" }, { "id": "2002.08155" }, { "id": "2305.16960" } ]
2308.00436
40
Google. Palm 2 technical report. arXiv preprint arXiv:2303.08774, 2023. Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. Deberta: Decoding-enhanced bert with disentangled attention. In International Conference on Learning Representations, 2020. Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2), 2021. Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, Bei Chen, Jian-Guang Lou, and Weizhu Chen. On the advance of making language models better reasoners. arXiv preprint arXiv:2206.02336, 2022. Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. Let’s verify step by step. arXiv preprint arXiv:2305.20050, 2023.
2308.00436#40
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
The recent progress in large language models (LLMs), especially the invention of chain-of-thought prompting, has made it possible to automatically answer questions by stepwise reasoning. However, when faced with more complicated problems that require non-linear thinking, even the strongest LLMs make mistakes. To address this, we explore whether LLMs are able to recognize errors in their own step-by-step reasoning, without resorting to external resources. To this end, we propose SelfCheck, a general-purpose zero-shot verification schema for recognizing such errors. We then use the results of these checks to improve question-answering performance by conducting weighted voting on multiple solutions to the question. We test SelfCheck on three datasets (GSM8K, MathQA, and MATH) and find that it successfully recognizes errors and, in turn, increases final answer accuracies.
http://arxiv.org/pdf/2308.00436
Ning Miao, Yee Whye Teh, Tom Rainforth
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230801
20231005
[ { "id": "2206.02336" }, { "id": "2302.13971" }, { "id": "2110.14168" }, { "id": "2305.20050" }, { "id": "2204.02311" }, { "id": "2301.13379" }, { "id": "2306.03872" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.08774" }, { "id": "2212.09561" } ]