Dataset schema (column name: type, observed value length or value range):
doi: stringlengths, 10-10
chunk-id: int64, 0-936
chunk: stringlengths, 401-2.02k
id: stringlengths, 12-14
title: stringlengths, 8-162
summary: stringlengths, 228-1.92k
source: stringlengths, 31-31
authors: stringlengths, 7-6.97k
categories: stringlengths, 5-107
comment: stringlengths, 4-398
journal_ref: stringlengths, 8-194
primary_category: stringlengths, 5-17
published: stringlengths, 8-8
updated: stringlengths, 8-8
references: list
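The rows below follow this schema: one row per text chunk of an arXiv paper, with the paper-level metadata (title, summary, authors, categories, dates, references) repeated on every chunk row. Below is a minimal sketch of how such a table could be loaded and regrouped by paper; the file name `chunks.parquet` and the use of pandas are illustrative assumptions, not part of the dataset.

```python
# Minimal sketch, assuming the rows are available as a local Parquet file named
# "chunks.parquet" (the file name is hypothetical).
import pandas as pd

df = pd.read_parquet("chunks.parquet")

# Inspect the schema summarized above.
print(df.dtypes)
print(df["chunk"].str.len().describe())  # chunk lengths of roughly 401 to 2.02k characters

# Reassemble all chunks of one paper in chunk-id order, e.g. arXiv:2308.00245.
paper = df[df["doi"] == "2308.00245"].sort_values("chunk-id")
full_text = "\n".join(paper["chunk"])
print(paper["title"].iloc[0])
print(full_text[:500])
```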
2308.01390
72
[Extraction artifact: this chunk contains only unlabeled numeric values from the paper's full evaluation-results tables (OpenFlamingo vs. Flamingo across benchmarks and shot counts); the row and column headers were not preserved, so the numbers are not interpretable in isolation.]
2308.01390#72
OpenFlamingo: An Open-Source Framework for Training Large Autoregressive Vision-Language Models
We introduce OpenFlamingo, a family of autoregressive vision-language models ranging from 3B to 9B parameters. OpenFlamingo is an ongoing effort to produce an open-source replication of DeepMind's Flamingo models. On seven vision-language datasets, OpenFlamingo models average between 80 - 89% of corresponding Flamingo performance. This technical report describes our models, training data, hyperparameters, and evaluation suite. We share our models and code at https://github.com/mlfoundations/open_flamingo.
http://arxiv.org/pdf/2308.01390
Anas Awadalla, Irena Gao, Josh Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe, Yonatan Bitton, Samir Gadre, Shiori Sagawa, Jenia Jitsev, Simon Kornblith, Pang Wei Koh, Gabriel Ilharco, Mitchell Wortsman, Ludwig Schmidt
cs.CV, cs.AI, cs.LG
null
null
cs.CV
20230802
20230807
[ { "id": "1909.11059" }, { "id": "2306.05425" }, { "id": "2108.07258" }, { "id": "2205.12005" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2210.08402" }, { "id": "2306.16527" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2209.06794" }, { "id": "2304.11277" }, { "id": "2305.04790" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2302.14045" }, { "id": "2005.04790" }, { "id": "2301.12597" }, { "id": "2201.07520" }, { "id": "2301.13823" }, { "id": "2305.03726" }, { "id": "2303.16199" }, { "id": "2304.14108" }, { "id": "2304.06939" }, { "id": "2207.07635" }, { "id": "2109.01652" }, { "id": "2210.05916" } ]
2308.01390
73
[Extraction artifact: this chunk contains mostly unlabeled numeric values from the paper's full evaluation-results tables; only the dataset label "HatefulMemes [15]" survives, and the row and column headers were not preserved, so the numbers are not interpretable in isolation.]
2308.01390#73
OpenFlamingo: An Open-Source Framework for Training Large Autoregressive Vision-Language Models
We introduce OpenFlamingo, a family of autoregressive vision-language models ranging from 3B to 9B parameters. OpenFlamingo is an ongoing effort to produce an open-source replication of DeepMind's Flamingo models. On seven vision-language datasets, OpenFlamingo models average between 80 - 89% of corresponding Flamingo performance. This technical report describes our models, training data, hyperparameters, and evaluation suite. We share our models and code at https://github.com/mlfoundations/open_flamingo.
http://arxiv.org/pdf/2308.01390
Anas Awadalla, Irena Gao, Josh Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe, Yonatan Bitton, Samir Gadre, Shiori Sagawa, Jenia Jitsev, Simon Kornblith, Pang Wei Koh, Gabriel Ilharco, Mitchell Wortsman, Ludwig Schmidt
cs.CV, cs.AI, cs.LG
null
null
cs.CV
20230802
20230807
[ { "id": "1909.11059" }, { "id": "2306.05425" }, { "id": "2108.07258" }, { "id": "2205.12005" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2210.08402" }, { "id": "2306.16527" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2209.06794" }, { "id": "2304.11277" }, { "id": "2305.04790" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2302.14045" }, { "id": "2005.04790" }, { "id": "2301.12597" }, { "id": "2201.07520" }, { "id": "2301.13823" }, { "id": "2305.03726" }, { "id": "2303.16199" }, { "id": "2304.14108" }, { "id": "2304.06939" }, { "id": "2207.07635" }, { "id": "2109.01652" }, { "id": "2210.05916" } ]
2308.01390
75
Table 12: Full evaluation results using RICES across seven vision-language datasets using 0, 4, 8, 16, and 32 in-context examples. Table 13: Source of images for Figure 2; for each row, the links are given left to right in the order in which the images are presented (the link table itself was not preserved in extraction). system prompt (human authored) You are a helpful assistant that knows a lot about image descriptions and how they are used with text in broader contexts. Your job is to generate an interleaved sequence of images (referenced by their image descriptions) and sentences. Image descriptions should only describe the single image. They should be specific, 1-sentence, self-contained descriptions that capture the salient elements of the image. The sentences can refer to concepts in any of the images and to each-other. The interleaved sequences are interesting, clever, and represent compelling patterns. To understand them requires knowledge of real-world entities/events, comparison between the images, and commonsense knowledge. Real-world references/facts must be correct.
2308.01390#75
OpenFlamingo: An Open-Source Framework for Training Large Autoregressive Vision-Language Models
We introduce OpenFlamingo, a family of autoregressive vision-language models ranging from 3B to 9B parameters. OpenFlamingo is an ongoing effort to produce an open-source replication of DeepMind's Flamingo models. On seven vision-language datasets, OpenFlamingo models average between 80 - 89% of corresponding Flamingo performance. This technical report describes our models, training data, hyperparameters, and evaluation suite. We share our models and code at https://github.com/mlfoundations/open_flamingo.
http://arxiv.org/pdf/2308.01390
Anas Awadalla, Irena Gao, Josh Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe, Yonatan Bitton, Samir Gadre, Shiori Sagawa, Jenia Jitsev, Simon Kornblith, Pang Wei Koh, Gabriel Ilharco, Mitchell Wortsman, Ludwig Schmidt
cs.CV, cs.AI, cs.LG
null
null
cs.CV
20230802
20230807
[ { "id": "1909.11059" }, { "id": "2306.05425" }, { "id": "2108.07258" }, { "id": "2205.12005" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2210.08402" }, { "id": "2306.16527" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2209.06794" }, { "id": "2304.11277" }, { "id": "2305.04790" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2302.14045" }, { "id": "2005.04790" }, { "id": "2301.12597" }, { "id": "2201.07520" }, { "id": "2301.13823" }, { "id": "2305.03726" }, { "id": "2303.16199" }, { "id": "2304.14108" }, { "id": "2304.06939" }, { "id": "2207.07635" }, { "id": "2109.01652" }, { "id": "2210.05916" } ]
2308.01390
76
Remember to: 1) be creative. tell stories, reference real-world entities/events, etc. The images/sentence can play off each-other in fun ways. 2) be interesting. generate sequences that are cool, fun, compelling and require interesting commonsense reasoning across and between images/sentences. 3) make sure the image descriptions are self-contained, and the output format follows the requested format. user (human authored) Generate a creative, interesting sequence of sentences/images with the following format: (image A, sentence 1, image B, sentence 2, image C, sentence 3) assistant (human authored) Sure! Sequence format: (image A, sentence 1, image B, sentence 2, image C, sentence 3). This interleaved sequence of images/sentences contains images of animals from various parts of the world. The sentences first name the depicted animal, and then describe where the animal is native to.
2308.01390#76
OpenFlamingo: An Open-Source Framework for Training Large Autoregressive Vision-Language Models
We introduce OpenFlamingo, a family of autoregressive vision-language models ranging from 3B to 9B parameters. OpenFlamingo is an ongoing effort to produce an open-source replication of DeepMind's Flamingo models. On seven vision-language datasets, OpenFlamingo models average between 80 - 89% of corresponding Flamingo performance. This technical report describes our models, training data, hyperparameters, and evaluation suite. We share our models and code at https://github.com/mlfoundations/open_flamingo.
http://arxiv.org/pdf/2308.01390
Anas Awadalla, Irena Gao, Josh Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe, Yonatan Bitton, Samir Gadre, Shiori Sagawa, Jenia Jitsev, Simon Kornblith, Pang Wei Koh, Gabriel Ilharco, Mitchell Wortsman, Ludwig Schmidt
cs.CV, cs.AI, cs.LG
null
null
cs.CV
20230802
20230807
[ { "id": "1909.11059" }, { "id": "2306.05425" }, { "id": "2108.07258" }, { "id": "2205.12005" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2210.08402" }, { "id": "2306.16527" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2209.06794" }, { "id": "2304.11277" }, { "id": "2305.04790" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2302.14045" }, { "id": "2005.04790" }, { "id": "2301.12597" }, { "id": "2201.07520" }, { "id": "2301.13823" }, { "id": "2305.03726" }, { "id": "2303.16199" }, { "id": "2304.14108" }, { "id": "2304.06939" }, { "id": "2207.07635" }, { "id": "2109.01652" }, { "id": "2210.05916" } ]
2308.01390
77
Image A: A photo of a person holding a chinchilla in their hand Sentence 1: This is a chinchilla. They are mainly found in Chile. Image B: A photo of a shiba inu in the fall sitting in the forest Sentence 2: This is a shiba. They are very popular in Japan. Image C: A photo of a flamingo with a lake in the background Sentence 3: This is a flamingo. They are found in the Caribbean and South America. (4x more hand designed/interleaved examples...) user (human authored) Generate a creative, interesting sequence of sentences/images with the following format: (image A, image B, sentence 1, image C, image D, sentence 2, image E, image F, sentence 3). assistant (machine generated) Sure!
2308.01390#77
OpenFlamingo: An Open-Source Framework for Training Large Autoregressive Vision-Language Models
We introduce OpenFlamingo, a family of autoregressive vision-language models ranging from 3B to 9B parameters. OpenFlamingo is an ongoing effort to produce an open-source replication of DeepMind's Flamingo models. On seven vision-language datasets, OpenFlamingo models average between 80 - 89% of corresponding Flamingo performance. This technical report describes our models, training data, hyperparameters, and evaluation suite. We share our models and code at https://github.com/mlfoundations/open_flamingo.
http://arxiv.org/pdf/2308.01390
Anas Awadalla, Irena Gao, Josh Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe, Yonatan Bitton, Samir Gadre, Shiori Sagawa, Jenia Jitsev, Simon Kornblith, Pang Wei Koh, Gabriel Ilharco, Mitchell Wortsman, Ludwig Schmidt
cs.CV, cs.AI, cs.LG
null
null
cs.CV
20230802
20230807
[ { "id": "1909.11059" }, { "id": "2306.05425" }, { "id": "2108.07258" }, { "id": "2205.12005" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2210.08402" }, { "id": "2306.16527" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2209.06794" }, { "id": "2304.11277" }, { "id": "2305.04790" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2302.14045" }, { "id": "2005.04790" }, { "id": "2301.12597" }, { "id": "2201.07520" }, { "id": "2301.13823" }, { "id": "2305.03726" }, { "id": "2303.16199" }, { "id": "2304.14108" }, { "id": "2304.06939" }, { "id": "2207.07635" }, { "id": "2109.01652" }, { "id": "2210.05916" } ]
2308.01390
78
Sure! This interleaved sequence of images/sentences contains images of nature. The sentences that follow each image are statements that lead to conclusions. The final sentence explains the broader significance of the pattern. Image A: A photo of a seed being planted in the soil. Image B: A photo of a small plant sprouting from the soil. Sentence 1: The plant is growing. Image C: A photo of trees in a forest. Image D: A photo of an orchard with rows of apple trees. Sentence 2: Trees are used to produce fruits. Image E: A photo of a river running through a valley. Image F: A photo of a waterfall cascading down a mountain. Sentence 3: Water is a critical component of many natural processes, from plant growth to the formation of landscapes. Figure 12: We prompt ChatGPT using a system prompt, 5 interleaved human-authored sequences, and a query pattern, in this case “(Image A, Image B, Sentence 1, Image C, Image D, Sentence 2, Image E, Image F, Sentence 3)”. Based on these examples, ChatGPT first explains what sequence it is attempting to generate (here: “... images of nature. The sentences that follow each image are statements that lead to conclusions”). Later, images are retrieved based on the generated “Photo X” descriptions to form interleaved sequences.
2308.01390#78
OpenFlamingo: An Open-Source Framework for Training Large Autoregressive Vision-Language Models
We introduce OpenFlamingo, a family of autoregressive vision-language models ranging from 3B to 9B parameters. OpenFlamingo is an ongoing effort to produce an open-source replication of DeepMind's Flamingo models. On seven vision-language datasets, OpenFlamingo models average between 80 - 89% of corresponding Flamingo performance. This technical report describes our models, training data, hyperparameters, and evaluation suite. We share our models and code at https://github.com/mlfoundations/open_flamingo.
http://arxiv.org/pdf/2308.01390
Anas Awadalla, Irena Gao, Josh Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe, Yonatan Bitton, Samir Gadre, Shiori Sagawa, Jenia Jitsev, Simon Kornblith, Pang Wei Koh, Gabriel Ilharco, Mitchell Wortsman, Ludwig Schmidt
cs.CV, cs.AI, cs.LG
null
null
cs.CV
20230802
20230807
[ { "id": "1909.11059" }, { "id": "2306.05425" }, { "id": "2108.07258" }, { "id": "2205.12005" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2210.08402" }, { "id": "2306.16527" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2209.06794" }, { "id": "2304.11277" }, { "id": "2305.04790" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2302.14045" }, { "id": "2005.04790" }, { "id": "2301.12597" }, { "id": "2201.07520" }, { "id": "2301.13823" }, { "id": "2305.03726" }, { "id": "2303.16199" }, { "id": "2304.14108" }, { "id": "2304.06939" }, { "id": "2207.07635" }, { "id": "2109.01652" }, { "id": "2210.05916" } ]
2308.00245
0
arXiv:2308.00245v3 [cs.SE] 15 Nov 2023 # The Hitchhiker’s Guide to Program Analysis: A Journey with Large Language Models Haonan Li [email protected] UC Riverside Riverside, California, USA Yu Hao [email protected] UC Riverside Riverside, California, USA Yizhuo Zhai [email protected] UC Riverside Riverside, California, USA Zhiyun Qian [email protected] UC Riverside Riverside, California, USA
2308.00245#0
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Static analysis is a widely used technique in software engineering for identifying and mitigating bugs. However, a significant hurdle lies in achieving a delicate balance between precision and scalability. Large Language Models (LLMs) offer a promising alternative, as recent advances demonstrate remarkable capabilities in comprehending, generating, and even debugging code. Yet, the logic of bugs can be complex and require sophisticated reasoning and a large analysis scope spanning multiple functions. Therefore, at this point, LLMs are better used in an assistive role to complement static analysis. In this paper, we take a deep dive into the open space of LLM-assisted static analysis, using use-before-initialization (UBI) bugs as a case study. To this end, we develop LLift, a fully automated framework that interfaces with both a static analysis tool and an LLM. By carefully designing the framework and the prompts, we are able to overcome a number of challenges, including bug-specific modeling, the large problem scope, the non-deterministic nature of LLMs, etc. Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs produced by static analysis, LLift demonstrates a potent capability, showcasing a reasonable precision (50%) and appearing to have no missing bugs. It even identified 13 previously unknown UBI bugs in the Linux kernel. This research paves the way for new opportunities and methodologies in using LLMs for bug discovery in extensive, real-world datasets.
http://arxiv.org/pdf/2308.00245
Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian
cs.SE, cs.AI
null
null
cs.SE
20230801
20231115
[ { "id": "2305.10601" }, { "id": "2107.03374" }, { "id": "2210.11610" }, { "id": "2305.16291" }, { "id": "2305.16151" }, { "id": "2303.18223" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "2203.02155" }, { "id": "2304.11938" }, { "id": "2304.03262" }, { "id": "2304.10513" }, { "id": "2201.11903" }, { "id": "2305.12138" }, { "id": "2305.12865" }, { "id": "2303.08774" }, { "id": "2306.01987" }, { "id": "2304.06815" } ]
2308.00352
0
arXiv:2308.00352v5 [cs.AI] 6 Nov 2023 Preprint # METAGPT: META PROGRAMMING FOR A MULTI-AGENT COLLABORATIVE FRAMEWORK Sirui Hong1∗, Mingchen Zhuge2∗, Jonathan Chen1, Xiawu Zheng3, Yuheng Cheng4, Ceyao Zhang4, Jinlin Wang1, Zili Wang, Steven Ka Shing Yau5, Zijuan Lin4, Liyang Zhou6, Chenyu Ran1, Lingfeng Xiao1,7, Chenglin Wu1†, Jürgen Schmidhuber2,8 1DeepWisdom, 2AI Initiative, King Abdullah University of Science and Technology, 3Xiamen University, 5Nanjing University, 7University of California, Berkeley, # ABSTRACT
2308.00352#0
MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework
Remarkable progress has been made on automated problem solving through societies of agents based on large language models (LLMs). Existing LLM-based multi-agent systems can already solve simple dialogue tasks. Solutions to more complex tasks, however, are complicated through logic inconsistencies due to cascading hallucinations caused by naively chaining LLMs. Here we introduce MetaGPT, an innovative meta-programming framework incorporating efficient human workflows into LLM-based multi-agent collaborations. MetaGPT encodes Standardized Operating Procedures (SOPs) into prompt sequences for more streamlined workflows, thus allowing agents with human-like domain expertise to verify intermediate results and reduce errors. MetaGPT utilizes an assembly line paradigm to assign diverse roles to various agents, efficiently breaking down complex tasks into subtasks involving many agents working together. On collaborative software engineering benchmarks, MetaGPT generates more coherent solutions than previous chat-based multi-agent systems. Our project can be found at https://github.com/geekan/MetaGPT
http://arxiv.org/pdf/2308.00352
Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, Jürgen Schmidhuber
cs.AI, cs.MA
null
null
cs.AI
20230801
20231106
[ { "id": "2308.12950" }, { "id": "2305.17066" }, { "id": "1511.09249" }, { "id": "2308.11432" }, { "id": "2306.08568" }, { "id": "2310.02304" }, { "id": "2303.08896" }, { "id": "2204.05999" }, { "id": "2309.16797" }, { "id": "2002.08155" }, { "id": "2305.16960" } ]
2308.00436
0
arXiv:2308.00436v3 [cs.AI] 5 Oct 2023 # SELFCHECK: USING LLMS TO ZERO-SHOT CHECK THEIR OWN STEP-BY-STEP REASONING # Ning Miao1* Yee Whye Teh1 Tom Rainforth1 ABSTRACT The recent progress in large language models (LLMs), especially the invention of chain-of-thought prompting, has made it possible to automatically answer questions by stepwise reasoning. However, when faced with more complicated problems that require non-linear thinking, even the strongest LLMs make mistakes. To address this, we explore whether LLMs are able to recognize errors in their own step-by-step reasoning, without resorting to external resources. To this end, we propose SelfCheck, a general-purpose zero-shot verification schema for recognizing such errors. We then use the results of these checks to improve question-answering performance by conducting weighted voting on multiple solutions to the question. We test SelfCheck on three datasets—GSM8K, MathQA, and MATH—and find that it successfully recognizes errors and, in turn, increases final answer accuracies. # INTRODUCTION
2308.00436#0
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
The recent progress in large language models (LLMs), especially the invention of chain-of-thought prompting, has made it possible to automatically answer questions by stepwise reasoning. However, when faced with more complicated problems that require non-linear thinking, even the strongest LLMs make mistakes. To address this, we explore whether LLMs are able to recognize errors in their own step-by-step reasoning, without resorting to external resources. To this end, we propose SelfCheck, a general-purpose zero-shot verification schema for recognizing such errors. We then use the results of these checks to improve question-answering performance by conducting weighted voting on multiple solutions to the question. We test SelfCheck on three datasets (GSM8K, MathQA, and MATH) and find that it successfully recognizes errors and, in turn, increases final answer accuracies.
http://arxiv.org/pdf/2308.00436
Ning Miao, Yee Whye Teh, Tom Rainforth
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230801
20231005
[ { "id": "2206.02336" }, { "id": "2302.13971" }, { "id": "2110.14168" }, { "id": "2305.20050" }, { "id": "2204.02311" }, { "id": "2301.13379" }, { "id": "2306.03872" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.08774" }, { "id": "2212.09561" } ]
2308.00675
0
arXiv:2308.00675v1 [cs.CL] 1 Aug 2023 # Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models Cheng-Yu Hsieh1†, Si-An Chen2†, Chun-Liang Li3, Yasuhisa Fujii4, Alexander Ratner1, Chen-Yu Lee3, Ranjay Krishna1∗, Tomas Pfister3∗ 1University of Washington, 2National Taiwan University, 3Google Cloud AI Research, 4Google Research [email protected] # Abstract
2308.00675#0
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Today, large language models (LLMs) are taught to use new tools by providing a few demonstrations of the tool's usage. Unfortunately, demonstrations are hard to acquire, and can result in undesirable biased usage if the wrong demonstration is chosen. Even in the rare scenario that demonstrations are readily available, there is no principled selection protocol to determine how many and which ones to provide. As tasks grow more complex, the selection search grows combinatorially and invariably becomes intractable. Our work provides an alternative to demonstrations: tool documentation. We advocate the use of tool documentation, descriptions for the individual tool usage, over demonstrations. We substantiate our claim through three main empirical findings on 6 tasks across both vision and language modalities. First, on existing benchmarks, zero-shot prompts with only tool documentation are sufficient for eliciting proper tool usage, achieving performance on par with few-shot prompts. Second, on a newly collected realistic tool-use dataset with hundreds of available tool APIs, we show that tool documentation is significantly more valuable than demonstrations, with zero-shot documentation significantly outperforming few-shot without documentation. Third, we highlight the benefits of tool documentations by tackling image generation and video tracking using just-released unseen state-of-the-art models as tools. Finally, we highlight the possibility of using tool documentation to automatically enable new applications: by using nothing more than the documentation of GroundingDino, Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the just-released Grounded-SAM and Track Anything models.
http://arxiv.org/pdf/2308.00675
Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister
cs.CL, cs.AI, cs.CV, cs.LG
null
null
cs.CL
20230801
20230801
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2204.10878" }, { "id": "1910.08210" }, { "id": "2107.07653" }, { "id": "2201.11903" }, { "id": "2305.17126" }, { "id": "1704.07535" }, { "id": "2205.01068" }, { "id": "2203.05115" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2305.04091" }, { "id": "2303.05499" }, { "id": "2107.07566" }, { "id": "2110.14168" }, { "id": "2210.03350" }, { "id": "2303.11381" }, { "id": "2101.06804" }, { "id": "2304.08354" }, { "id": "2212.14024" }, { "id": "2305.18752" }, { "id": "2211.10435" }, { "id": "2303.04671" }, { "id": "2210.12810" }, { "id": "1808.09588" }, { "id": "2304.11062" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2210.02406" }, { "id": "2212.10560" }, { "id": "2303.04129" }, { "id": "1704.01696" }, { "id": "2302.00923" }, { "id": "2211.12588" }, { "id": "1908.03557" }, { "id": "2210.05359" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2201.08239" }, { "id": "2303.16199" }, { "id": "2304.09842" }, { "id": "2204.01691" }, { "id": "2305.01625" }, { "id": "2303.12712" }, { "id": "2207.05608" }, { "id": "2303.03846" }, { "id": "2211.11559" }, { "id": "2207.01206" }, { "id": "2303.09014" }, { "id": "2205.12255" }, { "id": "2208.03188" } ]
2308.01423
0
# ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks Yeonghun Kang, Jihan Kim* Department of Chemical and Biomolecular Engineering, Korea Advanced Institute of Science and Technology (KAIST), 291, Daehak-ro, Yuseong-gu, Daejeon 34141, Republic of Korea # ABSTRACT ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to predict and generate metal-organic frameworks (MOFs). By leveraging a large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key details from textual inputs and delivers appropriate responses, thus eliminating the necessity for rigid structured queries. The system is comprised of three core components (i.e. an agent, a toolkit, and an evaluator) and it forms a robust pipeline that manages a variety of tasks, including data retrieval, property prediction, and structure generation. The study further explores the merits and constraints of using large language model (LLM) AI systems in materials science and showcases its transformative potential for future advancements. # Introduction
2308.01423#0
ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks
ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to predict and generate metal-organic frameworks (MOFs). By leveraging a large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key details from textual inputs and delivers appropriate responses, thus eliminating the necessity for rigid structured queries. The system is comprised of three core components (i.e. an agent, a toolkit, and an evaluator) and it forms a robust pipeline that manages a variety of tasks, including data retrieval, property prediction, and structure generation. The study further explores the merits and constraints of using large language model (LLM) AI systems in materials science and showcases its transformative potential for future advancements.
http://arxiv.org/pdf/2308.01423
Yeonghun Kang, Jihan Kim
cs.CL, cs.AI, cs.LG, physics.chem-ph
null
null
cs.CL
20230801
20230825
[ { "id": "2302.13971" }, { "id": "2306.11296" }, { "id": "2303.17580" }, { "id": "2305.18365" }, { "id": "2305.10601" }, { "id": "1810.04805" }, { "id": "2211.09085" }, { "id": "2304.05376" }, { "id": "2212.05238" }, { "id": "2108.07258" }, { "id": "2110.06197" }, { "id": "2306.06283" }, { "id": "2008.02275" }, { "id": "2303.12712" }, { "id": "2210.03629" }, { "id": "2205.00445" }, { "id": "2009.03300" } ]
2308.00245
1
ABSTRACT Static analysis is a widely used technique in software engineering for identifying and mitigating bugs. However, a significant hurdle lies in achieving a delicate balance between precision and scalability. Large Language Models (LLMs) offer a promising alternative, as recent advances demonstrate remarkable capabilities in comprehending, generating, and even debugging code. Yet, the logic of bugs can be complex and require sophisticated reasoning and a large analysis scope spanning multiple functions. Therefore, at this point, LLMs are better used in an assistive role to complement static analysis. In this paper, we take a deep dive into the open space of LLM-assisted static analysis, using use-before-initialization (UBI) bugs as a case study. To this end, we develop LLift, a fully automated framework that interfaces with both a static analysis tool and an LLM. By carefully designing the framework and the prompts, we are able to overcome a number of challenges, including bug-specific modeling, the large problem scope, the non-deterministic nature of LLMs, etc. Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs produced by static analysis, LLift demonstrates a potent capability,
2308.00245#1
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Static analysis is a widely used technique in software engineering for identifying and mitigating bugs. However, a significant hurdle lies in achieving a delicate balance between precision and scalability. Large Language Models (LLMs) offer a promising alternative, as recent advances demonstrate remarkable capabilities in comprehending, generating, and even debugging code. Yet, the logic of bugs can be complex and require sophisticated reasoning and a large analysis scope spanning multiple functions. Therefore, at this point, LLMs are better used in an assistive role to complement static analysis. In this paper, we take a deep dive into the open space of LLM-assisted static analysis, using use-before-initialization (UBI) bugs as a case study. To this end, we develop LLift, a fully automated framework that interfaces with both a static analysis tool and an LLM. By carefully designing the framework and the prompts, we are able to overcome a number of challenges, including bug-specific modeling, the large problem scope, the non-deterministic nature of LLMs, etc. Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs produced by static analysis, LLift demonstrates a potent capability, showcasing a reasonable precision (50%) and appearing to have no missing bugs. It even identified 13 previously unknown UBI bugs in the Linux kernel. This research paves the way for new opportunities and methodologies in using LLMs for bug discovery in extensive, real-world datasets.
http://arxiv.org/pdf/2308.00245
Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian
cs.SE, cs.AI
null
null
cs.SE
20230801
20231115
[ { "id": "2305.10601" }, { "id": "2107.03374" }, { "id": "2210.11610" }, { "id": "2305.16291" }, { "id": "2305.16151" }, { "id": "2303.18223" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "2203.02155" }, { "id": "2304.11938" }, { "id": "2304.03262" }, { "id": "2304.10513" }, { "id": "2201.11903" }, { "id": "2305.12138" }, { "id": "2305.12865" }, { "id": "2303.08774" }, { "id": "2306.01987" }, { "id": "2304.06815" } ]
2308.00352
1
# ABSTRACT Remarkable progress has been made on automated problem solving through societies of agents based on large language models (LLMs). Existing LLM-based multi-agent systems can already solve simple dialogue tasks. Solutions to more complex tasks, however, are complicated through logic inconsistencies due to cascading hallucinations caused by naively chaining LLMs. Here we introduce MetaGPT, an innovative meta-programming framework incorporating efficient human workflows into LLM-based multi-agent collaborations. MetaGPT encodes Standardized Operating Procedures (SOPs) into prompt sequences for more streamlined workflows, thus allowing agents with human-like domain expertise to verify intermediate results and reduce errors. MetaGPT utilizes an assembly line paradigm to assign diverse roles to various agents, efficiently breaking down complex tasks into subtasks involving many agents working together. On collaborative software engineering benchmarks, MetaGPT generates more coherent solutions than previous chat-based multi-agent systems. Our project can be found at https://github.com/geekan/MetaGPT 1 # INTRODUCTION
2308.00352#1
MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework
Remarkable progress has been made on automated problem solving through societies of agents based on large language models (LLMs). Existing LLM-based multi-agent systems can already solve simple dialogue tasks. Solutions to more complex tasks, however, are complicated through logic inconsistencies due to cascading hallucinations caused by naively chaining LLMs. Here we introduce MetaGPT, an innovative meta-programming framework incorporating efficient human workflows into LLM-based multi-agent collaborations. MetaGPT encodes Standardized Operating Procedures (SOPs) into prompt sequences for more streamlined workflows, thus allowing agents with human-like domain expertise to verify intermediate results and reduce errors. MetaGPT utilizes an assembly line paradigm to assign diverse roles to various agents, efficiently breaking down complex tasks into subtasks involving many agents working together. On collaborative software engineering benchmarks, MetaGPT generates more coherent solutions than previous chat-based multi-agent systems. Our project can be found at https://github.com/geekan/MetaGPT
http://arxiv.org/pdf/2308.00352
Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, Jürgen Schmidhuber
cs.AI, cs.MA
null
null
cs.AI
20230801
20231106
[ { "id": "2308.12950" }, { "id": "2305.17066" }, { "id": "1511.09249" }, { "id": "2308.11432" }, { "id": "2306.08568" }, { "id": "2310.02304" }, { "id": "2303.08896" }, { "id": "2204.05999" }, { "id": "2309.16797" }, { "id": "2002.08155" }, { "id": "2305.16960" } ]
2308.00436
1
# INTRODUCTION Recent years have witnessed dramatic changes in the areas of NLP and AI brought on by significant advances in LLMs. From GPT-3 (Brown et al., 2020), PaLM (Chowdhery et al., 2022), Llama (Touvron et al., 2023) and Falcon (Almazrouei et al., 2023) to GPT-4 (OpenAI, 2023) and PaLM-2 (Google, 2023), the increasing model sizes and exploding amount of training data have empowered LLMs to achieve human-level performance on a large range of tasks, including summarization, translation, and question answering. The invention of Chain-of-Thought prompting (CoT, Wei et al. (2022)) has further enhanced LLMs’ ability to solve complex problems by generating step-by-step solutions.
2308.00436#1
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
The recent progress in large language models (LLMs), especially the invention of chain-of-thought prompting, has made it possible to automatically answer questions by stepwise reasoning. However, when faced with more complicated problems that require non-linear thinking, even the strongest LLMs make mistakes. To address this, we explore whether LLMs are able to recognize errors in their own step-by-step reasoning, without resorting to external resources. To this end, we propose SelfCheck, a general-purpose zero-shot verification schema for recognizing such errors. We then use the results of these checks to improve question-answering performance by conducting weighted voting on multiple solutions to the question. We test SelfCheck on three datasets (GSM8K, MathQA, and MATH) and find that it successfully recognizes errors and, in turn, increases final answer accuracies.
http://arxiv.org/pdf/2308.00436
Ning Miao, Yee Whye Teh, Tom Rainforth
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230801
20231005
[ { "id": "2206.02336" }, { "id": "2302.13971" }, { "id": "2110.14168" }, { "id": "2305.20050" }, { "id": "2204.02311" }, { "id": "2301.13379" }, { "id": "2306.03872" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.08774" }, { "id": "2212.09561" } ]
2308.00675
1
Today, large language models (LLMs) are taught to use new tools by providing a few demonstrations of the tool’s usage. Unfortunately, demonstrations are hard to acquire, and can result in undesirable biased usage if the wrong demonstration is chosen. Even in the rare scenario that demonstrations are readily available, there is no principled selection protocol to determine how many and which ones to provide. As tasks grow more complex, the selection search grows combinatorially and invariably becomes intractable. Our work provides an alternative to demonstrations: tool documentation. We advocate the use of tool documentation—descriptions for the individual tool usage—over demonstrations. We substantiate our claim through three main empirical findings on 6 tasks across both vision and language modalities. First, on existing benchmarks, zero-shot prompts with only tool documentation are sufficient for eliciting proper tool usage, achieving performance on par with few-shot prompts. Second, on a newly collected realistic tool-use dataset with hundreds of available tool APIs, we show that tool documentation is significantly more valuable than demonstrations, with zero-shot documentation significantly outperforming few-shot without documentation. Third, we highlight
2308.00675#1
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Today, large language models (LLMs) are taught to use new tools by providing a few demonstrations of the tool's usage. Unfortunately, demonstrations are hard to acquire, and can result in undesirable biased usage if the wrong demonstration is chosen. Even in the rare scenario that demonstrations are readily available, there is no principled selection protocol to determine how many and which ones to provide. As tasks grow more complex, the selection search grows combinatorially and invariably becomes intractable. Our work provides an alternative to demonstrations: tool documentation. We advocate the use of tool documentation, descriptions for the individual tool usage, over demonstrations. We substantiate our claim through three main empirical findings on 6 tasks across both vision and language modalities. First, on existing benchmarks, zero-shot prompts with only tool documentation are sufficient for eliciting proper tool usage, achieving performance on par with few-shot prompts. Second, on a newly collected realistic tool-use dataset with hundreds of available tool APIs, we show that tool documentation is significantly more valuable than demonstrations, with zero-shot documentation significantly outperforming few-shot without documentation. Third, we highlight the benefits of tool documentations by tackling image generation and video tracking using just-released unseen state-of-the-art models as tools. Finally, we highlight the possibility of using tool documentation to automatically enable new applications: by using nothing more than the documentation of GroundingDino, Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the just-released Grounded-SAM and Track Anything models.
http://arxiv.org/pdf/2308.00675
Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister
cs.CL, cs.AI, cs.CV, cs.LG
null
null
cs.CL
20230801
20230801
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2204.10878" }, { "id": "1910.08210" }, { "id": "2107.07653" }, { "id": "2201.11903" }, { "id": "2305.17126" }, { "id": "1704.07535" }, { "id": "2205.01068" }, { "id": "2203.05115" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2305.04091" }, { "id": "2303.05499" }, { "id": "2107.07566" }, { "id": "2110.14168" }, { "id": "2210.03350" }, { "id": "2303.11381" }, { "id": "2101.06804" }, { "id": "2304.08354" }, { "id": "2212.14024" }, { "id": "2305.18752" }, { "id": "2211.10435" }, { "id": "2303.04671" }, { "id": "2210.12810" }, { "id": "1808.09588" }, { "id": "2304.11062" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2210.02406" }, { "id": "2212.10560" }, { "id": "2303.04129" }, { "id": "1704.01696" }, { "id": "2302.00923" }, { "id": "2211.12588" }, { "id": "1908.03557" }, { "id": "2210.05359" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2201.08239" }, { "id": "2303.16199" }, { "id": "2304.09842" }, { "id": "2204.01691" }, { "id": "2305.01625" }, { "id": "2303.12712" }, { "id": "2207.05608" }, { "id": "2303.03846" }, { "id": "2211.11559" }, { "id": "2207.01206" }, { "id": "2303.09014" }, { "id": "2205.12255" }, { "id": "2208.03188" } ]
2308.01423
1
# Introduction The realm of generative artificial intelligence (AI) is witnessing an unprecedented surge, predominantly fostered by a new generation of computational tools known as large-scale language models (LLMs) [1-5]. These innovative models are deeply rooted in a novel architectural design paradigm, referred to as transformer models [6]. Their capabilities, however, stretch far beyond the domain of basic language tasks. Emulating human thought processes, they exhibit logical functions such as learning from a limited number of examples (few-shot learning), or in certain situations, learning without any examples (zero-shot learning) [3,5,7]. This is achieved through the understanding of vast volumes of text data, underscoring the immense potential held by these models. A significant development in this rapidly advancing field is the rise of autonomous LLM agents, which use LLMs for a wide range of tasks [8-10]. Such systems, powered by multiple LLMs, excel in making decisions on their own. As such, the idea of using LLMs to independently reason, find information, and produce answers is becoming more popular across different areas of research [11-14].
2308.01423#1
ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks
ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to predict and generate metal-organic frameworks (MOFs). By leveraging a large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key details from textual inputs and delivers appropriate responses, thus eliminating the necessity for rigid structured queries. The system is comprised of three core components (i.e. an agent, a toolkit, and an evaluator) and it forms a robust pipeline that manages a variety of tasks, including data retrieval, property prediction, and structure generation. The study further explores the merits and constraints of using large language model (LLM) AI systems in materials science and showcases its transformative potential for future advancements.
http://arxiv.org/pdf/2308.01423
Yeonghun Kang, Jihan Kim
cs.CL, cs.AI, cs.LG, physics.chem-ph
null
null
cs.CL
20230801
20230825
[ { "id": "2302.13971" }, { "id": "2306.11296" }, { "id": "2303.17580" }, { "id": "2305.18365" }, { "id": "2305.10601" }, { "id": "1810.04805" }, { "id": "2211.09085" }, { "id": "2304.05376" }, { "id": "2212.05238" }, { "id": "2108.07258" }, { "id": "2110.06197" }, { "id": "2306.06283" }, { "id": "2008.02275" }, { "id": "2303.12712" }, { "id": "2210.03629" }, { "id": "2205.00445" }, { "id": "2009.03300" } ]
2308.00245
2
etc. Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs produced by static analysis, LLift demonstrates a potent capability, showcasing a reasonable precision (50%) and appearing to have no missing bugs. It even identified 13 previously unknown UBI bugs in the Linux kernel. This research paves the way for new opportunities and methodologies in using LLMs for bug discovery in extensive, real-world datasets.
2308.00245#2
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Static analysis is a widely used technique in software engineering for identifying and mitigating bugs. However, a significant hurdle lies in achieving a delicate balance between precision and scalability. Large Language Models (LLMs) offer a promising alternative, as recent advances demonstrate remarkable capabilities in comprehending, generating, and even debugging code. Yet, the logic of bugs can be complex and require sophisticated reasoning and a large analysis scope spanning multiple functions. Therefore, at this point, LLMs are better used in an assistive role to complement static analysis. In this paper, we take a deep dive into the open space of LLM-assisted static analysis, using use-before-initialization (UBI) bugs as a case study. To this end, we develop LLift, a fully automated framework that interfaces with both a static analysis tool and an LLM. By carefully designing the framework and the prompts, we are able to overcome a number of challenges, including bug-specific modeling, the large problem scope, the non-deterministic nature of LLMs, etc. Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs produced by static analysis, LLift demonstrates a potent capability, showcasing a reasonable precision (50%) and appearing to have no missing bugs. It even identified 13 previously unknown UBI bugs in the Linux kernel. This research paves the way for new opportunities and methodologies in using LLMs for bug discovery in extensive, real-world datasets.
http://arxiv.org/pdf/2308.00245
Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian
cs.SE, cs.AI
null
null
cs.SE
20230801
20231115
[ { "id": "2305.10601" }, { "id": "2107.03374" }, { "id": "2210.11610" }, { "id": "2305.16291" }, { "id": "2305.16151" }, { "id": "2303.18223" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "2203.02155" }, { "id": "2304.11938" }, { "id": "2304.03262" }, { "id": "2304.10513" }, { "id": "2201.11903" }, { "id": "2305.12138" }, { "id": "2305.12865" }, { "id": "2303.08774" }, { "id": "2306.01987" }, { "id": "2304.06815" } ]
2308.00352
2
1 # INTRODUCTION Autonomous agents utilizing Large Language Models (LLMs) offer promising opportunities to enhance and replicate human workflows. In real-world applications, however, existing systems (Park et al., 2023; Zhuge et al., 2023; Cai et al., 2023; Wang et al., 2023c; Li et al., 2023; Du et al., 2023; Liang et al., 2023; Hao et al., 2023) tend to oversimplify the complexities. They struggle to achieve effective, coherent, and accurate problem-solving processes, particularly when there is a need for meaningful collaborative interaction (Zhang et al., 2023; Dong et al., 2023; Zhou et al., 2023; Qian et al., 2023).
2308.00352#2
MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework
Remarkable progress has been made on automated problem solving through societies of agents based on large language models (LLMs). Existing LLM-based multi-agent systems can already solve simple dialogue tasks. Solutions to more complex tasks, however, are complicated through logic inconsistencies due to cascading hallucinations caused by naively chaining LLMs. Here we introduce MetaGPT, an innovative meta-programming framework incorporating efficient human workflows into LLM-based multi-agent collaborations. MetaGPT encodes Standardized Operating Procedures (SOPs) into prompt sequences for more streamlined workflows, thus allowing agents with human-like domain expertise to verify intermediate results and reduce errors. MetaGPT utilizes an assembly line paradigm to assign diverse roles to various agents, efficiently breaking down complex tasks into subtasks involving many agents working together. On collaborative software engineering benchmarks, MetaGPT generates more coherent solutions than previous chat-based multi-agent systems. Our project can be found at https://github.com/geekan/MetaGPT
http://arxiv.org/pdf/2308.00352
Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, Jürgen Schmidhuber
cs.AI, cs.MA
null
null
cs.AI
20230801
20231106
[ { "id": "2308.12950" }, { "id": "2305.17066" }, { "id": "1511.09249" }, { "id": "2308.11432" }, { "id": "2306.08568" }, { "id": "2310.02304" }, { "id": "2303.08896" }, { "id": "2204.05999" }, { "id": "2309.16797" }, { "id": "2002.08155" }, { "id": "2305.16960" } ]
2308.00436
2
However, the performance of even the largest LLMs is still unsatisfactory on more difficult reasoning problems. For example, GPT-4 with CoT prompting only correctly answers 42.5% of problems in the MATH dataset (Bubeck et al., 2023; Hendrycks et al., 2021), which is far below human level. Such problems require careful and extensive multi-step reasoning to solve, and LLMs are consequently prone to make mistakes: even though their error rate on individual steps may be low, the probability of generating at least one erroneous step can still be quite high, undermining the final answer. Recent works have tried to overcome this limitation by checking for errors in these step-by-step solutions (Cobbe et al., 2021; Li et al., 2022; Ling et al., 2023). Such checks can then be used to provide confidence scores in answers and select between different possible alternatives. This checking has typically been performed either by using an external verification model (Cobbe et al., 2021; Lyu et al., 2023; Peng et al., 2023), or through few-shot in-context learning (Brown et al., 2020) of an LLM (Weng et al., 2022; Ling et al., 2023).
2308.00436#2
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
The recent progress in large language models (LLMs), especially the invention of chain-of-thought prompting, has made it possible to automatically answer questions by stepwise reasoning. However, when faced with more complicated problems that require non-linear thinking, even the strongest LLMs make mistakes. To address this, we explore whether LLMs are able to recognize errors in their own step-by-step reasoning, without resorting to external resources. To this end, we propose SelfCheck, a general-purpose zero-shot verification schema for recognizing such errors. We then use the results of these checks to improve question-answering performance by conducting weighted voting on multiple solutions to the question. We test SelfCheck on three datasets (GSM8K, MathQA, and MATH) and find that it successfully recognizes errors and, in turn, increases final answer accuracies.
http://arxiv.org/pdf/2308.00436
Ning Miao, Yee Whye Teh, Tom Rainforth
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230801
20231005
[ { "id": "2206.02336" }, { "id": "2302.13971" }, { "id": "2110.14168" }, { "id": "2305.20050" }, { "id": "2204.02311" }, { "id": "2301.13379" }, { "id": "2306.03872" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.08774" }, { "id": "2212.09561" } ]
2308.00675
2
we show that tool documentation is significantly more valuable than demonstrations, with zero-shot documentation significantly outperforming few-shot without documentation. Third, we highlight the benefits of tool documentations by tackling image generation and video tracking using just-released unseen state-of-the-art models as tools. Finally, we highlight the possibility of using tool documentation to automatically enable new applications: by using nothing more than the documentation of GroundingDino, Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the just-released Grounded-SAM [23] and Track Anything [70] models.
2308.00675#2
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Today, large language models (LLMs) are taught to use new tools by providing a few demonstrations of the tool's usage. Unfortunately, demonstrations are hard to acquire, and can result in undesirable biased usage if the wrong demonstration is chosen. Even in the rare scenario that demonstrations are readily available, there is no principled selection protocol to determine how many and which ones to provide. As tasks grow more complex, the selection search grows combinatorially and invariably becomes intractable. Our work provides an alternative to demonstrations: tool documentation. We advocate the use of tool documentation, descriptions for the individual tool usage, over demonstrations. We substantiate our claim through three main empirical findings on 6 tasks across both vision and language modalities. First, on existing benchmarks, zero-shot prompts with only tool documentation are sufficient for eliciting proper tool usage, achieving performance on par with few-shot prompts. Second, on a newly collected realistic tool-use dataset with hundreds of available tool APIs, we show that tool documentation is significantly more valuable than demonstrations, with zero-shot documentation significantly outperforming few-shot without documentation. Third, we highlight the benefits of tool documentations by tackling image generation and video tracking using just-released unseen state-of-the-art models as tools. Finally, we highlight the possibility of using tool documentation to automatically enable new applications: by using nothing more than the documentation of GroundingDino, Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the just-released Grounded-SAM and Track Anything models.
http://arxiv.org/pdf/2308.00675
Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister
cs.CL, cs.AI, cs.CV, cs.LG
null
null
cs.CL
20230801
20230801
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2204.10878" }, { "id": "1910.08210" }, { "id": "2107.07653" }, { "id": "2201.11903" }, { "id": "2305.17126" }, { "id": "1704.07535" }, { "id": "2205.01068" }, { "id": "2203.05115" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2305.04091" }, { "id": "2303.05499" }, { "id": "2107.07566" }, { "id": "2110.14168" }, { "id": "2210.03350" }, { "id": "2303.11381" }, { "id": "2101.06804" }, { "id": "2304.08354" }, { "id": "2212.14024" }, { "id": "2305.18752" }, { "id": "2211.10435" }, { "id": "2303.04671" }, { "id": "2210.12810" }, { "id": "1808.09588" }, { "id": "2304.11062" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2210.02406" }, { "id": "2212.10560" }, { "id": "2303.04129" }, { "id": "1704.01696" }, { "id": "2302.00923" }, { "id": "2211.12588" }, { "id": "1908.03557" }, { "id": "2210.05359" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2201.08239" }, { "id": "2303.16199" }, { "id": "2304.09842" }, { "id": "2204.01691" }, { "id": "2305.01625" }, { "id": "2303.12712" }, { "id": "2207.05608" }, { "id": "2303.03846" }, { "id": "2211.11559" }, { "id": "2207.01206" }, { "id": "2303.09014" }, { "id": "2205.12255" }, { "id": "2208.03188" } ]
2308.01423
2
Despite marked progress in the application of LLMs across diverse fields [15,16], the full potential of this advanced technology within materials science remains largely untapped. This limitation primarily stems from two considerable challenges. Firstly, the inherent complexity of materials poses a significant hurdle, as these materials often lack sufficient text-compatible input representations, thereby failing to fully encapsulate their complete properties [17-20]. This difficulty in encoding materials for LLMs restricts their understanding and processing capabilities. Secondly, there is a notable scarcity of material-specific training data in the field. In comparison to other disciplines, materials science lags behind due to fewer dedicated databases and their associated data, exacerbating the challenge of representing this scant data in a text format suitable for LLMs. Despite these obstacles, there are ongoing attempts to leverage the capabilities of LLMs in materials science [21-23]. However, so far, these efforts have primarily focused on extracting data from scientific literature and generating responses based on this extracted data, with the actual material itself remaining a largely untouched resource. As such, the exploration and realization of the full potential of LLMs within the sphere of materials science still beckons.
2308.01423#2
ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks
ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to predict and generate metal-organic frameworks (MOFs). By leveraging a large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key details from textual inputs and delivers appropriate responses, thus eliminating the necessity for rigid structured queries. The system is comprised of three core components (i.e. an agent, a toolkit, and an evaluator) and it forms a robust pipeline that manages a variety of tasks, including data retrieval, property prediction, and structure generation. The study further explores the merits and constraints of using a large language model (LLM) AI system in materials science and showcases its transformative potential for future advancements.
http://arxiv.org/pdf/2308.01423
Yeonghun Kang, Jihan Kim
cs.CL, cs.AI, cs.LG, physics.chem-ph
null
null
cs.CL
20230801
20230825
[ { "id": "2302.13971" }, { "id": "2306.11296" }, { "id": "2303.17580" }, { "id": "2305.18365" }, { "id": "2305.10601" }, { "id": "1810.04805" }, { "id": "2211.09085" }, { "id": "2304.05376" }, { "id": "2212.05238" }, { "id": "2108.07258" }, { "id": "2110.06197" }, { "id": "2306.06283" }, { "id": "2008.02275" }, { "id": "2303.12712" }, { "id": "2210.03629" }, { "id": "2205.00445" }, { "id": "2009.03300" } ]
2308.00245
3
1 INTRODUCTION Static analysis is a popular technique in software engineering, particularly in the area of bug discovery, that can improve code quality, reliability, and security. However, the effectiveness of these techniques is influenced by the fundamental trade-off between precision and scalability, especially when dealing with extensive and complex programs [9, 24]. On the one hand, static analysis solutions with lower precision tend to generate numerous false positives. On the other hand, expensive static analysis or symbolic execution solutions with higher precision often struggle to complete the analysis. Consequently, achieving comprehensive and accurate static program analysis for sizable programs like the Linux kernel poses a significant challenge. UBITect [40], a powerful static analysis solution, illustrates these inherent limitations thoroughly. Targeting Use-Before-Initialization (UBI) bugs in the Linux kernel, it packages a pipeline of (1) a scalable bottom-up summary-based static analysis with limited precision,
2308.00245#3
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Static analysis is a widely used technique in software engineering for identifying and mitigating bugs. However, a significant hurdle lies in achieving a delicate balance between precision and scalability. Large Language Models (LLMs) offer a promising alternative, as recent advances demonstrate remarkable capabilities in comprehending, generating, and even debugging code. Yet, the logic of bugs can be complex and require sophisticated reasoning and a large analysis scope spanning multiple functions. Therefore, at this point, LLMs are better used in an assistive role to complement static analysis. In this paper, we take a deep dive into the open space of LLM-assisted static analysis, using use-before-initialization (UBI) bugs as a case study. To this end, we develop LLift, a fully automated framework that interfaces with both a static analysis tool and an LLM. By carefully designing the framework and the prompts, we are able to overcome a number of challenges, including bug-specific modeling, the large problem scope, the non-deterministic nature of LLMs, etc. Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs produced by static analysis, LLift demonstrates a potent capability, showcasing a reasonable precision (50%) and appearing to have no missing bugs. It even identified 13 previously unknown UBI bugs in the Linux kernel. This research paves the way for new opportunities and methodologies in using LLMs for bug discovery in extensive, real-world datasets.
http://arxiv.org/pdf/2308.00245
Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian
cs.SE, cs.AI
null
null
cs.SE
20230801
20231115
[ { "id": "2305.10601" }, { "id": "2107.03374" }, { "id": "2210.11610" }, { "id": "2305.16291" }, { "id": "2305.16151" }, { "id": "2303.18223" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "2203.02155" }, { "id": "2304.11938" }, { "id": "2304.03262" }, { "id": "2304.10513" }, { "id": "2201.11903" }, { "id": "2305.12138" }, { "id": "2305.12865" }, { "id": "2303.08774" }, { "id": "2306.01987" }, { "id": "2304.06815" } ]
2308.00352
3
Through extensive collaborative practice, humans have developed widely accepted Standardized Operating Procedures (SOPs) across various domains (Belbin, 2012; Manifesto, 2001; DeMarco & Lister, 2013). These SOPs play a critical role in supporting task decomposition and effective coordination. Furthermore, SOPs outline the responsibilities of each team member, while establishing standards for intermediate outputs. Well-defined SOPs improve the consistent and accurate execution of tasks that align with defined roles and quality standards (Belbin, 2012; Manifesto, 2001; DeMarco & Lister, 2013; Wooldridge & Jennings, 1998). For instance, in a software company, Product Managers analyze competition and user needs to create Product Requirements Documents (PRDs) using a standardized structure, to guide the developmental process. Inspired by such ideas, we design a promising GPT-based Meta-Programming framework called MetaGPT that significantly benefits from SOPs. Unlike other works (Li et al., 2023; Qian et al., 2023), MetaGPT requires agents to generate structured outputs, such as high-quality requirements ∗These authors contributed equally to this work. †Chenglin Wu ([email protected]) is the corresponding author, affiliated with DeepWisdom.
2308.00352#3
MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework
Remarkable progress has been made on automated problem solving through societies of agents based on large language models (LLMs). Existing LLM-based multi-agent systems can already solve simple dialogue tasks. Solutions to more complex tasks, however, are complicated through logic inconsistencies due to cascading hallucinations caused by naively chaining LLMs. Here we introduce MetaGPT, an innovative meta-programming framework incorporating efficient human workflows into LLM-based multi-agent collaborations. MetaGPT encodes Standardized Operating Procedures (SOPs) into prompt sequences for more streamlined workflows, thus allowing agents with human-like domain expertise to verify intermediate results and reduce errors. MetaGPT utilizes an assembly line paradigm to assign diverse roles to various agents, efficiently breaking down complex tasks into subtasks involving many agents working together. On collaborative software engineering benchmarks, MetaGPT generates more coherent solutions than previous chat-based multi-agent systems. Our project can be found at https://github.com/geekan/MetaGPT
http://arxiv.org/pdf/2308.00352
Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, Jürgen Schmidhuber
cs.AI, cs.MA
null
null
cs.AI
20230801
20231106
[ { "id": "2308.12950" }, { "id": "2305.17066" }, { "id": "1511.09249" }, { "id": "2308.11432" }, { "id": "2306.08568" }, { "id": "2310.02304" }, { "id": "2303.08896" }, { "id": "2204.05999" }, { "id": "2309.16797" }, { "id": "2002.08155" }, { "id": "2305.16960" } ]
2308.00436
3
Unfortunately, existing methods generally require extra training data and/or domain-specific exemplars, which often makes them inconvenient to use in practice and restricts them to specific domains or data formats. The aim of our work is thus to instead provide a general-purpose, zero-shot, approach to checking that relies only on the original LLM, without the need for additional external resources. To this end, we introduce SelfCheck, a zero-shot step-by-step checker for self-identifying errors in LLM reasoning chains. SelfCheck uses the LLM to individually check the conditional correctness of each step in the chain based on the preceding steps, in a manner similar to a human going back to check their working. The results of these individual checks are then integrated to form an overall correctness estimation for the whole reasoning chain. Key to SelfCheck’s success is a novel mechanism for performing the checking of individual steps. As we will show, the naive approach of directly asking the LLM to check a step is typically ineffective. Instead, we introduce a multi-stage approach that breaks the problem down into a series of simpler 1Department of Statistics, University of Oxford. *Email: <[email protected]>.
2308.00436#3
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
The recent progress in large language models (LLMs), especially the invention of chain-of-thought prompting, has made it possible to automatically answer questions by stepwise reasoning. However, when faced with more complicated problems that require non-linear thinking, even the strongest LLMs make mistakes. To address this, we explore whether LLMs are able to recognize errors in their own step-by-step reasoning, without resorting to external resources. To this end, we propose SelfCheck, a general-purpose zero-shot verification schema for recognizing such errors. We then use the results of these checks to improve question-answering performance by conducting weighted voting on multiple solutions to the question. We test SelfCheck on three datasets (GSM8K, MathQA, and MATH) and find that it successfully recognizes errors and, in turn, increases final answer accuracies.
http://arxiv.org/pdf/2308.00436
Ning Miao, Yee Whye Teh, Tom Rainforth
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230801
20231005
[ { "id": "2206.02336" }, { "id": "2302.13971" }, { "id": "2110.14168" }, { "id": "2305.20050" }, { "id": "2204.02311" }, { "id": "2301.13379" }, { "id": "2306.03872" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.08774" }, { "id": "2212.09561" } ]
2308.00675
3
# Introduction Today, large language models (LLMs) summon the imagery of a craftsman: when asked to solve a complex task, they decompose the task into simpler sub-tasks and assemble the best possible tools to tackle each sub-task [51, 72]. For example, consider the complex task of question answering given the image in Figure 1. To answer “whether the two magnets will attract or repel each other”, the LLM needs the following: it needs to identify the positions of the magnets in the image, extract general knowledge explaining that “opposite (same) poles attract (repel)”. Just like a competent craftsman who knows what their tools are capable of, an LLM with such knowledge of its tools will be able to invoke one tool (e.g. its Text Detector) to identify the north and south poles and a second tool (e.g. Knowledge Retriever) to extract pertinent background knowledge about magnetic forces. But how does an LLM know which tool is capable of what? †Work done as student researchers at Google Cloud AI Research. *The authors contributed equally to this work. Preprint. Under review.
2308.00675#3
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Today, large language models (LLMs) are taught to use new tools by providing a few demonstrations of the tool's usage. Unfortunately, demonstrations are hard to acquire, and can result in undesirable biased usage if the wrong demonstration is chosen. Even in the rare scenario that demonstrations are readily available, there is no principled selection protocol to determine how many and which ones to provide. As tasks grow more complex, the selection search grows combinatorially and invariably becomes intractable. Our work provides an alternative to demonstrations: tool documentation. We advocate the use of tool documentation, descriptions for the individual tool usage, over demonstrations. We substantiate our claim through three main empirical findings on 6 tasks across both vision and language modalities. First, on existing benchmarks, zero-shot prompts with only tool documentation are sufficient for eliciting proper tool usage, achieving performance on par with few-shot prompts. Second, on a newly collected realistic tool-use dataset with hundreds of available tool APIs, we show that tool documentation is significantly more valuable than demonstrations, with zero-shot documentation significantly outperforming few-shot without documentation. Third, we highlight the benefits of tool documentations by tackling image generation and video tracking using just-released unseen state-of-the-art models as tools. Finally, we highlight the possibility of using tool documentation to automatically enable new applications: by using nothing more than the documentation of GroundingDino, Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the just-released Grounded-SAM and Track Anything models.
http://arxiv.org/pdf/2308.00675
Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister
cs.CL, cs.AI, cs.CV, cs.LG
null
null
cs.CL
20230801
20230801
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2204.10878" }, { "id": "1910.08210" }, { "id": "2107.07653" }, { "id": "2201.11903" }, { "id": "2305.17126" }, { "id": "1704.07535" }, { "id": "2205.01068" }, { "id": "2203.05115" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2305.04091" }, { "id": "2303.05499" }, { "id": "2107.07566" }, { "id": "2110.14168" }, { "id": "2210.03350" }, { "id": "2303.11381" }, { "id": "2101.06804" }, { "id": "2304.08354" }, { "id": "2212.14024" }, { "id": "2305.18752" }, { "id": "2211.10435" }, { "id": "2303.04671" }, { "id": "2210.12810" }, { "id": "1808.09588" }, { "id": "2304.11062" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2210.02406" }, { "id": "2212.10560" }, { "id": "2303.04129" }, { "id": "1704.01696" }, { "id": "2302.00923" }, { "id": "2211.12588" }, { "id": "1908.03557" }, { "id": "2210.05359" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2201.08239" }, { "id": "2303.16199" }, { "id": "2304.09842" }, { "id": "2204.01691" }, { "id": "2305.01625" }, { "id": "2303.12712" }, { "id": "2207.05608" }, { "id": "2303.03846" }, { "id": "2211.11559" }, { "id": "2207.01206" }, { "id": "2303.09014" }, { "id": "2205.12255" }, { "id": "2208.03188" } ]
2308.01423
3
In this work, we highlight the development of an innovative methodology that utilizes autonomous AI to generate new materials and to predict properties of the materials with the focus specifically on metal-organic frameworks (MOFs)24-26. MOFs are used in many chemical applications27-30 due to their large porosity31-33, high surface area32, and exceptional tunability34. To this end, we have developed the autonomous AI system for MOF (called ChatMOF), which holds the potential to predict MOF properties from text-based inquiries and to generate MOFs with specified properties (i.e. inverse design). This pioneering approach can potentially bridge the gap between novice users and the computational and machine learning tools, which can facilitate the progress in developing new materials for various applications. [Figure examples: Question: What is the surface area of MIBQAR16? Answer: The surface area of MIBQAR16 is 2195.04 m²/cm³. Question: Can you generate a structure with an accessible volume fraction of 0.5 and visualize it?] Figure 1. A conceptual image that explains ChatMOF. When a user poses a textual question about the properties of a MOF, an appropriate answer is provided by ChatMOF. If a user desires to generate a new MOF, ChatMOF is capable of creating a new MOF that satisfies the condition. # Results and Discussion
2308.01423#3
ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks
ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to predict and generate metal-organic frameworks (MOFs). By leveraging a large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key details from textual inputs and delivers appropriate responses, thus eliminating the necessity for rigid structured queries. The system is comprised of three core components (i.e. an agent, a toolkit, and an evaluator) and it forms a robust pipeline that manages a variety of tasks, including data retrieval, property prediction, and structure generation. The study further explores the merits and constraints of using a large language model (LLM) AI system in materials science and showcases its transformative potential for future advancements.
http://arxiv.org/pdf/2308.01423
Yeonghun Kang, Jihan Kim
cs.CL, cs.AI, cs.LG, physics.chem-ph
null
null
cs.CL
20230801
20230825
[ { "id": "2302.13971" }, { "id": "2306.11296" }, { "id": "2303.17580" }, { "id": "2305.18365" }, { "id": "2305.10601" }, { "id": "1810.04805" }, { "id": "2211.09085" }, { "id": "2304.05376" }, { "id": "2212.05238" }, { "id": "2108.07258" }, { "id": "2110.06197" }, { "id": "2306.06283" }, { "id": "2008.02275" }, { "id": "2303.12712" }, { "id": "2210.03629" }, { "id": "2205.00445" }, { "id": "2009.03300" } ]
2308.00245
4
and (2) a precise symbolic execution with limited scalability. The solution illuminates the need for alternative strategies to navigate the complex trade-offs between precision and scalability effectively. Despite this strategic combination of analysis techniques, nearly 40% of the potential bugs reported from the static analysis phase experience a timeout or memory exhaustion during the static symbolic execution phase, preventing any conclusive results on such cases. This limitation hinders the overall effectiveness of the tool, leading to the potential of two distinct outcomes: missed bugs if these potential bug reports are ignored (what UBITect performs), or false positives if they are sent to developers for inspection.
2308.00245#4
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Static analysis is a widely used technique in software engineering for identifying and mitigating bugs. However, a significant hurdle lies in achieving a delicate balance between precision and scalability. Large Language Models (LLMs) offer a promising alternative, as recent advances demonstrate remarkable capabilities in comprehending, generating, and even debugging code. Yet, the logic of bugs can be complex and require sophisticated reasoning and a large analysis scope spanning multiple functions. Therefore, at this point, LLMs are better used in an assistive role to complement static analysis. In this paper, we take a deep dive into the open space of LLM-assisted static analysis, using use-before-initialization (UBI) bugs as a case study. To this end, we develop LLift, a fully automated framework that interfaces with both a static analysis tool and an LLM. By carefully designing the framework and the prompts, we are able to overcome a number of challenges, including bug-specific modeling, the large problem scope, the non-deterministic nature of LLMs, etc. Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs produced by static analysis, LLift demonstrates a potent capability, showcasing a reasonable precision (50%) and appearing to have no missing bugs. It even identified 13 previously unknown UBI bugs in the Linux kernel. This research paves the way for new opportunities and methodologies in using LLMs for bug discovery in extensive, real-world datasets.
http://arxiv.org/pdf/2308.00245
Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian
cs.SE, cs.AI
null
null
cs.SE
20230801
20231115
[ { "id": "2305.10601" }, { "id": "2107.03374" }, { "id": "2210.11610" }, { "id": "2305.16291" }, { "id": "2305.16151" }, { "id": "2303.18223" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "2203.02155" }, { "id": "2304.11938" }, { "id": "2304.03262" }, { "id": "2304.10513" }, { "id": "2201.11903" }, { "id": "2305.12138" }, { "id": "2305.12865" }, { "id": "2303.08774" }, { "id": "2306.01987" }, { "id": "2304.06815" } ]
2308.00352
4
[Figure panels: 1/5 Define (one-line requirement, e.g. "Write a classic and simple Flappy Bird game."); 2/5 Design (requirement analysis, architectural design, system design); 3/5 Plan & Code; 4/5 Testing (QA Engineer); 5/5 Boss makes acceptance check and payment.] Figure 1: The software development SOPs between MetaGPT and real-world human teams. In software engineering, SOPs promote collaboration among various roles. MetaGPT showcases its ability to decompose complex tasks into specific actionable procedures assigned to various roles (e.g., Product Manager, Architect, Engineer, etc.).
2308.00352#4
MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework
Remarkable progress has been made on automated problem solving through societies of agents based on large language models (LLMs). Existing LLM-based multi-agent systems can already solve simple dialogue tasks. Solutions to more complex tasks, however, are complicated through logic inconsistencies due to cascading hallucinations caused by naively chaining LLMs. Here we introduce MetaGPT, an innovative meta-programming framework incorporating efficient human workflows into LLM-based multi-agent collaborations. MetaGPT encodes Standardized Operating Procedures (SOPs) into prompt sequences for more streamlined workflows, thus allowing agents with human-like domain expertise to verify intermediate results and reduce errors. MetaGPT utilizes an assembly line paradigm to assign diverse roles to various agents, efficiently breaking down complex tasks into subtasks involving many agents working together. On collaborative software engineering benchmarks, MetaGPT generates more coherent solutions than previous chat-based multi-agent systems. Our project can be found at https://github.com/geekan/MetaGPT
http://arxiv.org/pdf/2308.00352
Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, Jürgen Schmidhuber
cs.AI, cs.MA
null
null
cs.AI
20230801
20231106
[ { "id": "2308.12950" }, { "id": "2305.17066" }, { "id": "1511.09249" }, { "id": "2308.11432" }, { "id": "2306.08568" }, { "id": "2310.02304" }, { "id": "2303.08896" }, { "id": "2204.05999" }, { "id": "2309.16797" }, { "id": "2002.08155" }, { "id": "2305.16960" } ]
2308.00436
4
1Department of Statistics, University of Oxford. *Email: <[email protected]>. tasks, leverages the generative strengths of the LLM, and decorrelates errors between the original generation and checking. Specifically, using separate calls to the LLM we first extract the target and relevant context for the step, then regenerate an independent alternative step from these, and finally compare the two. The original step is then deemed to pass the check if it matches the regeneration. Besides providing an estimation of correctness for each solution, SelfCheck can also boost final answer accuracies for the original questions by weighted voting. Namely, given multiple solutions to a question, it uses confidence scores as weights to vote among the answers, which provides a soft way to focus on more accurate solutions.
2308.00436#4
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
The recent progress in large language models (LLMs), especially the invention of chain-of-thought prompting, has made it possible to automatically answer questions by stepwise reasoning. However, when faced with more complicated problems that require non-linear thinking, even the strongest LLMs make mistakes. To address this, we explore whether LLMs are able to recognize errors in their own step-by-step reasoning, without resorting to external resources. To this end, we propose SelfCheck, a general-purpose zero-shot verification schema for recognizing such errors. We then use the results of these checks to improve question-answering performance by conducting weighted voting on multiple solutions to the question. We test SelfCheck on three datasets (GSM8K, MathQA, and MATH) and find that it successfully recognizes errors and, in turn, increases final answer accuracies.
http://arxiv.org/pdf/2308.00436
Ning Miao, Yee Whye Teh, Tom Rainforth
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230801
20231005
[ { "id": "2206.02336" }, { "id": "2302.13971" }, { "id": "2110.14168" }, { "id": "2305.20050" }, { "id": "2204.02311" }, { "id": "2301.13379" }, { "id": "2306.03872" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.08774" }, { "id": "2212.09561" } ]
2308.00675
4
†Work done as student researchers at Google Cloud AI Research. *The authors contributed equally to this work. Preprint. Under review. [Figure: an input question with an image of two magnets, a tool set (Text Detector, Knowledge Retriever, Search Engine, Image Captioner, Calculator, Solution Generator), and the LLM's plan: detect the "S" and "N" pole labels, retrieve the knowledge that opposite poles attract and same poles repel, then generate the solution that the facing north and south poles attract.] Figure 1: Example workflow of tool-using with LLMs to solve a multi-modal question answering task. Given the input question with an image, the LLM selects appropriate tools from the tool set and generates an execution plan to answer the question correctly. Here, the LLM outlines a plan to first use Text Detector to understand the positioning of the magnets in the image, then leverage Knowledge Retriever to obtain relevant background knowledge about magnets, then finally generate the solution based on the previous steps.
2308.00675#4
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Today, large language models (LLMs) are taught to use new tools by providing a few demonstrations of the tool's usage. Unfortunately, demonstrations are hard to acquire, and can result in undesirable biased usage if the wrong demonstration is chosen. Even in the rare scenario that demonstrations are readily available, there is no principled selection protocol to determine how many and which ones to provide. As tasks grow more complex, the selection search grows combinatorially and invariably becomes intractable. Our work provides an alternative to demonstrations: tool documentation. We advocate the use of tool documentation, descriptions for the individual tool usage, over demonstrations. We substantiate our claim through three main empirical findings on 6 tasks across both vision and language modalities. First, on existing benchmarks, zero-shot prompts with only tool documentation are sufficient for eliciting proper tool usage, achieving performance on par with few-shot prompts. Second, on a newly collected realistic tool-use dataset with hundreds of available tool APIs, we show that tool documentation is significantly more valuable than demonstrations, with zero-shot documentation significantly outperforming few-shot without documentation. Third, we highlight the benefits of tool documentations by tackling image generation and video tracking using just-released unseen state-of-the-art models as tools. Finally, we highlight the possibility of using tool documentation to automatically enable new applications: by using nothing more than the documentation of GroundingDino, Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the just-released Grounded-SAM and Track Anything models.
http://arxiv.org/pdf/2308.00675
Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister
cs.CL, cs.AI, cs.CV, cs.LG
null
null
cs.CL
20230801
20230801
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2204.10878" }, { "id": "1910.08210" }, { "id": "2107.07653" }, { "id": "2201.11903" }, { "id": "2305.17126" }, { "id": "1704.07535" }, { "id": "2205.01068" }, { "id": "2203.05115" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2305.04091" }, { "id": "2303.05499" }, { "id": "2107.07566" }, { "id": "2110.14168" }, { "id": "2210.03350" }, { "id": "2303.11381" }, { "id": "2101.06804" }, { "id": "2304.08354" }, { "id": "2212.14024" }, { "id": "2305.18752" }, { "id": "2211.10435" }, { "id": "2303.04671" }, { "id": "2210.12810" }, { "id": "1808.09588" }, { "id": "2304.11062" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2210.02406" }, { "id": "2212.10560" }, { "id": "2303.04129" }, { "id": "1704.01696" }, { "id": "2302.00923" }, { "id": "2211.12588" }, { "id": "1908.03557" }, { "id": "2210.05359" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2201.08239" }, { "id": "2303.16199" }, { "id": "2304.09842" }, { "id": "2204.01691" }, { "id": "2305.01625" }, { "id": "2303.12712" }, { "id": "2207.05608" }, { "id": "2303.03846" }, { "id": "2211.11559" }, { "id": "2207.01206" }, { "id": "2303.09014" }, { "id": "2205.12255" }, { "id": "2208.03188" } ]
2308.01423
4
# Results and Discussion # Design for ChatMOF The effectiveness of autonomous LLM agents is predicated on their capability to accurately extract essential details from textual inputs and offer relevant responses, irrespective of the presence of a rigidly structured query.21 This concept is vividly illustrated in ChatMOF, as demonstrated in Figure 1. A user may pose a query in textual form regarding the properties of a material, to which ChatMOF responds by supplying a detailed description related to the material in question. Moreover, the operational scope of this system extends beyond the simple retrieval of information. When a user expresses the need to generate a MOF with specific properties, ChatMOF is capable of generating the requested material structure accordingly.
2308.01423#4
ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks
ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to predict and generate metal-organic frameworks (MOFs). By leveraging a large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key details from textual inputs and delivers appropriate responses, thus eliminating the necessity for rigid structured queries. The system is comprised of three core components (i.e. an agent, a toolkit, and an evaluator) and it forms a robust pipeline that manages a variety of tasks, including data retrieval, property prediction, and structure generation. The study further explores the merits and constraints of using a large language model (LLM) AI system in materials science and showcases its transformative potential for future advancements.
http://arxiv.org/pdf/2308.01423
Yeonghun Kang, Jihan Kim
cs.CL, cs.AI, cs.LG, physics.chem-ph
null
null
cs.CL
20230801
20230825
[ { "id": "2302.13971" }, { "id": "2306.11296" }, { "id": "2303.17580" }, { "id": "2305.18365" }, { "id": "2305.10601" }, { "id": "1810.04805" }, { "id": "2211.09085" }, { "id": "2304.05376" }, { "id": "2212.05238" }, { "id": "2108.07258" }, { "id": "2110.06197" }, { "id": "2306.06283" }, { "id": "2008.02275" }, { "id": "2303.12712" }, { "id": "2210.03629" }, { "id": "2205.00445" }, { "id": "2009.03300" } ]
2308.00245
5
In this paper, we investigate the possibility of leveraging Large Language Models (LLMs) as an alternative to handle such “difficult cases”. This is because recent LLMs have exhibited strong potential in understanding, generating, and even debugging code [4, 8, 13]. Nevertheless, navigating the intricacies of utilizing LLMs for bug discovery proves to be a complex feat. The technical report on GPT-4 underscores this challenge, admitting that when it comes to discovering new vulnerabilities, it may not be the best solution standalone [21]: “... is less effective than existing tools for complex and high-level activities like novel vulnerability identification”. In the same vein, prior research demonstrates the competence of LLMs mostly in simpler tasks or programs [1, 25, 26]. This is because LLMs are far from perfect. For instance, they suffer from hallucination [11] where instead of identifying the bugs in faulty code, LLMs may create non-existent facts in an attempt to rationalize the original intention behind the problematic code [17, 31]. Another issue is the stochasticity of LLMs which can result in inconsistent or outright incorrect results, thus throwing another wrench into the gears of bug discovery [41]. Finally, LLMs have limited context windows, meaning they can only scrutinize a relatively small codebase.
2308.00245#5
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Static analysis is a widely used technique in software engineering for identifying and mitigating bugs. However, a significant hurdle lies in achieving a delicate balance between precision and scalability. Large Language Models (LLMs) offer a promising alternative, as recent advances demonstrate remarkable capabilities in comprehending, generating, and even debugging code. Yet, the logic of bugs can be complex and require sophisticated reasoning and a large analysis scope spanning multiple functions. Therefore, at this point, LLMs are better used in an assistive role to complement static analysis. In this paper, we take a deep dive into the open space of LLM-assisted static analysis, using use-before-initialization (UBI) bugs as a case study. To this end, we develop LLift, a fully automated framework that interfaces with both a static analysis tool and an LLM. By carefully designing the framework and the prompts, we are able to overcome a number of challenges, including bug-specific modeling, the large problem scope, the non-deterministic nature of LLMs, etc. Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs produced by static analysis, LLift demonstrates a potent capability, showcasing a reasonable precision (50%) and appearing to have no missing bugs. It even identified 13 previously unknown UBI bugs in the Linux kernel. This research paves the way for new opportunities and methodologies in using LLMs for bug discovery in extensive, real-world datasets.
http://arxiv.org/pdf/2308.00245
Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian
cs.SE, cs.AI
null
null
cs.SE
20230801
20231115
[ { "id": "2305.10601" }, { "id": "2107.03374" }, { "id": "2210.11610" }, { "id": "2305.16291" }, { "id": "2305.16151" }, { "id": "2303.18223" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "2203.02155" }, { "id": "2304.11938" }, { "id": "2304.03262" }, { "id": "2304.10513" }, { "id": "2201.11903" }, { "id": "2305.12138" }, { "id": "2305.12865" }, { "id": "2303.08774" }, { "id": "2306.01987" }, { "id": "2304.06815" } ]
2308.00352
5
documents, design artifacts, flowcharts, and interface specifications. The use of intermediate structured outputs significantly increases the success rate of target code generation. More graphically, in a company simulated by MetaGPT, all employees follow a strict and streamlined workflow, and all their handovers must comply with certain established standards. This reduces the risk of hallucinations caused by idle chatter between LLMs, particularly in role-playing frameworks, like: “Hi, hello and how are you?” – Alice (Product Manager); “Great! Have you had lunch?” – Bob (Architect). Benefiting from SOPs, MetaGPT offers a promising approach to meta-programming. In this context, we adopt meta-programming1 as "programming to program", in contrast to the broader fields of meta learning and "learning to learn" (Schmidhuber, 1987; 1993a; Hochreiter et al., 2001; Schmidhuber, 2006; Finn et al., 2017).
2308.00352#5
MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework
Remarkable progress has been made on automated problem solving through societies of agents based on large language models (LLMs). Existing LLM-based multi-agent systems can already solve simple dialogue tasks. Solutions to more complex tasks, however, are complicated through logic inconsistencies due to cascading hallucinations caused by naively chaining LLMs. Here we introduce MetaGPT, an innovative meta-programming framework incorporating efficient human workflows into LLM-based multi-agent collaborations. MetaGPT encodes Standardized Operating Procedures (SOPs) into prompt sequences for more streamlined workflows, thus allowing agents with human-like domain expertise to verify intermediate results and reduce errors. MetaGPT utilizes an assembly line paradigm to assign diverse roles to various agents, efficiently breaking down complex tasks into subtasks involving many agents working together. On collaborative software engineering benchmarks, MetaGPT generates more coherent solutions than previous chat-based multi-agent systems. Our project can be found at https://github.com/geekan/MetaGPT
http://arxiv.org/pdf/2308.00352
Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, Jürgen Schmidhuber
cs.AI, cs.MA
null
null
cs.AI
20230801
20231106
[ { "id": "2308.12950" }, { "id": "2305.17066" }, { "id": "1511.09249" }, { "id": "2308.11432" }, { "id": "2306.08568" }, { "id": "2310.02304" }, { "id": "2303.08896" }, { "id": "2204.05999" }, { "id": "2309.16797" }, { "id": "2002.08155" }, { "id": "2305.16960" } ]
2308.00436
5
We evaluate SelfCheck on three math tasks, namely GSM8K (Cobbe et al., 2021), MathQA (Amini et al., 2019), and MATH (Hendrycks et al., 2021). For all datasets, we find that using SelfCheck achieves a significant increase in final answer accuracies compared with simple majority voting and other baselines. We also see that SelfCheck provides an accurate confidence estimation for LLM’s solutions, which decreases the proportion of incorrect solutions by 9%, 22.8%, and 16.2% on the three datasets respectively when filtering out solutions with low confidence scores. We further perform a number of ablations to justify some of our key design choices in the SelfCheck approach. To summarize, we introduce SelfCheck as a novel and effective zero-shot schema for self-checking step-by-step reasoning in LLMs. Unlike previous methods, SelfCheck does not need any finetuning or example crafting, so can be directly applied to reasoning tasks in different domains. Our experiments confirm that it can, in turn, be used to improve final predictive performance of LLMs. Our code is available at https://github.com/NingMiao/SelfCheck. # 2 RELATED WORK
2308.00436#5
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
The recent progress in large language models (LLMs), especially the invention of chain-of-thought prompting, has made it possible to automatically answer questions by stepwise reasoning. However, when faced with more complicated problems that require non-linear thinking, even the strongest LLMs make mistakes. To address this, we explore whether LLMs are able to recognize errors in their own step-by-step reasoning, without resorting to external resources. To this end, we propose SelfCheck, a general-purpose zero-shot verification schema for recognizing such errors. We then use the results of these checks to improve question-answering performance by conducting weighted voting on multiple solutions to the question. We test SelfCheck on three datasets (GSM8K, MathQA, and MATH) and find that it successfully recognizes errors and, in turn, increases final answer accuracies.
http://arxiv.org/pdf/2308.00436
Ning Miao, Yee Whye Teh, Tom Rainforth
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230801
20231005
[ { "id": "2206.02336" }, { "id": "2302.13971" }, { "id": "2110.14168" }, { "id": "2305.20050" }, { "id": "2204.02311" }, { "id": "2301.13379" }, { "id": "2306.03872" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.08774" }, { "id": "2212.09561" } ]
2308.00675
5
Currently, LLM tool-usage provides LLMs with few-shot demonstrations (demos) of what its tools can do, hoping that these demos will help generalize the model’s behavior to newer complex tasks. This process has been rather successful so far. These few-shot demos contain one or several exemplars of <input, output> mappings [68] on given instructions and their corresponding tool-use plans (illustrated in Figure 2). LLMs are expected to find patterns within these demos and generalize them for new tasks. On textual tasks, LLMs presented with demos of calculators [15, 47, 56], Python interpreters [13, 18] and search engines [62, 43, 50, 56, 40] can perform logical and arithmetic operations to obtain more accurate and factual knowledge. On visual tasks, LLMs with demos of pretrained vision models can do complex visual reasoning [37, 40, 57, 16, 73], can generate and even edit images [19, 9]. On embodied robotic tasks, LLMs can similarly be used to reason and plan [75, 21, 1, 17].
2308.00675#5
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Today, large language models (LLMs) are taught to use new tools by providing a few demonstrations of the tool's usage. Unfortunately, demonstrations are hard to acquire, and can result in undesirable biased usage if the wrong demonstration is chosen. Even in the rare scenario that demonstrations are readily available, there is no principled selection protocol to determine how many and which ones to provide. As tasks grow more complex, the selection search grows combinatorially and invariably becomes intractable. Our work provides an alternative to demonstrations: tool documentation. We advocate the use of tool documentation, descriptions for the individual tool usage, over demonstrations. We substantiate our claim through three main empirical findings on 6 tasks across both vision and language modalities. First, on existing benchmarks, zero-shot prompts with only tool documentation are sufficient for eliciting proper tool usage, achieving performance on par with few-shot prompts. Second, on a newly collected realistic tool-use dataset with hundreds of available tool APIs, we show that tool documentation is significantly more valuable than demonstrations, with zero-shot documentation significantly outperforming few-shot without documentation. Third, we highlight the benefits of tool documentations by tackling image generation and video tracking using just-released unseen state-of-the-art models as tools. Finally, we highlight the possibility of using tool documentation to automatically enable new applications: by using nothing more than the documentation of GroundingDino, Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the just-released Grounded-SAM and Track Anything models.
http://arxiv.org/pdf/2308.00675
Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister
cs.CL, cs.AI, cs.CV, cs.LG
null
null
cs.CL
20230801
20230801
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2204.10878" }, { "id": "1910.08210" }, { "id": "2107.07653" }, { "id": "2201.11903" }, { "id": "2305.17126" }, { "id": "1704.07535" }, { "id": "2205.01068" }, { "id": "2203.05115" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2305.04091" }, { "id": "2303.05499" }, { "id": "2107.07566" }, { "id": "2110.14168" }, { "id": "2210.03350" }, { "id": "2303.11381" }, { "id": "2101.06804" }, { "id": "2304.08354" }, { "id": "2212.14024" }, { "id": "2305.18752" }, { "id": "2211.10435" }, { "id": "2303.04671" }, { "id": "2210.12810" }, { "id": "1808.09588" }, { "id": "2304.11062" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2210.02406" }, { "id": "2212.10560" }, { "id": "2303.04129" }, { "id": "1704.01696" }, { "id": "2302.00923" }, { "id": "2211.12588" }, { "id": "1908.03557" }, { "id": "2210.05359" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2201.08239" }, { "id": "2303.16199" }, { "id": "2304.09842" }, { "id": "2204.01691" }, { "id": "2305.01625" }, { "id": "2303.12712" }, { "id": "2207.05608" }, { "id": "2303.03846" }, { "id": "2211.11559" }, { "id": "2207.01206" }, { "id": "2303.09014" }, { "id": "2205.12255" }, { "id": "2208.03188" } ]
2308.01423
5
In the context of ChatMOF, the LLM performs the role of a central planner, regulator, and evaluator, akin to a human brain. Although the LLM effectively fulfills the role of reasoning, a single LLM's proficiency in specialized tasks may be limited15; however, LLMs offer an effective way of integrating and employing various databases and machine learning models seamlessly due to their inherent capacity for reasoning35,36. ChatMOF utilizes the LLM to orchestrate a comprehensive plan and employ toolkits for information gathering, comparable to an expert devising a strategy, writing the code, and executing the plan. This synergy allows the system to precisely predict material properties, deduce synthesis techniques, and fabricate new materials with preset properties. As depicted in Figure 2, ChatMOF is composed of three main components: an agent, toolkit, and an evaluator. The agent responds to human queries by traversing four primary steps (i.e. thoughts, actions, action inputs, and observation), mirroring the methodology outlined in the ReAct37 and MRKL papers38. Initially, the user's query is set as the end goal, followed by
2308.01423#5
ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks
ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to predict and generate metal-organic frameworks (MOFs). By leveraging a large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key details from textual inputs and delivers appropriate responses, thus eliminating the necessity for rigid structured queries. The system is comprised of three core components (i.e. an agent, a toolkit, and an evaluator) and it forms a robust pipeline that manages a variety of tasks, including data retrieval, property prediction, and structure generation. The study further explores the merits and constraints of using a large language model (LLM) AI system in materials science and showcases its transformative potential for future advancements.
http://arxiv.org/pdf/2308.01423
Yeonghun Kang, Jihan Kim
cs.CL, cs.AI, cs.LG, physics.chem-ph
null
null
cs.CL
20230801
20230825
[ { "id": "2302.13971" }, { "id": "2306.11296" }, { "id": "2303.17580" }, { "id": "2305.18365" }, { "id": "2305.10601" }, { "id": "1810.04805" }, { "id": "2211.09085" }, { "id": "2304.05376" }, { "id": "2212.05238" }, { "id": "2108.07258" }, { "id": "2110.06197" }, { "id": "2306.06283" }, { "id": "2008.02275" }, { "id": "2303.12712" }, { "id": "2210.03629" }, { "id": "2205.00445" }, { "id": "2009.03300" } ]
2308.00245
6
In response, we propose LLift, a fully automated framework that bridges static analysis with LLMs in analyzing UBI bugs. Our solution packages several novel components. First, LLift performs post-constraint guided path analysis, which helps verify the path feasibility of the "use" of an uninitialized variable, a difficult task for static analysis and symbolic execution. Second, to efficiently interact with LLMs, we employ task decomposition to break down the analysis into more than a single step. Third, we employ progressive prompting by providing information incrementally only when necessary, instead of providing an enormous scope of code at once. Finally, we propose self-validation by requesting LLMs to

static int libcfs_ip_str2addr(...) {
    unsigned int a, b, c, d;
    if (sscanf(str, "%u.%u.%u.%u%n", &a, &b, &c, &d, &n) >= 4) {
        // use of a, b, c, d
    }
}

int sscanf(const char *buf, const char *fmt, ...) {
    va_list args;
    int i;
    va_start(args, fmt);
    i = vsscanf(buf, fmt, args);
    va_end(args);
}

Figure 1: Code snippet of sscanf and its usecase
2308.00245#6
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Static analysis is a widely used technique in software engineering for identifying and mitigating bugs. However, a significant hurdle lies in achieving a delicate balance between precision and scalability. Large Language Models (LLMs) offer a promising alternative, as recent advances demonstrate remarkable capabilities in comprehending, generating, and even debugging code. Yet, the logic of bugs can be complex and require sophisticated reasoning and a large analysis scope spanning multiple functions. Therefore, at this point, LLMs are better used in an assistive role to complement static analysis. In this paper, we take a deep dive into the open space of LLM-assisted static analysis, using use-before-initialization (UBI) bugs as a case study. To this end, we develop LLift, a fully automated framework that interfaces with both a static analysis tool and an LLM. By carefully designing the framework and the prompts, we are able to overcome a number of challenges, including bug-specific modeling, the large problem scope, the non-deterministic nature of LLMs, etc. Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs produced by static analysis, LLift demonstrates a potent capability, showcasing a reasonable precision (50%) and appearing to have no missing bugs. It even identified 13 previously unknown UBI bugs in the Linux kernel. This research paves the way for new opportunities and methodologies in using LLMs for bug discovery in extensive, real-world datasets.
http://arxiv.org/pdf/2308.00245
Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian
cs.SE, cs.AI
null
null
cs.SE
20230801
20231115
[ { "id": "2305.10601" }, { "id": "2107.03374" }, { "id": "2210.11610" }, { "id": "2305.16291" }, { "id": "2305.16151" }, { "id": "2303.18223" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "2203.02155" }, { "id": "2304.11938" }, { "id": "2304.03262" }, { "id": "2304.10513" }, { "id": "2201.11903" }, { "id": "2305.12138" }, { "id": "2305.12865" }, { "id": "2303.08774" }, { "id": "2306.01987" }, { "id": "2304.06815" } ]
2308.00352
6
This notion of meta-programming also encompasses earlier efforts like CodeBERT (Feng et al., 2020) and recent projects such as CodeLlama (Rozière et al., 2023) and WizardCoder (Luo et al., 2023). However, MetaGPT stands out as a unique solution that allows for efficient meta- programming through a well-organized group of specialized agents. Each agent has a specific role and expertise, following some established standards. This allows for automatic requirement analysis, system design, code generation, modification, execution, and debugging during runtime, highlighting how agent-based techniques can enhance meta-programming.
2308.00352#6
MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework
Remarkable progress has been made on automated problem solving through societies of agents based on large language models (LLMs). Existing LLM-based multi-agent systems can already solve simple dialogue tasks. Solutions to more complex tasks, however, are complicated through logic inconsistencies due to cascading hallucinations caused by naively chaining LLMs. Here we introduce MetaGPT, an innovative meta-programming framework incorporating efficient human workflows into LLM-based multi-agent collaborations. MetaGPT encodes Standardized Operating Procedures (SOPs) into prompt sequences for more streamlined workflows, thus allowing agents with human-like domain expertise to verify intermediate results and reduce errors. MetaGPT utilizes an assembly line paradigm to assign diverse roles to various agents, efficiently breaking down complex tasks into subtasks involving many agents working together. On collaborative software engineering benchmarks, MetaGPT generates more coherent solutions than previous chat-based multi-agent systems. Our project can be found at https://github.com/geekan/MetaGPT
http://arxiv.org/pdf/2308.00352
Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, Jürgen Schmidhuber
cs.AI, cs.MA
null
null
cs.AI
20230801
20231106
[ { "id": "2308.12950" }, { "id": "2305.17066" }, { "id": "1511.09249" }, { "id": "2308.11432" }, { "id": "2306.08568" }, { "id": "2310.02304" }, { "id": "2303.08896" }, { "id": "2204.05999" }, { "id": "2309.16797" }, { "id": "2002.08155" }, { "id": "2305.16960" } ]
2308.00436
6
# 2 RELATED WORK How to automatically check the correctness of a sequence of reasoning steps is a long-standing question. We now discuss how previous methods have tried to tackle this in an LLM context. We note that none of these works are able to work in the zero-shot setting covered by SelfCheck, requiring either problem-specific examples, an external model, and/or finetuning. Few-shot verification Though our focus will be on zero-shot checking, for some problems one may have hand-crafted exemplars available that are specifically designed to that particular question- answering task. Previous methods have been designed to perform checking of LLMs’ generated solutions in this few-shot checking scenario. For example, the Self-Verification (SV) approach of Weng et al. (2022) verifies the whole solution by backward prediction. That is, it uses the conclusion from CoT reasoning to predict a masked condition in the question. However, it only supports single-step checking and is based on the assumption that every piece of information in the question can be recovered using a correct solution of it, which is often not the case. Consequently, it is only applicable to simpler tasks, such as GSM8K.
2308.00436#6
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
The recent progress in large language models (LLMs), especially the invention of chain-of-thought prompting, has made it possible to automatically answer questions by stepwise reasoning. However, when faced with more complicated problems that require non-linear thinking, even the strongest LLMs make mistakes. To address this, we explore whether LLMs are able to recognize errors in their own step-by-step reasoning, without resorting to external resources. To this end, we propose SelfCheck, a general-purpose zero-shot verification schema for recognizing such errors. We then use the results of these checks to improve question-answering performance by conducting weighted voting on multiple solutions to the question. We test SelfCheck on three datasets (GSM8K, MathQA, and MATH) and find that it successfully recognizes errors and, in turn, increases final answer accuracies.
http://arxiv.org/pdf/2308.00436
Ning Miao, Yee Whye Teh, Tom Rainforth
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230801
20231005
[ { "id": "2206.02336" }, { "id": "2302.13971" }, { "id": "2110.14168" }, { "id": "2305.20050" }, { "id": "2204.02311" }, { "id": "2301.13379" }, { "id": "2306.03872" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.08774" }, { "id": "2212.09561" } ]
2308.00675
6
We argue that this reliance on demos in tool using is unnecessary in some cases, and might be even limiting. In fact, recent work finds that LLMs tend to be sensitive to demos [81], and carefully selecting demos is needed to avoid biasing or overfitting to a particular usage [12]. This leads to the follow-up question: how do we choose which few-shot demos to use? There are no known principled approaches to select demos without human intervention or to even efficiently enable humans to choose or create them. To make the matter worse, when we scale up the number of tools that LLMs have access to, this few-shot selection process becomes combinatorially intractable. Just as a craftsman doesn’t need to see a new tool being demonstrated and can instead discern their capabilities from reading a user manual for the tool, we seek to enable LLMs to learn how to use tools without seeing any demos.
2308.00675#6
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Today, large language models (LLMs) are taught to use new tools by providing a few demonstrations of the tool's usage. Unfortunately, demonstrations are hard to acquire, and can result in undesirable biased usage if the wrong demonstration is chosen. Even in the rare scenario that demonstrations are readily available, there is no principled selection protocol to determine how many and which ones to provide. As tasks grow more complex, the selection search grows combinatorially and invariably becomes intractable. Our work provides an alternative to demonstrations: tool documentation. We advocate the use of tool documentation, descriptions for the individual tool usage, over demonstrations. We substantiate our claim through three main empirical findings on 6 tasks across both vision and language modalities. First, on existing benchmarks, zero-shot prompts with only tool documentation are sufficient for eliciting proper tool usage, achieving performance on par with few-shot prompts. Second, on a newly collected realistic tool-use dataset with hundreds of available tool APIs, we show that tool documentation is significantly more valuable than demonstrations, with zero-shot documentation significantly outperforming few-shot without documentation. Third, we highlight the benefits of tool documentations by tackling image generation and video tracking using just-released unseen state-of-the-art models as tools. Finally, we highlight the possibility of using tool documentation to automatically enable new applications: by using nothing more than the documentation of GroundingDino, Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the just-released Grounded-SAM and Track Anything models.
http://arxiv.org/pdf/2308.00675
Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister
cs.CL, cs.AI, cs.CV, cs.LG
null
null
cs.CL
20230801
20230801
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2204.10878" }, { "id": "1910.08210" }, { "id": "2107.07653" }, { "id": "2201.11903" }, { "id": "2305.17126" }, { "id": "1704.07535" }, { "id": "2205.01068" }, { "id": "2203.05115" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2305.04091" }, { "id": "2303.05499" }, { "id": "2107.07566" }, { "id": "2110.14168" }, { "id": "2210.03350" }, { "id": "2303.11381" }, { "id": "2101.06804" }, { "id": "2304.08354" }, { "id": "2212.14024" }, { "id": "2305.18752" }, { "id": "2211.10435" }, { "id": "2303.04671" }, { "id": "2210.12810" }, { "id": "1808.09588" }, { "id": "2304.11062" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2210.02406" }, { "id": "2212.10560" }, { "id": "2303.04129" }, { "id": "1704.01696" }, { "id": "2302.00923" }, { "id": "2211.12588" }, { "id": "1908.03557" }, { "id": "2210.05359" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2201.08239" }, { "id": "2303.16199" }, { "id": "2304.09842" }, { "id": "2204.01691" }, { "id": "2305.01625" }, { "id": "2303.12712" }, { "id": "2207.05608" }, { "id": "2303.03846" }, { "id": "2211.11559" }, { "id": "2207.01206" }, { "id": "2303.09014" }, { "id": "2205.12255" }, { "id": "2208.03188" } ]
2308.01423
6
contemplation about the path to achieve this goal. Subsequently, ChatMOF decides on the appropriate toolkit to employ from the available options. After the chosen toolkit is executed, the observed results serve as the basis for evaluating whether a final answer can be generated. If feasible, the final answer is presented; otherwise, the process cycles back to the thought step to formulate a new strategy. Figure 2. The schematic image of ChatMOF. ChatMOF comprises three core components: an agent, toolkits, and an evaluator. Upon receiving a query from a human, the agent formulates a plan and selects a suitable toolkit. Subsequently, the toolkit generates outputs following the proposed plan, and the evaluator makes these results into a final response. # Toolkits
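The agent, toolkit, and evaluator cycle in Figure 2 can be illustrated as a simple control loop; the `llm` callable, the toolkit registry, and the prompt wording below are hypothetical placeholders for sketching the flow, not ChatMOF's actual implementation.

```python
def agent_loop(query, toolkits, llm, max_rounds=5):
    """Illustrative plan -> toolkit -> evaluate cycle (a sketch, not ChatMOF's code)."""
    observations = []
    for _ in range(max_rounds):
        # Thought: contemplate a path to the goal given what has been observed so far.
        plan = llm(f"Question: {query}\nObservations: {observations}\nPlan the next step.")
        # Select one toolkit (e.g. table-searcher, predictor, generator) and execute it.
        name = llm(f"Choose one toolkit from {sorted(toolkits)} for this plan: {plan}").strip()
        observations.append((name, toolkits.get(name, lambda q: "unknown toolkit")(query)))
        # Evaluator: decide whether a final answer can be generated from the observations.
        verdict = llm(f"Observations: {observations}\nGive the final answer, or reply CONTINUE.")
        if verdict.strip().upper() != "CONTINUE":
            return verdict
    return "No final answer within the round limit."
```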
2308.01423#6
ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks
ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to predict and generate metal-organic frameworks (MOFs). By leveraging a large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key details from textual inputs and delivers appropriate responses, thus eliminating the necessity for rigid structured queries. The system is comprised of three core components (i.e. an agent, a toolkit, and an evaluator) and it forms a robust pipeline that manages a variety of tasks, including data retrieval, property prediction, and structure generations. The study further explores the merits and constraints of using large language models (LLMs) AI system in material sciences using and showcases its transformative potential for future advancements.
http://arxiv.org/pdf/2308.01423
Yeonghun Kang, Jihan Kim
cs.CL, cs.AI, cs.LG, physics.chem-ph
null
null
cs.CL
20230801
20230825
[ { "id": "2302.13971" }, { "id": "2306.11296" }, { "id": "2303.17580" }, { "id": "2305.18365" }, { "id": "2305.10601" }, { "id": "1810.04805" }, { "id": "2211.09085" }, { "id": "2304.05376" }, { "id": "2212.05238" }, { "id": "2108.07258" }, { "id": "2110.06197" }, { "id": "2306.06283" }, { "id": "2008.02275" }, { "id": "2303.12712" }, { "id": "2210.03629" }, { "id": "2205.00445" }, { "id": "2009.03300" } ]
2308.00245
7
i = vsscanf(buf, fmt, args); va_end(args); Figure 1: Code snippet of sscanf and its use case. Table 1: UBITect's summary for sscanf. Both use and initialization for va_args are incorrect. ✓ and ✗ stand for whether this parameter will be used/initialized after its call; "..." represents all other parameters of va_args. Columns: buf, fmt, ..., *buf, *fmt; rows: Use, Initialize; entries: ✓ ✗ ✓ ✓ ✗ ✗ ✓ ✗ ✓ ✗. review responses at various stages to obtain accurate and reliable responses. We implement a prototype of LLift and test it in real-world scenarios. Focusing on the inconclusive cases of UBITect caused by time or memory limitations, LLift successfully identifies 13 previously unknown UBI bugs in the Linux kernel that we confirmed with the Linux community. With 26 positive reports out of nearly 1,000 cases, LLift reaches a high precision of 50%. We also test LLift against all previously known bugs found by UBITect and observe a recall of 100%.
2308.00245#7
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Static analysis is a widely used technique in software engineering for identifying and mitigating bugs. However, a significant hurdle lies in achieving a delicate balance between precision and scalability. Large Language Models (LLMs) offer a promising alternative, as recent advances demonstrate remarkable capabilities in comprehending, generating, and even debugging code. Yet, the logic of bugs can be complex and require sophisticated reasoning and a large analysis scope spanning multiple functions. Therefore, at this point, LLMs are better used in an assistive role to complement static analysis. In this paper, we take a deep dive into the open space of LLM-assisted static analysis, using use-before-initialization (UBI) bugs as a case study. To this end, we develop LLift, a fully automated framework that interfaces with both a static analysis tool and an LLM. By carefully designing the framework and the prompts, we are able to overcome a number of challenges, including bug-specific modeling, the large problem scope, the non-deterministic nature of LLMs, etc. Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs produced by static analysis, LLift demonstrates a potent capability, showcasing a reasonable precision (50%) and appearing to have no missing bugs. It even identified 13 previously unknown UBI bugs in the Linux kernel. This research paves the way for new opportunities and methodologies in using LLMs for bug discovery in extensive, real-world datasets.
http://arxiv.org/pdf/2308.00245
Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian
cs.SE, cs.AI
null
null
cs.SE
20230801
20231115
[ { "id": "2305.10601" }, { "id": "2107.03374" }, { "id": "2210.11610" }, { "id": "2305.16291" }, { "id": "2305.16151" }, { "id": "2303.18223" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "2203.02155" }, { "id": "2304.11938" }, { "id": "2304.03262" }, { "id": "2304.10513" }, { "id": "2201.11903" }, { "id": "2305.12138" }, { "id": "2305.12865" }, { "id": "2303.08774" }, { "id": "2306.01987" }, { "id": "2304.06815" } ]
2308.00352
7
To validate the design of MetaGPT, we use the publicly available HumanEval (Chen et al., 2021a) and MBPP (Austin et al., 2021) benchmarks for evaluation. Notably, on these code generation benchmarks, MetaGPT achieves a new state-of-the-art (SoTA) with 85.9% and 87.7% Pass@1. Compared to other popular frameworks for creating complex software projects, such as AutoGPT (Torantulino et al., 2023), LangChain (Chase, 2022), AgentVerse (Chen et al., 2023), and ChatDev (Qian et al., 2023), MetaGPT also stands out in handling higher levels of software complexity and offering extensive functionality. Remarkably, in our experimental evaluations, MetaGPT achieves a 100% task completion rate, demonstrating the robustness and efficiency (time and token costs) of our design. We summarize our contributions as follows: • We introduce MetaGPT, a meta-programming framework for multi-agent collaboration based on LLMs. It is highly convenient and flexible, with well-defined functions like role definition and message sharing, making it a useful platform for developing LLM-based multi-agent systems. 1 https://en.wikipedia.org/w/index.php?title=Metaprogramming
2308.00352#7
MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework
Remarkable progress has been made on automated problem solving through societies of agents based on large language models (LLMs). Existing LLM-based multi-agent systems can already solve simple dialogue tasks. Solutions to more complex tasks, however, are complicated through logic inconsistencies due to cascading hallucinations caused by naively chaining LLMs. Here we introduce MetaGPT, an innovative meta-programming framework incorporating efficient human workflows into LLM-based multi-agent collaborations. MetaGPT encodes Standardized Operating Procedures (SOPs) into prompt sequences for more streamlined workflows, thus allowing agents with human-like domain expertise to verify intermediate results and reduce errors. MetaGPT utilizes an assembly line paradigm to assign diverse roles to various agents, efficiently breaking down complex tasks into subtasks involving many agents working together. On collaborative software engineering benchmarks, MetaGPT generates more coherent solutions than previous chat-based multi-agent systems. Our project can be found at https://github.com/geekan/MetaGPT
http://arxiv.org/pdf/2308.00352
Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, Jürgen Schmidhuber
cs.AI, cs.MA
null
null
cs.AI
20230801
20231106
[ { "id": "2308.12950" }, { "id": "2305.17066" }, { "id": "1511.09249" }, { "id": "2308.11432" }, { "id": "2306.08568" }, { "id": "2310.02304" }, { "id": "2303.08896" }, { "id": "2204.05999" }, { "id": "2309.16797" }, { "id": "2002.08155" }, { "id": "2305.16960" } ]
2308.00436
7
The Deductive Verification (DV) approach of Ling et al. (2023) instead looks to verify independent sub-tasks, as per SelfCheck. However, its verifier only supports checking reasoning chains in a special format called Natural Programs. As a result, it can only work with a specific specialised generator, without serving as a general verifier for multi-step reasoning. Verification with external resources. In some cases, there might be external resources available to verify the logical correctness or faithfulness of LLM outputs. Lyu et al. (2023) translate a question into a symbolic reasoning chain using an LLM and solve the problem by a symbolic logic solver. Peng et al. (2023) introduced an external database to check for incorrect knowledge in LLM outputs. These methods are limited by the availability of external resources and are typically restricted to checking for certain types of errors.
2308.00436#7
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
The recent progress in large language models (LLMs), especially the invention of chain-of-thought prompting, has made it possible to automatically answer questions by stepwise reasoning. However, when faced with more complicated problems that require non-linear thinking, even the strongest LLMs make mistakes. To address this, we explore whether LLMs are able to recognize errors in their own step-by-step reasoning, without resorting to external resources. To this end, we propose SelfCheck, a general-purpose zero-shot verification schema for recognizing such errors. We then use the results of these checks to improve question-answering performance by conducting weighted voting on multiple solutions to the question. We test SelfCheck on three datasets (GSM8K, MathQA, and MATH) and find that it successfully recognizes errors and, in turn, increases final answer accuracies.
http://arxiv.org/pdf/2308.00436
Ning Miao, Yee Whye Teh, Tom Rainforth
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230801
20231005
[ { "id": "2206.02336" }, { "id": "2302.13971" }, { "id": "2110.14168" }, { "id": "2305.20050" }, { "id": "2204.02311" }, { "id": "2301.13379" }, { "id": "2306.03872" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.08774" }, { "id": "2212.09561" } ]
2308.00675
7
Our work provides an alternative to demonstrations: tool documentation (doc). Similar to the metaphor of a manual indicating a physical tool's capabilities, a software tool's docs outline what the tool can and cannot be used for and how to invoke it. Docs provide relatively neutral instruction about the tools' functionalities and how individual tools should be used (illustrated in Figure 2), and they are usually conveniently available as a natural by-product of creating the tools. Intuitively, just as the craftsman learns to use a new tool by reading the manual, we provide LLMs with README files when encountering a new tool/repository. With docs, an LLM may not necessarily need demos to use a new tool. Distinct from existing work that relies mostly on few-shot demos for tool-learning, in this work we study whether LLMs can instead rely solely on docs to use tools. We study the tool-learning performance of LLMs as we include or exclude docs, and vary the number of demos from few-shot down to zero-shot. We conduct experiments on 6 tasks across vision and text modalities. Our experiments show that: • Surprisingly, when provided with tool docs, LLMs' zero-shot tool-using performance is on par with or even better than their few-shot counterparts, showing that including docs is an effective way to sidestep the few-shot demos otherwise needed.
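As an illustration of the documentation-only setting, the sketch below assembles a zero-shot prompt from tool docs alone; the one-line tool descriptions follow Figure 2, while the function name and prompt wording are hypothetical rather than the paper's exact prompt format.

```python
def build_zero_shot_prompt(question, tool_docs):
    """Compose a tool-use prompt from documentation alone, with no demos."""
    doc_section = "\n".join(f"- {name}: {doc}" for name, doc in tool_docs.items())
    return (
        "You can call the following tools. Read their documentation and plan "
        "which tools to use, and in what order, to answer the question.\n\n"
        f"Tools:\n{doc_section}\n\n"
        f"Question: {question}\nTool-use plan:"
    )

tool_docs = {
    "Text Detector": "Detects the text present in an image.",
    "Knowledge Retriever": "Retrieves relevant background knowledge for a query.",
    "Search Engine": "Searches the web for relevant information.",
    "Image Captioner": "Generates a caption for an image.",
}
prompt = build_zero_shot_prompt("Which property do these objects have in common?", tool_docs)
# The LLM then returns a plan such as: Text Detector -> Knowledge Retriever -> Solution Generator.
```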
2308.00675#7
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Today, large language models (LLMs) are taught to use new tools by providing a few demonstrations of the tool's usage. Unfortunately, demonstrations are hard to acquire, and can result in undesirable biased usage if the wrong demonstration is chosen. Even in the rare scenario that demonstrations are readily available, there is no principled selection protocol to determine how many and which ones to provide. As tasks grow more complex, the selection search grows combinatorially and invariably becomes intractable. Our work provides an alternative to demonstrations: tool documentation. We advocate the use of tool documentation, descriptions for the individual tool usage, over demonstrations. We substantiate our claim through three main empirical findings on 6 tasks across both vision and language modalities. First, on existing benchmarks, zero-shot prompts with only tool documentation are sufficient for eliciting proper tool usage, achieving performance on par with few-shot prompts. Second, on a newly collected realistic tool-use dataset with hundreds of available tool APIs, we show that tool documentation is significantly more valuable than demonstrations, with zero-shot documentation significantly outperforming few-shot without documentation. Third, we highlight the benefits of tool documentations by tackling image generation and video tracking using just-released unseen state-of-the-art models as tools. Finally, we highlight the possibility of using tool documentation to automatically enable new applications: by using nothing more than the documentation of GroundingDino, Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the just-released Grounded-SAM and Track Anything models.
http://arxiv.org/pdf/2308.00675
Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister
cs.CL, cs.AI, cs.CV, cs.LG
null
null
cs.CL
20230801
20230801
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2204.10878" }, { "id": "1910.08210" }, { "id": "2107.07653" }, { "id": "2201.11903" }, { "id": "2305.17126" }, { "id": "1704.07535" }, { "id": "2205.01068" }, { "id": "2203.05115" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2305.04091" }, { "id": "2303.05499" }, { "id": "2107.07566" }, { "id": "2110.14168" }, { "id": "2210.03350" }, { "id": "2303.11381" }, { "id": "2101.06804" }, { "id": "2304.08354" }, { "id": "2212.14024" }, { "id": "2305.18752" }, { "id": "2211.10435" }, { "id": "2303.04671" }, { "id": "2210.12810" }, { "id": "1808.09588" }, { "id": "2304.11062" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2210.02406" }, { "id": "2212.10560" }, { "id": "2303.04129" }, { "id": "1704.01696" }, { "id": "2302.00923" }, { "id": "2211.12588" }, { "id": "1908.03557" }, { "id": "2210.05359" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2201.08239" }, { "id": "2303.16199" }, { "id": "2304.09842" }, { "id": "2204.01691" }, { "id": "2305.01625" }, { "id": "2303.12712" }, { "id": "2207.05608" }, { "id": "2303.03846" }, { "id": "2211.11559" }, { "id": "2207.01206" }, { "id": "2303.09014" }, { "id": "2205.12255" }, { "id": "2208.03188" } ]
2308.01423
7
proposed plan, and the evaluator makes these results into a final response. # Toolkits ChatMOF employs an assortment of toolkits to acquire, predict, or generate material information. These toolkits can be primarily classified into five categories: table-searcher, internet-searcher, predictor, generator, and others. The table-searcher obtains desired information from existing data, while the internet-searcher employs internet resources, such as Google and Bing, to produce the desired information. The predictor utilizes machine learning models to obtain specified properties. The generator refers to the toolkit that constructs material structures fulfilling certain properties. Lastly, the others encompass a variety of aids such as calculators, file saving and reading functions, and visualization toolkits.
2308.01423#7
ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks
ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to predict and generate metal-organic frameworks (MOFs). By leveraging a large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key details from textual inputs and delivers appropriate responses, thus eliminating the necessity for rigid structured queries. The system is comprised of three core components (i.e. an agent, a toolkit, and an evaluator) and it forms a robust pipeline that manages a variety of tasks, including data retrieval, property prediction, and structure generations. The study further explores the merits and constraints of using large language models (LLMs) AI system in material sciences using and showcases its transformative potential for future advancements.
http://arxiv.org/pdf/2308.01423
Yeonghun Kang, Jihan Kim
cs.CL, cs.AI, cs.LG, physics.chem-ph
null
null
cs.CL
20230801
20230825
[ { "id": "2302.13971" }, { "id": "2306.11296" }, { "id": "2303.17580" }, { "id": "2305.18365" }, { "id": "2305.10601" }, { "id": "1810.04805" }, { "id": "2211.09085" }, { "id": "2304.05376" }, { "id": "2212.05238" }, { "id": "2108.07258" }, { "id": "2110.06197" }, { "id": "2306.06283" }, { "id": "2008.02275" }, { "id": "2303.12712" }, { "id": "2210.03629" }, { "id": "2205.00445" }, { "id": "2009.03300" } ]
2308.00245
8
We summarize our contributions as follows: New Opportunities. We introduce a novel approach to static analysis that enhances its precision and scalability at the same time by harnessing the capabilities of LLMs. To the best of our knowledge, we are the first to use LLMs to assist static analysis in bug-finding tasks with large-scale and real-world datasets. New Methodologies. We develop LLift, an innovative and fully automated framework that arms static analysis with LLMs. LLift employs several prompt strategies to engage with LLMs, eliciting accurate and reliable responses. Results. We rigorously investigate LLift by conducting an in-depth analysis of nearly 1,000 cases, resulting in a reasonable precision rate (50%). Additionally, our examination led to the discovery of 13 previously unknown bugs. Open source. Committed to open research, we will publicly release all of our code and data, fostering further exploration of the new space of LLM-assisted program analysis.
2308.00245#8
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Static analysis is a widely used technique in software engineering for identifying and mitigating bugs. However, a significant hurdle lies in achieving a delicate balance between precision and scalability. Large Language Models (LLMs) offer a promising alternative, as recent advances demonstrate remarkable capabilities in comprehending, generating, and even debugging code. Yet, the logic of bugs can be complex and require sophisticated reasoning and a large analysis scope spanning multiple functions. Therefore, at this point, LLMs are better used in an assistive role to complement static analysis. In this paper, we take a deep dive into the open space of LLM-assisted static analysis, using use-before-initialization (UBI) bugs as a case study. To this end, we develop LLift, a fully automated framework that interfaces with both a static analysis tool and an LLM. By carefully designing the framework and the prompts, we are able to overcome a number of challenges, including bug-specific modeling, the large problem scope, the non-deterministic nature of LLMs, etc. Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs produced by static analysis, LLift demonstrates a potent capability, showcasing a reasonable precision (50%) and appearing to have no missing bugs. It even identified 13 previously unknown UBI bugs in the Linux kernel. This research paves the way for new opportunities and methodologies in using LLMs for bug discovery in extensive, real-world datasets.
http://arxiv.org/pdf/2308.00245
Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian
cs.SE, cs.AI
null
null
cs.SE
20230801
20231115
[ { "id": "2305.10601" }, { "id": "2107.03374" }, { "id": "2210.11610" }, { "id": "2305.16291" }, { "id": "2305.16151" }, { "id": "2303.18223" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "2203.02155" }, { "id": "2304.11938" }, { "id": "2304.03262" }, { "id": "2304.10513" }, { "id": "2201.11903" }, { "id": "2305.12138" }, { "id": "2305.12865" }, { "id": "2303.08774" }, { "id": "2306.01987" }, { "id": "2304.06815" } ]
2308.00352
8
• Our innovative integration of human-like SOPs throughout MetaGPT's design significantly enhances its robustness, reducing unproductive collaboration among LLM-based agents. Furthermore, we introduce a novel executive feedback mechanism that debugs and executes code during runtime, significantly elevating code generation quality (e.g., a 5.4% absolute improvement on MBPP); see the sketch after this list. • We achieve state-of-the-art performance on HumanEval (Chen et al., 2021a) and MBPP (Austin et al., 2021). Extensive results convincingly validate MetaGPT, suggesting that it is a promising meta-programming framework for developing LLM-based multi-agent systems. 2 RELATED WORK
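The executive feedback mechanism mentioned above can be illustrated as a generate, execute, and repair loop; the `llm` callable and the retry budget are assumptions, and this is a sketch of the general idea rather than MetaGPT's actual implementation.

```python
import subprocess
import sys
import tempfile

def generate_with_executive_feedback(task, llm, max_attempts=3):
    """Draft code with the model, run it, and feed runtime errors back for repair."""
    feedback, code = "", ""
    for _ in range(max_attempts):
        code = llm(f"Write a runnable Python script for this task: {task}\n{feedback}")
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
            path = f.name
        result = subprocess.run([sys.executable, path], capture_output=True, text=True)
        if result.returncode == 0:
            return code  # the script executed without errors
        feedback = f"The previous script failed with:\n{result.stderr}\nPlease fix it."
    return code  # best effort after exhausting the retry budget
```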
2308.00352#8
MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework
Remarkable progress has been made on automated problem solving through societies of agents based on large language models (LLMs). Existing LLM-based multi-agent systems can already solve simple dialogue tasks. Solutions to more complex tasks, however, are complicated through logic inconsistencies due to cascading hallucinations caused by naively chaining LLMs. Here we introduce MetaGPT, an innovative meta-programming framework incorporating efficient human workflows into LLM-based multi-agent collaborations. MetaGPT encodes Standardized Operating Procedures (SOPs) into prompt sequences for more streamlined workflows, thus allowing agents with human-like domain expertise to verify intermediate results and reduce errors. MetaGPT utilizes an assembly line paradigm to assign diverse roles to various agents, efficiently breaking down complex tasks into subtasks involving many agents working together. On collaborative software engineering benchmarks, MetaGPT generates more coherent solutions than previous chat-based multi-agent systems. Our project can be found at https://github.com/geekan/MetaGPT
http://arxiv.org/pdf/2308.00352
Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, Jürgen Schmidhuber
cs.AI, cs.MA
null
null
cs.AI
20230801
20231106
[ { "id": "2308.12950" }, { "id": "2305.17066" }, { "id": "1511.09249" }, { "id": "2308.11432" }, { "id": "2306.08568" }, { "id": "2310.02304" }, { "id": "2303.08896" }, { "id": "2204.05999" }, { "id": "2309.16797" }, { "id": "2002.08155" }, { "id": "2305.16960" } ]
2308.00436
8
Training/finetuning a verifier. A few other methods train or finetune a separate verifier model to check reasoning chains. Cobbe et al. (2021) finetuned a GPT-3 model on GSM8K to predict the correctness of a solution as a whole. Li et al. (2022) trained a binary deberta-v3-large (He et al., 2020) classifier on each domain to predict step correctness. More recently, Lightman et al. (2023) built a large dataset, which contains step-wise correctness labels from human labelers, and finetuned a GPT-4 model on it. Unlike SelfCheck, all of these methods require extra data and external computational resources, restricting their applicability and ease of use.
2308.00436#8
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
The recent progress in large language models (LLMs), especially the invention of chain-of-thought prompting, has made it possible to automatically answer questions by stepwise reasoning. However, when faced with more complicated problems that require non-linear thinking, even the strongest LLMs make mistakes. To address this, we explore whether LLMs are able to recognize errors in their own step-by-step reasoning, without resorting to external resources. To this end, we propose SelfCheck, a general-purpose zero-shot verification schema for recognizing such errors. We then use the results of these checks to improve question-answering performance by conducting weighted voting on multiple solutions to the question. We test SelfCheck on three datasets (GSM8K, MathQA, and MATH) and find that it successfully recognizes errors and, in turn, increases final answer accuracies.
http://arxiv.org/pdf/2308.00436
Ning Miao, Yee Whye Teh, Tom Rainforth
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230801
20231005
[ { "id": "2206.02336" }, { "id": "2302.13971" }, { "id": "2110.14168" }, { "id": "2305.20050" }, { "id": "2204.02311" }, { "id": "2301.13379" }, { "id": "2306.03872" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.08774" }, { "id": "2212.09561" } ]
2308.00675
8
[Figure 2 graphic: two panels, "Demonstration" (description: examples of questions and the tool-use plan; e.g., "Which property do these objects have in common?" with objects chips, pretzel, fries and the plan Text Detector -> Knowledge Retriever -> Solution Generator) and "Documentation" (description: available tools and their functionalities; Text Detector: detects the text in an image; Knowledge Retriever: retrieves relevant knowledge; Search Engine: searches the web for relevant info; Image Captioner: generates a caption for an image).] Figure 2: Two types of knowledge for prompting LLMs for tool-use: Demonstrations (demos) and Documentations (docs). Demos consist of <input, output> pairs on input instructions and their corresponding output tool-use plans. They require manual efforts for careful curation on every new task, and the model performance can be sensitive to which demos are used [81, 12]. Many demos may also be necessary for good coverage when the number of tools scales up. On the other hand, docs provide descriptions for the tool functionality, and are usually organically available for tools.
2308.00675#8
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Today, large language models (LLMs) are taught to use new tools by providing a few demonstrations of the tool's usage. Unfortunately, demonstrations are hard to acquire, and can result in undesirable biased usage if the wrong demonstration is chosen. Even in the rare scenario that demonstrations are readily available, there is no principled selection protocol to determine how many and which ones to provide. As tasks grow more complex, the selection search grows combinatorially and invariably becomes intractable. Our work provides an alternative to demonstrations: tool documentation. We advocate the use of tool documentation, descriptions for the individual tool usage, over demonstrations. We substantiate our claim through three main empirical findings on 6 tasks across both vision and language modalities. First, on existing benchmarks, zero-shot prompts with only tool documentation are sufficient for eliciting proper tool usage, achieving performance on par with few-shot prompts. Second, on a newly collected realistic tool-use dataset with hundreds of available tool APIs, we show that tool documentation is significantly more valuable than demonstrations, with zero-shot documentation significantly outperforming few-shot without documentation. Third, we highlight the benefits of tool documentations by tackling image generation and video tracking using just-released unseen state-of-the-art models as tools. Finally, we highlight the possibility of using tool documentation to automatically enable new applications: by using nothing more than the documentation of GroundingDino, Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the just-released Grounded-SAM and Track Anything models.
http://arxiv.org/pdf/2308.00675
Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister
cs.CL, cs.AI, cs.CV, cs.LG
null
null
cs.CL
20230801
20230801
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2204.10878" }, { "id": "1910.08210" }, { "id": "2107.07653" }, { "id": "2201.11903" }, { "id": "2305.17126" }, { "id": "1704.07535" }, { "id": "2205.01068" }, { "id": "2203.05115" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2305.04091" }, { "id": "2303.05499" }, { "id": "2107.07566" }, { "id": "2110.14168" }, { "id": "2210.03350" }, { "id": "2303.11381" }, { "id": "2101.06804" }, { "id": "2304.08354" }, { "id": "2212.14024" }, { "id": "2305.18752" }, { "id": "2211.10435" }, { "id": "2303.04671" }, { "id": "2210.12810" }, { "id": "1808.09588" }, { "id": "2304.11062" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2210.02406" }, { "id": "2212.10560" }, { "id": "2303.04129" }, { "id": "1704.01696" }, { "id": "2302.00923" }, { "id": "2211.12588" }, { "id": "1908.03557" }, { "id": "2210.05359" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2201.08239" }, { "id": "2303.16199" }, { "id": "2304.09842" }, { "id": "2204.01691" }, { "id": "2305.01625" }, { "id": "2303.12712" }, { "id": "2207.05608" }, { "id": "2303.03846" }, { "id": "2211.11559" }, { "id": "2207.01206" }, { "id": "2303.09014" }, { "id": "2205.12255" }, { "id": "2208.03188" } ]
2308.01423
8
Due to the facile synthesis of MOF structures, there are many different databases associated with them: (1) the computation-ready experimental MOF (CoREMOF) database39,40 and (2) the quantum MOF (QMOF) database41. The CoREMOF database is an archive of synthesized materials present in a CSD MOF subset42, encompassing computations of various properties of MOFs including geometric descriptors. The QMOF database is populated with electrical property data, such as band gap, formation energy, HOMO, and LUMO, derived through DFT calculations. When a user requests these electrical properties, ChatMOF seeks and extracts them from the QMOF database. As such, if a user seeks information about a specific MOF that is included in these databases, ChatMOF can locate and deliver the desired information from the pre-tabulated data. Figure 3 provides an illustrative example of a table-search operation conducted by ChatMOF. When faced with a user query, ChatMOF devises a strategic approach to extract the requisite data. Subsequently, it creates Python code tailored to retrieve specific information from the database in accordance with the premeditated strategy. This code typically uses the "pandas43" library to extract or filter relevant details.
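The kind of lookup code described here can be sketched with pandas; the CSV file name and column names below are hypothetical stand-ins for the pre-tabulated CoREMOF data rather than ChatMOF's actual schema.

```python
import pandas as pd

# Hypothetical pre-tabulated lookup table (CoREMOF-style), one row per MOF.
df = pd.read_csv("coremof_properties.csv")

# Retrieve a geometric descriptor for a specific MOF ...
row = df.loc[df["name"] == "HKUST-1", "pore_limiting_diameter"]
print(row.iloc[0] if not row.empty else "MOF not found in the table")

# ... or filter the table, e.g. the ten MOFs with the largest surface area.
print(df.nlargest(10, "surface_area")[["name", "surface_area"]])
```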
2308.01423#8
ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks
ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to predict and generate metal-organic frameworks (MOFs). By leveraging a large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key details from textual inputs and delivers appropriate responses, thus eliminating the necessity for rigid structured queries. The system is comprised of three core components (i.e. an agent, a toolkit, and an evaluator) and it forms a robust pipeline that manages a variety of tasks, including data retrieval, property prediction, and structure generations. The study further explores the merits and constraints of using large language models (LLMs) AI system in material sciences using and showcases its transformative potential for future advancements.
http://arxiv.org/pdf/2308.01423
Yeonghun Kang, Jihan Kim
cs.CL, cs.AI, cs.LG, physics.chem-ph
null
null
cs.CL
20230801
20230825
[ { "id": "2302.13971" }, { "id": "2306.11296" }, { "id": "2303.17580" }, { "id": "2305.18365" }, { "id": "2305.10601" }, { "id": "1810.04805" }, { "id": "2211.09085" }, { "id": "2304.05376" }, { "id": "2212.05238" }, { "id": "2108.07258" }, { "id": "2110.06197" }, { "id": "2306.06283" }, { "id": "2008.02275" }, { "id": "2303.12712" }, { "id": "2210.03629" }, { "id": "2205.00445" }, { "id": "2009.03300" } ]
2308.00245
9
2 BACKGROUND & MOTIVATION 2.1 UBITect and Motivating Example UBITect is a state-of-the-art static analysis solution aiming at finding Use Before Initialization (UBI) bugs in the Linux kernel [40]. It employs a two-stage pipeline in which the first stage performs a bottom-up, summary-based static analysis of the Linux kernel. By design, this stage aims for scalability and sacrifices precision, producing a significant number of potential bugs (i.e., ∼140k), most of which are false alarms. The static analysis is imprecise partly due to its lack of path sensitivity (often needed to discover UBI bugs). It is complemented by a second stage of static symbolic execution that filters out as many false alarms as possible by verifying their path feasibility. However, 40% of the reported bugs are discarded due to timeout (10 minutes) or memory limitations (2 GB) during the symbolic execution, potentially missing genuine bugs.
2308.00245#9
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Static analysis is a widely used technique in software engineering for identifying and mitigating bugs. However, a significant hurdle lies in achieving a delicate balance between precision and scalability. Large Language Models (LLMs) offer a promising alternative, as recent advances demonstrate remarkable capabilities in comprehending, generating, and even debugging code. Yet, the logic of bugs can be complex and require sophisticated reasoning and a large analysis scope spanning multiple functions. Therefore, at this point, LLMs are better used in an assistive role to complement static analysis. In this paper, we take a deep dive into the open space of LLM-assisted static analysis, using use-before-initialization (UBI) bugs as a case study. To this end, we develop LLift, a fully automated framework that interfaces with both a static analysis tool and an LLM. By carefully designing the framework and the prompts, we are able to overcome a number of challenges, including bug-specific modeling, the large problem scope, the non-deterministic nature of LLMs, etc. Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs produced by static analysis, LLift demonstrates a potent capability, showcasing a reasonable precision (50%) and appearing to have no missing bugs. It even identified 13 previously unknown UBI bugs in the Linux kernel. This research paves the way for new opportunities and methodologies in using LLMs for bug discovery in extensive, real-world datasets.
http://arxiv.org/pdf/2308.00245
Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian
cs.SE, cs.AI
null
null
cs.SE
20230801
20231115
[ { "id": "2305.10601" }, { "id": "2107.03374" }, { "id": "2210.11610" }, { "id": "2305.16291" }, { "id": "2305.16151" }, { "id": "2303.18223" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "2203.02155" }, { "id": "2304.11938" }, { "id": "2304.03262" }, { "id": "2304.10513" }, { "id": "2201.11903" }, { "id": "2305.12138" }, { "id": "2305.12865" }, { "id": "2303.08774" }, { "id": "2306.01987" }, { "id": "2304.06815" } ]
2308.00352
9
Automatic Programming. The roots of automatic programming reach back deep into the previous century. In 1969, Waldinger & Lee (1969) introduced "PROW," a system designed to accept program specifications written in predicate calculus, generate algorithms, and create LISP implementations (McCarthy, 1978). Balzer (1985) and Soloway (1986) made efforts to advance automatic programming and identified potential methods to achieve it. Recent approaches use natural language processing (NLP) techniques (Ni et al., 2023; Skreta et al., 2023; Feng et al., 2020; Li et al., 2022; Chen et al., 2018; 2021b; Zhang et al., 2023). Automatic programming has grown into an industry delivering paid functions such as Microsoft Copilot. Lately, LLM-based agents (Yao et al., 2022; Shinn et al., 2023; Lin et al., 2023) have advanced automatic programming development. Among them, ReAct (Yao et al., 2022) and Reflexion (Shinn et al., 2023) utilize chain-of-thought prompts (Wei et al., 2022) to generate reasoning trajectories and action plans with LLMs.
2308.00352#9
MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework
Remarkable progress has been made on automated problem solving through societies of agents based on large language models (LLMs). Existing LLM-based multi-agent systems can already solve simple dialogue tasks. Solutions to more complex tasks, however, are complicated through logic inconsistencies due to cascading hallucinations caused by naively chaining LLMs. Here we introduce MetaGPT, an innovative meta-programming framework incorporating efficient human workflows into LLM-based multi-agent collaborations. MetaGPT encodes Standardized Operating Procedures (SOPs) into prompt sequences for more streamlined workflows, thus allowing agents with human-like domain expertise to verify intermediate results and reduce errors. MetaGPT utilizes an assembly line paradigm to assign diverse roles to various agents, efficiently breaking down complex tasks into subtasks involving many agents working together. On collaborative software engineering benchmarks, MetaGPT generates more coherent solutions than previous chat-based multi-agent systems. Our project can be found at https://github.com/geekan/MetaGPT
http://arxiv.org/pdf/2308.00352
Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, Jürgen Schmidhuber
cs.AI, cs.MA
null
null
cs.AI
20230801
20231106
[ { "id": "2308.12950" }, { "id": "2305.17066" }, { "id": "1511.09249" }, { "id": "2308.11432" }, { "id": "2306.08568" }, { "id": "2310.02304" }, { "id": "2303.08896" }, { "id": "2204.05999" }, { "id": "2309.16797" }, { "id": "2002.08155" }, { "id": "2305.16960" } ]
2308.00436
9
[Figure: worked example of SelfCheck's step checking. Question: let S be the set of complex numbers z such that the real part of $\frac{1}{z}$ is equal to $\frac{1}{6}$; this set forms a curve; find the area of the region inside the curve. Multi-step reasoning being checked: Step 0: let $z = x + yi$ be a complex number, where $x$ and $y$ are real numbers; Step 1: we are given that the real part of $\frac{1}{z}$ is equal to $\frac{1}{6}$; ... The checking stages for a later step are: Stage 1, target extraction ("The step completes the square to rewrite the equation in standard form of a circle. The next step directly follows from Step 4."); Stage 2, information collection; Stage 3, step regeneration ("To rewrite the equation in standard form of a circle, we need to complete the square for both $x$ and $y$. Simplifying: $(x-3)^2 + y^2 = 9$. This is now in the standard form of a circle with center $(3,0)$ and radius $3$."); Stage 4, result comparison ("In both solutions, ... However, in Solution 1, the completion of the square results in the center of the circle being ...").]
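The four checking stages shown in the figure can be sketched as a per-step loop; the `llm` callable and the prompt wording are hypothetical placeholders rather than SelfCheck's exact prompts.

```python
def check_step(question, steps, i, llm):
    """Zero-shot check of step i of a reasoning chain (illustrative sketch)."""
    # Stage 1: target extraction -- what is this step trying to achieve?
    target = llm(f"Question: {question}\nStep: {steps[i]}\nWhat is this step trying to do?")
    # Stage 2: information collection -- which earlier steps does it rely on?
    context = llm(f"Question: {question}\nPrevious steps: {steps[:i]}\n"
                  f"Which of these are needed to achieve: {target}?")
    # Stage 3: step regeneration -- redo the step independently from that context.
    regenerated = llm(f"Using only the following information: {context}\nAchieve: {target}")
    # Stage 4: result comparison -- does the original step agree with the regeneration?
    verdict = llm(f"Do these two steps reach the same conclusion?\n"
                  f"A: {steps[i]}\nB: {regenerated}\nAnswer SUPPORT or CONTRADICT.")
    return 1 if "SUPPORT" in verdict.upper() else -1

def check_solution(question, steps, llm):
    """Per-step scores, e.g. [1, 1, 1, 1, 1, -1, 1, 1] as in the figure."""
    return [check_step(question, steps, i, llm) for i in range(len(steps))]
```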
2308.00436#9
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
The recent progress in large language models (LLMs), especially the invention of chain-of-thought prompting, has made it possible to automatically answer questions by stepwise reasoning. However, when faced with more complicated problems that require non-linear thinking, even the strongest LLMs make mistakes. To address this, we explore whether LLMs are able to recognize errors in their own step-by-step reasoning, without resorting to external resources. To this end, we propose SelfCheck, a general-purpose zero-shot verification schema for recognizing such errors. We then use the results of these checks to improve question-answering performance by conducting weighted voting on multiple solutions to the question. We test SelfCheck on three datasets (GSM8K, MathQA, and MATH) and find that it successfully recognizes errors and, in turn, increases final answer accuracies.
http://arxiv.org/pdf/2308.00436
Ning Miao, Yee Whye Teh, Tom Rainforth
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230801
20231005
[ { "id": "2206.02336" }, { "id": "2302.13971" }, { "id": "2110.14168" }, { "id": "2305.20050" }, { "id": "2204.02311" }, { "id": "2301.13379" }, { "id": "2306.03872" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.08774" }, { "id": "2212.09561" } ]
2308.00675
9
• Building on the above finding, we relax the few-shot demo constraint and show that we can efficiently scale up to a significantly larger tool set, on a newly collected API usage dataset, by simply providing the LLMs with docs. • We show how to seamlessly add new tools along with their docs to a tool set for LLMs to solve unseen tasks on image editing and video tracking, all without any further demos, in a plug-and-play manner. • Finally, with unseen tools developed recently as building blocks, we showcase that LLMs are capable of re-inventing the popular yet even more recent works Grounded-SAM [23] and Track Anything [70], which suggests a potential path from zero-shot tool usage to automatic knowledge discovery. # 2 Related work
2308.00675#9
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Today, large language models (LLMs) are taught to use new tools by providing a few demonstrations of the tool's usage. Unfortunately, demonstrations are hard to acquire, and can result in undesirable biased usage if the wrong demonstration is chosen. Even in the rare scenario that demonstrations are readily available, there is no principled selection protocol to determine how many and which ones to provide. As tasks grow more complex, the selection search grows combinatorially and invariably becomes intractable. Our work provides an alternative to demonstrations: tool documentation. We advocate the use of tool documentation, descriptions for the individual tool usage, over demonstrations. We substantiate our claim through three main empirical findings on 6 tasks across both vision and language modalities. First, on existing benchmarks, zero-shot prompts with only tool documentation are sufficient for eliciting proper tool usage, achieving performance on par with few-shot prompts. Second, on a newly collected realistic tool-use dataset with hundreds of available tool APIs, we show that tool documentation is significantly more valuable than demonstrations, with zero-shot documentation significantly outperforming few-shot without documentation. Third, we highlight the benefits of tool documentations by tackling image generation and video tracking using just-released unseen state-of-the-art models as tools. Finally, we highlight the possibility of using tool documentation to automatically enable new applications: by using nothing more than the documentation of GroundingDino, Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the just-released Grounded-SAM and Track Anything models.
http://arxiv.org/pdf/2308.00675
Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister
cs.CL, cs.AI, cs.CV, cs.LG
null
null
cs.CL
20230801
20230801
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2204.10878" }, { "id": "1910.08210" }, { "id": "2107.07653" }, { "id": "2201.11903" }, { "id": "2305.17126" }, { "id": "1704.07535" }, { "id": "2205.01068" }, { "id": "2203.05115" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2305.04091" }, { "id": "2303.05499" }, { "id": "2107.07566" }, { "id": "2110.14168" }, { "id": "2210.03350" }, { "id": "2303.11381" }, { "id": "2101.06804" }, { "id": "2304.08354" }, { "id": "2212.14024" }, { "id": "2305.18752" }, { "id": "2211.10435" }, { "id": "2303.04671" }, { "id": "2210.12810" }, { "id": "1808.09588" }, { "id": "2304.11062" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2210.02406" }, { "id": "2212.10560" }, { "id": "2303.04129" }, { "id": "1704.01696" }, { "id": "2302.00923" }, { "id": "2211.12588" }, { "id": "1908.03557" }, { "id": "2210.05359" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2201.08239" }, { "id": "2303.16199" }, { "id": "2304.09842" }, { "id": "2204.01691" }, { "id": "2305.01625" }, { "id": "2303.12712" }, { "id": "2207.05608" }, { "id": "2303.03846" }, { "id": "2211.11559" }, { "id": "2207.01206" }, { "id": "2303.09014" }, { "id": "2205.12255" }, { "id": "2208.03188" } ]
2308.01423
9
The drafted code is then executed within ChatMOF's designated executor. Upon observing the outcome of this operation, ChatMOF contemplates the necessary steps to derive the final answer, culminating in the delivery of the requested response. The versatility of ChatMOF extends to handling diverse table data derived from text mining or rule-based coding processes. For questions related to the building blocks of a MOF, the MOFkey17 database proves instrumental. This particular database leverages rule-based methods to obtain insights about the organic linkers and metal clusters of a MOF, providing details about its topology and the potential presence or absence of interpenetration. In addition, for users seeking guidance on MOF synthesis, the DigiMOF44 database becomes a valuable resource. DigiMOF provides an array of synthesis conditions, extracted via text mining techniques from MOF-related academic papers, and includes information on organic and metal precursors and solvents.
2308.01423#9
ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks
ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to predict and generate metal-organic frameworks (MOFs). By leveraging a large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key details from textual inputs and delivers appropriate responses, thus eliminating the necessity for rigid structured queries. The system is comprised of three core components (i.e. an agent, a toolkit, and an evaluator) and it forms a robust pipeline that manages a variety of tasks, including data retrieval, property prediction, and structure generations. The study further explores the merits and constraints of using large language models (LLMs) AI system in material sciences using and showcases its transformative potential for future advancements.
http://arxiv.org/pdf/2308.01423
Yeonghun Kang, Jihan Kim
cs.CL, cs.AI, cs.LG, physics.chem-ph
null
null
cs.CL
20230801
20230825
[ { "id": "2302.13971" }, { "id": "2306.11296" }, { "id": "2303.17580" }, { "id": "2305.18365" }, { "id": "2305.10601" }, { "id": "1810.04805" }, { "id": "2211.09085" }, { "id": "2304.05376" }, { "id": "2212.05238" }, { "id": "2108.07258" }, { "id": "2110.06197" }, { "id": "2306.06283" }, { "id": "2008.02275" }, { "id": "2303.12712" }, { "id": "2210.03629" }, { "id": "2205.00445" }, { "id": "2009.03300" } ]
2308.00245
10
Figure 1 shows a case where UBITect's static analysis stage flags a potential UBI bug (a false alarm) and the subsequent symbolic execution stage times out and fails to reach a definitive conclusion. In other words, UBITect failed to rule out this case as a false alarm. As Table 1 presents, the static analysis stage generates a summary of sscanf() stating that it may not initialize parameters a, b, c, and d but does use them at Line 3. Consequently, the static analysis stage reports two use-before-initialization locations, at Line 3 and Line 4, respectively. There are two reasons for the static analysis stage to consider the case a potential bug: 1) inability to recognize special functions: for soundness, UBITect assumes that va_start() is a normal function. However, since it cannot find its definition, it has to conservatively assume that the arguments passed to it will be used inside. Unfortunately, in reality, va_start is a compiler built-in function that simply "prepares" the arguments without using them. 2) insensitivity to path constraints: it fails to recognize the path constraint, i.e., if (sscanf(...) >= 4), which ensures that the arguments a to d are initialized before use.
2308.00245#10
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Static analysis is a widely used technique in software engineering for identifying and mitigating bugs. However, a significant hurdle lies in achieving a delicate balance between precision and scalability. Large Language Models (LLMs) offer a promising alternative, as recent advances demonstrate remarkable capabilities in comprehending, generating, and even debugging code. Yet, the logic of bugs can be complex and require sophisticated reasoning and a large analysis scope spanning multiple functions. Therefore, at this point, LLMs are better used in an assistive role to complement static analysis. In this paper, we take a deep dive into the open space of LLM-assisted static analysis, using use-before-initialization (UBI) bugs as a case study. To this end, we develop LLift, a fully automated framework that interfaces with both a static analysis tool and an LLM. By carefully designing the framework and the prompts, we are able to overcome a number of challenges, including bug-specific modeling, the large problem scope, the non-deterministic nature of LLMs, etc. Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs produced by static analysis, LLift demonstrates a potent capability, showcasing a reasonable precision (50%) and appearing to have no missing bugs. It even identified 13 previously unknown UBI bugs in the Linux kernel. This research paves the way for new opportunities and methodologies in using LLMs for bug discovery in extensive, real-world datasets.
http://arxiv.org/pdf/2308.00245
Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian
cs.SE, cs.AI
null
null
cs.SE
20230801
20231115
[ { "id": "2305.10601" }, { "id": "2107.03374" }, { "id": "2210.11610" }, { "id": "2305.16291" }, { "id": "2305.16151" }, { "id": "2303.18223" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "2203.02155" }, { "id": "2304.11938" }, { "id": "2304.03262" }, { "id": "2304.10513" }, { "id": "2201.11903" }, { "id": "2305.12138" }, { "id": "2305.12865" }, { "id": "2303.08774" }, { "id": "2306.01987" }, { "id": "2304.06815" } ]
2308.00352
10
ReAct (Yao et al., 2022) and Reflexion (Shinn et al., 2023) utilize chain-of-thought prompts (Wei et al., 2022) to generate reasoning trajectories and action plans with LLMs. Both works demonstrate the effectiveness of the ReAct-style loop of reasoning as a design paradigm for empowering automatic programming. Additionally, Toolformer (Schick et al., 2023) can learn how to use external tools through simple APIs. The research most closely aligned with our work, by Li et al. (2023), proposes a straightforward role-play framework for programming that involves communication between agents playing different roles. Qian et al. (2023) utilize multiple agents for software development. Although existing papers (Li et al., 2023; Qian et al., 2023) have improved productivity, they have not fully tapped into effective workflows with structured output formats, which makes it harder to deal with complex software engineering issues.
2308.00352#10
MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework
Remarkable progress has been made on automated problem solving through societies of agents based on large language models (LLMs). Existing LLM-based multi-agent systems can already solve simple dialogue tasks. Solutions to more complex tasks, however, are complicated through logic inconsistencies due to cascading hallucinations caused by naively chaining LLMs. Here we introduce MetaGPT, an innovative meta-programming framework incorporating efficient human workflows into LLM-based multi-agent collaborations. MetaGPT encodes Standardized Operating Procedures (SOPs) into prompt sequences for more streamlined workflows, thus allowing agents with human-like domain expertise to verify intermediate results and reduce errors. MetaGPT utilizes an assembly line paradigm to assign diverse roles to various agents, efficiently breaking down complex tasks into subtasks involving many agents working together. On collaborative software engineering benchmarks, MetaGPT generates more coherent solutions than previous chat-based multi-agent systems. Our project can be found at https://github.com/geekan/MetaGPT
http://arxiv.org/pdf/2308.00352
Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, Jürgen Schmidhuber
cs.AI, cs.MA
null
null
cs.AI
20230801
20231106
[ { "id": "2308.12950" }, { "id": "2305.17066" }, { "id": "1511.09249" }, { "id": "2308.11432" }, { "id": "2306.08568" }, { "id": "2310.02304" }, { "id": "2303.08896" }, { "id": "2204.05999" }, { "id": "2309.16797" }, { "id": "2002.08155" }, { "id": "2305.16960" } ]
2308.00436
10
[Figure (continued): Stage 4, result comparison: "In both solutions, ... However, in Solution 1, the completion of the square results in the center of the circle being $(3,0)$ and a radius of $3$. In Solution 2, the completion of the square results in the center of the circle being $(3/2,0)$ and a radius of $3/2$." The candidate solution's own steps read: Step 4: cross-multiplying and rearranging to obtain an equation in $x$ and $y$; Step 5: completing the square, we obtain $\left(x-\frac{3}{2}\right)^2 + y^2 = \frac{9}{4}$. Therefore, Solution 1 contradicts the conclusion in Solution 2, as the centers and radii of the circles are different. Step checking results: 1, 1, 1, 1, 1, -1, 1, 1. Step 7: the area of the region inside the circle is $\frac{9\pi}{4}$, so the answer is $\frac{9\pi}{4}$. Integration function: confidence score $w$.]
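For reference, the algebra behind the regenerated step in Stage 3 can be written out directly (a standard derivation, not part of the figure):

```latex
\operatorname{Re}\!\left(\frac{1}{z}\right)
  = \operatorname{Re}\!\left(\frac{x - yi}{x^{2}+y^{2}}\right)
  = \frac{x}{x^{2}+y^{2}} = \frac{1}{6}
  \;\Longrightarrow\; x^{2}+y^{2} = 6x
  \;\Longrightarrow\; (x-3)^{2}+y^{2} = 9,
```

so the curve is a circle of radius 3 centered at (3, 0), which is why the comparison stage flags the candidate step that yields a radius of 3/2.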
2308.00436#10
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
The recent progress in large language models (LLMs), especially the invention of chain-of-thought prompting, has made it possible to automatically answer questions by stepwise reasoning. However, when faced with more complicated problems that require non-linear thinking, even the strongest LLMs make mistakes. To address this, we explore whether LLMs are able to recognize errors in their own step-by-step reasoning, without resorting to external resources. To this end, we propose SelfCheck, a general-purpose zero-shot verification schema for recognizing such errors. We then use the results of these checks to improve question-answering performance by conducting weighted voting on multiple solutions to the question. We test SelfCheck on three datasets (GSM8K, MathQA, and MATH) and find that it successfully recognizes errors and, in turn, increases final answer accuracies.
http://arxiv.org/pdf/2308.00436
Ning Miao, Yee Whye Teh, Tom Rainforth
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230801
20231005
[ { "id": "2206.02336" }, { "id": "2302.13971" }, { "id": "2110.14168" }, { "id": "2305.20050" }, { "id": "2204.02311" }, { "id": "2301.13379" }, { "id": "2306.03872" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.08774" }, { "id": "2212.09561" } ]
2308.00675
10
# 2 Related work LLMs with retrieval augmentation and tools. In spite of the remarkable achievements demonstrated by LLMs, their performance can be further boosted with external tool usage to be more accurate, efficient, or versatile for wider applications. The authors in [51] detailed the cognitive origins, the paradigm shift of foundation models, and the complementary roles of tools and models to LLMs. Example tool usage starts from knowledge retrieval [6, 20, 33, 74, 77] and expands to search engines [43, 31, 32, 62, 58, 46, 40], QA systems [56], calculators [15, 47, 56], the Python interpreter [18, 13, 65, 24, 46, 16], simulation engines [37], machine learning models [57, 73, 69, 40, 16], or even tools created by LLMs [11]. Pioneering works of LLMs with tools often rely on human supervision [62, 31] or additional self-supervised learning techniques [56], which pose challenges for practical plug-and-play usage. Recent advancements eliminate additional training by using example demos in the prompt [19, 75, 73, 57, 40, 46]. Our work further simplifies prompt design by only leveraging documentation for individual tools, while maintaining competitive performance.
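The final sentence above describes prompting with tool documentation alone. As a rough illustration of that idea, the sketch below assembles a zero-shot prompt from tool docstrings instead of usage demonstrations; the tool names, docstrings, and prompt wording are hypothetical and not taken from the paper.

```python
# Minimal sketch of documentation-only (zero-shot) tool prompting.
# The tool names and descriptions below are illustrative placeholders.
TOOL_DOCS = {
    "image_captioner": "image_captioner(image_path: str) -> str. Returns a one-sentence caption.",
    "calculator": "calculator(expression: str) -> float. Evaluates an arithmetic expression.",
}

def build_zero_shot_prompt(task: str) -> str:
    """Compose a prompt that lists only tool documentation (no usage demos)."""
    doc_section = "\n".join(f"- {name}: {doc}" for name, doc in TOOL_DOCS.items())
    return (
        "You can call the following tools. Their documentation is given below.\n"
        f"{doc_section}\n\n"
        f"Task: {task}\n"
        "Write the sequence of tool calls needed to solve the task."
    )

print(build_zero_shot_prompt("Describe the image at photo.png and count the words."))
```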
2308.00675#10
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Today, large language models (LLMs) are taught to use new tools by providing a few demonstrations of the tool's usage. Unfortunately, demonstrations are hard to acquire, and can result in undesirable biased usage if the wrong demonstration is chosen. Even in the rare scenario that demonstrations are readily available, there is no principled selection protocol to determine how many and which ones to provide. As tasks grow more complex, the selection search grows combinatorially and invariably becomes intractable. Our work provides an alternative to demonstrations: tool documentation. We advocate the use of tool documentation, descriptions for the individual tool usage, over demonstrations. We substantiate our claim through three main empirical findings on 6 tasks across both vision and language modalities. First, on existing benchmarks, zero-shot prompts with only tool documentation are sufficient for eliciting proper tool usage, achieving performance on par with few-shot prompts. Second, on a newly collected realistic tool-use dataset with hundreds of available tool APIs, we show that tool documentation is significantly more valuable than demonstrations, with zero-shot documentation significantly outperforming few-shot without documentation. Third, we highlight the benefits of tool documentations by tackling image generation and video tracking using just-released unseen state-of-the-art models as tools. Finally, we highlight the possibility of using tool documentation to automatically enable new applications: by using nothing more than the documentation of GroundingDino, Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the just-released Grounded-SAM and Track Anything models.
http://arxiv.org/pdf/2308.00675
Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister
cs.CL, cs.AI, cs.CV, cs.LG
null
null
cs.CL
20230801
20230801
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2204.10878" }, { "id": "1910.08210" }, { "id": "2107.07653" }, { "id": "2201.11903" }, { "id": "2305.17126" }, { "id": "1704.07535" }, { "id": "2205.01068" }, { "id": "2203.05115" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2305.04091" }, { "id": "2303.05499" }, { "id": "2107.07566" }, { "id": "2110.14168" }, { "id": "2210.03350" }, { "id": "2303.11381" }, { "id": "2101.06804" }, { "id": "2304.08354" }, { "id": "2212.14024" }, { "id": "2305.18752" }, { "id": "2211.10435" }, { "id": "2303.04671" }, { "id": "2210.12810" }, { "id": "1808.09588" }, { "id": "2304.11062" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2210.02406" }, { "id": "2212.10560" }, { "id": "2303.04129" }, { "id": "1704.01696" }, { "id": "2302.00923" }, { "id": "2211.12588" }, { "id": "1908.03557" }, { "id": "2210.05359" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2201.08239" }, { "id": "2303.16199" }, { "id": "2304.09842" }, { "id": "2204.01691" }, { "id": "2305.01625" }, { "id": "2303.12712" }, { "id": "2207.05608" }, { "id": "2303.03846" }, { "id": "2211.11559" }, { "id": "2207.01206" }, { "id": "2303.09014" }, { "id": "2205.12255" }, { "id": "2208.03188" } ]
2308.01423
10
papers, and includes information on organic and metal precursors, and solvent. Question: How high is the accessible surface area of JUKPAI? Thought: I need to find the accessible surface area of JUKPAI Action: search_csv Action Input: "Search name JUKPAI and provide information of its accessible surface area" [Table Searcher] Thought: To search for a specific value in the DataFrame, we can use the ... [Table Searcher] Input: python print(df.loc[...].to_markdown()) [Table Searcher] Observation: Accessible Surface Area (m^2/cm^3) 4837 1474.22 [Table Searcher] Final Thought: The accessible surface area of name 'JUKPAI' is 1474.22 ... Thought: I now know the final answer Answer: The accessible surface area for JUKPAI is 1474.22 m^2/cm^3 Figure 3. Example of a Table Searcher for the question "How high is the surface area of JUKPAI?". The human's question prompts the system to devise a strategy. From this, Python code is generated and executed to extract the desired information, which is then returned as the final answer.
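The Table Searcher step in the transcript above boils down to a simple pandas lookup. The sketch below shows one way it could look; the CSV path, the column names, and the exact filter are assumptions inferred from the transcript, not ChatMOF's actual code.

```python
import pandas as pd

# Hypothetical CSV of pre-computed MOF properties (path and column names are assumptions).
df = pd.read_csv("mof_properties.csv")

# Look up the accessible surface area for the structure named 'JUKPAI',
# mirroring the df.loc[...] call shown (garbled) in the transcript.
row = df.loc[df["name"] == "JUKPAI", ["name", "Accessible Surface Area (m^2/cm^3)"]]
print(row.to_markdown())  # e.g. a one-row table containing the value 1474.22
```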
2308.01423#10
ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks
ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to predict and generate metal-organic frameworks (MOFs). By leveraging a large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key details from textual inputs and delivers appropriate responses, thus eliminating the necessity for rigid structured queries. The system is comprised of three core components (i.e. an agent, a toolkit, and an evaluator) and it forms a robust pipeline that manages a variety of tasks, including data retrieval, property prediction, and structure generations. The study further explores the merits and constraints of using large language models (LLMs) AI system in material sciences using and showcases its transformative potential for future advancements.
http://arxiv.org/pdf/2308.01423
Yeonghun Kang, Jihan Kim
cs.CL, cs.AI, cs.LG, physics.chem-ph
null
null
cs.CL
20230801
20230825
[ { "id": "2302.13971" }, { "id": "2306.11296" }, { "id": "2303.17580" }, { "id": "2305.18365" }, { "id": "2305.10601" }, { "id": "1810.04805" }, { "id": "2211.09085" }, { "id": "2304.05376" }, { "id": "2212.05238" }, { "id": "2108.07258" }, { "id": "2110.06197" }, { "id": "2306.06283" }, { "id": "2008.02275" }, { "id": "2303.12712" }, { "id": "2210.03629" }, { "id": "2205.00445" }, { "id": "2009.03300" } ]
2308.00245
11
2.2 Practical Challenges of Static Analysis In light of our motivating example of the sscanf() case, we can summarize the reasons for UBITect's failure as follows: Inherent Knowledge Boundaries. Developers need to model specific functions or language features; otherwise, the unmodeled behaviors compromise the correctness of the results. For compiler built-in functions, e.g., va_start(), their definitions are simply not available. Beyond this example, there exists an array of other scenarios, which are particularly prevalent in the Linux kernel. These situations include assembly code, hardware behaviors, callback functions, concurrency, and compiler built-in functions. However, in practical terms, it is often time-consuming to discover and model all these cases, because they can be highly dependent on the analysis target and evolve over time. This limitation often compromises the effectiveness of static analysis, leaving it less precise and comprehensive than desired.
2308.00245#11
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Static analysis is a widely used technique in software engineering for identifying and mitigating bugs. However, a significant hurdle lies in achieving a delicate balance between precision and scalability. Large Language Models (LLMs) offer a promising alternative, as recent advances demonstrate remarkable capabilities in comprehending, generating, and even debugging code. Yet, the logic of bugs can be complex and require sophisticated reasoning and a large analysis scope spanning multiple functions. Therefore, at this point, LLMs are better used in an assistive role to complement static analysis. In this paper, we take a deep dive into the open space of LLM-assisted static analysis, using use-before-initialization (UBI) bugs as a case study. To this end, we develop LLift, a fully automated framework that interfaces with both a static analysis tool and an LLM. By carefully designing the framework and the prompts, we are able to overcome a number of challenges, including bug-specific modeling, the large problem scope, the non-deterministic nature of LLMs, etc. Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs produced by static analysis, LLift demonstrates a potent capability, showcasing a reasonable precision (50%) and appearing to have no missing bugs. It even identified 13 previously unknown UBI bugs in the Linux kernel. This research paves the way for new opportunities and methodologies in using LLMs for bug discovery in extensive, real-world datasets.
http://arxiv.org/pdf/2308.00245
Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian
cs.SE, cs.AI
null
null
cs.SE
20230801
20231115
[ { "id": "2305.10601" }, { "id": "2107.03374" }, { "id": "2210.11610" }, { "id": "2305.16291" }, { "id": "2305.16151" }, { "id": "2303.18223" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "2203.02155" }, { "id": "2304.11938" }, { "id": "2304.03262" }, { "id": "2304.10513" }, { "id": "2201.11903" }, { "id": "2305.12138" }, { "id": "2305.12865" }, { "id": "2303.08774" }, { "id": "2306.01987" }, { "id": "2304.06815" } ]
2308.00352
11
LLM-Based Multi-Agent Frameworks Recently, LLM-based autonomous agents have gained tremendous interest in both industry and academia (Wang et al., 2023b). Many works (Wang et al., 2023c; Du et al., 2023; Zhuge et al., 2023; Hao et al., 2023; Akata et al., 2023) have improved the problem-solving abilities of LLMs by integrating discussions among multiple agents. Stable-Alignment (Liu et al., 2023) creates instruction datasets by deriving consensus on value judgments through interactions across a sandbox with LLM agents. Other works focus on sociological phenomena. For example, Generative Agents (Park et al., 2023) creates a "town" of 25 agents to study language interaction, social understanding, and collective memory. In the Natural Language-Based Society of Mind (NLSOM) (Zhuge et al., 2023), agents with different functions interact to solve complex tasks through multiple rounds of "mindstorms." Cai et al. (2023) propose a model for cost reduction by combining large models as tool makers and small models as tool users.
2308.00352#11
MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework
Remarkable progress has been made on automated problem solving through societies of agents based on large language models (LLMs). Existing LLM-based multi-agent systems can already solve simple dialogue tasks. Solutions to more complex tasks, however, are complicated through logic inconsistencies due to cascading hallucinations caused by naively chaining LLMs. Here we introduce MetaGPT, an innovative meta-programming framework incorporating efficient human workflows into LLM-based multi-agent collaborations. MetaGPT encodes Standardized Operating Procedures (SOPs) into prompt sequences for more streamlined workflows, thus allowing agents with human-like domain expertise to verify intermediate results and reduce errors. MetaGPT utilizes an assembly line paradigm to assign diverse roles to various agents, efficiently breaking down complex tasks into subtasks involving many agents working together. On collaborative software engineering benchmarks, MetaGPT generates more coherent solutions than previous chat-based multi-agent systems. Our project can be found at https://github.com/geekan/MetaGPT
http://arxiv.org/pdf/2308.00352
Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, Jürgen Schmidhuber
cs.AI, cs.MA
null
null
cs.AI
20230801
20231106
[ { "id": "2308.12950" }, { "id": "2305.17066" }, { "id": "1511.09249" }, { "id": "2308.11432" }, { "id": "2306.08568" }, { "id": "2310.02304" }, { "id": "2303.08896" }, { "id": "2204.05999" }, { "id": "2309.16797" }, { "id": "2002.08155" }, { "id": "2305.16960" } ]
2308.00436
11
Figure 1: Example of using SelfCheck, focusing on the checking of a particular step (Step 5). To check the correctness of the step, SelfCheck goes through 4 stages. First, in the target extraction stage, it figures out that the main purpose of Step 5 is to complete the square. In the information collection stage, it then establishes that Step 5 only directly relies on Step 4. Next, the step regeneration stage instructs the LLM to complete the square independently, only using Step 4 as context. The regeneration result shows that the center and radius of the circle are (3, 0) and 3, which is different from what is implied by the original Step 5. Consequently, the result comparison stage concludes that Step 5 is likely to be wrong. After checking all the steps, SelfCheck integrates the results to form an overall confidence score, w. See Appendix A for a complete version of the example. # 3 SELFCHECK: USING LLMS TO CHECK THEIR OWN REASONING Rather than relying on external resources or problem-specific data like the aforementioned approaches, it would be highly beneficial if we could develop self-contained checking schemes that require only the original LLM itself. In other words, we would like to use the LLM to identify errors in its own step-by-step reasoning, analogously to how a human might go back to check their working.
2308.00436#11
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
The recent progress in large language models (LLMs), especially the invention of chain-of-thought prompting, has made it possible to automatically answer questions by stepwise reasoning. However, when faced with more complicated problems that require non-linear thinking, even the strongest LLMs make mistakes. To address this, we explore whether LLMs are able to recognize errors in their own step-by-step reasoning, without resorting to external resources. To this end, we propose SelfCheck, a general-purpose zero-shot verification schema for recognizing such errors. We then use the results of these checks to improve question-answering performance by conducting weighted voting on multiple solutions to the question. We test SelfCheck on three datasets (GSM8K, MathQA, and MATH) and find that it successfully recognizes errors and, in turn, increases final answer accuracies.
http://arxiv.org/pdf/2308.00436
Ning Miao, Yee Whye Teh, Tom Rainforth
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230801
20231005
[ { "id": "2206.02336" }, { "id": "2302.13971" }, { "id": "2110.14168" }, { "id": "2305.20050" }, { "id": "2204.02311" }, { "id": "2301.13379" }, { "id": "2306.03872" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.08774" }, { "id": "2212.09561" } ]
2308.00675
11
Planning with LLMs. Language models have been shown to have the potential to conduct planning for solving complex tasks, or to decompose complex tasks into sub-problems, when prompted properly. [21, 22] retrieve demos at test-time with large knowledge space coverage to generate admissible actions. [28] relies on pre-designed demos for task decomposition. Similarly, recent works on tool use with LLMs leverage example demonstrations of solving example tasks with tool-use plans [13, 19, 75, 73, 57, 40, 46]. However, crafting demos of interactions between tools may be challenging in practice when the number of tools surges. Concurrent work [48, 52, 71] tackles the challenge by using strong LLMs such as GPT-4 [45] to create large instruction-following datasets that cover diverse instructions and corresponding tool-use plans, typically through mechanisms like self-instruct [66]. The resultant datasets can then be used to finetune and equip other LLMs (e.g., LLaMA [63] and OPT [79]) with the ability to use a large collection of tools for unseen instructions. On the other hand, our work showcases the potential for LLMs to utilize any unseen new tools by reading their tool docs.
2308.00675#11
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Today, large language models (LLMs) are taught to use new tools by providing a few demonstrations of the tool's usage. Unfortunately, demonstrations are hard to acquire, and can result in undesirable biased usage if the wrong demonstration is chosen. Even in the rare scenario that demonstrations are readily available, there is no principled selection protocol to determine how many and which ones to provide. As tasks grow more complex, the selection search grows combinatorially and invariably becomes intractable. Our work provides an alternative to demonstrations: tool documentation. We advocate the use of tool documentation, descriptions for the individual tool usage, over demonstrations. We substantiate our claim through three main empirical findings on 6 tasks across both vision and language modalities. First, on existing benchmarks, zero-shot prompts with only tool documentation are sufficient for eliciting proper tool usage, achieving performance on par with few-shot prompts. Second, on a newly collected realistic tool-use dataset with hundreds of available tool APIs, we show that tool documentation is significantly more valuable than demonstrations, with zero-shot documentation significantly outperforming few-shot without documentation. Third, we highlight the benefits of tool documentations by tackling image generation and video tracking using just-released unseen state-of-the-art models as tools. Finally, we highlight the possibility of using tool documentation to automatically enable new applications: by using nothing more than the documentation of GroundingDino, Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the just-released Grounded-SAM and Track Anything models.
http://arxiv.org/pdf/2308.00675
Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister
cs.CL, cs.AI, cs.CV, cs.LG
null
null
cs.CL
20230801
20230801
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2204.10878" }, { "id": "1910.08210" }, { "id": "2107.07653" }, { "id": "2201.11903" }, { "id": "2305.17126" }, { "id": "1704.07535" }, { "id": "2205.01068" }, { "id": "2203.05115" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2305.04091" }, { "id": "2303.05499" }, { "id": "2107.07566" }, { "id": "2110.14168" }, { "id": "2210.03350" }, { "id": "2303.11381" }, { "id": "2101.06804" }, { "id": "2304.08354" }, { "id": "2212.14024" }, { "id": "2305.18752" }, { "id": "2211.10435" }, { "id": "2303.04671" }, { "id": "2210.12810" }, { "id": "1808.09588" }, { "id": "2304.11062" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2210.02406" }, { "id": "2212.10560" }, { "id": "2303.04129" }, { "id": "1704.01696" }, { "id": "2302.00923" }, { "id": "2211.12588" }, { "id": "1908.03557" }, { "id": "2210.05359" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2201.08239" }, { "id": "2303.16199" }, { "id": "2304.09842" }, { "id": "2204.01691" }, { "id": "2305.01625" }, { "id": "2303.12712" }, { "id": "2207.05608" }, { "id": "2303.03846" }, { "id": "2211.11559" }, { "id": "2207.01206" }, { "id": "2303.09014" }, { "id": "2205.12255" }, { "id": "2208.03188" } ]
2308.01423
11
final answer. The accuracy of the look-up table search is contingent on the pre-calculated values available in the specific files. For queries regarding the properties of MOFs that are not available, computational simulation can serve as an attractive alternative method, but unfortunately, simulations are a time-intensive process and require an abundance of computational resources45. The best resolution to such challenges is the application of machine learning models, which enable high-accuracy predictions grounded in extensive data. In the case of pre-trained machine learning models, predictions are quick and can be made for a significant volume of substances simultaneously, making them an excellent toolkit for integration into ChatMOF. As an appropriate toolkit for the prediction task, ChatMOF uses the MOFTransformer46,47 model that has been developed in our group for the universal prediction of MOF properties. This model leverages both local features, such as atoms and bonds, and global features such as surface area and topology. Having undergone pre-training with one million hypothetical MOFs and subsequent fine-tuning for specific properties, MOFTransformer shows high performance in predicting various properties. Moreover, it affords the convenience of predicting the desired property by retaining the model structure while altering the weights to those of a model fine-tuned for that property.
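As a rough sketch of the routing step this paragraph describes (choosing weights fine-tuned for the requested property while keeping the model architecture fixed), the snippet below maps property names to checkpoints; the keys and paths are hypothetical, and the real ChatMOF/MOFTransformer interface may differ.

```python
# Hypothetical mapping from predictable properties to fine-tuned MOFTransformer checkpoints.
CHECKPOINTS = {
    "CO2_henry_coefficient_298K": "models/CO2_henry_coefficient_298K/best.ckpt",
    "accessible_surface_area": "models/accessible_surface_area/best.ckpt",
}

def select_checkpoint(requested_property: str) -> str:
    """Pick the fine-tuned weights matching the requested property (illustrative only)."""
    try:
        return CHECKPOINTS[requested_property]
    except KeyError:
        raise ValueError(f"No fine-tuned model available for '{requested_property}'")

print(select_checkpoint("CO2_henry_coefficient_298K"))
```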
2308.01423#11
ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks
ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to predict and generate metal-organic frameworks (MOFs). By leveraging a large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key details from textual inputs and delivers appropriate responses, thus eliminating the necessity for rigid structured queries. The system is comprised of three core components (i.e. an agent, a toolkit, and an evaluator) and it forms a robust pipeline that manages a variety of tasks, including data retrieval, property prediction, and structure generations. The study further explores the merits and constraints of using large language models (LLMs) AI system in material sciences using and showcases its transformative potential for future advancements.
http://arxiv.org/pdf/2308.01423
Yeonghun Kang, Jihan Kim
cs.CL, cs.AI, cs.LG, physics.chem-ph
null
null
cs.CL
20230801
20230825
[ { "id": "2302.13971" }, { "id": "2306.11296" }, { "id": "2303.17580" }, { "id": "2305.18365" }, { "id": "2305.10601" }, { "id": "1810.04805" }, { "id": "2211.09085" }, { "id": "2304.05376" }, { "id": "2212.05238" }, { "id": "2108.07258" }, { "id": "2110.06197" }, { "id": "2306.06283" }, { "id": "2008.02275" }, { "id": "2303.12712" }, { "id": "2210.03629" }, { "id": "2205.00445" }, { "id": "2009.03300" } ]
2308.00245
12
Exhaustive Path Exploration. Correctly handling cases like sscanf() requires the analysis to consider the check sscanf(...) >= 4. Unfortunately, existing path-sensitive static analysis (and symbolic execution) techniques operate under a methodical but exhaustive paradigm, exploring all potential execution paths through the codebase. While this approach is theoretically comprehensive, it often leads to a combinatorial explosion. The vast array of execution paths necessitates the exploration of myriad functions, many of which ultimately prove irrelevant to the specific analysis task at hand. In the sscanf() case, its return value is computed inside an unbounded loop that iterates over an unknown string variable buf, which is exactly what causes UBITect's symbolic execution to time out. Figure 2: The overview of LLift. Start with the cases discarded as undecided by UBITect (about 40% of its results, versus 60% with successful symbolic execution) and determine whether these potential bugs are true or false. Specifically, they detail the guarantees a routine offers based on its observable outcomes.
2308.00245#12
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Static analysis is a widely used technique in software engineering for identifying and mitigating bugs. However, a significant hurdle lies in achieving a delicate balance between precision and scalability. Large Language Models (LLMs) offer a promising alternative, as recent advances demonstrate remarkable capabilities in comprehending, generating, and even debugging code. Yet, the logic of bugs can be complex and require sophisticated reasoning and a large analysis scope spanning multiple functions. Therefore, at this point, LLMs are better used in an assistive role to complement static analysis. In this paper, we take a deep dive into the open space of LLM-assisted static analysis, using use-before-initialization (UBI) bugs as a case study. To this end, we develop LLift, a fully automated framework that interfaces with both a static analysis tool and an LLM. By carefully designing the framework and the prompts, we are able to overcome a number of challenges, including bug-specific modeling, the large problem scope, the non-deterministic nature of LLMs, etc. Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs produced by static analysis, LLift demonstrates a potent capability, showcasing a reasonable precision (50%) and appearing to have no missing bugs. It even identified 13 previously unknown UBI bugs in the Linux kernel. This research paves the way for new opportunities and methodologies in using LLMs for bug discovery in extensive, real-world datasets.
http://arxiv.org/pdf/2308.00245
Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian
cs.SE, cs.AI
null
null
cs.SE
20230801
20231115
[ { "id": "2305.10601" }, { "id": "2107.03374" }, { "id": "2210.11610" }, { "id": "2305.16291" }, { "id": "2305.16151" }, { "id": "2303.18223" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "2203.02155" }, { "id": "2304.11938" }, { "id": "2304.03262" }, { "id": "2304.10513" }, { "id": "2201.11903" }, { "id": "2305.12138" }, { "id": "2305.12865" }, { "id": "2303.08774" }, { "id": "2306.01987" }, { "id": "2304.06815" } ]
2308.00352
12
Some works emphasize cooperation and competition related to planning and strategy (Bakhtin et al., 2022); others propose LLM-based economies (Zhuge et al., 2023). In our implementations, we observe several challenges to multi-agent cooperation, such as maintaining consistency and avoiding unproductive cycles. This motivates our focus on applying advanced concepts such as Standard Operating Procedures in software development to multi-agent frameworks. # 3 METAGPT: A META-PROGRAMMING FRAMEWORK MetaGPT is a meta-programming framework for LLM-based multi-agent systems. Sec. 3.1 provides an explanation of role specialization, workflow and structured communication in this framework, and illustrates how to organize a multi-agent system within the context of SOPs. Sec. 3.2 presents a communication protocol that enhances role communication efficiency. We also implement structured communication interfaces and an effective publish-subscribe mechanism. These methods enable agents to obtain directional information from other roles and public information
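A minimal sketch of the shared message pool with a publish-subscribe mechanism described here (and detailed in Sec. 3.2) is given below; the class and field names are illustrative assumptions rather than MetaGPT's actual implementation.

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class Message:
    content: str
    cause_by: str      # the action that produced the message, e.g. "WriteTasks"
    sent_from: str     # publishing role, e.g. "ProjectManager"
    send_to: str       # intended subscriber role, e.g. "Engineer"

class MessagePool:
    """Shared pool: roles publish structured messages; subscribers pull only what they follow."""
    def __init__(self):
        self.messages = []
        self.subscriptions = defaultdict(set)  # role -> message types (cause_by) it follows

    def subscribe(self, role: str, cause_by: str) -> None:
        self.subscriptions[role].add(cause_by)

    def publish(self, msg: Message) -> None:
        self.messages.append(msg)

    def fetch(self, role: str) -> list:
        wanted = self.subscriptions[role]
        return [m for m in self.messages if m.cause_by in wanted or m.send_to == role]

pool = MessagePool()
pool.subscribe("Engineer", "WriteTasks")
pool.publish(Message("Implementation approach ...", "WriteTasks", "ProjectManager", "Engineer"))
print([m.content for m in pool.fetch("Engineer")])
```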
2308.00352#12
MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework
Remarkable progress has been made on automated problem solving through societies of agents based on large language models (LLMs). Existing LLM-based multi-agent systems can already solve simple dialogue tasks. Solutions to more complex tasks, however, are complicated through logic inconsistencies due to cascading hallucinations caused by naively chaining LLMs. Here we introduce MetaGPT, an innovative meta-programming framework incorporating efficient human workflows into LLM-based multi-agent collaborations. MetaGPT encodes Standardized Operating Procedures (SOPs) into prompt sequences for more streamlined workflows, thus allowing agents with human-like domain expertise to verify intermediate results and reduce errors. MetaGPT utilizes an assembly line paradigm to assign diverse roles to various agents, efficiently breaking down complex tasks into subtasks involving many agents working together. On collaborative software engineering benchmarks, MetaGPT generates more coherent solutions than previous chat-based multi-agent systems. Our project can be found at https://github.com/geekan/MetaGPT
http://arxiv.org/pdf/2308.00352
Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, Jürgen Schmidhuber
cs.AI, cs.MA
null
null
cs.AI
20230801
20231106
[ { "id": "2308.12950" }, { "id": "2305.17066" }, { "id": "1511.09249" }, { "id": "2308.11432" }, { "id": "2306.08568" }, { "id": "2310.02304" }, { "id": "2303.08896" }, { "id": "2204.05999" }, { "id": "2309.16797" }, { "id": "2002.08155" }, { "id": "2305.16960" } ]
2308.00436
12
Unfortunately, directly asking the LLM to check its own reasoning is largely ineffective: it almost invariably declares that the original answer is correct, with Ling et al. (2023) finding answers checked in this way are deemed correct more than 90% of the time regardless of whether they actually are. As we will show in Section 5, individually prompting the LLM to check each step in the CoT reasoning fares slightly better, but is still only able to offer marginal gains compared to not checking at all.
2308.00436#12
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
The recent progress in large language models (LLMs), especially the invention of chain-of-thought prompting, has made it possible to automatically answer questions by stepwise reasoning. However, when faced with more complicated problems that require non-linear thinking, even the strongest LLMs make mistakes. To address this, we explore whether LLMs are able to recognize errors in their own step-by-step reasoning, without resorting to external resources. To this end, we propose SelfCheck, a general-purpose zero-shot verification schema for recognizing such errors. We then use the results of these checks to improve question-answering performance by conducting weighted voting on multiple solutions to the question. We test SelfCheck on three datasets (GSM8K, MathQA, and MATH) and find that it successfully recognizes errors and, in turn, increases final answer accuracies.
http://arxiv.org/pdf/2308.00436
Ning Miao, Yee Whye Teh, Tom Rainforth
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230801
20231005
[ { "id": "2206.02336" }, { "id": "2302.13971" }, { "id": "2110.14168" }, { "id": "2305.20050" }, { "id": "2204.02311" }, { "id": "2301.13379" }, { "id": "2306.03872" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.08774" }, { "id": "2212.09561" } ]
2308.00675
12
Demonstration and Documentation. Learning from demonstration is popular in reinforcement learning [49, 4, 44, 55]. [8] propose the in-context learning algorithm for efficient and effective downstream task adaptations through showing example demonstrations. Inspired by this success, most existing LLM tool-using works rely on few-shot demonstrations [13, 19, 75, 73, 57, 40, 46]. However, [12] show that having more example demonstrations might counter-intuitively degrade performance, and a careful selection might be needed. [35] proposes a retrieval method for demo selection, which implicitly requires a larger set of examples to select from. Using documentation to improve algorithms is relatively under-explored. [7, 82] propose document reading algorithms for specific games. [83] introduced DocPrompting, which employs a retriever trained on the given training data to boost code generation by retrieving relevant documents. In this work, we take a step towards exploring zero-shot tool planning in LLMs solely with the aid of documentation, and investigate a wide range of diverse tasks from language to vision domains. While [64, 42] showcase pure zero-shot planning capability of LLMs, they do not study either the tool usage or the
2308.00675#12
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Today, large language models (LLMs) are taught to use new tools by providing a few demonstrations of the tool's usage. Unfortunately, demonstrations are hard to acquire, and can result in undesirable biased usage if the wrong demonstration is chosen. Even in the rare scenario that demonstrations are readily available, there is no principled selection protocol to determine how many and which ones to provide. As tasks grow more complex, the selection search grows combinatorially and invariably becomes intractable. Our work provides an alternative to demonstrations: tool documentation. We advocate the use of tool documentation, descriptions for the individual tool usage, over demonstrations. We substantiate our claim through three main empirical findings on 6 tasks across both vision and language modalities. First, on existing benchmarks, zero-shot prompts with only tool documentation are sufficient for eliciting proper tool usage, achieving performance on par with few-shot prompts. Second, on a newly collected realistic tool-use dataset with hundreds of available tool APIs, we show that tool documentation is significantly more valuable than demonstrations, with zero-shot documentation significantly outperforming few-shot without documentation. Third, we highlight the benefits of tool documentations by tackling image generation and video tracking using just-released unseen state-of-the-art models as tools. Finally, we highlight the possibility of using tool documentation to automatically enable new applications: by using nothing more than the documentation of GroundingDino, Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the just-released Grounded-SAM and Track Anything models.
http://arxiv.org/pdf/2308.00675
Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister
cs.CL, cs.AI, cs.CV, cs.LG
null
null
cs.CL
20230801
20230801
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2204.10878" }, { "id": "1910.08210" }, { "id": "2107.07653" }, { "id": "2201.11903" }, { "id": "2305.17126" }, { "id": "1704.07535" }, { "id": "2205.01068" }, { "id": "2203.05115" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2305.04091" }, { "id": "2303.05499" }, { "id": "2107.07566" }, { "id": "2110.14168" }, { "id": "2210.03350" }, { "id": "2303.11381" }, { "id": "2101.06804" }, { "id": "2304.08354" }, { "id": "2212.14024" }, { "id": "2305.18752" }, { "id": "2211.10435" }, { "id": "2303.04671" }, { "id": "2210.12810" }, { "id": "1808.09588" }, { "id": "2304.11062" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2210.02406" }, { "id": "2212.10560" }, { "id": "2303.04129" }, { "id": "1704.01696" }, { "id": "2302.00923" }, { "id": "2211.12588" }, { "id": "1908.03557" }, { "id": "2210.05359" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2201.08239" }, { "id": "2303.16199" }, { "id": "2304.09842" }, { "id": "2204.01691" }, { "id": "2305.01625" }, { "id": "2303.12712" }, { "id": "2207.05608" }, { "id": "2303.03846" }, { "id": "2211.11559" }, { "id": "2207.01206" }, { "id": "2303.09014" }, { "id": "2205.12255" }, { "id": "2208.03188" } ]
2308.01423
12
The key to generating accurate responses is selecting the appropriate fine-tuned MOFTransformer model and the material to which it will be applied, based on the query. Similar to HuggingGPT10, ChatMOF does not directly compute the material properties, but rather selects the appropriate machine learning model. Figure 4 shows the example prompt for ChatMOF using the MOFTransformer predictor. When a user asks a question such as "At room temperature (298K), what's the CO2 Henry coefficient for XEGKUR?", the generator develops a strategy for solving the problem. The optimal fine-tuned model of MOFTransformer that aligns with the plan, in this case 'CO2_henry_coefficient_298K', is chosen, and the material to predict (XEGKUR) is decided. Following this, the selected model is employed to make the prediction. Finally, ChatMOF composes a response to the query using the language model, based on the values predicted by the machine learning model. Since this is a logarithmic value, ChatMOF selects the "Python_REPL" tool and calculates an exponential value to get the final answer.
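The final conversion step mentioned above, turning the model's logarithmic output back into a Henry coefficient, is simple enough to sketch. Note that the log base is an assumption here (base 10 below), since the text does not state it.

```python
import math

log_henry = -3.62769  # value reported by the predictor (logarithmic scale)

# Assumption: the prediction is log10 of the Henry coefficient in mol/(kg*Pa).
henry_coefficient = 10 ** log_henry
print(f"CO2 Henry coefficient ~ {henry_coefficient:.3e} mol/(kg*Pa)")

# If the model actually used a natural log, math.exp(log_henry) would apply instead.
```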
2308.01423#12
ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks
ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to predict and generate metal-organic frameworks (MOFs). By leveraging a large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key details from textual inputs and delivers appropriate responses, thus eliminating the necessity for rigid structured queries. The system is comprised of three core components (i.e. an agent, a toolkit, and an evaluator) and it forms a robust pipeline that manages a variety of tasks, including data retrieval, property prediction, and structure generations. The study further explores the merits and constraints of using large language models (LLMs) AI system in material sciences using and showcases its transformative potential for future advancements.
http://arxiv.org/pdf/2308.01423
Yeonghun Kang, Jihan Kim
cs.CL, cs.AI, cs.LG, physics.chem-ph
null
null
cs.CL
20230801
20230825
[ { "id": "2302.13971" }, { "id": "2306.11296" }, { "id": "2303.17580" }, { "id": "2305.18365" }, { "id": "2305.10601" }, { "id": "1810.04805" }, { "id": "2211.09085" }, { "id": "2304.05376" }, { "id": "2212.05238" }, { "id": "2108.07258" }, { "id": "2110.06197" }, { "id": "2306.06283" }, { "id": "2008.02275" }, { "id": "2303.12712" }, { "id": "2210.03629" }, { "id": "2205.00445" }, { "id": "2009.03300" } ]
2308.00245
13
Specifically, they detail the guarantees a routine offers based on its observable outcomes. These outcomes are defined as updates to its parameters (and return value) for a path of $R$; in particular, an outcome does not include the initialization of variables, for convenience. In the study of UBI bugs, for a routine $R$ that can yield a set of outcomes $\mathcal{O}$, the postcondition is $\mathcal{P}_R : \mathcal{S}(R) \to \mathcal{O} \times must\_init_R$ (2). Here, $must\_init_R$ denotes the set of variables that must be initialized. Motivating Example. Consider the sscanf() function in our motivating example. Based on these return values, the postconditions assure the initialization of certain variables: Figure 3: A typical type of potential UBI bug. For each suspicious variable $X$, we expect it to 1) have an initializer function that probably initializes $X$ and 2) use $X$.
2308.00245#13
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Static analysis is a widely used technique in software engineering for identifying and mitigating bugs. However, a significant hurdle lies in achieving a delicate balance between precision and scalability. Large Language Models (LLMs) offer a promising alternative, as recent advances demonstrate remarkable capabilities in comprehending, generating, and even debugging code. Yet, the logic of bugs can be complex and require sophisticated reasoning and a large analysis scope spanning multiple functions. Therefore, at this point, LLMs are better used in an assistive role to complement static analysis. In this paper, we take a deep dive into the open space of LLM-assisted static analysis, using use-before-initialization (UBI) bugs as a case study. To this end, we develop LLift, a fully automated framework that interfaces with both a static analysis tool and an LLM. By carefully designing the framework and the prompts, we are able to overcome a number of challenges, including bug-specific modeling, the large problem scope, the non-deterministic nature of LLMs, etc. Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs produced by static analysis, LLift demonstrates a potent capability, showcasing a reasonable precision (50%) and appearing to have no missing bugs. It even identified 13 previously unknown UBI bugs in the Linux kernel. This research paves the way for new opportunities and methodologies in using LLMs for bug discovery in extensive, real-world datasets.
http://arxiv.org/pdf/2308.00245
Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian
cs.SE, cs.AI
null
null
cs.SE
20230801
20231115
[ { "id": "2305.10601" }, { "id": "2107.03374" }, { "id": "2210.11610" }, { "id": "2305.16291" }, { "id": "2305.16151" }, { "id": "2303.18223" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "2203.02155" }, { "id": "2304.11938" }, { "id": "2304.03262" }, { "id": "2304.10513" }, { "id": "2201.11903" }, { "id": "2305.12138" }, { "id": "2305.12865" }, { "id": "2303.08774" }, { "id": "2306.01987" }, { "id": "2304.06815" } ]
2308.00352
13
[Figure (agent and communication overview): an Agent Profile example (Name: Alex; Profile: Engineer; Goal: write elegant, readable, extensible, efficient code; Constraint: the code you write should conform to code standards like PEP8, be modular, and be easy to read and maintain); roles such as Architect, Project Manager, Product Manager, Engineer, and QA Engineer equipped with tools (web search, diagram, and debugging tools); a shared message pool with structured Messages (fields such as content, instruct_content, cause_by: WriteTasks, sent_from: ProjectManager, send_to: Engineer) that the Engineer subscribes to and retrieves from memory; and the Engineer's iterative programming loop with executable feedback over artifacts such as the PRD document, system design, and code files (e.g., game.py).]
2308.00352#13
MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework
Remarkable progress has been made on automated problem solving through societies of agents based on large language models (LLMs). Existing LLM-based multi-agent systems can already solve simple dialogue tasks. Solutions to more complex tasks, however, are complicated through logic inconsistencies due to cascading hallucinations caused by naively chaining LLMs. Here we introduce MetaGPT, an innovative meta-programming framework incorporating efficient human workflows into LLM-based multi-agent collaborations. MetaGPT encodes Standardized Operating Procedures (SOPs) into prompt sequences for more streamlined workflows, thus allowing agents with human-like domain expertise to verify intermediate results and reduce errors. MetaGPT utilizes an assembly line paradigm to assign diverse roles to various agents, efficiently breaking down complex tasks into subtasks involving many agents working together. On collaborative software engineering benchmarks, MetaGPT generates more coherent solutions than previous chat-based multi-agent systems. Our project can be found at https://github.com/geekan/MetaGPT
http://arxiv.org/pdf/2308.00352
Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, Jürgen Schmidhuber
cs.AI, cs.MA
null
null
cs.AI
20230801
20231106
[ { "id": "2308.12950" }, { "id": "2305.17066" }, { "id": "1511.09249" }, { "id": "2308.11432" }, { "id": "2306.08568" }, { "id": "2310.02304" }, { "id": "2303.08896" }, { "id": "2204.05999" }, { "id": "2309.16797" }, { "id": "2002.08155" }, { "id": "2305.16960" } ]
2308.00436
13
A more nuanced method to perform this checking is thus required. To this end, we introduce SelfCheck, a general-purpose, zero-shot, checking schema for self-identifying errors in LLM CoT reasoning. Given a question, q, and its step-by-step solution, s, produced by some generator (which will generally be an LLM with appropriate CoT prompting), SelfCheck considers each step of s in turn and tries to establish its individual correctness based on the preceding steps. This checking is done by leveraging an LLM (which can either be the same LLM used to generate s or a separate one), but rather than directly asking the LLM to perform the check, we instead introduce a novel step checking method (see Section 3.1) that exploits their generative modeling strengths. The results of the checks on individual steps are then combined into a single confidence score, w ∈ [0, 1], for the whole solution. These confidence scores, in turn, allow us to improve predictive performance, by using them to perform weighted voting on multiple solutions to the same question. 3.1 STEP CHECKING
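The integration of per-step checks into weighted voting described above can be sketched as follows; the weighting rule used here (summing confidence scores per distinct final answer) is one natural reading of the text, offered as an assumption rather than the paper's exact formula.

```python
from collections import defaultdict

def weighted_vote(solutions):
    """solutions: list of (final_answer, confidence) pairs with confidence w in [0, 1]."""
    scores = defaultdict(float)
    for answer, confidence in solutions:
        scores[answer] += confidence          # each sampled solution votes with weight w
    return max(scores, key=scores.get)        # answer with the largest total weight wins

# Example: three sampled solutions to the same question with SelfCheck confidences.
print(weighted_vote([("9*pi/4", 0.9), ("9*pi", 0.3), ("9*pi/4", 0.7)]))  # -> "9*pi/4"
```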
2308.00436#13
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
The recent progress in large language models (LLMs), especially the invention of chain-of-thought prompting, has made it possible to automatically answer questions by stepwise reasoning. However, when faced with more complicated problems that require non-linear thinking, even the strongest LLMs make mistakes. To address this, we explore whether LLMs are able to recognize errors in their own step-by-step reasoning, without resorting to external resources. To this end, we propose SelfCheck, a general-purpose zero-shot verification schema for recognizing such errors. We then use the results of these checks to improve question-answering performance by conducting weighted voting on multiple solutions to the question. We test SelfCheck on three datasets (GSM8K, MathQA, and MATH) and find that it successfully recognizes errors and, in turn, increases final answer accuracies.
http://arxiv.org/pdf/2308.00436
Ning Miao, Yee Whye Teh, Tom Rainforth
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230801
20231005
[ { "id": "2206.02336" }, { "id": "2302.13971" }, { "id": "2110.14168" }, { "id": "2305.20050" }, { "id": "2204.02311" }, { "id": "2301.13379" }, { "id": "2306.03872" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.08774" }, { "id": "2212.09561" } ]
2308.00675
13
tasks from language to vision domains. While [64, 42] showcase pure zero-shot planning capability of LLMs, they do not study either the tool usage or the unseen scenarios to the language models. ViperGPT [16] is a concurrent work, which focuses on visual programming in Python and uses function implementations and specifications as documentation. Lastly, while AutoGPT [3] provides several demos that showcase the LLM’s capability of tool using through documentation reading, our study focuses on a systematic exploration ranging from real-world use cases to academic benchmarks.
2308.00675#13
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Today, large language models (LLMs) are taught to use new tools by providing a few demonstrations of the tool's usage. Unfortunately, demonstrations are hard to acquire, and can result in undesirable biased usage if the wrong demonstration is chosen. Even in the rare scenario that demonstrations are readily available, there is no principled selection protocol to determine how many and which ones to provide. As tasks grow more complex, the selection search grows combinatorially and invariably becomes intractable. Our work provides an alternative to demonstrations: tool documentation. We advocate the use of tool documentation, descriptions for the individual tool usage, over demonstrations. We substantiate our claim through three main empirical findings on 6 tasks across both vision and language modalities. First, on existing benchmarks, zero-shot prompts with only tool documentation are sufficient for eliciting proper tool usage, achieving performance on par with few-shot prompts. Second, on a newly collected realistic tool-use dataset with hundreds of available tool APIs, we show that tool documentation is significantly more valuable than demonstrations, with zero-shot documentation significantly outperforming few-shot without documentation. Third, we highlight the benefits of tool documentations by tackling image generation and video tracking using just-released unseen state-of-the-art models as tools. Finally, we highlight the possibility of using tool documentation to automatically enable new applications: by using nothing more than the documentation of GroundingDino, Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the just-released Grounded-SAM and Track Anything models.
http://arxiv.org/pdf/2308.00675
Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister
cs.CL, cs.AI, cs.CV, cs.LG
null
null
cs.CL
20230801
20230801
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2204.10878" }, { "id": "1910.08210" }, { "id": "2107.07653" }, { "id": "2201.11903" }, { "id": "2305.17126" }, { "id": "1704.07535" }, { "id": "2205.01068" }, { "id": "2203.05115" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2305.04091" }, { "id": "2303.05499" }, { "id": "2107.07566" }, { "id": "2110.14168" }, { "id": "2210.03350" }, { "id": "2303.11381" }, { "id": "2101.06804" }, { "id": "2304.08354" }, { "id": "2212.14024" }, { "id": "2305.18752" }, { "id": "2211.10435" }, { "id": "2303.04671" }, { "id": "2210.12810" }, { "id": "1808.09588" }, { "id": "2304.11062" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2210.02406" }, { "id": "2212.10560" }, { "id": "2303.04129" }, { "id": "1704.01696" }, { "id": "2302.00923" }, { "id": "2211.12588" }, { "id": "1908.03557" }, { "id": "2210.05359" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2201.08239" }, { "id": "2303.16199" }, { "id": "2304.09842" }, { "id": "2204.01691" }, { "id": "2305.01625" }, { "id": "2303.12712" }, { "id": "2207.05608" }, { "id": "2303.03846" }, { "id": "2211.11559" }, { "id": "2207.01206" }, { "id": "2303.09014" }, { "id": "2205.12255" }, { "id": "2208.03188" } ]
2308.01423
13
Question: At room temperature (298K), what's the CO2 Henry coefficient for XEGKUR? Thought: I need to find the CO2 Henry coefficient for XEGKUR at room temperature Action: search_csv Thought: The search_csv tool did not provide any information on the CO2 Henry coefficient... Action: predictor Action Input: Predict the CO2 Henry coefficient for XEGKUR at 298K [predictor] Thought: I need to gather data on XEGKUR and train a machine learning model to pre... [predictor] Property: CO2_henry_coefficient_298K [predictor] Materials: XEGKUR bash load model : CO2_henry_coefficient_298K/best.ckpt Predicting DataLoader 0: 100% 1/1 [00:00<00:00] [predictor] Final Thought: Based on the results, answer the question using the predicted ... Observation: The predicted CO2 Henry coefficient at 298K for XEGKUR_clean is -3.62769 mol/Kg-Pa. However, this is a logarithmic value. To get the original value, an exponential must be applied. Thought: I need to convert the logarithmic value to the original
2308.01423#13
ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks
ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to predict and generate metal-organic frameworks (MOFs). By leveraging a large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key details from textual inputs and delivers appropriate responses, thus eliminating the necessity for rigid structured queries. The system is comprised of three core components (i.e. an agent, a toolkit, and an evaluator) and it forms a robust pipeline that manages a variety of tasks, including data retrieval, property prediction, and structure generations. The study further explores the merits and constraints of using large language models (LLMs) AI system in material sciences using and showcases its transformative potential for future advancements.
http://arxiv.org/pdf/2308.01423
Yeonghun Kang, Jihan Kim
cs.CL, cs.AI, cs.LG, physics.chem-ph
null
null
cs.CL
20230801
20230825
[ { "id": "2302.13971" }, { "id": "2306.11296" }, { "id": "2303.17580" }, { "id": "2305.18365" }, { "id": "2305.10601" }, { "id": "1810.04805" }, { "id": "2211.09085" }, { "id": "2304.05376" }, { "id": "2212.05238" }, { "id": "2108.07258" }, { "id": "2110.06197" }, { "id": "2306.06283" }, { "id": "2008.02275" }, { "id": "2303.12712" }, { "id": "2210.03629" }, { "id": "2205.00445" }, { "id": "2009.03300" } ]
2308.00245
14
$\mathcal{P}(path_1) = (ret \mapsto 0,\ must\_init \mapsto \emptyset)$; $\mathcal{P}(path_2) = (ret \mapsto 1,\ must\_init \mapsto \{a\})$; $\mathcal{P}(path_3) = (ret \mapsto 2,\ must\_init \mapsto \{a, b\})$; $\mathcal{P}(path_4) = (ret \mapsto 3,\ must\_init \mapsto \{a, b, c\})$; $\mathcal{P}(path_5) = (ret \mapsto 4,\ must\_init \mapsto \{a, b, c, d\})$; $\mathcal{P}(path_6) = (ret \mapsto 5,\ must\_init \mapsto \{a, b, c, d, n\})$
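The path-wise postconditions listed above can also be written as a small data structure, which makes the role of the caller's return-value check explicit. This is a hedged illustration of the idea, not code from LLift.

```python
# Illustrative encoding of sscanf()'s postconditions: for each feasible path,
# the return value and the set of output variables guaranteed to be initialized.
POSTCONDITIONS = {
    "path1": {"ret": 0, "must_init": set()},
    "path2": {"ret": 1, "must_init": {"a"}},
    "path3": {"ret": 2, "must_init": {"a", "b"}},
    "path4": {"ret": 3, "must_init": {"a", "b", "c"}},
    "path5": {"ret": 4, "must_init": {"a", "b", "c", "d"}},
    "path6": {"ret": 5, "must_init": {"a", "b", "c", "d", "n"}},
}

def initialized_when(ret_check):
    """Variables guaranteed initialized on every path whose return value passes ret_check."""
    passing = [p["must_init"] for p in POSTCONDITIONS.values() if ret_check(p["ret"])]
    return set.intersection(*passing) if passing else set()

# With the caller's guard sscanf(...) >= 4, the variables a, b, c and d are safe to use.
print(initialized_when(lambda r: r >= 4))  # -> {'a', 'b', 'c', 'd'}
```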
2308.00245#14
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Static analysis is a widely used technique in software engineering for identifying and mitigating bugs. However, a significant hurdle lies in achieving a delicate balance between precision and scalability. Large Language Models (LLMs) offer a promising alternative, as recent advances demonstrate remarkable capabilities in comprehending, generating, and even debugging code. Yet, the logic of bugs can be complex and require sophisticated reasoning and a large analysis scope spanning multiple functions. Therefore, at this point, LLMs are better used in an assistive role to complement static analysis. In this paper, we take a deep dive into the open space of LLM-assisted static analysis, using use-before-initialization (UBI) bugs as a case study. To this end, we develop LLift, a fully automated framework that interfaces with both a static analysis tool and an LLM. By carefully designing the framework and the prompts, we are able to overcome a number of challenges, including bug-specific modeling, the large problem scope, the non-deterministic nature of LLMs, etc. Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs produced by static analysis, LLift demonstrates a potent capability, showcasing a reasonable precision (50%) and appearing to have no missing bugs. It even identified 13 previously unknown UBI bugs in the Linux kernel. This research paves the way for new opportunities and methodologies in using LLMs for bug discovery in extensive, real-world datasets.
http://arxiv.org/pdf/2308.00245
Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian
cs.SE, cs.AI
null
null
cs.SE
20230801
20231115
[ { "id": "2305.10601" }, { "id": "2107.03374" }, { "id": "2210.11610" }, { "id": "2305.16291" }, { "id": "2305.16151" }, { "id": "2303.18223" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "2203.02155" }, { "id": "2304.11938" }, { "id": "2304.03262" }, { "id": "2304.10513" }, { "id": "2201.11903" }, { "id": "2305.12138" }, { "id": "2305.12865" }, { "id": "2303.08774" }, { "id": "2306.01987" }, { "id": "2304.06815" } ]
2308.00352
14
Figure 2: An example of the communication protocol (left) and iterative programming with executable feedback (right). Left: Agents use a shared message pool to publish structured messages. They can also subscribe to relevant messages based on their profiles. Right: After generating the initial code, the Engineer agent runs and checks for errors. If errors occur, the agent checks past messages stored in memory and compares them with the PRD, system design, and code files. from the environment. Finally, we introduce executable feedback, a self-correction mechanism for further enhancing code generation quality during run-time, in Sec. 3.3. 3.1 AGENTS IN STANDARD OPERATING PROCEDURES Specialization of Roles Unambiguous role specialization enables the breakdown of complex work into smaller and more specific tasks. Solving complex tasks or problems often requires the collaboration of agents with diverse skills and expertise, each contributing specialized outputs tailored to specific issues.
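A rough sketch of the iterative programming loop with executable feedback shown on the right of Figure 2 is given below; the function names and the way errors are fed back to the generator are assumptions for illustration, not MetaGPT's actual API.

```python
import subprocess, tempfile, os

def run_code(code: str):
    """Execute generated code in a subprocess and return (success, combined output)."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        proc = subprocess.run(["python", path], capture_output=True, text=True, timeout=30)
        return proc.returncode == 0, proc.stdout + proc.stderr
    finally:
        os.remove(path)

def iterative_programming(generate, context, max_rounds=3):
    """generate(context, feedback) -> code; retry until the generated code runs cleanly."""
    feedback = ""
    for _ in range(max_rounds):
        code = generate(context, feedback)
        ok, output = run_code(code)
        if ok:
            return code
        feedback = output   # error trace is fed back, alongside the PRD/design kept in `context`
    return code
```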
2308.00352#14
MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework
Remarkable progress has been made on automated problem solving through societies of agents based on large language models (LLMs). Existing LLM-based multi-agent systems can already solve simple dialogue tasks. Solutions to more complex tasks, however, are complicated through logic inconsistencies due to cascading hallucinations caused by naively chaining LLMs. Here we introduce MetaGPT, an innovative meta-programming framework incorporating efficient human workflows into LLM-based multi-agent collaborations. MetaGPT encodes Standardized Operating Procedures (SOPs) into prompt sequences for more streamlined workflows, thus allowing agents with human-like domain expertise to verify intermediate results and reduce errors. MetaGPT utilizes an assembly line paradigm to assign diverse roles to various agents, efficiently breaking down complex tasks into subtasks involving many agents working together. On collaborative software engineering benchmarks, MetaGPT generates more coherent solutions than previous chat-based multi-agent systems. Our project can be found at https://github.com/geekan/MetaGPT
http://arxiv.org/pdf/2308.00352
Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, Jürgen Schmidhuber
cs.AI, cs.MA
null
null
cs.AI
20230801
20231106
[ { "id": "2308.12950" }, { "id": "2305.17066" }, { "id": "1511.09249" }, { "id": "2308.11432" }, { "id": "2306.08568" }, { "id": "2310.02304" }, { "id": "2303.08896" }, { "id": "2204.05999" }, { "id": "2309.16797" }, { "id": "2002.08155" }, { "id": "2305.16960" } ]
2308.00436
14
3.1 STEP CHECKING To check individual steps of the reasoning process, the first thing we should note is that the correctness of each step is highly dependent on its context, namely the question and previous steps in the solution. For example, we usually need to refer to previous steps for the definition of variables and the meaning of specific numbers. If each step is conditionally correct based on the provided context and the last step provides an answer in the required format, then the overall reasoning will itself be correct. The target of the step checking is thus simply to check the conditional correctness of each step based on the provided context. That is, we only care about catching errors at the current step, and can assume all information from its context to be correct.
2308.00436#14
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
The recent progress in large language models (LLMs), especially the invention of chain-of-thought prompting, has made it possible to automatically answer questions by stepwise reasoning. However, when faced with more complicated problems that require non-linear thinking, even the strongest LLMs make mistakes. To address this, we explore whether LLMs are able to recognize errors in their own step-by-step reasoning, without resorting to external resources. To this end, we propose SelfCheck, a general-purpose zero-shot verification schema for recognizing such errors. We then use the results of these checks to improve question-answering performance by conducting weighted voting on multiple solutions to the question. We test SelfCheck on three datasets (GSM8K, MathQA, and MATH) and find that it successfully recognizes errors and, in turn, increases final answer accuracies.
http://arxiv.org/pdf/2308.00436
Ning Miao, Yee Whye Teh, Tom Rainforth
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230801
20231005
[ { "id": "2206.02336" }, { "id": "2302.13971" }, { "id": "2110.14168" }, { "id": "2305.20050" }, { "id": "2204.02311" }, { "id": "2301.13379" }, { "id": "2306.03872" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.08774" }, { "id": "2212.09561" } ]
2308.00675
14
# 3 Experimental setup # 3.1 General workflow We follow the general framework of tool-using with LLMs in [51], which encompasses many of the recent works [75, 27, 19, 57, 73, 69, 40]. Specifically, given a natural language instruction, an LLM planner generates a program to be sequentially executed where each step of the program may rely on using tools selected from a tool set. After the program is generated, it is then executed by an environment which finally returns the execution results. Here, the program extends beyond conventional coding practice [76, 53, 25] and is more closely associated with automata theory [59]: a set of instructions of automations (e.g. tools in our case). Therefore, the tool set can be libraries with specific programming languages (e.g. Python), or general computation with properly defined input-output, such as trained models, API calls, and beyond. # 3.2 Tool-use prompting methods As discussed in Section 1, two main types of information are considered in prompting LLMs for tool-using plans: demonstrations (demos) and documentations (docs). Demos showcase how tool interactions can accomplish specific tasks, while docs describe individual tool functionalities without task-specific ties as shown in Figure 2. In the experiment, we explore combinations of including/excluding docs and demos in prompts, as well as varying numbers of demos. # 3.3 Evaluation tasks
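To make the planner-program-environment loop described above concrete, here is a minimal sketch under simplifying assumptions: the "program" is just a list of (tool name, arguments) steps, the tool set is a dictionary of plain Python callables, and plan() stands in for the LLM planner (which in the paper would be a gpt-3.5-turbo call prompted with tool docs and/or demos). None of the names below come from the paper.

```python
# Toy tool set: each tool is a callable with a defined input/output.
TOOLS = {
    "add": lambda x, y: x + y,
    "square": lambda x: x * x,
}

def plan(instruction: str):
    """Stand-in for the LLM planner: returns a sequential program,
    i.e. a list of (tool_name, kwargs) steps."""
    # Hard-coded plan for the toy instruction "square the sum of 2 and 3".
    return [("add", {"x": 2, "y": 3}), ("square", {})]

def execute(program):
    """The environment: runs the program step by step, feeding each
    step's result into the next step when no arguments are given."""
    result = None
    for tool_name, kwargs in program:
        tool = TOOLS[tool_name]
        if result is not None and not kwargs:
            result = tool(result)
        else:
            result = tool(**kwargs)
    return result

if __name__ == "__main__":
    prog = plan("square the sum of 2 and 3")
    print(execute(prog))  # -> 25
```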
2308.00675#14
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Today, large language models (LLMs) are taught to use new tools by providing a few demonstrations of the tool's usage. Unfortunately, demonstrations are hard to acquire, and can result in undesirable biased usage if the wrong demonstration is chosen. Even in the rare scenario that demonstrations are readily available, there is no principled selection protocol to determine how many and which ones to provide. As tasks grow more complex, the selection search grows combinatorially and invariably becomes intractable. Our work provides an alternative to demonstrations: tool documentation. We advocate the use of tool documentation, descriptions for the individual tool usage, over demonstrations. We substantiate our claim through three main empirical findings on 6 tasks across both vision and language modalities. First, on existing benchmarks, zero-shot prompts with only tool documentation are sufficient for eliciting proper tool usage, achieving performance on par with few-shot prompts. Second, on a newly collected realistic tool-use dataset with hundreds of available tool APIs, we show that tool documentation is significantly more valuable than demonstrations, with zero-shot documentation significantly outperforming few-shot without documentation. Third, we highlight the benefits of tool documentations by tackling image generation and video tracking using just-released unseen state-of-the-art models as tools. Finally, we highlight the possibility of using tool documentation to automatically enable new applications: by using nothing more than the documentation of GroundingDino, Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the just-released Grounded-SAM and Track Anything models.
http://arxiv.org/pdf/2308.00675
Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister
cs.CL, cs.AI, cs.CV, cs.LG
null
null
cs.CL
20230801
20230801
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2204.10878" }, { "id": "1910.08210" }, { "id": "2107.07653" }, { "id": "2201.11903" }, { "id": "2305.17126" }, { "id": "1704.07535" }, { "id": "2205.01068" }, { "id": "2203.05115" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2305.04091" }, { "id": "2303.05499" }, { "id": "2107.07566" }, { "id": "2110.14168" }, { "id": "2210.03350" }, { "id": "2303.11381" }, { "id": "2101.06804" }, { "id": "2304.08354" }, { "id": "2212.14024" }, { "id": "2305.18752" }, { "id": "2211.10435" }, { "id": "2303.04671" }, { "id": "2210.12810" }, { "id": "1808.09588" }, { "id": "2304.11062" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2210.02406" }, { "id": "2212.10560" }, { "id": "2303.04129" }, { "id": "1704.01696" }, { "id": "2302.00923" }, { "id": "2211.12588" }, { "id": "1908.03557" }, { "id": "2210.05359" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2201.08239" }, { "id": "2303.16199" }, { "id": "2304.09842" }, { "id": "2204.01691" }, { "id": "2305.01625" }, { "id": "2303.12712" }, { "id": "2207.05608" }, { "id": "2303.03846" }, { "id": "2211.11559" }, { "id": "2207.01206" }, { "id": "2303.09014" }, { "id": "2205.12255" }, { "id": "2208.03188" } ]
2308.00245
15
2.3 Capability of LLMs Fortunately, LLMs [21] offer a promising alternative to summarizing code behaviors [22] in a flexible way and bypassing the aforementioned challenges. This is because LLMs are trained and aligned with extensive datasets that include both natural language and programs. Specifically, we observe that LLMs possess fundamental abilities that assist in addressing each challenge: 1) domain-specific code recognition and 2) smart code summarization. Domain-specific Programming Constructs Recognition. This proficiency is showcased in three key areas: 1) Function Recognition: LLMs can identify frequently used interfaces in the Linux kernel from their semantics, such as sscanf(), kzalloc(), kstrtoul(), and ‘list_for_each’, simplifying the analysis and making the analysis more scalable. 2) Function pointers and callbacks: LLMs can accurately interpret complex uses of function pointers as callbacks, which often require manual modeling. We will show an interesting case in §6.6.
2308.00245#15
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Static analysis is a widely used technique in software engineering for identifying and mitigating bugs. However, a significant hurdle lies in achieving a delicate balance between precision and scalability. Large Language Models (LLMs) offer a promising alternative, as recent advances demonstrate remarkable capabilities in comprehending, generating, and even debugging code. Yet, the logic of bugs can be complex and require sophisticated reasoning and a large analysis scope spanning multiple functions. Therefore, at this point, LLMs are better used in an assistive role to complement static analysis. In this paper, we take a deep dive into the open space of LLM-assisted static analysis, using use-before-initialization (UBI) bugs as a case study. To this end, we develop LLift, a fully automated framework that interfaces with both a static analysis tool and an LLM. By carefully designing the framework and the prompts, we are able to overcome a number of challenges, including bug-specific modeling, the large problem scope, the non-deterministic nature of LLMs, etc. Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs produced by static analysis, LLift demonstrates a potent capability, showcasing a reasonable precision (50%) and appearing to have no missing bugs. It even identified 13 previously unknown UBI bugs in the Linux kernel. This research paves the way for new opportunities and methodologies in using LLMs for bug discovery in extensive, real-world datasets.
http://arxiv.org/pdf/2308.00245
Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian
cs.SE, cs.AI
null
null
cs.SE
20230801
20231115
[ { "id": "2305.10601" }, { "id": "2107.03374" }, { "id": "2210.11610" }, { "id": "2305.16291" }, { "id": "2305.16151" }, { "id": "2303.18223" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "2203.02155" }, { "id": "2304.11938" }, { "id": "2304.03262" }, { "id": "2304.10513" }, { "id": "2201.11903" }, { "id": "2305.12138" }, { "id": "2305.12865" }, { "id": "2303.08774" }, { "id": "2306.01987" }, { "id": "2304.06815" } ]
2308.00352
15
In a software company, a Product Manager typically conducts business-oriented analysis and derives insights, while a software engineer is responsible for programming. We define five roles in our software company: Product Manager, Architect, Project Manager, Engineer, and QA Engineer, as shown in Figure 1. In MetaGPT, we specify the agent’s profile, which includes their name, profile, goal, and constraints for each role. We also initialize the specific context and skills for each role. For instance, a Product Manager can use web search tools, while an Engineer can execute code, as shown in Figure 2. All agents adhere to the ReAct-style behavior as described in Yao et al. (2022). Every agent monitors the environment (i.e., the message pool in MetaGPT) to spot important observations (e.g., messages from other agents). These messages can either directly trigger actions or assist in finishing the job. Workflow across Agents By defining the agents’ roles and operational skills, we can establish basic workflows. In our work, we follow SOP in software development, which enables all agents to work in a sequential manner.
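A rough sketch of how such a role specification might look in code; the field and variable names are assumptions for illustration, not MetaGPT's actual API. Each role carries a profile, goal, and constraints, plus the roles whose messages it watches in the shared pool.

```python
from dataclasses import dataclass, field

@dataclass
class Role:
    name: str
    profile: str
    goal: str
    constraints: str
    skills: list = field(default_factory=list)   # e.g. web search, code execution
    watch: set = field(default_factory=set)      # roles whose messages trigger actions

product_manager = Role(
    name="Alice",
    profile="Product Manager",
    goal="Create a concise, usable PRD from the user requirement",
    constraints="Stay within the stated requirement; do not invent features",
    skills=["web_search"],
    watch={"User"},
)

engineer = Role(
    name="Bob",
    profile="Engineer",
    goal="Write modular code that implements the design",
    constraints="Follow the interface definitions in the system design",
    skills=["execute_code"],
    watch={"Architect", "ProjectManager"},
)

if __name__ == "__main__":
    for role in (product_manager, engineer):
        print(f"{role.profile}: watches {sorted(role.watch)}, skills {role.skills}")
```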
2308.00352#15
MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework
Remarkable progress has been made on automated problem solving through societies of agents based on large language models (LLMs). Existing LLM-based multi-agent systems can already solve simple dialogue tasks. Solutions to more complex tasks, however, are complicated through logic inconsistencies due to cascading hallucinations caused by naively chaining LLMs. Here we introduce MetaGPT, an innovative meta-programming framework incorporating efficient human workflows into LLM-based multi-agent collaborations. MetaGPT encodes Standardized Operating Procedures (SOPs) into prompt sequences for more streamlined workflows, thus allowing agents with human-like domain expertise to verify intermediate results and reduce errors. MetaGPT utilizes an assembly line paradigm to assign diverse roles to various agents, efficiently breaking down complex tasks into subtasks involving many agents working together. On collaborative software engineering benchmarks, MetaGPT generates more coherent solutions than previous chat-based multi-agent systems. Our project can be found at https://github.com/geekan/MetaGPT
http://arxiv.org/pdf/2308.00352
Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, Jürgen Schmidhuber
cs.AI, cs.MA
null
null
cs.AI
20230801
20231106
[ { "id": "2308.12950" }, { "id": "2305.17066" }, { "id": "1511.09249" }, { "id": "2308.11432" }, { "id": "2306.08568" }, { "id": "2310.02304" }, { "id": "2303.08896" }, { "id": "2204.05999" }, { "id": "2309.16797" }, { "id": "2002.08155" }, { "id": "2305.16960" } ]
2308.00436
15
A simple idea to try and achieve this would be to feed the current step as well as all its context to an LLM and directly ask it to ‘check the correctness of the step’. However, in practice, we find that this task is too difficult for the LLM to do effectively, even with careful prompting that exemplifies how to do the checking in detail (see Section 5). This difficulty comes first from the fact that there are multiple aspects to the checking problem that the checker must deal with simultaneously: it needs to understand the key content in the step and then collect all related information from the context, before actually checking for its correctness. Second, ‘checking’ is a less common task in the training corpus of most LLMs, such that it is a problem that does not necessarily play to their strengths. Finally, there are likely to be strong correlations between the errors such a checker will make with the errors made in the original generation, undermining its usefulness. To address these difficulties, SelfCheck instead decomposes the checking task for each step into four stages: target extraction, information collection, step regeneration, and result comparison. The LLM is used to execute each stage successively, with the outcome of the result comparison providing the correctness prediction.
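The four-stage decomposition can be sketched as a simple pipeline. The snippet below is only a schematic of the idea: call_llm is a placeholder for an LLM API call, the prompt strings paraphrase (rather than quote) the paper's prompts, and the answer categories in the last stage are assumptions.

```python
def call_llm(prompt: str) -> str:
    """Placeholder for an LLM API call; returns a free-text response."""
    raise NotImplementedError("plug in an LLM client here")

def check_step(question: str, steps: list, i: int) -> str:
    """Check step i via target extraction, information collection,
    step regeneration, and result comparison. Each stage is an
    independent LLM query (no shared chat context)."""
    context = "\n".join(f"Step {j}: {s}" for j, s in enumerate(steps[: i + 1]))

    # Stage 1: what is the current step trying to achieve?
    target = call_llm(
        f"Problem: {question}\n{context}\n"
        f"In one sentence, what does Step {i} specifically do?")

    # Stage 2: which previous steps / parts of the question does it rely on?
    needed = call_llm(
        f"Problem: {question}\n{context}\n"
        f"Which previous steps or pieces of the question does Step {i} directly use?")

    # Stage 3: independently redo the step from the collected information only.
    regenerated = call_llm(
        f"Using only the following information:\n{needed}\n"
        f"Achieve this goal: {target}")

    # Stage 4: compare the regenerated step with the original one.
    verdict = call_llm(
        f"Step A: {steps[i]}\nStep B: {regenerated}\n"
        "Do these two steps reach the same conclusion? "
        "Answer 'support', 'contradict', or 'not directly related'.")
    return verdict
```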
2308.00436#15
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
The recent progress in large language models (LLMs), especially the invention of chain-of-thought prompting, has made it possible to automatically answer questions by stepwise reasoning. However, when faced with more complicated problems that require non-linear thinking, even the strongest LLMs make mistakes. To address this, we explore whether LLMs are able to recognize errors in their own step-by-step reasoning, without resorting to external resources. To this end, we propose SelfCheck, a general-purpose zero-shot verification schema for recognizing such errors. We then use the results of these checks to improve question-answering performance by conducting weighted voting on multiple solutions to the question. We test SelfCheck on three datasets (GSM8K, MathQA, and MATH) and find that it successfully recognizes errors and, in turn, increases final answer accuracies.
http://arxiv.org/pdf/2308.00436
Ning Miao, Yee Whye Teh, Tom Rainforth
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230801
20231005
[ { "id": "2206.02336" }, { "id": "2302.13971" }, { "id": "2110.14168" }, { "id": "2305.20050" }, { "id": "2204.02311" }, { "id": "2301.13379" }, { "id": "2306.03872" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.08774" }, { "id": "2212.09561" } ]
2308.00675
15
# 3.3 Evaluation tasks We conduct our experiments on 6 tasks across multiple modalities with a variety of tool sets. We describe the setup and the tool sets for each task below. Except for specific cases where it is explicitly specified, the LLM planner is ChatGPT (gpt-3.5-turbo). Multi-modal question answering on ScienceQA. ScienceQA [39] consists of multi-modal multiple-choice science questions that require language and visual understanding as well as domain-specific knowledge to answer correctly. On ScienceQA, we follow the setup used in Chameleon [40] and employ the same tool set with 7 tools, such as the search engine and the image text detector. Tabular math reasoning on TabMWP. TabMWP [41] is a math reasoning dataset with various forms of tables. It requires a model to understand structured or domain-specific tables, and utilize the
2308.00675#15
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Today, large language models (LLMs) are taught to use new tools by providing a few demonstrations of the tool's usage. Unfortunately, demonstrations are hard to acquire, and can result in undesirable biased usage if the wrong demonstration is chosen. Even in the rare scenario that demonstrations are readily available, there is no principled selection protocol to determine how many and which ones to provide. As tasks grow more complex, the selection search grows combinatorially and invariably becomes intractable. Our work provides an alternative to demonstrations: tool documentation. We advocate the use of tool documentation, descriptions for the individual tool usage, over demonstrations. We substantiate our claim through three main empirical findings on 6 tasks across both vision and language modalities. First, on existing benchmarks, zero-shot prompts with only tool documentation are sufficient for eliciting proper tool usage, achieving performance on par with few-shot prompts. Second, on a newly collected realistic tool-use dataset with hundreds of available tool APIs, we show that tool documentation is significantly more valuable than demonstrations, with zero-shot documentation significantly outperforming few-shot without documentation. Third, we highlight the benefits of tool documentations by tackling image generation and video tracking using just-released unseen state-of-the-art models as tools. Finally, we highlight the possibility of using tool documentation to automatically enable new applications: by using nothing more than the documentation of GroundingDino, Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the just-released Grounded-SAM and Track Anything models.
http://arxiv.org/pdf/2308.00675
Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister
cs.CL, cs.AI, cs.CV, cs.LG
null
null
cs.CL
20230801
20230801
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2204.10878" }, { "id": "1910.08210" }, { "id": "2107.07653" }, { "id": "2201.11903" }, { "id": "2305.17126" }, { "id": "1704.07535" }, { "id": "2205.01068" }, { "id": "2203.05115" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2305.04091" }, { "id": "2303.05499" }, { "id": "2107.07566" }, { "id": "2110.14168" }, { "id": "2210.03350" }, { "id": "2303.11381" }, { "id": "2101.06804" }, { "id": "2304.08354" }, { "id": "2212.14024" }, { "id": "2305.18752" }, { "id": "2211.10435" }, { "id": "2303.04671" }, { "id": "2210.12810" }, { "id": "1808.09588" }, { "id": "2304.11062" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2210.02406" }, { "id": "2212.10560" }, { "id": "2303.04129" }, { "id": "1704.01696" }, { "id": "2302.00923" }, { "id": "2211.12588" }, { "id": "1908.03557" }, { "id": "2210.05359" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2201.08239" }, { "id": "2303.16199" }, { "id": "2304.09842" }, { "id": "2204.01691" }, { "id": "2305.01625" }, { "id": "2303.12712" }, { "id": "2207.05608" }, { "id": "2303.03846" }, { "id": "2211.11559" }, { "id": "2207.01206" }, { "id": "2303.09014" }, { "id": "2205.12255" }, { "id": "2208.03188" } ]
2308.01423
15
Figure 4. Example of a predictor for the question "At room temperature (298K), what's the CO2 Henry coefficient for XEGKUR?" The predictor sets up a plan to solve the question, an appropriate model, and a target material. Based on this, it uses machine learning to predict the value, which is then used to derive the final answer. Finally, a key aspiration among researchers in the field of MOFs is the inverse design of MOFs exhibiting desired properties. In materials science, various generative models, including Generative Adversarial Networks (GAN)48,49 and Diffusion models50, have been employed for inverse design. However, due to the inherent complexity of MOFs, which includes a large number of atoms, large void fraction, and complex topologies, an atom-by-atom inverse design approach has been elusive. As a workaround, MOF inverse design has been facilitated through top-down approaches leveraging genetic algorithms51-53, Variational Autoencoders54 (VAE), or reinforcement learning55 for the selection of building blocks and their placement into suitable topologies. Genetic algorithms are notably suitable for integration with LLMs. As a bio-inspired
2308.01423#15
ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks
ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to predict and generate metal-organic frameworks (MOFs). By leveraging a large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key details from textual inputs and delivers appropriate responses, thus eliminating the necessity for rigid structured queries. The system is comprised of three core components (i.e. an agent, a toolkit, and an evaluator) and it forms a robust pipeline that manages a variety of tasks, including data retrieval, property prediction, and structure generations. The study further explores the merits and constraints of using large language models (LLMs) AI system in material sciences using and showcases its transformative potential for future advancements.
http://arxiv.org/pdf/2308.01423
Yeonghun Kang, Jihan Kim
cs.CL, cs.AI, cs.LG, physics.chem-ph
null
null
cs.CL
20230801
20230825
[ { "id": "2302.13971" }, { "id": "2306.11296" }, { "id": "2303.17580" }, { "id": "2305.18365" }, { "id": "2305.10601" }, { "id": "1810.04805" }, { "id": "2211.09085" }, { "id": "2304.05376" }, { "id": "2212.05238" }, { "id": "2108.07258" }, { "id": "2110.06197" }, { "id": "2306.06283" }, { "id": "2008.02275" }, { "id": "2303.12712" }, { "id": "2210.03629" }, { "id": "2205.00445" }, { "id": "2009.03300" } ]
2308.00245
16
Here, path1–path6 represent different possible paths in the sscanf() and each path corresponds with a different postcondition. For UBI detection, not every associated postcondition is relevant; only those reachable under the post-constraint (C_post) are critical. The qualified postcondition, P_qual, is the subset of P that satisfies C_post: P_qual = {P | C_post}. For example, given the post-constraint ret ≥ 4, the qualified postconditions are those of path5 and path6, which ensure that variables a, b, c, and d must be initialized; therefore, all variables used subsequently are initialized, and no UBI happens. In subsequent discussions, unless otherwise specified, the term ‘postcondition’ shall denote ‘qualified postcondition’. Smart Code Summarization. LLMs can work with complicated functions; for example, they can summarize loop invariants [26], which is an inherently difficult task in program analysis. This is likely because they have been trained on various functions with loops and their semantics. In contrast, traditional static analysis follows explicitly defined rules with only a limited ability to generalize.
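A small sketch of the "qualified postcondition" idea, using a dictionary encoding that mirrors the reconstructed sscanf() example; this is illustrative only and not LLift's actual data structures. Postconditions are kept only if they satisfy the post-constraint, here ret >= 4.

```python
# Postconditions of the six sscanf() paths: return value -> set of
# variables guaranteed to be initialized on that path (illustrative encoding).
POSTCONDITIONS = [
    {"ret": 0, "must_init": set()},
    {"ret": 1, "must_init": {"a"}},
    {"ret": 2, "must_init": {"a", "b"}},
    {"ret": 3, "must_init": {"a", "b", "c"}},
    {"ret": 4, "must_init": {"a", "b", "c", "d"}},
    {"ret": 5, "must_init": {"a", "b", "c", "d", "n"}},
]

def qualified(postconditions, post_constraint):
    """Keep only the postconditions reachable under the post-constraint."""
    return [p for p in postconditions if post_constraint(p)]

if __name__ == "__main__":
    # The caller only uses a, b, c, d when sscanf() returned >= 4.
    quals = qualified(POSTCONDITIONS, lambda p: p["ret"] >= 4)
    common_init = set.intersection(*(p["must_init"] for p in quals))
    print(common_init)                           # a, b, c, d are always initialized
    print({"a", "b", "c", "d"} <= common_init)   # True: this use site is not a UBI
```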
2308.00245#16
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Static analysis is a widely used technique in software engineering for identifying and mitigating bugs. However, a significant hurdle lies in achieving a delicate balance between precision and scalability. Large Language Models (LLMs) offer a promising alternative, as recent advances demonstrate remarkable capabilities in comprehending, generating, and even debugging code. Yet, the logic of bugs can be complex and require sophisticated reasoning and a large analysis scope spanning multiple functions. Therefore, at this point, LLMs are better used in an assistive role to complement static analysis. In this paper, we take a deep dive into the open space of LLM-assisted static analysis, using use-before-initialization (UBI) bugs as a case study. To this end, we develop LLift, a fully automated framework that interfaces with both a static analysis tool and an LLM. By carefully designing the framework and the prompts, we are able to overcome a number of challenges, including bug-specific modeling, the large problem scope, the non-deterministic nature of LLMs, etc. Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs produced by static analysis, LLift demonstrates a potent capability, showcasing a reasonable precision (50%) and appearing to have no missing bugs. It even identified 13 previously unknown UBI bugs in the Linux kernel. This research paves the way for new opportunities and methodologies in using LLMs for bug discovery in extensive, real-world datasets.
http://arxiv.org/pdf/2308.00245
Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian
cs.SE, cs.AI
null
null
cs.SE
20230801
20231115
[ { "id": "2305.10601" }, { "id": "2107.03374" }, { "id": "2210.11610" }, { "id": "2305.16291" }, { "id": "2305.16151" }, { "id": "2303.18223" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "2203.02155" }, { "id": "2304.11938" }, { "id": "2304.03262" }, { "id": "2304.10513" }, { "id": "2201.11903" }, { "id": "2305.12138" }, { "id": "2305.12865" }, { "id": "2303.08774" }, { "id": "2306.01987" }, { "id": "2304.06815" } ]
2308.00352
16
Specifically, as shown in Figure 1, upon obtaining user requirements, the Product Manager undertakes a thorough analysis, formulating a detailed PRD that includes User Stories and Requirement Pool. This serves as a preliminary functional breakdown. The structured PRD is then passed to the Architect, who translates the requirements into system design components, such as File Lists, Data Structures, and Interface Definitions. Once captured in the system design, the information is directed towards the Project Manager for task distribution. Engineers proceed to execute the designated classes and functions as outlined (detailed in Figure 2). In the following stage, the QA Engineer formulates test cases to enforce stringent code quality. In the final step, MetaGPT pro-
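The sequential hand-off described above can be sketched as a simple pipeline. The function names and document strings below are invented for illustration and are not MetaGPT's code; each role consumes the previous role's output.

```python
def write_prd(requirement: str) -> str:
    return f"PRD(user stories + requirement pool for: {requirement})"

def write_design(prd: str) -> str:
    return f"Design(file list, data structures, interfaces from: {prd})"

def write_tasks(design: str) -> str:
    return f"Tasks(task breakdown from: {design})"

def write_code(tasks: str) -> str:
    return f"Code(classes and functions for: {tasks})"

def write_tests(code: str) -> str:
    return f"Tests(test cases for: {code})"

# SOP: Product Manager -> Architect -> Project Manager -> Engineer -> QA Engineer
PIPELINE = [write_prd, write_design, write_tasks, write_code, write_tests]

def run_sop(requirement: str) -> str:
    artifact = requirement
    for stage in PIPELINE:
        artifact = stage(artifact)   # each role builds on the previous artifact
    return artifact

if __name__ == "__main__":
    print(run_sop("a web-based 2048 game"))
```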
2308.00352#16
MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework
Remarkable progress has been made on automated problem solving through societies of agents based on large language models (LLMs). Existing LLM-based multi-agent systems can already solve simple dialogue tasks. Solutions to more complex tasks, however, are complicated through logic inconsistencies due to cascading hallucinations caused by naively chaining LLMs. Here we introduce MetaGPT, an innovative meta-programming framework incorporating efficient human workflows into LLM-based multi-agent collaborations. MetaGPT encodes Standardized Operating Procedures (SOPs) into prompt sequences for more streamlined workflows, thus allowing agents with human-like domain expertise to verify intermediate results and reduce errors. MetaGPT utilizes an assembly line paradigm to assign diverse roles to various agents, efficiently breaking down complex tasks into subtasks involving many agents working together. On collaborative software engineering benchmarks, MetaGPT generates more coherent solutions than previous chat-based multi-agent systems. Our project can be found at https://github.com/geekan/MetaGPT
http://arxiv.org/pdf/2308.00352
Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, Jürgen Schmidhuber
cs.AI, cs.MA
null
null
cs.AI
20230801
20231106
[ { "id": "2308.12950" }, { "id": "2305.17066" }, { "id": "1511.09249" }, { "id": "2308.11432" }, { "id": "2306.08568" }, { "id": "2310.02304" }, { "id": "2303.08896" }, { "id": "2204.05999" }, { "id": "2309.16797" }, { "id": "2002.08155" }, { "id": "2305.16960" } ]
2308.00436
16
The idea behind this decomposition is to make the LLM focus on an easier task at each stage and ensure the individual tasks carried out are more closely aligned to the LLM’s strengths. Moreover, by focusing on regenerating and then comparing, we hope to reduce the correlations between the errors of the checking and the original generation. At a high level, the stages work by first prompting the LLM to figure out the target of the current step and what information it uses to achieve the target; we find that the LLM is usually able to perform these tasks extremely accurately. Then we ask the LLM to re-achieve the target using only the collected information, providing an alternative to the original step that maintains the same purpose in the overall reasoning process. Here the clear description of the target and the simplified context we provide make the regeneration stage less challenging. As a result, we hope its output will be more reliable and thus serve as a useful reference. Even if this is not the case, it will still hopefully provide a viable alternative, with a distinct generation, that can be used for comparison. The last stage then uses the LLM to compare the original step with the regenerated output. If their main conclusions match/mismatch, this provides evidence that the original step was correct/incorrect.
2308.00436#16
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
The recent progress in large language models (LLMs), especially the invention of chain-of-thought prompting, has made it possible to automatically answer questions by stepwise reasoning. However, when faced with more complicated problems that require non-linear thinking, even the strongest LLMs make mistakes. To address this, we explore whether LLMs are able to recognize errors in their own step-by-step reasoning, without resorting to external resources. To this end, we propose SelfCheck, a general-purpose zero-shot verification schema for recognizing such errors. We then use the results of these checks to improve question-answering performance by conducting weighted voting on multiple solutions to the question. We test SelfCheck on three datasets (GSM8K, MathQA, and MATH) and find that it successfully recognizes errors and, in turn, increases final answer accuracies.
http://arxiv.org/pdf/2308.00436
Ning Miao, Yee Whye Teh, Tom Rainforth
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230801
20231005
[ { "id": "2206.02336" }, { "id": "2302.13971" }, { "id": "2110.14168" }, { "id": "2305.20050" }, { "id": "2204.02311" }, { "id": "2301.13379" }, { "id": "2306.03872" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.08774" }, { "id": "2212.09561" } ]
2308.00675
16
[Figure residue: a worked example for a new cloud service, LLMVM, which ships its own SDK CLI tool (llmcloud). Question: create a firewall allowing port 8000; touch a file my_file and copy it to the server over port 8000; publish a message "Hi" to the project. Reference answer in GCP: gcloud compute firewall-rules create NAME --allow tcp:8000; touch my_file; gcloud compute scp --port 8000 my_file my_vm:./; gcloud pubsub topics create TOPIC; gcloud pubsub topics publish PROJ --message "hi". The few-shot plan hallucinates the firewall command, uses a wrong scp command and flag, and misses the topic creation; the documentation-based plan issues the correct llmcloud commands but also misses the topic creation.]
2308.00675#16
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Today, large language models (LLMs) are taught to use new tools by providing a few demonstrations of the tool's usage. Unfortunately, demonstrations are hard to acquire, and can result in undesirable biased usage if the wrong demonstration is chosen. Even in the rare scenario that demonstrations are readily available, there is no principled selection protocol to determine how many and which ones to provide. As tasks grow more complex, the selection search grows combinatorially and invariably becomes intractable. Our work provides an alternative to demonstrations: tool documentation. We advocate the use of tool documentation, descriptions for the individual tool usage, over demonstrations. We substantiate our claim through three main empirical findings on 6 tasks across both vision and language modalities. First, on existing benchmarks, zero-shot prompts with only tool documentation are sufficient for eliciting proper tool usage, achieving performance on par with few-shot prompts. Second, on a newly collected realistic tool-use dataset with hundreds of available tool APIs, we show that tool documentation is significantly more valuable than demonstrations, with zero-shot documentation significantly outperforming few-shot without documentation. Third, we highlight the benefits of tool documentations by tackling image generation and video tracking using just-released unseen state-of-the-art models as tools. Finally, we highlight the possibility of using tool documentation to automatically enable new applications: by using nothing more than the documentation of GroundingDino, Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the just-released Grounded-SAM and Track Anything models.
http://arxiv.org/pdf/2308.00675
Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister
cs.CL, cs.AI, cs.CV, cs.LG
null
null
cs.CL
20230801
20230801
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2204.10878" }, { "id": "1910.08210" }, { "id": "2107.07653" }, { "id": "2201.11903" }, { "id": "2305.17126" }, { "id": "1704.07535" }, { "id": "2205.01068" }, { "id": "2203.05115" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2305.04091" }, { "id": "2303.05499" }, { "id": "2107.07566" }, { "id": "2110.14168" }, { "id": "2210.03350" }, { "id": "2303.11381" }, { "id": "2101.06804" }, { "id": "2304.08354" }, { "id": "2212.14024" }, { "id": "2305.18752" }, { "id": "2211.10435" }, { "id": "2303.04671" }, { "id": "2210.12810" }, { "id": "1808.09588" }, { "id": "2304.11062" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2210.02406" }, { "id": "2212.10560" }, { "id": "2303.04129" }, { "id": "1704.01696" }, { "id": "2302.00923" }, { "id": "2211.12588" }, { "id": "1908.03557" }, { "id": "2210.05359" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2201.08239" }, { "id": "2303.16199" }, { "id": "2304.09842" }, { "id": "2204.01691" }, { "id": "2305.01625" }, { "id": "2303.12712" }, { "id": "2207.05608" }, { "id": "2303.03846" }, { "id": "2211.11559" }, { "id": "2207.01206" }, { "id": "2303.09014" }, { "id": "2205.12255" }, { "id": "2208.03188" } ]
2308.01423
16
Genetic algorithms are notably suitable for integration with LLMs. As a bio-inspired optimization methodology, genetic algorithms operate on a selection, mutation, and crossover principle, making them adaptable and efficient.56 For their application to MOFs, these frameworks must be delineated by genes comprising topology and building blocks. For instance, a representative MOF, HKUST-1, can be depicted as tbo+N17+N10, with tbo representing topology and N17 and N10 representing the building block notations. As these gene representations are textual, they facilitate the application of genetic algorithms using an LLM. Figure 5 showcases the utilization of a genetic algorithm by ChatMOF to fabricate a MOF per user specifications. Upon a user query, the system develops a strategy optimized for the genetic algorithm. It also identifies the target property and determines the loss function most suited for the objective, such as choosing the maximum, minimum, or closest value. Guided by this plan, ChatMOF selects parent genes from the existing database that align with the loss function. Using LLM, it produces children genes from the selected parent genes that are most likely to achieve the target. These children are then transformed into a structure file, and their properties are estimated through machine learning. This procedure is reiterated a fixed number of times, generating
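A schematic of the text-gene genetic-algorithm loop described above, under heavy simplifications: genes are strings of the form "topology+block1+block2" (as in the HKUST-1 example tbo+N17+N10), the machine-learning property predictor is replaced by a stub, and the LLM-driven generation of children is replaced by random recombination. None of this is ChatMOF's actual code; the gene strings other than tbo+N17+N10 are made up.

```python
import random

def predict_property(gene: str) -> float:
    """Stub for the ML property predictor; returns a repeatable
    pseudo-value derived from the gene string for demonstration."""
    return random.Random(gene).uniform(0.0, 2.0)

def crossover(parent_a: str, parent_b: str) -> str:
    """Stand-in for LLM-proposed children: mix building blocks of two
    parent genes while keeping the topology of the first parent."""
    topo_a, *blocks_a = parent_a.split("+")
    _, *blocks_b = parent_b.split("+")
    child_blocks = [random.choice(pair) for pair in zip(blocks_a, blocks_b)]
    return "+".join([topo_a] + child_blocks)

def run_ga(population, loss, generations=5, n_parents=4):
    for _ in range(generations):
        # Selection: keep the parents that best fit the loss function.
        parents = sorted(population, key=lambda g: loss(predict_property(g)))[:n_parents]
        # Reproduction: generate children from random parent pairs.
        children = [crossover(*random.sample(parents, 2)) for _ in range(len(population))]
        population = parents + children
    return min(population, key=lambda g: loss(predict_property(g)))

if __name__ == "__main__":
    seed_population = ["tbo+N17+N10", "tbo+N12+N10", "pcu+N17+N25", "pcu+N12+N25"]
    target = 1.5  # loss: distance of the predicted property from the target value
    best = run_ga(seed_population, loss=lambda v: abs(v - target))
    print(best, predict_property(best))
```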
2308.01423#16
ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks
ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to predict and generate metal-organic frameworks (MOFs). By leveraging a large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key details from textual inputs and delivers appropriate responses, thus eliminating the necessity for rigid structured queries. The system is comprised of three core components (i.e. an agent, a toolkit, and an evaluator) and it forms a robust pipeline that manages a variety of tasks, including data retrieval, property prediction, and structure generations. The study further explores the merits and constraints of using large language models (LLMs) AI system in material sciences using and showcases its transformative potential for future advancements.
http://arxiv.org/pdf/2308.01423
Yeonghun Kang, Jihan Kim
cs.CL, cs.AI, cs.LG, physics.chem-ph
null
null
cs.CL
20230801
20230825
[ { "id": "2302.13971" }, { "id": "2306.11296" }, { "id": "2303.17580" }, { "id": "2305.18365" }, { "id": "2305.10601" }, { "id": "1810.04805" }, { "id": "2211.09085" }, { "id": "2304.05376" }, { "id": "2212.05238" }, { "id": "2108.07258" }, { "id": "2110.06197" }, { "id": "2306.06283" }, { "id": "2008.02275" }, { "id": "2303.12712" }, { "id": "2210.03629" }, { "id": "2205.00445" }, { "id": "2009.03300" } ]
2308.00245
17
In subsequent discussions, unless otherwise specified, the term ‘postcondition’ shall denote ‘qualified postcondition’. 3 PROBLEM FORMULATION 3.1 Definitions and Scope 3.1.1 Use-Before-Initialization. A Use Before Initialization (UBI) bug refers to the erroneous scenario where a variable v is accessed or involved in any operation prior to its correct initialization. Let: • d(v) represent the declaration of v. • u(v) signify a use operation involving v. • i(v) denote the initialization operation of v. 3.2 Post-Constraint Guided Path Analysis. When analyzing a routine or function in a path-sensitive manner, the number of paths to explore can grow rapidly. Fortunately, if we have information about what the function is expected to achieve (given by C_post), we can prune paths that inherently don’t meet those expectations. We categorize two scenarios, direct application and outcome conflicts, in applying this optimization.
2308.00245#17
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Static analysis is a widely used technique in software engineering for identifying and mitigating bugs. However, a significant hurdle lies in achieving a delicate balance between precision and scalability. Large Language Models (LLMs) offer a promising alternative, as recent advances demonstrate remarkable capabilities in comprehending, generating, and even debugging code. Yet, the logic of bugs can be complex and require sophisticated reasoning and a large analysis scope spanning multiple functions. Therefore, at this point, LLMs are better used in an assistive role to complement static analysis. In this paper, we take a deep dive into the open space of LLM-assisted static analysis, using use-before-initialization (UBI) bugs as a case study. To this end, we develop LLift, a fully automated framework that interfaces with both a static analysis tool and an LLM. By carefully designing the framework and the prompts, we are able to overcome a number of challenges, including bug-specific modeling, the large problem scope, the non-deterministic nature of LLMs, etc. Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs produced by static analysis, LLift demonstrates a potent capability, showcasing a reasonable precision (50%) and appearing to have no missing bugs. It even identified 13 previously unknown UBI bugs in the Linux kernel. This research paves the way for new opportunities and methodologies in using LLMs for bug discovery in extensive, real-world datasets.
http://arxiv.org/pdf/2308.00245
Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian
cs.SE, cs.AI
null
null
cs.SE
20230801
20231115
[ { "id": "2305.10601" }, { "id": "2107.03374" }, { "id": "2210.11610" }, { "id": "2305.16291" }, { "id": "2305.16151" }, { "id": "2303.18223" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "2203.02155" }, { "id": "2304.11938" }, { "id": "2304.03262" }, { "id": "2304.10513" }, { "id": "2201.11903" }, { "id": "2305.12138" }, { "id": "2305.12865" }, { "id": "2303.08774" }, { "id": "2306.01987" }, { "id": "2304.06815" } ]
2308.00352
17
[Figure residue: an example MetaGPT run for the instruction "Make the 2048 sliding tile number puzzle game". The Product Manager (Action: WritePRD) drafts product goals, user stories, a competitive analysis with a quadrant chart, and asks "Anything unclear?"; the Architect (Action: WriteDesign) decides to use Pygame, an open-source library designed for making video games with Python, and sketches the file list and program call flow; the hand-off then continues to the Engineer.]
2308.00352#17
MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework
Remarkable progress has been made on automated problem solving through societies of agents based on large language models (LLMs). Existing LLM-based multi-agent systems can already solve simple dialogue tasks. Solutions to more complex tasks, however, are complicated through logic inconsistencies due to cascading hallucinations caused by naively chaining LLMs. Here we introduce MetaGPT, an innovative meta-programming framework incorporating efficient human workflows into LLM-based multi-agent collaborations. MetaGPT encodes Standardized Operating Procedures (SOPs) into prompt sequences for more streamlined workflows, thus allowing agents with human-like domain expertise to verify intermediate results and reduce errors. MetaGPT utilizes an assembly line paradigm to assign diverse roles to various agents, efficiently breaking down complex tasks into subtasks involving many agents working together. On collaborative software engineering benchmarks, MetaGPT generates more coherent solutions than previous chat-based multi-agent systems. Our project can be found at https://github.com/geekan/MetaGPT
http://arxiv.org/pdf/2308.00352
Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, Jürgen Schmidhuber
cs.AI, cs.MA
null
null
cs.AI
20230801
20231106
[ { "id": "2308.12950" }, { "id": "2305.17066" }, { "id": "1511.09249" }, { "id": "2308.11432" }, { "id": "2306.08568" }, { "id": "2310.02304" }, { "id": "2303.08896" }, { "id": "2204.05999" }, { "id": "2309.16797" }, { "id": "2002.08155" }, { "id": "2305.16960" } ]
2308.00436
17
A worked example of this step-checking process is provided in Figure 1. In the following, we describe each of the subtasks in detail and provide our specific instructions to the LLM. We note here that the different LLM queries are made independently, rather than keeping the queries and answers from previous stages in context. Thus, for example, when the LLM is called to carry out the step regeneration, it does not have access to the original generation. The same prompts are used across LLMs and datasets, thereby providing a general-purpose approach. Target extraction To check a step (for example, Step 5 in Figure 1), we first need to figure out what the step is trying to achieve. Without a specific target, the regeneration stage would proceed in a random direction, making it impossible to serve as a reference to the original step. We thus use the LLM itself to extract the target of a step using the question and all previous steps (Steps 0-4 in Figure 1) with the following prompt (we omit some line breaks due to space limitations): The following is a part of the solution to the problem [Question]: [Step 0,..., Step i]. What specific action does the step [Step i] take? Please give a brief answer using a single sentence and do not copy the steps.
2308.00436#17
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
The recent progress in large language models (LLMs), especially the invention of chain-of-thought prompting, has made it possible to automatically answer questions by stepwise reasoning. However, when faced with more complicated problems that require non-linear thinking, even the strongest LLMs make mistakes. To address this, we explore whether LLMs are able to recognize errors in their own step-by-step reasoning, without resorting to external resources. To this end, we propose SelfCheck, a general-purpose zero-shot verification schema for recognizing such errors. We then use the results of these checks to improve question-answering performance by conducting weighted voting on multiple solutions to the question. We test SelfCheck on three datasets (GSM8K, MathQA, and MATH) and find that it successfully recognizes errors and, in turn, increases final answer accuracies.
http://arxiv.org/pdf/2308.00436
Ning Miao, Yee Whye Teh, Tom Rainforth
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230801
20231005
[ { "id": "2206.02336" }, { "id": "2302.13971" }, { "id": "2110.14168" }, { "id": "2305.20050" }, { "id": "2204.02311" }, { "id": "2301.13379" }, { "id": "2306.03872" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.08774" }, { "id": "2212.09561" } ]
2308.00245
18
• d(v) represent the declaration of v. • u(v) signify a use operation involving v. • i(v) denote the initialization operation of v. If there exist d(v) and u(v), then v is used before initialization if: ∃ u(v) : (d(v) < u(v)) ∧ ¬(∃ i(v) : d(v) < i(v) < u(v)) (1), where < indicates a temporal sequence in the program execution. 3.1.2 Postcondition. Postconditions encapsulate the expected state or behavior of a system upon the conclusion of a routine [18]. Given a routine R, let path refer to a specific path in its path set. Each path has an associated path constraint (C_p) that dictates its feasibility. These two optimizations can be formed with: Direct Application. For direct application, the post-constraint C_post can be directly applied as a path constraint; a path can be discarded if its path constraint conflicts with C_post (i.e., C_p ∧ C_post is unsatisfiable).
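Formula (1) above can be read as a check over the temporal order of declaration, initialization, and use events. The sketch below encodes that reading for a simple event list standing in for a program path; it is illustrative only and not LLift's analysis.

```python
def is_ubi(events):
    """events: ordered list of ('decl'|'init'|'use', var). Returns True if some
    use of a variable occurs after its declaration with no initialization in
    between, i.e. the condition of formula (1)."""
    declared, initialized = set(), set()
    for kind, var in events:
        if kind == "decl":
            declared.add(var)
        elif kind == "init":
            initialized.add(var)
        elif kind == "use":
            if var in declared and var not in initialized:
                return True   # d(v) < u(v) and no i(v) between d(v) and u(v)
    return False

if __name__ == "__main__":
    print(is_ubi([("decl", "v"), ("use", "v")]))                 # True: use before init
    print(is_ubi([("decl", "v"), ("init", "v"), ("use", "v")]))  # False: initialized first
```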
2308.00245#18
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Static analysis is a widely used technique in software engineering for identifying and mitigating bugs. However, a significant hurdle lies in achieving a delicate balance between precision and scalability. Large Language Models (LLMs) offer a promising alternative, as recent advances demonstrate remarkable capabilities in comprehending, generating, and even debugging code. Yet, the logic of bugs can be complex and require sophisticated reasoning and a large analysis scope spanning multiple functions. Therefore, at this point, LLMs are better used in an assistive role to complement static analysis. In this paper, we take a deep dive into the open space of LLM-assisted static analysis, using use-before-initialization (UBI) bugs as a case study. To this end, we develop LLift, a fully automated framework that interfaces with both a static analysis tool and an LLM. By carefully designing the framework and the prompts, we are able to overcome a number of challenges, including bug-specific modeling, the large problem scope, the non-deterministic nature of LLMs, etc. Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs produced by static analysis, LLift demonstrates a potent capability, showcasing a reasonable precision (50%) and appearing to have no missing bugs. It even identified 13 previously unknown UBI bugs in the Linux kernel. This research paves the way for new opportunities and methodologies in using LLMs for bug discovery in extensive, real-world datasets.
http://arxiv.org/pdf/2308.00245
Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian
cs.SE, cs.AI
null
null
cs.SE
20230801
20231115
[ { "id": "2305.10601" }, { "id": "2107.03374" }, { "id": "2210.11610" }, { "id": "2305.16291" }, { "id": "2305.16151" }, { "id": "2303.18223" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "2203.02155" }, { "id": "2304.11938" }, { "id": "2304.03262" }, { "id": "2304.10513" }, { "id": "2201.11903" }, { "id": "2305.12138" }, { "id": "2305.12865" }, { "id": "2303.08774" }, { "id": "2306.01987" }, { "id": "2304.06815" } ]
2308.00352
18
[Figure residue, continued: the requirement analysis calls for "a 2048 sliding tile number puzzle game which is user-friendly" with a requirement pool ("Develop a simple and intuitive user interface", "Implement engaging gameplay mechanics"); the Project Manager (Action: WriteTasks) lists the task files (main.py, game.py, gui.py, database.py) together with a logic analysis and shared knowledge; the Engineer (Action: WriteCode) implements game.py, containing the Game class with the game logic (e.g., a start() method that adds two numbers to the board).]
2308.00352#18
MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework
Remarkable progress has been made on automated problem solving through societies of agents based on large language models (LLMs). Existing LLM-based multi-agent systems can already solve simple dialogue tasks. Solutions to more complex tasks, however, are complicated through logic inconsistencies due to cascading hallucinations caused by naively chaining LLMs. Here we introduce MetaGPT, an innovative meta-programming framework incorporating efficient human workflows into LLM-based multi-agent collaborations. MetaGPT encodes Standardized Operating Procedures (SOPs) into prompt sequences for more streamlined workflows, thus allowing agents with human-like domain expertise to verify intermediate results and reduce errors. MetaGPT utilizes an assembly line paradigm to assign diverse roles to various agents, efficiently breaking down complex tasks into subtasks involving many agents working together. On collaborative software engineering benchmarks, MetaGPT generates more coherent solutions than previous chat-based multi-agent systems. Our project can be found at https://github.com/geekan/MetaGPT
http://arxiv.org/pdf/2308.00352
Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, Jürgen Schmidhuber
cs.AI, cs.MA
null
null
cs.AI
20230801
20231106
[ { "id": "2308.12950" }, { "id": "2305.17066" }, { "id": "1511.09249" }, { "id": "2308.11432" }, { "id": "2306.08568" }, { "id": "2310.02304" }, { "id": "2303.08896" }, { "id": "2204.05999" }, { "id": "2309.16797" }, { "id": "2002.08155" }, { "id": "2305.16960" } ]
2308.00436
18
During execution, we copy the question and steps into [Question] and [Step 0, ..., Step i] to form the actual input to the LLM. The reason for requesting a brief answer is to try and keep the amount of information retained to the minimum needed, thereby avoiding unnecessary influence on the regeneration and hopefully reducing correlations in errors in turn. Information collection To reduce the difficulty of the regeneration stage and avoid unrelated information from affecting the result, we filter out information that is not directly related to the current step. Specifically, we ask the LLM to select useful items from the question and all previous items with the following prompt, where [Information j] is simply the j-th sentence in the question: This is a math question: [Question]. The following is information extracted from the question: Information 0: [Information 0] The following are the first a few steps in a solution to the problem: Step 0: [Step 0] Which previous steps or information does the next step [Step i] directly follow from? After retrieving the free-text response from the LLM, we extract step or information ids by regular expression. For example in Figure 1, the current step requires Step 4 and no information from the question as context. The selected steps and information are then fed into the regeneration stage.
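The "extract step or information ids by regular expression" part can be illustrated in a few lines of Python; the exact patterns used by SelfCheck are not given in this excerpt, so the ones below are assumptions.

```python
import re

def extract_ids(llm_response: str):
    """Pull referenced step and information indices out of a free-text
    LLM response such as 'The step follows from Step 4 and Information 1.'"""
    steps = [int(m) for m in re.findall(r"[Ss]tep\s+(\d+)", llm_response)]
    infos = [int(m) for m in re.findall(r"[Ii]nformation\s+(\d+)", llm_response)]
    return sorted(set(steps)), sorted(set(infos))

if __name__ == "__main__":
    print(extract_ids("The next step directly follows from Step 4."))
    # -> ([4], [])
    print(extract_ids("It uses Information 0 and Information 2, plus Step 1."))
    # -> ([1], [0, 2])
```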
2308.00436#18
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
The recent progress in large language models (LLMs), especially the invention of chain-of-thought prompting, has made it possible to automatically answer questions by stepwise reasoning. However, when faced with more complicated problems that require non-linear thinking, even the strongest LLMs make mistakes. To address this, we explore whether LLMs are able to recognize errors in their own step-by-step reasoning, without resorting to external resources. To this end, we propose SelfCheck, a general-purpose zero-shot verification schema for recognizing such errors. We then use the results of these checks to improve question-answering performance by conducting weighted voting on multiple solutions to the question. We test SelfCheck on three datasets (GSM8K, MathQA, and MATH) and find that it successfully recognizes errors and, in turn, increases final answer accuracies.
http://arxiv.org/pdf/2308.00436
Ning Miao, Yee Whye Teh, Tom Rainforth
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230801
20231005
[ { "id": "2206.02336" }, { "id": "2302.13971" }, { "id": "2110.14168" }, { "id": "2305.20050" }, { "id": "2204.02311" }, { "id": "2301.13379" }, { "id": "2306.03872" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.08774" }, { "id": "2212.09561" } ]
2308.00675
18
Figure 3: The new LLM Cloud Platform command-line toolkit, an unseen toolset for existing LLMs built by renaming real-world Google Cloud command-line tools. information to answer corresponding math questions. On TabMWP, we also follow Chameleon [40] with the same set of 9 tools, such as a program generator and column lookup. Multi-modal reasoning on NLVRv2. NLVRv2 [60] requires the model to verify whether a statement is true for a pair of images, which demands compositional understanding of both text and images. On NLVRv2, we follow the setup used in Visual Programming (VisProg) [19], with 20 vision modules (tools) for image understanding and manipulation. Since VisProg relies only on few-shot demonstrations and does not provide documentation for the modules, we generate the documentation for each module ourselves, including a description of the module's functionality and its function signature. We provide the full documentation we use for each module in the appendix.
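As an illustration of how such documentation can be composed from a module's signature and a short functionality description, the following sketch builds one documentation entry; the `crop` module and the entry layout are hypothetical stand-ins, not the actual tool set or format used in the paper.

```python
import inspect
from typing import Callable

def build_tool_doc(name: str, fn: Callable, description: str) -> str:
    """Compose a zero-shot documentation entry for one vision module.

    The (name, signature, description) format mirrors the idea described
    above; the exact layout used in the paper may differ.
    """
    signature = f"{name}{inspect.signature(fn)}"
    return f"Tool: {name}\nSignature: {signature}\nDescription: {description}\n"

# Hypothetical module used only for illustration.
def crop(image, box: tuple) -> "Image":
    """Crop `image` to the bounding box (x0, y0, x1, y1)."""
    raise NotImplementedError

doc = build_tool_doc(
    "crop",
    crop,
    "Crops the input image to the given bounding box and returns the cropped image.",
)
print(doc)
```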
2308.00675#18
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Today, large language models (LLMs) are taught to use new tools by providing a few demonstrations of the tool's usage. Unfortunately, demonstrations are hard to acquire, and can result in undesirable biased usage if the wrong demonstration is chosen. Even in the rare scenario that demonstrations are readily available, there is no principled selection protocol to determine how many and which ones to provide. As tasks grow more complex, the selection search grows combinatorially and invariably becomes intractable. Our work provides an alternative to demonstrations: tool documentation. We advocate the use of tool documentation, descriptions for the individual tool usage, over demonstrations. We substantiate our claim through three main empirical findings on 6 tasks across both vision and language modalities. First, on existing benchmarks, zero-shot prompts with only tool documentation are sufficient for eliciting proper tool usage, achieving performance on par with few-shot prompts. Second, on a newly collected realistic tool-use dataset with hundreds of available tool APIs, we show that tool documentation is significantly more valuable than demonstrations, with zero-shot documentation significantly outperforming few-shot without documentation. Third, we highlight the benefits of tool documentations by tackling image generation and video tracking using just-released unseen state-of-the-art models as tools. Finally, we highlight the possibility of using tool documentation to automatically enable new applications: by using nothing more than the documentation of GroundingDino, Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the just-released Grounded-SAM and Track Anything models.
http://arxiv.org/pdf/2308.00675
Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister
cs.CL, cs.AI, cs.CV, cs.LG
null
null
cs.CL
20230801
20230801
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2204.10878" }, { "id": "1910.08210" }, { "id": "2107.07653" }, { "id": "2201.11903" }, { "id": "2305.17126" }, { "id": "1704.07535" }, { "id": "2205.01068" }, { "id": "2203.05115" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2305.04091" }, { "id": "2303.05499" }, { "id": "2107.07566" }, { "id": "2110.14168" }, { "id": "2210.03350" }, { "id": "2303.11381" }, { "id": "2101.06804" }, { "id": "2304.08354" }, { "id": "2212.14024" }, { "id": "2305.18752" }, { "id": "2211.10435" }, { "id": "2303.04671" }, { "id": "2210.12810" }, { "id": "1808.09588" }, { "id": "2304.11062" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2210.02406" }, { "id": "2212.10560" }, { "id": "2303.04129" }, { "id": "1704.01696" }, { "id": "2302.00923" }, { "id": "2211.12588" }, { "id": "1908.03557" }, { "id": "2210.05359" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2201.08239" }, { "id": "2303.16199" }, { "id": "2304.09842" }, { "id": "2204.01691" }, { "id": "2305.01625" }, { "id": "2303.12712" }, { "id": "2207.05608" }, { "id": "2303.03846" }, { "id": "2211.11559" }, { "id": "2207.01206" }, { "id": "2303.09014" }, { "id": "2205.12255" }, { "id": "2208.03188" } ]
2308.01423
18
[Figure: example ChatMOF run for structure generation] Question: Can you generate the structures with the largest surface area? Thought: I need to generate structures with the largest surface area. Action: generator. Action Input: "Generate structures with the largest surface area". [generator] Objective: max. [generator] Predict Property: accessible_surface_area (load model accessible_surface_area/best.ckpt; run prediction over the data loader). [generator] Find Parents: extract cif_id and accessible_surface_area of 100 materials; top entries of the search table include pcu+N47+E33 (5808.59 m^2/cm^3), pcu+N274+E32 (5714.84 m^2/cm^3), and pcu+N613+E90 (5665.73 m^2/cm^3). [generator] Get Children: generate 100 new materials with the largest surface area from the parents. [generator] Generate Structures: generate CIFs; the cycle is run N times. [generator] Final Thought: Based on the generated CIF, find the ... (truncated)
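The transcript implies a generate-predict-select loop. The sketch below outlines such a loop under the assumption of two placeholder callables, `predict_property` and `generate_children`; it is not ChatMOF's actual implementation, only an illustration of repeatedly ranking parents by predicted accessible surface area and generating new candidates from them.

```python
def generate_max_property(seed_structures, predict_property, generate_children,
                          n_cycles: int = 3, n_parents: int = 100):
    """Sketch of the evolutionary loop implied by the transcript above.

    predict_property: maps a list of structures to predicted values
        (here, accessible surface area); a stand-in for the ML predictor.
    generate_children: produces new candidate structures from parents;
        a stand-in for the generative step. Both are placeholders.
    """
    population = list(seed_structures)
    for _ in range(n_cycles):
        scores = predict_property(population)
        # Keep the top-scoring structures as parents (objective: max).
        ranked = sorted(zip(population, scores), key=lambda x: x[1], reverse=True)
        parents = [s for s, _ in ranked[:n_parents]]
        # Generate new candidates (e.g., new CIFs) from the parents.
        population = parents + generate_children(parents)
    scores = predict_property(population)
    best, best_score = max(zip(population, scores), key=lambda x: x[1])
    return best, best_score
```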
2308.01423#18
ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks
ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to predict and generate metal-organic frameworks (MOFs). By leveraging a large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key details from textual inputs and delivers appropriate responses, thus eliminating the necessity for rigid structured queries. The system comprises three core components (i.e. an agent, a toolkit, and an evaluator) and forms a robust pipeline that manages a variety of tasks, including data retrieval, property prediction, and structure generation. The study further explores the merits and constraints of using large language model (LLM) AI systems in materials science and showcases their transformative potential for future advancements.
http://arxiv.org/pdf/2308.01423
Yeonghun Kang, Jihan Kim
cs.CL, cs.AI, cs.LG, physics.chem-ph
null
null
cs.CL
20230801
20230825
[ { "id": "2302.13971" }, { "id": "2306.11296" }, { "id": "2303.17580" }, { "id": "2305.18365" }, { "id": "2305.10601" }, { "id": "1810.04805" }, { "id": "2211.09085" }, { "id": "2304.05376" }, { "id": "2212.05238" }, { "id": "2108.07258" }, { "id": "2110.06197" }, { "id": "2306.06283" }, { "id": "2008.02275" }, { "id": "2303.12712" }, { "id": "2210.03629" }, { "id": "2205.00445" }, { "id": "2009.03300" } ]
2308.00245
19
3.1.2 Postcondition. Postconditions encapsulate the expected state or behavior of a system upon the conclusion of a routine [18]. Path Conflicts. A path p of P, qualified by the postcondition refined into the post-constraint C_post, can be pruned if ¬(path(p) ∧ C_post). This implies that if a path inherently contradicts the post-constraint, it can be removed from consideration. Outcome Conflicts. Let O(p) denote the set of all outcomes or effects produced by path p. A path can be pruned if any of its outcomes conflicts with the post-constraint: ∃ o ∈ O(p), ¬(o ∧ C_post). This stipulates that if an outcome of a path inherently contradicts the post-constraint, that path can be disregarded in the analysis. Correctness. The validity of these optimization methods can be proved by contradiction: consider an instance where one of these paths is executed; if the path conflicts with C_post, it would render u unreachable. Thus, such paths can be pruned without sacrificing the correctness of the analysis. We provide a concrete example of how we perform these opti...
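A minimal sketch of the two pruning rules follows, assuming paths are encoded as dictionaries holding a path constraint and a list of outcome constraints, and that conflict is checked by naive enumeration over candidate states; LLift's real analysis relies on static analysis and an LLM rather than this toy satisfiability check.

```python
from typing import Callable, Dict, Iterable, List

State = Dict[str, int]
Constraint = Callable[[State], bool]  # True if the state satisfies the constraint

def conflicts(a: Constraint, b: Constraint, candidate_states: Iterable[State]) -> bool:
    """Crude satisfiability check: a and b conflict if no candidate state
    satisfies both. A real analysis would use a solver, not enumeration."""
    return not any(a(s) and b(s) for s in candidate_states)

def prune_paths(paths: List[dict], c_post: Constraint,
                candidate_states: Iterable[State]) -> List[dict]:
    """Drop paths whose path constraint or any outcome contradicts C_post.

    Each path is a dict with a 'constraint' (Constraint) and 'outcomes'
    (list of Constraint) -- a hypothetical encoding for illustration only.
    """
    candidate_states = list(candidate_states)
    kept = []
    for p in paths:
        # Path conflict: prune if ¬(path(p) ∧ C_post)
        if conflicts(p["constraint"], c_post, candidate_states):
            continue
        # Outcome conflict: prune if ∃ o ∈ O(p), ¬(o ∧ C_post)
        if any(conflicts(o, c_post, candidate_states) for o in p["outcomes"]):
            continue
        kept.append(p)
    return kept
```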
2308.00245#19
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Static analysis is a widely used technique in software engineering for identifying and mitigating bugs. However, a significant hurdle lies in achieving a delicate balance between precision and scalability. Large Language Models (LLMs) offer a promising alternative, as recent advances demonstrate remarkable capabilities in comprehending, generating, and even debugging code. Yet, the logic of bugs can be complex and require sophisticated reasoning and a large analysis scope spanning multiple functions. Therefore, at this point, LLMs are better used in an assistive role to complement static analysis. In this paper, we take a deep dive into the open space of LLM-assisted static analysis, using use-before-initialization (UBI) bugs as a case study. To this end, we develop LLift, a fully automated framework that interfaces with both a static analysis tool and an LLM. By carefully designing the framework and the prompts, we are able to overcome a number of challenges, including bug-specific modeling, the large problem scope, the non-deterministic nature of LLMs, etc. Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs produced by static analysis, LLift demonstrates a potent capability, showcasing a reasonable precision (50%) and appearing to have no missing bugs. It even identified 13 previously unknown UBI bugs in the Linux kernel. This research paves the way for new opportunities and methodologies in using LLMs for bug discovery in extensive, real-world datasets.
http://arxiv.org/pdf/2308.00245
Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian
cs.SE, cs.AI
null
null
cs.SE
20230801
20231115
[ { "id": "2305.10601" }, { "id": "2107.03374" }, { "id": "2210.11610" }, { "id": "2305.16291" }, { "id": "2305.16151" }, { "id": "2303.18223" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "2203.02155" }, { "id": "2304.11938" }, { "id": "2304.03262" }, { "id": "2304.10513" }, { "id": "2201.11903" }, { "id": "2305.12138" }, { "id": "2305.12865" }, { "id": "2303.08774" }, { "id": "2306.01987" }, { "id": "2304.06815" } ]
2308.00352
19
[Figure: MetaGPT workflow example — generated file plan and QA review] File plan: main.py, game.py ("Implements the game logic, including the score, and game over condition"), gui.py, database.py, test_game.py, test_gui.py; one file "handles user input" and provides direct human interaction for gameplay. QA Engineer — Action: WriteCodeReview (code quality review). Think: "For this project, we will use Pygame, an open-source library designed for making video games with Python."
2308.00352#19
MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework
Remarkable progress has been made on automated problem solving through societies of agents based on large language models (LLMs). Existing LLM-based multi-agent systems can already solve simple dialogue tasks. Solutions to more complex tasks, however, are complicated through logic inconsistencies due to cascading hallucinations caused by naively chaining LLMs. Here we introduce MetaGPT, an innovative meta-programming framework incorporating efficient human workflows into LLM-based multi-agent collaborations. MetaGPT encodes Standardized Operating Procedures (SOPs) into prompt sequences for more streamlined workflows, thus allowing agents with human-like domain expertise to verify intermediate results and reduce errors. MetaGPT utilizes an assembly line paradigm to assign diverse roles to various agents, efficiently breaking down complex tasks into subtasks involving many agents working together. On collaborative software engineering benchmarks, MetaGPT generates more coherent solutions than previous chat-based multi-agent systems. Our project can be found at https://github.com/geekan/MetaGPT
http://arxiv.org/pdf/2308.00352
Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, Jürgen Schmidhuber
cs.AI, cs.MA
null
null
cs.AI
20230801
20231106
[ { "id": "2308.12950" }, { "id": "2305.17066" }, { "id": "1511.09249" }, { "id": "2308.11432" }, { "id": "2306.08568" }, { "id": "2310.02304" }, { "id": "2303.08896" }, { "id": "2204.05999" }, { "id": "2309.16797" }, { "id": "2002.08155" }, { "id": "2305.16960" } ]