Dataset columns:
doi: string (length 10)
chunk-id: int64 (0–936)
chunk: string (length 401–2.02k)
id: string (length 12–14)
title: string (length 8–162)
summary: string (length 228–1.92k)
source: string (length 31)
authors: string (length 7–6.97k)
categories: string (length 5–107)
comment: string (length 4–398)
journal_ref: string (length 8–194)
primary_category: string (length 5–17)
published: string (length 8)
updated: string (length 8)
references: list
2308.00675
40
[12] Jiuhai Chen, Lichang Chen, Chen Zhu, and Tianyi Zhou. How many demonstrations do you need for in-context learning? 2023. [13] Wenhu Chen, Xueguang Ma, Xinyi Wang, and William W Cohen. Program of thoughts prompting: Disentangling computation from reasoning for numerical reasoning tasks. arXiv preprint arXiv:2211.12588, 2022. [14] Ho Kei Cheng and Alexander G Schwing. Xmem: Long-term video object segmentation with an atkinson-shiffrin memory model. In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXVIII, pages 640–658. Springer, 2022. [15] Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021. [16] Dídac Surís, Sachit Menon, and Carl Vondrick. Vipergpt: Visual inference via python execution for reasoning. arXiv preprint arXiv:2303.08128, 2023.
2308.00675#40
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Today, large language models (LLMs) are taught to use new tools by providing a few demonstrations of the tool's usage. Unfortunately, demonstrations are hard to acquire, and can result in undesirable biased usage if the wrong demonstration is chosen. Even in the rare scenario that demonstrations are readily available, there is no principled selection protocol to determine how many and which ones to provide. As tasks grow more complex, the selection search grows combinatorially and invariably becomes intractable. Our work provides an alternative to demonstrations: tool documentation. We advocate the use of tool documentation, descriptions for the individual tool usage, over demonstrations. We substantiate our claim through three main empirical findings on 6 tasks across both vision and language modalities. First, on existing benchmarks, zero-shot prompts with only tool documentation are sufficient for eliciting proper tool usage, achieving performance on par with few-shot prompts. Second, on a newly collected realistic tool-use dataset with hundreds of available tool APIs, we show that tool documentation is significantly more valuable than demonstrations, with zero-shot documentation significantly outperforming few-shot without documentation. Third, we highlight the benefits of tool documentations by tackling image generation and video tracking using just-released unseen state-of-the-art models as tools. Finally, we highlight the possibility of using tool documentation to automatically enable new applications: by using nothing more than the documentation of GroundingDino, Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the just-released Grounded-SAM and Track Anything models.
http://arxiv.org/pdf/2308.00675
Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister
cs.CL, cs.AI, cs.CV, cs.LG
null
null
cs.CL
20230801
20230801
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2204.10878" }, { "id": "1910.08210" }, { "id": "2107.07653" }, { "id": "2201.11903" }, { "id": "2305.17126" }, { "id": "1704.07535" }, { "id": "2205.01068" }, { "id": "2203.05115" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2305.04091" }, { "id": "2303.05499" }, { "id": "2107.07566" }, { "id": "2110.14168" }, { "id": "2210.03350" }, { "id": "2303.11381" }, { "id": "2101.06804" }, { "id": "2304.08354" }, { "id": "2212.14024" }, { "id": "2305.18752" }, { "id": "2211.10435" }, { "id": "2303.04671" }, { "id": "2210.12810" }, { "id": "1808.09588" }, { "id": "2304.11062" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2210.02406" }, { "id": "2212.10560" }, { "id": "2303.04129" }, { "id": "1704.01696" }, { "id": "2302.00923" }, { "id": "2211.12588" }, { "id": "1908.03557" }, { "id": "2210.05359" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2201.08239" }, { "id": "2303.16199" }, { "id": "2304.09842" }, { "id": "2204.01691" }, { "id": "2305.01625" }, { "id": "2303.12712" }, { "id": "2207.05608" }, { "id": "2303.03846" }, { "id": "2211.11559" }, { "id": "2207.01206" }, { "id": "2303.09014" }, { "id": "2205.12255" }, { "id": "2208.03188" } ]
2308.00245
41
5 IMPLEMENTATION We implement the prototype of LLift on top of OpenAI’s API [19] (i.e., gpt-4-0613). We describe some implementation details in the following aspects: Interaction with LLMs. LLift’s interaction with LLMs is managed by a simple agent developed in Python, containing roughly 1,000 lines of code. In addition, it uses seven prompts, which altogether constitute about 2,000 tokens across two conversations. All interactions are fully automated via OpenAI’s APIs. Besides sending prompts and waiting for responses, our agent also 1) interacts with LLMs according to the progressive prompt design, 2) locates function definitions within the Linux source code, and 3) processes responses from LLMs and stores them in a database. Hyper-Parameters. There are several hyper-parameters when calling the APIs provided by OpenAI. We set max_token and temperature to 1,024 and 1.0, respectively. max_token controls the output length; since LLMs predict each next token conditioned on the previous output, a longer output budget can benefit their reasoning. However, too many tokens exhaust the context window quickly, so we pick 1,024 as a reasonable balance. The temperature controls the randomness and, with it, the ability to reason. Intuitively, we want the analysis to be as non-random as possible and would reduce the temperature (it can take a value between 0
2308.00245#41
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Static analysis is a widely used technique in software engineering for identifying and mitigating bugs. However, a significant hurdle lies in achieving a delicate balance between precision and scalability. Large Language Models (LLMs) offer a promising alternative, as recent advances demonstrate remarkable capabilities in comprehending, generating, and even debugging code. Yet, the logic of bugs can be complex and require sophisticated reasoning and a large analysis scope spanning multiple functions. Therefore, at this point, LLMs are better used in an assistive role to complement static analysis. In this paper, we take a deep dive into the open space of LLM-assisted static analysis, using use-before-initialization (UBI) bugs as a case study. To this end, we develop LLift, a fully automated framework that interfaces with both a static analysis tool and an LLM. By carefully designing the framework and the prompts, we are able to overcome a number of challenges, including bug-specific modeling, the large problem scope, the non-deterministic nature of LLMs, etc. Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs produced by static analysis, LLift demonstrates a potent capability, showcasing a reasonable precision (50%) and appearing to have no missing bugs. It even identified 13 previously unknown UBI bugs in the Linux kernel. This research paves the way for new opportunities and methodologies in using LLMs for bug discovery in extensive, real-world datasets.
http://arxiv.org/pdf/2308.00245
Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian
cs.SE, cs.AI
null
null
cs.SE
20230801
20231115
[ { "id": "2305.10601" }, { "id": "2107.03374" }, { "id": "2210.11610" }, { "id": "2305.16291" }, { "id": "2305.16151" }, { "id": "2303.18223" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "2203.02155" }, { "id": "2304.11938" }, { "id": "2304.03262" }, { "id": "2304.10513" }, { "id": "2201.11903" }, { "id": "2305.12138" }, { "id": "2305.12865" }, { "id": "2303.08774" }, { "id": "2306.01987" }, { "id": "2304.06815" } ]
2308.00436
41
Zhan Ling, Yunhao Fang, Xuanlin Li, Zhiao Huang, Mingu Lee, Roland Memisevic, and Hao Su. Deductive verification of chain-of-thought reasoning. arXiv preprint arXiv:2306.03872, 2023. Qing Lyu, Shreya Havaldar, Adam Stein, Li Zhang, Delip Rao, Eric Wong, Marianna Apidianaki, and Chris Callison-Burch. Faithful chain-of-thought reasoning. arXiv preprint arXiv:2301.13379, 2023. OpenAI. Gpt-4 technical report. arXiv preprint arXiv:2303.08774, 2023. Baolin Peng, Michel Galley, Pengcheng He, Hao Cheng, Yujia Xie, Yu Hu, Qiuyuan Huang, Lars Liden, Zhou Yu, Weizhu Chen, et al. Check your facts and try again: Improving large language models with external knowledge and automated feedback. arXiv preprint arXiv:2302.12813, 2023.
2308.00436#41
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
The recent progress in large language models (LLMs), especially the invention of chain-of-thought prompting, has made it possible to automatically answer questions by stepwise reasoning. However, when faced with more complicated problems that require non-linear thinking, even the strongest LLMs make mistakes. To address this, we explore whether LLMs are able to recognize errors in their own step-by-step reasoning, without resorting to external resources. To this end, we propose SelfCheck, a general-purpose zero-shot verification schema for recognizing such errors. We then use the results of these checks to improve question-answering performance by conducting weighted voting on multiple solutions to the question. We test SelfCheck on three datasets (GSM8K, MathQA, and MATH) and find that it successfully recognizes errors and, in turn, increases final answer accuracies.
http://arxiv.org/pdf/2308.00436
Ning Miao, Yee Whye Teh, Tom Rainforth
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230801
20231005
[ { "id": "2206.02336" }, { "id": "2302.13971" }, { "id": "2110.14168" }, { "id": "2305.20050" }, { "id": "2204.02311" }, { "id": "2301.13379" }, { "id": "2306.03872" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.08774" }, { "id": "2212.09561" } ]
2308.00675
41
[17] Danny Driess, Fei Xia, Mehdi S. M. Sajjadi, Corey Lynch, Aakanksha Chowdhery, Brian Ichter, Ayzaan Wahid, Jonathan Tompson, Quan Vuong, Tianhe Yu, Wenlong Huang, Yevgen Chebotar, Pierre Sermanet, Daniel Duckworth, Sergey Levine, Vincent Vanhoucke, Karol Hausman, Marc Toussaint, Klaus Greff, Andy Zeng, Igor Mordatch, and Pete Florence. Palm-e: An embodied multimodal language model. In arXiv preprint arXiv:2303.03378, 2023. [18] Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, and Graham Neubig. Pal: Program-aided language models. arXiv preprint arXiv:2211.10435, 2022. [19] Tanmay Gupta and Aniruddha Kembhavi. Visual programming: Compositional visual reasoning without training. arXiv preprint arXiv:2211.11559, 2022.
2308.00675#41
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Today, large language models (LLMs) are taught to use new tools by providing a few demonstrations of the tool's usage. Unfortunately, demonstrations are hard to acquire, and can result in undesirable biased usage if the wrong demonstration is chosen. Even in the rare scenario that demonstrations are readily available, there is no principled selection protocol to determine how many and which ones to provide. As tasks grow more complex, the selection search grows combinatorially and invariably becomes intractable. Our work provides an alternative to demonstrations: tool documentation. We advocate the use of tool documentation, descriptions for the individual tool usage, over demonstrations. We substantiate our claim through three main empirical findings on 6 tasks across both vision and language modalities. First, on existing benchmarks, zero-shot prompts with only tool documentation are sufficient for eliciting proper tool usage, achieving performance on par with few-shot prompts. Second, on a newly collected realistic tool-use dataset with hundreds of available tool APIs, we show that tool documentation is significantly more valuable than demonstrations, with zero-shot documentation significantly outperforming few-shot without documentation. Third, we highlight the benefits of tool documentations by tackling image generation and video tracking using just-released unseen state-of-the-art models as tools. Finally, we highlight the possibility of using tool documentation to automatically enable new applications: by using nothing more than the documentation of GroundingDino, Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the just-released Grounded-SAM and Track Anything models.
http://arxiv.org/pdf/2308.00675
Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister
cs.CL, cs.AI, cs.CV, cs.LG
null
null
cs.CL
20230801
20230801
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2204.10878" }, { "id": "1910.08210" }, { "id": "2107.07653" }, { "id": "2201.11903" }, { "id": "2305.17126" }, { "id": "1704.07535" }, { "id": "2205.01068" }, { "id": "2203.05115" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2305.04091" }, { "id": "2303.05499" }, { "id": "2107.07566" }, { "id": "2110.14168" }, { "id": "2210.03350" }, { "id": "2303.11381" }, { "id": "2101.06804" }, { "id": "2304.08354" }, { "id": "2212.14024" }, { "id": "2305.18752" }, { "id": "2211.10435" }, { "id": "2303.04671" }, { "id": "2210.12810" }, { "id": "1808.09588" }, { "id": "2304.11062" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2210.02406" }, { "id": "2212.10560" }, { "id": "2303.04129" }, { "id": "1704.01696" }, { "id": "2302.00923" }, { "id": "2211.12588" }, { "id": "1908.03557" }, { "id": "2210.05359" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2201.08239" }, { "id": "2303.16199" }, { "id": "2304.09842" }, { "id": "2204.01691" }, { "id": "2305.01625" }, { "id": "2303.12712" }, { "id": "2207.05608" }, { "id": "2303.03846" }, { "id": "2211.11559" }, { "id": "2207.01206" }, { "id": "2303.09014" }, { "id": "2205.12255" }, { "id": "2208.03188" } ]
2308.01423
41
learning framework for predicting properties of inorganic materials. npj Computational Materials 2, 1-7 (2016). Calfa, B. A. & Kitchin, J. R. Property prediction of crystalline solids from composition and crystal structure. AIChE Journal 62, 2605-2613 (2016). Jablonka, K. M. et al. 14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon. arXiv preprint arXiv:2306.06283 (2023). Zheng, Z., Zhang, O., Borgs, C., Chayes, J. T. & Yaghi, O. M. ChatGPT Chemistry Assistant for Text Mining and Prediction of MOF Synthesis. arXiv preprint arXiv:2306.11296 (2023). Dunn, A. et al. Structured information extraction from complex scientific text with fine-tuned large language models. arXiv preprint arXiv:2212.05238 (2022). James, S. L. Metal-organic frameworks. Chemical Society Reviews 32, 276-288 (2003). Kitagawa, S. Metal–organic frameworks (MOFs). Chemical Society Reviews 43, 5415-
2308.01423#41
ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks
ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to predict and generate metal-organic frameworks (MOFs). By leveraging a large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key details from textual inputs and delivers appropriate responses, thus eliminating the necessity for rigid structured queries. The system is comprised of three core components (i.e. an agent, a toolkit, and an evaluator) and it forms a robust pipeline that manages a variety of tasks, including data retrieval, property prediction, and structure generation. The study further explores the merits and constraints of using large language model (LLM) AI systems in materials science and showcases its transformative potential for future advancements.
http://arxiv.org/pdf/2308.01423
Yeonghun Kang, Jihan Kim
cs.CL, cs.AI, cs.LG, physics.chem-ph
null
null
cs.CL
20230801
20230825
[ { "id": "2302.13971" }, { "id": "2306.11296" }, { "id": "2303.17580" }, { "id": "2305.18365" }, { "id": "2305.10601" }, { "id": "1810.04805" }, { "id": "2211.09085" }, { "id": "2304.05376" }, { "id": "2212.05238" }, { "id": "2108.07258" }, { "id": "2110.06197" }, { "id": "2306.06283" }, { "id": "2008.02275" }, { "id": "2303.12712" }, { "id": "2210.03629" }, { "id": "2205.00445" }, { "id": "2009.03300" } ]
2308.00245
42
1https://sites.google.com/view/llift-open/prompt and 2 for GPT models); however, an overly low temperature can result in repetitive or overly simplistic responses. We set it to 1.0 (also the default of gpt-4-0613), which allows for higher-quality responses, and use strategies such as self-validation and majority voting to improve the consistency of responses. 6 EVALUATION Our evaluation aims to address the following research questions. RQ1 (Precision): How accurately is LLift able to identify bugs? RQ2 (Recall): Is there a possibility for LLift to miss real bugs? RQ3 (Comparison): How does the performance of individual components within LLift compare to that of the final design? RQ4 (Model Versatility): How does LLift perform when applied to LLMs other than GPT-4? We evaluate RQ1 to RQ3 on GPT-4, via OpenAI's API with version gpt-4-0613. For RQ4, we additionally test GPT-3.5 (gpt-3.5-turbo-0613) and Claude 2 for comparison.
2308.00245#42
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Static analysis is a widely used technique in software engineering for identifying and mitigating bugs. However, a significant hurdle lies in achieving a delicate balance between precision and scalability. Large Language Models (LLMs) offer a promising alternative, as recent advances demonstrate remarkable capabilities in comprehending, generating, and even debugging code. Yet, the logic of bugs can be complex and require sophisticated reasoning and a large analysis scope spanning multiple functions. Therefore, at this point, LLMs are better used in an assistive role to complement static analysis. In this paper, we take a deep dive into the open space of LLM-assisted static analysis, using use-before-initialization (UBI) bugs as a case study. To this end, we develop LLift, a fully automated framework that interfaces with both a static analysis tool and an LLM. By carefully designing the framework and the prompts, we are able to overcome a number of challenges, including bug-specific modeling, the large problem scope, the non-deterministic nature of LLMs, etc. Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs produced by static analysis, LLift demonstrates a potent capability, showcasing a reasonable precision (50%) and appearing to have no missing bugs. It even identified 13 previously unknown UBI bugs in the Linux kernel. This research paves the way for new opportunities and methodologies in using LLMs for bug discovery in extensive, real-world datasets.
http://arxiv.org/pdf/2308.00245
Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian
cs.SE, cs.AI
null
null
cs.SE
20230801
20231115
[ { "id": "2305.10601" }, { "id": "2107.03374" }, { "id": "2210.11610" }, { "id": "2305.16291" }, { "id": "2305.16151" }, { "id": "2303.18223" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "2203.02155" }, { "id": "2304.11938" }, { "id": "2304.03262" }, { "id": "2304.10513" }, { "id": "2201.11903" }, { "id": "2305.12138" }, { "id": "2305.12865" }, { "id": "2303.08774" }, { "id": "2306.01987" }, { "id": "2304.06815" } ]
2308.00436
42
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023. Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V Le, Ed H Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models. In The Eleventh International Conference on Learning Representations, 2022. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837, 2022. Yixuan Weng, Minjun Zhu, Shizhu He, Kang Liu, and Jun Zhao. Large language models are reasoners with self-verification. arXiv preprint arXiv:2212.09561, 2022. # APPENDIX A: A COMPLETE EXAMPLE OF SELFCHECK
2308.00436#42
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
The recent progress in large language models (LLMs), especially the invention of chain-of-thought prompting, has made it possible to automatically answer questions by stepwise reasoning. However, when faced with more complicated problems that require non-linear thinking, even the strongest LLMs make mistakes. To address this, we explore whether LLMs are able to recognize errors in their own step-by-step reasoning, without resorting to external resources. To this end, we propose SelfCheck, a general-purpose zero-shot verification schema for recognizing such errors. We then use the results of these checks to improve question-answering performance by conducting weighted voting on multiple solutions to the question. We test SelfCheck on three datasets (GSM8K, MathQA, and MATH) and find that it successfully recognizes errors and, in turn, increases final answer accuracies.
http://arxiv.org/pdf/2308.00436
Ning Miao, Yee Whye Teh, Tom Rainforth
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230801
20231005
[ { "id": "2206.02336" }, { "id": "2302.13971" }, { "id": "2110.14168" }, { "id": "2305.20050" }, { "id": "2204.02311" }, { "id": "2301.13379" }, { "id": "2306.03872" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.08774" }, { "id": "2212.09561" } ]
2308.00675
42
[20] Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Mingwei Chang. Retrieval augmented language model pre-training. In International conference on machine learning, pages 3929–3938. PMLR, 2020. [21] Wenlong Huang, Pieter Abbeel, Deepak Pathak, and Igor Mordatch. Language models as zero-shot planners: Extracting actionable knowledge for embodied agents. In International Conference on Machine Learning, pages 9118–9147. PMLR, 2022. [22] Wenlong Huang, Fei Xia, Ted Xiao, Harris Chan, Jacky Liang, Pete Florence, Andy Zeng, Jonathan Tompson, Igor Mordatch, Yevgen Chebotar, et al. Inner monologue: Embodied reasoning through planning with language models. arXiv preprint arXiv:2207.05608, 2022. [23] IDEA-Research. Grounded-segment-anything. https://github.com/IDEA-Research/Grounded-Segment-Anything, 2023. Accessed: 05/15/2023. [24] Shima Imani, Liang Du, and Harsh Shrivastava. Mathprompter: Mathematical reasoning using large language models. arXiv preprint arXiv:2303.05398, 2023.
2308.00675#42
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Today, large language models (LLMs) are taught to use new tools by providing a few demonstrations of the tool's usage. Unfortunately, demonstrations are hard to acquire, and can result in undesirable biased usage if the wrong demonstration is chosen. Even in the rare scenario that demonstrations are readily available, there is no principled selection protocol to determine how many and which ones to provide. As tasks grow more complex, the selection search grows combinatorially and invariably becomes intractable. Our work provides an alternative to demonstrations: tool documentation. We advocate the use of tool documentation, descriptions for the individual tool usage, over demonstrations. We substantiate our claim through three main empirical findings on 6 tasks across both vision and language modalities. First, on existing benchmarks, zero-shot prompts with only tool documentation are sufficient for eliciting proper tool usage, achieving performance on par with few-shot prompts. Second, on a newly collected realistic tool-use dataset with hundreds of available tool APIs, we show that tool documentation is significantly more valuable than demonstrations, with zero-shot documentation significantly outperforming few-shot without documentation. Third, we highlight the benefits of tool documentations by tackling image generation and video tracking using just-released unseen state-of-the-art models as tools. Finally, we highlight the possibility of using tool documentation to automatically enable new applications: by using nothing more than the documentation of GroundingDino, Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the just-released Grounded-SAM and Track Anything models.
http://arxiv.org/pdf/2308.00675
Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister
cs.CL, cs.AI, cs.CV, cs.LG
null
null
cs.CL
20230801
20230801
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2204.10878" }, { "id": "1910.08210" }, { "id": "2107.07653" }, { "id": "2201.11903" }, { "id": "2305.17126" }, { "id": "1704.07535" }, { "id": "2205.01068" }, { "id": "2203.05115" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2305.04091" }, { "id": "2303.05499" }, { "id": "2107.07566" }, { "id": "2110.14168" }, { "id": "2210.03350" }, { "id": "2303.11381" }, { "id": "2101.06804" }, { "id": "2304.08354" }, { "id": "2212.14024" }, { "id": "2305.18752" }, { "id": "2211.10435" }, { "id": "2303.04671" }, { "id": "2210.12810" }, { "id": "1808.09588" }, { "id": "2304.11062" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2210.02406" }, { "id": "2212.10560" }, { "id": "2303.04129" }, { "id": "1704.01696" }, { "id": "2302.00923" }, { "id": "2211.12588" }, { "id": "1908.03557" }, { "id": "2210.05359" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2201.08239" }, { "id": "2303.16199" }, { "id": "2304.09842" }, { "id": "2204.01691" }, { "id": "2305.01625" }, { "id": "2303.12712" }, { "id": "2207.05608" }, { "id": "2303.03846" }, { "id": "2211.11559" }, { "id": "2207.01206" }, { "id": "2303.09014" }, { "id": "2205.12255" }, { "id": "2208.03188" } ]
2308.00245
43
6.1 Dataset Our experiment data, sourced from UBITect, includes all potential bugs that were labeled by its static analysis stage but experienced timeout or memory exhaustion during its symbolic execution stage. Overall, UBITect’s static analysis stage produced 140,000 potential bugs, with symbolic execution able to process only 60%, leaving 53,000 cases unattended; these cases are generally difficult for static analysis or symbolic execution to decide. We craft the following datasets from the 53,000 cases to evaluate LLift: (1) Random-1000. We randomly chose 1,000 of the 53,000 cases for testing. However, there are 182 cases with no initializers, which are automatically recognized and filtered out (see §3). The remaining 818 cases are used in evaluating precision, i.e., the ratio of true positives to false positives. (2) Bug-50. This dataset comprises the 52 confirmed UBI bugs previously identified by UBITect. It is used as ground truth for assessing recall by verifying whether any true bugs were overlooked.
2308.00245#43
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Static analysis is a widely used technique in software engineering for identifying and mitigating bugs. However, a significant hurdle lies in achieving a delicate balance between precision and scalability. Large Language Models (LLMs) offer a promising alternative, as recent advances demonstrate remarkable capabilities in comprehending, generating, and even debugging code. Yet, the logic of bugs can be complex and require sophisticated reasoning and a large analysis scope spanning multiple functions. Therefore, at this point, LLMs are better used in an assistive role to complement static analysis. In this paper, we take a deep dive into the open space of LLM-assisted static analysis, using use-before-initialization (UBI) bugs as a case study. To this end, we develop LLift, a fully automated framework that interfaces with both a static analysis tool and an LLM. By carefully designing the framework and the prompts, we are able to overcome a number of challenges, including bug-specific modeling, the large problem scope, the non-deterministic nature of LLMs, etc. Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs produced by static analysis, LLift demonstrates a potent capability, showcasing a reasonable precision (50%) and appearing to have no missing bugs. It even identified 13 previously unknown UBI bugs in the Linux kernel. This research paves the way for new opportunities and methodologies in using LLMs for bug discovery in extensive, real-world datasets.
http://arxiv.org/pdf/2308.00245
Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian
cs.SE, cs.AI
null
null
cs.SE
20230801
20231115
[ { "id": "2305.10601" }, { "id": "2107.03374" }, { "id": "2210.11610" }, { "id": "2305.16291" }, { "id": "2305.16151" }, { "id": "2303.18223" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "2203.02155" }, { "id": "2304.11938" }, { "id": "2304.03262" }, { "id": "2304.10513" }, { "id": "2201.11903" }, { "id": "2305.12138" }, { "id": "2305.12865" }, { "id": "2303.08774" }, { "id": "2306.01987" }, { "id": "2304.06815" } ]
2308.00352
43
T. DeMarco and T.R. Lister. Peopleware: Productive Projects and Teams. Addison-Wesley, 2013. URL https://books.google.co.uk/books?id=DVlsAQAAQBAJ. Yihong Dong, Xue Jiang, Zhi Jin, and Ge Li. Self-collaboration code generation via chatgpt. arXiv preprint, 2023. Yilun Du, Shuang Li, Antonio Torralba, Joshua B. Tenenbaum, and Igor Mordatch. Improving factuality and reasoning in language models through multiagent debate, 2023. Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, Ming Gong, Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, et al. Codebert: A pre-trained model for programming and natural languages. arXiv preprint arXiv:2002.08155, 2020. Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, and Tim Rocktäschel. Promptbreeder: Self-referential self-improvement via prompt evolution. arXiv preprint arXiv:2309.16797, 2023.
2308.00352#43
MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework
Remarkable progress has been made on automated problem solving through societies of agents based on large language models (LLMs). Existing LLM-based multi-agent systems can already solve simple dialogue tasks. Solutions to more complex tasks, however, are complicated through logic inconsistencies due to cascading hallucinations caused by naively chaining LLMs. Here we introduce MetaGPT, an innovative meta-programming framework incorporating efficient human workflows into LLM-based multi-agent collaborations. MetaGPT encodes Standardized Operating Procedures (SOPs) into prompt sequences for more streamlined workflows, thus allowing agents with human-like domain expertise to verify intermediate results and reduce errors. MetaGPT utilizes an assembly line paradigm to assign diverse roles to various agents, efficiently breaking down complex tasks into subtasks involving many agents working together. On collaborative software engineering benchmarks, MetaGPT generates more coherent solutions than previous chat-based multi-agent systems. Our project can be found at https://github.com/geekan/MetaGPT
http://arxiv.org/pdf/2308.00352
Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, Jürgen Schmidhuber
cs.AI, cs.MA
null
null
cs.AI
20230801
20231106
[ { "id": "2308.12950" }, { "id": "2305.17066" }, { "id": "1511.09249" }, { "id": "2308.11432" }, { "id": "2306.08568" }, { "id": "2310.02304" }, { "id": "2303.08896" }, { "id": "2204.05999" }, { "id": "2309.16797" }, { "id": "2002.08155" }, { "id": "2305.16960" } ]
2308.00436
43
# APPENDIX A: A COMPLETE EXAMPLE OF SELFCHECK To help readers get a better idea of how SelfCheck works, we show a complete version of the example in Figure 1, including the exact inputs and outputs of the different stages. Please note that we have compiled all LaTeX code for readability, though the inputs and outputs to LLMs are in their original form. The question is: Let S be the set of complex numbers z such that the real part of 1/z is equal to 1/6. This set forms a curve. Find the area of the region inside the curve. We feed the following instruction to the LLM that serves as the generator: Solve the following problem step by step. Please start each step with "Step :" and split sentences in each step with " ". Please finish your response with "So the answer is ...". Problem: Let S be the set of complex numbers z such that the real part of 1/z is equal to 1/6. This set forms a curve. Find the area of the region inside the curve. After getting the following solution, we start checking Step 5 as an example:
2308.00436#43
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
The recent progress in large language models (LLMs), especially the invention of chain-of-thought prompting, has made it possible to automatically answer questions by stepwise reasoning. However, when faced with more complicated problems that require non-linear thinking, even the strongest LLMs make mistakes. To address this, we explore whether LLMs are able to recognize errors in their own step-by-step reasoning, without resorting to external resources. To this end, we propose SelfCheck, a general-purpose zero-shot verification schema for recognizing such errors. We then use the results of these checks to improve question-answering performance by conducting weighted voting on multiple solutions to the question. We test SelfCheck on three datasets (GSM8K, MathQA, and MATH) and find that it successfully recognizes errors and, in turn, increases final answer accuracies.
http://arxiv.org/pdf/2308.00436
Ning Miao, Yee Whye Teh, Tom Rainforth
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230801
20231005
[ { "id": "2206.02336" }, { "id": "2302.13971" }, { "id": "2110.14168" }, { "id": "2305.20050" }, { "id": "2204.02311" }, { "id": "2301.13379" }, { "id": "2306.03872" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.08774" }, { "id": "2212.09561" } ]
2308.00675
43
[25] Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, and Luke Zettlemoyer. Mapping language to code in programmatic context. arXiv preprint arXiv:1808.09588, 2018. [26] Daniel Khashabi, Sewon Min, Tushar Khot, Ashish Sabharwal, Oyvind Tafjord, Peter Clark, and Hannaneh Hajishirzi. UNIFIEDQA: Crossing format boundaries with a single QA system. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1896–1907, Online, November 2020. Association for Computational Linguistics. [27] Omar Khattab, Keshav Santhanam, Xiang Lisa Li, David Hall, Percy Liang, Christopher Potts, and Matei Zaharia. Demonstrate-search-predict: Composing retrieval and language models for knowledge-intensive nlp. arXiv preprint arXiv:2212.14024, 2022. [28] Tushar Khot, Harsh Trivedi, Matthew Finlayson, Yao Fu, Kyle Richardson, Peter Clark, and Ashish Sabharwal. Decomposed prompting: A modular approach for solving complex tasks. arXiv preprint arXiv:2210.02406, 2022.
2308.00675#43
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Today, large language models (LLMs) are taught to use new tools by providing a few demonstrations of the tool's usage. Unfortunately, demonstrations are hard to acquire, and can result in undesirable biased usage if the wrong demonstration is chosen. Even in the rare scenario that demonstrations are readily available, there is no principled selection protocol to determine how many and which ones to provide. As tasks grow more complex, the selection search grows combinatorially and invariably becomes intractable. Our work provides an alternative to demonstrations: tool documentation. We advocate the use of tool documentation, descriptions for the individual tool usage, over demonstrations. We substantiate our claim through three main empirical findings on 6 tasks across both vision and language modalities. First, on existing benchmarks, zero-shot prompts with only tool documentation are sufficient for eliciting proper tool usage, achieving performance on par with few-shot prompts. Second, on a newly collected realistic tool-use dataset with hundreds of available tool APIs, we show that tool documentation is significantly more valuable than demonstrations, with zero-shot documentation significantly outperforming few-shot without documentation. Third, we highlight the benefits of tool documentations by tackling image generation and video tracking using just-released unseen state-of-the-art models as tools. Finally, we highlight the possibility of using tool documentation to automatically enable new applications: by using nothing more than the documentation of GroundingDino, Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the just-released Grounded-SAM and Track Anything models.
http://arxiv.org/pdf/2308.00675
Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister
cs.CL, cs.AI, cs.CV, cs.LG
null
null
cs.CL
20230801
20230801
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2204.10878" }, { "id": "1910.08210" }, { "id": "2107.07653" }, { "id": "2201.11903" }, { "id": "2305.17126" }, { "id": "1704.07535" }, { "id": "2205.01068" }, { "id": "2203.05115" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2305.04091" }, { "id": "2303.05499" }, { "id": "2107.07566" }, { "id": "2110.14168" }, { "id": "2210.03350" }, { "id": "2303.11381" }, { "id": "2101.06804" }, { "id": "2304.08354" }, { "id": "2212.14024" }, { "id": "2305.18752" }, { "id": "2211.10435" }, { "id": "2303.04671" }, { "id": "2210.12810" }, { "id": "1808.09588" }, { "id": "2304.11062" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2210.02406" }, { "id": "2212.10560" }, { "id": "2303.04129" }, { "id": "1704.01696" }, { "id": "2302.00923" }, { "id": "2211.12588" }, { "id": "1908.03557" }, { "id": "2210.05359" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2201.08239" }, { "id": "2303.16199" }, { "id": "2304.09842" }, { "id": "2204.01691" }, { "id": "2305.01625" }, { "id": "2303.12712" }, { "id": "2207.05608" }, { "id": "2303.03846" }, { "id": "2211.11559" }, { "id": "2207.01206" }, { "id": "2303.09014" }, { "id": "2205.12255" }, { "id": "2208.03188" } ]
2308.01423
43
Kumar, S. et al. Green synthesis of metal–organic frameworks: A state-of-the-art review of potential environmental and medical applications. Coordination Chemistry Reviews 420, 213407 (2020). Qian, Q. et al. MOF-based membranes for gas separations. Chemical reviews 120, 8161-8266 (2020). Lee, J. et al. Metal–organic framework materials as catalysts. Chemical Society Reviews 38, 1450-1459 (2009). Zhang, X. et al. A historical overview of the activation and porosity of metal–organic frameworks. Chemical Society Reviews 49, 7406-7427 (2020). Deng, H. et al. Large-pore apertures in a series of metal-organic frameworks. Science 336, 1018-1023 (2012). Furukawa, H., Cordova, K. E., O’Keeffe, M. & Yaghi, O. M. The chemistry and applications of metal-organic frameworks. Science 341, 1230444 (2013). Wang, C., Liu, D. & Lin, W. Metal–organic frameworks as a tunable platform for designing functional molecular materials. Journal of the American Chemical Society 135, 13222-13234 (2013).
2308.01423#43
ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks
ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to predict and generate metal-organic frameworks (MOFs). By leveraging a large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key details from textual inputs and delivers appropriate responses, thus eliminating the necessity for rigid structured queries. The system is comprised of three core components (i.e. an agent, a toolkit, and an evaluator) and it forms a robust pipeline that manages a variety of tasks, including data retrieval, property prediction, and structure generation. The study further explores the merits and constraints of using large language model (LLM) AI systems in materials science and showcases its transformative potential for future advancements.
http://arxiv.org/pdf/2308.01423
Yeonghun Kang, Jihan Kim
cs.CL, cs.AI, cs.LG, physics.chem-ph
null
null
cs.CL
20230801
20230825
[ { "id": "2302.13971" }, { "id": "2306.11296" }, { "id": "2303.17580" }, { "id": "2305.18365" }, { "id": "2305.10601" }, { "id": "1810.04805" }, { "id": "2211.09085" }, { "id": "2304.05376" }, { "id": "2212.05238" }, { "id": "2108.07258" }, { "id": "2110.06197" }, { "id": "2306.06283" }, { "id": "2008.02275" }, { "id": "2303.12712" }, { "id": "2210.03629" }, { "id": "2205.00445" }, { "id": "2009.03300" } ]
2308.00245
44
(3) Cmp-40. This dataset comprises 27 negative and 13 positive cases selected from the Random-1000. We utilize this dataset to illustrate which of our design strategies contributed most to the outcome of our solution. Turns and Conversations. Due to the progressive prompt, each case may require a different number of turns (pairs of a prompt and a response). In Random-1000, the average number of turns is 2.78, with a max of 8 and a variance of 1.20. Cost. On average, it costs 7,000 tokens in GPT-4 to analyze each potential bug. 6.2 RQ1: Precision LLift reports 26 positives on the Random-1000 dataset, half of which are true bugs based on our manual inspection. This represents a precision of 50%. In keeping with UBITect, we focus on the analysis of Linux v4.14; 12 of the bugs still exist in the latest Linux kernel. We are in the process of reporting the 12 bugs to the Linux community. So far, we have submitted patches for 4 bugs and received confirmation that they are true bugs. Table 3: True bugs identified by LLift from Random-1000, analyzing Linux v4.14
2308.00245#44
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Static analysis is a widely used technique in software engineering for identifying and mitigating bugs. However, a significant hurdle lies in achieving a delicate balance between precision and scalability. Large Language Models (LLMs) offer a promising alternative, as recent advances demonstrate remarkable capabilities in comprehending, generating, and even debugging code. Yet, the logic of bugs can be complex and require sophisticated reasoning and a large analysis scope spanning multiple functions. Therefore, at this point, LLMs are better used in an assistive role to complement static analysis. In this paper, we take a deep dive into the open space of LLM-assisted static analysis, using use-before-initialization (UBI) bugs as a case study. To this end, we develop LLift, a fully automated framework that interfaces with both a static analysis tool and an LLM. By carefully designing the framework and the prompts, we are able to overcome a number of challenges, including bug-specific modeling, the large problem scope, the non-deterministic nature of LLMs, etc. Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs produced by static analysis, LLift demonstrates a potent capability, showcasing a reasonable precision (50%) and appearing to have no missing bugs. It even identified 13 previously unknown UBI bugs in the Linux kernel. This research paves the way for new opportunities and methodologies in using LLMs for bug discovery in extensive, real-world datasets.
http://arxiv.org/pdf/2308.00245
Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian
cs.SE, cs.AI
null
null
cs.SE
20230801
20231115
[ { "id": "2305.10601" }, { "id": "2107.03374" }, { "id": "2210.11610" }, { "id": "2305.16291" }, { "id": "2305.16151" }, { "id": "2303.18223" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "2203.02155" }, { "id": "2304.11938" }, { "id": "2304.03262" }, { "id": "2304.10513" }, { "id": "2201.11903" }, { "id": "2305.12138" }, { "id": "2305.12865" }, { "id": "2303.08774" }, { "id": "2306.01987" }, { "id": "2304.06815" } ]
2308.00352
44
Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In International conference on machine learning, pp. 1126–1135. PMLR, 2017. Daniel Fried, Armen Aghajanyan, Jessy Lin, Sida Wang, Eric Wallace, Freda Shi, Ruiqi Zhong, Wen-tau Yih, Luke Zettlemoyer, and Mike Lewis. Incoder: A generative model for code infilling and synthesis. arXiv preprint arXiv:2204.05999, 2022. Irving John Good. Speculations concerning the first ultraintelligent machine. Adv. Comput., 6: 31–88, 1965. doi: 10.1016/S0065-2458(08)60418-0. URL https://doi.org/10.1016/S0065-2458(08)60418-0. Rui Hao, Linmei Hu, Weijian Qi, Qingliu Wu, Yirui Zhang, and Liqiang Nie. Chatllm network: More brains, more intelligence. arXiv preprint, 2023.
2308.00352#44
MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework
Remarkable progress has been made on automated problem solving through societies of agents based on large language models (LLMs). Existing LLM-based multi-agent systems can already solve simple dialogue tasks. Solutions to more complex tasks, however, are complicated through logic inconsistencies due to cascading hallucinations caused by naively chaining LLMs. Here we introduce MetaGPT, an innovative meta-programming framework incorporating efficient human workflows into LLM-based multi-agent collaborations. MetaGPT encodes Standardized Operating Procedures (SOPs) into prompt sequences for more streamlined workflows, thus allowing agents with human-like domain expertise to verify intermediate results and reduce errors. MetaGPT utilizes an assembly line paradigm to assign diverse roles to various agents, efficiently breaking down complex tasks into subtasks involving many agents working together. On collaborative software engineering benchmarks, MetaGPT generates more coherent solutions than previous chat-based multi-agent systems. Our project can be found at https://github.com/geekan/MetaGPT
http://arxiv.org/pdf/2308.00352
Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, Jürgen Schmidhuber
cs.AI, cs.MA
null
null
cs.AI
20230801
20231106
[ { "id": "2308.12950" }, { "id": "2305.17066" }, { "id": "1511.09249" }, { "id": "2308.11432" }, { "id": "2306.08568" }, { "id": "2310.02304" }, { "id": "2303.08896" }, { "id": "2204.05999" }, { "id": "2309.16797" }, { "id": "2002.08155" }, { "id": "2305.16960" } ]
2308.00436
44
After getting the following solution, we start checking Step 5 as an example: Solution: (Step 0) Let z = x + yi be a complex number, where x and y are real numbers. (Step 1) We are given that the real part of 1/z is equal to 1/6. (Step 2) Using the fact that the real part of a complex number a + bi is a, we have Re(1/z) = Re(1/(x + yi)) = x/(x^2 + y^2). (Step 3) Setting Re(1/z) = 1/6, we get x/(x^2 + y^2) = 1/6. (Step 4) Cross-multiplying and rearranging, we get the equation 6x = x^2 + y^2. (Step 5) Completing the square, we obtain (x - 3)^2 + y^2 = 9. (Step 6) This is the equation of a circle with center (3, 0) and radius 3. (Step 7) The area of the region inside the circle is 9π. So the answer is 9π. Variable definition extraction This optional stage extracts the definitions of variables that appear in the reasoning chain. SelfCheck feeds the definitions to the regeneration stage so that the LLM does not misunderstand the meaning of a variable. The input to the LLM is:
2308.00436#44
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
The recent progress in large language models (LLMs), especially the invention of chain-of-thought prompting, has made it possible to automatically answer questions by stepwise reasoning. However, when faced with more complicated problems that require non-linear thinking, even the strongest LLMs make mistakes. To address this, we explore whether LLMs are able to recognize errors in their own step-by-step reasoning, without resorting to external resources. To this end, we propose SelfCheck, a general-purpose zero-shot verification schema for recognizing such errors. We then use the results of these checks to improve question-answering performance by conducting weighted voting on multiple solutions to the question. We test SelfCheck on three datasets (GSM8K, MathQA, and MATH) and find that it successfully recognizes errors and, in turn, increases final answer accuracies.
http://arxiv.org/pdf/2308.00436
Ning Miao, Yee Whye Teh, Tom Rainforth
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230801
20231005
[ { "id": "2206.02336" }, { "id": "2302.13971" }, { "id": "2110.14168" }, { "id": "2305.20050" }, { "id": "2204.02311" }, { "id": "2301.13379" }, { "id": "2306.03872" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.08774" }, { "id": "2212.09561" } ]
2308.00675
44
[29] Wonjae Kim, Bokyung Son, and Ildoo Kim. Vilt: Vision-and-language transformer without convolution or region supervision. In International Conference on Machine Learning, pages 5583–5594. PMLR, 2021. [30] Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C Berg, Wan-Yen Lo, et al. Segment anything. arXiv preprint arXiv:2304.02643, 2023. [31] Mojtaba Komeili, Kurt Shuster, and Jason Weston. Internet-augmented dialogue generation. arXiv preprint arXiv:2107.07566, 2021. [32] Angeliki Lazaridou, Elena Gribovskaya, Wojciech Stokowiec, and Nikolai Grigorev. Internet-augmented language models through few-shot prompting for open-domain question answering. arXiv preprint arXiv:2203.05115, 2022.
2308.00675#44
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Today, large language models (LLMs) are taught to use new tools by providing a few demonstrations of the tool's usage. Unfortunately, demonstrations are hard to acquire, and can result in undesirable biased usage if the wrong demonstration is chosen. Even in the rare scenario that demonstrations are readily available, there is no principled selection protocol to determine how many and which ones to provide. As tasks grow more complex, the selection search grows combinatorially and invariably becomes intractable. Our work provides an alternative to demonstrations: tool documentation. We advocate the use of tool documentation, descriptions for the individual tool usage, over demonstrations. We substantiate our claim through three main empirical findings on 6 tasks across both vision and language modalities. First, on existing benchmarks, zero-shot prompts with only tool documentation are sufficient for eliciting proper tool usage, achieving performance on par with few-shot prompts. Second, on a newly collected realistic tool-use dataset with hundreds of available tool APIs, we show that tool documentation is significantly more valuable than demonstrations, with zero-shot documentation significantly outperforming few-shot without documentation. Third, we highlight the benefits of tool documentations by tackling image generation and video tracking using just-released unseen state-of-the-art models as tools. Finally, we highlight the possibility of using tool documentation to automatically enable new applications: by using nothing more than the documentation of GroundingDino, Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the just-released Grounded-SAM and Track Anything models.
http://arxiv.org/pdf/2308.00675
Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister
cs.CL, cs.AI, cs.CV, cs.LG
null
null
cs.CL
20230801
20230801
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2204.10878" }, { "id": "1910.08210" }, { "id": "2107.07653" }, { "id": "2201.11903" }, { "id": "2305.17126" }, { "id": "1704.07535" }, { "id": "2205.01068" }, { "id": "2203.05115" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2305.04091" }, { "id": "2303.05499" }, { "id": "2107.07566" }, { "id": "2110.14168" }, { "id": "2210.03350" }, { "id": "2303.11381" }, { "id": "2101.06804" }, { "id": "2304.08354" }, { "id": "2212.14024" }, { "id": "2305.18752" }, { "id": "2211.10435" }, { "id": "2303.04671" }, { "id": "2210.12810" }, { "id": "1808.09588" }, { "id": "2304.11062" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2210.02406" }, { "id": "2212.10560" }, { "id": "2303.04129" }, { "id": "1704.01696" }, { "id": "2302.00923" }, { "id": "2211.12588" }, { "id": "1908.03557" }, { "id": "2210.05359" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2201.08239" }, { "id": "2303.16199" }, { "id": "2304.09842" }, { "id": "2204.01691" }, { "id": "2305.01625" }, { "id": "2303.12712" }, { "id": "2207.05608" }, { "id": "2303.03846" }, { "id": "2211.11559" }, { "id": "2207.01206" }, { "id": "2303.09014" }, { "id": "2205.12255" }, { "id": "2208.03188" } ]
2308.01423
44
35 Wei, J. et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems 35, 24824-24837 (2022). Yao, S. et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:2305.10601 (2023). Yao, S. et al. React: Synergizing reasoning and acting in language models. arXiv preprint arXiv:2210.03629 (2022). Karpas, E. et al. MRKL Systems: A modular, neuro-symbolic architecture that combines large language models, external knowledge sources and discrete reasoning. arXiv preprint arXiv:2205.00445 (2022). Chung, Y. G. et al. Computation-ready, experimental metal–organic frameworks: A tool to enable high-throughput screening of nanoporous crystals. Chemistry of Materials 26, 6185-6192 (2014). Chung, Y. G. et al. Advances, updates, and analytics for the computation-ready, experimental metal–organic framework database: CoRE MOF 2019. Journal of Chemical & Engineering Data 64,
2308.01423#44
ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks
ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to predict and generate metal-organic frameworks (MOFs). By leveraging a large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key details from textual inputs and delivers appropriate responses, thus eliminating the necessity for rigid structured queries. The system is comprised of three core components (i.e. an agent, a toolkit, and an evaluator) and it forms a robust pipeline that manages a variety of tasks, including data retrieval, property prediction, and structure generation. The study further explores the merits and constraints of using large language model (LLM) AI systems in materials science and showcases its transformative potential for future advancements.
http://arxiv.org/pdf/2308.01423
Yeonghun Kang, Jihan Kim
cs.CL, cs.AI, cs.LG, physics.chem-ph
null
null
cs.CL
20230801
20230825
[ { "id": "2302.13971" }, { "id": "2306.11296" }, { "id": "2303.17580" }, { "id": "2305.18365" }, { "id": "2305.10601" }, { "id": "1810.04805" }, { "id": "2211.09085" }, { "id": "2304.05376" }, { "id": "2212.05238" }, { "id": "2108.07258" }, { "id": "2110.06197" }, { "id": "2306.06283" }, { "id": "2008.02275" }, { "id": "2303.12712" }, { "id": "2210.03629" }, { "id": "2205.00445" }, { "id": "2009.03300" } ]
2308.00245
45
Initializer | Caller | File Path: read_reg | get_signal_parameters | drivers/media/dvb-frontends/stv0910.c; regmap_read | isc_update_profile | drivers/media/platform/atmel/atmel-isc.c; ep0_read_setup | ep0_handle_setup | drivers/usb/mtu3/mtu3_gadget_ep0.c; regmap_read | mdio_sc_cfg_reg_write | drivers/net/ethernet/hisilicon/hns_mdio.c; bcm3510_do_hab_cmd | bcm3510_check_firmware_version | drivers/media/dvb-frontends/bcm3510.c; readCapabilityRid | airo_get_range | drivers/net/wireless/cisco/airo.c; e1e_rphy | __e1000_resume; pci_read_config_dword | adm8211_probe; lan78xx_read_reg | lan78xx_write_raw_otp; t1_tpi_read | my3126_phy_reset; pci_read_config_dword | quirk_intel_purley_xeon_ras_cap; ata_timing_compute | opti82c46x_set_piomode; pt_completion | pt_req_sense (the Variable, Line, and remaining File Path columns are cut off at the chunk boundary)
2308.00245#45
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Static analysis is a widely used technique in software engineering for identifying and mitigating bugs. However, a significant hurdle lies in achieving a delicate balance between precision and scalability. Large Language Models (LLMs) offer a promising alternative, as recent advances demonstrate remarkable capabilities in comprehending, generating, and even debugging code. Yet, the logic of bugs can be complex and require sophisticated reasoning and a large analysis scope spanning multiple functions. Therefore, at this point, LLMs are better used in an assistive role to complement static analysis. In this paper, we take a deep dive into the open space of LLM-assisted static analysis, using use-before-initialization (UBI) bugs as a case study. To this end, we develop LLift, a fully automated framework that interfaces with both a static analysis tool and an LLM. By carefully designing the framework and the prompts, we are able to overcome a number of challenges, including bug-specific modeling, the large problem scope, the non-deterministic nature of LLMs, etc. Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs produced by static analysis, LLift demonstrates a potent capability, showcasing a reasonable precision (50%) and appearing to have no missing bugs. It even identified 13 previously unknown UBI bugs in the Linux kernel. This research paves the way for new opportunities and methodologies in using LLMs for bug discovery in extensive, real-world datasets.
http://arxiv.org/pdf/2308.00245
Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian
cs.SE, cs.AI
null
null
cs.SE
20230801
20231115
[ { "id": "2305.10601" }, { "id": "2107.03374" }, { "id": "2210.11610" }, { "id": "2305.16291" }, { "id": "2305.16151" }, { "id": "2303.18223" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "2203.02155" }, { "id": "2304.11938" }, { "id": "2304.03262" }, { "id": "2304.10513" }, { "id": "2201.11903" }, { "id": "2305.12138" }, { "id": "2305.12865" }, { "id": "2303.08774" }, { "id": "2306.01987" }, { "id": "2304.06815" } ]
2308.00352
45
S. Hochreiter, A. S. Younger, and P. R. Conwell. Learning to learn using gradient descent. In Lecture Notes on Comp. Sci. 2130, Proc. Intl. Conf. on Artificial Neural Networks (ICANN-2001), pp. 87–94. Springer: Berlin, Heidelberg, 2001. Guohao Li, Hasan Abed Al Kader Hammoud, Hani Itani, Dmitrii Khizbullin, and Bernard Ghanem. Camel: Communicative agents for "mind" exploration of large scale language model society. arXiv preprint, 2023. Yujia Li, David Choi, Junyoung Chung, Nate Kushman, Julian Schrittwieser, Rémi Leblond, Tom Eccles, James Keeling, Felix Gimeno, Agustin Dal Lago, et al. Competition-level code generation with alphacode. Science, 2022. Tian Liang, Zhiwei He, Wenxiang Jiao, Xing Wang, Yan Wang, Rui Wang, Yujiu Yang, Zhaopeng Tu, and Shuming Shi. Encouraging divergent thinking in large language models through multi-agent debate. arXiv preprint, 2023.
2308.00352#45
MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework
Remarkable progress has been made on automated problem solving through societies of agents based on large language models (LLMs). Existing LLM-based multi-agent systems can already solve simple dialogue tasks. Solutions to more complex tasks, however, are complicated through logic inconsistencies due to cascading hallucinations caused by naively chaining LLMs. Here we introduce MetaGPT, an innovative meta-programming framework incorporating efficient human workflows into LLM-based multi-agent collaborations. MetaGPT encodes Standardized Operating Procedures (SOPs) into prompt sequences for more streamlined workflows, thus allowing agents with human-like domain expertise to verify intermediate results and reduce errors. MetaGPT utilizes an assembly line paradigm to assign diverse roles to various agents, efficiently breaking down complex tasks into subtasks involving many agents working together. On collaborative software engineering benchmarks, MetaGPT generates more coherent solutions than previous chat-based multi-agent systems. Our project can be found at https://github.com/geekan/MetaGPT
http://arxiv.org/pdf/2308.00352
Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, Jürgen Schmidhuber
cs.AI, cs.MA
null
null
cs.AI
20230801
20231106
[ { "id": "2308.12950" }, { "id": "2305.17066" }, { "id": "1511.09249" }, { "id": "2308.11432" }, { "id": "2306.08568" }, { "id": "2310.02304" }, { "id": "2303.08896" }, { "id": "2204.05999" }, { "id": "2309.16797" }, { "id": "2002.08155" }, { "id": "2305.16960" } ]
2308.00436
45
The following is the solution to a math question: Question: Let S be the set of complex numbers z such that the real part of 1/z is equal to 1/6. This set forms a curve. Find the area of the region inside the curve. Solution: (Step 0) Let z = x + yi be a complex number, where x and y are real numbers. ... (Step 7) The area of the region inside the circle is 9π/4. So the answer is 9π/4. Are there variables in the solution? If so, please list the definition of variable in the form of "1. variable is defined as...". The underlined parts are the type of question, the question itself and the steps in its solution, respectively. The output from the LLM is: Yes. There are variables in the solution. 1. z is defined as a complex number of the form x + yi, where x and y are real numbers. The bold part is then saved to form a part of the input in the regeneration stage. Target extraction: To get a brief and clear target of the current step, the input to the LLM is: The following is a part of the solution to the problem: Let S be the set of complex numbers z such that the real part of 1/z is equal to 1/6. This set forms a curve. Find the area of the region inside the curve.
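For reference, the geometry behind this running example follows directly from the stated condition; a brief worked derivation is given below, assuming the condition is Re(1/z) = 1/6 as in the reconstructed question. This is a reference computation added here, not part of the paper's prompts, and it need not agree with the answer produced by the solution being checked, since SelfCheck is designed precisely to flag such discrepancies.

\[
\operatorname{Re}\left(\frac{1}{z}\right) = \frac{x}{x^{2}+y^{2}} = \frac{1}{6}
\;\Longrightarrow\; x^{2}+y^{2} = 6x
\;\Longrightarrow\; (x-3)^{2}+y^{2} = 9,
\]

so the curve is a circle of radius 3 centered at (3, 0), and the enclosed area is \(9\pi\).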
2308.00436#45
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
The recent progress in large language models (LLMs), especially the invention of chain-of-thought prompting, has made it possible to automatically answer questions by stepwise reasoning. However, when faced with more complicated problems that require non-linear thinking, even the strongest LLMs make mistakes. To address this, we explore whether LLMs are able to recognize errors in their own step-by-step reasoning, without resorting to external resources. To this end, we propose SelfCheck, a general-purpose zero-shot verification schema for recognizing such errors. We then use the results of these checks to improve question-answering performance by conducting weighted voting on multiple solutions to the question. We test SelfCheck on three datasets (GSM8K, MathQA, and MATH) and find that it successfully recognizes errors and, in turn, increases final answer accuracies.
http://arxiv.org/pdf/2308.00436
Ning Miao, Yee Whye Teh, Tom Rainforth
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230801
20231005
[ { "id": "2206.02336" }, { "id": "2302.13971" }, { "id": "2110.14168" }, { "id": "2305.20050" }, { "id": "2204.02311" }, { "id": "2301.13379" }, { "id": "2306.03872" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.08774" }, { "id": "2212.09561" } ]
2308.00675
45
[33] Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, et al. Retrieval-augmented generation for knowledge-intensive nlp tasks. Advances in Neural Information Processing Systems, 33:9459–9474, 2020. [34] Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, and Kai-Wei Chang. Visualbert: A simple and performant baseline for vision and language. arXiv preprint arXiv:1908.03557, 2019. [35] Jiachang Liu, Dinghan Shen, Yizhe Zhang, Bill Dolan, Lawrence Carin, and Weizhu Chen. What makes good in-context examples for gpt-3? arXiv preprint arXiv:2101.06804, 2021. [36] Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, and Jian-Guang Lou. Tapex: Table pre-training via learning a neural sql executor. arXiv preprint arXiv:2107.07653, 2021.
2308.00675#45
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Today, large language models (LLMs) are taught to use new tools by providing a few demonstrations of the tool's usage. Unfortunately, demonstrations are hard to acquire, and can result in undesirable biased usage if the wrong demonstration is chosen. Even in the rare scenario that demonstrations are readily available, there is no principled selection protocol to determine how many and which ones to provide. As tasks grow more complex, the selection search grows combinatorially and invariably becomes intractable. Our work provides an alternative to demonstrations: tool documentation. We advocate the use of tool documentation, descriptions for the individual tool usage, over demonstrations. We substantiate our claim through three main empirical findings on 6 tasks across both vision and language modalities. First, on existing benchmarks, zero-shot prompts with only tool documentation are sufficient for eliciting proper tool usage, achieving performance on par with few-shot prompts. Second, on a newly collected realistic tool-use dataset with hundreds of available tool APIs, we show that tool documentation is significantly more valuable than demonstrations, with zero-shot documentation significantly outperforming few-shot without documentation. Third, we highlight the benefits of tool documentations by tackling image generation and video tracking using just-released unseen state-of-the-art models as tools. Finally, we highlight the possibility of using tool documentation to automatically enable new applications: by using nothing more than the documentation of GroundingDino, Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the just-released Grounded-SAM and Track Anything models.
http://arxiv.org/pdf/2308.00675
Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister
cs.CL, cs.AI, cs.CV, cs.LG
null
null
cs.CL
20230801
20230801
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2204.10878" }, { "id": "1910.08210" }, { "id": "2107.07653" }, { "id": "2201.11903" }, { "id": "2305.17126" }, { "id": "1704.07535" }, { "id": "2205.01068" }, { "id": "2203.05115" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2305.04091" }, { "id": "2303.05499" }, { "id": "2107.07566" }, { "id": "2110.14168" }, { "id": "2210.03350" }, { "id": "2303.11381" }, { "id": "2101.06804" }, { "id": "2304.08354" }, { "id": "2212.14024" }, { "id": "2305.18752" }, { "id": "2211.10435" }, { "id": "2303.04671" }, { "id": "2210.12810" }, { "id": "1808.09588" }, { "id": "2304.11062" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2210.02406" }, { "id": "2212.10560" }, { "id": "2303.04129" }, { "id": "1704.01696" }, { "id": "2302.00923" }, { "id": "2211.12588" }, { "id": "1908.03557" }, { "id": "2210.05359" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2201.08239" }, { "id": "2303.16199" }, { "id": "2304.09842" }, { "id": "2204.01691" }, { "id": "2305.01625" }, { "id": "2303.12712" }, { "id": "2207.05608" }, { "id": "2303.03846" }, { "id": "2211.11559" }, { "id": "2207.01206" }, { "id": "2303.09014" }, { "id": "2205.12255" }, { "id": "2208.03188" } ]
2308.01423
45
updates, and analytics for the computation-ready, experimental metal–organic framework database: CoRE MOF 2019. Journal of Chemical & Engineering Data 64, 5985-5998 (2019). Rosen, A. S. et al. Machine learning the quantum-chemical properties of metal–organic frameworks for accelerated materials discovery. Matter 4, 1578-1597 (2021). 42 Moghadam, P. Z. et al. Development of a Cambridge Structural Database subset: a collection of metal–organic frameworks for past, present, and future. Chemistry of Materials 29, 2618-2625 (2017).
2308.01423#45
ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks
ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to predict and generate metal-organic frameworks (MOFs). By leveraging a large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key details from textual inputs and delivers appropriate responses, thus eliminating the necessity for rigid structured queries. The system is comprised of three core components (i.e. an agent, a toolkit, and an evaluator) and it forms a robust pipeline that manages a variety of tasks, including data retrieval, property prediction, and structure generations. The study further explores the merits and constraints of using large language models (LLMs) AI system in material sciences using and showcases its transformative potential for future advancements.
http://arxiv.org/pdf/2308.01423
Yeonghun Kang, Jihan Kim
cs.CL, cs.AI, cs.LG, physics.chem-ph
null
null
cs.CL
20230801
20230825
[ { "id": "2302.13971" }, { "id": "2306.11296" }, { "id": "2303.17580" }, { "id": "2305.18365" }, { "id": "2305.10601" }, { "id": "1810.04805" }, { "id": "2211.09085" }, { "id": "2304.05376" }, { "id": "2212.05238" }, { "id": "2108.07258" }, { "id": "2110.06197" }, { "id": "2306.06283" }, { "id": "2008.02275" }, { "id": "2303.12712" }, { "id": "2210.03629" }, { "id": "2205.00445" }, { "id": "2009.03300" } ]
2308.00245
46
[Table excerpt, continued: File paths: drivers/media/dvb-frontends/bcm3510.c, drivers/net/wireless/cisco/airo.c, drivers/net/ethernet/intel/e1000e/netdev.c, drivers/net/wireless/admtek/adm8211.c, drivers/net/usb/lan78xx.c, drivers/net/ethernet/chelsio/cxgb/my3126.c, arch/x86/kernel/quirks.c, drivers/ata/pata_legacy.c, drivers/block/paride/pt.c. Variables: tmp, sr, setup.bRequestType, reg_value, ver.demod_version, cap_rid.softCap, phy_data, reg, buf, val, capid0, &tp, buf. Lines: 504, 664, 637, 169, 666, 6936, 6580, 1814, 873, 193, 562, 564, 368.]
2308.00245#46
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Static analysis is a widely used technique in software engineering for identifying and mitigating bugs. However, a significant hurdle lies in achieving a delicate balance between precision and scalability. Large Language Models (LLMs) offer a promising alternative, as recent advances demonstrate remarkable capabilities in comprehending, generating, and even debugging code. Yet, the logic of bugs can be complex and require sophisticated reasoning and a large analysis scope spanning multiple functions. Therefore, at this point, LLMs are better used in an assistive role to complement static analysis. In this paper, we take a deep dive into the open space of LLM-assisted static analysis, using use-before-initialization (UBI) bugs as a case study. To this end, we develop LLift, a fully automated framework that interfaces with both a static analysis tool and an LLM. By carefully designing the framework and the prompts, we are able to overcome a number of challenges, including bug-specific modeling, the large problem scope, the non-deterministic nature of LLMs, etc. Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs produced by static analysis, LLift demonstrates a potent capability, showcasing a reasonable precision (50%) and appearing to have no missing bugs. It even identified 13 previously unknown UBI bugs in the Linux kernel. This research paves the way for new opportunities and methodologies in using LLMs for bug discovery in extensive, real-world datasets.
http://arxiv.org/pdf/2308.00245
Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian
cs.SE, cs.AI
null
null
cs.SE
20230801
20231115
[ { "id": "2305.10601" }, { "id": "2107.03374" }, { "id": "2210.11610" }, { "id": "2305.16291" }, { "id": "2305.16151" }, { "id": "2303.18223" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "2203.02155" }, { "id": "2304.11938" }, { "id": "2304.03262" }, { "id": "2304.10513" }, { "id": "2201.11903" }, { "id": "2305.12138" }, { "id": "2305.12865" }, { "id": "2303.08774" }, { "id": "2306.01987" }, { "id": "2304.06815" } ]
2308.00352
46
Bill Yuchen Lin, Yicheng Fu, Karina Yang, Prithviraj Ammanabrolu, Faeze Brahman, Shiyu Huang, Chandra Bhagavatula, Yejin Choi, and Xiang Ren. Swiftsage: A generative agent with fast and slow thinking for complex interactive tasks. arXiv preprint, 2023. Ruibo Liu, Ruixin Yang, Chenyan Jia, Ge Zhang, Denny Zhou, Andrew M Dai, Diyi Yang, and Soroush Vosoughi. Training socially aligned language models in simulated human society. arXiv preprint arXiv:2305.16960, 2023. Ziyang Luo, Can Xu, Pu Zhao, Qingfeng Sun, Xiubo Geng, Wenxiang Hu, Chongyang Tao, Jing Ma, Qingwei Lin, and Daxin Jiang. Wizardcoder: Empowering code large language models with evol-instruct. arXiv preprint arXiv:2306.08568, 2023. Potsawee Manakul, Adian Liusie, and Mark JF Gales. Selfcheckgpt: Zero-resource black-box hallucination detection for generative large language models. arXiv preprint arXiv:2303.08896, 2023.
2308.00352#46
MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework
Remarkable progress has been made on automated problem solving through societies of agents based on large language models (LLMs). Existing LLM-based multi-agent systems can already solve simple dialogue tasks. Solutions to more complex tasks, however, are complicated through logic inconsistencies due to cascading hallucinations caused by naively chaining LLMs. Here we introduce MetaGPT, an innovative meta-programming framework incorporating efficient human workflows into LLM-based multi-agent collaborations. MetaGPT encodes Standardized Operating Procedures (SOPs) into prompt sequences for more streamlined workflows, thus allowing agents with human-like domain expertise to verify intermediate results and reduce errors. MetaGPT utilizes an assembly line paradigm to assign diverse roles to various agents, efficiently breaking down complex tasks into subtasks involving many agents working together. On collaborative software engineering benchmarks, MetaGPT generates more coherent solutions than previous chat-based multi-agent systems. Our project can be found at https://github.com/geekan/MetaGPT
http://arxiv.org/pdf/2308.00352
Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, Jürgen Schmidhuber
cs.AI, cs.MA
null
null
cs.AI
20230801
20231106
[ { "id": "2308.12950" }, { "id": "2305.17066" }, { "id": "1511.09249" }, { "id": "2308.11432" }, { "id": "2306.08568" }, { "id": "2310.02304" }, { "id": "2303.08896" }, { "id": "2204.05999" }, { "id": "2309.16797" }, { "id": "2002.08155" }, { "id": "2305.16960" } ]
2308.00436
46
region inside the curve. (Step 0) Let z = x + yi be a complex number, where x and y are real numbers. ... (Step 5) Completing the square, we obtain (x − 3/2)² + y² = 9/4. What specific action does the step "Completing the square, we obtain (x − 3/2)² + y² = 9/4." take? Please give a brief answer using a single sentence and do not copy the steps. The underlined parts are the question and reasoning steps before the current one, including the current one. The output of the LLM is: The step completes the square to rewrite the equation in standard form of a circle. The whole sentence is saved and forms the most important part of the input in the regeneration stage. Information Collection: To get sentences in the question and previous steps in the solution that are directly related to the current step, the input to the LLM is: This is a math question: Question: Let S be the set of complex numbers z such that the real part of 1/z is equal to 1/6. This set forms a curve. Find the area of the region inside the curve. The following is information extracted from the question: Information 0: Let S be the set of complex numbers z such that the real part of 1/z is equal to 1/6. Information 1: This set forms a curve. Information 2: Find the area of the region inside the curve.
2308.00436#46
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
The recent progress in large language models (LLMs), especially the invention of chain-of-thought prompting, has made it possible to automatically answer questions by stepwise reasoning. However, when faced with more complicated problems that require non-linear thinking, even the strongest LLMs make mistakes. To address this, we explore whether LLMs are able to recognize errors in their own step-by-step reasoning, without resorting to external resources. To this end, we propose SelfCheck, a general-purpose zero-shot verification schema for recognizing such errors. We then use the results of these checks to improve question-answering performance by conducting weighted voting on multiple solutions to the question. We test SelfCheck on three datasets (GSM8K, MathQA, and MATH) and find that it successfully recognizes errors and, in turn, increases final answer accuracies.
http://arxiv.org/pdf/2308.00436
Ning Miao, Yee Whye Teh, Tom Rainforth
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230801
20231005
[ { "id": "2206.02336" }, { "id": "2302.13971" }, { "id": "2110.14168" }, { "id": "2305.20050" }, { "id": "2204.02311" }, { "id": "2301.13379" }, { "id": "2306.03872" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.08774" }, { "id": "2212.09561" } ]
2308.00675
46
[37] Ruibo Liu, Jason Wei, Shixiang Shane Gu, Te-Yen Wu, Soroush Vosoughi, Claire Cui, Denny Zhou, and Andrew M Dai. Mind’s eye: Grounded language model reasoning through simulation. arXiv preprint arXiv:2210.05359, 2022. [38] Shilong Liu, Zhaoyang Zeng, Tianhe Ren, Feng Li, Hao Zhang, Jie Yang, Chunyuan Li, Jianwei Yang, Hang Su, Jun Zhu, et al. Grounding dino: Marrying dino with grounded pre-training for open-set object detection. arXiv preprint arXiv:2303.05499, 2023. [39] Pan Lu, Swaroop Mishra, Tony Xia, Liang Qiu, Kai-Wei Chang, Song-Chun Zhu, Oyvind Tafjord, Peter Clark, and Ashwin Kalyan. Learn to explain: Multimodal reasoning via thought chains for science question answering. In The 36th Conference on Neural Information Processing Systems (NeurIPS), 2022.
2308.00675#46
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Today, large language models (LLMs) are taught to use new tools by providing a few demonstrations of the tool's usage. Unfortunately, demonstrations are hard to acquire, and can result in undesirable biased usage if the wrong demonstration is chosen. Even in the rare scenario that demonstrations are readily available, there is no principled selection protocol to determine how many and which ones to provide. As tasks grow more complex, the selection search grows combinatorially and invariably becomes intractable. Our work provides an alternative to demonstrations: tool documentation. We advocate the use of tool documentation, descriptions for the individual tool usage, over demonstrations. We substantiate our claim through three main empirical findings on 6 tasks across both vision and language modalities. First, on existing benchmarks, zero-shot prompts with only tool documentation are sufficient for eliciting proper tool usage, achieving performance on par with few-shot prompts. Second, on a newly collected realistic tool-use dataset with hundreds of available tool APIs, we show that tool documentation is significantly more valuable than demonstrations, with zero-shot documentation significantly outperforming few-shot without documentation. Third, we highlight the benefits of tool documentations by tackling image generation and video tracking using just-released unseen state-of-the-art models as tools. Finally, we highlight the possibility of using tool documentation to automatically enable new applications: by using nothing more than the documentation of GroundingDino, Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the just-released Grounded-SAM and Track Anything models.
http://arxiv.org/pdf/2308.00675
Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister
cs.CL, cs.AI, cs.CV, cs.LG
null
null
cs.CL
20230801
20230801
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2204.10878" }, { "id": "1910.08210" }, { "id": "2107.07653" }, { "id": "2201.11903" }, { "id": "2305.17126" }, { "id": "1704.07535" }, { "id": "2205.01068" }, { "id": "2203.05115" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2305.04091" }, { "id": "2303.05499" }, { "id": "2107.07566" }, { "id": "2110.14168" }, { "id": "2210.03350" }, { "id": "2303.11381" }, { "id": "2101.06804" }, { "id": "2304.08354" }, { "id": "2212.14024" }, { "id": "2305.18752" }, { "id": "2211.10435" }, { "id": "2303.04671" }, { "id": "2210.12810" }, { "id": "1808.09588" }, { "id": "2304.11062" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2210.02406" }, { "id": "2212.10560" }, { "id": "2303.04129" }, { "id": "1704.01696" }, { "id": "2302.00923" }, { "id": "2211.12588" }, { "id": "1908.03557" }, { "id": "2210.05359" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2201.08239" }, { "id": "2303.16199" }, { "id": "2304.09842" }, { "id": "2204.01691" }, { "id": "2305.01625" }, { "id": "2303.12712" }, { "id": "2207.05608" }, { "id": "2303.03846" }, { "id": "2211.11559" }, { "id": "2207.01206" }, { "id": "2303.09014" }, { "id": "2205.12255" }, { "id": "2208.03188" } ]
2308.01423
46
43 McKinney, W. pandas: a foundational Python library for data analysis and statistics. Python for high performance and scientific computing 14, 1-9 (2011). Glasby, L. T. et al. DigiMOF: A Database of Metal–Organic Framework Synthesis Information Generated via Text Mining. Chemistry of Materials (2023). Altintas, C., Altundal, O. F., Keskin, S. & Yildirim, R. Machine learning meets with metal organic frameworks for gas storage and separation. Journal of Chemical Information and Modeling 61, 2131-2146 (2021). Kang, Y., Park, H., Smit, B. & Kim, J. A multi-modal pre-training transformer for universal transfer learning in metal–organic frameworks. Nature Machine Intelligence 5, 309-318 (2023).
2308.01423#46
ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks
ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to predict and generate metal-organic frameworks (MOFs). By leveraging a large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key details from textual inputs and delivers appropriate responses, thus eliminating the necessity for rigid structured queries. The system is comprised of three core components (i.e. an agent, a toolkit, and an evaluator) and it forms a robust pipeline that manages a variety of tasks, including data retrieval, property prediction, and structure generations. The study further explores the merits and constraints of using large language models (LLMs) AI system in material sciences using and showcases its transformative potential for future advancements.
http://arxiv.org/pdf/2308.01423
Yeonghun Kang, Jihan Kim
cs.CL, cs.AI, cs.LG, physics.chem-ph
null
null
cs.CL
20230801
20230825
[ { "id": "2302.13971" }, { "id": "2306.11296" }, { "id": "2303.17580" }, { "id": "2305.18365" }, { "id": "2305.10601" }, { "id": "1810.04805" }, { "id": "2211.09085" }, { "id": "2304.05376" }, { "id": "2212.05238" }, { "id": "2108.07258" }, { "id": "2110.06197" }, { "id": "2306.06283" }, { "id": "2008.02275" }, { "id": "2303.12712" }, { "id": "2210.03629" }, { "id": "2205.00445" }, { "id": "2009.03300" } ]
2308.00245
47
Imprecise and Failed Cases. Despite the effectiveness of LLift, there are instances where it does not yield precise results, resulting in 13 false positives by mistakenly classifying must_init cases as may_init. Upon a careful examination of these cases, we attribute the imprecision to a variety of factors, which we discuss in detail in §6.7. Briefly, we give a breakdown of them here: Incomplete constraint extraction (4 cases), Information gaps in UBITect (5 cases), Variable reuse (1 case), Indirect call (1 case), and Additional constraints (1 case). Additionally, there is one false positive caused by inconsistent output (i.e., two false positives in three runs). Four cases exceed the maximum context length while exploring deeper functions in the progressive prompt. Table 4: Performance evaluation of bug detection tool with progressive addition of design components: Post-Constraint Guided Path Analysis (PCA), Progressive Prompt (PP), Self-Validation (SV), and Task Decomposition (TD). (C) indicates the number of Consistent cases.
2308.00245#47
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Static analysis is a widely used technique in software engineering for identifying and mitigating bugs. However, a significant hurdle lies in achieving a delicate balance between precision and scalability. Large Language Models (LLMs) offer a promising alternative, as recent advances demonstrate remarkable capabilities in comprehending, generating, and even debugging code. Yet, the logic of bugs can be complex and require sophisticated reasoning and a large analysis scope spanning multiple functions. Therefore, at this point, LLMs are better used in an assistive role to complement static analysis. In this paper, we take a deep dive into the open space of LLM-assisted static analysis, using use-before-initialization (UBI) bugs as a case study. To this end, we develop LLift, a fully automated framework that interfaces with both a static analysis tool and an LLM. By carefully designing the framework and the prompts, we are able to overcome a number of challenges, including bug-specific modeling, the large problem scope, the non-deterministic nature of LLMs, etc. Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs produced by static analysis, LLift demonstrates a potent capability, showcasing a reasonable precision (50%) and appearing to have no missing bugs. It even identified 13 previously unknown UBI bugs in the Linux kernel. This research paves the way for new opportunities and methodologies in using LLMs for bug discovery in extensive, real-world datasets.
http://arxiv.org/pdf/2308.00245
Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian
cs.SE, cs.AI
null
null
cs.SE
20230801
20231115
[ { "id": "2305.10601" }, { "id": "2107.03374" }, { "id": "2210.11610" }, { "id": "2305.16291" }, { "id": "2305.16151" }, { "id": "2303.18223" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "2203.02155" }, { "id": "2304.11938" }, { "id": "2304.03262" }, { "id": "2304.10513" }, { "id": "2201.11903" }, { "id": "2305.12138" }, { "id": "2305.12865" }, { "id": "2303.08774" }, { "id": "2306.01987" }, { "id": "2304.06815" } ]
2308.00352
47
Agile Manifesto. Manifesto for agile software development. Snowbird, UT, 2001. John McCarthy. History of lisp. In History of programming languages, pp. 173–185. 1978. Ansong Ni, Srini Iyer, Dragomir Radev, Veselin Stoyanov, Wen-tau Yih, Sida Wang, and Xi Victoria Lin. Lever: Learning to verify language-to-code generation with execution. In ICML, 2023. Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, and Caiming Xiong. Codegen: An open large language model for code with multi-turn program synthesis, 2023. OpenAI. Gpt-4 technical report, 2023. Joon Sung Park, Joseph C O’Brien, Carrie J Cai, Meredith Ringel Morris, Percy Liang, and Michael S Bernstein. Generative agents: Interactive simulacra of human behavior. arXiv preprint, 2023. Chen Qian, Xin Cong, Cheng Yang, Weize Chen, Yusheng Su, Juyuan Xu, Zhiyuan Liu, and Maosong Sun. Communicative agents for software development, 2023.
2308.00352#47
MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework
Remarkable progress has been made on automated problem solving through societies of agents based on large language models (LLMs). Existing LLM-based multi-agent systems can already solve simple dialogue tasks. Solutions to more complex tasks, however, are complicated through logic inconsistencies due to cascading hallucinations caused by naively chaining LLMs. Here we introduce MetaGPT, an innovative meta-programming framework incorporating efficient human workflows into LLM-based multi-agent collaborations. MetaGPT encodes Standardized Operating Procedures (SOPs) into prompt sequences for more streamlined workflows, thus allowing agents with human-like domain expertise to verify intermediate results and reduce errors. MetaGPT utilizes an assembly line paradigm to assign diverse roles to various agents, efficiently breaking down complex tasks into subtasks involving many agents working together. On collaborative software engineering benchmarks, MetaGPT generates more coherent solutions than previous chat-based multi-agent systems. Our project can be found at https://github.com/geekan/MetaGPT
http://arxiv.org/pdf/2308.00352
Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, Jürgen Schmidhuber
cs.AI, cs.MA
null
null
cs.AI
20230801
20231106
[ { "id": "2308.12950" }, { "id": "2305.17066" }, { "id": "1511.09249" }, { "id": "2308.11432" }, { "id": "2306.08568" }, { "id": "2310.02304" }, { "id": "2303.08896" }, { "id": "2204.05999" }, { "id": "2309.16797" }, { "id": "2002.08155" }, { "id": "2305.16960" } ]
2308.00436
47
The following is the first few steps in a solution to the problem: Step 0: Let z = x + yi be a complex number, where x and y are real numbers. ... Step 4: Cross-multiplying and rearranging, we get the equation 6x = x² + y². Which previous steps or information does the next step "Completing the square, we obtain (x − 3/2)² + y² = 9/4." directly follow from? The underlined information (contents after Information X:) is simply a list of sentences in the question. The underlined steps are reasoning steps before (not including) the current one in the solution. The output from the LLM is: The next step "Completing the square, we obtain (x − 3/2)² + y² = 9/4." directly follows from Step 4. Then we use regular expression to extract the information and step id from the LLM’s output. Some steps may only rely on information from the question while others may only use results from previous steps. Step regeneration: In this stage, we instruct the LLM to achieve the extracted target with only collected information with the following prompt: We are in a process of solving a math problem. Variables are defined as: 1. z is defined as a complex number of the form x + yi, where x and y are real numbers.
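The chunk above mentions that a regular expression is used to pull the referenced step and information ids out of the LLM's free-form answer. A minimal sketch of what such an extraction could look like is given below; the function name and the exact patterns are illustrative assumptions, not the paper's implementation.

import re

def extract_references(llm_output: str):
    """Pull 'Step N' and 'Information N' ids out of an LLM answer.

    Hypothetical helper: the real SelfCheck implementation may use
    different patterns or post-processing.
    """
    step_ids = [int(m) for m in re.findall(r"Step\s+(\d+)", llm_output)]
    info_ids = [int(m) for m in re.findall(r"Information\s+(\d+)", llm_output)]
    return sorted(set(step_ids)), sorted(set(info_ids))

# Example: parsing the answer quoted in the chunk above.
answer = 'The next step "Completing the square, ..." directly follows from Step 4.'
print(extract_references(answer))  # ([4], [])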
2308.00436#47
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
The recent progress in large language models (LLMs), especially the invention of chain-of-thought prompting, has made it possible to automatically answer questions by stepwise reasoning. However, when faced with more complicated problems that require non-linear thinking, even the strongest LLMs make mistakes. To address this, we explore whether LLMs are able to recognize errors in their own step-by-step reasoning, without resorting to external resources. To this end, we propose SelfCheck, a general-purpose zero-shot verification schema for recognizing such errors. We then use the results of these checks to improve question-answering performance by conducting weighted voting on multiple solutions to the question. We test SelfCheck on three datasets (GSM8K, MathQA, and MATH) and find that it successfully recognizes errors and, in turn, increases final answer accuracies.
http://arxiv.org/pdf/2308.00436
Ning Miao, Yee Whye Teh, Tom Rainforth
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230801
20231005
[ { "id": "2206.02336" }, { "id": "2302.13971" }, { "id": "2110.14168" }, { "id": "2305.20050" }, { "id": "2204.02311" }, { "id": "2301.13379" }, { "id": "2306.03872" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.08774" }, { "id": "2212.09561" } ]
2308.00675
47
[40] Pan Lu, Baolin Peng, Hao Cheng, Michel Galley, Kai-Wei Chang, Ying Nian Wu, Song-Chun Zhu, and Jianfeng Gao. Chameleon: Plug-and-play compositional reasoning with large language models. arXiv preprint arXiv:2304.09842, 2023. [41] Pan Lu, Liang Qiu, Kai-Wei Chang, Ying Nian Wu, Song-Chun Zhu, Tanmay Rajpurohit, Peter Clark, and Ashwin Kalyan. Dynamic prompt learning via policy gradient for semi-structured mathematical reasoning. In International Conference on Learning Representations (ICLR), 2023. [42] Yujie Lu, Pan Lu, Zhiyu Chen, Wanrong Zhu, Xin Eric Wang, and William Yang Wang. Multimodal procedural planning via dual text-image prompting. 2023. [43] Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Long Ouyang, Christina Kim, Christopher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, et al. Webgpt: Browser-assisted question-answering with human feedback. arXiv preprint arXiv:2112.09332, 2021.
2308.00675#47
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Today, large language models (LLMs) are taught to use new tools by providing a few demonstrations of the tool's usage. Unfortunately, demonstrations are hard to acquire, and can result in undesirable biased usage if the wrong demonstration is chosen. Even in the rare scenario that demonstrations are readily available, there is no principled selection protocol to determine how many and which ones to provide. As tasks grow more complex, the selection search grows combinatorially and invariably becomes intractable. Our work provides an alternative to demonstrations: tool documentation. We advocate the use of tool documentation, descriptions for the individual tool usage, over demonstrations. We substantiate our claim through three main empirical findings on 6 tasks across both vision and language modalities. First, on existing benchmarks, zero-shot prompts with only tool documentation are sufficient for eliciting proper tool usage, achieving performance on par with few-shot prompts. Second, on a newly collected realistic tool-use dataset with hundreds of available tool APIs, we show that tool documentation is significantly more valuable than demonstrations, with zero-shot documentation significantly outperforming few-shot without documentation. Third, we highlight the benefits of tool documentations by tackling image generation and video tracking using just-released unseen state-of-the-art models as tools. Finally, we highlight the possibility of using tool documentation to automatically enable new applications: by using nothing more than the documentation of GroundingDino, Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the just-released Grounded-SAM and Track Anything models.
http://arxiv.org/pdf/2308.00675
Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister
cs.CL, cs.AI, cs.CV, cs.LG
null
null
cs.CL
20230801
20230801
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2204.10878" }, { "id": "1910.08210" }, { "id": "2107.07653" }, { "id": "2201.11903" }, { "id": "2305.17126" }, { "id": "1704.07535" }, { "id": "2205.01068" }, { "id": "2203.05115" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2305.04091" }, { "id": "2303.05499" }, { "id": "2107.07566" }, { "id": "2110.14168" }, { "id": "2210.03350" }, { "id": "2303.11381" }, { "id": "2101.06804" }, { "id": "2304.08354" }, { "id": "2212.14024" }, { "id": "2305.18752" }, { "id": "2211.10435" }, { "id": "2303.04671" }, { "id": "2210.12810" }, { "id": "1808.09588" }, { "id": "2304.11062" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2210.02406" }, { "id": "2212.10560" }, { "id": "2303.04129" }, { "id": "1704.01696" }, { "id": "2302.00923" }, { "id": "2211.12588" }, { "id": "1908.03557" }, { "id": "2210.05359" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2201.08239" }, { "id": "2303.16199" }, { "id": "2304.09842" }, { "id": "2204.01691" }, { "id": "2305.01625" }, { "id": "2303.12712" }, { "id": "2207.05608" }, { "id": "2303.03846" }, { "id": "2211.11559" }, { "id": "2207.01206" }, { "id": "2303.09014" }, { "id": "2205.12255" }, { "id": "2208.03188" } ]
2308.01423
47
Park, H., Kang, Y. & Kim, J. PMTransformer: Universal Transfer Learning and Cross- material Few-shot Learning in Porous Materials. (2023). Kim, B., Lee, S. & Kim, J. Inverse design of porous materials using artificial neural networks. Science advances 6, eaax9324 (2020). Long, T. et al. Constrained crystals deep convolutional generative adversarial network for the inverse design of crystal structures. npj Computational Materials 7, 66 (2021). Xie, T., Fu, X., Ganea, O.-E., Barzilay, R. & Jaakkola, T. Crystal diffusion variational autoencoder for periodic material generation. arXiv preprint arXiv:2110.06197 (2021). Lee, S. et al. Computational screening of trillions of metal–organic frameworks for high- performance methane storage. ACS Applied Materials & Interfaces 13, 23647-23654 (2021). Lim, Y., Park, J., Lee, S. & Kim, J. Finely tuned inverse design of metal–organic frameworks with user-desired Xe/Kr selectivity. Journal of Materials Chemistry A 9, 21175-21183 (2021).
2308.01423#47
ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks
ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to predict and generate metal-organic frameworks (MOFs). By leveraging a large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key details from textual inputs and delivers appropriate responses, thus eliminating the necessity for rigid structured queries. The system is comprised of three core components (i.e. an agent, a toolkit, and an evaluator) and it forms a robust pipeline that manages a variety of tasks, including data retrieval, property prediction, and structure generations. The study further explores the merits and constraints of using large language models (LLMs) AI system in material sciences using and showcases its transformative potential for future advancements.
http://arxiv.org/pdf/2308.01423
Yeonghun Kang, Jihan Kim
cs.CL, cs.AI, cs.LG, physics.chem-ph
null
null
cs.CL
20230801
20230825
[ { "id": "2302.13971" }, { "id": "2306.11296" }, { "id": "2303.17580" }, { "id": "2305.18365" }, { "id": "2305.10601" }, { "id": "1810.04805" }, { "id": "2211.09085" }, { "id": "2304.05376" }, { "id": "2212.05238" }, { "id": "2108.07258" }, { "id": "2110.06197" }, { "id": "2306.06283" }, { "id": "2008.02275" }, { "id": "2303.12712" }, { "id": "2210.03629" }, { "id": "2205.00445" }, { "id": "2009.03300" } ]
2308.00245
48
Combination: TN(C), TP(C), Precision, Recall, Accuracy, F1 Score. Simple Prompt: 12(9), 2(1), 0.12, 0.15, 0.35, 0.13. PCA: 13(9), 5(1), 0.26, 0.38, 0.45, 0.31. PCA+PP: 5(3), 6(1), 0.21, 0.46, 0.28, 0.29. PCA+PP+SV: 5(2), 11(8), 0.33, 0.85, 0.40, 0.48. PCA+PP+TD: 22(14), 6(4), 0.55, 0.46, 0.70, 0.50. PCA+PP+SV+TD: 25(17), 13(12), 0.87, 1.00, 0.95, 0.93. Oracle: 27(27), 13(13), -, -, -. Takeaway 1. LLift can effectively summarize initializer behavior and discover new bugs with high precision (50%). Takeaway 2. LLift has proven effective in identifying UBI bugs, consistently detecting all known instances. 6.3 RQ2: Recall Estimate. Conceptually, the core optimization (post-constraint guided path analysis) of LLift is sound, and we also prompt a series of rules to let LLMs tend to respond "may_init" when uncertain. We expect LLift would not reject true bugs, i.e., it should achieve a high recall.
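As a sanity check on the reconstructed table, the precision, recall, accuracy, and F1 values follow directly from the TN/TP counts, given the 27 must_init (negative) and 13 buggy (positive) cases implied by the Oracle row. A small illustrative script (not from the paper) that recomputes them:

# Recompute the metrics in Table 4 from the TN/TP counts.
# Assumes 27 negative (must_init) and 13 positive (buggy) cases,
# as implied by the Oracle row; illustrative check, not paper code.
NEG, POS = 27, 13

rows = {
    "Simple Prompt": (12, 2),
    "PCA": (13, 5),
    "PCA+PP": (5, 6),
    "PCA+PP+SV": (5, 11),
    "PCA+PP+TD": (22, 6),
    "PCA+PP+SV+TD": (25, 13),
}

for name, (tn, tp) in rows.items():
    fp, fn = NEG - tn, POS - tp
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / (NEG + POS)
    f1 = 2 * precision * recall / (precision + recall)
    print(f"{name}: P={precision:.2f} R={recall:.2f} "
          f"Acc={accuracy:.2f} F1={f1:.2f}")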
2308.00245#48
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Static analysis is a widely used technique in software engineering for identifying and mitigating bugs. However, a significant hurdle lies in achieving a delicate balance between precision and scalability. Large Language Models (LLMs) offer a promising alternative, as recent advances demonstrate remarkable capabilities in comprehending, generating, and even debugging code. Yet, the logic of bugs can be complex and require sophisticated reasoning and a large analysis scope spanning multiple functions. Therefore, at this point, LLMs are better used in an assistive role to complement static analysis. In this paper, we take a deep dive into the open space of LLM-assisted static analysis, using use-before-initialization (UBI) bugs as a case study. To this end, we develop LLift, a fully automated framework that interfaces with both a static analysis tool and an LLM. By carefully designing the framework and the prompts, we are able to overcome a number of challenges, including bug-specific modeling, the large problem scope, the non-deterministic nature of LLMs, etc. Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs produced by static analysis, LLift demonstrates a potent capability, showcasing a reasonable precision (50%) and appearing to have no missing bugs. It even identified 13 previously unknown UBI bugs in the Linux kernel. This research paves the way for new opportunities and methodologies in using LLMs for bug discovery in extensive, real-world datasets.
http://arxiv.org/pdf/2308.00245
Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian
cs.SE, cs.AI
null
null
cs.SE
20230801
20231115
[ { "id": "2305.10601" }, { "id": "2107.03374" }, { "id": "2210.11610" }, { "id": "2305.16291" }, { "id": "2305.16151" }, { "id": "2303.18223" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "2203.02155" }, { "id": "2304.11938" }, { "id": "2304.03262" }, { "id": "2304.10513" }, { "id": "2201.11903" }, { "id": "2305.12138" }, { "id": "2305.12865" }, { "id": "2303.08774" }, { "id": "2306.01987" }, { "id": "2304.06815" } ]
2308.00352
48
Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Tal Remez, Jérémy Rapin, et al. Code llama: Open foundation models for code. arXiv preprint arXiv:2308.12950, 2023. Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. Toolformer: Language models can teach themselves to use tools. arXiv preprint, 2023. J. Schmidhuber. A self-referential weight matrix. In Proceedings of the International Conference on Artificial Neural Networks, Amsterdam, pp. 446–451. Springer, 1993a. J. Schmidhuber. Gödel machines: self-referential universal problem solvers making provably optimal self-improvements. Technical Report IDSIA-19-03, arXiv:cs.LO/0309048 v3, IDSIA, Manno-Lugano, Switzerland, December 2003.
2308.00352#48
MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework
Remarkable progress has been made on automated problem solving through societies of agents based on large language models (LLMs). Existing LLM-based multi-agent systems can already solve simple dialogue tasks. Solutions to more complex tasks, however, are complicated through logic inconsistencies due to cascading hallucinations caused by naively chaining LLMs. Here we introduce MetaGPT, an innovative meta-programming framework incorporating efficient human workflows into LLM-based multi-agent collaborations. MetaGPT encodes Standardized Operating Procedures (SOPs) into prompt sequences for more streamlined workflows, thus allowing agents with human-like domain expertise to verify intermediate results and reduce errors. MetaGPT utilizes an assembly line paradigm to assign diverse roles to various agents, efficiently breaking down complex tasks into subtasks involving many agents working together. On collaborative software engineering benchmarks, MetaGPT generates more coherent solutions than previous chat-based multi-agent systems. Our project can be found at https://github.com/geekan/MetaGPT
http://arxiv.org/pdf/2308.00352
Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, Jürgen Schmidhuber
cs.AI, cs.MA
null
null
cs.AI
20230801
20231106
[ { "id": "2308.12950" }, { "id": "2305.17066" }, { "id": "1511.09249" }, { "id": "2308.11432" }, { "id": "2306.08568" }, { "id": "2310.02304" }, { "id": "2303.08896" }, { "id": "2204.05999" }, { "id": "2309.16797" }, { "id": "2002.08155" }, { "id": "2305.16960" } ]
2308.00436
48
Variables are defined as: 1. z is defined as a complex number of the form x + yi, where x and y are real numbers. The following are some previous steps: Step 0: Cross-multiplying and rearranging, we get the equation 6x = x² + y². The target for next step is: The step completes the square to rewrite the equation in standard form of a circle. Please try to achieve the target with the information from previous steps. The underlined parts are the type of question, variable definition (if any), collected information from the question (if any), collected previous steps (if any) and the extracted target. Please notice that since the current step does not directly rely on any information from the question, we removed the corresponding block from the input. The output from the LLM is:
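The regeneration-stage input quoted above is assembled from the pieces collected earlier (variable definitions, related question information, related previous steps, and the extracted target). A rough sketch of how such a prompt could be put together is shown below; the function name and template wording are illustrative assumptions rather than the paper's exact implementation.

def build_regeneration_prompt(problem_type, definitions, infos, steps, target):
    """Assemble a regeneration-stage prompt from collected pieces.

    Hypothetical helper mirroring the structure quoted in the chunk above:
    blocks with no content (e.g., no related question information) are omitted.
    """
    parts = [f"We are in a process of solving a {problem_type} problem."]
    if definitions:
        parts.append("Variables are defined as:\n" + "\n".join(definitions))
    if infos:
        parts.append("The following is information from the question:\n" + "\n".join(infos))
    if steps:
        parts.append("The following are some previous steps:\n" + "\n".join(steps))
    parts.append(f"The target for next step is: {target}")
    parts.append("Please try to achieve the target with the information from previous steps.")
    return "\n\n".join(parts)

prompt = build_regeneration_prompt(
    "math",
    ["1. z is defined as a complex number of the form x + yi, where x and y are real numbers."],
    [],  # the current step does not rely on question information
    ["Step 0: Cross-multiplying and rearranging, we get the equation 6x = x^2 + y^2."],
    "The step completes the square to rewrite the equation in standard form of a circle.",
)
print(prompt)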
2308.00436#48
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
The recent progress in large language models (LLMs), especially the invention of chain-of-thought prompting, has made it possible to automatically answer questions by stepwise reasoning. However, when faced with more complicated problems that require non-linear thinking, even the strongest LLMs make mistakes. To address this, we explore whether LLMs are able to recognize errors in their own step-by-step reasoning, without resorting to external resources. To this end, we propose SelfCheck, a general-purpose zero-shot verification schema for recognizing such errors. We then use the results of these checks to improve question-answering performance by conducting weighted voting on multiple solutions to the question. We test SelfCheck on three datasets (GSM8K, MathQA, and MATH) and find that it successfully recognizes errors and, in turn, increases final answer accuracies.
http://arxiv.org/pdf/2308.00436
Ning Miao, Yee Whye Teh, Tom Rainforth
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230801
20231005
[ { "id": "2206.02336" }, { "id": "2302.13971" }, { "id": "2110.14168" }, { "id": "2305.20050" }, { "id": "2204.02311" }, { "id": "2301.13379" }, { "id": "2306.03872" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.08774" }, { "id": "2212.09561" } ]
2308.00675
48
[44] Andrew Y Ng, Stuart Russell, et al. Algorithms for inverse reinforcement learning. In Icml, volume 1, page 2, 2000. [45] OpenAI. Gpt-4 technical report. 2023. [46] Bhargavi Paranjape, Scott Lundberg, Sameer Singh, Hannaneh Hajishirzi, Luke Zettlemoyer, and Marco Tulio Ribeiro. Art: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:2303.09014, 2023. [47] Aaron Parisi, Yao Zhao, and Noah Fiedel. Talm: Tool augmented language models. arXiv preprint arXiv:2205.12255, 2022. [48] Shishir G Patil, Tianjun Zhang, Xin Wang, and Joseph E Gonzalez. Gorilla: Large language model connected with massive apis. arXiv preprint arXiv:2305.15334, 2023. [49] Dean A Pomerleau. Alvinn: An autonomous land vehicle in a neural network. Advances in neural information processing systems, 1, 1988.
2308.00675#48
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Today, large language models (LLMs) are taught to use new tools by providing a few demonstrations of the tool's usage. Unfortunately, demonstrations are hard to acquire, and can result in undesirable biased usage if the wrong demonstration is chosen. Even in the rare scenario that demonstrations are readily available, there is no principled selection protocol to determine how many and which ones to provide. As tasks grow more complex, the selection search grows combinatorially and invariably becomes intractable. Our work provides an alternative to demonstrations: tool documentation. We advocate the use of tool documentation, descriptions for the individual tool usage, over demonstrations. We substantiate our claim through three main empirical findings on 6 tasks across both vision and language modalities. First, on existing benchmarks, zero-shot prompts with only tool documentation are sufficient for eliciting proper tool usage, achieving performance on par with few-shot prompts. Second, on a newly collected realistic tool-use dataset with hundreds of available tool APIs, we show that tool documentation is significantly more valuable than demonstrations, with zero-shot documentation significantly outperforming few-shot without documentation. Third, we highlight the benefits of tool documentations by tackling image generation and video tracking using just-released unseen state-of-the-art models as tools. Finally, we highlight the possibility of using tool documentation to automatically enable new applications: by using nothing more than the documentation of GroundingDino, Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the just-released Grounded-SAM and Track Anything models.
http://arxiv.org/pdf/2308.00675
Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister
cs.CL, cs.AI, cs.CV, cs.LG
null
null
cs.CL
20230801
20230801
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2204.10878" }, { "id": "1910.08210" }, { "id": "2107.07653" }, { "id": "2201.11903" }, { "id": "2305.17126" }, { "id": "1704.07535" }, { "id": "2205.01068" }, { "id": "2203.05115" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2305.04091" }, { "id": "2303.05499" }, { "id": "2107.07566" }, { "id": "2110.14168" }, { "id": "2210.03350" }, { "id": "2303.11381" }, { "id": "2101.06804" }, { "id": "2304.08354" }, { "id": "2212.14024" }, { "id": "2305.18752" }, { "id": "2211.10435" }, { "id": "2303.04671" }, { "id": "2210.12810" }, { "id": "1808.09588" }, { "id": "2304.11062" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2210.02406" }, { "id": "2212.10560" }, { "id": "2303.04129" }, { "id": "1704.01696" }, { "id": "2302.00923" }, { "id": "2211.12588" }, { "id": "1908.03557" }, { "id": "2210.05359" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2201.08239" }, { "id": "2303.16199" }, { "id": "2304.09842" }, { "id": "2204.01691" }, { "id": "2305.01625" }, { "id": "2303.12712" }, { "id": "2207.05608" }, { "id": "2303.03846" }, { "id": "2211.11559" }, { "id": "2207.01206" }, { "id": "2303.09014" }, { "id": "2205.12255" }, { "id": "2208.03188" } ]
2308.01423
48
inverse design of metal–organic frameworks with user-desired Xe/Kr selectivity. Journal of Materials Chemistry A 9, 21175-21183 (2021). Park, J., Lim, Y., Lee, S. & Kim, J. Computational design of metal–organic frameworks with unprecedented high hydrogen working capacity and high synthesizability. Chemistry of Materials 35, 9-16 (2022). Yao, Z. et al. Inverse design of nanoporous crystalline reticular materials with deep generative models. Nature Machine Intelligence 3, 76-86 (2021). Park, H., Majumdar, S., Zhang, X., Kim, J. & Smit, B. Inverse design of metal-organic frameworks for direct air capture of CO2 via deep reinforcement learning. (2023). Katoch, S., Chauhan, S. S. & Kumar, V. A review on genetic algorithm: past, present, and future. Multimedia tools and applications 80, 8091-8126 (2021). Chase Harrison. <https://github.com/hwchase17/langchain> (2022-10-17). Larsen, A. H. et al. The atomic simulation environment—a Python library for working with atoms. Journal of
2308.01423#48
ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks
ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to predict and generate metal-organic frameworks (MOFs). By leveraging a large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key details from textual inputs and delivers appropriate responses, thus eliminating the necessity for rigid structured queries. The system is comprised of three core components (i.e. an agent, a toolkit, and an evaluator) and it forms a robust pipeline that manages a variety of tasks, including data retrieval, property prediction, and structure generations. The study further explores the merits and constraints of using large language models (LLMs) AI system in material sciences using and showcases its transformative potential for future advancements.
http://arxiv.org/pdf/2308.01423
Yeonghun Kang, Jihan Kim
cs.CL, cs.AI, cs.LG, physics.chem-ph
null
null
cs.CL
20230801
20230825
[ { "id": "2302.13971" }, { "id": "2306.11296" }, { "id": "2303.17580" }, { "id": "2305.18365" }, { "id": "2305.10601" }, { "id": "1810.04805" }, { "id": "2211.09085" }, { "id": "2304.05376" }, { "id": "2212.05238" }, { "id": "2108.07258" }, { "id": "2110.06197" }, { "id": "2306.06283" }, { "id": "2008.02275" }, { "id": "2303.12712" }, { "id": "2210.03629" }, { "id": "2205.00445" }, { "id": "2009.03300" } ]
2308.00245
49
We sample 300 negative cases from Random-1000 in an effort to see whether we will miss any true bugs. We confirm that all are true negatives. Despite the limited data sampled, this result indicates that integrating GPT-4 into our implementation does not introduce apparent unsoundness. Further, we test LLift on the Bug-50 dataset to see whether it will miss any bugs discovered by UBITect. LLift has demonstrated full effectiveness in identifying all real bugs from Bug-50. This result, while encouraging, does not imply that LLift is flawless. Detailed data analysis reveals that: 1) There remain some inconsistencies in 3–5 cases occasionally, though they are mitigated by majority voting; and 2) all the bugs found by UBITect have trivial post-constraints (must_init) and postconditions (may_init). Hence, LLift could identify them easily. It is noteworthy that these cases are already those cases detectable by UBITect. Such cases tend to be simpler in nature and can be verified by symbolic execution in UBITect.
2308.00245#49
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Static analysis is a widely used technique in software engineering for identifying and mitigating bugs. However, a significant hurdle lies in achieving a delicate balance between precision and scalability. Large Language Models (LLMs) offer a promising alternative, as recent advances demonstrate remarkable capabilities in comprehending, generating, and even debugging code. Yet, the logic of bugs can be complex and require sophisticated reasoning and a large analysis scope spanning multiple functions. Therefore, at this point, LLMs are better used in an assistive role to complement static analysis. In this paper, we take a deep dive into the open space of LLM-assisted static analysis, using use-before-initialization (UBI) bugs as a case study. To this end, we develop LLift, a fully automated framework that interfaces with both a static analysis tool and an LLM. By carefully designing the framework and the prompts, we are able to overcome a number of challenges, including bug-specific modeling, the large problem scope, the non-deterministic nature of LLMs, etc. Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs produced by static analysis, LLift demonstrates a potent capability, showcasing a reasonable precision (50%) and appearing to have no missing bugs. It even identified 13 previously unknown UBI bugs in the Linux kernel. This research paves the way for new opportunities and methodologies in using LLMs for bug discovery in extensive, real-world datasets.
http://arxiv.org/pdf/2308.00245
Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian
cs.SE, cs.AI
null
null
cs.SE
20230801
20231115
[ { "id": "2305.10601" }, { "id": "2107.03374" }, { "id": "2210.11610" }, { "id": "2305.16291" }, { "id": "2305.16151" }, { "id": "2303.18223" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "2203.02155" }, { "id": "2304.11938" }, { "id": "2304.03262" }, { "id": "2304.10513" }, { "id": "2201.11903" }, { "id": "2305.12138" }, { "id": "2305.12865" }, { "id": "2303.08774" }, { "id": "2306.01987" }, { "id": "2304.06815" } ]
2308.00352
49
J. Schmidhuber. Gödel machines: Fully self-referential optimal universal self-improvers. In B. Goertzel and C. Pennachin (eds.), Artificial General Intelligence, pp. 199–226. Springer Verlag, 2006. Variant available as arXiv:cs.LO/0309048. J. Schmidhuber. Ultimate cognition à la Gödel. Cognitive Computation, 1(2):177–193, 2009. Jürgen Schmidhuber. Evolutionary principles in self-referential learning, or on learning how to learn: the meta-meta-... hook. PhD thesis, Technische Universität München, 1987. Jürgen Schmidhuber. A ‘self-referential’ weight matrix. In ICANN’93: Proceedings of the International Conference on Artificial Neural Networks, Amsterdam, The Netherlands, 13–16 September 1993 3, pp. 446–450. Springer, 1993b. Jürgen Schmidhuber. On learning to think: Algorithmic information theory for novel combinations of reinforcement learning controllers and recurrent neural world models. arXiv preprint arXiv:1511.09249, 2015.
2308.00352#49
MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework
Remarkable progress has been made on automated problem solving through societies of agents based on large language models (LLMs). Existing LLM-based multi-agent systems can already solve simple dialogue tasks. Solutions to more complex tasks, however, are complicated through logic inconsistencies due to cascading hallucinations caused by naively chaining LLMs. Here we introduce MetaGPT, an innovative meta-programming framework incorporating efficient human workflows into LLM-based multi-agent collaborations. MetaGPT encodes Standardized Operating Procedures (SOPs) into prompt sequences for more streamlined workflows, thus allowing agents with human-like domain expertise to verify intermediate results and reduce errors. MetaGPT utilizes an assembly line paradigm to assign diverse roles to various agents, efficiently breaking down complex tasks into subtasks involving many agents working together. On collaborative software engineering benchmarks, MetaGPT generates more coherent solutions than previous chat-based multi-agent systems. Our project can be found at https://github.com/geekan/MetaGPT
http://arxiv.org/pdf/2308.00352
Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, Jürgen Schmidhuber
cs.AI, cs.MA
null
null
cs.AI
20230801
20231106
[ { "id": "2308.12950" }, { "id": "2305.17066" }, { "id": "1511.09249" }, { "id": "2308.11432" }, { "id": "2306.08568" }, { "id": "2310.02304" }, { "id": "2303.08896" }, { "id": "2204.05999" }, { "id": "2309.16797" }, { "id": "2002.08155" }, { "id": "2305.16960" } ]
2308.00436
49
The output from the LLM is: To rewrite the equation in standard form of a circle, we need to complete the square for both x and y. Starting from the given equation: 6x = x^2 + y^2. We add and subtract (6/2)^2 = 9 on the right-hand side: 6x + 9 − 9 = x^2 + y^2 + 9 − 9. Simplifying: (x − 3)^2 + y^2 = 9. This is now in the standard form of a circle with center (3, 0) and radius 3. Then we can directly compare the regenerated step with the original one in the next stage. Result comparison The input to the LLM is: The following are 2 solutions to a math problem:
2308.00436#49
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
The recent progress in large language models (LLMs), especially the invention of chain-of-thought prompting, has made it possible to automatically answer questions by stepwise reasoning. However, when faced with more complicated problems that require non-linear thinking, even the strongest LLMs make mistakes. To address this, we explore whether LLMs are able to recognize errors in their own step-by-step reasoning, without resorting to external resources. To this end, we propose SelfCheck, a general-purpose zero-shot verification schema for recognizing such errors. We then use the results of these checks to improve question-answering performance by conducting weighted voting on multiple solutions to the question. We test SelfCheck on three datasets (GSM8K, MathQA, and MATH) and find that it successfully recognizes errors and, in turn, increases final answer accuracies.
http://arxiv.org/pdf/2308.00436
Ning Miao, Yee Whye Teh, Tom Rainforth
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230801
20231005
[ { "id": "2206.02336" }, { "id": "2302.13971" }, { "id": "2110.14168" }, { "id": "2305.20050" }, { "id": "2204.02311" }, { "id": "2301.13379" }, { "id": "2306.03872" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.08774" }, { "id": "2212.09561" } ]
2308.00675
49
[49] Dean A Pomerleau. Alvinn: An autonomous land vehicle in a neural network. Advances in neural information processing systems, 1, 1988. [50] Ofir Press, Muru Zhang, Sewon Min, Ludwig Schmidt, Noah A Smith, and Mike Lewis. Measuring and narrowing the compositionality gap in language models. arXiv preprint arXiv:2210.03350, 2022. [51] Yujia Qin, Shengding Hu, Yankai Lin, Weize Chen, Ning Ding, Ganqu Cui, Zheni Zeng, Yufei Huang, Chaojun Xiao, Chi Han, et al. Tool learning with foundation models. arXiv preprint arXiv:2304.08354, 2023. [52] Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, and Maosong Sun. Toolllm: Facilitating large language models to master 16000+ real-world apis, 2023.
2308.00675#49
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Today, large language models (LLMs) are taught to use new tools by providing a few demonstrations of the tool's usage. Unfortunately, demonstrations are hard to acquire, and can result in undesirable biased usage if the wrong demonstration is chosen. Even in the rare scenario that demonstrations are readily available, there is no principled selection protocol to determine how many and which ones to provide. As tasks grow more complex, the selection search grows combinatorially and invariably becomes intractable. Our work provides an alternative to demonstrations: tool documentation. We advocate the use of tool documentation, descriptions for the individual tool usage, over demonstrations. We substantiate our claim through three main empirical findings on 6 tasks across both vision and language modalities. First, on existing benchmarks, zero-shot prompts with only tool documentation are sufficient for eliciting proper tool usage, achieving performance on par with few-shot prompts. Second, on a newly collected realistic tool-use dataset with hundreds of available tool APIs, we show that tool documentation is significantly more valuable than demonstrations, with zero-shot documentation significantly outperforming few-shot without documentation. Third, we highlight the benefits of tool documentations by tackling image generation and video tracking using just-released unseen state-of-the-art models as tools. Finally, we highlight the possibility of using tool documentation to automatically enable new applications: by using nothing more than the documentation of GroundingDino, Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the just-released Grounded-SAM and Track Anything models.
http://arxiv.org/pdf/2308.00675
Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister
cs.CL, cs.AI, cs.CV, cs.LG
null
null
cs.CL
20230801
20230801
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2204.10878" }, { "id": "1910.08210" }, { "id": "2107.07653" }, { "id": "2201.11903" }, { "id": "2305.17126" }, { "id": "1704.07535" }, { "id": "2205.01068" }, { "id": "2203.05115" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2305.04091" }, { "id": "2303.05499" }, { "id": "2107.07566" }, { "id": "2110.14168" }, { "id": "2210.03350" }, { "id": "2303.11381" }, { "id": "2101.06804" }, { "id": "2304.08354" }, { "id": "2212.14024" }, { "id": "2305.18752" }, { "id": "2211.10435" }, { "id": "2303.04671" }, { "id": "2210.12810" }, { "id": "1808.09588" }, { "id": "2304.11062" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2210.02406" }, { "id": "2212.10560" }, { "id": "2303.04129" }, { "id": "1704.01696" }, { "id": "2302.00923" }, { "id": "2211.12588" }, { "id": "1908.03557" }, { "id": "2210.05359" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2201.08239" }, { "id": "2303.16199" }, { "id": "2304.09842" }, { "id": "2204.01691" }, { "id": "2305.01625" }, { "id": "2303.12712" }, { "id": "2207.05608" }, { "id": "2303.03846" }, { "id": "2211.11559" }, { "id": "2207.01206" }, { "id": "2303.09014" }, { "id": "2205.12255" }, { "id": "2208.03188" } ]
2308.00245
50
6.4 RQ3: Contributions of Design Strategies In our effort to delineate the contributions of distinct design strategies to the final results, we undertook an evaluative exercise against the Cmp-40 dataset, employing varying configurations of our solution, each entailing a unique combination of our proposed strategies. As illustrated in Table 4, the strategies under consideration encompass Post-constraint Analysis (PCA), Progressive Prompt (PP), Self-Validation (SV), and Task Decomposition (TD). The findings underscore an overall trend of enhanced performance with the integration of additional design strategies. In this study, the Baseline corresponds to a straightforward prompt, "check this code to determine if there are any UBI bugs", a strategy that has been found to be rather insufficient for discovering new vulnerabilities, as corroborated by past studies [17, 21, 31], reflecting a modest recall rate of 0.15 and a precision of 0.12. Incorporating PCA offers a notable enhancement, enabling the LLM to uncover a wider array of vulnerabilities. As shown in Table 4, there is a substantial improvement in recall in comparison to the baseline, an anticipated outcome considering PCA’s pivotal role in our solution. However, solely relying on this strategy still leaves a lot of room for optimization.
2308.00245#50
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Static analysis is a widely used technique in software engineering for identifying and mitigating bugs. However, a significant hurdle lies in achieving a delicate balance between precision and scalability. Large Language Models (LLMs) offer a promising alternative, as recent advances demonstrate remarkable capabilities in comprehending, generating, and even debugging code. Yet, the logic of bugs can be complex and require sophisticated reasoning and a large analysis scope spanning multiple functions. Therefore, at this point, LLMs are better used in an assistive role to complement static analysis. In this paper, we take a deep dive into the open space of LLM-assisted static analysis, using use-before-initialization (UBI) bugs as a case study. To this end, we develop LLift, a fully automated framework that interfaces with both a static analysis tool and an LLM. By carefully designing the framework and the prompts, we are able to overcome a number of challenges, including bug-specific modeling, the large problem scope, the non-deterministic nature of LLMs, etc. Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs produced by static analysis, LLift demonstrates a potent capability, showcasing a reasonable precision (50%) and appearing to have no missing bugs. It even identified 13 previously unknown UBI bugs in the Linux kernel. This research paves the way for new opportunities and methodologies in using LLMs for bug discovery in extensive, real-world datasets.
http://arxiv.org/pdf/2308.00245
Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian
cs.SE, cs.AI
null
null
cs.SE
20230801
20231115
[ { "id": "2305.10601" }, { "id": "2107.03374" }, { "id": "2210.11610" }, { "id": "2305.16291" }, { "id": "2305.16151" }, { "id": "2303.18223" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "2203.02155" }, { "id": "2304.11938" }, { "id": "2304.03262" }, { "id": "2304.10513" }, { "id": "2201.11903" }, { "id": "2305.12138" }, { "id": "2305.12865" }, { "id": "2303.08774" }, { "id": "2306.01987" }, { "id": "2304.06815" } ]
2308.00352
50
Jürgen Schmidhuber, Jieyu Zhao, and Nicol N Schraudolph. Reinforcement learning with self-modifying policies. In Learning to learn, pp. 293–309. Springer, 1998. Noah Shinn, Beck Labash, and Ashwin Gopinath. Reflexion: an autonomous agent with dynamic memory and self-reflection. arXiv preprint, 2023. Marta Skreta, Naruki Yoshikawa, Sebastian Arellano-Rubach, Zhi Ji, Lasse Bjørn Kristensen, Kourosh Darvish, Alán Aspuru-Guzik, Florian Shkurti, and Animesh Garg. Errors are useful prompts: Instruction guided task programming with verifier-assisted iterative prompting. arXiv preprint, 2023. Elliot Soloway. Learning to program = learning to construct mechanisms and explanations. Communications of the ACM, 29(9):850–858, 1986. Torantulino et al. Auto-gpt. https://github.com/Significant-Gravitas/Auto-GPT, 2023.
2308.00352#50
MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework
Remarkable progress has been made on automated problem solving through societies of agents based on large language models (LLMs). Existing LLM-based multi-agent systems can already solve simple dialogue tasks. Solutions to more complex tasks, however, are complicated through logic inconsistencies due to cascading hallucinations caused by naively chaining LLMs. Here we introduce MetaGPT, an innovative meta-programming framework incorporating efficient human workflows into LLM-based multi-agent collaborations. MetaGPT encodes Standardized Operating Procedures (SOPs) into prompt sequences for more streamlined workflows, thus allowing agents with human-like domain expertise to verify intermediate results and reduce errors. MetaGPT utilizes an assembly line paradigm to assign diverse roles to various agents, efficiently breaking down complex tasks into subtasks involving many agents working together. On collaborative software engineering benchmarks, MetaGPT generates more coherent solutions than previous chat-based multi-agent systems. Our project can be found at https://github.com/geekan/MetaGPT
http://arxiv.org/pdf/2308.00352
Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, Jürgen Schmidhuber
cs.AI, cs.MA
null
null
cs.AI
20230801
20231106
[ { "id": "2308.12950" }, { "id": "2305.17066" }, { "id": "1511.09249" }, { "id": "2308.11432" }, { "id": "2306.08568" }, { "id": "2310.02304" }, { "id": "2303.08896" }, { "id": "2204.05999" }, { "id": "2309.16797" }, { "id": "2002.08155" }, { "id": "2305.16960" } ]
2308.00436
50
Then we can directly compare the regenerated step with the original one in the next stage. Result comparison The input to the LLM is: The following are 2 solutions to a math problem: Solution 1: To rewrite the equation in standard form of a circle, we need to complete the square for both x and y. Starting from the given equation: 6x = x^2 + y^2. We add and subtract (6/2)^2 = 9 on the right-hand side: 6x + 9 − 9 = x^2 + y^2 + 9 − 9. Simplifying: (x − 3)^2 + y^2 = 9. This is now in the standard form of a circle with center (3, 0) and radius 3. Solution 2: Completing the square, we obtain (x − 3/2)^2 + y^2 = 9/4. Compare the key points from both solutions step by step and then check whether Solution 1 "supports", "contradicts" or "is not directly related to" the conclusion in Solution 2. Pay special attention to difference in numbers. The underlined parts are simply the problem type, the regenerated step and the original step. The output is:
2308.00436#50
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
The recent progress in large language models (LLMs), especially the invention of chain-of-thought prompting, has made it possible to automatically answer questions by stepwise reasoning. However, when faced with more complicated problems that require non-linear thinking, even the strongest LLMs make mistakes. To address this, we explore whether LLMs are able to recognize errors in their own step-by-step reasoning, without resorting to external resources. To this end, we propose SelfCheck, a general-purpose zero-shot verification schema for recognizing such errors. We then use the results of these checks to improve question-answering performance by conducting weighted voting on multiple solutions to the question. We test SelfCheck on three datasets (GSM8K, MathQA, and MATH) and find that it successfully recognizes errors and, in turn, increases final answer accuracies.
http://arxiv.org/pdf/2308.00436
Ning Miao, Yee Whye Teh, Tom Rainforth
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230801
20231005
[ { "id": "2206.02336" }, { "id": "2302.13971" }, { "id": "2110.14168" }, { "id": "2305.20050" }, { "id": "2204.02311" }, { "id": "2301.13379" }, { "id": "2306.03872" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.08774" }, { "id": "2212.09561" } ]
2308.00675
50
[53] Maxim Rabinovich, Mitchell Stern, and Dan Klein. Abstract syntax networks for code generation and semantic parsing. arXiv preprint arXiv:1704.07535, 2017. [54] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10684–10695, 2022. [55] Stéphane Ross, Geoffrey Gordon, and Drew Bagnell. A reduction of imitation learning and structured prediction to no-regret online learning. In Proceedings of the fourteenth international conference on artificial intelligence and statistics, pages 627–635. JMLR Workshop and Conference Proceedings, 2011. [56] Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. Toolformer: Language models can teach themselves to use tools. arXiv preprint arXiv:2302.04761, 2023.
2308.00675#50
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Today, large language models (LLMs) are taught to use new tools by providing a few demonstrations of the tool's usage. Unfortunately, demonstrations are hard to acquire, and can result in undesirable biased usage if the wrong demonstration is chosen. Even in the rare scenario that demonstrations are readily available, there is no principled selection protocol to determine how many and which ones to provide. As tasks grow more complex, the selection search grows combinatorially and invariably becomes intractable. Our work provides an alternative to demonstrations: tool documentation. We advocate the use of tool documentation, descriptions for the individual tool usage, over demonstrations. We substantiate our claim through three main empirical findings on 6 tasks across both vision and language modalities. First, on existing benchmarks, zero-shot prompts with only tool documentation are sufficient for eliciting proper tool usage, achieving performance on par with few-shot prompts. Second, on a newly collected realistic tool-use dataset with hundreds of available tool APIs, we show that tool documentation is significantly more valuable than demonstrations, with zero-shot documentation significantly outperforming few-shot without documentation. Third, we highlight the benefits of tool documentations by tackling image generation and video tracking using just-released unseen state-of-the-art models as tools. Finally, we highlight the possibility of using tool documentation to automatically enable new applications: by using nothing more than the documentation of GroundingDino, Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the just-released Grounded-SAM and Track Anything models.
http://arxiv.org/pdf/2308.00675
Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister
cs.CL, cs.AI, cs.CV, cs.LG
null
null
cs.CL
20230801
20230801
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2204.10878" }, { "id": "1910.08210" }, { "id": "2107.07653" }, { "id": "2201.11903" }, { "id": "2305.17126" }, { "id": "1704.07535" }, { "id": "2205.01068" }, { "id": "2203.05115" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2305.04091" }, { "id": "2303.05499" }, { "id": "2107.07566" }, { "id": "2110.14168" }, { "id": "2210.03350" }, { "id": "2303.11381" }, { "id": "2101.06804" }, { "id": "2304.08354" }, { "id": "2212.14024" }, { "id": "2305.18752" }, { "id": "2211.10435" }, { "id": "2303.04671" }, { "id": "2210.12810" }, { "id": "1808.09588" }, { "id": "2304.11062" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2210.02406" }, { "id": "2212.10560" }, { "id": "2303.04129" }, { "id": "1704.01696" }, { "id": "2302.00923" }, { "id": "2211.12588" }, { "id": "1908.03557" }, { "id": "2210.05359" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2201.08239" }, { "id": "2303.16199" }, { "id": "2304.09842" }, { "id": "2204.01691" }, { "id": "2305.01625" }, { "id": "2303.12712" }, { "id": "2207.05608" }, { "id": "2303.03846" }, { "id": "2211.11559" }, { "id": "2207.01206" }, { "id": "2303.09014" }, { "id": "2205.12255" }, { "id": "2208.03188" } ]
2308.01423
50
59 Willems, T. F., Rycroft, C. H., Kazi, M., Meza, J. C. & Haranczyk, M. Algorithms and tools for high-throughput geometry-based analysis of crystalline porous materials. Microporous and Mesoporous Materials 149, 134-141 (2012). Supplementary information for: ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks. Yeonghun Kang, Jihan Kim*, Department of Chemical and Biomolecular Engineering, Korea Advanced Institute of Science and Technology (KAIST), 291, Daehak-ro, Yuseong-gu, Daejeon 34141, Republic of Korea. # Table of Contents Supplementary Note S1. Prompt for planning in searcher Supplementary Note S2. Prompt for planning in predictor Supplementary Note S3. Prompt for single data retrieval in predictor Supplementary Note S4. Prompt for planning in generator Supplementary Note S5. Prompt for genetic algorithm in generator Supplementary Note S6. Prompt for planning in agent
2308.01423#50
ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks
ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to predict and generate metal-organic frameworks (MOFs). By leveraging a large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key details from textual inputs and delivers appropriate responses, thus eliminating the necessity for rigid structured queries. The system is comprised of three core components (i.e. an agent, a toolkit, and an evaluator) and it forms a robust pipeline that manages a variety of tasks, including data retrieval, property prediction, and structure generations. The study further explores the merits and constraints of using large language model (LLM) AI systems in materials science and showcases their transformative potential for future advancements.
http://arxiv.org/pdf/2308.01423
Yeonghun Kang, Jihan Kim
cs.CL, cs.AI, cs.LG, physics.chem-ph
null
null
cs.CL
20230801
20230825
[ { "id": "2302.13971" }, { "id": "2306.11296" }, { "id": "2303.17580" }, { "id": "2305.18365" }, { "id": "2305.10601" }, { "id": "1810.04805" }, { "id": "2211.09085" }, { "id": "2304.05376" }, { "id": "2212.05238" }, { "id": "2108.07258" }, { "id": "2110.06197" }, { "id": "2306.06283" }, { "id": "2008.02275" }, { "id": "2303.12712" }, { "id": "2210.03629" }, { "id": "2205.00445" }, { "id": "2009.03300" } ]
2308.00245
51
The influence of Progressive Prompt (PP) on the results is quite intriguing. While its impact appears to lower precision initially, the introduction of task decomposition and self-validation in conjunction with PP reveals a substantial boost in performance. Table 5: Comparison of different LLMs on real bugs, from a subset of Bug-50 Caller GPT 4 3.5 Claude2 Bard ✓ ✓ hpet_msi_resume ✓ ✓ ctrl_cx2341x_getv4lflags ✓ ✓ axi_clkgen_recalc_rate max8907_regulator_probe ✓ ✓ ✓ ✓ ov5693_detect ✓ ✗ iommu_unmap_page ✓ ✓ mt9m114_detect ✓ ✓ ec_read_u8 ✓ ✓ compress_sliced_buf ✓ ✗ ✓ ✓ ✗ ✓ ✓ ✓ ✗ ✗ ✗ ✓ ✓ ✓ ✗ ✓ ✓ ✓ Without
2308.00245#51
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Static analysis is a widely used technique in software engineering for identifying and mitigating bugs. However, a significant hurdle lies in achieving a delicate balance between precision and scalability. Large Language Models (LLMs) offer a promising alternative, as recent advances demonstrate remarkable capabilities in comprehending, generating, and even debugging code. Yet, the logic of bugs can be complex and require sophisticated reasoning and a large analysis scope spanning multiple functions. Therefore, at this point, LLMs are better used in an assistive role to complement static analysis. In this paper, we take a deep dive into the open space of LLM-assisted static analysis, using use-before-initialization (UBI) bugs as a case study. To this end, we develop LLift, a fully automated framework that interfaces with both a static analysis tool and an LLM. By carefully designing the framework and the prompts, we are able to overcome a number of challenges, including bug-specific modeling, the large problem scope, the non-deterministic nature of LLMs, etc. Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs produced by static analysis, LLift demonstrates a potent capability, showcasing a reasonable precision (50%) and appearing to have no missing bugs. It even identified 13 previously unknown UBI bugs in the Linux kernel. This research paves the way for new opportunities and methodologies in using LLMs for bug discovery in extensive, real-world datasets.
http://arxiv.org/pdf/2308.00245
Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian
cs.SE, cs.AI
null
null
cs.SE
20230801
20231115
[ { "id": "2305.10601" }, { "id": "2107.03374" }, { "id": "2210.11610" }, { "id": "2305.16291" }, { "id": "2305.16151" }, { "id": "2303.18223" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "2203.02155" }, { "id": "2304.11938" }, { "id": "2304.03262" }, { "id": "2304.10513" }, { "id": "2201.11903" }, { "id": "2305.12138" }, { "id": "2305.12865" }, { "id": "2303.08774" }, { "id": "2306.01987" }, { "id": "2304.06815" } ]
2308.00352
51
Torantulino et al. Auto-gpt. https://github.com/Significant-Gravitas/Auto-GPT, 2023. R. J. Waldinger and R. C. T. Lee. PROW: a step toward automatic program writing. In D. E. Walker and L. M. Norton (eds.), Proceedings of the 1st International Joint Conference on Artificial Intelligence (IJCAI), pp. 241–252. Morgan Kaufmann, 1969. Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Mandlekar, Chaowei Xiao, Yuke Zhu, Linxi Fan, and Anima Anandkumar. Voyager: An open-ended embodied agent with large language models. arXiv preprint, 2023a. Lei Wang, Chen Ma, Xueyang Feng, Zeyu Zhang, Hao Yang, Jingsen Zhang, Zhiyuan Chen, Jiakai Tang, Xu Chen, Yankai Lin, et al. A survey on large language model based autonomous agents. arXiv preprint arXiv:2308.11432, 2023b.
2308.00352#51
MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework
Remarkable progress has been made on automated problem solving through societies of agents based on large language models (LLMs). Existing LLM-based multi-agent systems can already solve simple dialogue tasks. Solutions to more complex tasks, however, are complicated through logic inconsistencies due to cascading hallucinations caused by naively chaining LLMs. Here we introduce MetaGPT, an innovative meta-programming framework incorporating efficient human workflows into LLM-based multi-agent collaborations. MetaGPT encodes Standardized Operating Procedures (SOPs) into prompt sequences for more streamlined workflows, thus allowing agents with human-like domain expertise to verify intermediate results and reduce errors. MetaGPT utilizes an assembly line paradigm to assign diverse roles to various agents, efficiently breaking down complex tasks into subtasks involving many agents working together. On collaborative software engineering benchmarks, MetaGPT generates more coherent solutions than previous chat-based multi-agent systems. Our project can be found at https://github.com/geekan/MetaGPT
http://arxiv.org/pdf/2308.00352
Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, Jürgen Schmidhuber
cs.AI, cs.MA
null
null
cs.AI
20230801
20231106
[ { "id": "2308.12950" }, { "id": "2305.17066" }, { "id": "1511.09249" }, { "id": "2308.11432" }, { "id": "2306.08568" }, { "id": "2310.02304" }, { "id": "2303.08896" }, { "id": "2204.05999" }, { "id": "2309.16797" }, { "id": "2002.08155" }, { "id": "2305.16960" } ]
2308.00436
51
The underlined parts are simply the problem type, the regenerated step and the original step. The output is: In both solutions, the goal is to rewrite the given equation in standard form of a circle. However, in Solution 1, the completion of the square results in the center of the circle being (3, 0) and a radius of 3. In Solution 2, the completion of the square results in the center of the circle being (3/2, 0) and a radius of 3/2. Therefore, Solution 1 contradicts the conclusion in Solution 2 as the centers and radii of the circles are different. If the word ‘support’ or ‘contradict’ appears in the last line of the output, SelfCheck outputs a step score 1 or -1. Otherwise, SelfCheck outputs 0, meaning that the two steps are not related. # APPENDIX B EXEMPLAR FOR ONE-SHOT ERROR CHECKING The following example from Ling et al. (2023) is used to perform one-shot error checking. Here is some information: “1. Alice gets 2 apples. 2. Alice gets twice as many oranges as bananas. 3. Original price per apple: $2 Discount: 50% Price per apple after discount: $2 * (1 - 0.50) = $2 * 0.50 = $1"
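The step-score rule described in this chunk is mechanical enough to sketch in code. The following is a minimal illustration of that rule, not SelfCheck's actual implementation; the function name and the example verdict string are assumptions for demonstration only.

```python
def step_score(comparison_output: str) -> int:
    # Following the rule above: if the checker's last line says the regenerated
    # step "supports" the original step, score +1; if it "contradicts" it,
    # score -1; otherwise the two steps are treated as unrelated and score 0.
    last_line = comparison_output.strip().splitlines()[-1].lower()
    if "support" in last_line:
        return 1
    if "contradict" in last_line:
        return -1
    return 0

# The verdict quoted above ends by stating a contradiction, so it maps to -1.
verdict = ("Therefore, Solution 1 contradicts the conclusion in Solution 2 "
           "as the centers and radii of the circles are different.")
print(step_score(verdict))  # -1
```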
2308.00436#51
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
The recent progress in large language models (LLMs), especially the invention of chain-of-thought prompting, has made it possible to automatically answer questions by stepwise reasoning. However, when faced with more complicated problems that require non-linear thinking, even the strongest LLMs make mistakes. To address this, we explore whether LLMs are able to recognize errors in their own step-by-step reasoning, without resorting to external resources. To this end, we propose SelfCheck, a general-purpose zero-shot verification schema for recognizing such errors. We then use the results of these checks to improve question-answering performance by conducting weighted voting on multiple solutions to the question. We test SelfCheck on three datasets (GSM8K, MathQA, and MATH) and find that it successfully recognizes errors and, in turn, increases final answer accuracies.
http://arxiv.org/pdf/2308.00436
Ning Miao, Yee Whye Teh, Tom Rainforth
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230801
20231005
[ { "id": "2206.02336" }, { "id": "2302.13971" }, { "id": "2110.14168" }, { "id": "2305.20050" }, { "id": "2204.02311" }, { "id": "2301.13379" }, { "id": "2306.03872" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.08774" }, { "id": "2212.09561" } ]
2308.00675
51
[57] Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, and Yueting Zhuang. Hugginggpt: Solving ai tasks with chatgpt and its friends in huggingface. arXiv preprint arXiv:2303.17580, 2023. [58] Kurt Shuster, Jing Xu, Mojtaba Komeili, Da Ju, Eric Michael Smith, Stephen Roller, Megan Ung, Moya Chen, Kushal Arora, Joshua Lane, et al. Blenderbot 3: a deployed conversational agent that continually learns to responsibly engage. arXiv preprint arXiv:2208.03188, 2022. [59] Michael Sipser. Introduction to the theory of computation. ACM Sigact News, 27(1):27–29, 1996. [60] Alane Suhr, Stephanie Zhou, Ally Zhang, Iris Zhang, Huajun Bai, and Yoav Artzi. A corpus for reasoning about natural language grounded in photographs. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6418–6428, Florence, Italy, July 2019. Association for Computational Linguistics.
2308.00675#51
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Today, large language models (LLMs) are taught to use new tools by providing a few demonstrations of the tool's usage. Unfortunately, demonstrations are hard to acquire, and can result in undesirable biased usage if the wrong demonstration is chosen. Even in the rare scenario that demonstrations are readily available, there is no principled selection protocol to determine how many and which ones to provide. As tasks grow more complex, the selection search grows combinatorially and invariably becomes intractable. Our work provides an alternative to demonstrations: tool documentation. We advocate the use of tool documentation, descriptions for the individual tool usage, over demonstrations. We substantiate our claim through three main empirical findings on 6 tasks across both vision and language modalities. First, on existing benchmarks, zero-shot prompts with only tool documentation are sufficient for eliciting proper tool usage, achieving performance on par with few-shot prompts. Second, on a newly collected realistic tool-use dataset with hundreds of available tool APIs, we show that tool documentation is significantly more valuable than demonstrations, with zero-shot documentation significantly outperforming few-shot without documentation. Third, we highlight the benefits of tool documentations by tackling image generation and video tracking using just-released unseen state-of-the-art models as tools. Finally, we highlight the possibility of using tool documentation to automatically enable new applications: by using nothing more than the documentation of GroundingDino, Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the just-released Grounded-SAM and Track Anything models.
http://arxiv.org/pdf/2308.00675
Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister
cs.CL, cs.AI, cs.CV, cs.LG
null
null
cs.CL
20230801
20230801
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2204.10878" }, { "id": "1910.08210" }, { "id": "2107.07653" }, { "id": "2201.11903" }, { "id": "2305.17126" }, { "id": "1704.07535" }, { "id": "2205.01068" }, { "id": "2203.05115" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2305.04091" }, { "id": "2303.05499" }, { "id": "2107.07566" }, { "id": "2110.14168" }, { "id": "2210.03350" }, { "id": "2303.11381" }, { "id": "2101.06804" }, { "id": "2304.08354" }, { "id": "2212.14024" }, { "id": "2305.18752" }, { "id": "2211.10435" }, { "id": "2303.04671" }, { "id": "2210.12810" }, { "id": "1808.09588" }, { "id": "2304.11062" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2210.02406" }, { "id": "2212.10560" }, { "id": "2303.04129" }, { "id": "1704.01696" }, { "id": "2302.00923" }, { "id": "2211.12588" }, { "id": "1908.03557" }, { "id": "2210.05359" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2201.08239" }, { "id": "2303.16199" }, { "id": "2304.09842" }, { "id": "2204.01691" }, { "id": "2305.01625" }, { "id": "2303.12712" }, { "id": "2207.05608" }, { "id": "2303.03846" }, { "id": "2211.11559" }, { "id": "2207.01206" }, { "id": "2303.09014" }, { "id": "2205.12255" }, { "id": "2208.03188" } ]
2308.01423
51
Supplementary Figure S1. Histogram for accessible surface area of MOF in the CoREMOF database. Supplementary Figure S2. Depiction of accuracies for three tasks using gpt-3.5-turbo model Supplementary Figure S3. ChatMOF's answer to the question using gpt-4 and gpt-3.5-turbo Supplementary Table S1. 100 questions in search task Supplementary Table S2. 100 questions in prediction task Supplementary Table S3. 10 questions in generation task Supplementary Note S1. Prompt for planning in the searcher You are working with a pandas dataframe in Python. The name of the dataframe is `df`. You should make a valid python command as input. You must use print the output using the `print` function at the end. You should use the `to_markdown` function when you print a pandas object. Use the following format: Question: the input question you must answer Thought: you should always think about what to do Input: the valid python code only using the Pandas library Observation: the result of python code ... (this Thought/Input/Observation can repeat N times) Final Thought: you should think about how to answer the question based on your observation Final Answer: the final answer to the original input question. If you can't answer the question, say `nothing`
2308.01423#51
ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks
ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to predict and generate metal-organic frameworks (MOFs). By leveraging a large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key details from textual inputs and delivers appropriate responses, thus eliminating the necessity for rigid structured queries. The system is comprised of three core components (i.e. an agent, a toolkit, and an evaluator) and it forms a robust pipeline that manages a variety of tasks, including data retrieval, property prediction, and structure generations. The study further explores the merits and constraints of using large language model (LLM) AI systems in materials science and showcases their transformative potential for future advancements.
http://arxiv.org/pdf/2308.01423
Yeonghun Kang, Jihan Kim
cs.CL, cs.AI, cs.LG, physics.chem-ph
null
null
cs.CL
20230801
20230825
[ { "id": "2302.13971" }, { "id": "2306.11296" }, { "id": "2303.17580" }, { "id": "2305.18365" }, { "id": "2305.10601" }, { "id": "1810.04805" }, { "id": "2211.09085" }, { "id": "2304.05376" }, { "id": "2212.05238" }, { "id": "2108.07258" }, { "id": "2110.06197" }, { "id": "2306.06283" }, { "id": "2008.02275" }, { "id": "2303.12712" }, { "id": "2210.03629" }, { "id": "2205.00445" }, { "id": "2009.03300" } ]
2308.00245
52
PP, the LLM is restricted to deducing the function behavior merely based on the function context's semantics without further code analysis. Even though this approach can be effective in a range of situations, it confines the reasoning ability to the information available in its training data. By checking the detailed conversation, we notice the omission of TD or SV tends to result in the LLM neglecting the post-constraints, subsequently leading to errors. Beyond influencing precision and recall, Task Decomposition (TD) and Self-Validation (SV) also play a crucial role in enhancing consistency. In this context, a result is deemed consistent if the LLM yields the same outcome across its initial two runs. A comparison between our comprehensive final design encompassing all components, and the designs lacking TD and SV, respectively, reveals that both TD and SV notably augment the number of consistent results, and deliver 17 and 23 consistent results in its negative and positive results, respectively, underscoring their importance in ensuring reliable and consistent outcomes. Finally, TD also holds significance in terms of conserving tokens. In our evaluation phase, we identified two instances within the PCA+PP and PCA+PP+SV configurations where the token count surpassed the limitations set by GPT-4. However, this constraint was not breached in any case when TD was incorporated. Takeaway 3. All of LLift’s design strategies contributed to the positive results.
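Two of the mechanisms discussed in this chunk, the two-run consistency criterion and majority voting over repeated queries, can be sketched compactly. The snippet below is only an illustrative approximation under assumed interfaces: `query_llm` and the verdict strings are invented placeholders, not LLift's real API.

```python
from collections import Counter
from typing import Callable, List

def majority_verdict(query_llm: Callable[[str], str], prompt: str, n_runs: int = 3) -> str:
    # Run the same analysis prompt several times and keep the most common
    # verdict, mitigating the non-determinism of the underlying LLM.
    verdicts: List[str] = [query_llm(prompt) for _ in range(n_runs)]
    return Counter(verdicts).most_common(1)[0][0]

def is_consistent(query_llm: Callable[[str], str], prompt: str) -> bool:
    # "Consistent" in the sense used above: the first two runs already agree.
    return query_llm(prompt) == query_llm(prompt)
```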
2308.00245#52
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Static analysis is a widely used technique in software engineering for identifying and mitigating bugs. However, a significant hurdle lies in achieving a delicate balance between precision and scalability. Large Language Models (LLMs) offer a promising alternative, as recent advances demonstrate remarkable capabilities in comprehending, generating, and even debugging code. Yet, the logic of bugs can be complex and require sophisticated reasoning and a large analysis scope spanning multiple functions. Therefore, at this point, LLMs are better used in an assistive role to complement static analysis. In this paper, we take a deep dive into the open space of LLM-assisted static analysis, using use-before-initialization (UBI) bugs as a case study. To this end, we develop LLift, a fully automated framework that interfaces with both a static analysis tool and an LLM. By carefully designing the framework and the prompts, we are able to overcome a number of challenges, including bug-specific modeling, the large problem scope, the non-deterministic nature of LLMs, etc. Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs produced by static analysis, LLift demonstrates a potent capability, showcasing a reasonable precision (50%) and appearing to have no missing bugs. It even identified 13 previously unknown UBI bugs in the Linux kernel. This research paves the way for new opportunities and methodologies in using LLMs for bug discovery in extensive, real-world datasets.
http://arxiv.org/pdf/2308.00245
Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian
cs.SE, cs.AI
null
null
cs.SE
20230801
20231115
[ { "id": "2305.10601" }, { "id": "2107.03374" }, { "id": "2210.11610" }, { "id": "2305.16291" }, { "id": "2305.16151" }, { "id": "2303.18223" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "2203.02155" }, { "id": "2304.11938" }, { "id": "2304.03262" }, { "id": "2304.10513" }, { "id": "2201.11903" }, { "id": "2305.12138" }, { "id": "2305.12865" }, { "id": "2303.08774" }, { "id": "2306.01987" }, { "id": "2304.06815" } ]
2308.00352
52
Zhenhailong Wang, Shaoguang Mao, Wenshan Wu, Tao Ge, Furu Wei, and Heng Ji. Unleashing cognitive synergy in large language models: A task-solving agent through multi-persona self-collaboration. arXiv preprint, 2023c. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. NeurIPS, 2022. In Proceedings of the Second International Conference on Autonomous Agents, 1998. URL https://doi.org/10.1145/280765.280867. Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. React: Synergizing reasoning and acting in language models. arXiv preprint, 2022. Eric Zelikman, Eliana Lorch, Lester Mackey, and Adam Tauman Kalai. Self-taught optimizer (stop): Recursively self-improving code generation. arXiv preprint arXiv:2310.02304, 2023.
2308.00352#52
MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework
Remarkable progress has been made on automated problem solving through societies of agents based on large language models (LLMs). Existing LLM-based multi-agent systems can already solve simple dialogue tasks. Solutions to more complex tasks, however, are complicated through logic inconsistencies due to cascading hallucinations caused by naively chaining LLMs. Here we introduce MetaGPT, an innovative meta-programming framework incorporating efficient human workflows into LLM-based multi-agent collaborations. MetaGPT encodes Standardized Operating Procedures (SOPs) into prompt sequences for more streamlined workflows, thus allowing agents with human-like domain expertise to verify intermediate results and reduce errors. MetaGPT utilizes an assembly line paradigm to assign diverse roles to various agents, efficiently breaking down complex tasks into subtasks involving many agents working together. On collaborative software engineering benchmarks, MetaGPT generates more coherent solutions than previous chat-based multi-agent systems. Our project can be found at https://github.com/geekan/MetaGPT
http://arxiv.org/pdf/2308.00352
Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, Jürgen Schmidhuber
cs.AI, cs.MA
null
null
cs.AI
20230801
20231106
[ { "id": "2308.12950" }, { "id": "2305.17066" }, { "id": "1511.09249" }, { "id": "2308.11432" }, { "id": "2306.08568" }, { "id": "2310.02304" }, { "id": "2303.08896" }, { "id": "2204.05999" }, { "id": "2309.16797" }, { "id": "2002.08155" }, { "id": "2305.16960" } ]
2308.00436
52
Discount: 50% Price per apple after discount: $2 * (1 - 0.50) = $2 * 0.50 = $1" Based on the given information, here is a reasoning process: "Calculate Alice’s cost of the 5 apples after a 50% discount. Price per apple after discount: $1 Apples be eaten: 3 Oranges be eaten: 6 Total apples: 5 Total oranges: x Total bananas: 2x Alice’s total number of fruits: 5 + x + 2x = 5 + 4x Alice’s total cost of 5 apples: $1 * (5 - 3) = $2” Double-check the reasoning process, let’s analyze its correctness, and end with "yes" or "no". Answer: Let’s think step by step. Grounding check Purpose: Find references for numbers in the reasoning process sequentially Rule: Any ungrounded number makes the check fails "x" is a variable, we do not care about variables. "y" is a variable, we do not care about variables.
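The grounding-check rule quoted here ("any ungrounded number makes the check fail") admits a simple, if crude, reading: every literal number in a step must already appear in the question or earlier context. The sketch below only illustrates that reading; the substring matching and the example strings are assumptions, not the checking procedure used in the exemplar.

```python
import re

def grounding_check(step: str, context: str) -> bool:
    # Collect every number mentioned in the step and require each one to
    # appear verbatim somewhere in the context; any ungrounded number fails.
    numbers = re.findall(r"\d+(?:\.\d+)?", step)
    return all(num in context for num in numbers)

context = "Original price per apple: 2 dollars. Discount: 0.50."
# Fails because the derived total "1" never appears in the context.
print(grounding_check("Price per apple after discount: 2 * 0.50 = 1", context))  # False
```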
2308.00436#52
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
The recent progress in large language models (LLMs), especially the invention of chain-of-thought prompting, has made it possible to automatically answer questions by stepwise reasoning. However, when faced with more complicated problems that require non-linear thinking, even the strongest LLMs make mistakes. To address this, we explore whether LLMs are able to recognize errors in their own step-by-step reasoning, without resorting to external resources. To this end, we propose SelfCheck, a general-purpose zero-shot verification schema for recognizing such errors. We then use the results of these checks to improve question-answering performance by conducting weighted voting on multiple solutions to the question. We test SelfCheck on three datasets (GSM8K, MathQA, and MATH) and find that it successfully recognizes errors and, in turn, increases final answer accuracies.
http://arxiv.org/pdf/2308.00436
Ning Miao, Yee Whye Teh, Tom Rainforth
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230801
20231005
[ { "id": "2206.02336" }, { "id": "2302.13971" }, { "id": "2110.14168" }, { "id": "2305.20050" }, { "id": "2204.02311" }, { "id": "2301.13379" }, { "id": "2306.03872" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.08774" }, { "id": "2212.09561" } ]
2308.00675
52
[61] Simeng Sun, Katherine Thai, and Mohit Iyyer. Chapterbreak: A challenge dataset for long-range language models. arXiv preprint arXiv:2204.10878, 2022. [62] Romal Thoppilan, Daniel De Freitas, Jamie Hall, Noam Shazeer, Apoorv Kulshreshtha, Heng-Tze Cheng, Alicia Jin, Taylor Bos, Leslie Baker, Yu Du, et al. Lamda: Language models for dialog applications. arXiv preprint arXiv:2201.08239, 2022. [63] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023.
2308.00675#52
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Today, large language models (LLMs) are taught to use new tools by providing a few demonstrations of the tool's usage. Unfortunately, demonstrations are hard to acquire, and can result in undesirable biased usage if the wrong demonstration is chosen. Even in the rare scenario that demonstrations are readily available, there is no principled selection protocol to determine how many and which ones to provide. As tasks grow more complex, the selection search grows combinatorially and invariably becomes intractable. Our work provides an alternative to demonstrations: tool documentation. We advocate the use of tool documentation, descriptions for the individual tool usage, over demonstrations. We substantiate our claim through three main empirical findings on 6 tasks across both vision and language modalities. First, on existing benchmarks, zero-shot prompts with only tool documentation are sufficient for eliciting proper tool usage, achieving performance on par with few-shot prompts. Second, on a newly collected realistic tool-use dataset with hundreds of available tool APIs, we show that tool documentation is significantly more valuable than demonstrations, with zero-shot documentation significantly outperforming few-shot without documentation. Third, we highlight the benefits of tool documentations by tackling image generation and video tracking using just-released unseen state-of-the-art models as tools. Finally, we highlight the possibility of using tool documentation to automatically enable new applications: by using nothing more than the documentation of GroundingDino, Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the just-released Grounded-SAM and Track Anything models.
http://arxiv.org/pdf/2308.00675
Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister
cs.CL, cs.AI, cs.CV, cs.LG
null
null
cs.CL
20230801
20230801
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2204.10878" }, { "id": "1910.08210" }, { "id": "2107.07653" }, { "id": "2201.11903" }, { "id": "2305.17126" }, { "id": "1704.07535" }, { "id": "2205.01068" }, { "id": "2203.05115" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2305.04091" }, { "id": "2303.05499" }, { "id": "2107.07566" }, { "id": "2110.14168" }, { "id": "2210.03350" }, { "id": "2303.11381" }, { "id": "2101.06804" }, { "id": "2304.08354" }, { "id": "2212.14024" }, { "id": "2305.18752" }, { "id": "2211.10435" }, { "id": "2303.04671" }, { "id": "2210.12810" }, { "id": "1808.09588" }, { "id": "2304.11062" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2210.02406" }, { "id": "2212.10560" }, { "id": "2303.04129" }, { "id": "1704.01696" }, { "id": "2302.00923" }, { "id": "2211.12588" }, { "id": "1908.03557" }, { "id": "2210.05359" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2201.08239" }, { "id": "2303.16199" }, { "id": "2304.09842" }, { "id": "2204.01691" }, { "id": "2305.01625" }, { "id": "2303.12712" }, { "id": "2207.05608" }, { "id": "2303.03846" }, { "id": "2211.11559" }, { "id": "2207.01206" }, { "id": "2303.09014" }, { "id": "2205.12255" }, { "id": "2208.03188" } ]
2308.01423
52
say `nothing` The index of the dataframe must be one of {df_index}. If it's not in the index you want, skip straight to Final Thought. {information} Begin! Question: What is the head of df? If you extracted successfully, derive 'success' as the final answer Thought: To get the head of a DataFrame, we can use the pandas function head(), which will return the first N rows. By default, it returns the first 5 rows. Input: ``` import pandas as pd import json print(df.head().to_markdown()) ``` Observation: {df_head} Final Thought: The head() function in pandas provides the first 5 rows of the DataFrame. Final Answer: success Question: {question} {agent_scratchpad} Supplementary Note S2. Prompt for planning in the predictor plan to use machine learning to predict the properties of matter. To answer the question, you have to fill in the following format: Question: the input question you must answer Thought: you should always think about what to do Property: the property you can predict, should be one of [{model_names}] Material: names of materials separated using comma. If you need to proceed for all material, write *. To proceed for a specific topology, append the topology name with an * (ex. pcu*)
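Read literally, this searcher prompt expects the model to emit a short pandas command as its "Input" and to print any pandas object with `to_markdown`. A minimal sketch of the kind of Input it invites is shown below; the dataframe contents, index entries, and column name are invented for illustration and are not ChatMOF's actual lookup table.

```python
import pandas as pd

# Toy stand-in for the property table the searcher queries; all names are illustrative.
df = pd.DataFrame(
    {"accessible_surface_area": [1432.5, 2210.0]},
    index=["MOF-A", "MOF-B"],
)

# An Input the agent might produce for "What is the accessible surface area of MOF-A?",
# printed with to_markdown as the prompt requires (to_markdown needs the tabulate package).
print(df.loc[["MOF-A"], ["accessible_surface_area"]].to_markdown())
```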
2308.01423#52
ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks
ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to predict and generate metal-organic frameworks (MOFs). By leveraging a large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key details from textual inputs and delivers appropriate responses, thus eliminating the necessity for rigid structured queries. The system is comprised of three core components (i.e. an agent, a toolkit, and an evaluator) and it forms a robust pipeline that manages a variety of tasks, including data retrieval, property prediction, and structure generations. The study further explores the merits and constraints of using large language model (LLM) AI systems in materials science and showcases their transformative potential for future advancements.
http://arxiv.org/pdf/2308.01423
Yeonghun Kang, Jihan Kim
cs.CL, cs.AI, cs.LG, physics.chem-ph
null
null
cs.CL
20230801
20230825
[ { "id": "2302.13971" }, { "id": "2306.11296" }, { "id": "2303.17580" }, { "id": "2305.18365" }, { "id": "2305.10601" }, { "id": "1810.04805" }, { "id": "2211.09085" }, { "id": "2304.05376" }, { "id": "2212.05238" }, { "id": "2108.07258" }, { "id": "2110.06197" }, { "id": "2306.06283" }, { "id": "2008.02275" }, { "id": "2303.12712" }, { "id": "2210.03629" }, { "id": "2205.00445" }, { "id": "2009.03300" } ]
2308.00245
53
Takeaway 3. All of LLift's design strategies contributed to the positive results.

6.5 RQ4: Alternative Models

Table 5 provides a comprehensive view of the performance of our solution, LLift, when implemented across an array of LLMs including GPT-4.0, GPT-3.5, Claude 2 [2], and Bard [12]. GPT-4 passes all tests, while GPT-3.5, Claude 2, and Bard exhibit recall rates of 89%, 67%, and 67%, respectively. Despite the unparalleled performance of GPT-4, the other LLMs still produce substantial and competitive results, thereby indicating the wide applicability of our approaches. It is imperative to note that not all design strategies in our toolbox are universally applicable across all language models. Bard and GPT-3.5, in particular, exhibit limited adaptability towards the progressive prompt and task decomposition strategies. Bard's interaction patterns suggest a preference for immediate response generation, leveraging its internal knowledge base rather than requesting additional function definitions, thereby hindering the effectiveness of the progressive prompt approach. Similarly, when task

static int sgl_map_user_pages(...) {
    if ((pages = kmalloc(..., GFP_KERNEL)) == NULL)
        return -ENOMEM;
2308.00245#53
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Static analysis is a widely used technique in software engineering for identifying and mitigating bugs. However, a significant hurdle lies in achieving a delicate balance between precision and scalability. Large Language Models (LLMs) offer a promising alternative, as recent advances demonstrate remarkable capabilities in comprehending, generating, and even debugging code. Yet, the logic of bugs can be complex and require sophisticated reasoning and a large analysis scope spanning multiple functions. Therefore, at this point, LLMs are better used in an assistive role to complement static analysis. In this paper, we take a deep dive into the open space of LLM-assisted static analysis, using use-before-initialization (UBI) bugs as a case study. To this end, we develop LLift, a fully automated framework that interfaces with both a static analysis tool and an LLM. By carefully designing the framework and the prompts, we are able to overcome a number of challenges, including bug-specific modeling, the large problem scope, the non-deterministic nature of LLMs, etc. Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs produced by static analysis, LLift demonstrates a potent capability, showcasing a reasonable precision (50%) and appearing to have no missing bugs. It even identified 13 previously unknown UBI bugs in the Linux kernel. This research paves the way for new opportunities and methodologies in using LLMs for bug discovery in extensive, real-world datasets.
http://arxiv.org/pdf/2308.00245
Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian
cs.SE, cs.AI
null
null
cs.SE
20230801
20231115
[ { "id": "2305.10601" }, { "id": "2107.03374" }, { "id": "2210.11610" }, { "id": "2305.16291" }, { "id": "2305.16151" }, { "id": "2303.18223" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "2203.02155" }, { "id": "2304.11938" }, { "id": "2304.03262" }, { "id": "2304.10513" }, { "id": "2201.11903" }, { "id": "2305.12138" }, { "id": "2305.12865" }, { "id": "2303.08774" }, { "id": "2306.01987" }, { "id": "2304.06815" } ]
2308.00352
53
Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B Tenenbaum, Tianmin Shu, and Chuang Gan. Building cooperative embodied agents modularly with large language models. arXiv preprint, 2023.

Xufeng Zhao, Mengdi Li, Cornelius Weber, Muhammad Burhan Hafez, and Stefan Wermter. Chat with the environment: Interactive multimodal perception using large language models. arXiv preprint, 2023.

Qinkai Zheng, Xiao Xia, Xu Zou, Yuxiao Dong, Shan Wang, Yufei Xue, Zihan Wang, Lei Shen, Andi Wang, Yang Li, Teng Su, Zhilin Yang, and Jie Tang. Codegeex: A pre-trained model for code generation with multilingual evaluations on humaneval-x, 2023.

Shuyan Zhou, Frank F Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Yonatan Bisk, Daniel Fried, Uri Alon, et al. Webarena: A realistic web environment for building autonomous agents. arXiv preprint, 2023.
2308.00352#53
MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework
Remarkable progress has been made on automated problem solving through societies of agents based on large language models (LLMs). Existing LLM-based multi-agent systems can already solve simple dialogue tasks. Solutions to more complex tasks, however, are complicated through logic inconsistencies due to cascading hallucinations caused by naively chaining LLMs. Here we introduce MetaGPT, an innovative meta-programming framework incorporating efficient human workflows into LLM-based multi-agent collaborations. MetaGPT encodes Standardized Operating Procedures (SOPs) into prompt sequences for more streamlined workflows, thus allowing agents with human-like domain expertise to verify intermediate results and reduce errors. MetaGPT utilizes an assembly line paradigm to assign diverse roles to various agents, efficiently breaking down complex tasks into subtasks involving many agents working together. On collaborative software engineering benchmarks, MetaGPT generates more coherent solutions than previous chat-based multi-agent systems. Our project can be found at https://github.com/geekan/MetaGPT
http://arxiv.org/pdf/2308.00352
Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, Jürgen Schmidhuber
cs.AI, cs.MA
null
null
cs.AI
20230801
20231106
[ { "id": "2308.12950" }, { "id": "2305.17066" }, { "id": "1511.09249" }, { "id": "2308.11432" }, { "id": "2306.08568" }, { "id": "2310.02304" }, { "id": "2303.08896" }, { "id": "2204.05999" }, { "id": "2309.16797" }, { "id": "2002.08155" }, { "id": "2305.16960" } ]
2308.00436
53
"x" is a variable, we do not care about variables. "y" is a variable, we do not care about variables. "5" is a quantity from the given information that represents the number of apples. "50%" is a quantity from the given information that represents the discount. "$1" is a quantity from the given information that represents the price per apple after the discount. "3" is a quantity not from the given information, it is ungrounded. "6" is a quantity not from the given information, it is ungrounded. "$2" is a quantity from the given information that represents the original price per apple. Result: Based on the check, there are ungrounded numbers, so the grounding check fails. Reasoning check: Purpose: Check whether the reasoning logic correctly answers the question Rule: Any logic error makes the check fails To calculate the total cost of apples after a 50% discount, we should multiply the number of apples by the discounted price. But to answer the question, we do not need to calculate the total number of fruit Alice gets. Result: Based on the check, there are some logic mistakes, so the reasoning check fails. Calculation check: Purpose: Check whether the calculation process is consistent Rule: Any inconsistent calculation makes the check fails calculation1:
2308.00436#53
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
The recent progress in large language models (LLMs), especially the invention of chain-of-thought prompting, has made it possible to automatically answer questions by stepwise reasoning. However, when faced with more complicated problems that require non-linear thinking, even the strongest LLMs make mistakes. To address this, we explore whether LLMs are able to recognize errors in their own step-by-step reasoning, without resorting to external resources. To this end, we propose SelfCheck, a general-purpose zero-shot verification schema for recognizing such errors. We then use the results of these checks to improve question-answering performance by conducting weighted voting on multiple solutions to the question. We test SelfCheck on three datasets (GSM8K, MathQA, and MATH) and find that it successfully recognizes errors and, in turn, increases final answer accuracies.
http://arxiv.org/pdf/2308.00436
Ning Miao, Yee Whye Teh, Tom Rainforth
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230801
20231005
[ { "id": "2206.02336" }, { "id": "2302.13971" }, { "id": "2110.14168" }, { "id": "2305.20050" }, { "id": "2204.02311" }, { "id": "2301.13379" }, { "id": "2306.03872" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.08774" }, { "id": "2212.09561" } ]
2308.00675
53
[64] Lei Wang, Wanyu Xu, Yihuai Lan, Zhiqiang Hu, Yunshi Lan, Roy Ka-Wei Lee, and Ee-Peng Lim. Plan-and-solve prompting: Improving zero-shot chain-of-thought reasoning by large language models. arXiv preprint arXiv:2305.04091, 2023.

[65] Xingyao Wang, Sha Li, and Heng Ji. Code4struct: Code generation for few-shot structured prediction from natural language. arXiv preprint arXiv:2210.12810, 2022.

[66] Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A Smith, Daniel Khashabi, and Hannaneh Hajishirzi. Self-instruct: Aligning language model with self generated instructions. arXiv preprint arXiv:2212.10560, 2022.

[67] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. Chain of thought prompting elicits reasoning in large language models. arXiv preprint arXiv:2201.11903, 2022.
2308.00675#53
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Today, large language models (LLMs) are taught to use new tools by providing a few demonstrations of the tool's usage. Unfortunately, demonstrations are hard to acquire, and can result in undesirable biased usage if the wrong demonstration is chosen. Even in the rare scenario that demonstrations are readily available, there is no principled selection protocol to determine how many and which ones to provide. As tasks grow more complex, the selection search grows combinatorially and invariably becomes intractable. Our work provides an alternative to demonstrations: tool documentation. We advocate the use of tool documentation, descriptions for the individual tool usage, over demonstrations. We substantiate our claim through three main empirical findings on 6 tasks across both vision and language modalities. First, on existing benchmarks, zero-shot prompts with only tool documentation are sufficient for eliciting proper tool usage, achieving performance on par with few-shot prompts. Second, on a newly collected realistic tool-use dataset with hundreds of available tool APIs, we show that tool documentation is significantly more valuable than demonstrations, with zero-shot documentation significantly outperforming few-shot without documentation. Third, we highlight the benefits of tool documentations by tackling image generation and video tracking using just-released unseen state-of-the-art models as tools. Finally, we highlight the possibility of using tool documentation to automatically enable new applications: by using nothing more than the documentation of GroundingDino, Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the just-released Grounded-SAM and Track Anything models.
http://arxiv.org/pdf/2308.00675
Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister
cs.CL, cs.AI, cs.CV, cs.LG
null
null
cs.CL
20230801
20230801
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2204.10878" }, { "id": "1910.08210" }, { "id": "2107.07653" }, { "id": "2201.11903" }, { "id": "2305.17126" }, { "id": "1704.07535" }, { "id": "2205.01068" }, { "id": "2203.05115" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2305.04091" }, { "id": "2303.05499" }, { "id": "2107.07566" }, { "id": "2110.14168" }, { "id": "2210.03350" }, { "id": "2303.11381" }, { "id": "2101.06804" }, { "id": "2304.08354" }, { "id": "2212.14024" }, { "id": "2305.18752" }, { "id": "2211.10435" }, { "id": "2303.04671" }, { "id": "2210.12810" }, { "id": "1808.09588" }, { "id": "2304.11062" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2210.02406" }, { "id": "2212.10560" }, { "id": "2303.04129" }, { "id": "1704.01696" }, { "id": "2302.00923" }, { "id": "2211.12588" }, { "id": "1908.03557" }, { "id": "2210.05359" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2201.08239" }, { "id": "2303.16199" }, { "id": "2304.09842" }, { "id": "2204.01691" }, { "id": "2305.01625" }, { "id": "2303.12712" }, { "id": "2207.05608" }, { "id": "2303.03846" }, { "id": "2211.11559" }, { "id": "2207.01206" }, { "id": "2303.09014" }, { "id": "2205.12255" }, { "id": "2208.03188" } ]
2308.01423
53
... (this Property/Material can repeat N times)
Run Machine Learning: nothing to do
Final Thought: you should think about how you will derive a final answer from the results of machine learning.

Begin!

Question: predict the surface area and hydrogen uptake of ACOPEN and ELOBPE.
Thought: I need to gather data on ACOPEN and train a machine learning model to predict its surface area and hydrogen uptake.
Property: surface_area
Material: ACOPEN, ELOBPE
Property: hydrogen_uptake_100bar_77K
Material: ACOPEN, ELOBPE
Final Thought: Based on the result, answer the question using predicted surface area and the predicted hydrogen uptake at 100 bar.

Question: which MOF has a highest band-gap?
Thought: I need to gather data on the band-gap of different structures and compare them to find the one with the highest value.
Property: bandgap
Material: *
Final Thought: Based on the result, find the structure with the highest predicted band-gap value.

Question: Predict surface area and save results in csv format, only pcu topology.
Thought: I need to gather data on the surface area of materials with pcu topology and train a machine learning model to predict their surface area. Then, I can use the model to make predictions and save the results in a csv format.
Property: surface_area
Material: pcu*
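The planner's output follows a simple line-oriented format in which Property:/Material: pairs may repeat. A minimal sketch of parsing such a plan into (property, materials) tasks is shown below; the regular expressions and the `parse_plan` name are illustrative assumptions rather than ChatMOF's actual parser.

```python
import re

def parse_plan(text: str) -> list[tuple[str, list[str]]]:
    """Extract (property, materials) pairs from a Property:/Material: plan."""
    tasks = []
    prop = None
    for line in text.splitlines():
        if m := re.match(r"Property:\s*(.+)", line):
            prop = m.group(1).strip()
        elif m := re.match(r"Material:\s*(.+)", line):
            materials = [s.strip() for s in m.group(1).split(",")]
            if prop is not None:
                tasks.append((prop, materials))
    return tasks

plan = """Thought: I need to gather data on ACOPEN and ELOBPE.
Property: surface_area
Material: ACOPEN, ELOBPE
Property: hydrogen_uptake_100bar_77K
Material: ACOPEN, ELOBPE"""
print(parse_plan(plan))
# [('surface_area', ['ACOPEN', 'ELOBPE']), ('hydrogen_uptake_100bar_77K', ['ACOPEN', 'ELOBPE'])]
```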
2308.01423#53
ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks
ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to predict and generate metal-organic frameworks (MOFs). By leveraging a large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key details from textual inputs and delivers appropriate responses, thus eliminating the necessity for rigid structured queries. The system is comprised of three core components (i.e. an agent, a toolkit, and an evaluator) and it forms a robust pipeline that manages a variety of tasks, including data retrieval, property prediction, and structure generations. The study further explores the merits and constraints of using large language models (LLMs) AI system in material sciences using and showcases its transformative potential for future advancements.
http://arxiv.org/pdf/2308.01423
Yeonghun Kang, Jihan Kim
cs.CL, cs.AI, cs.LG, physics.chem-ph
null
null
cs.CL
20230801
20230825
[ { "id": "2302.13971" }, { "id": "2306.11296" }, { "id": "2303.17580" }, { "id": "2305.18365" }, { "id": "2305.10601" }, { "id": "1810.04805" }, { "id": "2211.09085" }, { "id": "2304.05376" }, { "id": "2212.05238" }, { "id": "2108.07258" }, { "id": "2110.06197" }, { "id": "2306.06283" }, { "id": "2008.02275" }, { "id": "2303.12712" }, { "id": "2210.03629" }, { "id": "2205.00445" }, { "id": "2009.03300" } ]
2308.00245
54
static int sgl_map_user_pages(...) {
    if ((pages = kmalloc(..., GFP_KERNEL)) == NULL)
        return -ENOMEM;
    ...
    res = get_user_pages_unlocked(..., pages, ...);
    /* Errors and no page mapped should return here */
    if (res < nr_pages)
        goto out_unmap;
    ...
out_unmap:
    if (res > 0) {
        for (j = 0; j < res; j++)
            put_page(pages[j]);
        res = 0;
    }
    kfree(pages);
}

Figure 7: Case Study I (Loop and Index). Derived from drivers/scsi/st.c

decomposition is implemented, these models often misinterpret or inaccurately collect post-constraints, subsequently compromising the results. To harness their maximum potential, we only apply the PCA design specifically (i.e., without other design strategies) for GPT-3.5 and Bard. Contrasting the GPT series, Bard and Claude 2 demonstrate less familiarity with the Linux kernel and are more prone to failures due to their unawareness of the may_init possibility of initializers.

Takeaway 4. GPT-4 remains at the pinnacle of performance for LLift, yet other LLMs can achieve promising results.
2308.00245#54
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Static analysis is a widely used technique in software engineering for identifying and mitigating bugs. However, a significant hurdle lies in achieving a delicate balance between precision and scalability. Large Language Models (LLMs) offer a promising alternative, as recent advances demonstrate remarkable capabilities in comprehending, generating, and even debugging code. Yet, the logic of bugs can be complex and require sophisticated reasoning and a large analysis scope spanning multiple functions. Therefore, at this point, LLMs are better used in an assistive role to complement static analysis. In this paper, we take a deep dive into the open space of LLM-assisted static analysis, using use-before-initialization (UBI) bugs as a case study. To this end, we develop LLift, a fully automated framework that interfaces with both a static analysis tool and an LLM. By carefully designing the framework and the prompts, we are able to overcome a number of challenges, including bug-specific modeling, the large problem scope, the non-deterministic nature of LLMs, etc. Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs produced by static analysis, LLift demonstrates a potent capability, showcasing a reasonable precision (50%) and appearing to have no missing bugs. It even identified 13 previously unknown UBI bugs in the Linux kernel. This research paves the way for new opportunities and methodologies in using LLMs for bug discovery in extensive, real-world datasets.
http://arxiv.org/pdf/2308.00245
Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian
cs.SE, cs.AI
null
null
cs.SE
20230801
20231115
[ { "id": "2305.10601" }, { "id": "2107.03374" }, { "id": "2210.11610" }, { "id": "2305.16291" }, { "id": "2305.16151" }, { "id": "2303.18223" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "2203.02155" }, { "id": "2304.11938" }, { "id": "2304.03262" }, { "id": "2304.10513" }, { "id": "2201.11903" }, { "id": "2305.12138" }, { "id": "2305.12865" }, { "id": "2303.08774" }, { "id": "2306.01987" }, { "id": "2304.06815" } ]
2308.00352
54
Mingchen Zhuge, Haozhe Liu, Francesco Faccio, Dylan R Ashley, Róbert Csordás, Anand Gopalakrishnan, Abdullah Hamdi, Hasan Abed Al Kader Hammoud, Vincent Herrmann, Kazuki Irie, et al. Mindstorms in natural language-based societies of mind. arXiv preprint arXiv:2305.17066, 2023.

A OUTLOOK

A.1 SELF-IMPROVEMENT MECHANISMS

One limitation of the MetaGPT version in the main text of this paper is that each software project is executed independently. However, through active teamwork, a software development team should learn from the experience gained by developing each project, thus becoming more compatible and successful over time.
2308.00352#54
MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework
Remarkable progress has been made on automated problem solving through societies of agents based on large language models (LLMs). Existing LLM-based multi-agent systems can already solve simple dialogue tasks. Solutions to more complex tasks, however, are complicated through logic inconsistencies due to cascading hallucinations caused by naively chaining LLMs. Here we introduce MetaGPT, an innovative meta-programming framework incorporating efficient human workflows into LLM-based multi-agent collaborations. MetaGPT encodes Standardized Operating Procedures (SOPs) into prompt sequences for more streamlined workflows, thus allowing agents with human-like domain expertise to verify intermediate results and reduce errors. MetaGPT utilizes an assembly line paradigm to assign diverse roles to various agents, efficiently breaking down complex tasks into subtasks involving many agents working together. On collaborative software engineering benchmarks, MetaGPT generates more coherent solutions than previous chat-based multi-agent systems. Our project can be found at https://github.com/geekan/MetaGPT
http://arxiv.org/pdf/2308.00352
Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, Jürgen Schmidhuber
cs.AI, cs.MA
null
null
cs.AI
20230801
20231106
[ { "id": "2308.12950" }, { "id": "2305.17066" }, { "id": "1511.09249" }, { "id": "2308.11432" }, { "id": "2306.08568" }, { "id": "2310.02304" }, { "id": "2303.08896" }, { "id": "2204.05999" }, { "id": "2309.16797" }, { "id": "2002.08155" }, { "id": "2305.16960" } ]
2308.00436
54
equation: $1 * (5 - 3), answer: $2
(5 - 3) = 2
$1 * 2 = $2 is consistent with the answer, so the calculation is correct.
calculation2:
equation: 5 + x + 2x, answer: 5 + 4x
x + 2x = 3x
5 + 3x is inconsistent with the answer, so the calculation is incorrect.
Result: Based on the check, the calculation process is inconsistent, so the calculation check fails.
Check results: Ground check fails, Reasoning check fails, Calculation check fails.
Rule: Any failed check makes the reasoning incorrect. So the answer is "no".
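The rule above collapses the three per-step checks into a single pass/fail verdict, and the paper's approach then aggregates such verdicts into a weighted vote over candidate solutions. The sketch below illustrates one simple way that aggregation could look; the scoring scheme (fraction of passing steps) and the function names are assumptions, not SelfCheck's exact weighting.

```python
from collections import defaultdict

def step_passes(grounding_ok: bool, reasoning_ok: bool, calculation_ok: bool) -> bool:
    # Any failed check makes the whole step fail.
    return grounding_ok and reasoning_ok and calculation_ok

def weighted_vote(solutions: list[tuple[str, list[bool]]]) -> str:
    """Pick the final answer by summing per-solution confidence scores.

    Each solution is (final_answer, per-step pass/fail verdicts); the confidence
    used here is simply the fraction of steps that passed their checks.
    """
    scores: dict[str, float] = defaultdict(float)
    for answer, verdicts in solutions:
        confidence = sum(verdicts) / len(verdicts) if verdicts else 0.0
        scores[answer] += confidence
    return max(scores, key=scores.get)

solutions = [
    ("$2", [True, True, False]),   # one step failed its checks
    ("$4", [True, True, True]),
    ("$4", [True, False, True]),
]
print(weighted_vote(solutions))  # "$4"
```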
2308.00436#54
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
The recent progress in large language models (LLMs), especially the invention of chain-of-thought prompting, has made it possible to automatically answer questions by stepwise reasoning. However, when faced with more complicated problems that require non-linear thinking, even the strongest LLMs make mistakes. To address this, we explore whether LLMs are able to recognize errors in their own step-by-step reasoning, without resorting to external resources. To this end, we propose SelfCheck, a general-purpose zero-shot verification schema for recognizing such errors. We then use the results of these checks to improve question-answering performance by conducting weighted voting on multiple solutions to the question. We test SelfCheck on three datasets (GSM8K, MathQA, and MATH) and find that it successfully recognizes errors and, in turn, increases final answer accuracies.
http://arxiv.org/pdf/2308.00436
Ning Miao, Yee Whye Teh, Tom Rainforth
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230801
20231005
[ { "id": "2206.02336" }, { "id": "2302.13971" }, { "id": "2110.14168" }, { "id": "2305.20050" }, { "id": "2204.02311" }, { "id": "2301.13379" }, { "id": "2306.03872" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.08774" }, { "id": "2212.09561" } ]
2308.00675
54
[68] Jerry Wei, Jason Wei, Yi Tay, Dustin Tran, Albert Webson, Yifeng Lu, Xinyun Chen, Hanxiao Liu, Da Huang, Denny Zhou, et al. Larger language models do in-context learning differently. arXiv preprint arXiv:2303.03846, 2023.

[69] Chenfei Wu, Shengming Yin, Weizhen Qi, Xiaodong Wang, Zecheng Tang, and Nan Duan. Visual chatgpt: Talking, drawing and editing with visual foundation models. arXiv preprint arXiv:2303.04671, 2023.

[70] Jinyu Yang, Mingqi Gao, Zhe Li, Shang Gao, Fangjing Wang, and Feng Zheng. Track anything: Segment anything meets videos, 2023.

[71] Rui Yang, Lin Song, Yanwei Li, Sijie Zhao, Yixiao Ge, Xiu Li, and Ying Shan. Gpt4tools: Teaching large language model to use tools via self-instruction. arXiv preprint arXiv:2305.18752, 2023.
2308.00675#54
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Today, large language models (LLMs) are taught to use new tools by providing a few demonstrations of the tool's usage. Unfortunately, demonstrations are hard to acquire, and can result in undesirable biased usage if the wrong demonstration is chosen. Even in the rare scenario that demonstrations are readily available, there is no principled selection protocol to determine how many and which ones to provide. As tasks grow more complex, the selection search grows combinatorially and invariably becomes intractable. Our work provides an alternative to demonstrations: tool documentation. We advocate the use of tool documentation, descriptions for the individual tool usage, over demonstrations. We substantiate our claim through three main empirical findings on 6 tasks across both vision and language modalities. First, on existing benchmarks, zero-shot prompts with only tool documentation are sufficient for eliciting proper tool usage, achieving performance on par with few-shot prompts. Second, on a newly collected realistic tool-use dataset with hundreds of available tool APIs, we show that tool documentation is significantly more valuable than demonstrations, with zero-shot documentation significantly outperforming few-shot without documentation. Third, we highlight the benefits of tool documentations by tackling image generation and video tracking using just-released unseen state-of-the-art models as tools. Finally, we highlight the possibility of using tool documentation to automatically enable new applications: by using nothing more than the documentation of GroundingDino, Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the just-released Grounded-SAM and Track Anything models.
http://arxiv.org/pdf/2308.00675
Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister
cs.CL, cs.AI, cs.CV, cs.LG
null
null
cs.CL
20230801
20230801
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2204.10878" }, { "id": "1910.08210" }, { "id": "2107.07653" }, { "id": "2201.11903" }, { "id": "2305.17126" }, { "id": "1704.07535" }, { "id": "2205.01068" }, { "id": "2203.05115" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2305.04091" }, { "id": "2303.05499" }, { "id": "2107.07566" }, { "id": "2110.14168" }, { "id": "2210.03350" }, { "id": "2303.11381" }, { "id": "2101.06804" }, { "id": "2304.08354" }, { "id": "2212.14024" }, { "id": "2305.18752" }, { "id": "2211.10435" }, { "id": "2303.04671" }, { "id": "2210.12810" }, { "id": "1808.09588" }, { "id": "2304.11062" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2210.02406" }, { "id": "2212.10560" }, { "id": "2303.04129" }, { "id": "1704.01696" }, { "id": "2302.00923" }, { "id": "2211.12588" }, { "id": "1908.03557" }, { "id": "2210.05359" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2201.08239" }, { "id": "2303.16199" }, { "id": "2304.09842" }, { "id": "2204.01691" }, { "id": "2305.01625" }, { "id": "2303.12712" }, { "id": "2207.05608" }, { "id": "2303.03846" }, { "id": "2211.11559" }, { "id": "2207.01206" }, { "id": "2303.09014" }, { "id": "2205.12255" }, { "id": "2208.03188" } ]
2308.01423
54
Property: surface_area
Material: pcu*
Final Thought: The predicted surface area values for materials with pcu topology can be found in the saved csv file.

Question: {question}

Supplementary Note S3. Prompt for single data retrieval in predictor

You need to answer the question from the markdown table below.
Markdown Table: {table}
{information}
Question: {question}
Answer:

Supplementary Note S4. Prompt for planning in generator

Create a plan to generate material based on the following question. Use the following format:
Question: the input question you must answer
Thought: you should always think about what to do
Property: the property you can predict, must be one of [{model_names}]
Objective: you should decide what criteria you want to generate by.
Search look-up table: plan to extract 100 materials for the purpose from the look-up table where the property is pre-calculated.
Genetic algorithm: plan to create new materials using the 100 extracted materials.
Final thought: get a final answer based on the structures you generate.

Begin!

Question: generate a material with a porosity of 0.5 and surface area of 120 m2/g
Thought: I need to generate a material with a porosity value of 0.5 and surface area of 120 m2/g.
Property: void_fraction, accessible_surface_area
2308.01423#54
ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks
ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to predict and generate metal-organic frameworks (MOFs). By leveraging a large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key details from textual inputs and delivers appropriate responses, thus eliminating the necessity for rigid structured queries. The system is comprised of three core components (i.e. an agent, a toolkit, and an evaluator) and it forms a robust pipeline that manages a variety of tasks, including data retrieval, property prediction, and structure generations. The study further explores the merits and constraints of using large language models (LLMs) AI system in material sciences using and showcases its transformative potential for future advancements.
http://arxiv.org/pdf/2308.01423
Yeonghun Kang, Jihan Kim
cs.CL, cs.AI, cs.LG, physics.chem-ph
null
null
cs.CL
20230801
20230825
[ { "id": "2302.13971" }, { "id": "2306.11296" }, { "id": "2303.17580" }, { "id": "2305.18365" }, { "id": "2305.10601" }, { "id": "1810.04805" }, { "id": "2211.09085" }, { "id": "2304.05376" }, { "id": "2212.05238" }, { "id": "2108.07258" }, { "id": "2110.06197" }, { "id": "2306.06283" }, { "id": "2008.02275" }, { "id": "2303.12712" }, { "id": "2210.03629" }, { "id": "2205.00445" }, { "id": "2009.03300" } ]
2308.00245
55
Takeaway 4. GPT-4 remains at the pinnacle of performance for LLift, yet other LLMs can achieve promising results.

6.6 Case Study

In this case study, we pick three interesting cases demonstrating the effectiveness of LLift in analyzing function behaviors and detecting uninitialized variables. All these cases are undecided for the previous static analyzer, UBITect. We put the complete conversations on an anonymous online page for reference².

Loop and Index. Figure 7 presents an intriguing case involving the variable pages[j], which is reported by UBITect as used in Line 17 potentially without being initialized. Unfortunately, this case is a false positive which is hard to prune due to loops. Specifically, the initializer function get_user_pages_unlocked(), which is responsible for mapping user space pages into the kernel space, initializes the pages array allocated in Line 3. If get_user_pages_unlocked() is successfully executed, pages[0] through pages[res-1] pointers will be initialized to point to struct page instances. To summarize the behavior, i.e., must_init facts under conditions where the use is reachable, we must first extract the post-constraints that lead to the use of pages. Through interacting with ChatGPT, LLift successfully extracts it: {
2308.00245#55
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Static analysis is a widely used technique in software engineering for identifying and mitigating bugs. However, a significant hurdle lies in achieving a delicate balance between precision and scalability. Large Language Models (LLMs) offer a promising alternative, as recent advances demonstrate remarkable capabilities in comprehending, generating, and even debugging code. Yet, the logic of bugs can be complex and require sophisticated reasoning and a large analysis scope spanning multiple functions. Therefore, at this point, LLMs are better used in an assistive role to complement static analysis. In this paper, we take a deep dive into the open space of LLM-assisted static analysis, using use-before-initialization (UBI) bugs as a case study. To this end, we develop LLift, a fully automated framework that interfaces with both a static analysis tool and an LLM. By carefully designing the framework and the prompts, we are able to overcome a number of challenges, including bug-specific modeling, the large problem scope, the non-deterministic nature of LLMs, etc. Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs produced by static analysis, LLift demonstrates a potent capability, showcasing a reasonable precision (50%) and appearing to have no missing bugs. It even identified 13 previously unknown UBI bugs in the Linux kernel. This research paves the way for new opportunities and methodologies in using LLMs for bug discovery in extensive, real-world datasets.
http://arxiv.org/pdf/2308.00245
Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian
cs.SE, cs.AI
null
null
cs.SE
20230801
20231115
[ { "id": "2305.10601" }, { "id": "2107.03374" }, { "id": "2210.11610" }, { "id": "2305.16291" }, { "id": "2305.16151" }, { "id": "2303.18223" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "2203.02155" }, { "id": "2304.11938" }, { "id": "2304.03262" }, { "id": "2304.10513" }, { "id": "2201.11903" }, { "id": "2305.12138" }, { "id": "2305.12865" }, { "id": "2303.08774" }, { "id": "2306.01987" }, { "id": "2304.06815" } ]
2308.00352
55
This is somewhat related to the idea of recursive self-improvement, first informally proposed in 1965 (Good, 1965), with first concrete implementations since 1987 (Schmidhuber, 1987; 1993b; Schmidhuber et al., 1998), culminating in the concept of mathematically optimal self-referential self-improvers (Schmidhuber, 2003; 2009). Generally speaking, a system should learn from experi- ence in the real world, and meta-learn better learning algorithms from experiences of learning, and meta-meta-learn better meta-learning algorithms from experiences of meta-learning, etc., without any limitations except those of computability and physics. More recent, somewhat related work leverages the reasoning ability of Large Language Models (LLMs) and recursively improves prompts of LLMs, to improve performance on certain downstream tasks (Fernando et al., 2023; Zelikman et al., 2023), analogous to the adaptive prompt engineer of 2015 (Schmidhuber, 2015) where one neural network learns to generate sequence of queries or prompts for another pre-trained neural network whose answers may help the first network to learn new tasks more quickly.
2308.00352#55
MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework
Remarkable progress has been made on automated problem solving through societies of agents based on large language models (LLMs). Existing LLM-based multi-agent systems can already solve simple dialogue tasks. Solutions to more complex tasks, however, are complicated through logic inconsistencies due to cascading hallucinations caused by naively chaining LLMs. Here we introduce MetaGPT, an innovative meta-programming framework incorporating efficient human workflows into LLM-based multi-agent collaborations. MetaGPT encodes Standardized Operating Procedures (SOPs) into prompt sequences for more streamlined workflows, thus allowing agents with human-like domain expertise to verify intermediate results and reduce errors. MetaGPT utilizes an assembly line paradigm to assign diverse roles to various agents, efficiently breaking down complex tasks into subtasks involving many agents working together. On collaborative software engineering benchmarks, MetaGPT generates more coherent solutions than previous chat-based multi-agent systems. Our project can be found at https://github.com/geekan/MetaGPT
http://arxiv.org/pdf/2308.00352
Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, Jürgen Schmidhuber
cs.AI, cs.MA
null
null
cs.AI
20230801
20231106
[ { "id": "2308.12950" }, { "id": "2305.17066" }, { "id": "1511.09249" }, { "id": "2308.11432" }, { "id": "2306.08568" }, { "id": "2310.02304" }, { "id": "2303.08896" }, { "id": "2204.05999" }, { "id": "2309.16797" }, { "id": "2002.08155" }, { "id": "2305.16960" } ]
2308.00675
55
[72] Sherry Yang, Ofir Nachum, Yilun Du, Jason Wei, Pieter Abbeel, and Dale Schuurmans. Foundation models for decision making: Problems, methods, and opportunities. arXiv preprint arXiv:2303.04129, 2023.

[73] Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Ehsan Azarnasab, Faisal Ahmed, Zicheng Liu, Ce Liu, Michael Zeng, and Lijuan Wang. Mm-react: Prompting chatgpt for multimodal reasoning and action. arXiv preprint arXiv:2303.11381, 2023.

[74] Shunyu Yao, Howard Chen, John Yang, and Karthik Narasimhan. Webshop: Towards scalable real-world web interaction with grounded language agents. arXiv preprint arXiv:2207.01206, 2022.

[75] Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. React: Synergizing reasoning and acting in language models. arXiv preprint arXiv:2210.03629, 2022.
2308.00675#55
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Today, large language models (LLMs) are taught to use new tools by providing a few demonstrations of the tool's usage. Unfortunately, demonstrations are hard to acquire, and can result in undesirable biased usage if the wrong demonstration is chosen. Even in the rare scenario that demonstrations are readily available, there is no principled selection protocol to determine how many and which ones to provide. As tasks grow more complex, the selection search grows combinatorially and invariably becomes intractable. Our work provides an alternative to demonstrations: tool documentation. We advocate the use of tool documentation, descriptions for the individual tool usage, over demonstrations. We substantiate our claim through three main empirical findings on 6 tasks across both vision and language modalities. First, on existing benchmarks, zero-shot prompts with only tool documentation are sufficient for eliciting proper tool usage, achieving performance on par with few-shot prompts. Second, on a newly collected realistic tool-use dataset with hundreds of available tool APIs, we show that tool documentation is significantly more valuable than demonstrations, with zero-shot documentation significantly outperforming few-shot without documentation. Third, we highlight the benefits of tool documentations by tackling image generation and video tracking using just-released unseen state-of-the-art models as tools. Finally, we highlight the possibility of using tool documentation to automatically enable new applications: by using nothing more than the documentation of GroundingDino, Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the just-released Grounded-SAM and Track Anything models.
http://arxiv.org/pdf/2308.00675
Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister
cs.CL, cs.AI, cs.CV, cs.LG
null
null
cs.CL
20230801
20230801
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2204.10878" }, { "id": "1910.08210" }, { "id": "2107.07653" }, { "id": "2201.11903" }, { "id": "2305.17126" }, { "id": "1704.07535" }, { "id": "2205.01068" }, { "id": "2203.05115" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2305.04091" }, { "id": "2303.05499" }, { "id": "2107.07566" }, { "id": "2110.14168" }, { "id": "2210.03350" }, { "id": "2303.11381" }, { "id": "2101.06804" }, { "id": "2304.08354" }, { "id": "2212.14024" }, { "id": "2305.18752" }, { "id": "2211.10435" }, { "id": "2303.04671" }, { "id": "2210.12810" }, { "id": "1808.09588" }, { "id": "2304.11062" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2210.02406" }, { "id": "2212.10560" }, { "id": "2303.04129" }, { "id": "1704.01696" }, { "id": "2302.00923" }, { "id": "2211.12588" }, { "id": "1908.03557" }, { "id": "2210.05359" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2201.08239" }, { "id": "2303.16199" }, { "id": "2304.09842" }, { "id": "2204.01691" }, { "id": "2305.01625" }, { "id": "2303.12712" }, { "id": "2207.05608" }, { "id": "2303.03846" }, { "id": "2211.11559" }, { "id": "2207.01206" }, { "id": "2303.09014" }, { "id": "2205.12255" }, { "id": "2208.03188" } ]
2308.01423
55
Thought: I need to generate a material with a porosity value of 0.5 and surface area of 120 m2/g.
Property: void_fraction, accessible_surface_area
Objective: near 0.5, near 120
Search look-up table: extract cif_id, void_fraction, and accessible_surface_area of 100 materials with porosity close to 0.5 and surface area near 120 m2/g from look-up tables.
Genetic algorithm: create a new material with a porosity close to 0.5 and surface area near 120 m2/g from 100 materials
Final Thought: Based on the generated CIF, find the material that is closest to a porosity of 0.5 and a surface area of 120 m2/g.

Question: generate a material with a highest band-gap
Thought: I need to generate a material with a highest band-gap.
Property: bandgap
Objective: max
Search look-up table: extract cif_id and bandgap of 100 materials with high band-gap value from look-up tables.
Genetic algorithm: generate 100 new materials with the highest band gap from the 100 materials.
Final Thought: Based on the generated CIF, find the material that has highest band-gap.

Question: {question}

Supplementary Note S5. Prompt for genetic algorithm in generator
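The "Search look-up table" step reduces to ranking pre-calculated entries by closeness to the requested objective (or by extremity, for objectives like "max"). A minimal pandas sketch of that selection is given below; the column names follow the example above, but the `select_candidates` helper itself is an illustrative assumption rather than ChatMOF's code.

```python
import pandas as pd

def select_candidates(table: pd.DataFrame, targets: dict[str, float], k: int = 100) -> pd.DataFrame:
    """Return the k rows whose properties are closest to the requested targets."""
    # Normalize each property's distance by its target so units do not dominate.
    distance = sum(((table[col] - val) / val).abs() for col, val in targets.items())
    return table.loc[distance.sort_values().index[:k]]

lookup = pd.DataFrame({
    "cif_id": ["V12+T31", "V24+T32", "V7+T12"],
    "void_fraction": [0.48, 0.72, 0.51],
    "accessible_surface_area": [118.0, 240.0, 131.0],
})
parents = select_candidates(lookup, {"void_fraction": 0.5, "accessible_surface_area": 120.0}, k=2)
print(parents["cif_id"].tolist())  # the two entries closest to the objective
```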
2308.01423#55
ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks
ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to predict and generate metal-organic frameworks (MOFs). By leveraging a large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key details from textual inputs and delivers appropriate responses, thus eliminating the necessity for rigid structured queries. The system is comprised of three core components (i.e. an agent, a toolkit, and an evaluator) and it forms a robust pipeline that manages a variety of tasks, including data retrieval, property prediction, and structure generations. The study further explores the merits and constraints of using large language models (LLMs) AI system in material sciences using and showcases its transformative potential for future advancements.
http://arxiv.org/pdf/2308.01423
Yeonghun Kang, Jihan Kim
cs.CL, cs.AI, cs.LG, physics.chem-ph
null
null
cs.CL
20230801
20230825
[ { "id": "2302.13971" }, { "id": "2306.11296" }, { "id": "2303.17580" }, { "id": "2305.18365" }, { "id": "2305.10601" }, { "id": "1810.04805" }, { "id": "2211.09085" }, { "id": "2304.05376" }, { "id": "2212.05238" }, { "id": "2108.07258" }, { "id": "2110.06197" }, { "id": "2306.06283" }, { "id": "2008.02275" }, { "id": "2303.12712" }, { "id": "2210.03629" }, { "id": "2205.00445" }, { "id": "2009.03300" } ]
2308.00245
56
{ "initializer": "res = get_user_pages_unlocked(uaddr, nr_pages, pages, rw == READ ? FOLL_WRITE : 0)", } "suspicious": ["pages[j]"], "postconstraint": "res < nr_pages && res > 0 && j < res", "postconstraint": "res < nr_pages && res > @ && j < res", After feeding the post-constraints to LLM, LLift then success- fully obtains the result: { 2https://sites.google.com/view/llift-open/case-studies static int hv_pci_enter_do(struct hv_device *hdev){ init_completion(&comp_pkt .host_event) ; pkt->completion_func = hv_pci_generic.compl; pkt->compl_ctxt = &comp_pkt; wait_for_completion(&comp_pkt .host_event) ; # static void hv_pci_generic_compl(void *context, struct hv_pci_compl *comp_pkt context; = ... he if (resp_packet_size offsetofend(...)) comp_pkt->completion_status resp->status; else comp_pkt->completion_status >= =
2308.00245#56
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Static analysis is a widely used technique in software engineering for identifying and mitigating bugs. However, a significant hurdle lies in achieving a delicate balance between precision and scalability. Large Language Models (LLMs) offer a promising alternative, as recent advances demonstrate remarkable capabilities in comprehending, generating, and even debugging code. Yet, the logic of bugs can be complex and require sophisticated reasoning and a large analysis scope spanning multiple functions. Therefore, at this point, LLMs are better used in an assistive role to complement static analysis. In this paper, we take a deep dive into the open space of LLM-assisted static analysis, using use-before-initialization (UBI) bugs as a case study. To this end, we develop LLift, a fully automated framework that interfaces with both a static analysis tool and an LLM. By carefully designing the framework and the prompts, we are able to overcome a number of challenges, including bug-specific modeling, the large problem scope, the non-deterministic nature of LLMs, etc. Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs produced by static analysis, LLift demonstrates a potent capability, showcasing a reasonable precision (50%) and appearing to have no missing bugs. It even identified 13 previously unknown UBI bugs in the Linux kernel. This research paves the way for new opportunities and methodologies in using LLMs for bug discovery in extensive, real-world datasets.
http://arxiv.org/pdf/2308.00245
Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian
cs.SE, cs.AI
null
null
cs.SE
20230801
20231115
[ { "id": "2305.10601" }, { "id": "2107.03374" }, { "id": "2210.11610" }, { "id": "2305.16291" }, { "id": "2305.16151" }, { "id": "2303.18223" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "2203.02155" }, { "id": "2304.11938" }, { "id": "2304.03262" }, { "id": "2304.10513" }, { "id": "2201.11903" }, { "id": "2305.12138" }, { "id": "2305.12865" }, { "id": "2303.08774" }, { "id": "2306.01987" }, { "id": "2304.06815" } ]
2308.00352
56
In our present work, we also explore a self-referential mechanism that recursively modifies the constraint prompts of agents based on information they observe during software development. Our initial implementation works as follows. Prior to each project, every agent in the software company reviews previous feedback and makes necessary adjustments to their constraint prompts. This enables them to continuously learn from past project experiences and enhance the overall multi-agent system by improving each individual in the company. We first establish a handover feedback action for each agent. This action is responsible for critically summarizing the information received during the development of previous projects and integrating this information in an updated constraint prompt. The summarized information is stored in long-term memory such that it can be inherited by future constraint prompt updates. When initiating a new project, each agent starts with a react action. Each agent evaluates the received feedback and summarizes how they can improve in a constraint prompt. One current limitation is that these summary-based optimizations only modify constraints in the specialization of roles (Sec. 3.1) rather than structured communication interfaces in communication protocols (Sec. 3.2). Future advancements are yet to be explored (a minimal sketch of this loop is given below).

A.2 MULTI-AGENT ECONOMIES

In real-world teamwork, the interaction processes are often not hardcoded. For example, in a software company, the collaboration SOP may change dynamically.
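For the self-improvement mechanism of A.1 above, the handover-feedback/react loop can be sketched as follows: after a project, each agent summarizes the feedback it received, and before the next project it folds that summary into its constraint prompt. The `summarize_feedback` stub stands in for an LLM call, and all class and method names are illustrative assumptions, not MetaGPT's actual interfaces.

```python
from dataclasses import dataclass, field

def summarize_feedback(feedback: list[str]) -> str:
    # Placeholder for an LLM call that critically summarizes project feedback.
    return " ".join(f"- {item}" for item in feedback)

@dataclass
class Agent:
    role: str
    constraint_prompt: str
    long_term_memory: list[str] = field(default_factory=list)

    def handover_feedback(self, feedback: list[str]) -> None:
        """Summarize feedback from the finished project and store it."""
        self.long_term_memory.append(summarize_feedback(feedback))

    def react(self) -> None:
        """Before a new project, fold stored summaries into the constraint prompt."""
        if self.long_term_memory:
            self.constraint_prompt += "\nLessons learned: " + self.long_term_memory[-1]

engineer = Agent("Engineer", "Write modular, tested Python code.")
engineer.handover_feedback(["unit tests were missing", "API docs were incomplete"])
engineer.react()
print(engineer.constraint_prompt)
```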
2308.00352#56
MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework
Remarkable progress has been made on automated problem solving through societies of agents based on large language models (LLMs). Existing LLM-based multi-agent systems can already solve simple dialogue tasks. Solutions to more complex tasks, however, are complicated through logic inconsistencies due to cascading hallucinations caused by naively chaining LLMs. Here we introduce MetaGPT, an innovative meta-programming framework incorporating efficient human workflows into LLM-based multi-agent collaborations. MetaGPT encodes Standardized Operating Procedures (SOPs) into prompt sequences for more streamlined workflows, thus allowing agents with human-like domain expertise to verify intermediate results and reduce errors. MetaGPT utilizes an assembly line paradigm to assign diverse roles to various agents, efficiently breaking down complex tasks into subtasks involving many agents working together. On collaborative software engineering benchmarks, MetaGPT generates more coherent solutions than previous chat-based multi-agent systems. Our project can be found at https://github.com/geekan/MetaGPT
http://arxiv.org/pdf/2308.00352
Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, Jürgen Schmidhuber
cs.AI, cs.MA
null
null
cs.AI
20230801
20231106
[ { "id": "2308.12950" }, { "id": "2305.17066" }, { "id": "1511.09249" }, { "id": "2308.11432" }, { "id": "2306.08568" }, { "id": "2310.02304" }, { "id": "2303.08896" }, { "id": "2204.05999" }, { "id": "2309.16797" }, { "id": "2002.08155" }, { "id": "2305.16960" } ]
2308.00675
56
[76] Pengcheng Yin and Graham Neubig. A syntactic neural model for general-purpose code generation. arXiv preprint arXiv:1704.01696, 2017.

[77] Wenhao Yu, Dan Iter, Shuohang Wang, Yichong Xu, Mingxuan Ju, Soumya Sanyal, Chenguang Zhu, Michael Zeng, and Meng Jiang. Generate rather than retrieve: Large language models are strong context generators. In The Eleventh International Conference on Learning Representations, 2023.

[78] Renrui Zhang, Jiaming Han, Aojun Zhou, Xiangfei Hu, Shilin Yan, Pan Lu, Hongsheng Li, Peng Gao, and Yu Qiao. Llama-adapter: Efficient fine-tuning of language models with zero-init attention. arXiv preprint arXiv:2303.16199, 2023.

[79] Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. Opt: Open pre-trained transformer language models. arXiv preprint arXiv:2205.01068, 2022.
2308.00675#56
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Today, large language models (LLMs) are taught to use new tools by providing a few demonstrations of the tool's usage. Unfortunately, demonstrations are hard to acquire, and can result in undesirable biased usage if the wrong demonstration is chosen. Even in the rare scenario that demonstrations are readily available, there is no principled selection protocol to determine how many and which ones to provide. As tasks grow more complex, the selection search grows combinatorially and invariably becomes intractable. Our work provides an alternative to demonstrations: tool documentation. We advocate the use of tool documentation, descriptions for the individual tool usage, over demonstrations. We substantiate our claim through three main empirical findings on 6 tasks across both vision and language modalities. First, on existing benchmarks, zero-shot prompts with only tool documentation are sufficient for eliciting proper tool usage, achieving performance on par with few-shot prompts. Second, on a newly collected realistic tool-use dataset with hundreds of available tool APIs, we show that tool documentation is significantly more valuable than demonstrations, with zero-shot documentation significantly outperforming few-shot without documentation. Third, we highlight the benefits of tool documentations by tackling image generation and video tracking using just-released unseen state-of-the-art models as tools. Finally, we highlight the possibility of using tool documentation to automatically enable new applications: by using nothing more than the documentation of GroundingDino, Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the just-released Grounded-SAM and Track Anything models.
http://arxiv.org/pdf/2308.00675
Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister
cs.CL, cs.AI, cs.CV, cs.LG
null
null
cs.CL
20230801
20230801
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2204.10878" }, { "id": "1910.08210" }, { "id": "2107.07653" }, { "id": "2201.11903" }, { "id": "2305.17126" }, { "id": "1704.07535" }, { "id": "2205.01068" }, { "id": "2203.05115" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2305.04091" }, { "id": "2303.05499" }, { "id": "2107.07566" }, { "id": "2110.14168" }, { "id": "2210.03350" }, { "id": "2303.11381" }, { "id": "2101.06804" }, { "id": "2304.08354" }, { "id": "2212.14024" }, { "id": "2305.18752" }, { "id": "2211.10435" }, { "id": "2303.04671" }, { "id": "2210.12810" }, { "id": "1808.09588" }, { "id": "2304.11062" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2210.02406" }, { "id": "2212.10560" }, { "id": "2303.04129" }, { "id": "1704.01696" }, { "id": "2302.00923" }, { "id": "2211.12588" }, { "id": "1908.03557" }, { "id": "2210.05359" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2201.08239" }, { "id": "2303.16199" }, { "id": "2304.09842" }, { "id": "2204.01691" }, { "id": "2305.01625" }, { "id": "2303.12712" }, { "id": "2207.05608" }, { "id": "2303.03846" }, { "id": "2211.11559" }, { "id": "2207.01206" }, { "id": "2303.09014" }, { "id": "2205.12255" }, { "id": "2208.03188" } ]
2308.01423
56
Question: {question}

Supplementary Note S5. Prompt for genetic algorithm in generator

You should act as a generator to find the optimal material. A substance consists of a block1, block2, and must maintain the order. I will give you 100 parent materials. Based on these, you must answer as many new children as you expect to answer the question. The block1 and block2 used in the child must be the blocks used in the parent, and you must not create blocks that don't exist. You must generate diverse children. The children must not be duplicates of existing parents or already created children. You output children only and nothing else.

Begin.

Question: {question}
Parent: V12+T31, V24+T32, V7+T12
4 new Children: V12+T12, V24+T31, V7+T31, V7+T32
Parent: {parents}
100 new Children:

Supplementary Note S6. Prompt for planning in agent
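The parents follow a fixed block1+block2 pattern, and the prompt asks the model to recombine only blocks that already occur in the parents while avoiding duplicates. For reference, a conventional (non-LLM) crossover with the same constraints is sketched below; it is an illustrative assumption, not the prompt-driven generation ChatMOF actually performs.

```python
import random

def crossover(parents: list[str], n_children: int, seed: int = 0) -> list[str]:
    """Recombine block1/block2 halves of 'block1+block2' parents into new children."""
    rng = random.Random(seed)
    block1 = [p.split("+")[0] for p in parents]
    block2 = [p.split("+")[1] for p in parents]
    seen = set(parents)
    children: list[str] = []
    attempts = 0
    while len(children) < n_children and attempts < 10_000:
        attempts += 1
        child = f"{rng.choice(block1)}+{rng.choice(block2)}"
        if child not in seen:  # children must not duplicate parents or each other
            seen.add(child)
            children.append(child)
    return children

print(crossover(["V12+T31", "V24+T32", "V7+T12"], n_children=4))
```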
2308.01423#56
ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks
ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to predict and generate metal-organic frameworks (MOFs). By leveraging a large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key details from textual inputs and delivers appropriate responses, thus eliminating the necessity for rigid structured queries. The system is comprised of three core components (i.e. an agent, a toolkit, and an evaluator) and it forms a robust pipeline that manages a variety of tasks, including data retrieval, property prediction, and structure generations. The study further explores the merits and constraints of using large language models (LLMs) AI system in material sciences using and showcases its transformative potential for future advancements.
http://arxiv.org/pdf/2308.01423
Yeonghun Kang, Jihan Kim
cs.CL, cs.AI, cs.LG, physics.chem-ph
null
null
cs.CL
20230801
20230825
[ { "id": "2302.13971" }, { "id": "2306.11296" }, { "id": "2303.17580" }, { "id": "2305.18365" }, { "id": "2305.10601" }, { "id": "1810.04805" }, { "id": "2211.09085" }, { "id": "2304.05376" }, { "id": "2212.05238" }, { "id": "2108.07258" }, { "id": "2110.06197" }, { "id": "2306.06283" }, { "id": "2008.02275" }, { "id": "2303.12712" }, { "id": "2210.03629" }, { "id": "2205.00445" }, { "id": "2009.03300" } ]
2308.00245
57
    if (resp_packet_size >= offsetofend(...))
        comp_pkt->completion_status = resp->status;
    else
        comp_pkt->completion_status = -1;
    complete(&comp_pkt->host_event);
}

Figure 8: Case Study II (Concurrency and Indirect Call). Derived from drivers/pci/host/pci-hyperv.c

  "ret": "success",
  "response": {
    "must_init": ["pages[j]"],
    "may_init": []
  }
}

As we can see, GPT-4 exhibits impressive comprehension of this complex function. It perceives the variable pages[j] being used in a loop that iterates from 0 to res-1. This insight leads GPT-4 to correctly deduce that all elements in the pages array must be initialized, i.e., they are must_init. This example underscores GPT-4's proficiency in handling loop and even index sensitivity.
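Given a summary like the JSON above, the final classification reduces to a set comparison: the report can be pruned as a false positive when every suspicious variable appears in must_init. The sketch below illustrates that check; the `is_false_positive` helper and its surrounding plumbing are assumptions for illustration, not LLift's actual code.

```python
import json

def is_false_positive(llm_response: str, suspicious: list[str]) -> bool:
    """Prune the report if all suspicious variables are must-initialized."""
    result = json.loads(llm_response)
    if result.get("ret") != "success":
        return False  # keep the report when the analysis did not converge
    must_init = set(result["response"]["must_init"])
    return set(suspicious) <= must_init

response = '{"ret": "success", "response": {"must_init": ["pages[j]"], "may_init": []}}'
print(is_false_positive(response, ["pages[j]"]))  # True -> prune as false positive
```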
2308.00245#57
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Static analysis is a widely used technique in software engineering for identifying and mitigating bugs. However, a significant hurdle lies in achieving a delicate balance between precision and scalability. Large Language Models (LLMs) offer a promising alternative, as recent advances demonstrate remarkable capabilities in comprehending, generating, and even debugging code. Yet, the logic of bugs can be complex and require sophisticated reasoning and a large analysis scope spanning multiple functions. Therefore, at this point, LLMs are better used in an assistive role to complement static analysis. In this paper, we take a deep dive into the open space of LLM-assisted static analysis, using use-before-initialization (UBI) bugs as a case study. To this end, we develop LLift, a fully automated framework that interfaces with both a static analysis tool and an LLM. By carefully designing the framework and the prompts, we are able to overcome a number of challenges, including bug-specific modeling, the large problem scope, the non-deterministic nature of LLMs, etc. Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs produced by static analysis, LLift demonstrates a potent capability, showcasing a reasonable precision (50%) and appearing to have no missing bugs. It even identified 13 previously unknown UBI bugs in the Linux kernel. This research paves the way for new opportunities and methodologies in using LLMs for bug discovery in extensive, real-world datasets.
http://arxiv.org/pdf/2308.00245
Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian
cs.SE, cs.AI
null
null
cs.SE
20230801
20231115
[ { "id": "2305.10601" }, { "id": "2107.03374" }, { "id": "2210.11610" }, { "id": "2305.16291" }, { "id": "2305.16151" }, { "id": "2303.18223" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "2203.02155" }, { "id": "2304.11938" }, { "id": "2304.03262" }, { "id": "2304.10513" }, { "id": "2201.11903" }, { "id": "2305.12138" }, { "id": "2305.12865" }, { "id": "2303.08774" }, { "id": "2306.01987" }, { "id": "2304.06815" } ]
2308.00352
57
In real-world teamwork, the interaction processes are often not hardcoded. For example, in a software company, the collaboration SOP may change dynamically. One implementation of such self-organization is discussed in the paper on a “Natural Language-Based Society of Mind” (NLSOM) (Zhuge et al., 2023), which introduced the idea of an “Economy of Minds” (EOM), a Reinforcement Learning (RL) framework for societies of LLMs and other agents. Instead of using standard RL techniques to optimize the total reward of the system through modifications of neural network parameters, EOMs use the principles of supply and demand in free markets to assign credit (money) to those agents that contribute to economic success (reward). The recent agent-based platform of DeepWisdom (AgentStore4) is compatible with the credit assignment concept of EOMs. Each agent in AgentStore provides a list of services with corresponding costs. A convenient API is provided so that human users or agents in the platform can easily purchase services from other agents to accomplish their services. Figure 6 displays the User Interface (UI) of AgentStore, where various agents with different skills are showcased. Besides, individual developers can participate in building new agents and enable collaborative development within the community. Specifically, AgentStore allows users to subscribe to agents according to their demands # 4 http://beta.deepwisdom.ai
2308.00352#57
MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework
Remarkable progress has been made on automated problem solving through societies of agents based on large language models (LLMs). Existing LLM-based multi-agent systems can already solve simple dialogue tasks. Solutions to more complex tasks, however, are complicated through logic inconsistencies due to cascading hallucinations caused by naively chaining LLMs. Here we introduce MetaGPT, an innovative meta-programming framework incorporating efficient human workflows into LLM-based multi-agent collaborations. MetaGPT encodes Standardized Operating Procedures (SOPs) into prompt sequences for more streamlined workflows, thus allowing agents with human-like domain expertise to verify intermediate results and reduce errors. MetaGPT utilizes an assembly line paradigm to assign diverse roles to various agents, efficiently breaking down complex tasks into subtasks involving many agents working together. On collaborative software engineering benchmarks, MetaGPT generates more coherent solutions than previous chat-based multi-agent systems. Our project can be found at https://github.com/geekan/MetaGPT
http://arxiv.org/pdf/2308.00352
Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, Jürgen Schmidhuber
cs.AI, cs.MA
null
null
cs.AI
20230801
20231106
[ { "id": "2308.12950" }, { "id": "2305.17066" }, { "id": "1511.09249" }, { "id": "2308.11432" }, { "id": "2306.08568" }, { "id": "2310.02304" }, { "id": "2303.08896" }, { "id": "2204.05999" }, { "id": "2309.16797" }, { "id": "2002.08155" }, { "id": "2305.16960" } ]
2308.00675
57
[80] Zhuosheng Zhang, Aston Zhang, Mu Li, Hai Zhao, George Karypis, and Alex Smola. Multimodal chain-of-thought reasoning in language models. arXiv preprint arXiv:2302.00923, 2023. [81] Zihao Zhao, Eric Wallace, Shi Feng, Dan Klein, and Sameer Singh. Calibrate before use: Improving few-shot performance of language models. In International Conference on Machine Learning, pages 12697–12706. PMLR, 2021. [82] Victor Zhong, Tim Rocktäschel, and Edward Grefenstette. Rtfm: Generalising to novel environment dynamics via reading. arXiv preprint arXiv:1910.08210, 2019. [83] Shuyan Zhou, Uri Alon, Frank F. Xu, Zhengbao Jiang, and Graham Neubig. Docprompting: Generating code by retrieving the docs. In The Eleventh International Conference on Learning Representations, 2023. # A Broader impacts and limitations
2308.00675#57
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Today, large language models (LLMs) are taught to use new tools by providing a few demonstrations of the tool's usage. Unfortunately, demonstrations are hard to acquire, and can result in undesirable biased usage if the wrong demonstration is chosen. Even in the rare scenario that demonstrations are readily available, there is no principled selection protocol to determine how many and which ones to provide. As tasks grow more complex, the selection search grows combinatorially and invariably becomes intractable. Our work provides an alternative to demonstrations: tool documentation. We advocate the use of tool documentation, descriptions for the individual tool usage, over demonstrations. We substantiate our claim through three main empirical findings on 6 tasks across both vision and language modalities. First, on existing benchmarks, zero-shot prompts with only tool documentation are sufficient for eliciting proper tool usage, achieving performance on par with few-shot prompts. Second, on a newly collected realistic tool-use dataset with hundreds of available tool APIs, we show that tool documentation is significantly more valuable than demonstrations, with zero-shot documentation significantly outperforming few-shot without documentation. Third, we highlight the benefits of tool documentations by tackling image generation and video tracking using just-released unseen state-of-the-art models as tools. Finally, we highlight the possibility of using tool documentation to automatically enable new applications: by using nothing more than the documentation of GroundingDino, Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the just-released Grounded-SAM and Track Anything models.
http://arxiv.org/pdf/2308.00675
Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister
cs.CL, cs.AI, cs.CV, cs.LG
null
null
cs.CL
20230801
20230801
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2204.10878" }, { "id": "1910.08210" }, { "id": "2107.07653" }, { "id": "2201.11903" }, { "id": "2305.17126" }, { "id": "1704.07535" }, { "id": "2205.01068" }, { "id": "2203.05115" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2305.04091" }, { "id": "2303.05499" }, { "id": "2107.07566" }, { "id": "2110.14168" }, { "id": "2210.03350" }, { "id": "2303.11381" }, { "id": "2101.06804" }, { "id": "2304.08354" }, { "id": "2212.14024" }, { "id": "2305.18752" }, { "id": "2211.10435" }, { "id": "2303.04671" }, { "id": "2210.12810" }, { "id": "1808.09588" }, { "id": "2304.11062" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2210.02406" }, { "id": "2212.10560" }, { "id": "2303.04129" }, { "id": "1704.01696" }, { "id": "2302.00923" }, { "id": "2211.12588" }, { "id": "1908.03557" }, { "id": "2210.05359" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2201.08239" }, { "id": "2303.16199" }, { "id": "2304.09842" }, { "id": "2204.01691" }, { "id": "2305.01625" }, { "id": "2303.12712" }, { "id": "2207.05608" }, { "id": "2303.03846" }, { "id": "2211.11559" }, { "id": "2207.01206" }, { "id": "2303.09014" }, { "id": "2205.12255" }, { "id": "2208.03188" } ]
2308.01423
57
{parents} 100 new Children: # Supplementary Note S6. Prompt for planning in agent You act like a material scientist answering a question. Answer the following questions as best you can. You have access to the following tools: {tools} You must not use the tool `generator` when the `Question` does not require generation. Use the following format: Question: the input question you must answer Thought: you should always think about what to do Action: the action to take, should be one of [{tool_names}] Action Input: the input to the action Observation: the result of the action ... (this Thought/Action/Action Input/Observation can repeat N times) Thought: I now know the final answer Final Answer: the final answer to the original input question Begin! Question: What is the surface area and bandgap of ACOGEF? Thought: I need to find the surface area of ACOGEF. Action: search_csv Action Input: "Search name ACOGEF and provide information on its surface area" Observation: The surface area of material "ACOGEF" is 1138.35 m^2/g. Check to see if this answer can be your final answer, and if so, you should submit your final answer.
2308.01423#57
ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks
ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to predict and generate metal-organic frameworks (MOFs). By leveraging a large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key details from textual inputs and delivers appropriate responses, thus eliminating the necessity for rigid structured queries. The system is comprised of three core components (i.e. an agent, a toolkit, and an evaluator) and it forms a robust pipeline that manages a variety of tasks, including data retrieval, property prediction, and structure generations. The study further explores the merits and constraints of using large language models (LLMs) AI system in material sciences using and showcases its transformative potential for future advancements.
http://arxiv.org/pdf/2308.01423
Yeonghun Kang, Jihan Kim
cs.CL, cs.AI, cs.LG, physics.chem-ph
null
null
cs.CL
20230801
20230825
[ { "id": "2302.13971" }, { "id": "2306.11296" }, { "id": "2303.17580" }, { "id": "2305.18365" }, { "id": "2305.10601" }, { "id": "1810.04805" }, { "id": "2211.09085" }, { "id": "2304.05376" }, { "id": "2212.05238" }, { "id": "2108.07258" }, { "id": "2110.06197" }, { "id": "2306.06283" }, { "id": "2008.02275" }, { "id": "2303.12712" }, { "id": "2210.03629" }, { "id": "2205.00445" }, { "id": "2009.03300" } ]
2308.00245
58
Concurrency and Callback. Consider the case illustrated in Figure 8. At first glance, UBITect flags Line 10 for potentially using the variable comp_pkt.completion_status before initialization. The function’s body seemingly lacks any code that initializes it, leading UBITect to report it as a potential bug. However, the mystery unravels when we examine hv_pci_generic_compl(), the actual initializer function assigned to pkt in Line 4. The variable in question is indeed initialized, but intriguingly, its initializer emerges from a concurrent function instead of within its own thread. Here wait_for_completion() is a synchronization primitive that pauses the current thread and waits for the new thread (i.e., hv_pci_generic_compl()) to complete. Despite this complexity, GPT-4 adeptly navigates the concurrency and callback handling, pinpointing the accurate initializer and outputting a precise result. It is worth noting that we do not encode any knowledge about the Linux kernel synchronization primitives. LLift prompts LLMs with “The ‘initializer’ must be the ‘actual’ function that initializes the variable.” and then LLMs can automatically identify the function hv_pci_generic_compl() as the initializer of comp_pkt.completion_status.
2308.00245#58
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Static analysis is a widely used technique in software engineering for identifying and mitigating bugs. However, a significant hurdle lies in achieving a delicate balance between precision and scalability. Large Language Models (LLMs) offer a promising alternative, as recent advances demonstrate remarkable capabilities in comprehending, generating, and even debugging code. Yet, the logic of bugs can be complex and require sophisticated reasoning and a large analysis scope spanning multiple functions. Therefore, at this point, LLMs are better used in an assistive role to complement static analysis. In this paper, we take a deep dive into the open space of LLM-assisted static analysis, using use-before-initialization (UBI) bugs as a case study. To this end, we develop LLift, a fully automated framework that interfaces with both a static analysis tool and an LLM. By carefully designing the framework and the prompts, we are able to overcome a number of challenges, including bug-specific modeling, the large problem scope, the non-deterministic nature of LLMs, etc. Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs produced by static analysis, LLift demonstrates a potent capability, showcasing a reasonable precision (50%) and appearing to have no missing bugs. It even identified 13 previously unknown UBI bugs in the Linux kernel. This research paves the way for new opportunities and methodologies in using LLMs for bug discovery in extensive, real-world datasets.
http://arxiv.org/pdf/2308.00245
Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian
cs.SE, cs.AI
null
null
cs.SE
20230801
20231115
[ { "id": "2305.10601" }, { "id": "2107.03374" }, { "id": "2210.11610" }, { "id": "2305.16291" }, { "id": "2305.16151" }, { "id": "2303.18223" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "2203.02155" }, { "id": "2304.11938" }, { "id": "2304.03262" }, { "id": "2304.10513" }, { "id": "2201.11903" }, { "id": "2305.12138" }, { "id": "2305.12865" }, { "id": "2303.08774" }, { "id": "2306.01987" }, { "id": "2304.06815" } ]
2308.00352
58
# 4 http://beta.deepwisdom.ai and pay according to their usage. Moreover, users can purchase additional capabilities to expand the plug-and-play functions of their existing agents. This allows users to gradually upgrade their agents. Within the MetaGPT framework, AgentStore can support the collaboration of various agents. Users can collect several agents together to carry out more complex tasks or projects, and all the agents share and comply with development and communication protocols defined in MetaGPT. [Figure 6 screenshot: the AgentStore UI, listing example agents such as a tutorial assistant, an equity analyst, a 3D cartoon/comic artist, photo tools, and various character agents.] Figure 6: AgentStore is a platform dedicated to serving users in the creation and development of agents within the MetaGPT framework. This platform provides users with an operational interface, allowing users to easily manage a variety of agents with different emotions, personalities, and capabilities for specific tasks. B A DEMO OF THE EXECUTION
2308.00352#58
MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework
Remarkable progress has been made on automated problem solving through societies of agents based on large language models (LLMs). Existing LLM-based multi-agent systems can already solve simple dialogue tasks. Solutions to more complex tasks, however, are complicated through logic inconsistencies due to cascading hallucinations caused by naively chaining LLMs. Here we introduce MetaGPT, an innovative meta-programming framework incorporating efficient human workflows into LLM-based multi-agent collaborations. MetaGPT encodes Standardized Operating Procedures (SOPs) into prompt sequences for more streamlined workflows, thus allowing agents with human-like domain expertise to verify intermediate results and reduce errors. MetaGPT utilizes an assembly line paradigm to assign diverse roles to various agents, efficiently breaking down complex tasks into subtasks involving many agents working together. On collaborative software engineering benchmarks, MetaGPT generates more coherent solutions than previous chat-based multi-agent systems. Our project can be found at https://github.com/geekan/MetaGPT
http://arxiv.org/pdf/2308.00352
Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, Jürgen Schmidhuber
cs.AI, cs.MA
null
null
cs.AI
20230801
20231106
[ { "id": "2308.12950" }, { "id": "2305.17066" }, { "id": "1511.09249" }, { "id": "2308.11432" }, { "id": "2306.08568" }, { "id": "2310.02304" }, { "id": "2303.08896" }, { "id": "2204.05999" }, { "id": "2309.16797" }, { "id": "2002.08155" }, { "id": "2305.16960" } ]
2308.00675
58
# A Broader impacts and limitations This work studies the importance of tool documentations in equipping LLMs with the ability to compose usages of a variety of tools to accomplish complex tasks. However, as discussed in [51], it is imperative to contemplate what tools should be made available to LLMs as well as how one should interpret and rely on the results obtained from the models. We envision tool documentations as a channel to guide LLMs in more safely using the tools, aligning with the original intended use of the tools. # B Implementation details In this section, we provide further implementation details on each task. We conduct all our experiments on Debian GNU/Linux 10 machines with 40GB A100 GPUs. # B.1 ScienceQA On ScienceQA [39], we closely follow the original setup 1 used in Chameleon [40], including the tool docs and few-shot demos (when used). We however find that the “Image Captioner” module used in the original work often provides less accurate captions on given images. In the documentation, we thus add the description on this observation for the “Image Captioner” module as shown in Figure 8.
2308.00675#58
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Today, large language models (LLMs) are taught to use new tools by providing a few demonstrations of the tool's usage. Unfortunately, demonstrations are hard to acquire, and can result in undesirable biased usage if the wrong demonstration is chosen. Even in the rare scenario that demonstrations are readily available, there is no principled selection protocol to determine how many and which ones to provide. As tasks grow more complex, the selection search grows combinatorially and invariably becomes intractable. Our work provides an alternative to demonstrations: tool documentation. We advocate the use of tool documentation, descriptions for the individual tool usage, over demonstrations. We substantiate our claim through three main empirical findings on 6 tasks across both vision and language modalities. First, on existing benchmarks, zero-shot prompts with only tool documentation are sufficient for eliciting proper tool usage, achieving performance on par with few-shot prompts. Second, on a newly collected realistic tool-use dataset with hundreds of available tool APIs, we show that tool documentation is significantly more valuable than demonstrations, with zero-shot documentation significantly outperforming few-shot without documentation. Third, we highlight the benefits of tool documentations by tackling image generation and video tracking using just-released unseen state-of-the-art models as tools. Finally, we highlight the possibility of using tool documentation to automatically enable new applications: by using nothing more than the documentation of GroundingDino, Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the just-released Grounded-SAM and Track Anything models.
http://arxiv.org/pdf/2308.00675
Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister
cs.CL, cs.AI, cs.CV, cs.LG
null
null
cs.CL
20230801
20230801
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2204.10878" }, { "id": "1910.08210" }, { "id": "2107.07653" }, { "id": "2201.11903" }, { "id": "2305.17126" }, { "id": "1704.07535" }, { "id": "2205.01068" }, { "id": "2203.05115" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2305.04091" }, { "id": "2303.05499" }, { "id": "2107.07566" }, { "id": "2110.14168" }, { "id": "2210.03350" }, { "id": "2303.11381" }, { "id": "2101.06804" }, { "id": "2304.08354" }, { "id": "2212.14024" }, { "id": "2305.18752" }, { "id": "2211.10435" }, { "id": "2303.04671" }, { "id": "2210.12810" }, { "id": "1808.09588" }, { "id": "2304.11062" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2210.02406" }, { "id": "2212.10560" }, { "id": "2303.04129" }, { "id": "1704.01696" }, { "id": "2302.00923" }, { "id": "2211.12588" }, { "id": "1908.03557" }, { "id": "2210.05359" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2201.08239" }, { "id": "2303.16199" }, { "id": "2304.09842" }, { "id": "2204.01691" }, { "id": "2305.01625" }, { "id": "2303.12712" }, { "id": "2207.05608" }, { "id": "2303.03846" }, { "id": "2211.11559" }, { "id": "2207.01206" }, { "id": "2303.09014" }, { "id": "2205.12255" }, { "id": "2208.03188" } ]
2308.01423
58
Thought: The search_csv tool provided the surface area of ACOGEF, but not the bandgap. I need to find the bandgap. Action: search_csv Action Input: "Search name ACOGEF and provide information on its bandgap" Observation: The search_csv tool did not provide any information on the bandgap of ACOGEF. I need to find another way to obtain this information. Thought: The search_csv tool provided the surface area of ACOGEF, but not the bandgap. I need to find the bandgap. Action: predictor Action Input: "Predict the bandgap of ACOGEF" Observation: The bandgap of material "ACOGEF" is 3.41139 eV. Check to see if this answer can be your final answer, and if so, you should submit your final answer. Thought: I now know the final answer Final Answer: The bandgap and surface area of ACOGEF are 3.41149 eV and 1138.35 m^2/g. Question: What is the highest bandgap MOF? Thought: I need to find the MOF with the highest bandgap. Action: search_csv Action Input: "find for MOFs with highest bandgap and provide its name and bandgap" Observation: There are no data in database
2308.01423#58
ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks
ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to predict and generate metal-organic frameworks (MOFs). By leveraging a large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key details from textual inputs and delivers appropriate responses, thus eliminating the necessity for rigid structured queries. The system is comprised of three core components (i.e. an agent, a toolkit, and an evaluator) and it forms a robust pipeline that manages a variety of tasks, including data retrieval, property prediction, and structure generations. The study further explores the merits and constraints of using large language models (LLMs) AI system in material sciences using and showcases its transformative potential for future advancements.
http://arxiv.org/pdf/2308.01423
Yeonghun Kang, Jihan Kim
cs.CL, cs.AI, cs.LG, physics.chem-ph
null
null
cs.CL
20230801
20230825
[ { "id": "2302.13971" }, { "id": "2306.11296" }, { "id": "2303.17580" }, { "id": "2305.18365" }, { "id": "2305.10601" }, { "id": "1810.04805" }, { "id": "2211.09085" }, { "id": "2304.05376" }, { "id": "2212.05238" }, { "id": "2108.07258" }, { "id": "2110.06197" }, { "id": "2306.06283" }, { "id": "2008.02275" }, { "id": "2303.12712" }, { "id": "2210.03629" }, { "id": "2205.00445" }, { "id": "2009.03300" } ]
2308.00245
59
Unfamiliar Function. As previously delineated in §2.3, LLMs possess the inherent ability to recognize the semantics (e.g., postconditions) of common functions like sscanf(). However, some argue that “the LLM simply learns everything from the internet and acts merely as a search engine” [6]. This viewpoint is challenged by the case illustrated in Figure 9. int p9_check_zc_errors(...) { err = p9pdu_readf(req->rc, c->proto_version, "d", &ecode); ... } int p9pdu_readf(struct p9_fcall *pdu, int proto_version, const char *fmt, ...) { ... ret = p9pdu_vreadf(pdu, proto_version, fmt, ap); ... return ret; } int p9pdu_vreadf(struct p9_fcall *pdu, int proto_version, const char *fmt, va_list ap) { switch (*fmt) { case 'd': { int32_t *val = va_arg(ap, int32_t *); if (pdu_read(...)) { errcode = -EFAULT; break; } *val =
2308.00245#59
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Static analysis is a widely used technique in software engineering for identifying and mitigating bugs. However, a significant hurdle lies in achieving a delicate balance between precision and scalability. Large Language Models (LLMs) offer a promising alternative, as recent advances demonstrate remarkable capabilities in comprehending, generating, and even debugging code. Yet, the logic of bugs can be complex and require sophisticated reasoning and a large analysis scope spanning multiple functions. Therefore, at this point, LLMs are better used in an assistive role to complement static analysis. In this paper, we take a deep dive into the open space of LLM-assisted static analysis, using use-before-initialization (UBI) bugs as a case study. To this end, we develop LLift, a fully automated framework that interfaces with both a static analysis tool and an LLM. By carefully designing the framework and the prompts, we are able to overcome a number of challenges, including bug-specific modeling, the large problem scope, the non-deterministic nature of LLMs, etc. Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs produced by static analysis, LLift demonstrates a potent capability, showcasing a reasonable precision (50%) and appearing to have no missing bugs. It even identified 13 previously unknown UBI bugs in the Linux kernel. This research paves the way for new opportunities and methodologies in using LLMs for bug discovery in extensive, real-world datasets.
http://arxiv.org/pdf/2308.00245
Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian
cs.SE, cs.AI
null
null
cs.SE
20230801
20231115
[ { "id": "2305.10601" }, { "id": "2107.03374" }, { "id": "2210.11610" }, { "id": "2305.16291" }, { "id": "2305.16151" }, { "id": "2303.18223" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "2203.02155" }, { "id": "2304.11938" }, { "id": "2304.03262" }, { "id": "2304.10513" }, { "id": "2201.11903" }, { "id": "2305.12138" }, { "id": "2305.12865" }, { "id": "2303.08774" }, { "id": "2306.01987" }, { "id": "2304.06815" } ]
2308.00352
59
B A DEMO OF THE EXECUTION In this section, we outline the complete process of software development using MetaGPT. It begins with a user’s input command (as shown in Appendix B.1) and ends with software designed according to the user’s specifications. B.1 USER INPUT Upon receiving an instruction from the user, MetaGPT collaborates with a professional development team to fulfill the task. Here is a demo of user input: User input: write a python3 GUI app such that you can draw an image with it B.2 METAGPT DEVELOPMENT PROCESS Now we provide a step-by-step explanation of the standardized output process for each agent. Product Manager The Product Manager generates a Product Requirement Document (PRD), as detailed in the specified documentation. This document encompasses goals, user stories, competitive analysis, requirement analysis and requirement pool. Additionally, a competitive quadrant chart is produced (see Figure 7). Subsequently, these documents and charts are handed over to the architect for system design. # Product Requirement Document
2308.00352#59
MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework
Remarkable progress has been made on automated problem solving through societies of agents based on large language models (LLMs). Existing LLM-based multi-agent systems can already solve simple dialogue tasks. Solutions to more complex tasks, however, are complicated through logic inconsistencies due to cascading hallucinations caused by naively chaining LLMs. Here we introduce MetaGPT, an innovative meta-programming framework incorporating efficient human workflows into LLM-based multi-agent collaborations. MetaGPT encodes Standardized Operating Procedures (SOPs) into prompt sequences for more streamlined workflows, thus allowing agents with human-like domain expertise to verify intermediate results and reduce errors. MetaGPT utilizes an assembly line paradigm to assign diverse roles to various agents, efficiently breaking down complex tasks into subtasks involving many agents working together. On collaborative software engineering benchmarks, MetaGPT generates more coherent solutions than previous chat-based multi-agent systems. Our project can be found at https://github.com/geekan/MetaGPT
http://arxiv.org/pdf/2308.00352
Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, Jürgen Schmidhuber
cs.AI, cs.MA
null
null
cs.AI
20230801
20231106
[ { "id": "2308.12950" }, { "id": "2305.17066" }, { "id": "1511.09249" }, { "id": "2308.11432" }, { "id": "2306.08568" }, { "id": "2310.02304" }, { "id": "2303.08896" }, { "id": "2204.05999" }, { "id": "2309.16797" }, { "id": "2002.08155" }, { "id": "2305.16960" } ]
2308.00675
59
The modules are defined as follows: - Image_Captioner: This module generates a potentially inaccurate caption for the given image. Avoid using the module unless necessary. "Image_Captioner" can be considered when the question involves the semantic understanding of the image. - Text_Detector: This module detects the text in the given image. Normally, we consider using "Text_Detector" when the question involves the unfolding of the text in the image, e.g., diagram, chart, table, map, etc., and the "has_image" field in the metadata is True. - Knowledge_Retrieval: ... Figure 8: Documentations used in ScienceQA datasets. We used the original tool docs in Chameleon [40] and added the description for “Image Captioner” that the generated captions may be inaccurate. # B.2 TabMWP On TabMWP [41], we strictly follow the original setup used in Chameleon [40]. We refer the readers to [40] and their open-sourced implementations for further details. # B.3 NLVRv2 On NLVRv2, we follow the setup used in [19]. However, as tool docs are not used in [19], we create our own docs for the tools used. Figure 9 shows the tool docs we use for several available tools used in VisProg [19].
2308.00675#59
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Today, large language models (LLMs) are taught to use new tools by providing a few demonstrations of the tool's usage. Unfortunately, demonstrations are hard to acquire, and can result in undesirable biased usage if the wrong demonstration is chosen. Even in the rare scenario that demonstrations are readily available, there is no principled selection protocol to determine how many and which ones to provide. As tasks grow more complex, the selection search grows combinatorially and invariably becomes intractable. Our work provides an alternative to demonstrations: tool documentation. We advocate the use of tool documentation, descriptions for the individual tool usage, over demonstrations. We substantiate our claim through three main empirical findings on 6 tasks across both vision and language modalities. First, on existing benchmarks, zero-shot prompts with only tool documentation are sufficient for eliciting proper tool usage, achieving performance on par with few-shot prompts. Second, on a newly collected realistic tool-use dataset with hundreds of available tool APIs, we show that tool documentation is significantly more valuable than demonstrations, with zero-shot documentation significantly outperforming few-shot without documentation. Third, we highlight the benefits of tool documentations by tackling image generation and video tracking using just-released unseen state-of-the-art models as tools. Finally, we highlight the possibility of using tool documentation to automatically enable new applications: by using nothing more than the documentation of GroundingDino, Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the just-released Grounded-SAM and Track Anything models.
http://arxiv.org/pdf/2308.00675
Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister
cs.CL, cs.AI, cs.CV, cs.LG
null
null
cs.CL
20230801
20230801
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2204.10878" }, { "id": "1910.08210" }, { "id": "2107.07653" }, { "id": "2201.11903" }, { "id": "2305.17126" }, { "id": "1704.07535" }, { "id": "2205.01068" }, { "id": "2203.05115" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2305.04091" }, { "id": "2303.05499" }, { "id": "2107.07566" }, { "id": "2110.14168" }, { "id": "2210.03350" }, { "id": "2303.11381" }, { "id": "2101.06804" }, { "id": "2304.08354" }, { "id": "2212.14024" }, { "id": "2305.18752" }, { "id": "2211.10435" }, { "id": "2303.04671" }, { "id": "2210.12810" }, { "id": "1808.09588" }, { "id": "2304.11062" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2210.02406" }, { "id": "2212.10560" }, { "id": "2303.04129" }, { "id": "1704.01696" }, { "id": "2302.00923" }, { "id": "2211.12588" }, { "id": "1908.03557" }, { "id": "2210.05359" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2201.08239" }, { "id": "2303.16199" }, { "id": "2304.09842" }, { "id": "2204.01691" }, { "id": "2305.01625" }, { "id": "2303.12712" }, { "id": "2207.05608" }, { "id": "2303.03846" }, { "id": "2211.11559" }, { "id": "2207.01206" }, { "id": "2303.09014" }, { "id": "2205.12255" }, { "id": "2208.03188" } ]
2308.01423
59
Action: search_csv Action Input: "find for MOFs with highest bandgap and provide its name and bandgap" Observation: There are no data in database Thought: The search_csv tool did not provide any information on the bandgaps of MOFs. I need to find another way to obtain this information. Action: predictor Action Input: "predict the bandgaps of MOFs and find the name of MOF with highest bandgaps" Observation: The highest bandgap MOF is ACOGEF. Thought: I now know the final answer Final Answer: The highest bandgap MOF is ACOGEF. Question: {input} Thought:{agent_scratchpad} [Figure S1 plot: histogram of the number of MOFs versus accessible surface area (cm3/cm3).] Figure S1. Histogram for accessible surface area of MOF in the CoREMOF database [Figure S2 plot: percentage (%) of True, False (exceed token limit), and False (logic error) outcomes for the search, predict, and generate tasks.]
2308.01423#59
ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks
ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to predict and generate metal-organic frameworks (MOFs). By leveraging a large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key details from textual inputs and delivers appropriate responses, thus eliminating the necessity for rigid structured queries. The system is comprised of three core components (i.e. an agent, a toolkit, and an evaluator) and it forms a robust pipeline that manages a variety of tasks, including data retrieval, property prediction, and structure generations. The study further explores the merits and constraints of using large language models (LLMs) AI system in material sciences using and showcases its transformative potential for future advancements.
http://arxiv.org/pdf/2308.01423
Yeonghun Kang, Jihan Kim
cs.CL, cs.AI, cs.LG, physics.chem-ph
null
null
cs.CL
20230801
20230825
[ { "id": "2302.13971" }, { "id": "2306.11296" }, { "id": "2303.17580" }, { "id": "2305.18365" }, { "id": "2305.10601" }, { "id": "1810.04805" }, { "id": "2211.09085" }, { "id": "2304.05376" }, { "id": "2212.05238" }, { "id": "2108.07258" }, { "id": "2110.06197" }, { "id": "2306.06283" }, { "id": "2008.02275" }, { "id": "2303.12712" }, { "id": "2210.03629" }, { "id": "2205.00445" }, { "id": "2009.03300" } ]
2308.00245
60
errcode = -EFAULT; break; } *val = ...; // initialization ... return errcode; } Figure 9: Case Study III (Unfamiliar Function), derived from net/9p The case presents an intriguing real-world bug. The function p9pdu_readf() mirrors sscanf() in structure, yet lacks a check of its return value, leaving the parameter ecode at risk of being uninitialized, i.e., if pdu_read() returns non-zero in line 19 (thus “break” early). Notably, unlike sscanf(), where GPT-4 can provide a precise summary of the function without asking for its definition, it does request the function definition of p9pdu_readf(), as it is not as ubiquitous as sscanf(). Furthermore, our solution not only produces the correct outcome for this particular case but also pinpoints that ecode could be initialized when p9pdu_readf() returns 0, demonstrating the efficacy of LLift for unfamiliar cases. The result is as follows: {
2308.00245#60
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Static analysis is a widely used technique in software engineering for identifying and mitigating bugs. However, a significant hurdle lies in achieving a delicate balance between precision and scalability. Large Language Models (LLMs) offer a promising alternative, as recent advances demonstrate remarkable capabilities in comprehending, generating, and even debugging code. Yet, the logic of bugs can be complex and require sophisticated reasoning and a large analysis scope spanning multiple functions. Therefore, at this point, LLMs are better used in an assistive role to complement static analysis. In this paper, we take a deep dive into the open space of LLM-assisted static analysis, using use-before-initialization (UBI) bugs as a case study. To this end, we develop LLift, a fully automated framework that interfaces with both a static analysis tool and an LLM. By carefully designing the framework and the prompts, we are able to overcome a number of challenges, including bug-specific modeling, the large problem scope, the non-deterministic nature of LLMs, etc. Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs produced by static analysis, LLift demonstrates a potent capability, showcasing a reasonable precision (50%) and appearing to have no missing bugs. It even identified 13 previously unknown UBI bugs in the Linux kernel. This research paves the way for new opportunities and methodologies in using LLMs for bug discovery in extensive, real-world datasets.
http://arxiv.org/pdf/2308.00245
Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian
cs.SE, cs.AI
null
null
cs.SE
20230801
20231115
[ { "id": "2305.10601" }, { "id": "2107.03374" }, { "id": "2210.11610" }, { "id": "2305.16291" }, { "id": "2305.16151" }, { "id": "2303.18223" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "2203.02155" }, { "id": "2304.11938" }, { "id": "2304.03262" }, { "id": "2304.10513" }, { "id": "2201.11903" }, { "id": "2305.12138" }, { "id": "2305.12865" }, { "id": "2303.08774" }, { "id": "2306.01987" }, { "id": "2304.06815" } ]
2308.00352
60
# Product Requirement Document ## Original Requirements The boss requires a Python3 GUI color meter that outputs the RGB values when it moves. ## Product Goals ```python [ "Create a user-friendly GUI color meter", "Ensure the color meter accurately outputs RGB values", "Ensure the color meter updates RGB values in real-time as it moves" ] ``` ## User Stories ```python [ "As a user, I want to select any color on the screen, so that I can get its RGB values", "As a user, I want the RGB values to update in real-time as I move the color meter, so that I can get accurate color information", "As a user, I want a simple and intuitive interface, so that I can easily use the color meter" ] ``` # Product Requirement Document
2308.00352#60
MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework
Remarkable progress has been made on automated problem solving through societies of agents based on large language models (LLMs). Existing LLM-based multi-agent systems can already solve simple dialogue tasks. Solutions to more complex tasks, however, are complicated through logic inconsistencies due to cascading hallucinations caused by naively chaining LLMs. Here we introduce MetaGPT, an innovative meta-programming framework incorporating efficient human workflows into LLM-based multi-agent collaborations. MetaGPT encodes Standardized Operating Procedures (SOPs) into prompt sequences for more streamlined workflows, thus allowing agents with human-like domain expertise to verify intermediate results and reduce errors. MetaGPT utilizes an assembly line paradigm to assign diverse roles to various agents, efficiently breaking down complex tasks into subtasks involving many agents working together. On collaborative software engineering benchmarks, MetaGPT generates more coherent solutions than previous chat-based multi-agent systems. Our project can be found at https://github.com/geekan/MetaGPT
http://arxiv.org/pdf/2308.00352
Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, Jürgen Schmidhuber
cs.AI, cs.MA
null
null
cs.AI
20230801
20231106
[ { "id": "2308.12950" }, { "id": "2305.17066" }, { "id": "1511.09249" }, { "id": "2308.11432" }, { "id": "2306.08568" }, { "id": "2310.02304" }, { "id": "2303.08896" }, { "id": "2204.05999" }, { "id": "2309.16797" }, { "id": "2002.08155" }, { "id": "2305.16960" } ]
2308.01423
60
[Figure S2 plot: percentage (%) of True, False (exceed token limit), and False (logic error) outcomes for the search, predict, and generate tasks.] Figure S2. Depiction of accuracies for three tasks using the gpt-3.5-turbo model - search, prediction, and generation. Accuracies were evaluated based on three labels: True, False (exceeding token limit), and False (logical error). The number in the bar represents the count of each label.
2308.01423#60
ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks
ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to predict and generate metal-organic frameworks (MOFs). By leveraging a large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key details from textual inputs and delivers appropriate responses, thus eliminating the necessity for rigid structured queries. The system is comprised of three core components (i.e. an agent, a toolkit, and an evaluator) and it forms a robust pipeline that manages a variety of tasks, including data retrieval, property prediction, and structure generations. The study further explores the merits and constraints of using large language models (LLMs) AI system in material sciences using and showcases its transformative potential for future advancements.
http://arxiv.org/pdf/2308.01423
Yeonghun Kang, Jihan Kim
cs.CL, cs.AI, cs.LG, physics.chem-ph
null
null
cs.CL
20230801
20230825
[ { "id": "2302.13971" }, { "id": "2306.11296" }, { "id": "2303.17580" }, { "id": "2305.18365" }, { "id": "2305.10601" }, { "id": "1810.04805" }, { "id": "2211.09085" }, { "id": "2304.05376" }, { "id": "2212.05238" }, { "id": "2108.07258" }, { "id": "2110.06197" }, { "id": "2306.06283" }, { "id": "2008.02275" }, { "id": "2303.12712" }, { "id": "2210.03629" }, { "id": "2205.00445" }, { "id": "2009.03300" } ]
2308.00245
61
{ "initializer": "err = p9pdu_readf(req->rc, c->proto_version, 'd', &ecode)", "suspicious": ["ecode"], "postconstraint": null, "response": { "must_init": [], "may_init": [{ "name": "ecode", "condition": "p9pdu_readf returns 0" }] } } 6.7 Reason for Imprecision Despite LLift achieving a precision of 50% in real-world applica- tions, the precision can still be improved in the future. Some can be solved with better prompts or better integration with static analysis. Challenges in Constraint Extraction. Beyond the four primary code patterns we addressed in §4.3, there exist additional forms of post-constraints. For instance, during error handling, the checks for failures may involve another function or macro. This problem can be addressed by either more examples during prompts (in-context learning), or lightweight program analysis (e.g., path exploration in symbolic execution to collect the post-constraints). Information Gaps in UBITect. For instance, UBITect does not provide explicit field names within a structure when a specific 10 The Hitchhiker’s Guide to Program Analysis: A Journey with Large Language Models
2308.00245#61
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Static analysis is a widely used technique in software engineering for identifying and mitigating bugs. However, a significant hurdle lies in achieving a delicate balance between precision and scalability. Large Language Models (LLMs) offer a promising alternative, as recent advances demonstrate remarkable capabilities in comprehending, generating, and even debugging code. Yet, the logic of bugs can be complex and require sophisticated reasoning and a large analysis scope spanning multiple functions. Therefore, at this point, LLMs are better used in an assistive role to complement static analysis. In this paper, we take a deep dive into the open space of LLM-assisted static analysis, using use-before-initialization (UBI) bugs as a case study. To this end, we develop LLift, a fully automated framework that interfaces with both a static analysis tool and an LLM. By carefully designing the framework and the prompts, we are able to overcome a number of challenges, including bug-specific modeling, the large problem scope, the non-deterministic nature of LLMs, etc. Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs produced by static analysis, LLift demonstrates a potent capability, showcasing a reasonable precision (50%) and appearing to have no missing bugs. It even identified 13 previously unknown UBI bugs in the Linux kernel. This research paves the way for new opportunities and methodologies in using LLMs for bug discovery in extensive, real-world datasets.
http://arxiv.org/pdf/2308.00245
Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian
cs.SE, cs.AI
null
null
cs.SE
20230801
20231115
[ { "id": "2305.10601" }, { "id": "2107.03374" }, { "id": "2210.11610" }, { "id": "2305.16291" }, { "id": "2305.16151" }, { "id": "2303.18223" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "2203.02155" }, { "id": "2304.11938" }, { "id": "2304.03262" }, { "id": "2304.10513" }, { "id": "2201.11903" }, { "id": "2305.12138" }, { "id": "2305.12865" }, { "id": "2303.08774" }, { "id": "2306.01987" }, { "id": "2304.06815" } ]
2308.00352
61
Product Requirement Document ## Competitive Analysis ```python [ "Color Cop: A popular color picker for Windows. It’s simple and easy to use, but lacks real-time RGB value updates", "Just Color Picker: Offers real-time color picking and multiple color code formats. However, its interface is cluttered and not very user-friendly", "Instant Eyedropper: A simple color picker that lives in the system tray. It’s convenient but lacks advanced features", "ColorZilla: A color picker extension for browsers. It’s convenient for web design but not suitable for desktop applications", "ColorPic: Offers color palettes and mixer tools. It’s feature-rich but can be overwhelming for simple tasks" ] ``` ## Requirement Analysis The product should be a Python3 GUI application that serves as a color meter. It should allow users to select any color on the screen and display the corresponding RGB values. The RGB values should update in real-time as the color meter moves. The interface should be simple and user-friendly. ## Requirement Pool ```python [ ("Design a user-friendly GUI for the color meter", "P0"), ("Implement color selection functionality", "P0"),
2308.00352#61
MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework
Remarkable progress has been made on automated problem solving through societies of agents based on large language models (LLMs). Existing LLM-based multi-agent systems can already solve simple dialogue tasks. Solutions to more complex tasks, however, are complicated through logic inconsistencies due to cascading hallucinations caused by naively chaining LLMs. Here we introduce MetaGPT, an innovative meta-programming framework incorporating efficient human workflows into LLM-based multi-agent collaborations. MetaGPT encodes Standardized Operating Procedures (SOPs) into prompt sequences for more streamlined workflows, thus allowing agents with human-like domain expertise to verify intermediate results and reduce errors. MetaGPT utilizes an assembly line paradigm to assign diverse roles to various agents, efficiently breaking down complex tasks into subtasks involving many agents working together. On collaborative software engineering benchmarks, MetaGPT generates more coherent solutions than previous chat-based multi-agent systems. Our project can be found at https://github.com/geekan/MetaGPT
http://arxiv.org/pdf/2308.00352
Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, Jürgen Schmidhuber
cs.AI, cs.MA
null
null
cs.AI
20230801
20231106
[ { "id": "2308.12950" }, { "id": "2305.17066" }, { "id": "1511.09249" }, { "id": "2308.11432" }, { "id": "2306.08568" }, { "id": "2310.02304" }, { "id": "2303.08896" }, { "id": "2204.05999" }, { "id": "2309.16797" }, { "id": "2002.08155" }, { "id": "2305.16960" } ]
2308.00675
61
Function: VQA Description The VQA function calls the BLIP Model for visual question answering. The model consists of a vision encoder, a text encoder, and a text decoder. The vision encoder will encode the input image, the text encoder will encode the input question together with the encoding of the image, and the text decoder will output the answer to the question. Syntax VQA(image: IMAGE, question: TEXT) -> TEXT or INTEGER or FLOAT or BOOL Parameters image: An IMAGE type input representing the image to be analyzed. question: A TEXT type input representing the question to be answered about the image. Returns The function returns a TEXT, INTEGER, FLOAT or BOOL value, representing the answer to the input question about the image. The return variables would be INTEGER, FLOAT, or BOOL type when possible, otherwise it would be TEXT. Use case: Use VQA when you want to answer a question related to an image Example: ANSWER1 = VQA(image=IMAGE1, question='What color is the car?') ANSWER2 = VQA(image=IMAGE2, question='Does the image show a dog?') ANSWER3 = VQA(image=IMAGE3, question='How many
2308.00675#61
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Today, large language models (LLMs) are taught to use new tools by providing a few demonstrations of the tool's usage. Unfortunately, demonstrations are hard to acquire, and can result in undesirable biased usage if the wrong demonstration is chosen. Even in the rare scenario that demonstrations are readily available, there is no principled selection protocol to determine how many and which ones to provide. As tasks grow more complex, the selection search grows combinatorially and invariably becomes intractable. Our work provides an alternative to demonstrations: tool documentation. We advocate the use of tool documentation, descriptions for the individual tool usage, over demonstrations. We substantiate our claim through three main empirical findings on 6 tasks across both vision and language modalities. First, on existing benchmarks, zero-shot prompts with only tool documentation are sufficient for eliciting proper tool usage, achieving performance on par with few-shot prompts. Second, on a newly collected realistic tool-use dataset with hundreds of available tool APIs, we show that tool documentation is significantly more valuable than demonstrations, with zero-shot documentation significantly outperforming few-shot without documentation. Third, we highlight the benefits of tool documentations by tackling image generation and video tracking using just-released unseen state-of-the-art models as tools. Finally, we highlight the possibility of using tool documentation to automatically enable new applications: by using nothing more than the documentation of GroundingDino, Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the just-released Grounded-SAM and Track Anything models.
http://arxiv.org/pdf/2308.00675
Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister
cs.CL, cs.AI, cs.CV, cs.LG
null
null
cs.CL
20230801
20230801
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2204.10878" }, { "id": "1910.08210" }, { "id": "2107.07653" }, { "id": "2201.11903" }, { "id": "2305.17126" }, { "id": "1704.07535" }, { "id": "2205.01068" }, { "id": "2203.05115" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2305.04091" }, { "id": "2303.05499" }, { "id": "2107.07566" }, { "id": "2110.14168" }, { "id": "2210.03350" }, { "id": "2303.11381" }, { "id": "2101.06804" }, { "id": "2304.08354" }, { "id": "2212.14024" }, { "id": "2305.18752" }, { "id": "2211.10435" }, { "id": "2303.04671" }, { "id": "2210.12810" }, { "id": "1808.09588" }, { "id": "2304.11062" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2210.02406" }, { "id": "2212.10560" }, { "id": "2303.04129" }, { "id": "1704.01696" }, { "id": "2302.00923" }, { "id": "2211.12588" }, { "id": "1908.03557" }, { "id": "2210.05359" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2201.08239" }, { "id": "2303.16199" }, { "id": "2304.09842" }, { "id": "2204.01691" }, { "id": "2305.01625" }, { "id": "2303.12712" }, { "id": "2207.05608" }, { "id": "2303.03846" }, { "id": "2211.11559" }, { "id": "2207.01206" }, { "id": "2303.09014" }, { "id": "2205.12255" }, { "id": "2208.03188" } ]
2308.01423
61
Question #4 How does the pore limiting diameter of YUSGID_clean compare with other materials? > Entering new AgentExecutor chain... I need to find the pore limiting diameter of YUSGID_clean and compare it with other materials. Action: search_csv Action Input: "Search name YUSGID_clean and provide information on its pore limiting diameter" > Entering new TableSearcher chain... [Table Searcher] Thought: To find the information for a specific name in the DataFrame, we can use the loc function in pandas. This function allows us to access a group of rows and columns by label(s) or a boolean array. [Table Searcher] Input: print(df.loc[df['name'] == 'YUSGID']['Pore limiting diameter (A)'].to_markdown()) [Table Searcher] Observation: | | Pore limiting diameter (A) | |---:|---:| | 11739 | 3.71515 | [Table Searcher] Final Thought: The 'Pore limiting diameter (A)' for the material with the name 'YUSGID' is 3.71515 A. > Finished chain. Observation:
2308.01423#61
ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks
ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to predict and generate metal-organic frameworks (MOFs). By leveraging a large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key details from textual inputs and delivers appropriate responses, thus eliminating the necessity for rigid structured queries. The system is comprised of three core components (i.e. an agent, a toolkit, and an evaluator) and it forms a robust pipeline that manages a variety of tasks, including data retrieval, property prediction, and structure generations. The study further explores the merits and constraints of using a large language model (LLM)-based AI system in materials science and showcases its transformative potential for future advancements.
http://arxiv.org/pdf/2308.01423
Yeonghun Kang, Jihan Kim
cs.CL, cs.AI, cs.LG, physics.chem-ph
null
null
cs.CL
20230801
20230825
[ { "id": "2302.13971" }, { "id": "2306.11296" }, { "id": "2303.17580" }, { "id": "2305.18365" }, { "id": "2305.10601" }, { "id": "1810.04805" }, { "id": "2211.09085" }, { "id": "2304.05376" }, { "id": "2212.05238" }, { "id": "2108.07258" }, { "id": "2110.06197" }, { "id": "2306.06283" }, { "id": "2008.02275" }, { "id": "2303.12712" }, { "id": "2210.03629" }, { "id": "2205.00445" }, { "id": "2009.03300" } ]
2308.00245
62
field is in use. This information gap can result in LLift lacking precision in its analysis. Additionally, UBITect only reports the variable utilized, not necessarily the same variable passed to an initializer. For example, consider an uninitialized variable a passed to an initializer, which is then assigned to variable b for usage. In such a scenario, LLift may fail to correctly identify the initializer due to this incomplete information. These challenges, primarily due to the interface design in UBITect, can be addressed with focused engineering efforts to enrich the output information from UBITect.

Variable Reuse. Variable reuse is an interesting problem for LLMs. In general, an LLM often confuses different variables in different scopes (e.g., different function calls). For example, if the suspicious variable is ret and is passed as an argument to its initializer (say, func(&ret)), and there is another stack variable defined in func also called ret, the LLM will confuse them. Explicitly prompting and teaching the LLM to note the difference does not appear to work. One solution is to leverage a simple static analysis to normalize the source code so that each variable has a unique name, as sketched below.
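A minimal sketch of that normalization idea follows. LLift targets C kernel code, so this only illustrates the concept, using Python's ast module as a stand-in and invented function names: every function-local variable is rewritten to a globally unique name so the model cannot conflate a caller's ret with a callee's ret.

```python
import ast

class UniqueRenamer(ast.NodeTransformer):
    """Give every function-local variable a globally unique name (sketch only)."""
    def __init__(self):
        self.scope_id = 0
        self.mapping = {}

    def visit_FunctionDef(self, node):
        outer, self.mapping = self.mapping, {}
        self.scope_id += 1
        for a in node.args.args:                 # rename parameters first
            self.mapping[a.arg] = f"{a.arg}__s{self.scope_id}"
            a.arg = self.mapping[a.arg]
        self.generic_visit(node)                 # then rename all uses in the body
        self.mapping = outer
        return node

    def visit_Name(self, node):
        if isinstance(node.ctx, ast.Store) and node.id not in self.mapping:
            self.mapping[node.id] = f"{node.id}__s{self.scope_id}"
        node.id = self.mapping.get(node.id, node.id)
        return node

src = """
def caller():
    ret = 0
    func(ret)

def func(ret):
    ret = 1
    return ret
"""
# Both `ret` variables receive distinct names, e.g. ret__s1 in caller and ret__s2 in func.
print(ast.unparse(UniqueRenamer().visit(ast.parse(src))))
```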
2308.00245#62
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Static analysis is a widely used technique in software engineering for identifying and mitigating bugs. However, a significant hurdle lies in achieving a delicate balance between precision and scalability. Large Language Models (LLMs) offer a promising alternative, as recent advances demonstrate remarkable capabilities in comprehending, generating, and even debugging code. Yet, the logic of bugs can be complex and require sophisticated reasoning and a large analysis scope spanning multiple functions. Therefore, at this point, LLMs are better used in an assistive role to complement static analysis. In this paper, we take a deep dive into the open space of LLM-assisted static analysis, using use-before-initialization (UBI) bugs as a case study. To this end, we develop LLift, a fully automated framework that interfaces with both a static analysis tool and an LLM. By carefully designing the framework and the prompts, we are able to overcome a number of challenges, including bug-specific modeling, the large problem scope, the non-deterministic nature of LLMs, etc. Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs produced by static analysis, LLift demonstrates a potent capability, showcasing a reasonable precision (50%) and appearing to have no missing bugs. It even identified 13 previously unknown UBI bugs in the Linux kernel. This research paves the way for new opportunities and methodologies in using LLMs for bug discovery in extensive, real-world datasets.
http://arxiv.org/pdf/2308.00245
Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian
cs.SE, cs.AI
null
null
cs.SE
20230801
20231115
[ { "id": "2305.10601" }, { "id": "2107.03374" }, { "id": "2210.11610" }, { "id": "2305.16291" }, { "id": "2305.16151" }, { "id": "2303.18223" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "2203.02155" }, { "id": "2304.11938" }, { "id": "2304.03262" }, { "id": "2304.10513" }, { "id": "2201.11903" }, { "id": "2305.12138" }, { "id": "2305.12865" }, { "id": "2303.08774" }, { "id": "2306.01987" }, { "id": "2304.06815" } ]
2308.00352
62
and user-friendly.

## Requirement Pool
```python
[
    ("Design a user-friendly GUI for the color meter", "P0"),
    ("Implement color selection functionality", "P0"),
    ("Display RGB values of the selected color", "P0"),
    ("Update RGB values in real-time as the color meter moves", "P0"),
    ("Test the application for accuracy and performance", "P1")
]
```

## UI Design draft
The UI should be minimalistic and intuitive. It should primarily consist of a color meter, which could be a magnified area of the screen that the user can move around. The RGB values of the currently selected color should be prominently displayed, updating in real-time as the color meter moves. The layout should be clean and uncluttered, with a focus on functionality and ease of use.

## Anything UNCLEAR
There are no unclear points.
2308.00352#62
MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework
Remarkable progress has been made on automated problem solving through societies of agents based on large language models (LLMs). Existing LLM-based multi-agent systems can already solve simple dialogue tasks. Solutions to more complex tasks, however, are complicated through logic inconsistencies due to cascading hallucinations caused by naively chaining LLMs. Here we introduce MetaGPT, an innovative meta-programming framework incorporating efficient human workflows into LLM-based multi-agent collaborations. MetaGPT encodes Standardized Operating Procedures (SOPs) into prompt sequences for more streamlined workflows, thus allowing agents with human-like domain expertise to verify intermediate results and reduce errors. MetaGPT utilizes an assembly line paradigm to assign diverse roles to various agents, efficiently breaking down complex tasks into subtasks involving many agents working together. On collaborative software engineering benchmarks, MetaGPT generates more coherent solutions than previous chat-based multi-agent systems. Our project can be found at https://github.com/geekan/MetaGPT
http://arxiv.org/pdf/2308.00352
Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, Jürgen Schmidhuber
cs.AI, cs.MA
null
null
cs.AI
20230801
20231106
[ { "id": "2308.12950" }, { "id": "2305.17066" }, { "id": "1511.09249" }, { "id": "2308.11432" }, { "id": "2306.08568" }, { "id": "2310.02304" }, { "id": "2303.08896" }, { "id": "2204.05999" }, { "id": "2309.16797" }, { "id": "2002.08155" }, { "id": "2305.16960" } ]
2308.00675
62
VQA(image=IMAGE2, question='Does the image show a dog?')
ANSWER3 = VQA(image=IMAGE3, question='How many cups in the image?')

Function: EVAL
Description: The EVAL function calls the eval() function in Python. The expr argument is parsed and evaluated as a Python expression. Variables can be expressed in {}. When evaluating expressions involving the results of other functions, such as VQA, always use the EVAL function. The EVAL function also accepts the xor operator as the exclusive-or operator, which returns true only when exactly one argument is true.
Syntax: EVAL(expr: TEXT) -> TEXT or INTEGER or FLOAT or BOOL.
Parameters:
expr: A TEXT type input representing a Python expression to be evaluated. The expression can include normal operators in Python, as well as the additional xor operator for exclusive-or operations.
Returns: The function returns a TEXT, INTEGER, FLOAT or BOOL value, representing the result of the evaluated Python expression.
Use case: Use EVAL when you want to evaluate a Python expression, especially when the expression involves results from other functions.
Example:
ANSWER0 = EVAL(expr='{X} + 4 * 2 > 1 == False')
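As a hedged illustration of the behavior that documentation describes, a minimal EVAL implementation might look like the following. This is a sketch under the stated description (placeholder substitution, an extra xor keyword, then Python's eval), not the authors' actual tool, and X is simply an assumed prior result.

```python
def EVAL(expr: str, variables=None):
    """Minimal sketch of the documented EVAL tool (not the paper's implementation)."""
    variables = variables or {}
    # Substitute {VAR} placeholders with the repr of previously computed results.
    for name, value in variables.items():
        expr = expr.replace("{" + name + "}", repr(value))
    # Map the documented `xor` keyword onto `^`, which is exclusive-or for booleans.
    expr = expr.replace(" xor ", " ^ ")
    return eval(expr)  # may yield TEXT, INTEGER, FLOAT, or BOOL

# Usage mirroring the documented example; X stands for an earlier tool result.
ANSWER0 = EVAL(expr="{X} + 4 * 2 > 1 == False", variables={"X": 3})
print(ANSWER0)  # False: 11 > 1 holds, but the chained comparison 1 == False does not
```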
2308.00675#62
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Today, large language models (LLMs) are taught to use new tools by providing a few demonstrations of the tool's usage. Unfortunately, demonstrations are hard to acquire, and can result in undesirable biased usage if the wrong demonstration is chosen. Even in the rare scenario that demonstrations are readily available, there is no principled selection protocol to determine how many and which ones to provide. As tasks grow more complex, the selection search grows combinatorially and invariably becomes intractable. Our work provides an alternative to demonstrations: tool documentation. We advocate the use of tool documentation, descriptions for the individual tool usage, over demonstrations. We substantiate our claim through three main empirical findings on 6 tasks across both vision and language modalities. First, on existing benchmarks, zero-shot prompts with only tool documentation are sufficient for eliciting proper tool usage, achieving performance on par with few-shot prompts. Second, on a newly collected realistic tool-use dataset with hundreds of available tool APIs, we show that tool documentation is significantly more valuable than demonstrations, with zero-shot documentation significantly outperforming few-shot without documentation. Third, we highlight the benefits of tool documentations by tackling image generation and video tracking using just-released unseen state-of-the-art models as tools. Finally, we highlight the possibility of using tool documentation to automatically enable new applications: by using nothing more than the documentation of GroundingDino, Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the just-released Grounded-SAM and Track Anything models.
http://arxiv.org/pdf/2308.00675
Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister
cs.CL, cs.AI, cs.CV, cs.LG
null
null
cs.CL
20230801
20230801
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2204.10878" }, { "id": "1910.08210" }, { "id": "2107.07653" }, { "id": "2201.11903" }, { "id": "2305.17126" }, { "id": "1704.07535" }, { "id": "2205.01068" }, { "id": "2203.05115" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2305.04091" }, { "id": "2303.05499" }, { "id": "2107.07566" }, { "id": "2110.14168" }, { "id": "2210.03350" }, { "id": "2303.11381" }, { "id": "2101.06804" }, { "id": "2304.08354" }, { "id": "2212.14024" }, { "id": "2305.18752" }, { "id": "2211.10435" }, { "id": "2303.04671" }, { "id": "2210.12810" }, { "id": "1808.09588" }, { "id": "2304.11062" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2210.02406" }, { "id": "2212.10560" }, { "id": "2303.04129" }, { "id": "1704.01696" }, { "id": "2302.00923" }, { "id": "2211.12588" }, { "id": "1908.03557" }, { "id": "2210.05359" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2201.08239" }, { "id": "2303.16199" }, { "id": "2304.09842" }, { "id": "2204.01691" }, { "id": "2305.01625" }, { "id": "2303.12712" }, { "id": "2207.05608" }, { "id": "2303.03846" }, { "id": "2211.11559" }, { "id": "2207.01206" }, { "id": "2303.09014" }, { "id": "2205.12255" }, { "id": "2208.03188" } ]
2308.01423
62
The 'Pore limiting diameter (A)' for the material with the name 'YUSGID' is 3.71515 A.
> Finished chain.
Observation: The 'Pore limiting diameter (A)' for 'YUSGID' is 3.71515 A.
Thought: I have the pore limiting diameter for YUSGID_clean. Now I need to compare this with the pore limiting diameters of other materials.
Action: search_csv
Action Input: "Provide information on the pore limiting diameters of all materials"

> Entering new TableSearcher chain...
[Table Searcher] Thought: To get information on the pore limiting diameters of all materials, we can use the pandas function describe(), which provides descriptive statistics of the DataFrame or Series.
[Table Searcher] Input: print(df['Pore limiting diameter (A)'].describe().to_markdown())
[Table Searcher] Observation:
|       | Pore limiting diameter (A) |
|-------|----------------------------|
| count | 12020                      |
| mean  | 4.87758                    |
| std   | 2.78185                    |
| min   | 2.40006                    |
| 25%   | 3.24153                    |
| 50%   | 4.10336                    |
| 75%   |
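The comparison the agent is building toward can be reproduced directly with pandas. The sketch below reuses the same assumed columns as the earlier example; the data are invented, not the ~12,020-row table summarized above.

```python
import pandas as pd

# Illustrative stand-in for the property table; values are invented.
df = pd.DataFrame({
    "name": ["ABCDEF", "YUSGID", "GHIJKL", "MNOPQR"],
    "Pore limiting diameter (A)": [4.10336, 3.71515, 5.02000, 2.40006],
})

col = df["Pore limiting diameter (A)"]
stats = col.describe()  # count, mean, std, min, quartiles, max
yusgid = df.loc[df["name"] == "YUSGID", "Pore limiting diameter (A)"].iloc[0]

# Rough placement of YUSGID within the distribution of all materials.
percentile = (col < yusgid).mean() * 100
print(stats)
print(f"YUSGID: {yusgid} A, vs. mean {stats['mean']:.2f} A (~{percentile:.0f}th percentile)")
```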
2308.01423#62
ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks
ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to predict and generate metal-organic frameworks (MOFs). By leveraging a large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key details from textual inputs and delivers appropriate responses, thus eliminating the necessity for rigid structured queries. The system is comprised of three core components (i.e. an agent, a toolkit, and an evaluator) and it forms a robust pipeline that manages a variety of tasks, including data retrieval, property prediction, and structure generations. The study further explores the merits and constraints of using a large language model (LLM)-based AI system in materials science and showcases its transformative potential for future advancements.
http://arxiv.org/pdf/2308.01423
Yeonghun Kang, Jihan Kim
cs.CL, cs.AI, cs.LG, physics.chem-ph
null
null
cs.CL
20230801
20230825
[ { "id": "2302.13971" }, { "id": "2306.11296" }, { "id": "2303.17580" }, { "id": "2305.18365" }, { "id": "2305.10601" }, { "id": "1810.04805" }, { "id": "2211.09085" }, { "id": "2304.05376" }, { "id": "2212.05238" }, { "id": "2108.07258" }, { "id": "2110.06197" }, { "id": "2306.06283" }, { "id": "2008.02275" }, { "id": "2303.12712" }, { "id": "2210.03629" }, { "id": "2205.00445" }, { "id": "2009.03300" } ]
2308.00245
63
Indirect Call. As mentioned in §4.4, LLift follows a simple but imprecise strategy to handle indirect calls. Theoretically, existing static analysis tools, such as MLTA [16], can give possible targets for indirect calls. However, each indirect call may have multiple possible targets, which would dramatically increase the token usage. We leave the exploration of such an exhaustive strategy for future work. LLift may benefit from a more precise indirect call resolution.

Additional Constraints. There are many variables whose values are determined outside of the function we analyze, e.g., preconditions capturing constraints from the outer caller. Since our analysis is fundamentally under-constrained, this can lead LLift to incorrectly determine a must_init case to be may_init. Mitigating this imprecision relies on further analysis to provide more information.

# 7 DISCUSSION AND FUTURE WORK

Post-Constraint Analysis. Our approach prioritizes post-constraints over other constraints, such as preconditions. By focusing on the post-constraints, we enhance precision and scalability significantly. Importantly, our utilization of large language models in program analysis suggests strong abilities in summarizing complex function behaviors involving loops, a classic hurdle in program analysis.
2308.00245#63
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Static analysis is a widely used technique in software engineering for identifying and mitigating bugs. However, a significant hurdle lies in achieving a delicate balance between precision and scalability. Large Language Models (LLMs) offer a promising alternative, as recent advances demonstrate remarkable capabilities in comprehending, generating, and even debugging code. Yet, the logic of bugs can be complex and require sophisticated reasoning and a large analysis scope spanning multiple functions. Therefore, at this point, LLMs are better used in an assistive role to complement static analysis. In this paper, we take a deep dive into the open space of LLM-assisted static analysis, using use-before-initialization (UBI) bugs as a case study. To this end, we develop LLift, a fully automated framework that interfaces with both a static analysis tool and an LLM. By carefully designing the framework and the prompts, we are able to overcome a number of challenges, including bug-specific modeling, the large problem scope, the non-deterministic nature of LLMs, etc. Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs produced by static analysis, LLift demonstrates a potent capability, showcasing a reasonable precision (50%) and appearing to have no missing bugs. It even identified 13 previously unknown UBI bugs in the Linux kernel. This research paves the way for new opportunities and methodologies in using LLMs for bug discovery in extensive, real-world datasets.
http://arxiv.org/pdf/2308.00245
Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian
cs.SE, cs.AI
null
null
cs.SE
20230801
20231115
[ { "id": "2305.10601" }, { "id": "2107.03374" }, { "id": "2210.11610" }, { "id": "2305.16291" }, { "id": "2305.16151" }, { "id": "2303.18223" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "2203.02155" }, { "id": "2304.11938" }, { "id": "2304.03262" }, { "id": "2304.10513" }, { "id": "2201.11903" }, { "id": "2305.12138" }, { "id": "2305.12865" }, { "id": "2303.08774" }, { "id": "2306.01987" }, { "id": "2304.06815" } ]
2308.00352
63
[Figure 7: The quadrant chart ("Reach and engagement of campaigns", low reach vs. high reach, plotting competitors such as Adobe Photoshop, Corel Painter, Krita, SketchBook, and Paint.NET against the target product) for the "Drawing App" generated by the product manager agent.]

Architect. Based on the requirements in the PRD, the Architect agent devises technical specifications including system architecture diagrams and interface definitions. Initially, the Architect defines the overarching technical trajectory. Subsequently, the project's architecture, including files, classes (Figure 8) and the sequence flow chart (Figure 12), is designed. The Architect's documentation is then given to the project manager for task allocation and execution.
2308.00352#63
MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework
Remarkable progress has been made on automated problem solving through societies of agents based on large language models (LLMs). Existing LLM-based multi-agent systems can already solve simple dialogue tasks. Solutions to more complex tasks, however, are complicated through logic inconsistencies due to cascading hallucinations caused by naively chaining LLMs. Here we introduce MetaGPT, an innovative meta-programming framework incorporating efficient human workflows into LLM-based multi-agent collaborations. MetaGPT encodes Standardized Operating Procedures (SOPs) into prompt sequences for more streamlined workflows, thus allowing agents with human-like domain expertise to verify intermediate results and reduce errors. MetaGPT utilizes an assembly line paradigm to assign diverse roles to various agents, efficiently breaking down complex tasks into subtasks involving many agents working together. On collaborative software engineering benchmarks, MetaGPT generates more coherent solutions than previous chat-based multi-agent systems. Our project can be found at https://github.com/geekan/MetaGPT
http://arxiv.org/pdf/2308.00352
Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, Jürgen Schmidhuber
cs.AI, cs.MA
null
null
cs.AI
20230801
20231106
[ { "id": "2308.12950" }, { "id": "2305.17066" }, { "id": "1511.09249" }, { "id": "2308.11432" }, { "id": "2306.08568" }, { "id": "2310.02304" }, { "id": "2303.08896" }, { "id": "2204.05999" }, { "id": "2309.16797" }, { "id": "2002.08155" }, { "id": "2305.16960" } ]