| doi (string) | chunk-id (int64) | chunk (string) | id (string) | title (string) | summary (string) | source (string) | authors (string) | categories (string) | comment (string, nullable) | journal_ref (string, nullable) | primary_category (string) | published (string) | updated (string) | references (list) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2308.00675 | 63 | a Python expression, especially when the expression involves results from other functions. Example ANSWER0=EVAL(expr="{X} + 4 * 2 > 1 == False") ANSWER1=EVAL(expr="{A} and {B} xor {C} or not {D}") Important Note: When evaluating expressions involving the results of other functions, always use the EVAL function. For example: # Correct usage ANSWER=EVAL(expr="{ANS FINAL_RESULT=RESULT(v. alse' and {ANS2}=='False") # Incorrect usage FINAL_RESULT=RESULT(var=AN | 2308.00675#63 | Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models | Today, large language models (LLMs) are taught to use new tools by providing
a few demonstrations of the tool's usage. Unfortunately, demonstrations are
hard to acquire, and can result in undesirable biased usage if the wrong
demonstration is chosen. Even in the rare scenario that demonstrations are
readily available, there is no principled selection protocol to determine how
many and which ones to provide. As tasks grow more complex, the selection
search grows combinatorially and invariably becomes intractable. Our work
provides an alternative to demonstrations: tool documentation. We advocate the
use of tool documentation, descriptions for the individual tool usage, over
demonstrations. We substantiate our claim through three main empirical findings
on 6 tasks across both vision and language modalities. First, on existing
benchmarks, zero-shot prompts with only tool documentation are sufficient for
eliciting proper tool usage, achieving performance on par with few-shot
prompts. Second, on a newly collected realistic tool-use dataset with hundreds
of available tool APIs, we show that tool documentation is significantly more
valuable than demonstrations, with zero-shot documentation significantly
outperforming few-shot without documentation. Third, we highlight the benefits
of tool documentations by tackling image generation and video tracking using
just-released unseen state-of-the-art models as tools. Finally, we highlight
the possibility of using tool documentation to automatically enable new
applications: by using nothing more than the documentation of GroundingDino,
Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the
just-released Grounded-SAM and Track Anything models. | http://arxiv.org/pdf/2308.00675 | Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister | cs.CL, cs.AI, cs.CV, cs.LG | null | null | cs.CL | 20230801 | 20230801 | [
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2204.10878"
},
{
"id": "1910.08210"
},
{
"id": "2107.07653"
},
{
"id": "2201.11903"
},
{
"id": "2305.17126"
},
{
"id": "1704.07535"
},
{
"id": "2205.01068"
},
{
"id": "2203.05115"
},
{
"id": "2303.08128"
},
{
"id": "2303.03378"
},
{
"id": "2305.04091"
},
{
"id": "2303.05499"
},
{
"id": "2107.07566"
},
{
"id": "2110.14168"
},
{
"id": "2210.03350"
},
{
"id": "2303.11381"
},
{
"id": "2101.06804"
},
{
"id": "2304.08354"
},
{
"id": "2212.14024"
},
{
"id": "2305.18752"
},
{
"id": "2211.10435"
},
{
"id": "2303.04671"
},
{
"id": "2210.12810"
},
{
"id": "1808.09588"
},
{
"id": "2304.11062"
},
{
"id": "2210.03629"
},
{
"id": "2303.05398"
},
{
"id": "2210.02406"
},
{
"id": "2212.10560"
},
{
"id": "2303.04129"
},
{
"id": "1704.01696"
},
{
"id": "2302.00923"
},
{
"id": "2211.12588"
},
{
"id": "1908.03557"
},
{
"id": "2210.05359"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2201.08239"
},
{
"id": "2303.16199"
},
{
"id": "2304.09842"
},
{
"id": "2204.01691"
},
{
"id": "2305.01625"
},
{
"id": "2303.12712"
},
{
"id": "2207.05608"
},
{
"id": "2303.03846"
},
{
"id": "2211.11559"
},
{
"id": "2207.01206"
},
{
"id": "2303.09014"
},
{
"id": "2205.12255"
},
{
"id": "2208.03188"
}
] |
2308.01423 | 63 | | std | 2.78185 | | min | 2.40006 | | 25% | 3.24153 | | 50% | 4.10336 | | 75% | 5.64609 | | max | 71.502 | [Table Searcher] Final Thought: The describe() function in pandas provides a statistical summary of the 'Pore limiting diameter (A)' column in the DataFrame. This includes the count, mean, standard deviation, minimum, 25th percentile, median (50th percentile), 75th percentile, and maximum. > Finished chain. Observation: The pore limiting diameters of all materials range from approximately 2.40A to 71.50A, with an average diameter of approximately 4.88A. Thought: I now know the final answer Final Answer: The pore limiting diameter of YUSGID_clean is 3.71515 A, which is below the average pore limiting diameter of approximately 4.88A for all materials. The range of pore limiting diameters for all materials is approximately 2.40A to 71.50A. > Finished chain. ########## Output ########## The pore limiting diameter of YUSGID_clean is 3.71515 A, which is below | 2308.01423#63 | ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks | ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to
predict and generate metal-organic frameworks (MOFs). By leveraging a
large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key
details from textual inputs and delivers appropriate responses, thus
eliminating the necessity for rigid structured queries. The system is comprised
of three core components (i.e. an agent, a toolkit, and an evaluator) and it
forms a robust pipeline that manages a variety of tasks, including data
retrieval, property prediction, and structure generations. The study further
explores the merits and constraints of using large language models (LLMs) AI
system in material sciences using and showcases its transformative potential
for future advancements. | http://arxiv.org/pdf/2308.01423 | Yeonghun Kang, Jihan Kim | cs.CL, cs.AI, cs.LG, physics.chem-ph | null | null | cs.CL | 20230801 | 20230825 | [
{
"id": "2302.13971"
},
{
"id": "2306.11296"
},
{
"id": "2303.17580"
},
{
"id": "2305.18365"
},
{
"id": "2305.10601"
},
{
"id": "1810.04805"
},
{
"id": "2211.09085"
},
{
"id": "2304.05376"
},
{
"id": "2212.05238"
},
{
"id": "2108.07258"
},
{
"id": "2110.06197"
},
{
"id": "2306.06283"
},
{
"id": "2008.02275"
},
{
"id": "2303.12712"
},
{
"id": "2210.03629"
},
{
"id": "2205.00445"
},
{
"id": "2009.03300"
}
] |
2308.00245 | 64 | Better Integration with Static Analysis. Our work presents opportunities for greater integration and synergy with static analysis methods. Currently, our proposed solution operates largely independently of the static analysis methods, taking only inputs from static analysis initially. Looking into the future, we can consider integrating static analysis and LLMs in a holistic workflow. For example, this could involve selectively utilizing an LLM as an assistant to overcome certain hurdles encountered by static analysis, e.g., difficulty in scaling up the analysis or summarizing loop invariants. In turn, further static analysis based on these findings can provide insights to refine the queries to the LLM. This iterative process could enable a more thorough and accurate analysis of complex cases. We believe such a more integrated approach is a very promising future direction.
Deploying on Open-sourced LLMs. The reproducibility of LLift could be potentially challenged, considering its dependency on GPT-4, a closed-source API subject to frequent updates. At the time of writing, Meta introduced Llama 2, an open-source language model with capabilities rivaling GPT-3.5. Our initial assessments suggest that Llama 2 can understand our instructions and appears well-suited to support LLift. The open-source nature of Llama 2 provides us with opportunities to deploy and refine the model further. We plan to leverage these prospects in future studies. | 2308.00245#64 | The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models | Static analysis is a widely used technique in software engineering for
identifying and mitigating bugs. However, a significant hurdle lies in
achieving a delicate balance between precision and scalability. Large Language
Models (LLMs) offer a promising alternative, as recent advances demonstrate
remarkable capabilities in comprehending, generating, and even debugging code.
Yet, the logic of bugs can be complex and require sophisticated reasoning and a
large analysis scope spanning multiple functions. Therefore, at this point,
LLMs are better used in an assistive role to complement static analysis. In
this paper, we take a deep dive into the open space of LLM-assisted static
analysis, using use-before-initialization (UBI) bugs as a case study. To this
end, we develop LLift, a fully automated framework that interfaces with both a
static analysis tool and an LLM. By carefully designing the framework and the
prompts, we are able to overcome a number of challenges, including bug-specific
modeling, the large problem scope, the non-deterministic nature of LLMs, etc.
Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs
produced by static analysis, LLift demonstrates a potent capability, showcasing
a reasonable precision (50%) and appearing to have no missing bugs. It even
identified 13 previously unknown UBI bugs in the Linux kernel. This research
paves the way for new opportunities and methodologies in using LLMs for bug
discovery in extensive, real-world datasets. | http://arxiv.org/pdf/2308.00245 | Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian | cs.SE, cs.AI | null | null | cs.SE | 20230801 | 20231115 | [
{
"id": "2305.10601"
},
{
"id": "2107.03374"
},
{
"id": "2210.11610"
},
{
"id": "2305.16291"
},
{
"id": "2305.16151"
},
{
"id": "2303.18223"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "2203.02155"
},
{
"id": "2304.11938"
},
{
"id": "2304.03262"
},
{
"id": "2304.10513"
},
{
"id": "2201.11903"
},
{
"id": "2305.12138"
},
{
"id": "2305.12865"
},
{
"id": "2303.08774"
},
{
"id": "2306.01987"
},
{
"id": "2304.06815"
}
] |
2308.00352 | 64 | System design ## Implementation approach We will use Python's Tkinter library to create the GUI for the color meter. Tkinter is a standard Python interface to the Tk GUI toolkit and is included with most Python installations. It is simple to use and powerful enough for our needs. The color selection functionality will be implemented using the PIL (Pillow) library, which allows us to capture the screen's pixel data. By moving the mouse, we can get the RGB values of the pixel under the mouse cursor in real-time. The RGB values will be displayed on the GUI using Tkinter's Label widget. We will use Python's threading library to update the RGB values in real-time as the color meter moves. The application will be tested for accuracy and performance using Python's unittest library. ## Python package name ```python "color_meter" ``` ## File list ```python [ "main.py", "color_picker.py", "gui.py", "tests.py" ] ``` ## Anything UNCLEAR The requirement is clear to me
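A minimal sketch of the mechanism this design describes (not MetaGPT's generated code): it reads the RGB value of the pixel under the mouse cursor with Pillow's ImageGrab and shows it in a Tkinter Label. For simplicity it polls with Tkinter's after() rather than the threading library mentioned above, and it assumes a desktop session where ImageGrab can capture the screen.

```python
import tkinter as tk
from PIL import ImageGrab  # Pillow

def poll(root: tk.Tk, label: tk.Label) -> None:
    # Screen coordinates of the mouse cursor.
    x, y = root.winfo_pointerx(), root.winfo_pointery()
    # Grab a 1x1 screenshot at that position and read its RGB value.
    pixel = ImageGrab.grab(bbox=(x, y, x + 1, y + 1)).convert("RGB")
    r, g, b = pixel.getpixel((0, 0))
    label.config(text=f"RGB: ({r}, {g}, {b})")
    root.after(100, poll, root, label)  # poll again in 100 ms

if __name__ == "__main__":
    root = tk.Tk()
    label = tk.Label(root, text="RGB: -")
    label.pack()
    poll(root, label)
    root.mainloop()
```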
| 2308.00352#64 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | Remarkable progress has been made on automated problem solving through
societies of agents based on large language models (LLMs). Existing LLM-based
multi-agent systems can already solve simple dialogue tasks. Solutions to more
complex tasks, however, are complicated through logic inconsistencies due to
cascading hallucinations caused by naively chaining LLMs. Here we introduce
MetaGPT, an innovative meta-programming framework incorporating efficient human
workflows into LLM-based multi-agent collaborations. MetaGPT encodes
Standardized Operating Procedures (SOPs) into prompt sequences for more
streamlined workflows, thus allowing agents with human-like domain expertise to
verify intermediate results and reduce errors. MetaGPT utilizes an assembly
line paradigm to assign diverse roles to various agents, efficiently breaking
down complex tasks into subtasks involving many agents working together. On
collaborative software engineering benchmarks, MetaGPT generates more coherent
solutions than previous chat-based multi-agent systems. Our project can be
found at https://github.com/geekan/MetaGPT | http://arxiv.org/pdf/2308.00352 | Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, Jürgen Schmidhuber | cs.AI, cs.MA | null | null | cs.AI | 20230801 | 20231106 | [
{
"id": "2308.12950"
},
{
"id": "2305.17066"
},
{
"id": "1511.09249"
},
{
"id": "2308.11432"
},
{
"id": "2306.08568"
},
{
"id": "2310.02304"
},
{
"id": "2303.08896"
},
{
"id": "2204.05999"
},
{
"id": "2309.16797"
},
{
"id": "2002.08155"
},
{
"id": "2305.16960"
}
] |
2308.01423 | 64 | ########## Output ########## The pore limiting diameter of YUSGID_clean is 3.71515 A, which is below the average pore limiting diameter of approximately 4.88A for all materials. The range of pore limiting diameters for all materials is approximately 2.40A to 71.50A. ########## Question ########## How does the pore limiting diameter of YUSGID_clean compare with other materials? > Entering new AgentExecutor chain... I need to compare the pore limiting diameter of YUSGID_clean with other materials. Action: search_csv Action Input: "Search name YUSGID_clean and provide information on its pore limiting diameter" > Entering new TableSearcher chain... [Table Searcher] Thought: To search for a specific value in the "name" column, we can use the pandas function loc[], which allows us to access a group of rows and columns by label(s) or a boolean array. We can then extract the information on the pore limiting diameter for the material with the name "YUSGID". [Table Searcher] Input: print(df.loc[df['name'] == | 2308.01423#64 | ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks | ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to
predict and generate metal-organic frameworks (MOFs). By leveraging a
large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key
details from textual inputs and delivers appropriate responses, thus
eliminating the necessity for rigid structured queries. The system is comprised
of three core components (i.e. an agent, a toolkit, and an evaluator) and it
forms a robust pipeline that manages a variety of tasks, including data
retrieval, property prediction, and structure generations. The study further
explores the merits and constraints of using large language models (LLMs) AI
system in material sciences using and showcases its transformative potential
for future advancements. | http://arxiv.org/pdf/2308.01423 | Yeonghun Kang, Jihan Kim | cs.CL, cs.AI, cs.LG, physics.chem-ph | null | null | cs.CL | 20230801 | 20230825 | [
{
"id": "2302.13971"
},
{
"id": "2306.11296"
},
{
"id": "2303.17580"
},
{
"id": "2305.18365"
},
{
"id": "2305.10601"
},
{
"id": "1810.04805"
},
{
"id": "2211.09085"
},
{
"id": "2304.05376"
},
{
"id": "2212.05238"
},
{
"id": "2108.07258"
},
{
"id": "2110.06197"
},
{
"id": "2306.06283"
},
{
"id": "2008.02275"
},
{
"id": "2303.12712"
},
{
"id": "2210.03629"
},
{
"id": "2205.00445"
},
{
"id": "2009.03300"
}
] |
2308.00245 | 65 | 8 RELATED WORK Techniques of Utilizing LLMs. Wang et al. [33] propose an embodied lifelong learning agent based on LLMs. Pallagani et al. [23] explore the capabilities of LLMs for automated planning. Weng [35] summarizes recent work in building an autonomous agent based on LLMs and proposes two important components for planning: Task Decomposition and Self-reflection, which are similar to the design of LLift. Beyond dividing tasks into small pieces, task decomposition techniques also include some universal strategies such as Chain-of-thought [34] and Tree-of-thought [38]. The general strategy of self-reflection has been used in several flavors: ReAct [39], Reflexion [29], and Chain of Hindsight [15]. Despite the similarity in name, self-reflection is fundamentally different from self-validation in LLift, where the former focuses on using external sources to provide feedback to their models. Huang et al. [10] let an LLM self-improve its reasoning without supervised data by asking the LLM to lay out different possible results. | 2308.00245#65 | The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models | Static analysis is a widely used technique in software engineering for
identifying and mitigating bugs. However, a significant hurdle lies in
achieving a delicate balance between precision and scalability. Large Language
Models (LLMs) offer a promising alternative, as recent advances demonstrate
remarkable capabilities in comprehending, generating, and even debugging code.
Yet, the logic of bugs can be complex and require sophisticated reasoning and a
large analysis scope spanning multiple functions. Therefore, at this point,
LLMs are better used in an assistive role to complement static analysis. In
this paper, we take a deep dive into the open space of LLM-assisted static
analysis, using use-before-initialization (UBI) bugs as a case study. To this
end, we develop LLift, a fully automated framework that interfaces with both a
static analysis tool and an LLM. By carefully designing the framework and the
prompts, we are able to overcome a number of challenges, including bug-specific
modeling, the large problem scope, the non-deterministic nature of LLMs, etc.
Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs
produced by static analysis, LLift demonstrates a potent capability, showcasing
a reasonable precision (50%) and appearing to have no missing bugs. It even
identified 13 previously unknown UBI bugs in the Linux kernel. This research
paves the way for new opportunities and methodologies in using LLMs for bug
discovery in extensive, real-world datasets. | http://arxiv.org/pdf/2308.00245 | Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian | cs.SE, cs.AI | null | null | cs.SE | 20230801 | 20231115 | [
{
"id": "2305.10601"
},
{
"id": "2107.03374"
},
{
"id": "2210.11610"
},
{
"id": "2305.16291"
},
{
"id": "2305.16151"
},
{
"id": "2303.18223"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "2203.02155"
},
{
"id": "2304.11938"
},
{
"id": "2304.03262"
},
{
"id": "2304.10513"
},
{
"id": "2201.11903"
},
{
"id": "2305.12138"
},
{
"id": "2305.12865"
},
{
"id": "2303.08774"
},
{
"id": "2306.01987"
},
{
"id": "2304.06815"
}
] |
2308.00352 | 65 |
ColorPicker: +get_rgb_at(x: int, y: int): Tuple[int, int, int]. GUI (uses ColorPicker): +__init__(color_picker: ColorPicker), +update_rgb_label(rgb: Tuple[int, int, int]), +start()
Figure 8: Data structures and interface definitions for the "Drawing App" generated by the architect agent.
Project Manager The Project Manager breaks down the project into a task list. Furthermore, each code file is analyzed based on its intended functionality and then treated as a separate task assigned to Engineers.
[Call flow: main instantiates ColorPicker and GUI, then calls start(); when the user moves the mouse, GUI calls get_rgb_at(x, y) and update_rgb_label(rgb).]
Figure 9: The program call flow for the "Drawing App" generated by the architect agent.
# Tasks | 2308.00352#65 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | Remarkable progress has been made on automated problem solving through
societies of agents based on large language models (LLMs). Existing LLM-based
multi-agent systems can already solve simple dialogue tasks. Solutions to more
complex tasks, however, are complicated through logic inconsistencies due to
cascading hallucinations caused by naively chaining LLMs. Here we introduce
MetaGPT, an innovative meta-programming framework incorporating efficient human
workflows into LLM-based multi-agent collaborations. MetaGPT encodes
Standardized Operating Procedures (SOPs) into prompt sequences for more
streamlined workflows, thus allowing agents with human-like domain expertise to
verify intermediate results and reduce errors. MetaGPT utilizes an assembly
line paradigm to assign diverse roles to various agents, efficiently breaking
down complex tasks into subtasks involving many agents working together. On
collaborative software engineering benchmarks, MetaGPT generates more coherent
solutions than previous chat-based multi-agent systems. Our project can be
found at https://github.com/geekan/MetaGPT | http://arxiv.org/pdf/2308.00352 | Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, Jürgen Schmidhuber | cs.AI, cs.MA | null | null | cs.AI | 20230801 | 20231106 | [
{
"id": "2308.12950"
},
{
"id": "2305.17066"
},
{
"id": "1511.09249"
},
{
"id": "2308.11432"
},
{
"id": "2306.08568"
},
{
"id": "2310.02304"
},
{
"id": "2303.08896"
},
{
"id": "2204.05999"
},
{
"id": "2309.16797"
},
{
"id": "2002.08155"
},
{
"id": "2305.16960"
}
] |
2308.00675 | 65 | gcloud compute scp - copy files to and from Google Compute Engine virtual machines SYNOPSIS gcloud compute scp [[USER@]INSTANCE:]SRC [[[USER@]INSTANCE:]SRC ...] [[USER@]INSTANCE:]DEST [--compress] [--dry-run] [--force-key-file-overwrite] [--plain] [--port=PORT] [--recurse] [--ssh-key-expiration=SSH_KEY_EXPIRATION | --ssh-key-expire-after=SSH_KEY_EXPIRE_AFTER] [GCLOUD_WIDE_FLAG ...] DESCRIPTION gcloud compute scp securely copies files between a virtual machine instance and your local machine using the scp command. This command works for Linux VMs and Windows Server 2019 and later VMs that have SSH enabled. gcloud compute ssh - name gcloud compute ssh - SSH into a virtual machine instance SYNOPSIS gcloud compute ssh [USER@]INSTANCE [--command=COMMAND] [--container=CONTAINER] [--dry-run] [--force-key-file-overwrite] [--plain] [--ssh-flag=SSH_ | 2308.00675#65 | Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models | Today, large language models (LLMs) are taught to use new tools by providing
a few demonstrations of the tool's usage. Unfortunately, demonstrations are
hard to acquire, and can result in undesirable biased usage if the wrong
demonstration is chosen. Even in the rare scenario that demonstrations are
readily available, there is no principled selection protocol to determine how
many and which ones to provide. As tasks grow more complex, the selection
search grows combinatorially and invariably becomes intractable. Our work
provides an alternative to demonstrations: tool documentation. We advocate the
use of tool documentation, descriptions for the individual tool usage, over
demonstrations. We substantiate our claim through three main empirical findings
on 6 tasks across both vision and language modalities. First, on existing
benchmarks, zero-shot prompts with only tool documentation are sufficient for
eliciting proper tool usage, achieving performance on par with few-shot
prompts. Second, on a newly collected realistic tool-use dataset with hundreds
of available tool APIs, we show that tool documentation is significantly more
valuable than demonstrations, with zero-shot documentation significantly
outperforming few-shot without documentation. Third, we highlight the benefits
of tool documentations by tackling image generation and video tracking using
just-released unseen state-of-the-art models as tools. Finally, we highlight
the possibility of using tool documentation to automatically enable new
applications: by using nothing more than the documentation of GroundingDino,
Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the
just-released Grounded-SAM and Track Anything models. | http://arxiv.org/pdf/2308.00675 | Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister | cs.CL, cs.AI, cs.CV, cs.LG | null | null | cs.CL | 20230801 | 20230801 | [
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2204.10878"
},
{
"id": "1910.08210"
},
{
"id": "2107.07653"
},
{
"id": "2201.11903"
},
{
"id": "2305.17126"
},
{
"id": "1704.07535"
},
{
"id": "2205.01068"
},
{
"id": "2203.05115"
},
{
"id": "2303.08128"
},
{
"id": "2303.03378"
},
{
"id": "2305.04091"
},
{
"id": "2303.05499"
},
{
"id": "2107.07566"
},
{
"id": "2110.14168"
},
{
"id": "2210.03350"
},
{
"id": "2303.11381"
},
{
"id": "2101.06804"
},
{
"id": "2304.08354"
},
{
"id": "2212.14024"
},
{
"id": "2305.18752"
},
{
"id": "2211.10435"
},
{
"id": "2303.04671"
},
{
"id": "2210.12810"
},
{
"id": "1808.09588"
},
{
"id": "2304.11062"
},
{
"id": "2210.03629"
},
{
"id": "2303.05398"
},
{
"id": "2210.02406"
},
{
"id": "2212.10560"
},
{
"id": "2303.04129"
},
{
"id": "1704.01696"
},
{
"id": "2302.00923"
},
{
"id": "2211.12588"
},
{
"id": "1908.03557"
},
{
"id": "2210.05359"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2201.08239"
},
{
"id": "2303.16199"
},
{
"id": "2304.09842"
},
{
"id": "2204.01691"
},
{
"id": "2305.01625"
},
{
"id": "2303.12712"
},
{
"id": "2207.05608"
},
{
"id": "2303.03846"
},
{
"id": "2211.11559"
},
{
"id": "2207.01206"
},
{
"id": "2303.09014"
},
{
"id": "2205.12255"
},
{
"id": "2208.03188"
}
] |
2308.01423 | 65 | the information on the pore limiting diameter for the material with the name "YUSGID". [Table Searcher] Input: print(df.loc[df['name'] == 'YUSGID', 'Pore limiting diameter (A)'].to_markdown()) [Table Searcher] Observation: | | Pore limiting diameter (A) | |---|---| | 11739 | 3.71515 | [Table Searcher] Final Thought: The pore limiting diameter for the material with the name "YUSGID" is 3.71515 A. > Finished chain. Observation: The answer for question "Search name YUSGID and provide information on its pore limiting diameter" is 3.71515 A. Thought: The search_csv tool provided the pore limiting diameter of YUSGID_clean. Now I need to find the pore limiting diameter of other materials for comparison. Action: search_csv Action Input: "Search for other materials and provide information on their pore limiting diameter" > Entering new TableSearcher chain... [Table Searcher] Thought: To search for other materials and provide information on their pore limiting | 2308.01423#65 | ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks | ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to
predict and generate metal-organic frameworks (MOFs). By leveraging a
large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key
details from textual inputs and delivers appropriate responses, thus
eliminating the necessity for rigid structured queries. The system is comprised
of three core components (i.e. an agent, a toolkit, and an evaluator) and it
forms a robust pipeline that manages a variety of tasks, including data
retrieval, property prediction, and structure generations. The study further
explores the merits and constraints of using large language models (LLMs) AI
system in material sciences using and showcases its transformative potential
for future advancements. | http://arxiv.org/pdf/2308.01423 | Yeonghun Kang, Jihan Kim | cs.CL, cs.AI, cs.LG, physics.chem-ph | null | null | cs.CL | 20230801 | 20230825 | [
{
"id": "2302.13971"
},
{
"id": "2306.11296"
},
{
"id": "2303.17580"
},
{
"id": "2305.18365"
},
{
"id": "2305.10601"
},
{
"id": "1810.04805"
},
{
"id": "2211.09085"
},
{
"id": "2304.05376"
},
{
"id": "2212.05238"
},
{
"id": "2108.07258"
},
{
"id": "2110.06197"
},
{
"id": "2306.06283"
},
{
"id": "2008.02275"
},
{
"id": "2303.12712"
},
{
"id": "2210.03629"
},
{
"id": "2205.00445"
},
{
"id": "2009.03300"
}
] |
2308.00245 | 66 | LLMs for Program Analysis. Ma et al. [17] and Sun et al. [30] explore the capabilities of LLMs when performing various program analysis tasks such as control flow graph construction, call graph analysis, and code summarization. They conclude that while LLMs can comprehend basic code syntax, they are somewhat limited in performing more sophisticated analyses such as pointer analysis and code behavior summarization. In contrast to their findings, our research with LLift has yielded encouraging results. We conjecture that this might be due to several reasons: (1) benchmark selection, i.e., Linux kernel vs. others; (2) prompt designs; (3) GPT-3.5 vs. GPT-4.0: prior work evaluated the results using only GPT-3.5. Pei et al. [26] use LLMs to reason about loop invariants with decent performance. In contrast, LLift leverages LLMs for a variety of tasks (including program behavior summarization) and integrates them successfully into a static analysis pipeline. | 2308.00245#66 | The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models | Static analysis is a widely used technique in software engineering for
identifying and mitigating bugs. However, a significant hurdle lies in
achieving a delicate balance between precision and scalability. Large Language
Models (LLMs) offer a promising alternative, as recent advances demonstrate
remarkable capabilities in comprehending, generating, and even debugging code.
Yet, the logic of bugs can be complex and require sophisticated reasoning and a
large analysis scope spanning multiple functions. Therefore, at this point,
LLMs are better used in an assistive role to complement static analysis. In
this paper, we take a deep dive into the open space of LLM-assisted static
analysis, using use-before-initialization (UBI) bugs as a case study. To this
end, we develop LLift, a fully automated framework that interfaces with both a
static analysis tool and an LLM. By carefully designing the framework and the
prompts, we are able to overcome a number of challenges, including bug-specific
modeling, the large problem scope, the non-deterministic nature of LLMs, etc.
Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs
produced by static analysis, LLift demonstrates a potent capability, showcasing
a reasonable precision (50%) and appearing to have no missing bugs. It even
identified 13 previously unknown UBI bugs in the Linux kernel. This research
paves the way for new opportunities and methodologies in using LLMs for bug
discovery in extensive, real-world datasets. | http://arxiv.org/pdf/2308.00245 | Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian | cs.SE, cs.AI | null | null | cs.SE | 20230801 | 20231115 | [
{
"id": "2305.10601"
},
{
"id": "2107.03374"
},
{
"id": "2210.11610"
},
{
"id": "2305.16291"
},
{
"id": "2305.16151"
},
{
"id": "2303.18223"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "2203.02155"
},
{
"id": "2304.11938"
},
{
"id": "2304.03262"
},
{
"id": "2304.10513"
},
{
"id": "2201.11903"
},
{
"id": "2305.12138"
},
{
"id": "2305.12865"
},
{
"id": "2303.08774"
},
{
"id": "2306.01987"
},
{
"id": "2304.06815"
}
] |
2308.00352 | 66 | ## Required Python third-party packages ```python tkinter==8.6 Pillow==8.3.1 ``` ## Required Other language third-party packages ```python No third-party packages required in other languages. ``` ## Full API spec ```python No APIs are used in this project. ``` ## Logic Analysis ```python [ ("main.py", "Contains the main application loop and initializes all the other components."), ("canvas.py", "Implements the drawing canvas, responds to mouse events, and uses the selected tool and color."), ("tools.py", "Implements the tool selection interface."), ("color_picker.py", "Implements the color selection interface."), ("file_manager.py", "Implements the file saving and opening functionality.") ] ``` ## Task list ```python [ "main.py", "canvas.py", "tools.py", "color_picker.py", "file_manager.py" ] ``` ## Shared Knowledge ```python The 'Tkinter' library is used for creating the GUI of the application. It provides various widgets like frames and canvases which are used in this project. The | 2308.00352#66 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | Remarkable progress has been made on automated problem solving through
societies of agents based on large language models (LLMs). Existing LLM-based
multi-agent systems can already solve simple dialogue tasks. Solutions to more
complex tasks, however, are complicated through logic inconsistencies due to
cascading hallucinations caused by naively chaining LLMs. Here we introduce
MetaGPT, an innovative meta-programming framework incorporating efficient human
workflows into LLM-based multi-agent collaborations. MetaGPT encodes
Standardized Operating Procedures (SOPs) into prompt sequences for more
streamlined workflows, thus allowing agents with human-like domain expertise to
verify intermediate results and reduce errors. MetaGPT utilizes an assembly
line paradigm to assign diverse roles to various agents, efficiently breaking
down complex tasks into subtasks involving many agents working together. On
collaborative software engineering benchmarks, MetaGPT generates more coherent
solutions than previous chat-based multi-agent systems. Our project can be
found at https://github.com/geekan/MetaGPT | http://arxiv.org/pdf/2308.00352 | Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, Jürgen Schmidhuber | cs.AI, cs.MA | null | null | cs.AI | 20230801 | 20231106 | [
{
"id": "2308.12950"
},
{
"id": "2305.17066"
},
{
"id": "1511.09249"
},
{
"id": "2308.11432"
},
{
"id": "2306.08568"
},
{
"id": "2310.02304"
},
{
"id": "2303.08896"
},
{
"id": "2204.05999"
},
{
"id": "2309.16797"
},
{
"id": "2002.08155"
},
{
"id": "2305.16960"
}
] |
2308.00675 | 66 | [--container=CONTAINER] [--dry-run] [--force-key-file-overwrite] [--plain] [--ssh-flag=SSH_FLAG] [--ssh-key-file=SSH_KEY_FILE] [--strict-host-key-checking=STRICT_HOST_KEY_CHECKING] [--troubleshoot] [--zone=ZONE] [--tunnel-through-iap : --network=NETWORK --region=REGION --dest-group=DEST_GROUP] [--ssh-key-expiration=SSH_KEY_EXPIRATION | --ssh-key-expire-after=SSH_KEY_EXPIRE_AFTER] [GCLOUD_WIDE_FLAG ...] [-- SSH_ARGS ...] DESCRIPTION gcloud compute ssh is a thin wrapper around the ssh(1) command that takes care of authentication and the translation of the instance name into an IP address. To use SSH to connect to a Windows VM, refer to this guide: https://cloud.google.com/compute/docs/connect/windows-ssh | 2308.00675#66 | Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models | Today, large language models (LLMs) are taught to use new tools by providing
a few demonstrations of the tool's usage. Unfortunately, demonstrations are
hard to acquire, and can result in undesirable biased usage if the wrong
demonstration is chosen. Even in the rare scenario that demonstrations are
readily available, there is no principled selection protocol to determine how
many and which ones to provide. As tasks grow more complex, the selection
search grows combinatorially and invariably becomes intractable. Our work
provides an alternative to demonstrations: tool documentation. We advocate the
use of tool documentation, descriptions for the individual tool usage, over
demonstrations. We substantiate our claim through three main empirical findings
on 6 tasks across both vision and language modalities. First, on existing
benchmarks, zero-shot prompts with only tool documentation are sufficient for
eliciting proper tool usage, achieving performance on par with few-shot
prompts. Second, on a newly collected realistic tool-use dataset with hundreds
of available tool APIs, we show that tool documentation is significantly more
valuable than demonstrations, with zero-shot documentation significantly
outperforming few-shot without documentation. Third, we highlight the benefits
of tool documentations by tackling image generation and video tracking using
just-released unseen state-of-the-art models as tools. Finally, we highlight
the possibility of using tool documentation to automatically enable new
applications: by using nothing more than the documentation of GroundingDino,
Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the
just-released Grounded-SAM and Track Anything models. | http://arxiv.org/pdf/2308.00675 | Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister | cs.CL, cs.AI, cs.CV, cs.LG | null | null | cs.CL | 20230801 | 20230801 | [
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2204.10878"
},
{
"id": "1910.08210"
},
{
"id": "2107.07653"
},
{
"id": "2201.11903"
},
{
"id": "2305.17126"
},
{
"id": "1704.07535"
},
{
"id": "2205.01068"
},
{
"id": "2203.05115"
},
{
"id": "2303.08128"
},
{
"id": "2303.03378"
},
{
"id": "2305.04091"
},
{
"id": "2303.05499"
},
{
"id": "2107.07566"
},
{
"id": "2110.14168"
},
{
"id": "2210.03350"
},
{
"id": "2303.11381"
},
{
"id": "2101.06804"
},
{
"id": "2304.08354"
},
{
"id": "2212.14024"
},
{
"id": "2305.18752"
},
{
"id": "2211.10435"
},
{
"id": "2303.04671"
},
{
"id": "2210.12810"
},
{
"id": "1808.09588"
},
{
"id": "2304.11062"
},
{
"id": "2210.03629"
},
{
"id": "2303.05398"
},
{
"id": "2210.02406"
},
{
"id": "2212.10560"
},
{
"id": "2303.04129"
},
{
"id": "1704.01696"
},
{
"id": "2302.00923"
},
{
"id": "2211.12588"
},
{
"id": "1908.03557"
},
{
"id": "2210.05359"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2201.08239"
},
{
"id": "2303.16199"
},
{
"id": "2304.09842"
},
{
"id": "2204.01691"
},
{
"id": "2305.01625"
},
{
"id": "2303.12712"
},
{
"id": "2207.05608"
},
{
"id": "2303.03846"
},
{
"id": "2211.11559"
},
{
"id": "2207.01206"
},
{
"id": "2303.09014"
},
{
"id": "2205.12255"
},
{
"id": "2208.03188"
}
] |
2308.01423 | 66 | limiting diameter" > Entering new TableSearcher chain... [Table Searcher] Thought: To search for other materials and provide information on their pore limiting diameter, we can use the pandas function loc[] to filter the dataframe based on a condition. We can then select the "name" and "Pore limiting diameter (A)" columns to display the information. [Table Searcher] Input: print(df.loc[:, ["name", "Pore limiting diameter (A)"]].to_markdown()) ValueError: The number of tokens has been exceeded. | 2308.01423#66 | ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks | ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to
predict and generate metal-organic frameworks (MOFs). By leveraging a
large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key
details from textual inputs and delivers appropriate responses, thus
eliminating the necessity for rigid structured queries. The system is comprised
of three core components (i.e. an agent, a toolkit, and an evaluator) and it
forms a robust pipeline that manages a variety of tasks, including data
retrieval, property prediction, and structure generations. The study further
explores the merits and constraints of using large language models (LLMs) AI
system in material sciences using and showcases its transformative potential
for future advancements. | http://arxiv.org/pdf/2308.01423 | Yeonghun Kang, Jihan Kim | cs.CL, cs.AI, cs.LG, physics.chem-ph | null | null | cs.CL | 20230801 | 20230825 | [
{
"id": "2302.13971"
},
{
"id": "2306.11296"
},
{
"id": "2303.17580"
},
{
"id": "2305.18365"
},
{
"id": "2305.10601"
},
{
"id": "1810.04805"
},
{
"id": "2211.09085"
},
{
"id": "2304.05376"
},
{
"id": "2212.05238"
},
{
"id": "2108.07258"
},
{
"id": "2110.06197"
},
{
"id": "2306.06283"
},
{
"id": "2008.02275"
},
{
"id": "2303.12712"
},
{
"id": "2210.03629"
},
{
"id": "2205.00445"
},
{
"id": "2009.03300"
}
] |
2308.00245 | 67 | LLMs for Software Engineering. Xia et al. [36] propose an automated conversation-driven program repair tool using ChatGPT, achieving a nearly 50% success rate. Pearce et al. [25] examine zero-shot vulnerability repair using LLMs and found promise in synthetic and hand-crafted scenarios but faced challenges in real-world examples. Chen et al. [5] teach LLMs to debug their own predicted programs to increase correctness, but only on relatively simple programs. Lemieux et al. [14] leverage an LLM to generate tests for uncovered functions when the search-based approach stalls on coverage. Feng and Chen [7] use an LLM to replay Android bugs automatically. Recently, LangChain proposed LangSmith [13], an LLM-powered platform for debugging, testing, and evaluating. These diverse applications underline the vast potential of LLMs in
software engineering. LLift complements these efforts by demonstrating the efficacy of LLMs in bug finding in the real world.
9 CONCLUSION This work presents a novel approach that utilizes LLMs to aid static analysis using a completely automated agent. By carefully considering the scope and designing the interactions with LLMs, our solution has yielded promising results. We believe our effort only scratched the surface of the vast design space, and hope our work will inspire future research in this exciting direction. | 2308.00245#67 | The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models | Static analysis is a widely used technique in software engineering for
identifying and mitigating bugs. However, a significant hurdle lies in
achieving a delicate balance between precision and scalability. Large Language
Models (LLMs) offer a promising alternative, as recent advances demonstrate
remarkable capabilities in comprehending, generating, and even debugging code.
Yet, the logic of bugs can be complex and require sophisticated reasoning and a
large analysis scope spanning multiple functions. Therefore, at this point,
LLMs are better used in an assistive role to complement static analysis. In
this paper, we take a deep dive into the open space of LLM-assisted static
analysis, using use-before-initialization (UBI) bugs as a case study. To this
end, we develop LLift, a fully automated framework that interfaces with both a
static analysis tool and an LLM. By carefully designing the framework and the
prompts, we are able to overcome a number of challenges, including bug-specific
modeling, the large problem scope, the non-deterministic nature of LLMs, etc.
Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs
produced by static analysis, LLift demonstrates a potent capability, showcasing
a reasonable precision (50%) and appearing to have no missing bugs. It even
identified 13 previously unknown UBI bugs in the Linux kernel. This research
paves the way for new opportunities and methodologies in using LLMs for bug
discovery in extensive, real-world datasets. | http://arxiv.org/pdf/2308.00245 | Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian | cs.SE, cs.AI | null | null | cs.SE | 20230801 | 20231115 | [
{
"id": "2305.10601"
},
{
"id": "2107.03374"
},
{
"id": "2210.11610"
},
{
"id": "2305.16291"
},
{
"id": "2305.16151"
},
{
"id": "2303.18223"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "2203.02155"
},
{
"id": "2304.11938"
},
{
"id": "2304.03262"
},
{
"id": "2304.10513"
},
{
"id": "2201.11903"
},
{
"id": "2305.12138"
},
{
"id": "2305.12865"
},
{
"id": "2303.08774"
},
{
"id": "2306.01987"
},
{
"id": "2304.06815"
}
] |
2308.00352 | 67 | library is used for creating the GUI of the application. It provides various widgets like frames and canvases which are used in this project. The 'Pillow' library is used for handling image files. It is used in the 'file_manager.py' for saving and opening image files. The 'main.py' file is the entry point of the application. It initializes all the other components and starts the application loop. ## Anything UNCLEAR The project requirements and implementation approach are clear. However, we need to ensure that all team members are familiar with the Tkinter and Pillow libraries. If not, they may need some time to learn these libraries before starting the development. | 2308.00352#67 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | Remarkable progress has been made on automated problem solving through
societies of agents based on large language models (LLMs). Existing LLM-based
multi-agent systems can already solve simple dialogue tasks. Solutions to more
complex tasks, however, are complicated through logic inconsistencies due to
cascading hallucinations caused by naively chaining LLMs. Here we introduce
MetaGPT, an innovative meta-programming framework incorporating efficient human
workflows into LLM-based multi-agent collaborations. MetaGPT encodes
Standardized Operating Procedures (SOPs) into prompt sequences for more
streamlined workflows, thus allowing agents with human-like domain expertise to
verify intermediate results and reduce errors. MetaGPT utilizes an assembly
line paradigm to assign diverse roles to various agents, efficiently breaking
down complex tasks into subtasks involving many agents working together. On
collaborative software engineering benchmarks, MetaGPT generates more coherent
solutions than previous chat-based multi-agent systems. Our project can be
found at https://github.com/geekan/MetaGPT | http://arxiv.org/pdf/2308.00352 | Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, Jürgen Schmidhuber | cs.AI, cs.MA | null | null | cs.AI | 20230801 | 20231106 | [
{
"id": "2308.12950"
},
{
"id": "2305.17066"
},
{
"id": "1511.09249"
},
{
"id": "2308.11432"
},
{
"id": "2306.08568"
},
{
"id": "2310.02304"
},
{
"id": "2303.08896"
},
{
"id": "2204.05999"
},
{
"id": "2309.16797"
},
{
"id": "2002.08155"
},
{
"id": "2305.16960"
}
] |
2308.00675 | 67 | Figure 10: The documentation examples from GCP CLI. We crawl the website, remove the HTML tags and apply the renaming procedure as the documentation of the created LLM-Cloud CLI.
# B.4 LLM-Cloud CLI
More examples. In Table 2, we show more examples of the created LLM-Cloud CLI dataset, based on GCP CLI.
Creating tool documentations. On the LLM-Cloud CLI dataset, we create tool documentations using the widely-used BeautifulSoup 2 library to scrape the GCP CLI documentation. We removed HTML tags and implemented the renaming procedures for LLM-Cloud CLI documentation. We note that we purposely do not eliminate unrelated content such as terms and hyperlinks. An example documentation from GCP before our renaming procedure is shown in Figure 10. This is to prevent excessive engineering of the documentation for better assessing the robustness of LLM documentation reading ability.
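As a rough illustration of the scraping step just described, the following sketch fetches one documentation page, strips the HTML tags with BeautifulSoup, and keeps the remaining plain text (including unrelated terms and hyperlink text, as noted above). The use of requests and the exact parsing options are assumptions, not the authors' code.

```python
import requests
from bs4 import BeautifulSoup

def scrape_doc(url: str) -> str:
    """Fetch a documentation page and return its tag-free text."""
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    # get_text() removes all HTML tags; surrounding content such as terms
    # and hyperlink text is intentionally kept, as described above.
    return soup.get_text(separator=" ", strip=True)
```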
Documentation retrieval details. Given the extensive number of command-line tools in our exper- iments (200 in total), the complete documentation cannot fit within a single prompt. Consequently, for each query, we employ a simple TF-IDF search to retrieve the top 10 relevant documentations. We then truncate the length to a maximum of 600 words. We note that the actual token count depends on the tokenizer used by each LLM and is typically more than 600.
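A minimal sketch of the retrieval step described above, assuming scikit-learn for the TF-IDF computation and a list `docs` of per-command documentation strings (both assumptions, not the authors' implementation): rank the documentations against the query, keep the top 10, and truncate each to at most 600 words.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def retrieve_docs(query: str, docs: list[str], k: int = 10, max_words: int = 600) -> list[str]:
    vectorizer = TfidfVectorizer()
    doc_matrix = vectorizer.fit_transform(docs)           # TF-IDF vectors for all documentations
    query_vec = vectorizer.transform([query])             # TF-IDF vector for the query
    scores = cosine_similarity(query_vec, doc_matrix)[0]  # similarity of the query to each doc
    top_idx = scores.argsort()[::-1][:k]                  # indices of the k most similar docs
    # Truncate each retrieved documentation to at most `max_words` words.
    return [" ".join(docs[i].split()[:max_words]) for i in top_idx]
```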
[2] https://pypi.org/project/beautifulsoup4/ | 2308.00675#67 | Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models | Today, large language models (LLMs) are taught to use new tools by providing
a few demonstrations of the tool's usage. Unfortunately, demonstrations are
hard to acquire, and can result in undesirable biased usage if the wrong
demonstration is chosen. Even in the rare scenario that demonstrations are
readily available, there is no principled selection protocol to determine how
many and which ones to provide. As tasks grow more complex, the selection
search grows combinatorially and invariably becomes intractable. Our work
provides an alternative to demonstrations: tool documentation. We advocate the
use of tool documentation, descriptions for the individual tool usage, over
demonstrations. We substantiate our claim through three main empirical findings
on 6 tasks across both vision and language modalities. First, on existing
benchmarks, zero-shot prompts with only tool documentation are sufficient for
eliciting proper tool usage, achieving performance on par with few-shot
prompts. Second, on a newly collected realistic tool-use dataset with hundreds
of available tool APIs, we show that tool documentation is significantly more
valuable than demonstrations, with zero-shot documentation significantly
outperforming few-shot without documentation. Third, we highlight the benefits
of tool documentations by tackling image generation and video tracking using
just-released unseen state-of-the-art models as tools. Finally, we highlight
the possibility of using tool documentation to automatically enable new
applications: by using nothing more than the documentation of GroundingDino,
Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the
just-released Grounded-SAM and Track Anything models. | http://arxiv.org/pdf/2308.00675 | Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister | cs.CL, cs.AI, cs.CV, cs.LG | null | null | cs.CL | 20230801 | 20230801 | [
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2204.10878"
},
{
"id": "1910.08210"
},
{
"id": "2107.07653"
},
{
"id": "2201.11903"
},
{
"id": "2305.17126"
},
{
"id": "1704.07535"
},
{
"id": "2205.01068"
},
{
"id": "2203.05115"
},
{
"id": "2303.08128"
},
{
"id": "2303.03378"
},
{
"id": "2305.04091"
},
{
"id": "2303.05499"
},
{
"id": "2107.07566"
},
{
"id": "2110.14168"
},
{
"id": "2210.03350"
},
{
"id": "2303.11381"
},
{
"id": "2101.06804"
},
{
"id": "2304.08354"
},
{
"id": "2212.14024"
},
{
"id": "2305.18752"
},
{
"id": "2211.10435"
},
{
"id": "2303.04671"
},
{
"id": "2210.12810"
},
{
"id": "1808.09588"
},
{
"id": "2304.11062"
},
{
"id": "2210.03629"
},
{
"id": "2303.05398"
},
{
"id": "2210.02406"
},
{
"id": "2212.10560"
},
{
"id": "2303.04129"
},
{
"id": "1704.01696"
},
{
"id": "2302.00923"
},
{
"id": "2211.12588"
},
{
"id": "1908.03557"
},
{
"id": "2210.05359"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2201.08239"
},
{
"id": "2303.16199"
},
{
"id": "2304.09842"
},
{
"id": "2204.01691"
},
{
"id": "2305.01625"
},
{
"id": "2303.12712"
},
{
"id": "2207.05608"
},
{
"id": "2303.03846"
},
{
"id": "2211.11559"
},
{
"id": "2207.01206"
},
{
"id": "2303.09014"
},
{
"id": "2205.12255"
},
{
"id": "2208.03188"
}
] |
2308.01423 | 67 | | count | 12020 | | mean | 4.87758 | | std | 2.78185 | | min | 2.40006 | | 25% | 3.24153 | | 50% | 4.10336 | | 75% | 5.64609 | | max | 71.502 |
Figure S3. ChatMOF's answer to the question "How does the pore limiting diameter of
YUSGID_clean compare with other materials?" using gpt-4 and gpt-3.5-turbo. The answer is
successfully derived for gpt-4 but raises a tokenError for gpt-3.5-turbo.
What is the largest cavity diameter of ASOVEL_clean? | 2308.01423#67 | ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks | ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to
predict and generate metal-organic frameworks (MOFs). By leveraging a
large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key
details from textual inputs and delivers appropriate responses, thus
eliminating the necessity for rigid structured queries. The system is comprised
of three core components (i.e. an agent, a toolkit, and an evaluator) and it
forms a robust pipeline that manages a variety of tasks, including data
retrieval, property prediction, and structure generations. The study further
explores the merits and constraints of using large language models (LLMs) AI
system in material sciences using and showcases its transformative potential
for future advancements. | http://arxiv.org/pdf/2308.01423 | Yeonghun Kang, Jihan Kim | cs.CL, cs.AI, cs.LG, physics.chem-ph | null | null | cs.CL | 20230801 | 20230825 | [
{
"id": "2302.13971"
},
{
"id": "2306.11296"
},
{
"id": "2303.17580"
},
{
"id": "2305.18365"
},
{
"id": "2305.10601"
},
{
"id": "1810.04805"
},
{
"id": "2211.09085"
},
{
"id": "2304.05376"
},
{
"id": "2212.05238"
},
{
"id": "2108.07258"
},
{
"id": "2110.06197"
},
{
"id": "2306.06283"
},
{
"id": "2008.02275"
},
{
"id": "2303.12712"
},
{
"id": "2210.03629"
},
{
"id": "2205.00445"
},
{
"id": "2009.03300"
}
] |
2308.00245 | 68 | REFERENCES [1] Toufique Ahmed, Kunal Suresh Pai, Premkumar Devanbu, and Earl T. Barr. Improving Few-Shot Prompts with Relevant Static Analysis Products.
2023. http://arxiv.org/abs/2304.06815 arXiv:2304.06815 [cs].
[2] Anthropic (2023). 2023. Claude 2. https://www.anthropic.com/index/claude-2 [3] Jiuhai Chen, Lichang Chen, Heng Huang, and Tianyi Zhou. 2023. When do you need Chain-of-Thought Prompting for ChatGPT? http://arxiv.org/abs/2304.03262 arXiv:2304.03262 [cs].
[4] Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. 2021. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374 (2021). | 2308.00245#68 | The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models | Static analysis is a widely used technique in software engineering for
identifying and mitigating bugs. However, a significant hurdle lies in
achieving a delicate balance between precision and scalability. Large Language
Models (LLMs) offer a promising alternative, as recent advances demonstrate
remarkable capabilities in comprehending, generating, and even debugging code.
Yet, the logic of bugs can be complex and require sophisticated reasoning and a
large analysis scope spanning multiple functions. Therefore, at this point,
LLMs are better used in an assistive role to complement static analysis. In
this paper, we take a deep dive into the open space of LLM-assisted static
analysis, using use-before-initialization (UBI) bugs as a case study. To this
end, we develop LLift, a fully automated framework that interfaces with both a
static analysis tool and an LLM. By carefully designing the framework and the
prompts, we are able to overcome a number of challenges, including bug-specific
modeling, the large problem scope, the non-deterministic nature of LLMs, etc.
Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs
produced by static analysis, LLift demonstrates a potent capability, showcasing
a reasonable precision (50%) and appearing to have no missing bugs. It even
identified 13 previously unknown UBI bugs in the Linux kernel. This research
paves the way for new opportunities and methodologies in using LLMs for bug
discovery in extensive, real-world datasets. | http://arxiv.org/pdf/2308.00245 | Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian | cs.SE, cs.AI | null | null | cs.SE | 20230801 | 20231115 | [
{
"id": "2305.10601"
},
{
"id": "2107.03374"
},
{
"id": "2210.11610"
},
{
"id": "2305.16291"
},
{
"id": "2305.16151"
},
{
"id": "2303.18223"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "2203.02155"
},
{
"id": "2304.11938"
},
{
"id": "2304.03262"
},
{
"id": "2304.10513"
},
{
"id": "2201.11903"
},
{
"id": "2305.12138"
},
{
"id": "2305.12865"
},
{
"id": "2303.08774"
},
{
"id": "2306.01987"
},
{
"id": "2304.06815"
}
] |
2308.00352 | 68 |
Engineer Given the provided file structure and function definitions, an Engineer agent requires only fundamental development skills to complete the development tasks. Due to the large number of files, we present only one auto-generated code file here.
Generated code (color_picker.py)
import tkinter as tk from tkinter import colorchooser class ColorPicker: def __init__(self, root: tk.Tk, color: str = 'black'): self.frame = tk.Frame(root) self.color = color self.color_button = tk.Button(self.frame, text='Color', command=self.select_color) self.color_button.pack(fill='x') def select_color(self): color = colorchooser.askcolor()[1] if color is not None: self.color = color def pack(self, **kwargs): self.frame.pack(**kwargs) def get_color(self): return self.color
QA Engineer Upon receiving the code output from the Engineer, the QA Engineer generates unit test code and reviews it to identify and fix any bugs, ensuring high-quality software.
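For illustration only, a minimal unittest sketch of the kind of test such a QA Engineer agent might produce for the ColorPicker class above (not the agent's actual output); it assumes the class lives in color_picker.py, per the file list, and that a display is available for Tkinter.

```python
import tkinter as tk
import unittest

from color_picker import ColorPicker  # module name taken from the file list above

class TestColorPicker(unittest.TestCase):
    def setUp(self):
        self.root = tk.Tk()
        self.picker = ColorPicker(self.root)

    def tearDown(self):
        self.root.destroy()

    def test_default_color_is_black(self):
        # The constructor defaults the color to 'black'.
        self.assertEqual(self.picker.get_color(), "black")

if __name__ == "__main__":
    unittest.main()
```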
# Generated Unit tests (test.py) | 2308.00352#68 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | Remarkable progress has been made on automated problem solving through
societies of agents based on large language models (LLMs). Existing LLM-based
multi-agent systems can already solve simple dialogue tasks. Solutions to more
complex tasks, however, are complicated through logic inconsistencies due to
cascading hallucinations caused by naively chaining LLMs. Here we introduce
MetaGPT, an innovative meta-programming framework incorporating efficient human
workflows into LLM-based multi-agent collaborations. MetaGPT encodes
Standardized Operating Procedures (SOPs) into prompt sequences for more
streamlined workflows, thus allowing agents with human-like domain expertise to
verify intermediate results and reduce errors. MetaGPT utilizes an assembly
line paradigm to assign diverse roles to various agents, efficiently breaking
down complex tasks into subtasks involving many agents working together. On
collaborative software engineering benchmarks, MetaGPT generates more coherent
solutions than previous chat-based multi-agent systems. Our project can be
found at https://github.com/geekan/MetaGPT | http://arxiv.org/pdf/2308.00352 | Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, Jürgen Schmidhuber | cs.AI, cs.MA | null | null | cs.AI | 20230801 | 20231106 | [
{
"id": "2308.12950"
},
{
"id": "2305.17066"
},
{
"id": "1511.09249"
},
{
"id": "2308.11432"
},
{
"id": "2306.08568"
},
{
"id": "2310.02304"
},
{
"id": "2303.08896"
},
{
"id": "2204.05999"
},
{
"id": "2309.16797"
},
{
"id": "2002.08155"
},
{
"id": "2305.16960"
}
] |
2308.01423 | 68 | successfully derived for gpt-4 but raises a tokenError for gpt-3.5-turbo.
What is the largest cavity diameter of ASOVEL_clean?
What's the pore limiting diameter of XEXLUJ_clean? How large is the free pore diameter for DIBQUC? What's the density of ROLCEC19? Can you tell me the accessible surface area of ALIBUT? What is the non-accessible surface area of ARAHIM02? What is the accessible volume fraction for EREGOY_clean? Can you tell me about the accessible volume of UZANOZ? What's the non-accessible volume for NAHFOU? What type of metal is present in PITPEP_clean? Does DUBKAO have an open metal site? Can you tell me the type of open metal in FECYUJ? Can you provide the top 5 materials with the highest accessible surface area? Which materials have the highest non-accessible volume? What's the largest cavity diameter in materials with a density greater than a 7.0?
Can you provide a list of materials with an open metal site?
What's the accessible volume fraction for materials with a density less than a 0.25?
How does the pore limiting diameter of YUSGID_clean compare with other materials?
What materials have the largest free pore diameter? | 2308.01423#68 | ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks | ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to
predict and generate metal-organic frameworks (MOFs). By leveraging a
large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key
details from textual inputs and delivers appropriate responses, thus
eliminating the necessity for rigid structured queries. The system is comprised
of three core components (i.e. an agent, a toolkit, and an evaluator) and it
forms a robust pipeline that manages a variety of tasks, including data
retrieval, property prediction, and structure generations. The study further
explores the merits and constraints of using large language models (LLMs) AI
system in material sciences using and showcases its transformative potential
for future advancements. | http://arxiv.org/pdf/2308.01423 | Yeonghun Kang, Jihan Kim | cs.CL, cs.AI, cs.LG, physics.chem-ph | null | null | cs.CL | 20230801 | 20230825 | [
{
"id": "2302.13971"
},
{
"id": "2306.11296"
},
{
"id": "2303.17580"
},
{
"id": "2305.18365"
},
{
"id": "2305.10601"
},
{
"id": "1810.04805"
},
{
"id": "2211.09085"
},
{
"id": "2304.05376"
},
{
"id": "2212.05238"
},
{
"id": "2108.07258"
},
{
"id": "2110.06197"
},
{
"id": "2306.06283"
},
{
"id": "2008.02275"
},
{
"id": "2303.12712"
},
{
"id": "2210.03629"
},
{
"id": "2205.00445"
},
{
"id": "2009.03300"
}
] |
2308.00245 | 69 | [5] Xinyun Chen, Maxwell Lin, Nathanael Schärli, and Denny Zhou. 2023. Teaching Large Language Models to Self-Debug. http://arxiv.org/abs/2304.05128
[6] Ted Chiang. 2023. ChatGPT Is a Blurry JPEG of the Web. The New Yorker (Feb. 2023). https://www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web Section: annals of artificial intelligence.
[7] Sidong Feng and Chunyang Chen. 2023. Prompting Is All Your Need: Automated Android Bug Replay with Large Language Models. https://doi.org/10.48550/arXiv.2306.01987 arXiv:2306.01987 [cs].
[8] Github. 2023. GitHub Copilot documentation. https://ghdocs-prod.azurewebsites.net/_next/data/mHA_XfBBaMPyfcP0Q05C5/en/free-pro-team@latest/copilot.json?versionId=free-pro-team%40latest&productId=copilot | 2308.00245#69 | The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models | Static analysis is a widely used technique in software engineering for
identifying and mitigating bugs. However, a significant hurdle lies in
achieving a delicate balance between precision and scalability. Large Language
Models (LLMs) offer a promising alternative, as recent advances demonstrate
remarkable capabilities in comprehending, generating, and even debugging code.
Yet, the logic of bugs can be complex and require sophisticated reasoning and a
large analysis scope spanning multiple functions. Therefore, at this point,
LLMs are better used in an assistive role to complement static analysis. In
this paper, we take a deep dive into the open space of LLM-assisted static
analysis, using use-before-initialization (UBI) bugs as a case study. To this
end, we develop LLift, a fully automated framework that interfaces with both a
static analysis tool and an LLM. By carefully designing the framework and the
prompts, we are able to overcome a number of challenges, including bug-specific
modeling, the large problem scope, the non-deterministic nature of LLMs, etc.
Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs
produced by static analysis, LLift demonstrates a potent capability, showcasing
a reasonable precision (50%) and appearing to have no missing bugs. It even
identified 13 previously unknown UBI bugs in the Linux kernel. This research
paves the way for new opportunities and methodologies in using LLMs for bug
discovery in extensive, real-world datasets. | http://arxiv.org/pdf/2308.00245 | Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian | cs.SE, cs.AI | null | null | cs.SE | 20230801 | 20231115 | [
{
"id": "2305.10601"
},
{
"id": "2107.03374"
},
{
"id": "2210.11610"
},
{
"id": "2305.16291"
},
{
"id": "2305.16151"
},
{
"id": "2303.18223"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "2203.02155"
},
{
"id": "2304.11938"
},
{
"id": "2304.03262"
},
{
"id": "2304.10513"
},
{
"id": "2201.11903"
},
{
"id": "2305.12138"
},
{
"id": "2305.12865"
},
{
"id": "2303.08774"
},
{
"id": "2306.01987"
},
{
"id": "2304.06815"
}
] |
2308.00352 | 69 | # Generated Unit tests (test.py)
import unittest
import tkinter as tk
# You might want to move the ColorPicker class to another file, let's say color_picker.py
# from color_picker import ColorPicker

class TestColorPicker(unittest.TestCase):
    ...  # test method bodies are elided in this excerpt

if __name__ == '__main__':
    unittest.main()
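The excerpt above elides the generated test bodies (only the class skeleton survives). For clarity, one illustrative unit test is sketched below; it is an assumption, not the paper's actual generated tests, and it exercises only the ColorPicker class shown earlier:

import unittest
import tkinter as tk
from color_picker import ColorPicker  # hypothetical module name, following the excerpt's comment

class TestColorPickerDefaults(unittest.TestCase):
    def test_initial_color_is_returned(self):
        # Building Tk widgets needs a display; skip gracefully in headless environments.
        try:
            root = tk.Tk()
        except tk.TclError:
            self.skipTest("no display available")
        picker = ColorPicker(root, color="red")
        self.assertEqual(picker.get_color(), "red")
        root.destroy()

if __name__ == "__main__":
    unittest.main()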
Output Ultimately, as shown in Figure 10, MetaGPT generates a functional application named "Drawing App".
Figure 10: The "Drawing App" generated by MetaGPT.
C EXPERIMENTS
C.1 DETAILS OF THE SOFTWAREDEV DATASET
The SoftwareDev dataset includes 70 diverse software development tasks. Table 5 displays the names and detailed prompts of 11 tasks within the dataset. Note that the first seven tasks listed are used in the main experiments of this paper.
C.2 ADDITIONAL RESULTS | 2308.00352#69 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | Remarkable progress has been made on automated problem solving through
societies of agents based on large language models (LLMs). Existing LLM-based
multi-agent systems can already solve simple dialogue tasks. Solutions to more
complex tasks, however, are complicated through logic inconsistencies due to
cascading hallucinations caused by naively chaining LLMs. Here we introduce
MetaGPT, an innovative meta-programming framework incorporating efficient human
workflows into LLM-based multi-agent collaborations. MetaGPT encodes
Standardized Operating Procedures (SOPs) into prompt sequences for more
streamlined workflows, thus allowing agents with human-like domain expertise to
verify intermediate results and reduce errors. MetaGPT utilizes an assembly
line paradigm to assign diverse roles to various agents, efficiently breaking
down complex tasks into subtasks involving many agents working together. On
collaborative software engineering benchmarks, MetaGPT generates more coherent
solutions than previous chat-based multi-agent systems. Our project can be
found at https://github.com/geekan/MetaGPT | http://arxiv.org/pdf/2308.00352 | Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, Jürgen Schmidhuber | cs.AI, cs.MA | null | null | cs.AI | 20230801 | 20231106 | [
{
"id": "2308.12950"
},
{
"id": "2305.17066"
},
{
"id": "1511.09249"
},
{
"id": "2308.11432"
},
{
"id": "2306.08568"
},
{
"id": "2310.02304"
},
{
"id": "2303.08896"
},
{
"id": "2204.05999"
},
{
"id": "2309.16797"
},
{
"id": "2002.08155"
},
{
"id": "2305.16960"
}
] |
2308.00675 | 69 | Question Commands in GCP Commands after renaming (Final Answer) Show me how to deploy ocr-xer container and invoke it with a schedule every 2 hours on a project 'test_proj' in sdk command lines. The ocr-xer container is located at 'us-docker.pkg.dev/gcr-cleaner/ocr-xer/ocr-xer'. • gcloud config set project test_proj ocr-xer • gcloud run deploy --image=us-docker.pkg.dev/gcr-cleaner/ocr-xer/ocr-xer • gcloud scheduler jobs create http NAME --schedule --schedule="0 */2 * * *" • llmcloud test_proj • llmcloud config set run deploy project ocr-xer --image=us-docker.pkg.dev/gcr-cleaner/ocr-xer/ocr-xer • llmcloud scheduler jobs make http NAME --schedule --schedule="0 */2 * * *" How to deploy a machine learning model model.pt saved in my local to cloud via sdk command line? • gsutil cp model.pt LOC/model.pt • gcloud ai-platform versions create VERSION --model MODEL --origin gs://LOC/model.pt • llmutil cp | 2308.00675#69 | Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models | Today, large language models (LLMs) are taught to use new tools by providing
a few demonstrations of the tool's usage. Unfortunately, demonstrations are
hard to acquire, and can result in undesirable biased usage if the wrong
demonstration is chosen. Even in the rare scenario that demonstrations are
readily available, there is no principled selection protocol to determine how
many and which ones to provide. As tasks grow more complex, the selection
search grows combinatorially and invariably becomes intractable. Our work
provides an alternative to demonstrations: tool documentation. We advocate the
use of tool documentation, descriptions for the individual tool usage, over
demonstrations. We substantiate our claim through three main empirical findings
on 6 tasks across both vision and language modalities. First, on existing
benchmarks, zero-shot prompts with only tool documentation are sufficient for
eliciting proper tool usage, achieving performance on par with few-shot
prompts. Second, on a newly collected realistic tool-use dataset with hundreds
of available tool APIs, we show that tool documentation is significantly more
valuable than demonstrations, with zero-shot documentation significantly
outperforming few-shot without documentation. Third, we highlight the benefits
of tool documentations by tackling image generation and video tracking using
just-released unseen state-of-the-art models as tools. Finally, we highlight
the possibility of using tool documentation to automatically enable new
applications: by using nothing more than the documentation of GroundingDino,
Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the
just-released Grounded-SAM and Track Anything models. | http://arxiv.org/pdf/2308.00675 | Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister | cs.CL, cs.AI, cs.CV, cs.LG | null | null | cs.CL | 20230801 | 20230801 | [
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2204.10878"
},
{
"id": "1910.08210"
},
{
"id": "2107.07653"
},
{
"id": "2201.11903"
},
{
"id": "2305.17126"
},
{
"id": "1704.07535"
},
{
"id": "2205.01068"
},
{
"id": "2203.05115"
},
{
"id": "2303.08128"
},
{
"id": "2303.03378"
},
{
"id": "2305.04091"
},
{
"id": "2303.05499"
},
{
"id": "2107.07566"
},
{
"id": "2110.14168"
},
{
"id": "2210.03350"
},
{
"id": "2303.11381"
},
{
"id": "2101.06804"
},
{
"id": "2304.08354"
},
{
"id": "2212.14024"
},
{
"id": "2305.18752"
},
{
"id": "2211.10435"
},
{
"id": "2303.04671"
},
{
"id": "2210.12810"
},
{
"id": "1808.09588"
},
{
"id": "2304.11062"
},
{
"id": "2210.03629"
},
{
"id": "2303.05398"
},
{
"id": "2210.02406"
},
{
"id": "2212.10560"
},
{
"id": "2303.04129"
},
{
"id": "1704.01696"
},
{
"id": "2302.00923"
},
{
"id": "2211.12588"
},
{
"id": "1908.03557"
},
{
"id": "2210.05359"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2201.08239"
},
{
"id": "2303.16199"
},
{
"id": "2304.09842"
},
{
"id": "2204.01691"
},
{
"id": "2305.01625"
},
{
"id": "2303.12712"
},
{
"id": "2207.05608"
},
{
"id": "2303.03846"
},
{
"id": "2211.11559"
},
{
"id": "2207.01206"
},
{
"id": "2303.09014"
},
{
"id": "2205.12255"
},
{
"id": "2208.03188"
}
] |
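Chunk 2308.00675#69 above pairs GCP SDK commands with "renamed" counterparts (gcloud -> llmcloud, gsutil -> llmutil, and in one example create -> make) so that correct tool use cannot come from memorized command names. A minimal sketch of such a rewrite, assuming only the substitutions visible in the chunk rather than the paper's actual rule set:

# Hypothetical renaming map; only the substitutions visible in chunk 2308.00675#69 are encoded.
RENAMES = {"gcloud": "llmcloud", "gsutil": "llmutil", "create": "make"}

def rename_command(cmd: str) -> str:
    # Token-wise substitution; flags and arguments pass through unchanged.
    return " ".join(RENAMES.get(tok, tok) for tok in cmd.split())

print(rename_command('gcloud scheduler jobs create http NAME --schedule="0 */2 * * *"'))
# -> llmcloud scheduler jobs make http NAME --schedule="0 */2 * * *"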
2308.01423 | 69 | How does the pore limiting diameter of YUSGID_clean compare with other materials?
What materials have the largest free pore diameter?
Is there an open metal site in NELVAC?
Can you provide a list of materials with the highest pore limiting diameter?
What materials have the largest accessible volume?
What's the non-accessible surface area for WAWGOQ_clean?
What is the type of open metal in YUNJIB?
Can you list the top 10 materials with the highest non-accessible surface area?
What's the largest cavity diameter in materials with a density less than a 0.8? Can cg901114e_si_002 have an open metal site? What materials have the largest accessible volume fraction? What is the pore limiting diameter of LOLREL? How does the accessible volume of OSIXEU compare with other materials? Can you provide the top 10 materials with the highest density? What's the largest free pore diameter in materials with a density less than a 0.8? How high is the non-accessible volume for FIJDIM05? What materials have the largest cavity diameter? What is the metal type in FOTNIN? How high is the accessible volume fraction for WOBCAT? Can you list materials with a pore limiting diameter greater than a 6.0? What is the density of materials with an accessible surface area above a 3000 m2/g?
49 | 2308.01423#69 | ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks | ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to
predict and generate metal-organic frameworks (MOFs). By leveraging a
large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key
details from textual inputs and delivers appropriate responses, thus
eliminating the necessity for rigid structured queries. The system is comprised
of three core components (i.e. an agent, a toolkit, and an evaluator) and it
forms a robust pipeline that manages a variety of tasks, including data
retrieval, property prediction, and structure generations. The study further
explores the merits and constraints of using large language models (LLMs) AI
system in material sciences using and showcases its transformative potential
for future advancements. | http://arxiv.org/pdf/2308.01423 | Yeonghun Kang, Jihan Kim | cs.CL, cs.AI, cs.LG, physics.chem-ph | null | null | cs.CL | 20230801 | 20230825 | [
{
"id": "2302.13971"
},
{
"id": "2306.11296"
},
{
"id": "2303.17580"
},
{
"id": "2305.18365"
},
{
"id": "2305.10601"
},
{
"id": "1810.04805"
},
{
"id": "2211.09085"
},
{
"id": "2304.05376"
},
{
"id": "2212.05238"
},
{
"id": "2108.07258"
},
{
"id": "2110.06197"
},
{
"id": "2306.06283"
},
{
"id": "2008.02275"
},
{
"id": "2303.12712"
},
{
"id": "2210.03629"
},
{
"id": "2205.00445"
},
{
"id": "2009.03300"
}
] |
2308.00245 | 70 | [9] Anjana Gosain and Ganga Sharma. 2015. Static Analysis: A Survey of Techniques and Tools. In Intelligent Computing and Applications (Advances in Intelligent Systems and Computing), Durbadal Mandal, Rajib Kar, Swagatam Das, and Bijaya Ketan Panigrahi (Eds.). Springer India, New Delhi, 581-591.
[10] Jiaxin Huang, Shixiang Shane Gu, Le Hou, Yuexin Wu, Xuezhi Wang, Hongkun Yu, and Jiawei Han. 2022. Large Language Models Can Self-Improve. http://arxiv.org/abs/2210.11610 arXiv:2210.11610 [cs]. | 2308.00245#70 | The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models | Static analysis is a widely used technique in software engineering for
identifying and mitigating bugs. However, a significant hurdle lies in
achieving a delicate balance between precision and scalability. Large Language
Models (LLMs) offer a promising alternative, as recent advances demonstrate
remarkable capabilities in comprehending, generating, and even debugging code.
Yet, the logic of bugs can be complex and require sophisticated reasoning and a
large analysis scope spanning multiple functions. Therefore, at this point,
LLMs are better used in an assistive role to complement static analysis. In
this paper, we take a deep dive into the open space of LLM-assisted static
analysis, using use-before-initialization (UBI) bugs as a case study. To this
end, we develop LLift, a fully automated framework that interfaces with both a
static analysis tool and an LLM. By carefully designing the framework and the
prompts, we are able to overcome a number of challenges, including bug-specific
modeling, the large problem scope, the non-deterministic nature of LLMs, etc.
Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs
produced by static analysis, LLift demonstrates a potent capability, showcasing
a reasonable precision (50%) and appearing to have no missing bugs. It even
identified 13 previously unknown UBI bugs in the Linux kernel. This research
paves the way for new opportunities and methodologies in using LLMs for bug
discovery in extensive, real-world datasets. | http://arxiv.org/pdf/2308.00245 | Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian | cs.SE, cs.AI | null | null | cs.SE | 20230801 | 20231115 | [
{
"id": "2305.10601"
},
{
"id": "2107.03374"
},
{
"id": "2210.11610"
},
{
"id": "2305.16291"
},
{
"id": "2305.16151"
},
{
"id": "2303.18223"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "2203.02155"
},
{
"id": "2304.11938"
},
{
"id": "2304.03262"
},
{
"id": "2304.10513"
},
{
"id": "2201.11903"
},
{
"id": "2305.12138"
},
{
"id": "2305.12865"
},
{
"id": "2303.08774"
},
{
"id": "2306.01987"
},
{
"id": "2304.06815"
}
] |
2308.00352 | 70 | C.2 ADDITIONAL RESULTS
Quantitative results of MetaGPT As shown in Table 4, MetaGPT achieves an average score of 3.9, surpassing ChatDev's score of 2.1 (Zhao et al., 2023), which is based on the Chat chain. General intelligent algorithms such as AutoGPT (Torantulino et al., 2023), by comparison, all score 1.0, failing to generate executable code. We observe that their generated code is often short, lacks comprehensive logic, and tends to fail to handle cross-file dependencies correctly.
While models such as AutoGPT (Torantulino et al., 2023), Langchain (Chase, 2022), and AgentVerse (Chen et al., 2023) display robust general problem-solving capabilities, they lack an essential element for developing complex systems: systematically deconstructing requirements. Conversely, MetaGPT simplifies the process of transforming abstract requirements into detailed class and function designs through a specialized division of labor and SOPs workflow. When compared to ChatDev (Zhao et al., 2023), MetaGPT's structured messaging and feedback mechanisms not only reduce loss of communication information but also improve the execution of code. | 2308.00352#70 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | Remarkable progress has been made on automated problem solving through
societies of agents based on large language models (LLMs). Existing LLM-based
multi-agent systems can already solve simple dialogue tasks. Solutions to more
complex tasks, however, are complicated through logic inconsistencies due to
cascading hallucinations caused by naively chaining LLMs. Here we introduce
MetaGPT, an innovative meta-programming framework incorporating efficient human
workflows into LLM-based multi-agent collaborations. MetaGPT encodes
Standardized Operating Procedures (SOPs) into prompt sequences for more
streamlined workflows, thus allowing agents with human-like domain expertise to
verify intermediate results and reduce errors. MetaGPT utilizes an assembly
line paradigm to assign diverse roles to various agents, efficiently breaking
down complex tasks into subtasks involving many agents working together. On
collaborative software engineering benchmarks, MetaGPT generates more coherent
solutions than previous chat-based multi-agent systems. Our project can be
found at https://github.com/geekan/MetaGPT | http://arxiv.org/pdf/2308.00352 | Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, Jürgen Schmidhuber | cs.AI, cs.MA | null | null | cs.AI | 20230801 | 20231106 | [
{
"id": "2308.12950"
},
{
"id": "2305.17066"
},
{
"id": "1511.09249"
},
{
"id": "2308.11432"
},
{
"id": "2306.08568"
},
{
"id": "2310.02304"
},
{
"id": "2303.08896"
},
{
"id": "2204.05999"
},
{
"id": "2309.16797"
},
{
"id": "2002.08155"
},
{
"id": "2305.16960"
}
] |
2308.00675 | 70 | gsutil cp model.pt LOC/model.pt • gcloud ai-platform versions create VERSION --model MODEL --origin gs://LOC/model.pt • llmutil cp model.pt LOC/model.pt • llmcloud ai-platform versions create VERSION --model MODEL --origin gs://LOC/model.pt How to get transcript of a video test.mp4 at local via the cloud SDK? • ffmpeg -i test.mp4 -ac 2 -f wav output.wav • ffmpeg -i test.mp4 -ac 2 -f wav output.wav • gsutil cp test.wav LOC/test.wav • gcloud ml speech recognize-long- • llmutil cp test.wav LOC/test.wav • llmcloud ml speech recognize-long- running --uri LOC/test.wav running --uri LOC/test.wav How to create a composer environment with a private ip network? • gcloud composer environments create my_env • llmcloud composer environments make my_env • gcloud compute networks subnets update default --enable-private-ip-google-access • llmcloud compute networks sub- --enable- nets update default private-ip-google-access How to create a service account [email protected] with the name | 2308.00675#70 | Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models | Today, large language models (LLMs) are taught to use new tools by providing
a few demonstrations of the tool's usage. Unfortunately, demonstrations are
hard to acquire, and can result in undesirable biased usage if the wrong
demonstration is chosen. Even in the rare scenario that demonstrations are
readily available, there is no principled selection protocol to determine how
many and which ones to provide. As tasks grow more complex, the selection
search grows combinatorially and invariably becomes intractable. Our work
provides an alternative to demonstrations: tool documentation. We advocate the
use of tool documentation, descriptions for the individual tool usage, over
demonstrations. We substantiate our claim through three main empirical findings
on 6 tasks across both vision and language modalities. First, on existing
benchmarks, zero-shot prompts with only tool documentation are sufficient for
eliciting proper tool usage, achieving performance on par with few-shot
prompts. Second, on a newly collected realistic tool-use dataset with hundreds
of available tool APIs, we show that tool documentation is significantly more
valuable than demonstrations, with zero-shot documentation significantly
outperforming few-shot without documentation. Third, we highlight the benefits
of tool documentations by tackling image generation and video tracking using
just-released unseen state-of-the-art models as tools. Finally, we highlight
the possibility of using tool documentation to automatically enable new
applications: by using nothing more than the documentation of GroundingDino,
Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the
just-released Grounded-SAM and Track Anything models. | http://arxiv.org/pdf/2308.00675 | Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister | cs.CL, cs.AI, cs.CV, cs.LG | null | null | cs.CL | 20230801 | 20230801 | [
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2204.10878"
},
{
"id": "1910.08210"
},
{
"id": "2107.07653"
},
{
"id": "2201.11903"
},
{
"id": "2305.17126"
},
{
"id": "1704.07535"
},
{
"id": "2205.01068"
},
{
"id": "2203.05115"
},
{
"id": "2303.08128"
},
{
"id": "2303.03378"
},
{
"id": "2305.04091"
},
{
"id": "2303.05499"
},
{
"id": "2107.07566"
},
{
"id": "2110.14168"
},
{
"id": "2210.03350"
},
{
"id": "2303.11381"
},
{
"id": "2101.06804"
},
{
"id": "2304.08354"
},
{
"id": "2212.14024"
},
{
"id": "2305.18752"
},
{
"id": "2211.10435"
},
{
"id": "2303.04671"
},
{
"id": "2210.12810"
},
{
"id": "1808.09588"
},
{
"id": "2304.11062"
},
{
"id": "2210.03629"
},
{
"id": "2303.05398"
},
{
"id": "2210.02406"
},
{
"id": "2212.10560"
},
{
"id": "2303.04129"
},
{
"id": "1704.01696"
},
{
"id": "2302.00923"
},
{
"id": "2211.12588"
},
{
"id": "1908.03557"
},
{
"id": "2210.05359"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2201.08239"
},
{
"id": "2303.16199"
},
{
"id": "2304.09842"
},
{
"id": "2204.01691"
},
{
"id": "2305.01625"
},
{
"id": "2303.12712"
},
{
"id": "2207.05608"
},
{
"id": "2303.03846"
},
{
"id": "2211.11559"
},
{
"id": "2207.01206"
},
{
"id": "2303.09014"
},
{
"id": "2205.12255"
},
{
"id": "2208.03188"
}
] |
2308.01423 | 70 |
What is the type of open metal in IYAFIX? Can JILXOS have an open metal site? What materials have the largest non-accessible surface area? Can you list the top 5 materials with the highest largest free pore diameter? What is the accessible volume for MOJJUR? Can you provide the top 10 materials with the highest largest cavity diameter? What is the accessible surface area for materials with a pore limiting diameter below a 2.0? What is the non-accessible volume for WABTOK? What materials have the largest pore limiting diameter? What's the type of open metal in QOWRAV11? Can PODVUB have an open metal site? Can you list the top 10 materials with the highest accessible volume fraction? What is the metal type in QULMIU? How does the non-accessible surface area of YICTUZ compare with other materials? Can you provide the top 10 materials with the highest accessible volume? What's the pore limiting diameter in materials with a density greater than a 2.5? How high is the largest free pore diameter for DUFKAS? What materials have the largest density? What is the largest cavity diameter for TONTIB_clean? Can you list the top 5 materials with the highest non-accessible volume?
What is the accessible surface area for UXUPEJ?
What is the density of NATXIR? | 2308.01423#70 | ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks | ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to
predict and generate metal-organic frameworks (MOFs). By leveraging a
large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key
details from textual inputs and delivers appropriate responses, thus
eliminating the necessity for rigid structured queries. The system is comprised
of three core components (i.e. an agent, a toolkit, and an evaluator) and it
forms a robust pipeline that manages a variety of tasks, including data
retrieval, property prediction, and structure generations. The study further
explores the merits and constraints of using large language models (LLMs) AI
system in material sciences using and showcases its transformative potential
for future advancements. | http://arxiv.org/pdf/2308.01423 | Yeonghun Kang, Jihan Kim | cs.CL, cs.AI, cs.LG, physics.chem-ph | null | null | cs.CL | 20230801 | 20230825 | [
{
"id": "2302.13971"
},
{
"id": "2306.11296"
},
{
"id": "2303.17580"
},
{
"id": "2305.18365"
},
{
"id": "2305.10601"
},
{
"id": "1810.04805"
},
{
"id": "2211.09085"
},
{
"id": "2304.05376"
},
{
"id": "2212.05238"
},
{
"id": "2108.07258"
},
{
"id": "2110.06197"
},
{
"id": "2306.06283"
},
{
"id": "2008.02275"
},
{
"id": "2303.12712"
},
{
"id": "2210.03629"
},
{
"id": "2205.00445"
},
{
"id": "2009.03300"
}
] |
2308.00245 | 71 | [11] Ziwei Ji, Nayeon Lee, Rita Frieske, Tiezheng Yu, Dan Su, Yan Xu, Etsuko Ishii, Ye Jin Bang, Andrea Madotto, and Pascale Fung. 2023. Survey of Hallucination in Natural Language Generation. Comput. Surveys 55, 12 (Dec. 2023), 1-38. https://doi.org/10.1145/3571730
[12] Jack Krawczyk and Amarnag Subramanya. 2023. Bard's latest update: more features, languages and countries. https://blog.google/products/bard/google-bard-new-features-update-july-2023/
[13] LangChain (2023). 2023. Announcing LangSmith, a unified platform for debugging, testing, evaluating, and monitoring your LLM applications. https://blog.langchain.dev/announcing-langsmith/
[14] Caroline Lemieux, Jeevana Priya Inala, Shuvendu K Lahiri, and Siddhartha Sen. 2023. CODAMOSA: Escaping Coverage Plateaus in Test Generation with Pre-trained Large Language Models. (2023). | 2308.00245#71 | The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models | Static analysis is a widely used technique in software engineering for
identifying and mitigating bugs. However, a significant hurdle lies in
achieving a delicate balance between precision and scalability. Large Language
Models (LLMs) offer a promising alternative, as recent advances demonstrate
remarkable capabilities in comprehending, generating, and even debugging code.
Yet, the logic of bugs can be complex and require sophisticated reasoning and a
large analysis scope spanning multiple functions. Therefore, at this point,
LLMs are better used in an assistive role to complement static analysis. In
this paper, we take a deep dive into the open space of LLM-assisted static
analysis, using use-before-initialization (UBI) bugs as a case study. To this
end, we develop LLift, a fully automated framework that interfaces with both a
static analysis tool and an LLM. By carefully designing the framework and the
prompts, we are able to overcome a number of challenges, including bug-specific
modeling, the large problem scope, the non-deterministic nature of LLMs, etc.
Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs
produced by static analysis, LLift demonstrates a potent capability, showcasing
a reasonable precision (50%) and appearing to have no missing bugs. It even
identified 13 previously unknown UBI bugs in the Linux kernel. This research
paves the way for new opportunities and methodologies in using LLMs for bug
discovery in extensive, real-world datasets. | http://arxiv.org/pdf/2308.00245 | Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian | cs.SE, cs.AI | null | null | cs.SE | 20230801 | 20231115 | [
{
"id": "2305.10601"
},
{
"id": "2107.03374"
},
{
"id": "2210.11610"
},
{
"id": "2305.16291"
},
{
"id": "2305.16151"
},
{
"id": "2303.18223"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "2203.02155"
},
{
"id": "2304.11938"
},
{
"id": "2304.03262"
},
{
"id": "2304.10513"
},
{
"id": "2201.11903"
},
{
"id": "2305.12138"
},
{
"id": "2305.12865"
},
{
"id": "2303.08774"
},
{
"id": "2306.01987"
},
{
"id": "2304.06815"
}
] |
2308.00352 | 71 | Quantitative results of MetaGPT w/o executable feedback Table 6 presents the performance of MetaGPT with GPT-4 32K on 11 tasks within the SoftwareDev dataset. It also shows the average performance across all 70 tasks (in the last line). Note that the version of MetaGPT used here is the basic version without the executable feedback mechanism.
Qualitative results Figure 11 and Figure 12 illustrate the outcomes of the Architect agent's efforts to design a complex recommender system. These figures showcase the comprehensive system interface design and program call flow. The latter are essential for creating a sophisticated automated system. It is crucial to emphasize the importance of this division of labor in developing an automated software framework.
Table 4: Executability comparison. The executability scores are on a grading system ranging from '1' to '4'. A score of '1' signifies complete failure, '2' denotes executable code, '3' represents largely satisfying expected workflow, and '4' indicates a perfect match with expectations. | 2308.00352#71 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | Remarkable progress has been made on automated problem solving through
societies of agents based on large language models (LLMs). Existing LLM-based
multi-agent systems can already solve simple dialogue tasks. Solutions to more
complex tasks, however, are complicated through logic inconsistencies due to
cascading hallucinations caused by naively chaining LLMs. Here we introduce
MetaGPT, an innovative meta-programming framework incorporating efficient human
workflows into LLM-based multi-agent collaborations. MetaGPT encodes
Standardized Operating Procedures (SOPs) into prompt sequences for more
streamlined workflows, thus allowing agents with human-like domain expertise to
verify intermediate results and reduce errors. MetaGPT utilizes an assembly
line paradigm to assign diverse roles to various agents, efficiently breaking
down complex tasks into subtasks involving many agents working together. On
collaborative software engineering benchmarks, MetaGPT generates more coherent
solutions than previous chat-based multi-agent systems. Our project can be
found at https://github.com/geekan/MetaGPT | http://arxiv.org/pdf/2308.00352 | Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, Jürgen Schmidhuber | cs.AI, cs.MA | null | null | cs.AI | 20230801 | 20231106 | [
{
"id": "2308.12950"
},
{
"id": "2305.17066"
},
{
"id": "1511.09249"
},
{
"id": "2308.11432"
},
{
"id": "2306.08568"
},
{
"id": "2310.02304"
},
{
"id": "2303.08896"
},
{
"id": "2204.05999"
},
{
"id": "2309.16797"
},
{
"id": "2002.08155"
},
{
"id": "2305.16960"
}
] |
2308.00675 | 71 | google-access • llmcloud compute networks sub- --enable- nets update default private-ip-google-access How to create a service account [email protected] with the name 'AutoML' 'BigQuery Data Editor' and 'AutoML Recommendations Service Account' permissions? • gcloud iam service-accounts [email protected] --display-name AutoML • gcloud projects add-iam- -- PROJ_ID policy-binding member="[email protected]" --role "roles/bigquery.dataEditor" • gcloud projects add-iam-policy- PROJ_ID --member --role • llmcloud iam service-accounts [email protected] --display-name AutoML • llmcloud projects add-iam- -- PROJ_ID policy-binding member="[email protected]" --role "roles/bigquery.dataEditor" • llmcloud projects add-iam-policy- PROJ_ID --member --role | 2308.00675#71 | Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models | Today, large language models (LLMs) are taught to use new tools by providing
a few demonstrations of the tool's usage. Unfortunately, demonstrations are
hard to acquire, and can result in undesirable biased usage if the wrong
demonstration is chosen. Even in the rare scenario that demonstrations are
readily available, there is no principled selection protocol to determine how
many and which ones to provide. As tasks grow more complex, the selection
search grows combinatorially and invariably becomes intractable. Our work
provides an alternative to demonstrations: tool documentation. We advocate the
use of tool documentation, descriptions for the individual tool usage, over
demonstrations. We substantiate our claim through three main empirical findings
on 6 tasks across both vision and language modalities. First, on existing
benchmarks, zero-shot prompts with only tool documentation are sufficient for
eliciting proper tool usage, achieving performance on par with few-shot
prompts. Second, on a newly collected realistic tool-use dataset with hundreds
of available tool APIs, we show that tool documentation is significantly more
valuable than demonstrations, with zero-shot documentation significantly
outperforming few-shot without documentation. Third, we highlight the benefits
of tool documentations by tackling image generation and video tracking using
just-released unseen state-of-the-art models as tools. Finally, we highlight
the possibility of using tool documentation to automatically enable new
applications: by using nothing more than the documentation of GroundingDino,
Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the
just-released Grounded-SAM and Track Anything models. | http://arxiv.org/pdf/2308.00675 | Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister | cs.CL, cs.AI, cs.CV, cs.LG | null | null | cs.CL | 20230801 | 20230801 | [
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2204.10878"
},
{
"id": "1910.08210"
},
{
"id": "2107.07653"
},
{
"id": "2201.11903"
},
{
"id": "2305.17126"
},
{
"id": "1704.07535"
},
{
"id": "2205.01068"
},
{
"id": "2203.05115"
},
{
"id": "2303.08128"
},
{
"id": "2303.03378"
},
{
"id": "2305.04091"
},
{
"id": "2303.05499"
},
{
"id": "2107.07566"
},
{
"id": "2110.14168"
},
{
"id": "2210.03350"
},
{
"id": "2303.11381"
},
{
"id": "2101.06804"
},
{
"id": "2304.08354"
},
{
"id": "2212.14024"
},
{
"id": "2305.18752"
},
{
"id": "2211.10435"
},
{
"id": "2303.04671"
},
{
"id": "2210.12810"
},
{
"id": "1808.09588"
},
{
"id": "2304.11062"
},
{
"id": "2210.03629"
},
{
"id": "2303.05398"
},
{
"id": "2210.02406"
},
{
"id": "2212.10560"
},
{
"id": "2303.04129"
},
{
"id": "1704.01696"
},
{
"id": "2302.00923"
},
{
"id": "2211.12588"
},
{
"id": "1908.03557"
},
{
"id": "2210.05359"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2201.08239"
},
{
"id": "2303.16199"
},
{
"id": "2304.09842"
},
{
"id": "2204.01691"
},
{
"id": "2305.01625"
},
{
"id": "2303.12712"
},
{
"id": "2207.05608"
},
{
"id": "2303.03846"
},
{
"id": "2211.11559"
},
{
"id": "2207.01206"
},
{
"id": "2303.09014"
},
{
"id": "2205.12255"
},
{
"id": "2208.03188"
}
] |
2308.01423 | 71 | What is the accessible surface area for UXUPEJ?
What is the density of NATXIR?
How does the largest free pore diameter of LITDAV compare with other materials?
Can you provide the top 10 materials with the highest largest free pore diameter?
What's the pore limiting diameter for CSMNOX?
How high is the non-accessible surface area for MUZKAV_clean?
What materials have the largest accessible surface area?
What's the type of open metal in cm503311x_F-mil-53-Al-lp-mono?
Can TAHYUZ have an open metal site?
Can you list the top 5 materials with the highest density? | 2308.01423#71 | ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks | ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to
predict and generate metal-organic frameworks (MOFs). By leveraging a
large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key
details from textual inputs and delivers appropriate responses, thus
eliminating the necessity for rigid structured queries. The system is comprised
of three core components (i.e. an agent, a toolkit, and an evaluator) and it
forms a robust pipeline that manages a variety of tasks, including data
retrieval, property prediction, and structure generations. The study further
explores the merits and constraints of using large language models (LLMs) AI
system in material sciences using and showcases its transformative potential
for future advancements. | http://arxiv.org/pdf/2308.01423 | Yeonghun Kang, Jihan Kim | cs.CL, cs.AI, cs.LG, physics.chem-ph | null | null | cs.CL | 20230801 | 20230825 | [
{
"id": "2302.13971"
},
{
"id": "2306.11296"
},
{
"id": "2303.17580"
},
{
"id": "2305.18365"
},
{
"id": "2305.10601"
},
{
"id": "1810.04805"
},
{
"id": "2211.09085"
},
{
"id": "2304.05376"
},
{
"id": "2212.05238"
},
{
"id": "2108.07258"
},
{
"id": "2110.06197"
},
{
"id": "2306.06283"
},
{
"id": "2008.02275"
},
{
"id": "2303.12712"
},
{
"id": "2210.03629"
},
{
"id": "2205.00445"
},
{
"id": "2009.03300"
}
] |
2308.00245 | 73 | [15] Hao Liu, Carmelo Sferrazza, and Pieter Abbeel. 2023. Chain of Hindsight Aligns Language Models with Feedback. http://arxiv.org/abs/2302.02676 arXiv:2302.02676 [cs].
[16] Kangjie Lu and Hong Hu. 2019. Where Does It Go?: Refining Indirect-Call Targets with Multi-Layer Type Analysis. In Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security. ACM, London, United Kingdom. https://doi.org/10.1145/3319535.3354244
[17] Wei Ma, Shangqing Liu, Wenhan Wang, Qiang Hu, Ye Liu, Cen Zhang, Liming Nie, and Yang Liu. 2023. The Scope of ChatGPT in Software Engineering: A Thorough Investigation. http://arxiv.org/abs/2305.12138 arXiv:2305.12138 [cs].
[18] Bertrand Meyer. 1997. Object-Oriented Software Construction, 2nd Edition. Prentice-Hall.
[19] OpenAI (2022). 2022. Introducing ChatGPT. https://openai.com/blog/chatgpt [20] OpenAI (2023). 2023. Function calling and other API updates. https://openai.c | 2308.00245#72 | The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models | Static analysis is a widely used technique in software engineering for
identifying and mitigating bugs. However, a significant hurdle lies in
achieving a delicate balance between precision and scalability. Large Language
Models (LLMs) offer a promising alternative, as recent advances demonstrate
remarkable capabilities in comprehending, generating, and even debugging code.
Yet, the logic of bugs can be complex and require sophisticated reasoning and a
large analysis scope spanning multiple functions. Therefore, at this point,
LLMs are better used in an assistive role to complement static analysis. In
this paper, we take a deep dive into the open space of LLM-assisted static
analysis, using use-before-initialization (UBI) bugs as a case study. To this
end, we develop LLift, a fully automated framework that interfaces with both a
static analysis tool and an LLM. By carefully designing the framework and the
prompts, we are able to overcome a number of challenges, including bug-specific
modeling, the large problem scope, the non-deterministic nature of LLMs, etc.
Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs
produced by static analysis, LLift demonstrates a potent capability, showcasing
a reasonable precision (50%) and appearing to have no missing bugs. It even
identified 13 previously unknown UBI bugs in the Linux kernel. This research
paves the way for new opportunities and methodologies in using LLMs for bug
discovery in extensive, real-world datasets. | http://arxiv.org/pdf/2308.00245 | Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian | cs.SE, cs.AI | null | null | cs.SE | 20230801 | 20231115 | [
{
"id": "2305.10601"
},
{
"id": "2107.03374"
},
{
"id": "2210.11610"
},
{
"id": "2305.16291"
},
{
"id": "2305.16151"
},
{
"id": "2303.18223"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "2203.02155"
},
{
"id": "2304.11938"
},
{
"id": "2304.03262"
},
{
"id": "2304.10513"
},
{
"id": "2201.11903"
},
{
"id": "2305.12138"
},
{
"id": "2305.12865"
},
{
"id": "2303.08774"
},
{
"id": "2306.01987"
},
{
"id": "2304.06815"
}
] |
2308.00675 | 72 |
# B.5 Image editing and video tracking
As discussed in Section 4.3, by providing tool documentations, we can easily add on new tools to enable LLMs in solving novel tasks such as image editing and video tracking. Here, we leverage the recent advancements in vision models and expand the tool set used in VisProg [19] with three new tools: GroundingDINO [38], Segment Anything (SAM) [30], and XMem [14]. We provide their corresponding documentations in Figure 11. | 2308.00675#72 | Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models | Today, large language models (LLMs) are taught to use new tools by providing
a few demonstrations of the tool's usage. Unfortunately, demonstrations are
hard to acquire, and can result in undesirable biased usage if the wrong
demonstration is chosen. Even in the rare scenario that demonstrations are
readily available, there is no principled selection protocol to determine how
many and which ones to provide. As tasks grow more complex, the selection
search grows combinatorially and invariably becomes intractable. Our work
provides an alternative to demonstrations: tool documentation. We advocate the
use of tool documentation, descriptions for the individual tool usage, over
demonstrations. We substantiate our claim through three main empirical findings
on 6 tasks across both vision and language modalities. First, on existing
benchmarks, zero-shot prompts with only tool documentation are sufficient for
eliciting proper tool usage, achieving performance on par with few-shot
prompts. Second, on a newly collected realistic tool-use dataset with hundreds
of available tool APIs, we show that tool documentation is significantly more
valuable than demonstrations, with zero-shot documentation significantly
outperforming few-shot without documentation. Third, we highlight the benefits
of tool documentations by tackling image generation and video tracking using
just-released unseen state-of-the-art models as tools. Finally, we highlight
the possibility of using tool documentation to automatically enable new
applications: by using nothing more than the documentation of GroundingDino,
Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the
just-released Grounded-SAM and Track Anything models. | http://arxiv.org/pdf/2308.00675 | Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister | cs.CL, cs.AI, cs.CV, cs.LG | null | null | cs.CL | 20230801 | 20230801 | [
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2204.10878"
},
{
"id": "1910.08210"
},
{
"id": "2107.07653"
},
{
"id": "2201.11903"
},
{
"id": "2305.17126"
},
{
"id": "1704.07535"
},
{
"id": "2205.01068"
},
{
"id": "2203.05115"
},
{
"id": "2303.08128"
},
{
"id": "2303.03378"
},
{
"id": "2305.04091"
},
{
"id": "2303.05499"
},
{
"id": "2107.07566"
},
{
"id": "2110.14168"
},
{
"id": "2210.03350"
},
{
"id": "2303.11381"
},
{
"id": "2101.06804"
},
{
"id": "2304.08354"
},
{
"id": "2212.14024"
},
{
"id": "2305.18752"
},
{
"id": "2211.10435"
},
{
"id": "2303.04671"
},
{
"id": "2210.12810"
},
{
"id": "1808.09588"
},
{
"id": "2304.11062"
},
{
"id": "2210.03629"
},
{
"id": "2303.05398"
},
{
"id": "2210.02406"
},
{
"id": "2212.10560"
},
{
"id": "2303.04129"
},
{
"id": "1704.01696"
},
{
"id": "2302.00923"
},
{
"id": "2211.12588"
},
{
"id": "1908.03557"
},
{
"id": "2210.05359"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2201.08239"
},
{
"id": "2303.16199"
},
{
"id": "2304.09842"
},
{
"id": "2204.01691"
},
{
"id": "2305.01625"
},
{
"id": "2303.12712"
},
{
"id": "2207.05608"
},
{
"id": "2303.03846"
},
{
"id": "2211.11559"
},
{
"id": "2207.01206"
},
{
"id": "2303.09014"
},
{
"id": "2205.12255"
},
{
"id": "2208.03188"
}
] |
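Chunk 2308.00675#72 above describes extending VisProg with GroundingDINO, SAM, and XMem using documentation alone. Based on the BETTERLOC and BETTERSEG signatures quoted in a later chunk (2308.00675#73), a text-grounded segmentation step could plausibly be written as the following VisProg-style program sketch (illustrative only; the query 'dog' and the variable names are assumptions, not output from the paper):

BOX0 = BETTERLOC(image=IMAGE, object='dog')
OBJ0 = BETTERSEG(image=IMAGE, box=BOX0)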
2308.01423 | 72 | Can TAHYUZ have an open metal site?
Can you list the top 5 materials with the highest density?
What is the metal type in YUBTUM? How does the accessible surface area of TATFOL compare with other materials? Can you provide the top 5 materials with the highest accessible surface area? What's the largest cavity diameter for NELVAC? How high is the accessible volume for EPUJIJ_clean? What materials have the largest non-accessible surface area? What's the type of open metal in WAWGOQ_clean? Can YUNJIB have an open metal site? Can you list the top 5 materials with the highest accessible volume fraction? What is the metal type in JUKPAI_clean? How does the non-accessible surface area of HUZFIS compare with other materials? Can you provide the top 5 materials with the highest largest cavity diameter? What's the pore limiting diameter for XIPTAS_clean? How high is the largest free pore diameter for LOLREL? What materials have the largest density? What is the largest cavity diameter for YAVVUP?
Can you list the top 5 materials with the highest non-accessible volume? | 2308.01423#72 | ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks | ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to
predict and generate metal-organic frameworks (MOFs). By leveraging a
large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key
details from textual inputs and delivers appropriate responses, thus
eliminating the necessity for rigid structured queries. The system is comprised
of three core components (i.e. an agent, a toolkit, and an evaluator) and it
forms a robust pipeline that manages a variety of tasks, including data
retrieval, property prediction, and structure generations. The study further
explores the merits and constraints of using large language models (LLMs) AI
system in material sciences using and showcases its transformative potential
for future advancements. | http://arxiv.org/pdf/2308.01423 | Yeonghun Kang, Jihan Kim | cs.CL, cs.AI, cs.LG, physics.chem-ph | null | null | cs.CL | 20230801 | 20230825 | [
{
"id": "2302.13971"
},
{
"id": "2306.11296"
},
{
"id": "2303.17580"
},
{
"id": "2305.18365"
},
{
"id": "2305.10601"
},
{
"id": "1810.04805"
},
{
"id": "2211.09085"
},
{
"id": "2304.05376"
},
{
"id": "2212.05238"
},
{
"id": "2108.07258"
},
{
"id": "2110.06197"
},
{
"id": "2306.06283"
},
{
"id": "2008.02275"
},
{
"id": "2303.12712"
},
{
"id": "2210.03629"
},
{
"id": "2205.00445"
},
{
"id": "2009.03300"
}
] |
2308.00245 | 73 | om/blog/function-calling-and-other-api-updates
[21] OpenAI (2023). 2023. GPT-4 Technical Report. http://arxiv.org/abs/2303.08774 arXiv:2303.08774 [cs].
[22] Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John
Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, and Ryan Lowe. 2022. Training language models to follow instructions with human feedback. http://arxiv.org/ abs/2203.02155 arXiv:2203.02155 [cs]. | 2308.00245#73 | The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models | Static analysis is a widely used technique in software engineering for
identifying and mitigating bugs. However, a significant hurdle lies in
achieving a delicate balance between precision and scalability. Large Language
Models (LLMs) offer a promising alternative, as recent advances demonstrate
remarkable capabilities in comprehending, generating, and even debugging code.
Yet, the logic of bugs can be complex and require sophisticated reasoning and a
large analysis scope spanning multiple functions. Therefore, at this point,
LLMs are better used in an assistive role to complement static analysis. In
this paper, we take a deep dive into the open space of LLM-assisted static
analysis, using use-before-initialization (UBI) bugs as a case study. To this
end, we develop LLift, a fully automated framework that interfaces with both a
static analysis tool and an LLM. By carefully designing the framework and the
prompts, we are able to overcome a number of challenges, including bug-specific
modeling, the large problem scope, the non-deterministic nature of LLMs, etc.
Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs
produced by static analysis, LLift demonstrates a potent capability, showcasing
a reasonable precision (50%) and appearing to have no missing bugs. It even
identified 13 previously unknown UBI bugs in the Linux kernel. This research
paves the way for new opportunities and methodologies in using LLMs for bug
discovery in extensive, real-world datasets. | http://arxiv.org/pdf/2308.00245 | Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian | cs.SE, cs.AI | null | null | cs.SE | 20230801 | 20231115 | [
{
"id": "2305.10601"
},
{
"id": "2107.03374"
},
{
"id": "2210.11610"
},
{
"id": "2305.16291"
},
{
"id": "2305.16151"
},
{
"id": "2303.18223"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "2203.02155"
},
{
"id": "2304.11938"
},
{
"id": "2304.03262"
},
{
"id": "2304.10513"
},
{
"id": "2201.11903"
},
{
"id": "2305.12138"
},
{
"id": "2305.12865"
},
{
"id": "2303.08774"
},
{
"id": "2306.01987"
},
{
"id": "2304.06815"
}
] |
2308.00352 | 73 | [Figure residue: a class diagram for the recommender system interface design, including classes such as User, UserProfile, Content, Feedback, CollaborativeFilteringModel, ContentBasedFilteringModel, Advertising, and Privacy, each with attributes (e.g., user_id, preferences, history, content_id, title, description, category, published_at, model_type, target_audience, rating) and constructor/predict methods.] | 2308.00352#73 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | Remarkable progress has been made on automated problem solving through
societies of agents based on large language models (LLMs). Existing LLM-based
multi-agent systems can already solve simple dialogue tasks. Solutions to more
complex tasks, however, are complicated through logic inconsistencies due to
cascading hallucinations caused by naively chaining LLMs. Here we introduce
MetaGPT, an innovative meta-programming framework incorporating efficient human
workflows into LLM-based multi-agent collaborations. MetaGPT encodes
Standardized Operating Procedures (SOPs) into prompt sequences for more
streamlined workflows, thus allowing agents with human-like domain expertise to
verify intermediate results and reduce errors. MetaGPT utilizes an assembly
line paradigm to assign diverse roles to various agents, efficiently breaking
down complex tasks into subtasks involving many agents working together. On
collaborative software engineering benchmarks, MetaGPT generates more coherent
solutions than previous chat-based multi-agent systems. Our project can be
found at https://github.com/geekan/MetaGPT | http://arxiv.org/pdf/2308.00352 | Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, Jürgen Schmidhuber | cs.AI, cs.MA | null | null | cs.AI | 20230801 | 20231106 | [
{
"id": "2308.12950"
},
{
"id": "2305.17066"
},
{
"id": "1511.09249"
},
{
"id": "2308.11432"
},
{
"id": "2306.08568"
},
{
"id": "2310.02304"
},
{
"id": "2303.08896"
},
{
"id": "2204.05999"
},
{
"id": "2309.16797"
},
{
"id": "2002.08155"
},
{
"id": "2305.16960"
}
] |
2308.00675 | 73 | Function: BETTERLOC Description The BETTERLOC function calls the GroundingDINO model to perform object localization. GroundingDINO is a zero-shot text-conditioned object detection model. It returns all bounding boxes of the queried object. To make multiple queries at one time, separate different object names with ','. Syntax BETTERLOC(image: IMAGE, object: TEXT) -> BOX Parameters image: An IMAGE type input representing the image to be analyzed. object: A TEXT type input representing the object to be localized in the image. Returns The function returns a BOX value, representing the bounding boxes of the queried object. Use case: Use BETTERLOC when you want to locate an object in an image and retrieve its bounding box(es). Example: BOX0 = BETTERLOC(image=IMAGE, object='cat') Function: BETTERSEG Description The BETTERSEG function calls the Segment Anything Model (SAM) for image segmentation. It returns all objects detected in the images as a list of OBJECT instances. Each OBJECT instance contains its bounding box and mask. Syntax BETTERSEG(image: IMAGE, box: BOX) -> LIST[OBJECT] Parameters image: An IMAGE type input representing the | 2308.00675#73 | Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models | Today, large language models (LLMs) are taught to use new tools by providing
a few demonstrations of the tool's usage. Unfortunately, demonstrations are
hard to acquire, and can result in undesirable biased usage if the wrong
demonstration is chosen. Even in the rare scenario that demonstrations are
readily available, there is no principled selection protocol to determine how
many and which ones to provide. As tasks grow more complex, the selection
search grows combinatorially and invariably becomes intractable. Our work
provides an alternative to demonstrations: tool documentation. We advocate the
use of tool documentation, descriptions for the individual tool usage, over
demonstrations. We substantiate our claim through three main empirical findings
on 6 tasks across both vision and language modalities. First, on existing
benchmarks, zero-shot prompts with only tool documentation are sufficient for
eliciting proper tool usage, achieving performance on par with few-shot
prompts. Second, on a newly collected realistic tool-use dataset with hundreds
of available tool APIs, we show that tool documentation is significantly more
valuable than demonstrations, with zero-shot documentation significantly
outperforming few-shot without documentation. Third, we highlight the benefits
of tool documentations by tackling image generation and video tracking using
just-released unseen state-of-the-art models as tools. Finally, we highlight
the possibility of using tool documentation to automatically enable new
applications: by using nothing more than the documentation of GroundingDino,
Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the
just-released Grounded-SAM and Track Anything models. | http://arxiv.org/pdf/2308.00675 | Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister | cs.CL, cs.AI, cs.CV, cs.LG | null | null | cs.CL | 20230801 | 20230801 | [
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2204.10878"
},
{
"id": "1910.08210"
},
{
"id": "2107.07653"
},
{
"id": "2201.11903"
},
{
"id": "2305.17126"
},
{
"id": "1704.07535"
},
{
"id": "2205.01068"
},
{
"id": "2203.05115"
},
{
"id": "2303.08128"
},
{
"id": "2303.03378"
},
{
"id": "2305.04091"
},
{
"id": "2303.05499"
},
{
"id": "2107.07566"
},
{
"id": "2110.14168"
},
{
"id": "2210.03350"
},
{
"id": "2303.11381"
},
{
"id": "2101.06804"
},
{
"id": "2304.08354"
},
{
"id": "2212.14024"
},
{
"id": "2305.18752"
},
{
"id": "2211.10435"
},
{
"id": "2303.04671"
},
{
"id": "2210.12810"
},
{
"id": "1808.09588"
},
{
"id": "2304.11062"
},
{
"id": "2210.03629"
},
{
"id": "2303.05398"
},
{
"id": "2210.02406"
},
{
"id": "2212.10560"
},
{
"id": "2303.04129"
},
{
"id": "1704.01696"
},
{
"id": "2302.00923"
},
{
"id": "2211.12588"
},
{
"id": "1908.03557"
},
{
"id": "2210.05359"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2201.08239"
},
{
"id": "2303.16199"
},
{
"id": "2304.09842"
},
{
"id": "2204.01691"
},
{
"id": "2305.01625"
},
{
"id": "2303.12712"
},
{
"id": "2207.05608"
},
{
"id": "2303.03846"
},
{
"id": "2211.11559"
},
{
"id": "2207.01206"
},
{
"id": "2303.09014"
},
{
"id": "2205.12255"
},
{
"id": "2208.03188"
}
] |
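To make the documented calling pattern concrete, a minimal Python-style program in the spirit of the examples above (the Grounded-SAM-style composition described in the paper's abstract) could chain the two tools as follows; the query 'dog' and the final RESULT step are illustrative assumptions, and the tool functions are assumed to already be bound in the executing program's namespace:

BOX0=BETTERLOC(image=IMAGE, object='dog')   # locate the queried object
OBJ0=BETTERSEG(image=IMAGE, box=BOX0)       # segment inside the returned boxes
FINAL_RESULT=RESULT(var=OBJ0)               # assumed result-reporting step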
2308.01423 | 73 | 51
Can you list the top 5 materials with the highest non-accessible volume?
What is the accessible surface area for FIJDIM05? What is the density of TUFTAR? How does the largest free pore diameter of FOTNIN compare with other materials? Can you provide the top 5 materials with the highest largest free pore diameter? What's the pore limiting diameter for LAXGOH02? How high is the non-accessible surface area for GEDQOX? What materials have the largest accessible surface area? What's the type of open metal in JILXOS? Can VAHSON have an open metal site? Can you list the top 5 materials with the highest density? What is the metal type in MOJJUR? How does the accessible surface area of REHVEH_clean compare with other materials? Can you provide the top 5 materials with the highest accessible surface area? What's the largest cavity diameter for NELVAC?
Table S1. 100 questions in search task.
At 77 K, what's the hydrogen uptake for ZAXQOG_clean at 100bar? | 2308.01423#73 | ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks | ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to
predict and generate metal-organic frameworks (MOFs). By leveraging a
large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key
details from textual inputs and delivers appropriate responses, thus
eliminating the necessity for rigid structured queries. The system is comprised
of three core components (i.e. an agent, a toolkit, and an evaluator) and it
forms a robust pipeline that manages a variety of tasks, including data
retrieval, property prediction, and structure generations. The study further
explores the merits and constraints of using large language models (LLMs) AI
system in material sciences using and showcases its transformative potential
for future advancements. | http://arxiv.org/pdf/2308.01423 | Yeonghun Kang, Jihan Kim | cs.CL, cs.AI, cs.LG, physics.chem-ph | null | null | cs.CL | 20230801 | 20230825 | [
{
"id": "2302.13971"
},
{
"id": "2306.11296"
},
{
"id": "2303.17580"
},
{
"id": "2305.18365"
},
{
"id": "2305.10601"
},
{
"id": "1810.04805"
},
{
"id": "2211.09085"
},
{
"id": "2304.05376"
},
{
"id": "2212.05238"
},
{
"id": "2108.07258"
},
{
"id": "2110.06197"
},
{
"id": "2306.06283"
},
{
"id": "2008.02275"
},
{
"id": "2303.12712"
},
{
"id": "2210.03629"
},
{
"id": "2205.00445"
},
{
"id": "2009.03300"
}
] |
2308.00245 | 74 | [23] Vishal Pallagani, Bharath Muppasani, Keerthiram Murugesan, Francesca Rossi, Biplav Srivastava, Lior Horesh, Francesco Fabiano, and Andrea Loreggia. 2023. Understanding the Capabilities of Large Language Models for Automated Planning. http://arxiv.org/abs/2305.16151 arXiv:2305.16151 [cs]. [24] Jihyeok Park, Hongki Lee, and Sukyoung Ryu. 2022. A Survey of Parametric Static Analysis. ACM Comput. Surv. 54, 7 (2022), 149:1-149:37. https://doi.org/10.1145/3464457
[25] Hammond Pearce, Benjamin Tan, Baleegh Ahmad, Ramesh Karri, and Brendan Dolan-Gavitt. 2023. Examining Zero-Shot Vulnerability Repair with Large Language Models. In 2023 IEEE Symposium on Security and Privacy (S&P). IEEE Computer Society, Los Alamitos, CA, USA. https://doi.org/10.1109/SP46215.2023.00001 | 2308.00245#74 | The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models | Static analysis is a widely used technique in software engineering for
identifying and mitigating bugs. However, a significant hurdle lies in
achieving a delicate balance between precision and scalability. Large Language
Models (LLMs) offer a promising alternative, as recent advances demonstrate
remarkable capabilities in comprehending, generating, and even debugging code.
Yet, the logic of bugs can be complex and require sophisticated reasoning and a
large analysis scope spanning multiple functions. Therefore, at this point,
LLMs are better used in an assistive role to complement static analysis. In
this paper, we take a deep dive into the open space of LLM-assisted static
analysis, using use-before-initialization (UBI) bugs as a case study. To this
end, we develop LLift, a fully automated framework that interfaces with both a
static analysis tool and an LLM. By carefully designing the framework and the
prompts, we are able to overcome a number of challenges, including bug-specific
modeling, the large problem scope, the non-deterministic nature of LLMs, etc.
Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs
produced by static analysis, LLift demonstrates a potent capability, showcasing
a reasonable precision (50%) and appearing to have no missing bugs. It even
identified 13 previously unknown UBI bugs in the Linux kernel. This research
paves the way for new opportunities and methodologies in using LLMs for bug
discovery in extensive, real-world datasets. | http://arxiv.org/pdf/2308.00245 | Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian | cs.SE, cs.AI | null | null | cs.SE | 20230801 | 20231115 | [
{
"id": "2305.10601"
},
{
"id": "2107.03374"
},
{
"id": "2210.11610"
},
{
"id": "2305.16291"
},
{
"id": "2305.16151"
},
{
"id": "2303.18223"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "2203.02155"
},
{
"id": "2304.11938"
},
{
"id": "2304.03262"
},
{
"id": "2304.10513"
},
{
"id": "2201.11903"
},
{
"id": "2305.12138"
},
{
"id": "2305.12865"
},
{
"id": "2303.08774"
},
{
"id": "2306.01987"
},
{
"id": "2304.06815"
}
] |
2308.00352 | 74 | [Class-diagram excerpt (Figure 11, continued): Feedback records user_id, content_id, and rating; Recommender aggregates a UserProfile, a CollaborativeFilteringModel, and a ContentBasedFilteringModel; ExperimentationPlatform, Optimization, and Monitoring each hold Recommender instances and expose run_experiment(), optimize(), and generate_report().] | 2308.00352#74 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | Remarkable progress has been made on automated problem solving through
societies of agents based on large language models (LLMs). Existing LLM-based
multi-agent systems can already solve simple dialogue tasks. Solutions to more
complex tasks, however, are complicated through logic inconsistencies due to
cascading hallucinations caused by naively chaining LLMs. Here we introduce
MetaGPT, an innovative meta-programming framework incorporating efficient human
workflows into LLM-based multi-agent collaborations. MetaGPT encodes
Standardized Operating Procedures (SOPs) into prompt sequences for more
streamlined workflows, thus allowing agents with human-like domain expertise to
verify intermediate results and reduce errors. MetaGPT utilizes an assembly
line paradigm to assign diverse roles to various agents, efficiently breaking
down complex tasks into subtasks involving many agents working together. On
collaborative software engineering benchmarks, MetaGPT generates more coherent
solutions than previous chat-based multi-agent systems. Our project can be
found at https://github.com/geekan/MetaGPT | http://arxiv.org/pdf/2308.00352 | Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, Jürgen Schmidhuber | cs.AI, cs.MA | null | null | cs.AI | 20230801 | 20231106 | [
{
"id": "2308.12950"
},
{
"id": "2305.17066"
},
{
"id": "1511.09249"
},
{
"id": "2308.11432"
},
{
"id": "2306.08568"
},
{
"id": "2310.02304"
},
{
"id": "2303.08896"
},
{
"id": "2204.05999"
},
{
"id": "2309.16797"
},
{
"id": "2002.08155"
},
{
"id": "2305.16960"
}
] |
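The two class-diagram excerpts above survive PDF extraction only partially; a minimal Python sketch of the structure they describe (class, attribute, and method names taken from the diagram; bodies, exact types, and the blending logic are assumptions) might look like:

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class UserProfile:
    user_id: int
    preferences: Dict[str, float] = field(default_factory=dict)
    history: Dict[str, List[int]] = field(default_factory=dict)

@dataclass
class CollaborativeFilteringModel:
    data: object            # a DataFrame in the diagram
    model_type: str = "collaborative"
    def predict(self, user_id: int, n_recommendations: int) -> List[int]:
        return []           # placeholder for the real model

@dataclass
class ContentBasedFilteringModel:
    data: object            # a DataFrame in the diagram
    model_type: str = "content-based"
    def predict(self, user_id: int, n_recommendations: int) -> List[int]:
        return []           # placeholder for the real model

@dataclass
class Recommender:
    user_id: int
    user_profile: UserProfile
    cf_model: CollaborativeFilteringModel
    cbf_model: ContentBasedFilteringModel
    def recommend(self, n: int) -> List[int]:
        # Hypothetical combination of the two models' outputs.
        return (self.cf_model.predict(self.user_id, n) + self.cbf_model.predict(self.user_id, n))[:n]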
2308.00675 | 74 | bounding box and mask. Syntax BETTERSEG(image: IMAGE, box: BOX) -> LIST[OBJECT] Parameters image: An IMAGE type input representing the image to be analyzed. box: The bounding boxes where we want to segment. Returns The function returns a LIST of OBJECT instances, each representing a detected object and including its bounding box and mask. Use case: Use BETTERSEG when you want to segment an object in a bounding box. Then the returned objects can be used by other functions such as REPLACE, COLORPOP, BGBLUR. Example: BOX0=BETTERLOC(image=IMAGE, object='fish') OBJ0=BETTERSEG(image=IMAGE, box=BOX0) Function: TRACK Description The TRACK function calls the XMem model for video object tracking. It takes an OBJECT instance from the first frame of the video as input, then returns all frames where the object is highlighted with a mask. Syntax TRACK(video: LIST[IMAGE], object: LIST[OBJECT]) -> LIST[IMAGE] Parameters video: A list of IMAGE type input representing the video to be analyzed. object: The bounding boxes and masks of the objects which we want to track in the first frame | 2308.00675#74 | Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models | Today, large language models (LLMs) are taught to use new tools by providing
a few demonstrations of the tool's usage. Unfortunately, demonstrations are
hard to acquire, and can result in undesirable biased usage if the wrong
demonstration is chosen. Even in the rare scenario that demonstrations are
readily available, there is no principled selection protocol to determine how
many and which ones to provide. As tasks grow more complex, the selection
search grows combinatorially and invariably becomes intractable. Our work
provides an alternative to demonstrations: tool documentation. We advocate the
use of tool documentation, descriptions for the individual tool usage, over
demonstrations. We substantiate our claim through three main empirical findings
on 6 tasks across both vision and language modalities. First, on existing
benchmarks, zero-shot prompts with only tool documentation are sufficient for
eliciting proper tool usage, achieving performance on par with few-shot
prompts. Second, on a newly collected realistic tool-use dataset with hundreds
of available tool APIs, we show that tool documentation is significantly more
valuable than demonstrations, with zero-shot documentation significantly
outperforming few-shot without documentation. Third, we highlight the benefits
of tool documentations by tackling image generation and video tracking using
just-released unseen state-of-the-art models as tools. Finally, we highlight
the possibility of using tool documentation to automatically enable new
applications: by using nothing more than the documentation of GroundingDino,
Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the
just-released Grounded-SAM and Track Anything models. | http://arxiv.org/pdf/2308.00675 | Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister | cs.CL, cs.AI, cs.CV, cs.LG | null | null | cs.CL | 20230801 | 20230801 | [
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2204.10878"
},
{
"id": "1910.08210"
},
{
"id": "2107.07653"
},
{
"id": "2201.11903"
},
{
"id": "2305.17126"
},
{
"id": "1704.07535"
},
{
"id": "2205.01068"
},
{
"id": "2203.05115"
},
{
"id": "2303.08128"
},
{
"id": "2303.03378"
},
{
"id": "2305.04091"
},
{
"id": "2303.05499"
},
{
"id": "2107.07566"
},
{
"id": "2110.14168"
},
{
"id": "2210.03350"
},
{
"id": "2303.11381"
},
{
"id": "2101.06804"
},
{
"id": "2304.08354"
},
{
"id": "2212.14024"
},
{
"id": "2305.18752"
},
{
"id": "2211.10435"
},
{
"id": "2303.04671"
},
{
"id": "2210.12810"
},
{
"id": "1808.09588"
},
{
"id": "2304.11062"
},
{
"id": "2210.03629"
},
{
"id": "2303.05398"
},
{
"id": "2210.02406"
},
{
"id": "2212.10560"
},
{
"id": "2303.04129"
},
{
"id": "1704.01696"
},
{
"id": "2302.00923"
},
{
"id": "2211.12588"
},
{
"id": "1908.03557"
},
{
"id": "2210.05359"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2201.08239"
},
{
"id": "2303.16199"
},
{
"id": "2304.09842"
},
{
"id": "2204.01691"
},
{
"id": "2305.01625"
},
{
"id": "2303.12712"
},
{
"id": "2207.05608"
},
{
"id": "2303.03846"
},
{
"id": "2211.11559"
},
{
"id": "2207.01206"
},
{
"id": "2303.09014"
},
{
"id": "2205.12255"
},
{
"id": "2208.03188"
}
] |
2308.01423 | 74 | Under standard temperature conditions of 298K and dilute system, can you list the materials with the highest O2 diffusivity? Is XEGKUR_clean thermally stable? What's the thermal stability of QINSUD_clean? Which materials have a bandgap between 1 and 2 eV? At room temperature (298K), what's the CO2 Henry coefficient for XEGKUR_clean? Can you provide the top 10 materials with the highest N2 diffusivity at 298K and infinity dilute system? Is the solvent removal stability of XEGKUR_clean strong? Can you list 5 materials with a bandgap near 1.6 eV when evaluated at 298K and 1bar? Under 77K and infinity dilute conditions, what's the hydrogen diffusivity for NISPEL_charged? What materials show the best performance in hydrogen uptake at 100bar considering a temperature of 77 K? How high is the O2 uptake of LOQLIN_SL at 298K and normal pressure? Can XAVDUV_clean stably store hydrogen under 100bar pressure when observed at 77K? At 298K and standard atmospheric pressure, how high is the N2 uptake for UXABUR_clean? What's the O2 diffusivity for OCUVUF_clean under conditions of 298K and dilute system? Is XAHWAG_clean stable when the solvent is removed? | 2308.01423#74 | ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks | ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to
predict and generate metal-organic frameworks (MOFs). By leveraging a
large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key
details from textual inputs and delivers appropriate responses, thus
eliminating the necessity for rigid structured queries. The system is comprised
of three core components (i.e. an agent, a toolkit, and an evaluator) and it
forms a robust pipeline that manages a variety of tasks, including data
retrieval, property prediction, and structure generations. The study further
explores the merits and constraints of using large language models (LLMs) AI
system in material sciences using and showcases its transformative potential
for future advancements. | http://arxiv.org/pdf/2308.01423 | Yeonghun Kang, Jihan Kim | cs.CL, cs.AI, cs.LG, physics.chem-ph | null | null | cs.CL | 20230801 | 20230825 | [
{
"id": "2302.13971"
},
{
"id": "2306.11296"
},
{
"id": "2303.17580"
},
{
"id": "2305.18365"
},
{
"id": "2305.10601"
},
{
"id": "1810.04805"
},
{
"id": "2211.09085"
},
{
"id": "2304.05376"
},
{
"id": "2212.05238"
},
{
"id": "2108.07258"
},
{
"id": "2110.06197"
},
{
"id": "2306.06283"
},
{
"id": "2008.02275"
},
{
"id": "2303.12712"
},
{
"id": "2210.03629"
},
{
"id": "2205.00445"
},
{
"id": "2009.03300"
}
] |
2308.00245 | 75 | [26] Kexin Pei, David Bieber, Kensen Shi, Charles Sutton, and Pengcheng Yin. 2023. Can Large Language Models Reason about Program Invariants?. In Proceedings of the 40th International Conference on Machine Learning.
[27] Luke Salamone. 2021. What is Temperature in NLP? https://lukesalamone.github.io/posts/what-is-temperature/ Section: posts.
[28] Jessica Shieh. 2023. Best practices for prompt engineering with OpenAI API | OpenAI Help Center. https://help.openai.com/en/articles/6654000-best-practices-for-prompt-engineering-with-openai-api
[29] Noah Shinn, Federico Cassano, Beck Labash, Ashwin Gopinath, Karthik Narasimhan, and Shunyu Yao. 2023. Reflexion: Language Agents with Verbal Reinforcement Learning. http://arxiv.org/abs/2303.11366 arXiv:2303.11366 [cs]. | 2308.00245#75 | The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models | Static analysis is a widely used technique in software engineering for
identifying and mitigating bugs. However, a significant hurdle lies in
achieving a delicate balance between precision and scalability. Large Language
Models (LLMs) offer a promising alternative, as recent advances demonstrate
remarkable capabilities in comprehending, generating, and even debugging code.
Yet, the logic of bugs can be complex and require sophisticated reasoning and a
large analysis scope spanning multiple functions. Therefore, at this point,
LLMs are better used in an assistive role to complement static analysis. In
this paper, we take a deep dive into the open space of LLM-assisted static
analysis, using use-before-initialization (UBI) bugs as a case study. To this
end, we develop LLift, a fully automated framework that interfaces with both a
static analysis tool and an LLM. By carefully designing the framework and the
prompts, we are able to overcome a number of challenges, including bug-specific
modeling, the large problem scope, the non-deterministic nature of LLMs, etc.
Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs
produced by static analysis, LLift demonstrates a potent capability, showcasing
a reasonable precision (50%) and appearing to have no missing bugs. It even
identified 13 previously unknown UBI bugs in the Linux kernel. This research
paves the way for new opportunities and methodologies in using LLMs for bug
discovery in extensive, real-world datasets. | http://arxiv.org/pdf/2308.00245 | Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian | cs.SE, cs.AI | null | null | cs.SE | 20230801 | 20231115 | [
{
"id": "2305.10601"
},
{
"id": "2107.03374"
},
{
"id": "2210.11610"
},
{
"id": "2305.16291"
},
{
"id": "2305.16151"
},
{
"id": "2303.18223"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "2203.02155"
},
{
"id": "2304.11938"
},
{
"id": "2304.03262"
},
{
"id": "2304.10513"
},
{
"id": "2201.11903"
},
{
"id": "2305.12138"
},
{
"id": "2305.12865"
},
{
"id": "2303.08774"
},
{
"id": "2306.01987"
},
{
"id": "2304.06815"
}
] |
2308.00352 | 75 | Figure 11: The system interface design for 'recommendation engine development' generated by the architect agent (zoom in for a better view).
[Sequence-diagram excerpt (Figure 12): User, UserProfile, Recommender, the two filtering models, Feedback, Optimization, Monitoring, Privacy, and Advertising exchange calls such as create UserProfile, create Recommender, fit model, run_experiment(), get_recommendations(), predict(), submit feedback, update models, optimize(), generate_report(), and ensure_privacy().]
Figure 12: The program call flow for 'recommendation engine development' generated by the architect agent (zoom in for a better view).
Preprint | 2308.00352#75 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | Remarkable progress has been made on automated problem solving through
societies of agents based on large language models (LLMs). Existing LLM-based
multi-agent systems can already solve simple dialogue tasks. Solutions to more
complex tasks, however, are complicated through logic inconsistencies due to
cascading hallucinations caused by naively chaining LLMs. Here we introduce
MetaGPT, an innovative meta-programming framework incorporating efficient human
workflows into LLM-based multi-agent collaborations. MetaGPT encodes
Standardized Operating Procedures (SOPs) into prompt sequences for more
streamlined workflows, thus allowing agents with human-like domain expertise to
verify intermediate results and reduce errors. MetaGPT utilizes an assembly
line paradigm to assign diverse roles to various agents, efficiently breaking
down complex tasks into subtasks involving many agents working together. On
collaborative software engineering benchmarks, MetaGPT generates more coherent
solutions than previous chat-based multi-agent systems. Our project can be
found at https://github.com/geekan/MetaGPT | http://arxiv.org/pdf/2308.00352 | Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, Jürgen Schmidhuber | cs.AI, cs.MA | null | null | cs.AI | 20230801 | 20231106 | [
{
"id": "2308.12950"
},
{
"id": "2305.17066"
},
{
"id": "1511.09249"
},
{
"id": "2308.11432"
},
{
"id": "2306.08568"
},
{
"id": "2310.02304"
},
{
"id": "2303.08896"
},
{
"id": "2204.05999"
},
{
"id": "2309.16797"
},
{
"id": "2002.08155"
},
{
"id": "2305.16960"
}
] |
2308.00675 | 75 | Parameters video: A list of IMAGE type input representing the video to be analyzed. object: The bounding boxes and masks of the objects which we want to track in the first frame of the video. Returns The function returns a list of a list of OBJECT instances representing the bounding boxes and masks of tracked objects in each frame. Use case: Use TRACK when you want to track an object in a video. Then the returned list of objects can be used by other functions. Example: VIDEO0=TRACK(video=VIDEO, object=OBJ) Important note: A video is a list of images. Use "IMAGE=EVAL(expr='{VIDEO}[i]')" in a separate line to get the i-th frame of the video | 2308.00675#75 | Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models | Today, large language models (LLMs) are taught to use new tools by providing
a few demonstrations of the tool's usage. Unfortunately, demonstrations are
hard to acquire, and can result in undesirable biased usage if the wrong
demonstration is chosen. Even in the rare scenario that demonstrations are
readily available, there is no principled selection protocol to determine how
many and which ones to provide. As tasks grow more complex, the selection
search grows combinatorially and invariably becomes intractable. Our work
provides an alternative to demonstrations: tool documentation. We advocate the
use of tool documentation, descriptions for the individual tool usage, over
demonstrations. We substantiate our claim through three main empirical findings
on 6 tasks across both vision and language modalities. First, on existing
benchmarks, zero-shot prompts with only tool documentation are sufficient for
eliciting proper tool usage, achieving performance on par with few-shot
prompts. Second, on a newly collected realistic tool-use dataset with hundreds
of available tool APIs, we show that tool documentation is significantly more
valuable than demonstrations, with zero-shot documentation significantly
outperforming few-shot without documentation. Third, we highlight the benefits
of tool documentations by tackling image generation and video tracking using
just-released unseen state-of-the-art models as tools. Finally, we highlight
the possibility of using tool documentation to automatically enable new
applications: by using nothing more than the documentation of GroundingDino,
Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the
just-released Grounded-SAM and Track Anything models. | http://arxiv.org/pdf/2308.00675 | Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister | cs.CL, cs.AI, cs.CV, cs.LG | null | null | cs.CL | 20230801 | 20230801 | [
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2204.10878"
},
{
"id": "1910.08210"
},
{
"id": "2107.07653"
},
{
"id": "2201.11903"
},
{
"id": "2305.17126"
},
{
"id": "1704.07535"
},
{
"id": "2205.01068"
},
{
"id": "2203.05115"
},
{
"id": "2303.08128"
},
{
"id": "2303.03378"
},
{
"id": "2305.04091"
},
{
"id": "2303.05499"
},
{
"id": "2107.07566"
},
{
"id": "2110.14168"
},
{
"id": "2210.03350"
},
{
"id": "2303.11381"
},
{
"id": "2101.06804"
},
{
"id": "2304.08354"
},
{
"id": "2212.14024"
},
{
"id": "2305.18752"
},
{
"id": "2211.10435"
},
{
"id": "2303.04671"
},
{
"id": "2210.12810"
},
{
"id": "1808.09588"
},
{
"id": "2304.11062"
},
{
"id": "2210.03629"
},
{
"id": "2303.05398"
},
{
"id": "2210.02406"
},
{
"id": "2212.10560"
},
{
"id": "2303.04129"
},
{
"id": "1704.01696"
},
{
"id": "2302.00923"
},
{
"id": "2211.12588"
},
{
"id": "1908.03557"
},
{
"id": "2210.05359"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2201.08239"
},
{
"id": "2303.16199"
},
{
"id": "2304.09842"
},
{
"id": "2204.01691"
},
{
"id": "2305.01625"
},
{
"id": "2303.12712"
},
{
"id": "2207.05608"
},
{
"id": "2303.03846"
},
{
"id": "2211.11559"
},
{
"id": "2207.01206"
},
{
"id": "2303.09014"
},
{
"id": "2205.12255"
},
{
"id": "2208.03188"
}
] |
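Putting the BETTERLOC, BETTERSEG, and TRACK documentation together, a minimal Python-style program in the spirit of the documented examples (the Track-Anything-style composition mentioned in the paper's abstract) could look like the following; the query 'person' and the choice of the first frame are illustrative assumptions, and the tool functions are assumed to already be bound in the executing program's namespace:

IMAGE0=EVAL(expr="{VIDEO}[0]")                   # get the first frame, as the documentation's note suggests
BOX0=BETTERLOC(image=IMAGE0, object='person')    # locate the object in that frame
OBJ0=BETTERSEG(image=IMAGE0, box=BOX0)           # get its bounding box and mask
VIDEO0=TRACK(video=VIDEO, object=OBJ0)           # propagate the mask through the video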
2308.00245 | 76 | [30] Weisong Sun, Chunrong Fang, Yudu You, Yun Miao, Yi Liu, Yuekang Li, Gelei Deng, Shenghan Huang, Yuchen Chen, Quanjun Zhang, Hanwei Qian, Yang Liu, and Zhenyu Chen. 2023. Automatic Code Summarization via ChatGPT: How Far Are We? http://arxiv.org/abs/2305.12865 arXiv:2305.12865 [cs].
[31] Haoye Tian, Weiqi Lu, Tsz On Li, Xunzhu Tang, Shing-Chi Cheung, Jacques Klein, and Tegawendé F. Bissyandé. 2023. Is ChatGPT the Ultimate Programming Assistant -- How far is it? http://arxiv.org/abs/2304.11938 arXiv:2304.11938 [cs]. [32] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is All you Need. In Advances in Neural Information Processing Systems, Vol. 30. Curran Associates, Inc. | 2308.00245#76 | The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models | Static analysis is a widely used technique in software engineering for
identifying and mitigating bugs. However, a significant hurdle lies in
achieving a delicate balance between precision and scalability. Large Language
Models (LLMs) offer a promising alternative, as recent advances demonstrate
remarkable capabilities in comprehending, generating, and even debugging code.
Yet, the logic of bugs can be complex and require sophisticated reasoning and a
large analysis scope spanning multiple functions. Therefore, at this point,
LLMs are better used in an assistive role to complement static analysis. In
this paper, we take a deep dive into the open space of LLM-assisted static
analysis, using use-before-initialization (UBI) bugs as a case study. To this
end, we develop LLift, a fully automated framework that interfaces with both a
static analysis tool and an LLM. By carefully designing the framework and the
prompts, we are able to overcome a number of challenges, including bug-specific
modeling, the large problem scope, the non-deterministic nature of LLMs, etc.
Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs
produced by static analysis, LLift demonstrates a potent capability, showcasing
a reasonable precision (50%) and appearing to have no missing bugs. It even
identified 13 previously unknown UBI bugs in the Linux kernel. This research
paves the way for new opportunities and methodologies in using LLMs for bug
discovery in extensive, real-world datasets. | http://arxiv.org/pdf/2308.00245 | Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian | cs.SE, cs.AI | null | null | cs.SE | 20230801 | 20231115 | [
{
"id": "2305.10601"
},
{
"id": "2107.03374"
},
{
"id": "2210.11610"
},
{
"id": "2305.16291"
},
{
"id": "2305.16151"
},
{
"id": "2303.18223"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "2203.02155"
},
{
"id": "2304.11938"
},
{
"id": "2304.03262"
},
{
"id": "2304.10513"
},
{
"id": "2201.11903"
},
{
"id": "2305.12138"
},
{
"id": "2305.12865"
},
{
"id": "2303.08774"
},
{
"id": "2306.01987"
},
{
"id": "2304.06815"
}
] |
2308.00352 | 76 | Table 5: Examples of SoftwareDev dataset. Task ID Task 0 1 2 3 Snake game Brick breaker game 2048 game Flappy bird game 4 5 Tank battle game Excel data process 6 CRUD manage 7 Music transcriber Prompt Create a snake game. Create a brick breaker game. Create a 2048 game for the web. Write p5.js code for Flappy Bird where you control a yellow bird continu- ously flying between a series of green pipes. The bird flaps every time you left click the mouse. If it falls to the ground or hits a pipe, you lose. This game goes on indefinitely until you lose; you get points the further you go. Create a tank battle game. Write an excel data processing program based on streamlit and pandas. The screen first shows an excel file upload button. After the excel file is uploaded, use pandas to display its data content. The program is required to be concise, easy to maintain, and not over-designed. It uses streamlit to process web screen displays, and pandas is sufficient to process excel reading and display. Please make sure others can execute directly without introducing additional packages. Write a management program based on the crud addition, deletion, modifi- cation and query processing of the customer business entity. The customer | 2308.00352#76 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | Remarkable progress has been made on automated problem solving through
societies of agents based on large language models (LLMs). Existing LLM-based
multi-agent systems can already solve simple dialogue tasks. Solutions to more
complex tasks, however, are complicated through logic inconsistencies due to
cascading hallucinations caused by naively chaining LLMs. Here we introduce
MetaGPT, an innovative meta-programming framework incorporating efficient human
workflows into LLM-based multi-agent collaborations. MetaGPT encodes
Standardized Operating Procedures (SOPs) into prompt sequences for more
streamlined workflows, thus allowing agents with human-like domain expertise to
verify intermediate results and reduce errors. MetaGPT utilizes an assembly
line paradigm to assign diverse roles to various agents, efficiently breaking
down complex tasks into subtasks involving many agents working together. On
collaborative software engineering benchmarks, MetaGPT generates more coherent
solutions than previous chat-based multi-agent systems. Our project can be
found at https://github.com/geekan/MetaGPT | http://arxiv.org/pdf/2308.00352 | Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, Jürgen Schmidhuber | cs.AI, cs.MA | null | null | cs.AI | 20230801 | 20231106 | [
{
"id": "2308.12950"
},
{
"id": "2305.17066"
},
{
"id": "1511.09249"
},
{
"id": "2308.11432"
},
{
"id": "2306.08568"
},
{
"id": "2310.02304"
},
{
"id": "2303.08896"
},
{
"id": "2204.05999"
},
{
"id": "2309.16797"
},
{
"id": "2002.08155"
},
{
"id": "2305.16960"
}
] |
2308.00675 | 76 | Figure 11: Documentation of new tools introduced in VisProg. BETTERLOC, BETTERSEG, TRACK calls GroundingDINO, Segment Anything, XMem, respectively.
# C Experimental results
In this section, we show the experimental results on each task with comparisons to more baselines.
ScienceQA. In Table 3, we compare zero-shot prompting with tool documentations to other baseline methods. We include the following baseline methods that are finetuned on the ScienceQA training set for performance reference: ViLT [29], VisualBERT [34], UnifiedQA CoT [39], MM-CoT [80], and LLaMA-Adapter [78]. We report the results obtained from [40] for the finetuned methods. For fair comparison, we shall focus on zero/few-shot settings. Thus, we include Chain-of-Thought (CoT) [67] and Chameleon [40] as the few-shot baselines to compare to. We see that with tool docs, we can not only achieve better performance than the few-shot methods without any demos, but we can also match (outperform) several models specifically finetuned on the dataset. | 2308.00675#76 | Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models | Today, large language models (LLMs) are taught to use new tools by providing
a few demonstrations of the tool's usage. Unfortunately, demonstrations are
hard to acquire, and can result in undesirable biased usage if the wrong
demonstration is chosen. Even in the rare scenario that demonstrations are
readily available, there is no principled selection protocol to determine how
many and which ones to provide. As tasks grow more complex, the selection
search grows combinatorially and invariably becomes intractable. Our work
provides an alternative to demonstrations: tool documentation. We advocate the
use of tool documentation, descriptions for the individual tool usage, over
demonstrations. We substantiate our claim through three main empirical findings
on 6 tasks across both vision and language modalities. First, on existing
benchmarks, zero-shot prompts with only tool documentation are sufficient for
eliciting proper tool usage, achieving performance on par with few-shot
prompts. Second, on a newly collected realistic tool-use dataset with hundreds
of available tool APIs, we show that tool documentation is significantly more
valuable than demonstrations, with zero-shot documentation significantly
outperforming few-shot without documentation. Third, we highlight the benefits
of tool documentations by tackling image generation and video tracking using
just-released unseen state-of-the-art models as tools. Finally, we highlight
the possibility of using tool documentation to automatically enable new
applications: by using nothing more than the documentation of GroundingDino,
Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the
just-released Grounded-SAM and Track Anything models. | http://arxiv.org/pdf/2308.00675 | Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister | cs.CL, cs.AI, cs.CV, cs.LG | null | null | cs.CL | 20230801 | 20230801 | [
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2204.10878"
},
{
"id": "1910.08210"
},
{
"id": "2107.07653"
},
{
"id": "2201.11903"
},
{
"id": "2305.17126"
},
{
"id": "1704.07535"
},
{
"id": "2205.01068"
},
{
"id": "2203.05115"
},
{
"id": "2303.08128"
},
{
"id": "2303.03378"
},
{
"id": "2305.04091"
},
{
"id": "2303.05499"
},
{
"id": "2107.07566"
},
{
"id": "2110.14168"
},
{
"id": "2210.03350"
},
{
"id": "2303.11381"
},
{
"id": "2101.06804"
},
{
"id": "2304.08354"
},
{
"id": "2212.14024"
},
{
"id": "2305.18752"
},
{
"id": "2211.10435"
},
{
"id": "2303.04671"
},
{
"id": "2210.12810"
},
{
"id": "1808.09588"
},
{
"id": "2304.11062"
},
{
"id": "2210.03629"
},
{
"id": "2303.05398"
},
{
"id": "2210.02406"
},
{
"id": "2212.10560"
},
{
"id": "2303.04129"
},
{
"id": "1704.01696"
},
{
"id": "2302.00923"
},
{
"id": "2211.12588"
},
{
"id": "1908.03557"
},
{
"id": "2210.05359"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2201.08239"
},
{
"id": "2303.16199"
},
{
"id": "2304.09842"
},
{
"id": "2204.01691"
},
{
"id": "2305.01625"
},
{
"id": "2303.12712"
},
{
"id": "2207.05608"
},
{
"id": "2303.03846"
},
{
"id": "2211.11559"
},
{
"id": "2207.01206"
},
{
"id": "2303.09014"
},
{
"id": "2205.12255"
},
{
"id": "2208.03188"
}
] |
2308.01423 | 76 | How high is the bandgap for XEGKUR_clean?
What materials have the highest thermal stability? Can you show materials with a bandgap greater than 3.0 eV? At 298K and standard atmospheric pressure, how much O2 can XEGKUR_clean take up? What is the CO2 Henry coefficient for materials with bandgap less than 1.5 eV at 298K? At 77K, how does the hydrogen uptake of OCUVUF_clean compare with other materials at 100bar? Can you list materials with hydrogen diffusivity greater than 0.1 cm2/s at 77K and dilute system? What is the N2 diffusivity in XEGKUR_clean under 298K and dilute conditions? What is the thermal stability of materials with bandgap above 2.5 eV? Which materials demonstrate a high level of O2 diffusivity at 298K and dilute condition? How does the N2 uptake of XEGKUR_clean compare with others at 298K and 1 bar? Can QINSUD_clean remove solvents stably? How high is the thermal stability for NISPEL_charged? At 298K and 1 bar pressure, what is the O2 uptake of MIFROK_clean? Under standard conditions of 298K and 1 bar, which materials have the highest N2 uptake?
Can you list materials that show solvent removal stability? | 2308.01423#76 | ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks | ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to
predict and generate metal-organic frameworks (MOFs). By leveraging a
large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key
details from textual inputs and delivers appropriate responses, thus
eliminating the necessity for rigid structured queries. The system is comprised
of three core components (i.e. an agent, a toolkit, and an evaluator) and it
forms a robust pipeline that manages a variety of tasks, including data
retrieval, property prediction, and structure generations. The study further
explores the merits and constraints of using large language models (LLMs) AI
system in material sciences using and showcases its transformative potential
for future advancements. | http://arxiv.org/pdf/2308.01423 | Yeonghun Kang, Jihan Kim | cs.CL, cs.AI, cs.LG, physics.chem-ph | null | null | cs.CL | 20230801 | 20230825 | [
{
"id": "2302.13971"
},
{
"id": "2306.11296"
},
{
"id": "2303.17580"
},
{
"id": "2305.18365"
},
{
"id": "2305.10601"
},
{
"id": "1810.04805"
},
{
"id": "2211.09085"
},
{
"id": "2304.05376"
},
{
"id": "2212.05238"
},
{
"id": "2108.07258"
},
{
"id": "2110.06197"
},
{
"id": "2306.06283"
},
{
"id": "2008.02275"
},
{
"id": "2303.12712"
},
{
"id": "2210.03629"
},
{
"id": "2205.00445"
},
{
"id": "2009.03300"
}
] |
2308.00245 | 77 | [33] Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Mandlekar, Chaowei Xiao, Yuke Zhu, Linxi Fan, and Anima Anandkumar. 2023. Voyager: An Open-Ended Embodied Agent with Large Language Models. http://arxiv.org/abs/2305.16291 arXiv:2305.16291 [cs]. [34] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, and Denny Zhou. 2023. Chain-of-Thought Prompting Elicits Reasoning in Large Language Models. http://arxiv.org/abs/2201.11903 arXiv:2201.11903 [cs].
[35] Lilian Weng. 2023. LLM-powered Autonomous Agents. lilianweng.github.io (Jun 2023). https://lilianweng.github.io/posts/2023-06-23-agent
[36] Chunqiu Steven Xia and Lingming Zhang. 2023. Keep the Conversation Going: Fixing 162 out of 337 bugs for $0.42 each using ChatGPT. http://arxiv.org/abs/2304.00385 | 2308.00245#77 | The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models | Static analysis is a widely used technique in software engineering for
identifying and mitigating bugs. However, a significant hurdle lies in
achieving a delicate balance between precision and scalability. Large Language
Models (LLMs) offer a promising alternative, as recent advances demonstrate
remarkable capabilities in comprehending, generating, and even debugging code.
Yet, the logic of bugs can be complex and require sophisticated reasoning and a
large analysis scope spanning multiple functions. Therefore, at this point,
LLMs are better used in an assistive role to complement static analysis. In
this paper, we take a deep dive into the open space of LLM-assisted static
analysis, using use-before-initialization (UBI) bugs as a case study. To this
end, we develop LLift, a fully automated framework that interfaces with both a
static analysis tool and an LLM. By carefully designing the framework and the
prompts, we are able to overcome a number of challenges, including bug-specific
modeling, the large problem scope, the non-deterministic nature of LLMs, etc.
Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs
produced by static analysis, LLift demonstrates a potent capability, showcasing
a reasonable precision (50%) and appearing to have no missing bugs. It even
identified 13 previously unknown UBI bugs in the Linux kernel. This research
paves the way for new opportunities and methodologies in using LLMs for bug
discovery in extensive, real-world datasets. | http://arxiv.org/pdf/2308.00245 | Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian | cs.SE, cs.AI | null | null | cs.SE | 20230801 | 20231115 | [
{
"id": "2305.10601"
},
{
"id": "2107.03374"
},
{
"id": "2210.11610"
},
{
"id": "2305.16291"
},
{
"id": "2305.16151"
},
{
"id": "2303.18223"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "2203.02155"
},
{
"id": "2304.11938"
},
{
"id": "2304.03262"
},
{
"id": "2304.10513"
},
{
"id": "2201.11903"
},
{
"id": "2305.12138"
},
{
"id": "2305.12865"
},
{
"id": "2303.08774"
},
{
"id": "2306.01987"
},
{
"id": "2304.06815"
}
] |
2308.00352 | 77 | without introducing additional packages. Write a management program based on the crud addition, deletion, modification and query processing of the customer business entity. The customer needs to save this information: name, birthday, age, sex, and phone. The data is stored in client.db, and there is a judgement whether the customer table exists. If it doesn't, it needs to be created first. Querying is done by name; same for deleting. The program is required to be concise, easy to maintain, and not over-designed. The screen is realized through streamlit and sqlite; no need to introduce other additional packages. Develop a program to transcribe sheet music into a digital format; providing error-free transcribed symbolized sheet music intelligence from audio through signal processing involving pitch and time slicing then training a neural net to run Onset Detected CWT transforming scalograms to chromagrams decoded with Recursive Neural Network focused networks. 8 9 Gomoku game | 2308.00352#77 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | Remarkable progress has been made on automated problem solving through
societies of agents based on large language models (LLMs). Existing LLM-based
multi-agent systems can already solve simple dialogue tasks. Solutions to more
complex tasks, however, are complicated through logic inconsistencies due to
cascading hallucinations caused by naively chaining LLMs. Here we introduce
MetaGPT, an innovative meta-programming framework incorporating efficient human
workflows into LLM-based multi-agent collaborations. MetaGPT encodes
Standardized Operating Procedures (SOPs) into prompt sequences for more
streamlined workflows, thus allowing agents with human-like domain expertise to
verify intermediate results and reduce errors. MetaGPT utilizes an assembly
line paradigm to assign diverse roles to various agents, efficiently breaking
down complex tasks into subtasks involving many agents working together. On
collaborative software engineering benchmarks, MetaGPT generates more coherent
solutions than previous chat-based multi-agent systems. Our project can be
found at https://github.com/geekan/MetaGPT | http://arxiv.org/pdf/2308.00352 | Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, Jürgen Schmidhuber | cs.AI, cs.MA | null | null | cs.AI | 20230801 | 20231106 | [
{
"id": "2308.12950"
},
{
"id": "2305.17066"
},
{
"id": "1511.09249"
},
{
"id": "2308.11432"
},
{
"id": "2306.08568"
},
{
"id": "2310.02304"
},
{
"id": "2303.08896"
},
{
"id": "2204.05999"
},
{
"id": "2309.16797"
},
{
"id": "2002.08155"
},
{
"id": "2305.16960"
}
] |
2308.00675 | 77 | Table 3: Comparing zero-shot prompting with tool docs to existing baseline methods on ScienceQA. We see that zero-shot prompting with tool docs performs competitively, outperforming the two few-shot baselines and several finetuned models.
ScienceQA accuracy -- Finetuned: ViLT 61.14, VisualBERT 61.87, UnifiedQA CoT 74.11, MM-CoT 84.91, LLaMA-Adapter 85.19; Few-shot: CoT 78.54, Chameleon 79.20; Zero-shot: 0-shot with docs 79.91
TabMWP. Similarly, in Table 4, we compare zero-shot prompting with tool docs to various finetuned models and few-shot baselines, including: UnifiedQA [26], TAPEX [36], Chain-of-Thought (CoT) [67], Program-of-Thought (PoT) [13], and Chameleon [40]. We report the results obtained from [40] for UnifiedQA, TAPEX, and CoT. We see that with tool docs, zero-shot prompting significantly outperforms the finetuned models and the baseline few-shot methods, CoT and PoT. When compared to Chameleon that utilizes 16 few-shot tool-usage demos, tool docs enable the model to perform comparably without relying on any demos. | 2308.00675#77 | Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models | Today, large language models (LLMs) are taught to use new tools by providing
a few demonstrations of the tool's usage. Unfortunately, demonstrations are
hard to acquire, and can result in undesirable biased usage if the wrong
demonstration is chosen. Even in the rare scenario that demonstrations are
readily available, there is no principled selection protocol to determine how
many and which ones to provide. As tasks grow more complex, the selection
search grows combinatorially and invariably becomes intractable. Our work
provides an alternative to demonstrations: tool documentation. We advocate the
use of tool documentation, descriptions for the individual tool usage, over
demonstrations. We substantiate our claim through three main empirical findings
on 6 tasks across both vision and language modalities. First, on existing
benchmarks, zero-shot prompts with only tool documentation are sufficient for
eliciting proper tool usage, achieving performance on par with few-shot
prompts. Second, on a newly collected realistic tool-use dataset with hundreds
of available tool APIs, we show that tool documentation is significantly more
valuable than demonstrations, with zero-shot documentation significantly
outperforming few-shot without documentation. Third, we highlight the benefits
of tool documentations by tackling image generation and video tracking using
just-released unseen state-of-the-art models as tools. Finally, we highlight
the possibility of using tool documentation to automatically enable new
applications: by using nothing more than the documentation of GroundingDino,
Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the
just-released Grounded-SAM and Track Anything models. | http://arxiv.org/pdf/2308.00675 | Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister | cs.CL, cs.AI, cs.CV, cs.LG | null | null | cs.CL | 20230801 | 20230801 | [
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2204.10878"
},
{
"id": "1910.08210"
},
{
"id": "2107.07653"
},
{
"id": "2201.11903"
},
{
"id": "2305.17126"
},
{
"id": "1704.07535"
},
{
"id": "2205.01068"
},
{
"id": "2203.05115"
},
{
"id": "2303.08128"
},
{
"id": "2303.03378"
},
{
"id": "2305.04091"
},
{
"id": "2303.05499"
},
{
"id": "2107.07566"
},
{
"id": "2110.14168"
},
{
"id": "2210.03350"
},
{
"id": "2303.11381"
},
{
"id": "2101.06804"
},
{
"id": "2304.08354"
},
{
"id": "2212.14024"
},
{
"id": "2305.18752"
},
{
"id": "2211.10435"
},
{
"id": "2303.04671"
},
{
"id": "2210.12810"
},
{
"id": "1808.09588"
},
{
"id": "2304.11062"
},
{
"id": "2210.03629"
},
{
"id": "2303.05398"
},
{
"id": "2210.02406"
},
{
"id": "2212.10560"
},
{
"id": "2303.04129"
},
{
"id": "1704.01696"
},
{
"id": "2302.00923"
},
{
"id": "2211.12588"
},
{
"id": "1908.03557"
},
{
"id": "2210.05359"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2201.08239"
},
{
"id": "2303.16199"
},
{
"id": "2304.09842"
},
{
"id": "2204.01691"
},
{
"id": "2305.01625"
},
{
"id": "2303.12712"
},
{
"id": "2207.05608"
},
{
"id": "2303.03846"
},
{
"id": "2211.11559"
},
{
"id": "2207.01206"
},
{
"id": "2303.09014"
},
{
"id": "2205.12255"
},
{
"id": "2208.03188"
}
] |
2308.01423 | 77 | Can you list materials that show solvent removal stability?
How does the hydrogen diffusivity for QINSUD_clean compare with others at 77K and dilute system?
Under conditions of 298K, what is the CO2 Henry coefficient for WOYJOL_clean?
What materials have the best N2 diffusivity when measured at 298K and dilute system?
Is the bandgap of XEGKUR_clean within the range 0.5-1.5 for photovoltaic applications?
At standard conditions of 77K and 100 bar, which materials have the highest hydrogen uptake?
How high is the bandgap for XEGKUR_clean when measured at 298K and under a pressure of 1 bar?
Can you provide a list of materials with high thermal stability?
At 77K and 100bar pressure, what is the hydrogen uptake of OCUVUF_clean?
What is the solvent removal stability of XAHWAG_clean? | 2308.01423#77 | ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks | ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to
predict and generate metal-organic frameworks (MOFs). By leveraging a
large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key
details from textual inputs and delivers appropriate responses, thus
eliminating the necessity for rigid structured queries. The system is comprised
of three core components (i.e. an agent, a toolkit, and an evaluator) and it
forms a robust pipeline that manages a variety of tasks, including data
retrieval, property prediction, and structure generations. The study further
explores the merits and constraints of using large language models (LLMs) AI
system in material sciences using and showcases its transformative potential
for future advancements. | http://arxiv.org/pdf/2308.01423 | Yeonghun Kang, Jihan Kim | cs.CL, cs.AI, cs.LG, physics.chem-ph | null | null | cs.CL | 20230801 | 20230825 | [
{
"id": "2302.13971"
},
{
"id": "2306.11296"
},
{
"id": "2303.17580"
},
{
"id": "2305.18365"
},
{
"id": "2305.10601"
},
{
"id": "1810.04805"
},
{
"id": "2211.09085"
},
{
"id": "2304.05376"
},
{
"id": "2212.05238"
},
{
"id": "2108.07258"
},
{
"id": "2110.06197"
},
{
"id": "2306.06283"
},
{
"id": "2008.02275"
},
{
"id": "2303.12712"
},
{
"id": "2210.03629"
},
{
"id": "2205.00445"
},
{
"id": "2009.03300"
}
] |
2308.00245 | 78 | [37] Frank F. Xu, Uri Alon, Graham Neubig, and Vincent Josua Hellendoorn. 2022. A systematic evaluation of large language models of code. In Proceedings of the 6th ACM SIGPLAN International Symposium on Machine Programming. ACM, San Diego CA USA, 1-10. https://doi.org/10.1145/3520312.3534862
[38] Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L. Griffiths, Yuan Cao, and Karthik Narasimhan. 2023. Tree of Thoughts: Deliberate Problem Solving with Large Language Models. http://arxiv.org/abs/2305.10601 arXiv:2305.10601 [cs]. | 2308.00245#78 | The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models | Static analysis is a widely used technique in software engineering for
identifying and mitigating bugs. However, a significant hurdle lies in
achieving a delicate balance between precision and scalability. Large Language
Models (LLMs) offer a promising alternative, as recent advances demonstrate
remarkable capabilities in comprehending, generating, and even debugging code.
Yet, the logic of bugs can be complex and require sophisticated reasoning and a
large analysis scope spanning multiple functions. Therefore, at this point,
LLMs are better used in an assistive role to complement static analysis. In
this paper, we take a deep dive into the open space of LLM-assisted static
analysis, using use-before-initialization (UBI) bugs as a case study. To this
end, we develop LLift, a fully automated framework that interfaces with both a
static analysis tool and an LLM. By carefully designing the framework and the
prompts, we are able to overcome a number of challenges, including bug-specific
modeling, the large problem scope, the non-deterministic nature of LLMs, etc.
Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs
produced by static analysis, LLift demonstrates a potent capability, showcasing
a reasonable precision (50%) and appearing to have no missing bugs. It even
identified 13 previously unknown UBI bugs in the Linux kernel. This research
paves the way for new opportunities and methodologies in using LLMs for bug
discovery in extensive, real-world datasets. | http://arxiv.org/pdf/2308.00245 | Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian | cs.SE, cs.AI | null | null | cs.SE | 20230801 | 20231115 | [
{
"id": "2305.10601"
},
{
"id": "2107.03374"
},
{
"id": "2210.11610"
},
{
"id": "2305.16291"
},
{
"id": "2305.16151"
},
{
"id": "2303.18223"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "2203.02155"
},
{
"id": "2304.11938"
},
{
"id": "2304.03262"
},
{
"id": "2304.10513"
},
{
"id": "2201.11903"
},
{
"id": "2305.12138"
},
{
"id": "2305.12865"
},
{
"id": "2303.08774"
},
{
"id": "2306.01987"
},
{
"id": "2304.06815"
}
] |
2308.00352 | 78 | Custom press releases Create custom press releases; develop a Python script that extracts relevant information about company news from external sources, such as social media; extract update interval database for recent changes. The program should create press releases with customizable options and export writings to PDFs, NYTimes API JSONs, media format styled with interlink internal fixed character-length metadata. Implement a Gomoku game using Python, incorporating an AI opponent with varying difficulty levels. Create a Python program to develop an interactive weather dashboard. | 2308.00352#78 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | Remarkable progress has been made on automated problem solving through
societies of agents based on large language models (LLMs). Existing LLM-based
multi-agent systems can already solve simple dialogue tasks. Solutions to more
complex tasks, however, are complicated through logic inconsistencies due to
cascading hallucinations caused by naively chaining LLMs. Here we introduce
MetaGPT, an innovative meta-programming framework incorporating efficient human
workflows into LLM-based multi-agent collaborations. MetaGPT encodes
Standardized Operating Procedures (SOPs) into prompt sequences for more
streamlined workflows, thus allowing agents with human-like domain expertise to
verify intermediate results and reduce errors. MetaGPT utilizes an assembly
line paradigm to assign diverse roles to various agents, efficiently breaking
down complex tasks into subtasks involving many agents working together. On
collaborative software engineering benchmarks, MetaGPT generates more coherent
solutions than previous chat-based multi-agent systems. Our project can be
found at https://github.com/geekan/MetaGPT | http://arxiv.org/pdf/2308.00352 | Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, Jürgen Schmidhuber | cs.AI, cs.MA | null | null | cs.AI | 20230801 | 20231106 | [
{
"id": "2308.12950"
},
{
"id": "2305.17066"
},
{
"id": "1511.09249"
},
{
"id": "2308.11432"
},
{
"id": "2306.08568"
},
{
"id": "2310.02304"
},
{
"id": "2303.08896"
},
{
"id": "2204.05999"
},
{
"id": "2309.16797"
},
{
"id": "2002.08155"
},
{
"id": "2305.16960"
}
] |
2308.00675 | 78 | Table 4: Comparing zero-shot prompting with tool docs to existing baseline methods on TabMWP. We see that with tool docs, even zero-shot prompting without any tool-usage demos achieves better performance than finetuned models and few-shot CoT and PoT baselines. It also performs comparably to Chameleon, which employs 16-shot tool-usage demos.
Benchmark: TabMWP. Finetuned methods: UnifiedQA 57.35, TAPEX 58.52. Few-shot methods: CoT 82.03, PoT 89.28, Chameleon 93.88. Zero-shot methods: 0-shot with docs 92.69. | 2308.00675#78 | Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models | Today, large language models (LLMs) are taught to use new tools by providing
a few demonstrations of the tool's usage. Unfortunately, demonstrations are
hard to acquire, and can result in undesirable biased usage if the wrong
demonstration is chosen. Even in the rare scenario that demonstrations are
readily available, there is no principled selection protocol to determine how
many and which ones to provide. As tasks grow more complex, the selection
search grows combinatorially and invariably becomes intractable. Our work
provides an alternative to demonstrations: tool documentation. We advocate the
use of tool documentation, descriptions for the individual tool usage, over
demonstrations. We substantiate our claim through three main empirical findings
on 6 tasks across both vision and language modalities. First, on existing
benchmarks, zero-shot prompts with only tool documentation are sufficient for
eliciting proper tool usage, achieving performance on par with few-shot
prompts. Second, on a newly collected realistic tool-use dataset with hundreds
of available tool APIs, we show that tool documentation is significantly more
valuable than demonstrations, with zero-shot documentation significantly
outperforming few-shot without documentation. Third, we highlight the benefits
of tool documentations by tackling image generation and video tracking using
just-released unseen state-of-the-art models as tools. Finally, we highlight
the possibility of using tool documentation to automatically enable new
applications: by using nothing more than the documentation of GroundingDino,
Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the
just-released Grounded-SAM and Track Anything models. | http://arxiv.org/pdf/2308.00675 | Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister | cs.CL, cs.AI, cs.CV, cs.LG | null | null | cs.CL | 20230801 | 20230801 | [
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2204.10878"
},
{
"id": "1910.08210"
},
{
"id": "2107.07653"
},
{
"id": "2201.11903"
},
{
"id": "2305.17126"
},
{
"id": "1704.07535"
},
{
"id": "2205.01068"
},
{
"id": "2203.05115"
},
{
"id": "2303.08128"
},
{
"id": "2303.03378"
},
{
"id": "2305.04091"
},
{
"id": "2303.05499"
},
{
"id": "2107.07566"
},
{
"id": "2110.14168"
},
{
"id": "2210.03350"
},
{
"id": "2303.11381"
},
{
"id": "2101.06804"
},
{
"id": "2304.08354"
},
{
"id": "2212.14024"
},
{
"id": "2305.18752"
},
{
"id": "2211.10435"
},
{
"id": "2303.04671"
},
{
"id": "2210.12810"
},
{
"id": "1808.09588"
},
{
"id": "2304.11062"
},
{
"id": "2210.03629"
},
{
"id": "2303.05398"
},
{
"id": "2210.02406"
},
{
"id": "2212.10560"
},
{
"id": "2303.04129"
},
{
"id": "1704.01696"
},
{
"id": "2302.00923"
},
{
"id": "2211.12588"
},
{
"id": "1908.03557"
},
{
"id": "2210.05359"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2201.08239"
},
{
"id": "2303.16199"
},
{
"id": "2304.09842"
},
{
"id": "2204.01691"
},
{
"id": "2305.01625"
},
{
"id": "2303.12712"
},
{
"id": "2207.05608"
},
{
"id": "2303.03846"
},
{
"id": "2211.11559"
},
{
"id": "2207.01206"
},
{
"id": "2303.09014"
},
{
"id": "2205.12255"
},
{
"id": "2208.03188"
}
] |
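To make the "zero-shot with docs" setting reported in the chunk above concrete, the following is a minimal Python sketch (illustrative, not the paper's actual code) of assembling a planner prompt from tool documentation alone, with no tool-usage demonstrations; the tool names, docstrings, and the render_prompt helper are assumptions.

# Minimal sketch (illustrative): build a zero-shot planner prompt from tool
# documentation only, with no tool-usage demonstrations.
TOOL_DOCS = {
    "CALCULATOR": "CALCULATOR(expr) -> float: evaluates an arithmetic expression.",
    "TABLE_LOOKUP": "TABLE_LOOKUP(table, row, col) -> str: returns the cell at (row, col).",
    "RESULT": "RESULT(var): marks var as the final answer.",
}

def render_prompt(question, tool_docs):
    # Concatenate every tool's documentation, then append the task question.
    doc_section = "\n".join(f"- {doc}" for doc in tool_docs.values())
    return (
        "You can call the following tools. Read their documentation carefully.\n"
        + doc_section
        + "\n\nWrite a short program that answers the question using only these tools.\n"
        + f"Question: {question}\nProgram:"
    )

if __name__ == "__main__":
    print(render_prompt("What is the total cost of 3 notebooks at $2.50 each?", TOOL_DOCS))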
2308.01423 | 78 | At 77K and 100bar pressure, what is the hydrogen uptake of OCUVUF_clean?
What is the solvent removal stability of XAHWAG_clean?
Can GUCJAQ_clean diffusively handle N2 when evaluated at 298K and dilute system? What materials show high O2 uptake capabilities under conditions of 298K and 1bar? At 298K and dilute system, what is the diffusivity of O2 in XEGKUR_clean? How much N2 can DUVNIS01_charged take up at 298K and 1 bar? Under conditions of 298K, which materials have a high CO2 Henry coefficient? How much hydrogen can XEGKUR_clean take up at 100bar pressure when observed at 77 K? Under 298K and dilute conditions, how high is the O2 diffusivity for DUVNIS01_charged? At standard temperature (298K) and 1 bar pressure, can IZEHAX_clean hold a high volume of N2? At 298K and 1bar, how stable is JALCAD_clean when the solvent is removed? What is the thermal stability of COKMUM_clean? Can AYOYOE_clean handle a high volume of O2 when tested at 298K and 1 bar? At 298K and dilute system, what is the diffusivity of N2 in IHAJIL_clean?
55 | 2308.01423#78 | ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks | ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to
predict and generate metal-organic frameworks (MOFs). By leveraging a
large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key
details from textual inputs and delivers appropriate responses, thus
eliminating the necessity for rigid structured queries. The system is comprised
of three core components (i.e. an agent, a toolkit, and an evaluator) and it
forms a robust pipeline that manages a variety of tasks, including data
retrieval, property prediction, and structure generations. The study further
explores the merits and constraints of using large language models (LLMs) AI
system in material sciences and showcases its transformative potential
for future advancements. | http://arxiv.org/pdf/2308.01423 | Yeonghun Kang, Jihan Kim | cs.CL, cs.AI, cs.LG, physics.chem-ph | null | null | cs.CL | 20230801 | 20230825 | [
{
"id": "2302.13971"
},
{
"id": "2306.11296"
},
{
"id": "2303.17580"
},
{
"id": "2305.18365"
},
{
"id": "2305.10601"
},
{
"id": "1810.04805"
},
{
"id": "2211.09085"
},
{
"id": "2304.05376"
},
{
"id": "2212.05238"
},
{
"id": "2108.07258"
},
{
"id": "2110.06197"
},
{
"id": "2306.06283"
},
{
"id": "2008.02275"
},
{
"id": "2303.12712"
},
{
"id": "2210.03629"
},
{
"id": "2205.00445"
},
{
"id": "2009.03300"
}
] |
2308.00245 | 79 | [39] Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. 2023. ReAct: Synergizing Reasoning and Acting in Language Models. International Conference on Learning Representations (ICLR) (2023). [40] Yizhuo Zhai, Yu Hao, Hang Zhang, Daimeng Wang, Chengyu Song, Zhiyun Qian, Mohsen Lesani, Srikanth V. Krishnamurthy, and Paul Yu. 2020. UBITect: A Precise and Scalable Method to Detect Use-before-Initialization Bugs in Linux Kernel. In Proceedings of the 28th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE 2020). | 2308.00245#79 | The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models | Static analysis is a widely used technique in software engineering for
identifying and mitigating bugs. However, a significant hurdle lies in
achieving a delicate balance between precision and scalability. Large Language
Models (LLMs) offer a promising alternative, as recent advances demonstrate
remarkable capabilities in comprehending, generating, and even debugging code.
Yet, the logic of bugs can be complex and require sophisticated reasoning and a
large analysis scope spanning multiple functions. Therefore, at this point,
LLMs are better used in an assistive role to complement static analysis. In
this paper, we take a deep dive into the open space of LLM-assisted static
analysis, using use-before-initialization (UBI) bugs as a case study. To this
end, we develop LLift, a fully automated framework that interfaces with both a
static analysis tool and an LLM. By carefully designing the framework and the
prompts, we are able to overcome a number of challenges, including bug-specific
modeling, the large problem scope, the non-deterministic nature of LLMs, etc.
Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs
produced by static analysis, LLift demonstrates a potent capability, showcasing
a reasonable precision (50%) and appearing to have no missing bugs. It even
identified 13 previously unknown UBI bugs in the Linux kernel. This research
paves the way for new opportunities and methodologies in using LLMs for bug
discovery in extensive, real-world datasets. | http://arxiv.org/pdf/2308.00245 | Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian | cs.SE, cs.AI | null | null | cs.SE | 20230801 | 20231115 | [
{
"id": "2305.10601"
},
{
"id": "2107.03374"
},
{
"id": "2210.11610"
},
{
"id": "2305.16291"
},
{
"id": "2305.16151"
},
{
"id": "2303.18223"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "2203.02155"
},
{
"id": "2304.11938"
},
{
"id": "2304.03262"
},
{
"id": "2304.10513"
},
{
"id": "2201.11903"
},
{
"id": "2305.12138"
},
{
"id": "2305.12865"
},
{
"id": "2303.08774"
},
{
"id": "2306.01987"
},
{
"id": "2304.06815"
}
] |
2308.00352 | 79 | [Rotated-table text garbled by PDF extraction. Recoverable caption and header fragments: averages (Avg.) of 70 tasks and feedback on 10 randomly selected SoftwareDev tasks; column groups include code statistics, doc statistics (#doc files, #lines of doc, #lines per doc file), cost statistics (#prompt tokens, #completion tokens, time costs, money costs), cost of revision, and code executability; individual cell values (e.g. '1. TypeError', dollar costs) are not reliably recoverable.] | 2308.00352#79 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | Remarkable progress has been made on automated problem solving through
societies of agents based on large language models (LLMs). Existing LLM-based
multi-agent systems can already solve simple dialogue tasks. Solutions to more
complex tasks, however, are complicated through logic inconsistencies due to
cascading hallucinations caused by naively chaining LLMs. Here we introduce
MetaGPT, an innovative meta-programming framework incorporating efficient human
workflows into LLM-based multi-agent collaborations. MetaGPT encodes
Standardized Operating Procedures (SOPs) into prompt sequences for more
streamlined workflows, thus allowing agents with human-like domain expertise to
verify intermediate results and reduce errors. MetaGPT utilizes an assembly
line paradigm to assign diverse roles to various agents, efficiently breaking
down complex tasks into subtasks involving many agents working together. On
collaborative software engineering benchmarks, MetaGPT generates more coherent
solutions than previous chat-based multi-agent systems. Our project can be
found at https://github.com/geekan/MetaGPT | http://arxiv.org/pdf/2308.00352 | Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, Jürgen Schmidhuber | cs.AI, cs.MA | null | null | cs.AI | 20230801 | 20231106 | [
{
"id": "2308.12950"
},
{
"id": "2305.17066"
},
{
"id": "1511.09249"
},
{
"id": "2308.11432"
},
{
"id": "2306.08568"
},
{
"id": "2310.02304"
},
{
"id": "2303.08896"
},
{
"id": "2204.05999"
},
{
"id": "2309.16797"
},
{
"id": "2002.08155"
},
{
"id": "2305.16960"
}
] |
2308.00675 | 79 | NLVRv2. In Table 5, we compare zero-shot prompting with tool docs to a finetuned model on NLVRv2 and various few-shot baselines. Specifically, we consider ViLT [29] as the finetuned baseline and VisProg [19] with varying numbers of tool-usage demos as the few-shot baselines. We report the result obtained from [19] for ViLT. Since VisProg does not utilize tool docs, we see that its performance is very sensitive to the number of demos used. In addition, we also observe large performance variance when we randomly select different demos used for prompting, e.g., the standard deviation for 2-shot prompting reaches 16.1 percentage points. This indicates that the few-shot demos may require careful curation for the model to achieve good performance. On the other hand, with tool docs, zero-shot prompting can already achieve decent performance compared to only using few-shot demos.
Table 5: Comparing zero-shot prompting with tool docs to existing baseline methods on NLVRv2. Zero-shot methods | 2308.00675#79 | Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models | Today, large language models (LLMs) are taught to use new tools by providing
a few demonstrations of the tool's usage. Unfortunately, demonstrations are
hard to acquire, and can result in undesirable biased usage if the wrong
demonstration is chosen. Even in the rare scenario that demonstrations are
readily available, there is no principled selection protocol to determine how
many and which ones to provide. As tasks grow more complex, the selection
search grows combinatorially and invariably becomes intractable. Our work
provides an alternative to demonstrations: tool documentation. We advocate the
use of tool documentation, descriptions for the individual tool usage, over
demonstrations. We substantiate our claim through three main empirical findings
on 6 tasks across both vision and language modalities. First, on existing
benchmarks, zero-shot prompts with only tool documentation are sufficient for
eliciting proper tool usage, achieving performance on par with few-shot
prompts. Second, on a newly collected realistic tool-use dataset with hundreds
of available tool APIs, we show that tool documentation is significantly more
valuable than demonstrations, with zero-shot documentation significantly
outperforming few-shot without documentation. Third, we highlight the benefits
of tool documentations by tackling image generation and video tracking using
just-released unseen state-of-the-art models as tools. Finally, we highlight
the possibility of using tool documentation to automatically enable new
applications: by using nothing more than the documentation of GroundingDino,
Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the
just-released Grounded-SAM and Track Anything models. | http://arxiv.org/pdf/2308.00675 | Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister | cs.CL, cs.AI, cs.CV, cs.LG | null | null | cs.CL | 20230801 | 20230801 | [
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2204.10878"
},
{
"id": "1910.08210"
},
{
"id": "2107.07653"
},
{
"id": "2201.11903"
},
{
"id": "2305.17126"
},
{
"id": "1704.07535"
},
{
"id": "2205.01068"
},
{
"id": "2203.05115"
},
{
"id": "2303.08128"
},
{
"id": "2303.03378"
},
{
"id": "2305.04091"
},
{
"id": "2303.05499"
},
{
"id": "2107.07566"
},
{
"id": "2110.14168"
},
{
"id": "2210.03350"
},
{
"id": "2303.11381"
},
{
"id": "2101.06804"
},
{
"id": "2304.08354"
},
{
"id": "2212.14024"
},
{
"id": "2305.18752"
},
{
"id": "2211.10435"
},
{
"id": "2303.04671"
},
{
"id": "2210.12810"
},
{
"id": "1808.09588"
},
{
"id": "2304.11062"
},
{
"id": "2210.03629"
},
{
"id": "2303.05398"
},
{
"id": "2210.02406"
},
{
"id": "2212.10560"
},
{
"id": "2303.04129"
},
{
"id": "1704.01696"
},
{
"id": "2302.00923"
},
{
"id": "2211.12588"
},
{
"id": "1908.03557"
},
{
"id": "2210.05359"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2201.08239"
},
{
"id": "2303.16199"
},
{
"id": "2304.09842"
},
{
"id": "2204.01691"
},
{
"id": "2305.01625"
},
{
"id": "2303.12712"
},
{
"id": "2207.05608"
},
{
"id": "2303.03846"
},
{
"id": "2211.11559"
},
{
"id": "2207.01206"
},
{
"id": "2303.09014"
},
{
"id": "2205.12255"
},
{
"id": "2208.03188"
}
] |
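The chunk above attributes VisProg's large performance swings (e.g., a 16.1-point standard deviation at 2 shots) to which few-shot demos happen to be sampled. A minimal sketch of that measurement protocol is below; the evaluate function and demo pool are placeholders, not the actual benchmark harness.

# Minimal sketch (illustrative): estimate mean and standard deviation of accuracy
# over random few-shot demonstration selections.
import random
import statistics

def evaluate(selected_demos):
    # Placeholder: prompt the model with `selected_demos` and return its accuracy.
    random.seed(hash(tuple(selected_demos)) % (2**32))
    return random.uniform(0.3, 0.7)

def demo_selection_variance(demo_pool, k, trials=5):
    scores = [evaluate(random.sample(demo_pool, k)) for _ in range(trials)]
    return statistics.mean(scores), statistics.stdev(scores)

if __name__ == "__main__":
    pool = [f"demo_{i}" for i in range(20)]
    mean_acc, std_acc = demo_selection_variance(pool, k=2)
    print(f"2-shot accuracy: {mean_acc:.3f} +/- {std_acc:.3f}")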
2308.01423 | 79 | 55
What materials show a high level of N2 diffusivity under conditions of 298K and 1 bar pressure? Is the bandgap of XEGKUR_clean higher or lower than 1.5 eV? At a temperature of 298K and a pressure of 1 bar, how does the O2 uptake of DUVNIS01_charged compare with others? Under standard ambient temperature (298K), what is the CO2 Henry coefficient for XEGKUR_clean? At 77K and 100 bar, can DUVNIS01_charged take up a significant volume of hydrogen? Considering conditions of 298K and 1 bar, how much N2 can IZEHAX_clean hold? At 77 K and dilute system, what is the hydrogen diffusivity for JALCAD_clean? Can COKMUM_clean remain stable when the solvent is removed? Which materials exhibit the best O2 diffusivity at 298K and dilute system? How high is the bandgap for XEGKUR_clean? What is the thermal stability of DUVNIS01_charged? Under a temperature of 298K and pressure of 1 bar, how high is the O2 uptake for IZEHAX_clean? At 298K and standard pressure, what materials have the highest N2 uptake?
Can you list the top materials that demonstrate thermal stability? | 2308.01423#79 | ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks | ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to
predict and generate metal-organic frameworks (MOFs). By leveraging a
large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key
details from textual inputs and delivers appropriate responses, thus
eliminating the necessity for rigid structured queries. The system is comprised
of three core components (i.e. an agent, a toolkit, and an evaluator) and it
forms a robust pipeline that manages a variety of tasks, including data
retrieval, property prediction, and structure generations. The study further
explores the merits and constraints of using large language models (LLMs) AI
system in material sciences and showcases its transformative potential
for future advancements. | http://arxiv.org/pdf/2308.01423 | Yeonghun Kang, Jihan Kim | cs.CL, cs.AI, cs.LG, physics.chem-ph | null | null | cs.CL | 20230801 | 20230825 | [
{
"id": "2302.13971"
},
{
"id": "2306.11296"
},
{
"id": "2303.17580"
},
{
"id": "2305.18365"
},
{
"id": "2305.10601"
},
{
"id": "1810.04805"
},
{
"id": "2211.09085"
},
{
"id": "2304.05376"
},
{
"id": "2212.05238"
},
{
"id": "2108.07258"
},
{
"id": "2110.06197"
},
{
"id": "2306.06283"
},
{
"id": "2008.02275"
},
{
"id": "2303.12712"
},
{
"id": "2210.03629"
},
{
"id": "2205.00445"
},
{
"id": "2009.03300"
}
] |
2308.00245 | 80 | [41] Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, Yifan Du, Chen Yang, Yushuo Chen, Zhipeng Chen, Jinhao Jiang, Ruiyang Ren, Yifan Li, Xinyu Tang, Zikang Liu, Peiyu Liu, Jian-Yun Nie, and Ji-Rong Wen. 2023. A Survey of Large Language Models. arXiv:2303.18223 [cs.CL]
[42] Shen Zheng, Jie Huang, and Kevin Chen-Chuan Chang. 2023. Why Does ChatGPT Fall Short in Providing Truthful Answers? http://arxiv.org/abs/2304.10513 arXiv:2304.10513 [cs]. | 2308.00245#80 | The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models | Static analysis is a widely used technique in software engineering for
identifying and mitigating bugs. However, a significant hurdle lies in
achieving a delicate balance between precision and scalability. Large Language
Models (LLMs) offer a promising alternative, as recent advances demonstrate
remarkable capabilities in comprehending, generating, and even debugging code.
Yet, the logic of bugs can be complex and require sophisticated reasoning and a
large analysis scope spanning multiple functions. Therefore, at this point,
LLMs are better used in an assistive role to complement static analysis. In
this paper, we take a deep dive into the open space of LLM-assisted static
analysis, using use-before-initialization (UBI) bugs as a case study. To this
end, we develop LLift, a fully automated framework that interfaces with both a
static analysis tool and an LLM. By carefully designing the framework and the
prompts, we are able to overcome a number of challenges, including bug-specific
modeling, the large problem scope, the non-deterministic nature of LLMs, etc.
Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs
produced by static analysis, LLift demonstrates a potent capability, showcasing
a reasonable precision (50%) and appearing to have no missing bugs. It even
identified 13 previously unknown UBI bugs in the Linux kernel. This research
paves the way for new opportunities and methodologies in using LLMs for bug
discovery in extensive, real-world datasets. | http://arxiv.org/pdf/2308.00245 | Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian | cs.SE, cs.AI | null | null | cs.SE | 20230801 | 20231115 | [
{
"id": "2305.10601"
},
{
"id": "2107.03374"
},
{
"id": "2210.11610"
},
{
"id": "2305.16291"
},
{
"id": "2305.16151"
},
{
"id": "2303.18223"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "2203.02155"
},
{
"id": "2304.11938"
},
{
"id": "2304.03262"
},
{
"id": "2304.10513"
},
{
"id": "2201.11903"
},
{
"id": "2305.12138"
},
{
"id": "2305.12865"
},
{
"id": "2303.08774"
},
{
"id": "2306.01987"
},
{
"id": "2304.06815"
}
] |
2308.00352 | 80 | [Rotated-table text garbled by PDF extraction (continuation of the SoftwareDev statistics table). Recoverable fragments: per-task cost-of-revision notes such as '1. TypeError', '1. PNG file missing', '2. compile bug fixes', and an '@app.route('/')' snippet, alongside prompt/completion token counts, time costs, and dollar costs; individual values are not reliably recoverable.] | 2308.00352#80 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | Remarkable progress has been made on automated problem solving through
societies of agents based on large language models (LLMs). Existing LLM-based
multi-agent systems can already solve simple dialogue tasks. Solutions to more
complex tasks, however, are complicated through logic inconsistencies due to
cascading hallucinations caused by naively chaining LLMs. Here we introduce
MetaGPT, an innovative meta-programming framework incorporating efficient human
workflows into LLM-based multi-agent collaborations. MetaGPT encodes
Standardized Operating Procedures (SOPs) into prompt sequences for more
streamlined workflows, thus allowing agents with human-like domain expertise to
verify intermediate results and reduce errors. MetaGPT utilizes an assembly
line paradigm to assign diverse roles to various agents, efficiently breaking
down complex tasks into subtasks involving many agents working together. On
collaborative software engineering benchmarks, MetaGPT generates more coherent
solutions than previous chat-based multi-agent systems. Our project can be
found at https://github.com/geekan/MetaGPT | http://arxiv.org/pdf/2308.00352 | Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, Jürgen Schmidhuber | cs.AI, cs.MA | null | null | cs.AI | 20230801 | 20231106 | [
{
"id": "2308.12950"
},
{
"id": "2305.17066"
},
{
"id": "1511.09249"
},
{
"id": "2308.11432"
},
{
"id": "2306.08568"
},
{
"id": "2310.02304"
},
{
"id": "2303.08896"
},
{
"id": "2204.05999"
},
{
"id": "2309.16797"
},
{
"id": "2002.08155"
},
{
"id": "2305.16960"
}
] |
2308.00675 | 80 | Table 5: Comparing zero-shot prompting with tool docs to existing baseline methods on NLVRv2. Zero-shot methods
Benchmark: NLVRv2. Finetuned methods: ViLT 76.30. Few-shot methods: VisProg (0-shot) 0, VisProg (2-shot) 43.1 ± 16.1, VisProg (4-shot) 66.5 ± 1.4, VisProg (12-shot) 69.1 ± 0.1. Zero-shot with docs: 63.4.
LLM Cloud-CLI. In Table 6, we present the results on LLM-Cloud CLI with different underlying LLM planners. On both text-davinci-002 and gpt-3.5-turbo, when there is a large number of tools, we see documentation is much more important than few-shot demonstrations, where zero-shot with docs achieves significantly better performance than few-shot without docs. Additionally, when provided with docs, the LLMs are able to figure out how to use the tools without the need of demonstrations. | 2308.00675#80 | Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models | Today, large language models (LLMs) are taught to use new tools by providing
a few demonstrations of the tool's usage. Unfortunately, demonstrations are
hard to acquire, and can result in undesirable biased usage if the wrong
demonstration is chosen. Even in the rare scenario that demonstrations are
readily available, there is no principled selection protocol to determine how
many and which ones to provide. As tasks grow more complex, the selection
search grows combinatorially and invariably becomes intractable. Our work
provides an alternative to demonstrations: tool documentation. We advocate the
use of tool documentation, descriptions for the individual tool usage, over
demonstrations. We substantiate our claim through three main empirical findings
on 6 tasks across both vision and language modalities. First, on existing
benchmarks, zero-shot prompts with only tool documentation are sufficient for
eliciting proper tool usage, achieving performance on par with few-shot
prompts. Second, on a newly collected realistic tool-use dataset with hundreds
of available tool APIs, we show that tool documentation is significantly more
valuable than demonstrations, with zero-shot documentation significantly
outperforming few-shot without documentation. Third, we highlight the benefits
of tool documentations by tackling image generation and video tracking using
just-released unseen state-of-the-art models as tools. Finally, we highlight
the possibility of using tool documentation to automatically enable new
applications: by using nothing more than the documentation of GroundingDino,
Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the
just-released Grounded-SAM and Track Anything models. | http://arxiv.org/pdf/2308.00675 | Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister | cs.CL, cs.AI, cs.CV, cs.LG | null | null | cs.CL | 20230801 | 20230801 | [
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2204.10878"
},
{
"id": "1910.08210"
},
{
"id": "2107.07653"
},
{
"id": "2201.11903"
},
{
"id": "2305.17126"
},
{
"id": "1704.07535"
},
{
"id": "2205.01068"
},
{
"id": "2203.05115"
},
{
"id": "2303.08128"
},
{
"id": "2303.03378"
},
{
"id": "2305.04091"
},
{
"id": "2303.05499"
},
{
"id": "2107.07566"
},
{
"id": "2110.14168"
},
{
"id": "2210.03350"
},
{
"id": "2303.11381"
},
{
"id": "2101.06804"
},
{
"id": "2304.08354"
},
{
"id": "2212.14024"
},
{
"id": "2305.18752"
},
{
"id": "2211.10435"
},
{
"id": "2303.04671"
},
{
"id": "2210.12810"
},
{
"id": "1808.09588"
},
{
"id": "2304.11062"
},
{
"id": "2210.03629"
},
{
"id": "2303.05398"
},
{
"id": "2210.02406"
},
{
"id": "2212.10560"
},
{
"id": "2303.04129"
},
{
"id": "1704.01696"
},
{
"id": "2302.00923"
},
{
"id": "2211.12588"
},
{
"id": "1908.03557"
},
{
"id": "2210.05359"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2201.08239"
},
{
"id": "2303.16199"
},
{
"id": "2304.09842"
},
{
"id": "2204.01691"
},
{
"id": "2305.01625"
},
{
"id": "2303.12712"
},
{
"id": "2207.05608"
},
{
"id": "2303.03846"
},
{
"id": "2211.11559"
},
{
"id": "2207.01206"
},
{
"id": "2303.09014"
},
{
"id": "2205.12255"
},
{
"id": "2208.03188"
}
] |
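The ablation described in the chunk above crosses the number of demonstrations (0/5/10/15) with the presence or absence of tool documentation. A minimal sketch of how such prompt variants could be assembled follows; the documentation string, demo pool, and build_prompt helper are illustrative assumptions, not the paper's setup.

# Minimal sketch (illustrative): assemble prompt variants for a grid over the
# number of demonstrations and the presence of tool documentation.
from itertools import product

def build_prompt(task, demos, docs):
    parts = []
    if docs:
        parts.append("Tool documentation:\n" + docs)
    if demos:
        parts.append("Examples:\n" + "\n".join(demos))
    parts.append("Task: " + task)
    return "\n\n".join(parts)

if __name__ == "__main__":
    demo_pool = [f"request {i} -> cloud-cli command {i}" for i in range(15)]
    docs = "cloud-cli compute instances create NAME --zone=ZONE ...  (illustrative)"
    for n_demos, use_docs in product([0, 5, 10, 15], [False, True]):
        prompt = build_prompt("Create a VM named web-1 in zone us-east1-b.",
                              demo_pool[:n_demos], docs if use_docs else None)
        print(f"demos={n_demos:2d} docs={use_docs}: {len(prompt)} prompt characters")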
2308.00352 | 81 | [Rotated-table text garbled by PDF extraction (continuation of the SoftwareDev statistics table). Recoverable fragments: per-task revision causes such as missing PNG files, compile bug fixes, an uninitialized pygame surface, and ModuleNotFoundError/dependency errors, alongside token counts, time costs, and dollar costs; individual values are not reliably recoverable.] | 2308.00352#81 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | Remarkable progress has been made on automated problem solving through
societies of agents based on large language models (LLMs). Existing LLM-based
multi-agent systems can already solve simple dialogue tasks. Solutions to more
complex tasks, however, are complicated through logic inconsistencies due to
cascading hallucinations caused by naively chaining LLMs. Here we introduce
MetaGPT, an innovative meta-programming framework incorporating efficient human
workflows into LLM-based multi-agent collaborations. MetaGPT encodes
Standardized Operating Procedures (SOPs) into prompt sequences for more
streamlined workflows, thus allowing agents with human-like domain expertise to
verify intermediate results and reduce errors. MetaGPT utilizes an assembly
line paradigm to assign diverse roles to various agents, efficiently breaking
down complex tasks into subtasks involving many agents working together. On
collaborative software engineering benchmarks, MetaGPT generates more coherent
solutions than previous chat-based multi-agent systems. Our project can be
found at https://github.com/geekan/MetaGPT | http://arxiv.org/pdf/2308.00352 | Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, Jürgen Schmidhuber | cs.AI, cs.MA | null | null | cs.AI | 20230801 | 20231106 | [
{
"id": "2308.12950"
},
{
"id": "2305.17066"
},
{
"id": "1511.09249"
},
{
"id": "2308.11432"
},
{
"id": "2306.08568"
},
{
"id": "2310.02304"
},
{
"id": "2303.08896"
},
{
"id": "2204.05999"
},
{
"id": "2309.16797"
},
{
"id": "2002.08155"
},
{
"id": "2305.16960"
}
] |
Table 6: Results on the LLM-Cloud CLI (F1 by LLM, number of demos, and documentation).
text-davinci-002, without docs: 0 demos 0.02; 5 demos 0.02 ± 0.02 (0.05); 10 demos 0.05 ± 0.02 (0.11); 15 demos 0.05 ± 0.05 (0.1). text-davinci-002, with docs: 5 demos 0.35 ± 0.02 (0.37); 10 demos 0.35 ± 0.02 (0.37); 15 demos 0.35 ± 0.01 (0.37); 0 demos 0.37.
gpt-3.5-turbo, without docs: 0 demos 0.13; 5 demos 0.18 ± 0.06 (0.21); 10 demos 0.19 ± 0.06 (0.23); 15 demos 0.18 ± 0.06 (0.22). gpt-3.5-turbo, with docs: 5 demos 0.44 ± 0.02 (0.47); 10 demos 0.44 ± 0.02 (0.48); 15 demos 0.42 ± 0.04 (0.49); 0 demos 0.45.
Image editing. We provide more image editing examples achieved by zero-shot prompting with tool docs in Figure 12. In particular, we show that with tool docs, we are able to reproduce the image editing examples achieved by VisProg [19] without using any few-shot demos, wherein VisProg relies on 10 task-specific few-shot demos.
22 | 2308.00675#81 | Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models | Today, large language models (LLMs) are taught to use new tools by providing
a few demonstrations of the tool's usage. Unfortunately, demonstrations are
hard to acquire, and can result in undesirable biased usage if the wrong
demonstration is chosen. Even in the rare scenario that demonstrations are
readily available, there is no principled selection protocol to determine how
many and which ones to provide. As tasks grow more complex, the selection
search grows combinatorially and invariably becomes intractable. Our work
provides an alternative to demonstrations: tool documentation. We advocate the
use of tool documentation, descriptions for the individual tool usage, over
demonstrations. We substantiate our claim through three main empirical findings
on 6 tasks across both vision and language modalities. First, on existing
benchmarks, zero-shot prompts with only tool documentation are sufficient for
eliciting proper tool usage, achieving performance on par with few-shot
prompts. Second, on a newly collected realistic tool-use dataset with hundreds
of available tool APIs, we show that tool documentation is significantly more
valuable than demonstrations, with zero-shot documentation significantly
outperforming few-shot without documentation. Third, we highlight the benefits
of tool documentations by tackling image generation and video tracking using
just-released unseen state-of-the-art models as tools. Finally, we highlight
the possibility of using tool documentation to automatically enable new
applications: by using nothing more than the documentation of GroundingDino,
Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the
just-released Grounded-SAM and Track Anything models. | http://arxiv.org/pdf/2308.00675 | Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister | cs.CL, cs.AI, cs.CV, cs.LG | null | null | cs.CL | 20230801 | 20230801 | [
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2204.10878"
},
{
"id": "1910.08210"
},
{
"id": "2107.07653"
},
{
"id": "2201.11903"
},
{
"id": "2305.17126"
},
{
"id": "1704.07535"
},
{
"id": "2205.01068"
},
{
"id": "2203.05115"
},
{
"id": "2303.08128"
},
{
"id": "2303.03378"
},
{
"id": "2305.04091"
},
{
"id": "2303.05499"
},
{
"id": "2107.07566"
},
{
"id": "2110.14168"
},
{
"id": "2210.03350"
},
{
"id": "2303.11381"
},
{
"id": "2101.06804"
},
{
"id": "2304.08354"
},
{
"id": "2212.14024"
},
{
"id": "2305.18752"
},
{
"id": "2211.10435"
},
{
"id": "2303.04671"
},
{
"id": "2210.12810"
},
{
"id": "1808.09588"
},
{
"id": "2304.11062"
},
{
"id": "2210.03629"
},
{
"id": "2303.05398"
},
{
"id": "2210.02406"
},
{
"id": "2212.10560"
},
{
"id": "2303.04129"
},
{
"id": "1704.01696"
},
{
"id": "2302.00923"
},
{
"id": "2211.12588"
},
{
"id": "1908.03557"
},
{
"id": "2210.05359"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2201.08239"
},
{
"id": "2303.16199"
},
{
"id": "2304.09842"
},
{
"id": "2204.01691"
},
{
"id": "2305.01625"
},
{
"id": "2303.12712"
},
{
"id": "2207.05608"
},
{
"id": "2303.03846"
},
{
"id": "2211.11559"
},
{
"id": "2207.01206"
},
{
"id": "2303.09014"
},
{
"id": "2205.12255"
},
{
"id": "2208.03188"
}
] |
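Table 6 in the chunk above reports F1 scores for predicted CLI commands. The sketch below computes a set-based F1 between predicted and reference command strings; treating whole commands as the unit of comparison is an assumption for illustration, not necessarily the benchmark's official scorer.

# Minimal sketch (illustrative): set-based precision/recall/F1 over predicted vs.
# reference CLI commands.

def command_f1(predicted, reference):
    pred, ref = set(predicted), set(reference)
    if not pred or not ref:
        return 0.0
    tp = len(pred & ref)
    if tp == 0:
        return 0.0
    precision = tp / len(pred)
    recall = tp / len(ref)
    return 2 * precision * recall / (precision + recall)

if __name__ == "__main__":
    pred = ["gcloud compute instances list", "gcloud compute ssh web-1"]
    ref = ["gcloud compute instances list", "gcloud compute instances describe web-1"]
    print(f"F1 = {command_f1(pred, ref):.2f}")  # 0.50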
2308.01423 | 81 | What materials are thermally stable?
For a metal-organic framework at 298K and 1 bar, what is the CO2 Henry coefficient for KUGQIN_clean?
What materials have a high bandgap? Considering 77K and 100bar conditions, how high is the hydrogen uptake of XEGKUR_clean? Under ambient conditions of 298K and 1 bar, what is the N2 uptake for DUVNIS01_charged? Can OCUVUF_clean remove solvents without compromising its stability? At a standardized temperature of 77K and dilute system, how does XAHWAG_clean's hydrogen diffusivity measure? What is the O2 uptake of GUCJAQ_clean at the typical conditions of 298K and 1 bar? Can GAJTUI_clean take up a significant volume of N2 under conditions of 298K and 1 bar? How stable is JEDJUY_clean at elevated temperatures? At standard ambient conditions, which materials exhibit a high CO2 Henry coefficient? Under the set conditions of 77K and dilute, how high is the hydrogen diffusivity of XEGKUR_clean? What is the solvent removal stability of DUVNIS01_charged? At 298K and a infinity dilute condition, how does OCUVUF_clean's N2 diffusivity compare to other materials? Which materials show a pronounced hydrogen uptake at 77K and 100 bar? | 2308.01423#81 | ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks | ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to
predict and generate metal-organic frameworks (MOFs). By leveraging a
large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key
details from textual inputs and delivers appropriate responses, thus
eliminating the necessity for rigid structured queries. The system is comprised
of three core components (i.e. an agent, a toolkit, and an evaluator) and it
forms a robust pipeline that manages a variety of tasks, including data
retrieval, property prediction, and structure generations. The study further
explores the merits and constraints of using large language models (LLMs) AI
system in material sciences and showcases its transformative potential
for future advancements. | http://arxiv.org/pdf/2308.01423 | Yeonghun Kang, Jihan Kim | cs.CL, cs.AI, cs.LG, physics.chem-ph | null | null | cs.CL | 20230801 | 20230825 | [
{
"id": "2302.13971"
},
{
"id": "2306.11296"
},
{
"id": "2303.17580"
},
{
"id": "2305.18365"
},
{
"id": "2305.10601"
},
{
"id": "1810.04805"
},
{
"id": "2211.09085"
},
{
"id": "2304.05376"
},
{
"id": "2212.05238"
},
{
"id": "2108.07258"
},
{
"id": "2110.06197"
},
{
"id": "2306.06283"
},
{
"id": "2008.02275"
},
{
"id": "2303.12712"
},
{
"id": "2210.03629"
},
{
"id": "2205.00445"
},
{
"id": "2009.03300"
}
] |
2308.00352 | 82 | [Rotated-table text garbled by PDF extraction (continuation of the SoftwareDev statistics table). Recoverable fragments: per-task revision causes such as tensorflow version errors, an unimplemented model training method, dependency errors, and a URL 403 error, alongside token counts, time costs, and dollar costs; individual values are not reliably recoverable.] | 2308.00352#82 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | Remarkable progress has been made on automated problem solving through
societies of agents based on large language models (LLMs). Existing LLM-based
multi-agent systems can already solve simple dialogue tasks. Solutions to more
complex tasks, however, are complicated through logic inconsistencies due to
cascading hallucinations caused by naively chaining LLMs. Here we introduce
MetaGPT, an innovative meta-programming framework incorporating efficient human
workflows into LLM-based multi-agent collaborations. MetaGPT encodes
Standardized Operating Procedures (SOPs) into prompt sequences for more
streamlined workflows, thus allowing agents with human-like domain expertise to
verify intermediate results and reduce errors. MetaGPT utilizes an assembly
line paradigm to assign diverse roles to various agents, efficiently breaking
down complex tasks into subtasks involving many agents working together. On
collaborative software engineering benchmarks, MetaGPT generates more coherent
solutions than previous chat-based multi-agent systems. Our project can be
found at https://github.com/geekan/MetaGPT | http://arxiv.org/pdf/2308.00352 | Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, Jürgen Schmidhuber | cs.AI, cs.MA | null | null | cs.AI | 20230801 | 20231106 | [
{
"id": "2308.12950"
},
{
"id": "2305.17066"
},
{
"id": "1511.09249"
},
{
"id": "2308.11432"
},
{
"id": "2306.08568"
},
{
"id": "2310.02304"
},
{
"id": "2303.08896"
},
{
"id": "2204.05999"
},
{
"id": "2309.16797"
},
{
"id": "2002.08155"
},
{
"id": "2305.16960"
}
] |
2308.00675 | 82 | Instruction: Replace man in black henley (person) with brick wall Program: OBJ0=SEG(image=IMAGE) OBJ1=SELECT(image=IMAGE,object=OBJ0,query='man wearing a black henley',category='person') IMAGE0=REPLACE(image=IMAGE,object=OBJ1,prompt='brick wall') FINAL_ANSWER=RESULT(var=IMAGE0) Instruction: Hide Salman and Aamir's faces with :p's, Shahrukh's faces with 8) and Hritik's with ;) Program: OBJ0=FACEDET(image=IMAGE) OBJ1=SELECT(image=IMAGE,object=OBJ0,query='Salman Khan,Aamir Khan,Shahrukh Khan,Hritik Roshan',category='face') OBJ2=SELECT(image=IMAGE,object=OBJ1,query='Salman Khan,Aamir Khan',category='face') OBJ3=SELECT(image=IMAGE,object=OBJ1,query='Shahrukh Khan',category='face') | 2308.00675#82 | Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models | Today, large language models (LLMs) are taught to use new tools by providing
a few demonstrations of the tool's usage. Unfortunately, demonstrations are
hard to acquire, and can result in undesirable biased usage if the wrong
demonstration is chosen. Even in the rare scenario that demonstrations are
readily available, there is no principled selection protocol to determine how
many and which ones to provide. As tasks grow more complex, the selection
search grows combinatorially and invariably becomes intractable. Our work
provides an alternative to demonstrations: tool documentation. We advocate the
use of tool documentation, descriptions for the individual tool usage, over
demonstrations. We substantiate our claim through three main empirical findings
on 6 tasks across both vision and language modalities. First, on existing
benchmarks, zero-shot prompts with only tool documentation are sufficient for
eliciting proper tool usage, achieving performance on par with few-shot
prompts. Second, on a newly collected realistic tool-use dataset with hundreds
of available tool APIs, we show that tool documentation is significantly more
valuable than demonstrations, with zero-shot documentation significantly
outperforming few-shot without documentation. Third, we highlight the benefits
of tool documentations by tackling image generation and video tracking using
just-released unseen state-of-the-art models as tools. Finally, we highlight
the possibility of using tool documentation to automatically enable new
applications: by using nothing more than the documentation of GroundingDino,
Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the
just-released Grounded-SAM and Track Anything models. | http://arxiv.org/pdf/2308.00675 | Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister | cs.CL, cs.AI, cs.CV, cs.LG | null | null | cs.CL | 20230801 | 20230801 | [
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2204.10878"
},
{
"id": "1910.08210"
},
{
"id": "2107.07653"
},
{
"id": "2201.11903"
},
{
"id": "2305.17126"
},
{
"id": "1704.07535"
},
{
"id": "2205.01068"
},
{
"id": "2203.05115"
},
{
"id": "2303.08128"
},
{
"id": "2303.03378"
},
{
"id": "2305.04091"
},
{
"id": "2303.05499"
},
{
"id": "2107.07566"
},
{
"id": "2110.14168"
},
{
"id": "2210.03350"
},
{
"id": "2303.11381"
},
{
"id": "2101.06804"
},
{
"id": "2304.08354"
},
{
"id": "2212.14024"
},
{
"id": "2305.18752"
},
{
"id": "2211.10435"
},
{
"id": "2303.04671"
},
{
"id": "2210.12810"
},
{
"id": "1808.09588"
},
{
"id": "2304.11062"
},
{
"id": "2210.03629"
},
{
"id": "2303.05398"
},
{
"id": "2210.02406"
},
{
"id": "2212.10560"
},
{
"id": "2303.04129"
},
{
"id": "1704.01696"
},
{
"id": "2302.00923"
},
{
"id": "2211.12588"
},
{
"id": "1908.03557"
},
{
"id": "2210.05359"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2201.08239"
},
{
"id": "2303.16199"
},
{
"id": "2304.09842"
},
{
"id": "2204.01691"
},
{
"id": "2305.01625"
},
{
"id": "2303.12712"
},
{
"id": "2207.05608"
},
{
"id": "2303.03846"
},
{
"id": "2211.11559"
},
{
"id": "2207.01206"
},
{
"id": "2303.09014"
},
{
"id": "2205.12255"
},
{
"id": "2208.03188"
}
] |
2308.01423 | 82 | Can XEGKUR_clean stably take up a large volume of O2 when examined at 298K and 1 bar?
What is the thermal stability of DUVNIS01_charged?
At standard conditions (298K and 1 bar), which materials are known for high O2 uptake?
Can XEGKUR_clean stably remove solvents?
How high is the bandgap of DUVNIS01_charged? Which materials exhibit the highest N2 diffusivity at a temperature of 298K and infinity dilute condition? Under conditions of 298K and 1 bar, how elevated is the N2 uptake for XEGKUR_clean? At the standardized 298K and infinity dilute system, what is the O2 diffusivity of DUVNIS01_charged? Under standard conditions, how high is the CO2 Henry coefficient for IZEHAX_clean? At 77K and 100 bar pressure, can JALCAD_clean stably accommodate a large volume of hydrogen? Under the given conditions of 77K and dilute system, which materials have the highest hydrogen diffusivity? Is XEGKUR_clean thermally stable? Can you provide a list of materials with a bandgap exceeding 2 eV?
Table S2. 100 questions in prediction task.
58 | 2308.01423#82 | ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks | ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to
predict and generate metal-organic frameworks (MOFs). By leveraging a
large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key
details from textual inputs and delivers appropriate responses, thus
eliminating the necessity for rigid structured queries. The system is comprised
of three core components (i.e. an agent, a toolkit, and an evaluator) and it
forms a robust pipeline that manages a variety of tasks, including data
retrieval, property prediction, and structure generations. The study further
explores the merits and constraints of using large language models (LLMs) AI
system in material sciences and showcases its transformative potential
for future advancements. | http://arxiv.org/pdf/2308.01423 | Yeonghun Kang, Jihan Kim | cs.CL, cs.AI, cs.LG, physics.chem-ph | null | null | cs.CL | 20230801 | 20230825 | [
{
"id": "2302.13971"
},
{
"id": "2306.11296"
},
{
"id": "2303.17580"
},
{
"id": "2305.18365"
},
{
"id": "2305.10601"
},
{
"id": "1810.04805"
},
{
"id": "2211.09085"
},
{
"id": "2304.05376"
},
{
"id": "2212.05238"
},
{
"id": "2108.07258"
},
{
"id": "2110.06197"
},
{
"id": "2306.06283"
},
{
"id": "2008.02275"
},
{
"id": "2303.12712"
},
{
"id": "2210.03629"
},
{
"id": "2205.00445"
},
{
"id": "2009.03300"
}
] |
2308.00675 | 83 | OBJ3=SELECT(image=IMAGE,object=OBJ1,query='Shahrukh Khan',category='face') OBJ4=SELECT(image=IMAGE,object=OBJ1,query='Hritik Roshan',category='face') IMAGE0=EMOJI(image=IMAGE,object=OBJ2,emoji='face_with_tongue') IMAGE1=EMOJI(image=IMAGE0,object=OBJ3,emoji='smiling_face_with_sunglasses') IMAGE2=EMOJI(image=IMAGE1,object=OBJ4,emoji='winking_face') FINAL_ANSWER=RESULT(var=IMAGE2) Instruction: Create a colorpop of the man in black henley and also blur the background Program: OBJ0=SEG(image=IMAGE) OBJ1=SELECT(image=IMAGE,object=OBJ0,query='man wearing black henley',category='person') IMAGE0=COLORPOP(image=IMAGE,object=OBJ1) IMAGE1=BGBLUR(image=IMAGE0,object=OBJ1) FINAL_ANSWER=RESULT(var=IMAGE1) | 2308.00675#83 | Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models | Today, large language models (LLMs) are taught to use new tools by providing
a few demonstrations of the tool's usage. Unfortunately, demonstrations are
hard to acquire, and can result in undesirable biased usage if the wrong
demonstration is chosen. Even in the rare scenario that demonstrations are
readily available, there is no principled selection protocol to determine how
many and which ones to provide. As tasks grow more complex, the selection
search grows combinatorially and invariably becomes intractable. Our work
provides an alternative to demonstrations: tool documentation. We advocate the
use of tool documentation, descriptions for the individual tool usage, over
demonstrations. We substantiate our claim through three main empirical findings
on 6 tasks across both vision and language modalities. First, on existing
benchmarks, zero-shot prompts with only tool documentation are sufficient for
eliciting proper tool usage, achieving performance on par with few-shot
prompts. Second, on a newly collected realistic tool-use dataset with hundreds
of available tool APIs, we show that tool documentation is significantly more
valuable than demonstrations, with zero-shot documentation significantly
outperforming few-shot without documentation. Third, we highlight the benefits
of tool documentations by tackling image generation and video tracking using
just-released unseen state-of-the-art models as tools. Finally, we highlight
the possibility of using tool documentation to automatically enable new
applications: by using nothing more than the documentation of GroundingDino,
Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the
just-released Grounded-SAM and Track Anything models. | http://arxiv.org/pdf/2308.00675 | Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister | cs.CL, cs.AI, cs.CV, cs.LG | null | null | cs.CL | 20230801 | 20230801 | [
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2204.10878"
},
{
"id": "1910.08210"
},
{
"id": "2107.07653"
},
{
"id": "2201.11903"
},
{
"id": "2305.17126"
},
{
"id": "1704.07535"
},
{
"id": "2205.01068"
},
{
"id": "2203.05115"
},
{
"id": "2303.08128"
},
{
"id": "2303.03378"
},
{
"id": "2305.04091"
},
{
"id": "2303.05499"
},
{
"id": "2107.07566"
},
{
"id": "2110.14168"
},
{
"id": "2210.03350"
},
{
"id": "2303.11381"
},
{
"id": "2101.06804"
},
{
"id": "2304.08354"
},
{
"id": "2212.14024"
},
{
"id": "2305.18752"
},
{
"id": "2211.10435"
},
{
"id": "2303.04671"
},
{
"id": "2210.12810"
},
{
"id": "1808.09588"
},
{
"id": "2304.11062"
},
{
"id": "2210.03629"
},
{
"id": "2303.05398"
},
{
"id": "2210.02406"
},
{
"id": "2212.10560"
},
{
"id": "2303.04129"
},
{
"id": "1704.01696"
},
{
"id": "2302.00923"
},
{
"id": "2211.12588"
},
{
"id": "1908.03557"
},
{
"id": "2210.05359"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2201.08239"
},
{
"id": "2303.16199"
},
{
"id": "2304.09842"
},
{
"id": "2204.01691"
},
{
"id": "2305.01625"
},
{
"id": "2303.12712"
},
{
"id": "2207.05608"
},
{
"id": "2303.03846"
},
{
"id": "2211.11559"
},
{
"id": "2207.01206"
},
{
"id": "2303.09014"
},
{
"id": "2205.12255"
},
{
"id": "2208.03188"
}
] |
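The visual programs in the two chunks above are sequences of single-assignment module calls (SEG, SELECT, COLORPOP, BGBLUR, EMOJI, RESULT, ...). The following is a minimal sketch of parsing and dispatching such a program to stub modules; the stub implementations and the execute helper are illustrative assumptions, not the actual VisProg or tool backends.

# Minimal sketch (illustrative): parse and execute a VisProg-style program of the
# form VAR=MODULE(arg=VALUE, ...). Module implementations are stubs.
import re

LINE = re.compile(r"^(\w+)=(\w+)\((.*)\)$")

def stub(module_name):
    def run(**kwargs):
        return f"<{module_name} output of {kwargs}>"
    return run

MODULES = {name: stub(name) for name in
           ["SEG", "SELECT", "COLORPOP", "BGBLUR", "REPLACE", "FACEDET", "EMOJI"]}

def execute(program, image):
    env = {"IMAGE": image}
    for line in filter(None, (ln.strip() for ln in program.splitlines())):
        var, module, arg_str = LINE.match(line).groups()
        if module == "RESULT":
            return env[arg_str.split("=", 1)[1]]
        kwargs = {}
        for pair in arg_str.split(","):
            key, value = (s.strip() for s in pair.split("=", 1))
            kwargs[key] = env.get(value, value.strip("'\""))
        env[var] = MODULES[module](**kwargs)
    return env

if __name__ == "__main__":
    prog = (
        "OBJ0=SEG(image=IMAGE)\n"
        "OBJ1=SELECT(image=IMAGE,object=OBJ0,query='man wearing black henley',category='person')\n"
        "IMAGE0=COLORPOP(image=IMAGE,object=OBJ1)\n"
        "IMAGE1=BGBLUR(image=IMAGE0,object=OBJ1)\n"
        "FINAL_RESULT=RESULT(var=IMAGE1)"
    )
    print(execute(prog, image="raw_image"))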
2308.01423 | 83 | Table S2. 100 questions in prediction task.
Can you generate the structures with the largest surface area? I'm looking for structures with a hydrogen uptake of about 500 cm3/cm3 at 100bar and 77K, can you generate those? Can you generate structures with a density close to 0.4? What structures can you generate with a smallest accessible pore volume? I'm interested in structures with a low O2 uptake at 298K and 1bar, can you generate these? I'm curious about structures where accessible volume fraction is less than 0.4, can the model generate these? Can the model generate structures where the largest cavity diameter is in the bottom quartile? I am curious about structures where the thermal stability is the highest among all, can you generate those? What structures can be generated with a N2 uptake in the 3rd quartile range? Can you create a MOF with a void fraction greater than 0.5 that has the highest CO2 Henry coefficient at 298 K?
Table S3. 10 questions in generation task
59 | 2308.01423#83 | ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks | ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to
predict and generate metal-organic frameworks (MOFs). By leveraging a
large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key
details from textual inputs and delivers appropriate responses, thus
eliminating the necessity for rigid structured queries. The system is comprised
of three core components (i.e. an agent, a toolkit, and an evaluator) and it
forms a robust pipeline that manages a variety of tasks, including data
retrieval, property prediction, and structure generations. The study further
explores the merits and constraints of using large language models (LLMs) AI
system in material sciences and showcases its transformative potential
for future advancements. | http://arxiv.org/pdf/2308.01423 | Yeonghun Kang, Jihan Kim | cs.CL, cs.AI, cs.LG, physics.chem-ph | null | null | cs.CL | 20230801 | 20230825 | [
{
"id": "2302.13971"
},
{
"id": "2306.11296"
},
{
"id": "2303.17580"
},
{
"id": "2305.18365"
},
{
"id": "2305.10601"
},
{
"id": "1810.04805"
},
{
"id": "2211.09085"
},
{
"id": "2304.05376"
},
{
"id": "2212.05238"
},
{
"id": "2108.07258"
},
{
"id": "2110.06197"
},
{
"id": "2306.06283"
},
{
"id": "2008.02275"
},
{
"id": "2303.12712"
},
{
"id": "2210.03629"
},
{
"id": "2205.00445"
},
{
"id": "2009.03300"
}
] |
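The ChatMOF question sets listed in the chunks above (Tables S2 and S3) pair a material identifier with a property and thermodynamic conditions. As a toy illustration only (ChatMOF itself delegates this extraction to an LLM agent), the sketch below pulls those fields out of such a question with regular expressions; the parse_question helper and its output schema are assumptions.

# Minimal sketch (illustrative): extract (material, temperature, pressure) from a
# ChatMOF-style property question using regular expressions.
import re

def parse_question(question):
    material = re.search(r"\b([A-Z]{6}\d*_(?:clean|charged))\b", question)
    temperature = re.search(r"(\d+)\s*K\b", question)
    pressure = re.search(r"(\d+)\s*bar\b", question)
    return {
        "material": material.group(1) if material else None,
        "temperature_K": int(temperature.group(1)) if temperature else None,
        "pressure_bar": int(pressure.group(1)) if pressure else None,
    }

if __name__ == "__main__":
    q = "At 77K and 100bar pressure, what is the hydrogen uptake of OCUVUF_clean?"
    print(parse_question(q))
    # {'material': 'OCUVUF_clean', 'temperature_K': 77, 'pressure_bar': 100}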
2308.00352 | 84 | Table 6: Additional results of statistics. '#' denotes 'The number of'; code statistics included (columns: ID, #code files, #lines of code per code file):
ID 0: 5.00 code files, 196.00 lines of code per code file
ID 1: 6.00, 191.00
ID 2: 3.00, 198.00
ID 3: 5.00, 164
ID 4: 6.00, 203.00
ID 5: 6.00, 219.00
ID 6: 4.00, 73.00
ID 7: 4.00, 316.00
ID 8: 5.00, 215.00
ID 9: 5.00, 215.00
ID 10: 3.00, 93.00
Avg.: 4.71, 191.57 | 2308.00352#84 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | Remarkable progress has been made on automated problem solving through
societies of agents based on large language models (LLMs). Existing LLM-based
multi-agent systems can already solve simple dialogue tasks. Solutions to more
complex tasks, however, are complicated through logic inconsistencies due to
cascading hallucinations caused by naively chaining LLMs. Here we introduce
MetaGPT, an innovative meta-programming framework incorporating efficient human
workflows into LLM-based multi-agent collaborations. MetaGPT encodes
Standardized Operating Procedures (SOPs) into prompt sequences for more
streamlined workflows, thus allowing agents with human-like domain expertise to
verify intermediate results and reduce errors. MetaGPT utilizes an assembly
line paradigm to assign diverse roles to various agents, efficiently breaking
down complex tasks into subtasks involving many agents working together. On
collaborative software engineering benchmarks, MetaGPT generates more coherent
solutions than previous chat-based multi-agent systems. Our project can be
found at https://github.com/geekan/MetaGPT | http://arxiv.org/pdf/2308.00352 | Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, Jürgen Schmidhuber | cs.AI, cs.MA | null | null | cs.AI | 20230801 | 20231106 | [
{
"id": "2308.12950"
},
{
"id": "2305.17066"
},
{
"id": "1511.09249"
},
{
"id": "2308.11432"
},
{
"id": "2306.08568"
},
{
"id": "2310.02304"
},
{
"id": "2303.08896"
},
{
"id": "2204.05999"
},
{
"id": "2309.16797"
},
{
"id": "2002.08155"
},
{
"id": "2305.16960"
}
] |
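The summary in this record describes an assembly-line paradigm in which role-specialized agents verify and extend one another's intermediate outputs, with SOPs encoded as prompt sequences. A minimal sketch of that hand-off pattern follows; the role names, prompt templates, and the llm placeholder are illustrative assumptions, not MetaGPT's actual API.

```python
from typing import Callable

# Placeholder type for a call to any LLM backend; an assumption, not MetaGPT's API.
LLM = Callable[[str], str]

def run_pipeline(requirement: str, llm: LLM) -> str:
    """Pass a requirement through an assembly line of role-specific prompts."""
    roles = [
        ("Product Manager", "Write a short PRD for: {input}"),
        ("Architect", "Design a file/module layout for this PRD: {input}"),
        ("Engineer", "Write code that implements this design: {input}"),
        ("QA Engineer", "Review this code and list fixes: {input}"),
    ]
    artifact = requirement
    for role, template in roles:
        # Each role consumes the previous role's output (SOP-style hand-off).
        artifact = llm(f"You are the {role}. " + template.format(input=artifact))
    return artifact

# Usage with a dummy LLM that just reports the prompt length it received.
print(run_pipeline("a CLI todo app", lambda p: f"<{len(p)} chars of output>")[:40])
```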
2307.16364 | 0 | # Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators
Paul Denny The University of Auckland Auckland, New Zealand [email protected]
Juho Leinonen The University of Auckland Auckland, New Zealand [email protected]
James Prather Abilene Christian University Abilene, TX, USA [email protected]
Thezyrie Amarouche University of Toronto Scarborough Toronto, ON, Canada [email protected]
Andrew Luxton-Reilly The University of Auckland Auckland, New Zealand [email protected]
Brett A. Becker University College Dublin Dublin, Ireland [email protected]
Brent N. Reeves Abilene Christian University Abilene, TX, USA [email protected]
# ABSTRACT | 2307.16364#0 | Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators | With their remarkable ability to generate code, large language models (LLMs)
are a transformative technology for computing education practice. They have
created an urgent need for educators to rethink pedagogical approaches and
teaching strategies for newly emerging skill sets. Traditional approaches to
learning programming have focused on frequent and repeated practice at writing
code. The ease with which code can now be generated has resulted in a shift in
focus towards reading, understanding and evaluating LLM-generated code. In
parallel with this shift, a new essential skill is emerging -- the ability to
construct good prompts for code-generating models. This paper introduces a
novel pedagogical concept known as a `Prompt Problem', designed to help
students learn how to craft effective prompts for LLMs. A Prompt Problem
challenges a student to create a natural language prompt that leads an LLM to
produce the correct code for a specific problem. To support the delivery of
Prompt Problems at scale, in this paper we also present a novel tool called
Promptly which hosts a repository of Prompt Problems and automates the
evaluation of prompt-generated code. We report empirical findings from a field
study in which Promptly was deployed in a first-year Python programming course
(n=54). We explore student interactions with the tool and their perceptions of
the Prompt Problem concept. We found that Promptly was largely well-received by
students for its ability to engage their computational thinking skills and
expose them to new programming constructs. We also discuss avenues for future
work, including variations on the design of Prompt Problems and the need to
study their integration into the curriculum and teaching practice. | http://arxiv.org/pdf/2307.16364 | Paul Denny, Juho Leinonen, James Prather, Andrew Luxton-Reilly, Thezyrie Amarouche, Brett A. Becker, Brent N. Reeves | cs.HC, cs.AI | null | null | cs.HC | 20230731 | 20230731 | [
{
"id": "2306.04556"
},
{
"id": "2302.11382"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2304.03938"
}
] |
2307.16789 | 0 | TOOLLLM: FACILITATING LARGE LANGUAGE MODELS TO MASTER 16000+ REAL-WORLD APIS
Yujia Qin1∗, Shihao Liang1∗, Yining Ye1, Kunlun Zhu1, Lan Yan1, Yaxi Lu1, Yankai Lin3†, Xin Cong1, Xiangru Tang4, Bill Qian4, Sihan Zhao1, Lauren Hong1, Runchu Tian1, Ruobing Xie5, Jie Zhou5, Mark Gerstein4, Dahai Li2,6, Zhiyuan Liu1†, Maosong Sun1† 1Tsinghua University 2ModelBest Inc. 3Renmin University of China 4Yale University 5WeChat AI, Tencent Inc. 6Zhihu Inc. [email protected]
# ABSTRACT | 2307.16789#0 | ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs | Despite the advancements of open-source large language models (LLMs), e.g.,
LLaMA, they remain significantly limited in tool-use capabilities, i.e., using
external tools (APIs) to fulfill human instructions. The reason is that current
instruction tuning largely focuses on basic language tasks but ignores the
tool-use domain. This is in contrast to the excellent tool-use capabilities of
state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap,
we introduce ToolLLM, a general tool-use framework encompassing data
construction, model training, and evaluation. We first present ToolBench, an
instruction-tuning dataset for tool use, which is constructed automatically
using ChatGPT. Specifically, the construction can be divided into three stages:
(i) API collection: we collect 16,464 real-world RESTful APIs spanning 49
categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to
generate diverse instructions involving these APIs, covering both single-tool
and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to
search for a valid solution path (chain of API calls) for each instruction. To
enhance the reasoning capabilities of LLMs, we develop a novel depth-first
search-based decision tree algorithm. It enables LLMs to evaluate multiple
reasoning traces and expand the search space. Moreover, to evaluate the
tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval.
Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it
with a neural API retriever to recommend appropriate APIs for each instruction.
Experiments show that ToolLLaMA demonstrates a remarkable ability to execute
complex instructions and generalize to unseen APIs, and exhibits comparable
performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot
generalization ability in an out-of-distribution tool-use dataset: APIBench. | http://arxiv.org/pdf/2307.16789 | Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230731 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2305.16504"
},
{
"id": "2308.12519"
},
{
"id": "2306.08640"
},
{
"id": "2305.10601"
},
{
"id": "2304.08244"
},
{
"id": "2307.09288"
},
{
"id": "2306.01116"
},
{
"id": "2305.14318"
},
{
"id": "2306.13304"
},
{
"id": "2304.08354"
},
{
"id": "2306.11489"
},
{
"id": "2306.05301"
},
{
"id": "1908.10084"
},
{
"id": "2306.06624"
},
{
"id": "2305.06849"
},
{
"id": "2305.11554"
},
{
"id": "2212.10560"
},
{
"id": "2305.15334"
},
{
"id": "2305.14233"
},
{
"id": "2303.12712"
},
{
"id": "2109.01652"
},
{
"id": "2306.15595"
}
] |
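The summary above mentions a depth-first search-based decision tree (DFSDT) that lets the model evaluate multiple reasoning traces and back out of failing chains of API calls. The sketch below shows only the generic backtracking skeleton with placeholder callbacks; it is not the paper's LLM-driven implementation.

```python
from typing import Callable, Optional

def dfs_solve(
    state: list[str],                              # API calls made so far
    candidates: Callable[[list[str]], list[str]],  # propose next API calls for this state
    is_solution: Callable[[list[str]], bool],      # does this chain complete the task?
    max_depth: int,
) -> Optional[list[str]]:
    """Depth-first search over chains of API calls, backtracking on dead ends."""
    if is_solution(state):
        return state
    if len(state) >= max_depth:
        return None  # give up on this branch and backtrack
    for api_call in candidates(state):
        found = dfs_solve(state + [api_call], candidates, is_solution, max_depth)
        if found is not None:
            return found
    return None

# Toy usage: any chain that ends by calling "summarize" counts as a solution.
chain = dfs_solve(
    [],
    candidates=lambda s: ["search_flights", "get_weather", "summarize"],
    is_solution=lambda s: bool(s) and s[-1] == "summarize",
    max_depth=2,
)
print(chain)  # ['search_flights', 'summarize']
```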
2307.16877 | 0 | # Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering
# Vaibhav Adlakha1,2 Parishad BehnamGhader1,2,* Xing Han Lu1,2,* Nicholas Meade1,2,* Siva Reddy1,2,3
1Mila - Quebec AI Institute 2McGill University 3Facebook CIFAR AI Chair {firstname.lastname}@mila.quebec
# Abstract
Retriever-augmented instruction-following models are attractive alternatives to fine-tuned approaches for information-seeking tasks such as question answering (QA). By simply prepending retrieved documents in its input along with an instruction, these models can be adapted to various information domains and tasks without additional fine-tuning. While the model responses tend to be natural and fluent, the additional verbosity makes traditional QA evaluation metrics such as exact match (EM) and F1 unreliable for accurately quantifying model performance. | 2307.16877#0 | Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering | Retriever-augmented instruction-following models are attractive alternatives
to fine-tuned approaches for information-seeking tasks such as question
answering (QA). By simply prepending retrieved documents in its input along
with an instruction, these models can be adapted to various information domains
and tasks without additional fine-tuning. While the model responses tend to be
natural and fluent, the additional verbosity makes traditional QA evaluation
metrics such as exact match (EM) and F1 unreliable for accurately quantifying
model performance.
In this work, we investigate the performance of instruction-following models
across three information-seeking QA tasks. We use both automatic and human
evaluation to evaluate these models along two dimensions: 1) how well they
satisfy the user's information need (correctness), and 2) whether they produce
a response based on the provided knowledge (faithfulness). Guided by human
evaluation and analysis, we highlight the shortcomings of traditional metrics
for both correctness and faithfulness. We then propose simple token-overlap
based and model-based metrics that reflect the true performance of these
models. Our analysis reveals that instruction-following models are competitive,
and sometimes even outperform fine-tuned models for correctness. However, these
models struggle to stick to the provided knowledge and often hallucinate in
their responses. We hope our work encourages a more holistic evaluation of
instruction-following models for QA. Our code and data is available at
https://github.com/McGill-NLP/instruct-qa | http://arxiv.org/pdf/2307.16877 | Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy | cs.CL, cs.AI | null | null | cs.CL | 20230731 | 20230731 | [
{
"id": "2201.08239"
},
{
"id": "2004.04906"
},
{
"id": "2304.03277"
},
{
"id": "2203.02155"
},
{
"id": "2012.14983"
},
{
"id": "2205.14334"
},
{
"id": "2305.18654"
},
{
"id": "2112.11446"
}
] |
2307.16364 | 1 | # ABSTRACT
involving many small problems [1, 7] or through scaffolding via activities such as Parsons problems [10, 11]. However, LLMs are now capable of producing code automatically and have demonstrated impressive performance on problems that are typical in introductory programming courses [12, 13, 28]. In addition to the opportunities they present, educators have voiced concerns around the potential misuse of these models for plagiarism, and over-reliance on AI-generated code by beginners [3], leading to a possible erosion of traditional coding skills [9]. New pedagogical approaches are needed to develop the changing skillsets that students require in the era of generative AI [5]. | 2307.16364#1 | Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators | With their remarkable ability to generate code, large language models (LLMs)
are a transformative technology for computing education practice. They have
created an urgent need for educators to rethink pedagogical approaches and
teaching strategies for newly emerging skill sets. Traditional approaches to
learning programming have focused on frequent and repeated practice at writing
code. The ease with which code can now be generated has resulted in a shift in
focus towards reading, understanding and evaluating LLM-generated code. In
parallel with this shift, a new essential skill is emerging -- the ability to
construct good prompts for code-generating models. This paper introduces a
novel pedagogical concept known as a `Prompt Problem', designed to help
students learn how to craft effective prompts for LLMs. A Prompt Problem
challenges a student to create a natural language prompt that leads an LLM to
produce the correct code for a specific problem. To support the delivery of
Prompt Problems at scale, in this paper we also present a novel tool called
Promptly which hosts a repository of Prompt Problems and automates the
evaluation of prompt-generated code. We report empirical findings from a field
study in which Promptly was deployed in a first-year Python programming course
(n=54). We explore student interactions with the tool and their perceptions of
the Prompt Problem concept. We found that Promptly was largely well-received by
students for its ability to engage their computational thinking skills and
expose them to new programming constructs. We also discuss avenues for future
work, including variations on the design of Prompt Problems and the need to
study their integration into the curriculum and teaching practice. | http://arxiv.org/pdf/2307.16364 | Paul Denny, Juho Leinonen, James Prather, Andrew Luxton-Reilly, Thezyrie Amarouche, Brett A. Becker, Brent N. Reeves | cs.HC, cs.AI | null | null | cs.HC | 20230731 | 20230731 | [
{
"id": "2306.04556"
},
{
"id": "2302.11382"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2304.03938"
}
] |
2307.16789 | 1 | Despite the advancements of open-source large language models (LLMs), e.g., LLaMA, they remain significantly limited in tool-use capabilities, i.e., using exter- nal tools (APIs) to fulfill human instructions. The reason is that current instruction tuning largely focuses on basic language tasks but ignores the tool-use domain. This is in contrast to the excellent tool-use capabilities of state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap, we introduce ToolLLM, a general tool-use framework encompassing data construction, model training, and evaluation. We first present ToolBench, an instruction-tuning dataset for tool use, which is constructed automatically using ChatGPT. Specifically, the con- struction can be divided into three stages: (i) API collection: we collect 16, 464 real-world RESTful APIs spanning 49 categories from RapidAPI Hub; (ii) instruc- tion generation: we prompt ChatGPT to generate diverse instructions involving these APIs, covering both single-tool and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to search for a | 2307.16789#1 | ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs | Despite the advancements of open-source large language models (LLMs), e.g.,
LLaMA, they remain significantly limited in tool-use capabilities, i.e., using
external tools (APIs) to fulfill human instructions. The reason is that current
instruction tuning largely focuses on basic language tasks but ignores the
tool-use domain. This is in contrast to the excellent tool-use capabilities of
state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap,
we introduce ToolLLM, a general tool-use framework encompassing data
construction, model training, and evaluation. We first present ToolBench, an
instruction-tuning dataset for tool use, which is constructed automatically
using ChatGPT. Specifically, the construction can be divided into three stages:
(i) API collection: we collect 16,464 real-world RESTful APIs spanning 49
categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to
generate diverse instructions involving these APIs, covering both single-tool
and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to
search for a valid solution path (chain of API calls) for each instruction. To
enhance the reasoning capabilities of LLMs, we develop a novel depth-first
search-based decision tree algorithm. It enables LLMs to evaluate multiple
reasoning traces and expand the search space. Moreover, to evaluate the
tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval.
Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it
with a neural API retriever to recommend appropriate APIs for each instruction.
Experiments show that ToolLLaMA demonstrates a remarkable ability to execute
complex instructions and generalize to unseen APIs, and exhibits comparable
performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot
generalization ability in an out-of-distribution tool-use dataset: APIBench. | http://arxiv.org/pdf/2307.16789 | Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230731 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2305.16504"
},
{
"id": "2308.12519"
},
{
"id": "2306.08640"
},
{
"id": "2305.10601"
},
{
"id": "2304.08244"
},
{
"id": "2307.09288"
},
{
"id": "2306.01116"
},
{
"id": "2305.14318"
},
{
"id": "2306.13304"
},
{
"id": "2304.08354"
},
{
"id": "2306.11489"
},
{
"id": "2306.05301"
},
{
"id": "1908.10084"
},
{
"id": "2306.06624"
},
{
"id": "2305.06849"
},
{
"id": "2305.11554"
},
{
"id": "2212.10560"
},
{
"id": "2305.15334"
},
{
"id": "2305.14233"
},
{
"id": "2303.12712"
},
{
"id": "2109.01652"
},
{
"id": "2306.15595"
}
] |
In this work, we investigate the performance of instruction-following models across three information-seeking QA tasks. We use both automatic and human evaluation to evaluate these models along two dimensions: 1) how well they satisfy the user's information need (correctness), and 2) whether they produce a response based on the provided knowledge (faithfulness). Guided by human evaluation and analysis, we highlight the shortcomings of traditional metrics for both correctness and faithfulness. We then propose simple token-overlap based and model-based metrics that reflect the true performance of these models. Our analysis reveals that instruction-following models are competitive, and sometimes even outperform fine-tuned models for correctness. However, these models struggle to stick to the provided knowledge and often hallucinate in their responses. We hope our work encourages a more holistic evaluation of instruction-following models for QA. Our code and data is available at https://github.com/McGill-NLP/instruct-qa
# Introduction | 2307.16877#1 | Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering | Retriever-augmented instruction-following models are attractive alternatives
to fine-tuned approaches for information-seeking tasks such as question
answering (QA). By simply prepending retrieved documents in its input along
with an instruction, these models can be adapted to various information domains
and tasks without additional fine-tuning. While the model responses tend to be
natural and fluent, the additional verbosity makes traditional QA evaluation
metrics such as exact match (EM) and F1 unreliable for accurately quantifying
model performance.
In this work, we investigate the performance of instruction-following models
across three information-seeking QA tasks. We use both automatic and human
evaluation to evaluate these models along two dimensions: 1) how well they
satisfy the user's information need (correctness), and 2) whether they produce
a response based on the provided knowledge (faithfulness). Guided by human
evaluation and analysis, we highlight the shortcomings of traditional metrics
for both correctness and faithfulness. We then propose simple token-overlap
based and model-based metrics that reflect the true performance of these
models. Our analysis reveals that instruction-following models are competitive,
and sometimes even outperform fine-tuned models for correctness. However, these
models struggle to stick to the provided knowledge and often hallucinate in
their responses. We hope our work encourages a more holistic evaluation of
instruction-following models for QA. Our code and data is available at
https://github.com/McGill-NLP/instruct-qa | http://arxiv.org/pdf/2307.16877 | Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy | cs.CL, cs.AI | null | null | cs.CL | 20230731 | 20230731 | [
{
"id": "2201.08239"
},
{
"id": "2004.04906"
},
{
"id": "2304.03277"
},
{
"id": "2203.02155"
},
{
"id": "2012.14983"
},
{
"id": "2205.14334"
},
{
"id": "2305.18654"
},
{
"id": "2112.11446"
}
] |
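The token-overlap metrics referred to in the record above can be sketched in a few lines. The snippet below is a minimal illustration, assuming lowercasing, punctuation stripping, and whitespace tokenization; it is not the paper's exact implementation. Token recall (the fraction of reference-answer tokens that appear in the response) credits a verbose but correct answer that exact match would score as 0.

```python
import re
from collections import Counter

def tokenize(text: str) -> list[str]:
    # Lowercase, strip punctuation, split on whitespace.
    return re.sub(r"[^\w\s]", " ", text.lower()).split()

def token_recall(response: str, reference: str) -> float:
    """Fraction of reference tokens that also appear in the response."""
    resp, ref = Counter(tokenize(response)), Counter(tokenize(reference))
    return sum((resp & ref).values()) / max(sum(ref.values()), 1)

def token_f1(response: str, reference: str) -> float:
    """Harmonic mean of token precision and recall."""
    resp, ref = Counter(tokenize(response)), Counter(tokenize(reference))
    overlap = sum((resp & ref).values())
    if overlap == 0:
        return 0.0
    p, r = overlap / sum(resp.values()), overlap / sum(ref.values())
    return 2 * p * r / (p + r)

reference = "Ottawa"
verbose_but_correct = "The capital of Canada is Ottawa."
print(token_recall(verbose_but_correct, reference))        # 1.0
print(round(token_f1(verbose_but_correct, reference), 2))  # 0.29
```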
2307.16364 | 2 | With their remarkable ability to generate code, large language models (LLMs) are a transformative technology for computing education practice. They have created an urgent need for educators to rethink pedagogical approaches and teaching strategies for newly emerging skill sets. Traditional approaches to learning programming have focused on frequent and repeated practice at writing code. The ease with which code can now be generated has resulted in a shift in focus towards reading, understanding and evaluating LLM-generated code. In parallel with this shift, a new essential skill is emerging -- the ability to construct good prompts for code-generating models. This paper introduces a novel pedagogical concept known as a `Prompt Problem', designed to help students learn how to craft effective prompts for LLMs. A Prompt Problem challenges a student to create a natural language prompt that leads an LLM to produce the correct code for a specific problem. To support the delivery of Prompt Problems at scale, in this paper we also present a novel tool called Promptly which hosts a repository of Prompt Problems and automates the evaluation of prompt-generated code. We report empirical findings from a field study in which Promptly was deployed in a | 2307.16364#2 | Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators | With their remarkable ability to generate code, large language models (LLMs)
are a transformative technology for computing education practice. They have
created an urgent need for educators to rethink pedagogical approaches and
teaching strategies for newly emerging skill sets. Traditional approaches to
learning programming have focused on frequent and repeated practice at writing
code. The ease with which code can now be generated has resulted in a shift in
focus towards reading, understanding and evaluating LLM-generated code. In
parallel with this shift, a new essential skill is emerging -- the ability to
construct good prompts for code-generating models. This paper introduces a
novel pedagogical concept known as a `Prompt Problem', designed to help
students learn how to craft effective prompts for LLMs. A Prompt Problem
challenges a student to create a natural language prompt that leads an LLM to
produce the correct code for a specific problem. To support the delivery of
Prompt Problems at scale, in this paper we also present a novel tool called
Promptly which hosts a repository of Prompt Problems and automates the
evaluation of prompt-generated code. We report empirical findings from a field
study in which Promptly was deployed in a first-year Python programming course
(n=54). We explore student interactions with the tool and their perceptions of
the Prompt Problem concept. We found that Promptly was largely well-received by
students for its ability to engage their computational thinking skills and
expose them to new programming constructs. We also discuss avenues for future
work, including variations on the design of Prompt Problems and the need to
study their integration into the curriculum and teaching practice. | http://arxiv.org/pdf/2307.16364 | Paul Denny, Juho Leinonen, James Prather, Andrew Luxton-Reilly, Thezyrie Amarouche, Brett A. Becker, Brent N. Reeves | cs.HC, cs.AI | null | null | cs.HC | 20230731 | 20230731 | [
{
"id": "2306.04556"
},
{
"id": "2302.11382"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2304.03938"
}
] |
2307.16789 | 2 | involving these APIs, covering both single-tool and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to search for a valid solution path (chain of API calls) for each instruction. To enhance the reasoning capabilities of LLMs, we develop a novel depth-first search-based decision tree algorithm. It enables LLMs to evaluate multiple reasoning traces and expand the search space. Moreover, to evaluate the tool-use capabilities of LLMs, we develop an automatic evalu- ator: ToolEval. Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it with a neural API retriever to recommend appropriate APIs for each instruction. Experiments show that ToolLLaMA demonstrates a remarkable ability to execute complex instructions and generalize to unseen APIs, and exhibits comparable performance to ChatGPT. Our ToolLLaMA also demon- strates strong zero-shot generalization ability in an out-of-distribution tool-use dataset: APIBench. The codes, trained models, and demo are publicly available at https://github.com/OpenBMB/ToolBench. | 2307.16789#2 | ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs | Despite the advancements of open-source large language models (LLMs), e.g.,
LLaMA, they remain significantly limited in tool-use capabilities, i.e., using
external tools (APIs) to fulfill human instructions. The reason is that current
instruction tuning largely focuses on basic language tasks but ignores the
tool-use domain. This is in contrast to the excellent tool-use capabilities of
state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap,
we introduce ToolLLM, a general tool-use framework encompassing data
construction, model training, and evaluation. We first present ToolBench, an
instruction-tuning dataset for tool use, which is constructed automatically
using ChatGPT. Specifically, the construction can be divided into three stages:
(i) API collection: we collect 16,464 real-world RESTful APIs spanning 49
categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to
generate diverse instructions involving these APIs, covering both single-tool
and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to
search for a valid solution path (chain of API calls) for each instruction. To
enhance the reasoning capabilities of LLMs, we develop a novel depth-first
search-based decision tree algorithm. It enables LLMs to evaluate multiple
reasoning traces and expand the search space. Moreover, to evaluate the
tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval.
Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it
with a neural API retriever to recommend appropriate APIs for each instruction.
Experiments show that ToolLLaMA demonstrates a remarkable ability to execute
complex instructions and generalize to unseen APIs, and exhibits comparable
performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot
generalization ability in an out-of-distribution tool-use dataset: APIBench. | http://arxiv.org/pdf/2307.16789 | Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230731 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2305.16504"
},
{
"id": "2308.12519"
},
{
"id": "2306.08640"
},
{
"id": "2305.10601"
},
{
"id": "2304.08244"
},
{
"id": "2307.09288"
},
{
"id": "2306.01116"
},
{
"id": "2305.14318"
},
{
"id": "2306.13304"
},
{
"id": "2304.08354"
},
{
"id": "2306.11489"
},
{
"id": "2306.05301"
},
{
"id": "1908.10084"
},
{
"id": "2306.06624"
},
{
"id": "2305.06849"
},
{
"id": "2305.11554"
},
{
"id": "2212.10560"
},
{
"id": "2305.15334"
},
{
"id": "2305.14233"
},
{
"id": "2303.12712"
},
{
"id": "2109.01652"
},
{
"id": "2306.15595"
}
] |
2307.16877 | 2 | # Introduction
One of the goals of natural language processing (NLP) is to enable systems to perform tasks based on natural language instructions as this would empower users to interact in an intuitive and flexi
Figure 1 (example): Question: Where are One Direction from? Knowledge: One Direction, often shortened to 1D, are an English-Irish pop boy band formed in London, England in 2010. The group are composed of Niall Horan, Liam Payne, Harry Styles and Louis Tomlinson; former member Zayn Malik departed from the group in 2015. The group signed with Simon Cowell's record label Syco Records after forming and finishing third in the seventh series of the British televised singing competition "The X Factor" in 2010. Propelled to... Response: One Direction are from London, England and Mullingar, Ireland. Reference Answer: London, England. Correctness: Human → Yes, Recall → 1.00, F1 → 0.36. Faithfulness: Human → Partially, K-Precision → 0.77, K-F1 → 0.09. | 2307.16877#2 | Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering | Retriever-augmented instruction-following models are attractive alternatives
to fine-tuned approaches for information-seeking tasks such as question
answering (QA). By simply prepending retrieved documents in its input along
with an instruction, these models can be adapted to various information domains
and tasks without additional fine-tuning. While the model responses tend to be
natural and fluent, the additional verbosity makes traditional QA evaluation
metrics such as exact match (EM) and F1 unreliable for accurately quantifying
model performance.
In this work, we investigate the performance of instruction-following models
across three information-seeking QA tasks. We use both automatic and human
evaluation to evaluate these models along two dimensions: 1) how well they
satisfy the user's information need (correctness), and 2) whether they produce
a response based on the provided knowledge (faithfulness). Guided by human
evaluation and analysis, we highlight the shortcomings of traditional metrics
for both correctness and faithfulness. We then propose simple token-overlap
based and model-based metrics that reflect the true performance of these
models. Our analysis reveals that instruction-following models are competitive,
and sometimes even outperform fine-tuned models for correctness. However, these
models struggle to stick to the provided knowledge and often hallucinate in
their responses. We hope our work encourages a more holistic evaluation of
instruction-following models for QA. Our code and data is available at
https://github.com/McGill-NLP/instruct-qa | http://arxiv.org/pdf/2307.16877 | Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy | cs.CL, cs.AI | null | null | cs.CL | 20230731 | 20230731 | [
{
"id": "2201.08239"
},
{
"id": "2004.04906"
},
{
"id": "2304.03277"
},
{
"id": "2203.02155"
},
{
"id": "2012.14983"
},
{
"id": "2205.14334"
},
{
"id": "2305.18654"
},
{
"id": "2112.11446"
}
] |
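The faithfulness number in the figure above can be approximated with simple token overlap. The sketch below, assuming lowercasing, punctuation stripping, and whitespace tokenization (not the paper's exact implementation), computes K-Precision, i.e. the fraction of response tokens that occur in the provided knowledge; on the truncated snippet it gives roughly 0.78, close to the 0.77 shown, with the exact value depending on the tokenizer and the full passage.

```python
import re

def tokens(text: str) -> list[str]:
    return re.sub(r"[^\w\s]", " ", text.lower()).split()

def k_precision(response: str, knowledge: str) -> float:
    """Fraction of response tokens that also occur in the knowledge passage."""
    resp, know = tokens(response), set(tokens(knowledge))
    return sum(tok in know for tok in resp) / max(len(resp), 1)

knowledge = ("One Direction, often shortened to 1D, are an English-Irish pop boy band "
             "formed in London, England in 2010. The group are composed of Niall Horan, "
             "Liam Payne, Harry Styles and Louis Tomlinson; former member Zayn Malik "
             "departed from the group in 2015.")
response = "One Direction are from London, England and Mullingar, Ireland."

print(round(k_precision(response, knowledge), 2))  # ~0.78: 'Mullingar' and 'Ireland' are unsupported
```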
2307.16364 | 3 | Problems and automates the evaluation of prompt-generated code. We report empirical findings from a field study in which Promptly was deployed in a first-year Python programming course (n = 54). We explore student interactions with the tool and their perceptions of the Prompt Problem concept. We found that Promptly was largely well-received by students for its ability to engage their computational thinking skills and expose them to new programming constructs. We also discuss avenues for future work, including variations on the design of Prompt Problems and the need to study their integration into the curriculum and teaching practice. | 2307.16364#3 | Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators | With their remarkable ability to generate code, large language models (LLMs)
are a transformative technology for computing education practice. They have
created an urgent need for educators to rethink pedagogical approaches and
teaching strategies for newly emerging skill sets. Traditional approaches to
learning programming have focused on frequent and repeated practice at writing
code. The ease with which code can now be generated has resulted in a shift in
focus towards reading, understanding and evaluating LLM-generated code. In
parallel with this shift, a new essential skill is emerging -- the ability to
construct good prompts for code-generating models. This paper introduces a
novel pedagogical concept known as a `Prompt Problem', designed to help
students learn how to craft effective prompts for LLMs. A Prompt Problem
challenges a student to create a natural language prompt that leads an LLM to
produce the correct code for a specific problem. To support the delivery of
Prompt Problems at scale, in this paper we also present a novel tool called
Promptly which hosts a repository of Prompt Problems and automates the
evaluation of prompt-generated code. We report empirical findings from a field
study in which Promptly was deployed in a first-year Python programming course
(n=54). We explore student interactions with the tool and their perceptions of
the Prompt Problem concept. We found that Promptly was largely well-received by
students for its ability to engage their computational thinking skills and
expose them to new programming constructs. We also discuss avenues for future
work, including variations on the design of Prompt Problems and the need to
study their integration into the curriculum and teaching practice. | http://arxiv.org/pdf/2307.16364 | Paul Denny, Juho Leinonen, James Prather, Andrew Luxton-Reilly, Thezyrie Amarouche, Brett A. Becker, Brent N. Reeves | cs.HC, cs.AI | null | null | cs.HC | 20230731 | 20230731 | [
{
"id": "2306.04556"
},
{
"id": "2302.11382"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2304.03938"
}
] |
2307.16789 | 3 | # INTRODUCTION
Tool learning (Qin et al., 2023b) aims to unleash the power of large language models (LLMs) to effectively interact with various tools (APIs) to accomplish complex tasks. By integrating LLMs with APIs, we can greatly expand their utility and empower them to serve as efficient intermediaries between users and the vast ecosystem of applications. Although open-source LLMs, e.g., LLaMA (Touvron et al., 2023a), have achieved versatile capabilities through instruction tuning (Taori et al., 2023; Chiang et al., 2023), they still lack the sophistication in performing higher-level tasks, such as appropriately interacting with tools (APIs) to fulfill complex human instruction. This deficiency is because current instruction tuning largely focuses on basic language tasks, with a relative neglect of the tool-use domain. On the other hand, current state-of-the-art (SOTA) LLMs (e.g., ChatGPT (OpenAI,
∗ Indicates equal contribution. † Corresponding author. | 2307.16789#3 | ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs | Despite the advancements of open-source large language models (LLMs), e.g.,
LLaMA, they remain significantly limited in tool-use capabilities, i.e., using
external tools (APIs) to fulfill human instructions. The reason is that current
instruction tuning largely focuses on basic language tasks but ignores the
tool-use domain. This is in contrast to the excellent tool-use capabilities of
state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap,
we introduce ToolLLM, a general tool-use framework encompassing data
construction, model training, and evaluation. We first present ToolBench, an
instruction-tuning dataset for tool use, which is constructed automatically
using ChatGPT. Specifically, the construction can be divided into three stages:
(i) API collection: we collect 16,464 real-world RESTful APIs spanning 49
categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to
generate diverse instructions involving these APIs, covering both single-tool
and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to
search for a valid solution path (chain of API calls) for each instruction. To
enhance the reasoning capabilities of LLMs, we develop a novel depth-first
search-based decision tree algorithm. It enables LLMs to evaluate multiple
reasoning traces and expand the search space. Moreover, to evaluate the
tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval.
Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it
with a neural API retriever to recommend appropriate APIs for each instruction.
Experiments show that ToolLLaMA demonstrates a remarkable ability to execute
complex instructions and generalize to unseen APIs, and exhibits comparable
performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot
generalization ability in an out-of-distribution tool-use dataset: APIBench. | http://arxiv.org/pdf/2307.16789 | Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230731 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2305.16504"
},
{
"id": "2308.12519"
},
{
"id": "2306.08640"
},
{
"id": "2305.10601"
},
{
"id": "2304.08244"
},
{
"id": "2307.09288"
},
{
"id": "2306.01116"
},
{
"id": "2305.14318"
},
{
"id": "2306.13304"
},
{
"id": "2304.08354"
},
{
"id": "2306.11489"
},
{
"id": "2306.05301"
},
{
"id": "1908.10084"
},
{
"id": "2306.06624"
},
{
"id": "2305.06849"
},
{
"id": "2305.11554"
},
{
"id": "2212.10560"
},
{
"id": "2305.15334"
},
{
"id": "2305.14233"
},
{
"id": "2303.12712"
},
{
"id": "2109.01652"
},
{
"id": "2306.15595"
}
] |
2307.16364 | 4 |
Teaching students to read and understand code are longstanding goals of introductory courses, and they are becoming increasingly important skills given the ease with which code can be generated by LLM-based tools. An equally important emerging skill is the ability to formulate effective prompts for LLMs to generate code. Recent work has shown that although many typical introductory problems can be solved by LLMs using verbatim textbook or exam problem statements [12, 13], this approach is not always sufficient. For example, manual modification of the prompts to include explicit algorithmic hints greatly improves code-generation performance [30]. In recent work, Denny et al. argue that the ability to engineer effective prompts that generate correct solutions is now an essential skill for students, yet they do not propose concrete approaches for how this skill can be taught [6].
To address this concern, in the current paper we introduce the concept of a `Prompt Problem' -- a new exercise paradigm in which students solve programming exercises by formulating natural language prompts for code-generating LLMs. Students are presented with a visual representation of a problem that illustrates how input values should be transformed to an output. Their task is to devise a prompt that would guide an LLM to generate the code required to solve the problem.
# 1 INTRODUCTION | 2307.16364#4 | Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators | With their remarkable ability to generate code, large language models (LLMs)
are a transformative technology for computing education practice. They have
created an urgent need for educators to rethink pedagogical approaches and
teaching strategies for newly emerging skill sets. Traditional approaches to
learning programming have focused on frequent and repeated practice at writing
code. The ease with which code can now be generated has resulted in a shift in
focus towards reading, understanding and evaluating LLM-generated code. In
parallel with this shift, a new essential skill is emerging -- the ability to
construct good prompts for code-generating models. This paper introduces a
novel pedagogical concept known as a `Prompt Problem', designed to help
students learn how to craft effective prompts for LLMs. A Prompt Problem
challenges a student to create a natural language prompt that leads an LLM to
produce the correct code for a specific problem. To support the delivery of
Prompt Problems at scale, in this paper we also present a novel tool called
Promptly which hosts a repository of Prompt Problems and automates the
evaluation of prompt-generated code. We report empirical findings from a field
study in which Promptly was deployed in a first-year Python programming course
(n=54). We explore student interactions with the tool and their perceptions of
the Prompt Problem concept. We found that Promptly was largely well-received by
students for its ability to engage their computational thinking skills and
expose them to new programming constructs. We also discuss avenues for future
work, including variations on the design of Prompt Problems and the need to
study their integration into the curriculum and teaching practice. | http://arxiv.org/pdf/2307.16364 | Paul Denny, Juho Leinonen, James Prather, Andrew Luxton-Reilly, Thezyrie Amarouche, Brett A. Becker, Brent N. Reeves | cs.HC, cs.AI | null | null | cs.HC | 20230731 | 20230731 | [
{
"id": "2306.04556"
},
{
"id": "2302.11382"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2304.03938"
}
] |
2307.16789 | 4 |
Figure 1: Three phases of constructing ToolBench and how we train our API retriever and ToolLLaMA. During inference of an instruction, the API retriever recommends relevant APIs to ToolLLaMA, which performs multiple rounds of API calls to derive the final answer. The whole reasoning process is evaluated by ToolEval.
2022) and GPT-4 (OpenAI, 2023)), which have demonstrated impressive competencies in utilizing tools (Bubeck et al., 2023), are closed-source with their inner mechanisms opaque. This limits the democratization of AI technologies and the scope of community-driven innovation and development. In this regard, we deem it urgent to empower open-source LLMs to skillfully master diverse APIs. | 2307.16789#4 | ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs | Despite the advancements of open-source large language models (LLMs), e.g.,
LLaMA, they remain significantly limited in tool-use capabilities, i.e., using
external tools (APIs) to fulfill human instructions. The reason is that current
instruction tuning largely focuses on basic language tasks but ignores the
tool-use domain. This is in contrast to the excellent tool-use capabilities of
state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap,
we introduce ToolLLM, a general tool-use framework encompassing data
construction, model training, and evaluation. We first present ToolBench, an
instruction-tuning dataset for tool use, which is constructed automatically
using ChatGPT. Specifically, the construction can be divided into three stages:
(i) API collection: we collect 16,464 real-world RESTful APIs spanning 49
categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to
generate diverse instructions involving these APIs, covering both single-tool
and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to
search for a valid solution path (chain of API calls) for each instruction. To
enhance the reasoning capabilities of LLMs, we develop a novel depth-first
search-based decision tree algorithm. It enables LLMs to evaluate multiple
reasoning traces and expand the search space. Moreover, to evaluate the
tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval.
Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it
with a neural API retriever to recommend appropriate APIs for each instruction.
Experiments show that ToolLLaMA demonstrates a remarkable ability to execute
complex instructions and generalize to unseen APIs, and exhibits comparable
performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot
generalization ability in an out-of-distribution tool-use dataset: APIBench. | http://arxiv.org/pdf/2307.16789 | Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230731 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2305.16504"
},
{
"id": "2308.12519"
},
{
"id": "2306.08640"
},
{
"id": "2305.10601"
},
{
"id": "2304.08244"
},
{
"id": "2307.09288"
},
{
"id": "2306.01116"
},
{
"id": "2305.14318"
},
{
"id": "2306.13304"
},
{
"id": "2304.08354"
},
{
"id": "2306.11489"
},
{
"id": "2306.05301"
},
{
"id": "1908.10084"
},
{
"id": "2306.06624"
},
{
"id": "2305.06849"
},
{
"id": "2305.11554"
},
{
"id": "2212.10560"
},
{
"id": "2305.15334"
},
{
"id": "2305.14233"
},
{
"id": "2303.12712"
},
{
"id": "2109.01652"
},
{
"id": "2306.15595"
}
] |
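The Figure 1 caption in the record above describes inference as retrieve-then-act: an API retriever proposes candidate APIs, the model alternates between issuing API calls and reading their results, and the resulting trace is judged by ToolEval. The sketch below is a generic schematic of that loop; the retriever, model, and call_api callables are assumed placeholders, not ToolLLM's actual interfaces.

```python
from typing import Callable

def answer_instruction(
    instruction: str,
    retriever: Callable[[str], list[str]],   # instruction -> candidate API names
    model: Callable[[str], str],             # context -> "CALL <api> <args>" or "FINISH <answer>"
    call_api: Callable[[str], str],          # API request -> observation
    max_rounds: int = 8,
) -> str:
    """Multi-round tool use: call APIs, append observations, stop on FINISH."""
    apis = retriever(instruction)
    context = f"Instruction: {instruction}\nAvailable APIs: {', '.join(apis)}\n"
    for _ in range(max_rounds):
        action = model(context)
        if action.startswith("FINISH"):
            return action.removeprefix("FINISH").strip()
        observation = call_api(action)
        context += f"{action}\nObservation: {observation}\n"
    return "No answer within the round budget."

# Dummy usage with stub callables.
print(answer_instruction(
    "What's the weather in Paris?",
    retriever=lambda q: ["get_weather"],
    model=lambda ctx: "FINISH sunny" if "Observation" in ctx else "CALL get_weather city=Paris",
    call_api=lambda req: "sunny, 24C",
))  # sunny
```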
2307.16877 | 4 | ble manner. Instruction-following models are a type of language models that aim to achieve this goal. Training these models usually involves ex- posing large language models (LLMs; Brown et al. 2020; Zhang et al. 2022; Thoppilan et al. 2022; Rae et al. 2022; Touvron et al. 2023a) to thousands of tasks formulated as natural language instructions through supervised examples (Sanh et al., 2022; Wei et al., 2022; Chung et al., 2022; Ouyang et al., 2022a; Iyer et al., 2023; Touvron et al., 2023b) or other forms of supervision (Ouyang et al., 2022b; Wang et al., 2022a; Taori et al., 2023; Peng et al., 2023). These are known to generalize to many tasks with little exposure to examples of those tasks (Mishra et al., 2022). In this paper, we evaluate instruction-following models for their ability to perform question-answering (QA) on a given set of text passages.
*Core contributor. | 2307.16877#4 | Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering | Retriever-augmented instruction-following models are attractive alternatives
to fine-tuned approaches for information-seeking tasks such as question
answering (QA). By simply prepending retrieved documents in its input along
with an instruction, these models can be adapted to various information domains
and tasks without additional fine-tuning. While the model responses tend to be
natural and fluent, the additional verbosity makes traditional QA evaluation
metrics such as exact match (EM) and F1 unreliable for accurately quantifying
model performance.
In this work, we investigate the performance of instruction-following models
across three information-seeking QA tasks. We use both automatic and human
evaluation to evaluate these models along two dimensions: 1) how well they
satisfy the user's information need (correctness), and 2) whether they produce
a response based on the provided knowledge (faithfulness). Guided by human
evaluation and analysis, we highlight the shortcomings of traditional metrics
for both correctness and faithfulness. We then propose simple token-overlap
based and model-based metrics that reflect the true performance of these
models. Our analysis reveals that instruction-following models are competitive,
and sometimes even outperform fine-tuned models for correctness. However, these
models struggle to stick to the provided knowledge and often hallucinate in
their responses. We hope our work encourages a more holistic evaluation of
instruction-following models for QA. Our code and data is available at
https://github.com/McGill-NLP/instruct-qa | http://arxiv.org/pdf/2307.16877 | Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy | cs.CL, cs.AI | null | null | cs.CL | 20230731 | 20230731 | [
{
"id": "2201.08239"
},
{
"id": "2004.04906"
},
{
"id": "2304.03277"
},
{
"id": "2203.02155"
},
{
"id": "2012.14983"
},
{
"id": "2205.14334"
},
{
"id": "2305.18654"
},
{
"id": "2112.11446"
}
] |
2307.16364 | 5 | # 1 INTRODUCTION
The advent of large language models (LLMs) that can generate code is having a rapid and significant impact on computing education practice, particularly at the introductory level. Traditional pedagogical approaches have focused on helping students learn how to write code. This is typically achieved through frequent practice
In addition to conceptualizing the problem type, we make two other contributions: we introduce a tool for delivering Prompt Problems and we present empirical findings from the use of this tool by introductory programming students. To understand how
Prompt Problems work in practice, we have developed a web-based tool called Promptly. This tool displays a problem representation, converts a prompt written by a student to code (via an API call to an LLM), and then executes the code against a suite of test cases. If the code fails to solve the problem, the student must revise and resubmit their prompt. This iterative process continues until the problem is solved. We hypothesize that solving Prompt Problems will help students learn how to craft effective prompts. | 2307.16364#5 | Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators | With their remarkable ability to generate code, large language models (LLMs)
are a transformative technology for computing education practice. They have
created an urgent need for educators to rethink pedagogical approaches and
teaching strategies for newly emerging skill sets. Traditional approaches to
learning programming have focused on frequent and repeated practice at writing
code. The ease with which code can now be generated has resulted in a shift in
focus towards reading, understanding and evaluating LLM-generated code. In
parallel with this shift, a new essential skill is emerging -- the ability to
construct good prompts for code-generating models. This paper introduces a
novel pedagogical concept known as a `Prompt Problem', designed to help
students learn how to craft effective prompts for LLMs. A Prompt Problem
challenges a student to create a natural language prompt that leads an LLM to
produce the correct code for a specific problem. To support the delivery of
Prompt Problems at scale, in this paper we also present a novel tool called
Promptly which hosts a repository of Prompt Problems and automates the
evaluation of prompt-generated code. We report empirical findings from a field
study in which Promptly was deployed in a first-year Python programming course
(n=54). We explore student interactions with the tool and their perceptions of
the Prompt Problem concept. We found that Promptly was largely well-received by
students for its ability to engage their computational thinking skills and
expose them to new programming constructs. We also discuss avenues for future
work, including variations on the design of Prompt Problems and the need to
study their integration into the curriculum and teaching practice. | http://arxiv.org/pdf/2307.16364 | Paul Denny, Juho Leinonen, James Prather, Andrew Luxton-Reilly, Thezyrie Amarouche, Brett A. Becker, Brent N. Reeves | cs.HC, cs.AI | null | null | cs.HC | 20230731 | 20230731 | [
{
"id": "2306.04556"
},
{
"id": "2302.11382"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2304.03938"
}
] |
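The record above describes the tool's loop: a student prompt is sent to a code-generating LLM, the returned code is run against a test suite, and the student iterates until the tests pass. A minimal sketch of the grading step follows; generate_code is an assumed placeholder for whatever LLM API is used, and calling exec() on model output is for illustration only (a real grader would sandbox it).

```python
from typing import Callable

def grade_prompt(
    student_prompt: str,
    generate_code: Callable[[str], str],
    tests: list[tuple[tuple, object]],   # (args, expected) pairs for a function named `solve`
) -> bool:
    """Generate code from the prompt, run it, and check it against the test cases."""
    code = generate_code(student_prompt)
    namespace: dict = {}
    exec(code, namespace)                # expects the model to define solve(...)
    solve = namespace.get("solve")
    if solve is None:
        return False
    return all(solve(*args) == expected for args, expected in tests)

# Dummy usage with a fake model that always emits the same function.
fake_model = lambda prompt: "def solve(a, b):\n    return a + b\n"
print(grade_prompt("Write solve(a, b) returning the sum.", fake_model,
                   tests=[((1, 2), 3), ((0, 0), 0)]))   # True
```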
2307.16789 | 5 | Although prior works have explored building instruction tuning data for tool use (Li et al., 2023a; Patil et al., 2023; Tang et al., 2023; Xu et al., 2023b), they fail to fully stimulate the tool-use capabilities within LLMs and have inherent limitations: (1) limited APIs: they either fail to in- volve real-world APIs (e.g., RESTAPI) (Patil et al., 2023; Tang et al., 2023) or consider only a small scope of APIs with poor diversity (Patil et al., 2023; Xu et al., 2023b; Li et al., 2023a); (2) constrained scenario: existing works are confined to instructions that only involve one single tool. In contrast, real-world scenarios may require that multiple tools are in- terleaved together for multi-round tool execution to solve a complex task. Besides, they often assume that users manually specify the ideal API set for a given instruction in advance, which is infeasible with a large collection of real-world APIs; (3) inferior planning and reasoning: existing works adopted either CoT (Wei et al., 2023) or ReACT (Yao et al., | 2307.16789#5 | ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs | Despite the advancements of open-source large language models (LLMs), e.g.,
LLaMA, they remain significantly limited in tool-use capabilities, i.e., using
external tools (APIs) to fulfill human instructions. The reason is that current
instruction tuning largely focuses on basic language tasks but ignores the
tool-use domain. This is in contrast to the excellent tool-use capabilities of
state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap,
we introduce ToolLLM, a general tool-use framework encompassing data
construction, model training, and evaluation. We first present ToolBench, an
instruction-tuning dataset for tool use, which is constructed automatically
using ChatGPT. Specifically, the construction can be divided into three stages:
(i) API collection: we collect 16,464 real-world RESTful APIs spanning 49
categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to
generate diverse instructions involving these APIs, covering both single-tool
and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to
search for a valid solution path (chain of API calls) for each instruction. To
enhance the reasoning capabilities of LLMs, we develop a novel depth-first
search-based decision tree algorithm. It enables LLMs to evaluate multiple
reasoning traces and expand the search space. Moreover, to evaluate the
tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval.
Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it
with a neural API retriever to recommend appropriate APIs for each instruction.
Experiments show that ToolLLaMA demonstrates a remarkable ability to execute
complex instructions and generalize to unseen APIs, and exhibits comparable
performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot
generalization ability in an out-of-distribution tool-use dataset: APIBench. | http://arxiv.org/pdf/2307.16789 | Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230731 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2305.16504"
},
{
"id": "2308.12519"
},
{
"id": "2306.08640"
},
{
"id": "2305.10601"
},
{
"id": "2304.08244"
},
{
"id": "2307.09288"
},
{
"id": "2306.01116"
},
{
"id": "2305.14318"
},
{
"id": "2306.13304"
},
{
"id": "2304.08354"
},
{
"id": "2306.11489"
},
{
"id": "2306.05301"
},
{
"id": "1908.10084"
},
{
"id": "2306.06624"
},
{
"id": "2305.06849"
},
{
"id": "2305.11554"
},
{
"id": "2212.10560"
},
{
"id": "2305.15334"
},
{
"id": "2305.14233"
},
{
"id": "2303.12712"
},
{
"id": "2109.01652"
},
{
"id": "2306.15595"
}
] |
2307.16877 | 5 | *Core contributor.
Instruction-following models can perform QA when provided with a prompt describing the task, the question, and relevant text passages to reason upon retrieved by a retriever (Chung et al., 2022). These model-generated answers are known to be natural, informative, and verbose, a useful trait that helps to build users' trust and engagement, but these models also generate hallucinated information that can mislead users (Dziri et al., 2022b; Chiesurin et al., 2023). Moreover, many QA datasets have short reference answers that render traditional evaluation metrics like exact match (EM) and F1 unreliable when evaluating these verbose answers (Kamalloo et al., 2023). | 2307.16877#5 | Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering | Retriever-augmented instruction-following models are attractive alternatives
to fine-tuned approaches for information-seeking tasks such as question
answering (QA). By simply prepending retrieved documents in its input along
with an instruction, these models can be adapted to various information domains
and tasks without additional fine-tuning. While the model responses tend to be
natural and fluent, the additional verbosity makes traditional QA evaluation
metrics such as exact match (EM) and F1 unreliable for accurately quantifying
model performance.
In this work, we investigate the performance of instruction-following models
across three information-seeking QA tasks. We use both automatic and human
evaluation to evaluate these models along two dimensions: 1) how well they
satisfy the user's information need (correctness), and 2) whether they produce
a response based on the provided knowledge (faithfulness). Guided by human
evaluation and analysis, we highlight the shortcomings of traditional metrics
for both correctness and faithfulness. We then propose simple token-overlap
based and model-based metrics that reflect the true performance of these
models. Our analysis reveals that instruction-following models are competitive,
and sometimes even outperform fine-tuned models for correctness. However, these
models struggle to stick to the provided knowledge and often hallucinate in
their responses. We hope our work encourages a more holistic evaluation of
instruction-following models for QA. Our code and data is available at
https://github.com/McGill-NLP/instruct-qa | http://arxiv.org/pdf/2307.16877 | Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy | cs.CL, cs.AI | null | null | cs.CL | 20230731 | 20230731 | [
{
"id": "2201.08239"
},
{
"id": "2004.04906"
},
{
"id": "2304.03277"
},
{
"id": "2203.02155"
},
{
"id": "2012.14983"
},
{
"id": "2205.14334"
},
{
"id": "2305.18654"
},
{
"id": "2112.11446"
}
] |
2307.16364 | 6 | We begin by presenting an illustrative example of a Prompt Problem, and we draw from the results of a pilot study to demonstrate the difficulty that students commonly face in formulating effective prompts. We then describe the design of our tool, Promptly, for delivering Prompt Problems at scale and we deploy it in an introductory Python programming course (n = 54). We report the results of our analysis of student interactions with the tool and their perceptions of the activity. We also discuss possible variations of the problem design, and suggest ideas for future work.
# 2 RELATED WORK | 2307.16364#6 | Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators | With their remarkable ability to generate code, large language models (LLMs)
are a transformative technology for computing education practice. They have
created an urgent need for educators to rethink pedagogical approaches and
teaching strategies for newly emerging skill sets. Traditional approaches to
learning programming have focused on frequent and repeated practice at writing
code. The ease with which code can now be generated has resulted in a shift in
focus towards reading, understanding and evaluating LLM-generated code. In
parallel with this shift, a new essential skill is emerging -- the ability to
construct good prompts for code-generating models. This paper introduces a
novel pedagogical concept known as a `Prompt Problem', designed to help
students learn how to craft effective prompts for LLMs. A Prompt Problem
challenges a student to create a natural language prompt that leads an LLM to
produce the correct code for a specific problem. To support the delivery of
Prompt Problems at scale, in this paper we also present a novel tool called
Promptly which hosts a repository of Prompt Problems and automates the
evaluation of prompt-generated code. We report empirical findings from a field
study in which Promptly was deployed in a first-year Python programming course
(n=54). We explore student interactions with the tool and their perceptions of
the Prompt Problem concept. We found that Promptly was largely well-received by
students for its ability to engage their computational thinking skills and
expose them to new programming constructs. We also discuss avenues for future
work, including variations on the design of Prompt Problems and the need to
study their integration into the curriculum and teaching practice. | http://arxiv.org/pdf/2307.16364 | Paul Denny, Juho Leinonen, James Prather, Andrew Luxton-Reilly, Thezyrie Amarouche, Brett A. Becker, Brent N. Reeves | cs.HC, cs.AI | null | null | cs.HC | 20230731 | 20230731 | [
{
"id": "2306.04556"
},
{
"id": "2302.11382"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2304.03938"
}
] |
2307.16789 | 6 | APIs; (3) inferior planning and reasoning: existing works adopted either CoT (Wei et al., 2023) or ReACT (Yao et al., 2022) for model reasoning, which cannot fully elicit the capabilities stored in LLMs and thus fail to handle complex instructions. In addition, some works do not even execute APIs to obtain real responses (Patil et al., 2023; Tang et al., 2023), which serve as important information for subsequent model planning. | 2307.16789#6 | ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs | Despite the advancements of open-source large language models (LLMs), e.g.,
LLaMA, they remain significantly limited in tool-use capabilities, i.e., using
external tools (APIs) to fulfill human instructions. The reason is that current
instruction tuning largely focuses on basic language tasks but ignores the
tool-use domain. This is in contrast to the excellent tool-use capabilities of
state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap,
we introduce ToolLLM, a general tool-use framework encompassing data
construction, model training, and evaluation. We first present ToolBench, an
instruction-tuning dataset for tool use, which is constructed automatically
using ChatGPT. Specifically, the construction can be divided into three stages:
(i) API collection: we collect 16,464 real-world RESTful APIs spanning 49
categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to
generate diverse instructions involving these APIs, covering both single-tool
and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to
search for a valid solution path (chain of API calls) for each instruction. To
enhance the reasoning capabilities of LLMs, we develop a novel depth-first
search-based decision tree algorithm. It enables LLMs to evaluate multiple
reasoning traces and expand the search space. Moreover, to evaluate the
tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval.
Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it
with a neural API retriever to recommend appropriate APIs for each instruction.
Experiments show that ToolLLaMA demonstrates a remarkable ability to execute
complex instructions and generalize to unseen APIs, and exhibits comparable
performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot
generalization ability in an out-of-distribution tool-use dataset: APIBench. | http://arxiv.org/pdf/2307.16789 | Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230731 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2305.16504"
},
{
"id": "2308.12519"
},
{
"id": "2306.08640"
},
{
"id": "2305.10601"
},
{
"id": "2304.08244"
},
{
"id": "2307.09288"
},
{
"id": "2306.01116"
},
{
"id": "2305.14318"
},
{
"id": "2306.13304"
},
{
"id": "2304.08354"
},
{
"id": "2306.11489"
},
{
"id": "2306.05301"
},
{
"id": "1908.10084"
},
{
"id": "2306.06624"
},
{
"id": "2305.06849"
},
{
"id": "2305.11554"
},
{
"id": "2212.10560"
},
{
"id": "2305.15334"
},
{
"id": "2305.14233"
},
{
"id": "2303.12712"
},
{
"id": "2109.01652"
},
{
"id": "2306.15595"
}
] |
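For contrast with the depth-first search sketched earlier, the chunk above criticizes CoT/ReACT-style reasoning as a single linear trace. A minimal, hypothetical ReAct-style loop is sketched below; `call_llm` and `execute_api` are assumed stubs and the message format is illustrative only.

```python
def react_loop(instruction, call_llm, execute_api, max_steps=8):
    """One linear Thought/Action/Observation trace: no branching, no backtracking."""
    history = f"Instruction: {instruction}\n"
    for _ in range(max_steps):
        thought, action = call_llm(history)        # model emits a thought and a proposed action (dict)
        history += f"Thought: {thought}\nAction: {action}\n"
        if action.get("name") == "finish":
            return action.get("answer")            # terminate with the final answer
        observation = execute_api(action)          # executing the call yields real feedback
        history += f"Observation: {observation}\n"
    return None                                     # ran out of steps without finishing
```

Because the loop commits to whatever the model proposes at each step, an early wrong API call cannot be undone, which is the limitation the tree search is meant to address.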
2307.16877 | 6 | Consider, for instance, the scenario in Figure 1, where the user question is "Where are One Direction from?". A comparison between the reference response "London, England" and the first part of the model's response "One Direction are from London, England" yields an EM score of 0 and an F1 score of only 0.5, despite both answers being effectively equivalent (the entire response gets a 0.36 F1 score). Moreover, the second part of the response asserts that One Direction is from Mullingar, Ireland, a fact which, despite being correct, is not entailed by the provided knowledge. As EM and F1 only compare against reference answers, they are unsuitable to estimate the alignment of the model's response with the provided knowledge.
the performance of instruction-following models for retrieval-augmented QA should be evaluated along two dimensions: 1) correctness w.r.t. information need, which measures a model's efficacy in satisfying a user's information needs, and 2) faithfulness w.r.t. provided knowledge, which measures a model's capability to ground responses in provided knowledge. A model demonstrating robust performance across both these dimensions can potentially be considered useful and safe for the user in information-seeking scenarios. | 2307.16877#6 | Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering | Retriever-augmented instruction-following models are attractive alternatives
to fine-tuned approaches for information-seeking tasks such as question
answering (QA). By simply prepending retrieved documents in its input along
with an instruction, these models can be adapted to various information domains
and tasks without additional fine-tuning. While the model responses tend to be
natural and fluent, the additional verbosity makes traditional QA evaluation
metrics such as exact match (EM) and F1 unreliable for accurately quantifying
model performance.
In this work, we investigate the performance of instruction-following models
across three information-seeking QA tasks. We use both automatic and human
evaluation to evaluate these models along two dimensions: 1) how well they
satisfy the user's information need (correctness), and 2) whether they produce
a response based on the provided knowledge (faithfulness). Guided by human
evaluation and analysis, we highlight the shortcomings of traditional metrics
for both correctness and faithfulness. We then propose simple token-overlap
based and model-based metrics that reflect the true performance of these
models. Our analysis reveals that instruction-following models are competitive,
and sometimes even outperform fine-tuned models for correctness. However, these
models struggle to stick to the provided knowledge and often hallucinate in
their responses. We hope our work encourages a more holistic evaluation of
instruction-following models for QA. Our code and data is available at
https://github.com/McGill-NLP/instruct-qa | http://arxiv.org/pdf/2307.16877 | Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy | cs.CL, cs.AI | null | null | cs.CL | 20230731 | 20230731 | [
{
"id": "2201.08239"
},
{
"id": "2004.04906"
},
{
"id": "2304.03277"
},
{
"id": "2203.02155"
},
{
"id": "2012.14983"
},
{
"id": "2205.14334"
},
{
"id": "2305.18654"
},
{
"id": "2112.11446"
}
] |
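The EM and F1 numbers quoted in the chunk above can be reproduced with standard SQuAD-style token overlap. The snippet below is a generic implementation of these metrics (not necessarily the paper's exact evaluation script) and recovers EM = 0 and F1 = 0.5 for the first part of the "One Direction" response.

```python
import re
import string
from collections import Counter

def normalize(text: str) -> list:
    """Lowercase, strip punctuation and articles, then split into tokens (SQuAD-style)."""
    text = "".join(ch for ch in text.lower() if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return text.split()

def exact_match(prediction: str, reference: str) -> int:
    return int(normalize(prediction) == normalize(reference))

def f1_score(prediction: str, reference: str) -> float:
    pred, ref = normalize(prediction), normalize(reference)
    overlap = sum((Counter(pred) & Counter(ref)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(pred), overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

# The verbose answer is correct but scores poorly against the short reference.
print(exact_match("One Direction are from London, England", "London, England"))  # 0
print(f1_score("One Direction are from London, England", "London, England"))     # 0.5
```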
2307.16364 | 7 | It has been less than a year since LLMs began to dominate conversations in the computing education community and a little more than that since the first research papers began to emerge in the computing education literature. Early work centered on the capabilities of these tools, largely driven by concerns that they would lead to a flood of cheating [23] and the effect that would have on student learning. Sometimes, such work involved comparing LLM and student performance, for example in generating explanations of code [17]. Finnie-Ansley et al. demonstrated that Codex (based on GPT-3) ranked in the top quartile of real introductory programming (CS1) students on real exams [12]. A year later Finnie-Ansley et al. extended this work to data structures and algorithms (CS2) exams with very similar results [13]. Other studies on the capabilities of LLMs have revealed impressive proficiency in dealing with object-oriented programming tasks [4], Parsons problems [28], and compiler error messages [18]. Many of these explorations also revealed that LLMs are not infallible and can produce solutions that don't align with best programming practice [4], struggle with | 2307.16364#7 | Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators | With their remarkable ability to generate code, large language models (LLMs)
are a transformative technology for computing education practice. They have
created an urgent need for educators to rethink pedagogical approaches and
teaching strategies for newly emerging skill sets. Traditional approaches to
learning programming have focused on frequent and repeated practice at writing
code. The ease with which code can now be generated has resulted in a shift in
focus towards reading, understanding and evaluating LLM-generated code. In
parallel with this shift, a new essential skill is emerging -- the ability to
construct good prompts for code-generating models. This paper introduces a
novel pedagogical concept known as a `Prompt Problem', designed to help
students learn how to craft effective prompts for LLMs. A Prompt Problem
challenges a student to create a natural language prompt that leads an LLM to
produce the correct code for a specific problem. To support the delivery of
Prompt Problems at scale, in this paper we also present a novel tool called
Promptly which hosts a repository of Prompt Problems and automates the
evaluation of prompt-generated code. We report empirical findings from a field
study in which Promptly was deployed in a first-year Python programming course
(n=54). We explore student interactions with the tool and their perceptions of
the Prompt Problem concept. We found that Promptly was largely well-received by
students for its ability to engage their computational thinking skills and
expose them to new programming constructs. We also discuss avenues for future
work, including variations on the design of Prompt Problems and the need to
study their integration into the curriculum and teaching practice. | http://arxiv.org/pdf/2307.16364 | Paul Denny, Juho Leinonen, James Prather, Andrew Luxton-Reilly, Thezyrie Amarouche, Brett A. Becker, Brent N. Reeves | cs.HC, cs.AI | null | null | cs.HC | 20230731 | 20230731 | [
{
"id": "2306.04556"
},
{
"id": "2302.11382"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2304.03938"
}
] |
2307.16789 | 7 | To facilitate tool-use capabilities within open-source LLMs, we introduce ToolLLM, a general tool-use framework including data construction, model training, and evaluation. As illustrated in Figure 1, we collect a high-quality instruction-tuning dataset ToolBench. It is constructed automatically using ChatGPT (gpt-3.5-turbo-16k), which has been upgraded with function call (link) capabilities. The comparison between ToolBench and prior works is listed in Table 1. Specifically, the construction of ToolBench entails three phases:
• API Collection: we gather 16,464 representational state transfer (REST) APIs from RapidAPI (link), a platform that hosts massive real-world APIs provided by developers. These APIs span 49 diverse categories such as social media, e-commerce, and weather. For each API, we crawl detailed API documents from RapidAPI, including the functionality descriptions, required parameters, code snippets for API calls, etc. By comprehending these documents to learn to execute APIs, LLMs can generalize to new APIs unseen during training;
• Instruction Generation: we first sample APIs from the whole set and then prompt ChatGPT to generate diverse instructions for these APIs. To cover practical scenarios, we curate instructions
| 2307.16789#7 | ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs | Despite the advancements of open-source large language models (LLMs), e.g.,
LLaMA, they remain significantly limited in tool-use capabilities, i.e., using
external tools (APIs) to fulfill human instructions. The reason is that current
instruction tuning largely focuses on basic language tasks but ignores the
tool-use domain. This is in contrast to the excellent tool-use capabilities of
state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap,
we introduce ToolLLM, a general tool-use framework encompassing data
construction, model training, and evaluation. We first present ToolBench, an
instruction-tuning dataset for tool use, which is constructed automatically
using ChatGPT. Specifically, the construction can be divided into three stages:
(i) API collection: we collect 16,464 real-world RESTful APIs spanning 49
categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to
generate diverse instructions involving these APIs, covering both single-tool
and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to
search for a valid solution path (chain of API calls) for each instruction. To
enhance the reasoning capabilities of LLMs, we develop a novel depth-first
search-based decision tree algorithm. It enables LLMs to evaluate multiple
reasoning traces and expand the search space. Moreover, to evaluate the
tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval.
Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it
with a neural API retriever to recommend appropriate APIs for each instruction.
Experiments show that ToolLLaMA demonstrates a remarkable ability to execute
complex instructions and generalize to unseen APIs, and exhibits comparable
performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot
generalization ability in an out-of-distribution tool-use dataset: APIBench. | http://arxiv.org/pdf/2307.16789 | Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230731 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2305.16504"
},
{
"id": "2308.12519"
},
{
"id": "2306.08640"
},
{
"id": "2305.10601"
},
{
"id": "2304.08244"
},
{
"id": "2307.09288"
},
{
"id": "2306.01116"
},
{
"id": "2305.14318"
},
{
"id": "2306.13304"
},
{
"id": "2304.08354"
},
{
"id": "2306.11489"
},
{
"id": "2306.05301"
},
{
"id": "1908.10084"
},
{
"id": "2306.06624"
},
{
"id": "2305.06849"
},
{
"id": "2305.11554"
},
{
"id": "2212.10560"
},
{
"id": "2305.15334"
},
{
"id": "2305.14233"
},
{
"id": "2303.12712"
},
{
"id": "2109.01652"
},
{
"id": "2306.15595"
}
] |
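The API Collection and Instruction Generation phases described in the chunk above boil down to sampling a few crawled API documents and asking ChatGPT to write instructions that would require them. The sketch below is a hypothetical illustration of that step: the field names (`tool_name`, `api_name`, `description`, `required_parameters`) and the prompt wording are assumptions, not the paper's actual prompt.

```python
import json
import random

def build_instruction_prompt(api_docs: list, k: int = 3, multi_tool: bool = True) -> str:
    """Sample k crawled API documents and assemble an instruction-generation prompt."""
    sampled = random.sample(api_docs, k)
    doc_block = "\n\n".join(
        json.dumps(
            {
                "tool": d["tool_name"],
                "api": d["api_name"],
                "description": d["description"],
                "required_parameters": d["required_parameters"],
            },
            indent=2,
        )
        for d in sampled
    )
    scenario = (
        "that require combining several of these APIs"
        if multi_tool
        else "each solvable with a single API"
    )
    return (
        "You are given the documentation of the following real-world APIs:\n\n"
        f"{doc_block}\n\n"
        f"Write 5 diverse, realistic user instructions {scenario}. "
        "For each instruction, list the relevant APIs."
    )
```

The generated (instruction, relevant APIs) pairs then feed the solution-path annotation stage, where ChatGPT searches for a working chain of API calls.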
2307.16877 | 7 | Along these dimensions, we evaluate several recent instruction-following models such as Llama-2 (Touvron et al., 2023b), GPT-3.5 (sibling model of Ouyang et al. 2022a), Flan-T5 (Chung et al., 2022), and Alpaca (Taori et al., 2023) on three popular QA datasets that correspond to three diverse QA tasks: Natural Questions (NQ; Kwiatkowski et al. 2019) for open-domain QA, HotpotQA (Yang et al., 2018) for multi-hop QA, and TopiOCQA (Adlakha et al., 2022) for conversational QA. We conduct a human
analysis of 900 model responses and correlate them with several automatic metrics for correctness and faithfulness. | 2307.16877#7 | Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering | Retriever-augmented instruction-following models are attractive alternatives
to fine-tuned approaches for information-seeking tasks such as question
answering (QA). By simply prepending retrieved documents in its input along
with an instruction, these models can be adapted to various information domains
and tasks without additional fine-tuning. While the model responses tend to be
natural and fluent, the additional verbosity makes traditional QA evaluation
metrics such as exact match (EM) and F1 unreliable for accurately quantifying
model performance.
In this work, we investigate the performance of instruction-following models
across three information-seeking QA tasks. We use both automatic and human
evaluation to evaluate these models along two dimensions: 1) how well they
satisfy the user's information need (correctness), and 2) whether they produce
a response based on the provided knowledge (faithfulness). Guided by human
evaluation and analysis, we highlight the shortcomings of traditional metrics
for both correctness and faithfulness. We then propose simple token-overlap
based and model-based metrics that reflect the true performance of these
models. Our analysis reveals that instruction-following models are competitive,
and sometimes even outperform fine-tuned models for correctness. However, these
models struggle to stick to the provided knowledge and often hallucinate in
their responses. We hope our work encourages a more holistic evaluation of
instruction-following models for QA. Our code and data is available at
https://github.com/McGill-NLP/instruct-qa | http://arxiv.org/pdf/2307.16877 | Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy | cs.CL, cs.AI | null | null | cs.CL | 20230731 | 20230731 | [
{
"id": "2201.08239"
},
{
"id": "2004.04906"
},
{
"id": "2304.03277"
},
{
"id": "2203.02155"
},
{
"id": "2012.14983"
},
{
"id": "2205.14334"
},
{
"id": "2305.18654"
},
{
"id": "2112.11446"
}
] |
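Correlating automatic metrics with the 900 human judgments mentioned in the chunk above is a straightforward rank-correlation computation. The snippet below is an illustrative sketch using Spearman's rho; the choice of statistic and the numbers in the example are assumptions, not values from the paper.

```python
from scipy.stats import spearmanr

def correlate_with_humans(human_scores: list, metric_scores: dict) -> dict:
    """Rank-correlate each automatic metric with per-response human judgments."""
    correlations = {}
    for name, scores in metric_scores.items():
        rho, _pvalue = spearmanr(human_scores, scores)
        correlations[name] = rho
    return correlations

# Toy example: binary human correctness labels vs. two automatic metrics.
humans = [1, 0, 1, 1, 0, 1]
metrics = {
    "f1": [0.50, 0.10, 0.90, 0.40, 0.20, 0.80],
    "recall": [1.00, 0.00, 1.00, 0.50, 0.00, 1.00],
}
print(correlate_with_humans(humans, metrics))
```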
2307.16364 | 8 | Many of these explorations also revealed that LLMs are not infallible and can produce solutions that don't align with best programming practice [4], struggle with longer and higher-level specifications [12], include unnecessary elements [32], and cause students to become confused reading code that they didn't write themselves [14, 27]. Babe et al. showed that LLMs can mislead students, causing them to believe that their own prompts are more (or less) effective than they are in reality [2]. | 2307.16364#8 | Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators | With their remarkable ability to generate code, large language models (LLMs)
are a transformative technology for computing education practice. They have
created an urgent need for educators to rethink pedagogical approaches and
teaching strategies for newly emerging skill sets. Traditional approaches to
learning programming have focused on frequent and repeated practice at writing
code. The ease with which code can now be generated has resulted in a shift in
focus towards reading, understanding and evaluating LLM-generated code. In
parallel with this shift, a new essential skill is emerging -- the ability to
construct good prompts for code-generating models. This paper introduces a
novel pedagogical concept known as a `Prompt Problem', designed to help
students learn how to craft effective prompts for LLMs. A Prompt Problem
challenges a student to create a natural language prompt that leads an LLM to
produce the correct code for a specific problem. To support the delivery of
Prompt Problems at scale, in this paper we also present a novel tool called
Promptly which hosts a repository of Prompt Problems and automates the
evaluation of prompt-generated code. We report empirical findings from a field
study in which Promptly was deployed in a first-year Python programming course
(n=54). We explore student interactions with the tool and their perceptions of
the Prompt Problem concept. We found that Promptly was largely well-received by
students for its ability to engage their computational thinking skills and
expose them to new programming constructs. We also discuss avenues for future
work, including variations on the design of Prompt Problems and the need to
study their integration into the curriculum and teaching practice. | http://arxiv.org/pdf/2307.16364 | Paul Denny, Juho Leinonen, James Prather, Andrew Luxton-Reilly, Thezyrie Amarouche, Brett A. Becker, Brent N. Reeves | cs.HC, cs.AI | null | null | cs.HC | 20230731 | 20230731 | [
{
"id": "2306.04556"
},
{
"id": "2302.11382"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2304.03938"
}
] |
2307.16789 | 8 | • Instruction Generation: we first sample APIs from the whole set and then prompt ChatGPT to generate diverse instructions for these APIs. To cover practical scenarios, we curate instructions
| Resource | Number of tools | Number of APIs | Number of Instances | Number of Real API Calls | Avg. Reasoning Traces |
|---|---|---|---|---|---|
| ToolBench (this work) | 3451 | 16464 | 126486 | 469585 | 4.0 |
| APIBench (Patil et al., 2023) | 3 | 1645 | 17002 | 0 | 1.0 |
| API-Bank (Li et al., 2023a) | 53 | 53 | 274 | 568 | 2.1 |
| ToolAlpaca (Tang et al., 2023) | 400 | 400 | 3938 | 0 | 1.0 |
| ToolBench (Xu et al., 2023b) | 8 | 232 | 2746 | 3926 | 5.9 |
(The original table additionally marks each resource for Real-world API?, Real API Call&Response?, Multi-tool Scenario?, API Retrieval?, and Multi-step Reasoning?; those yes/no marks are illegible in this extraction and are omitted.)
Table 1: A comparison of our ToolBench to notable instruction-tuning datasets for tool learning. | 2307.16789#8 | ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs | Despite the advancements of open-source large language models (LLMs), e.g.,
LLaMA, they remain significantly limited in tool-use capabilities, i.e., using
external tools (APIs) to fulfill human instructions. The reason is that current
instruction tuning largely focuses on basic language tasks but ignores the
tool-use domain. This is in contrast to the excellent tool-use capabilities of
state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap,
we introduce ToolLLM, a general tool-use framework encompassing data
construction, model training, and evaluation. We first present ToolBench, an
instruction-tuning dataset for tool use, which is constructed automatically
using ChatGPT. Specifically, the construction can be divided into three stages:
(i) API collection: we collect 16,464 real-world RESTful APIs spanning 49
categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to
generate diverse instructions involving these APIs, covering both single-tool
and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to
search for a valid solution path (chain of API calls) for each instruction. To
enhance the reasoning capabilities of LLMs, we develop a novel depth-first
search-based decision tree algorithm. It enables LLMs to evaluate multiple
reasoning traces and expand the search space. Moreover, to evaluate the
tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval.
Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it
with a neural API retriever to recommend appropriate APIs for each instruction.
Experiments show that ToolLLaMA demonstrates a remarkable ability to execute
complex instructions and generalize to unseen APIs, and exhibits comparable
performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot
generalization ability in an out-of-distribution tool-use dataset: APIBench. | http://arxiv.org/pdf/2307.16789 | Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230731 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2305.16504"
},
{
"id": "2308.12519"
},
{
"id": "2306.08640"
},
{
"id": "2305.10601"
},
{
"id": "2304.08244"
},
{
"id": "2307.09288"
},
{
"id": "2306.01116"
},
{
"id": "2305.14318"
},
{
"id": "2306.13304"
},
{
"id": "2304.08354"
},
{
"id": "2306.11489"
},
{
"id": "2306.05301"
},
{
"id": "1908.10084"
},
{
"id": "2306.06624"
},
{
"id": "2305.06849"
},
{
"id": "2305.11554"
},
{
"id": "2212.10560"
},
{
"id": "2305.15334"
},
{
"id": "2305.14233"
},
{
"id": "2303.12712"
},
{
"id": "2109.01652"
},
{
"id": "2306.15595"
}
] |
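The summary repeated in the row above mentions equipping ToolLLaMA with a neural API retriever that recommends APIs for each instruction. A minimal dense-retrieval sketch is shown below using an off-the-shelf Sentence-BERT checkpoint; the paper's retriever is trained on ToolBench annotations, so treat the model name and document fields here as placeholders.

```python
from sentence_transformers import SentenceTransformer, util

def retrieve_apis(instruction: str, api_docs: list, top_k: int = 5) -> list:
    """Rank crawled API documents by embedding similarity to the instruction."""
    model = SentenceTransformer("all-MiniLM-L6-v2")  # generic checkpoint, not the trained retriever
    doc_texts = [f'{d["tool_name"]} {d["api_name"]}: {d["description"]}' for d in api_docs]
    doc_emb = model.encode(doc_texts, convert_to_tensor=True)
    query_emb = model.encode(instruction, convert_to_tensor=True)
    scores = util.cos_sim(query_emb, doc_emb)[0]     # cosine similarity to every API document
    top = scores.argsort(descending=True)[:top_k]
    return [api_docs[int(i)] for i in top]
```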
2307.16877 | 8 | analysis of 900 model responses and correlate them with several automatic metrics for correctness and faithfulness.
Our findings suggest that, for correctness, recall (the proportion of tokens in the reference answer also present in the model response) exhibits the highest correlation among lexical overlap metrics like EM or F1. For faithfulness, K-Precision (the proportion of model response tokens that appear in the knowledge snippet) correlates better with human judgments than any other token-overlap metric. Among model-based metrics, i.e., using a model to determine the correctness/faithfulness of an answer w.r.t. the reference answer/knowledge, GPT-4 correlates the most, but it is expensive and prone to systematic biases (Wang et al., 2023). However, we find that lexical overlap metrics are close to model-based metrics, allowing us to evaluate several instruction-following models at a large scale. A faithful model should not only answer a question when relevant knowledge is provided, but it should also abstain from answering when irrelevant knowledge is provided. Hence, we also measure the model's ability to abstain from answering as an evaluation for faithfulness.
To summarize, our contributions are as follows: | 2307.16877#8 | Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering | Retriever-augmented instruction-following models are attractive alternatives
to fine-tuned approaches for information-seeking tasks such as question
answering (QA). By simply prepending retrieved documents in its input along
with an instruction, these models can be adapted to various information domains
and tasks without additional fine-tuning. While the model responses tend to be
natural and fluent, the additional verbosity makes traditional QA evaluation
metrics such as exact match (EM) and F1 unreliable for accurately quantifying
model performance.
In this work, we investigate the performance of instruction-following models
across three information-seeking QA tasks. We use both automatic and human
evaluation to evaluate these models along two dimensions: 1) how well they
satisfy the user's information need (correctness), and 2) whether they produce
a response based on the provided knowledge (faithfulness). Guided by human
evaluation and analysis, we highlight the shortcomings of traditional metrics
for both correctness and faithfulness. We then propose simple token-overlap
based and model-based metrics that reflect the true performance of these
models. Our analysis reveals that instruction-following models are competitive,
and sometimes even outperform fine-tuned models for correctness. However, these
models struggle to stick to the provided knowledge and often hallucinate in
their responses. We hope our work encourages a more holistic evaluation of
instruction-following models for QA. Our code and data is available at
https://github.com/McGill-NLP/instruct-qa | http://arxiv.org/pdf/2307.16877 | Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy | cs.CL, cs.AI | null | null | cs.CL | 20230731 | 20230731 | [
{
"id": "2201.08239"
},
{
"id": "2004.04906"
},
{
"id": "2304.03277"
},
{
"id": "2203.02155"
},
{
"id": "2012.14983"
},
{
"id": "2205.14334"
},
{
"id": "2305.18654"
},
{
"id": "2112.11446"
}
] |
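The two token-overlap metrics highlighted in the chunk above, recall against the reference answer and K-Precision against the provided knowledge, can be written in a few lines. The sketch below uses simple whitespace tokenization and is an illustration of the definitions as stated, not the authors' exact implementation.

```python
def _tokens(text: str) -> list:
    return text.lower().split()

def recall(response: str, reference: str) -> float:
    """Proportion of reference-answer tokens that also appear in the model response."""
    resp, ref = set(_tokens(response)), _tokens(reference)
    return sum(tok in resp for tok in ref) / len(ref) if ref else 0.0

def k_precision(response: str, knowledge: str) -> float:
    """Proportion of model-response tokens that appear in the provided knowledge snippet."""
    resp, know = _tokens(response), set(_tokens(knowledge))
    return sum(tok in know for tok in resp) / len(resp) if resp else 0.0

# A response can score high on recall (correctness) yet low on K-Precision (faithfulness)
# when it adds details that are absent from the provided knowledge.
```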