Question: How high is the accessible surface area of JUKPAI?
Thought: I need to find the accessible surface area of JUKPAI
Action: search_csv
Action Input: "Search name JUKPAI and provide information of its accessible surface area"
[Table Searcher] Thought: To search for a specific value in the DataFrame, we can use the ...
[Table Searcher] Input: print(df.loc[df['name'] == 'JUKPAI', 'Accessible Surface Area (m^2/cm^3)'].to_markdown())
[Table Searcher] Observation:
|      | Accessible Surface Area (m^2/cm^3) |
|-----:|-----------------------------------:|
| 4837 |                            1474.22 |
[Table Searcher] Final Thought: The accessible surface area of name 'JUKPAI' is 1474.22 ...
Thought: I now know the final answer
Answer: The accessible surface area for JUKPAI is 1474.22 m^2/cm^3

Figure 3. Example of a Table Searcher for the question "How high is the surface area of JUKPAI?" The human's question prompts the system to devise a strategy. From this, Python code is generated and executed to extract the desired information, which is then returned as the final answer.
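The code the Table Searcher emits is ordinary pandas. A minimal sketch of the pattern, using a hypothetical two-row stand-in for the pre-computed look-up table (the real table is built from the CoRE MOF database with geometric descriptors; the second row below is a made-up placeholder):

```python
import pandas as pd

# Hypothetical miniature of the pre-computed look-up table; the real table
# holds thousands of MOFs with pre-calculated geometric properties.
df = pd.DataFrame({
    "name": ["JUKPAI", "DUMMY-MOF"],  # second row is a made-up placeholder
    "Accessible Surface Area (m^2/cm^3)": [1474.22, 0.0],
})

# The kind of one-liner the Table Searcher emits: filter the row by name,
# select the requested column, and print it as a markdown table.
print(df.loc[df["name"] == "JUKPAI",
             "Accessible Surface Area (m^2/cm^3)"].to_markdown())
```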
The accuracy of the look-up table search is contingent on the pre-calculated values available in the specific files. For queries about MOF properties that are not available, computational simulation can serve as an attractive alternative, but simulations are time-intensive and require an abundance of computational resources[45]. The best resolution to such challenges is the application of machine learning models, which enable high-accuracy predictions grounded in extensive data. With pre-trained machine learning models, predictions are quick and can be made for a significant volume of substances simultaneously, making them an excellent toolkit for integration into ChatMOF. As the toolkit for the prediction task, ChatMOF uses the MOFTransformer[46,47] model, developed in our group for the universal prediction of MOF properties. This model leverages both local features, such as atoms and bonds, and global features, such as surface area and topology. Having undergone pre-training with one million hypothetical MOFs and subsequent fine-tuning for specific properties, MOFTransformer shows high performance in predicting various properties. Moreover, it affords the convenience of predicting a desired property by retaining the model structure while swapping in the weights of a model fine-tuned for that property. The key to generating accurate responses is selecting, based on the query, the appropriate fine-tuned model for the MOFTransformer and the material to which it will be applied. Similar to HuggingGPT[10], ChatMOF does not directly compute the material properties; rather, it selects the appropriate machine learning model. Figure 4 shows an example prompt for ChatMOF using the MOFTransformer predictor.
When a user asks a question such as "At room temperature (298K), what's the CO2 Henry coefficient for XEGKUR?", the generator develops a strategy for solving the problem. The optimal fine-tuned MOFTransformer model that aligns with the plan (in this case, "CO2_henry_coefficient_298K") is chosen, and the material to predict (XEGKUR) is decided. Following this, the selected model is employed to make the prediction. Finally, a response to the query is composed by the language model from the values observed by the machine learning model. Since the predicted value is logarithmic, ChatMOF selects the "Python_REPL" tool and calculates an exponential value to obtain the final answer.
Question: At room temperature (298K), what's the CO2 Henry coefficient for XEGKUR?
Thought: I need to find the CO2 Henry coefficient for XEGKUR at room temperature
Action: search_csv
...
Thought: The search_csv tool did not provide any information on the CO2 Henry coefficient...
Action: predictor
Action Input: Predict the CO2 Henry coefficient for XEGKUR at 298K
[predictor] Thought: I need to gather data on XEGKUR and train a machine learning model to pre...
[predictor] Property: CO2_henry_coefficient_298K
[predictor] Materials: XEGKUR
load model: CO2_henry_coefficient_298K/best.ckpt
Predicting DataLoader 0: 100%|████████| 1/1 [00:00<00:00]
[predictor] Final Thought: Based on the results, answer the question using the predicted ...
Observation: The predicted CO2 Henry coefficient at 298K for XEGKUR_clean is -3.62769 mol/Kg·Pa. However, this is a logarithmic value. To get the original value, an exponential must be applied.
Thought: I need to convert the logarithmic value to the original value
Action: Python_REPL
Action Input: import math; print(math.exp(-3.62769))
Observation: 0.026577507595890823
Thought: I now know the final answer
Answer: The CO2 Henry coefficient for XEGKUR at room temperature (298K) is approximately 0.027 mol/Kg·Pa

Figure 4. Example of a predictor for the question "At room temperature (298K), what's the CO2 Henry coefficient for XEGKUR?" The predictor sets up a plan to solve the question, an appropriate model, and a target material. Based on this, it uses machine learning to predict the value, which is then used to derive the final answer.
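Under the hood, picking the fine-tuned model reduces to mapping the property name chosen by the planner onto a checkpoint path like the "CO2_henry_coefficient_298K/best.ckpt" shown above. A minimal sketch of such a registry (the property names mirror those in the figures and Supplementary Note S2; the helper itself is a hypothetical illustration, not ChatMOF's actual API):

```python
# Hypothetical registry of fine-tuned MOFTransformer checkpoints, keyed by
# the property names that appear in the ChatMOF figures.
CHECKPOINTS = {
    "CO2_henry_coefficient_298K": "CO2_henry_coefficient_298K/best.ckpt",
    "accessible_surface_area": "accessible_surface_area/best.ckpt",
    "hydrogen_uptake_100bar_77K": "hydrogen_uptake_100bar_77K/best.ckpt",
}

def select_checkpoint(property_name: str) -> str:
    """Return the checkpoint path for the property chosen by the planner."""
    if property_name not in CHECKPOINTS:
        raise ValueError(f"No fine-tuned model available for '{property_name}'")
    return CHECKPOINTS[property_name]

print(select_checkpoint("CO2_henry_coefficient_298K"))
```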
Finally, a key aspiration among researchers in the field of MOFs is the inverse design of MOFs exhibiting desired properties. In materials science, various generative models, including Generative Adversarial Networks (GAN)[48,49] and diffusion models[50], have been employed for inverse design. However, due to the inherent complexity of MOFs, which includes a large number of atoms, a large void fraction, and complex topologies, an atom-by-atom inverse design approach has remained elusive. As a workaround, MOF inverse design has been facilitated through top-down approaches that leverage genetic algorithms[51-53], Variational Autoencoders (VAE)[54], or reinforcement learning[55] for the selection of building blocks and their placement into suitable topologies. Genetic algorithms are notably suitable for integration with LLMs. As a bio-inspired optimization methodology, genetic algorithms operate on a selection, mutation, and crossover principle, making them adaptable and efficient[56]. For their application to MOFs, the frameworks must be delineated by genes comprising a topology and building blocks. For instance, a representative MOF, HKUST-1, can be depicted as tbo+N17+N10, with tbo representing the topology and N17 and N10 representing the building-block notations. Because these gene representations are textual, they facilitate the application of genetic algorithms using an LLM (a sketch of such text-level gene operations follows this paragraph). Figure 5 showcases the utilization of a genetic algorithm by ChatMOF to fabricate a MOF per user specifications. Upon a user query, the system develops a strategy optimized for the genetic algorithm. It also identifies the target property and determines the loss function most suited to the objective, such as choosing the maximum, minimum, or closest value. Guided by this plan, ChatMOF selects parent genes from the existing database that align with the loss function. Using the LLM, it produces children genes from the selected parent genes that are most likely to achieve the target. These children are then transformed into structure files, and their properties are estimated through machine learning. This procedure is reiterated a fixed number of times, generating multiple generations of children, with each generation yielding MOFs progressively nearer to the target. From the created structures, the one that aligns most closely with the question is finally chosen and presented as the response.
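A minimal sketch of what operating on these textual genes can look like, assuming the topology+block1+block2 encoding above (HKUST-1 as tbo+N17+N10). In ChatMOF the children are proposed by the LLM via the prompt in Supplementary Note S5, so the random crossover below is only an illustrative stand-in; the parent genes combine block names from the paper's figures with an assumed shared topology:

```python
import random

def parse_gene(gene: str):
    """Split a gene like 'tbo+N17+N10' into a topology and its building blocks."""
    topology, *blocks = gene.split("+")
    return topology, blocks

def crossover(parent_a: str, parent_b: str) -> str:
    """Make a child that keeps a parent topology but mixes building blocks."""
    topo_a, blocks_a = parse_gene(parent_a)
    _, blocks_b = parse_gene(parent_b)
    # Pick each block slot from one of the two parents; no new blocks invented.
    child_blocks = [random.choice(pair) for pair in zip(blocks_a, blocks_b)]
    return "+".join([topo_a] + child_blocks)

parents = ["tbo+N17+N10", "tbo+N535+N234"]  # illustrative parent genes
print(crossover(*parents))                  # e.g. 'tbo+N17+N234'
```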
Question: Can you generate the structures with the largest surface area?
Thought: I need to generate structures with the largest surface area
Action: generator
Action Input: "Generate structures with the largest surface area"
[generator] Thought: I need to generate structures with the largest surface area
[generator] Objective: max
[generator] Predict Property: accessible_surface_area
load model: accessible_surface_area/best.ckpt
Predicting DataLoader 0: 100%|████████| 1/1 [00:00<00:00]
[generator] Find Parents: extract cif_id and accessible_surface_area of 100 materials ...
... (search table)
|      | cif_id       | Accessible Surface Area (m^2/cm^3) |
|-----:|:-------------|-----------------------------------:|
| 2664 | pcu+N47+E33  |                            5808.59 |
| 1411 | pcu+N274+E32 |                            5714.84 |
|    8 | pcu+N613+E90 |                            5665.73 |
[generator] Get Children: generate 100 new materials with the largest surface area from ...
[generator] Generate Structures:
Generate cifs: 100%|████████| 1/1 [00:00<00:00]
... (run N cycles)
[generator] Final Thought: Based on the generated CIF, find the material that has the ...
... (search table)
|       | cif_id        | accessible_surface_area |
|------:|:--------------|------------------------:|
| 12718 | rtl+N535+N234 |        6411.28173828125 |
Observation: The answer for the question is rtl+N535+N234
Thought: I now know the final answer
Answer: The structure with the largest surface area is rtl+N535+N234
Figure 5. Example of a generator for the question "Can you generate the structures with the largest surface area?" The generator establishes a plan, an objective, and a property for the human question. Based on this, it finds parents that satisfy the objective. It uses a genetic algorithm to create children genes and generate structures. This is repeated for a number of cycles to generate new MOFs, which are used to derive the final answer.
Moreover, ChatMOF is engineered to employ a diverse set of toolkits that extend beyond the realm of LLMs. This includes capabilities such as file search, Internet search, and even simple calculations. These additional functionalities are primarily enabled by leveraging the varied capabilities provided by LangChain[57], enhancing the overall functionality and utility of ChatMOF. Thus, it is not merely a material-analysis tool, but a comprehensive system that can accommodate a wide array of tasks and operations. In addition, ChatMOF incorporates the Atomic Simulation Environment (ASE)[58] library as an integral toolkit to facilitate diverse operations on material structure data. The ASE library holds considerable importance in the field of materials science due to its capabilities, including atom manipulation, cell-information acquisition, and visualization, among others. Similar to the Table Searcher, when confronted with a query, ChatMOF devises a strategic plan and constructs suitable Python code utilizing the ASE library to fulfil the query's demands. Subsequently, this code is executed.
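A minimal sketch of the kind of ASE code such a plan produces, assuming a local CIF file (the filename here is a placeholder, not a file shipped with ChatMOF):

```python
from ase.io import read

# Load a MOF structure from a CIF file (placeholder filename).
atoms = read("JUKPAI.cif")

# The sort of information ChatMOF's generated code typically extracts:
print("Chemical formula: ", atoms.get_chemical_formula())
print("Number of atoms:  ", len(atoms))
print("Cell volume (A^3):", atoms.get_volume())
print("Cell vectors:\n", atoms.get_cell())
```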
# Evaluation

To evaluate the performance of ChatMOF, analyses were conducted for the "search task", the "prediction task", and the "generation task". For evaluation purposes, questions for ChatMOF were created utilizing GPT-4.0 to generate various sentences about the given properties of a MOF. The respective questions for each task can be found in Tables S1-S3. Accuracy, gauging how adequately the logic responded to the question, was measured for each task. The analysis of ChatMOF's accuracy used three labels: "True", "False (token limit exceeded)", and "False (logic error)". The label "True" signifies that ChatMOF's logic was precise and the yielded answer was accurate. The label "False (token limit exceeded)" was used when the token count in the LLM surpassed the maximum allowance of 4,000, obstructing further progress. Lastly, the "False (logic error)" label designated situations where an error in ChatMOF's logic resulted in an incorrect response or an anomaly. Such situations typically occur when an erroneous plan for obtaining an answer is devised or when an error in output interpretation diverts the system from the desired direction. Figure 6 presents the accuracy measurements for the three tasks using ChatMOF with GPT-4. Accuracy was measured for 100 sample questions each for the search and prediction tasks, and 10 sample questions for the generation task. The numbers in the bar graph indicate the count of each label for each question class. Both the search and prediction tasks rendered accurate answers with high frequency. Excluding 'Token Limit Exceeded' instances (4 out of 100, 6 out of 100, and 2 out of 10 for the search, prediction, and generation tasks, respectively), they exhibit high accuracies of 96.9% and 95.7%, respectively. For the generation task, the accuracy stood at 87.5%; given its complexity relative to the other two tasks, the accuracy rate is lower. Regardless, all three tasks report high accuracy rates, and these tasks carry significant weight because they are tasks that a typical LLM fails to execute flawlessly.
A single LLM fails to provide precise information, since it lacks specific information about the material, particularly property information, which is challenging to source via an internet search. Also, ChatMOF, when integrated with GPT-4, exhibits superior performance compared to its integration with GPT-3.5-turbo. As evidenced in Figure S2, the accuracy of ChatMOF with GPT-3.5-turbo stands at 95%, 91%, and 77.8% for the search, prediction, and generation tasks, respectively, excluding instances of "Token Limit Exceeded". Across all tasks, GPT-4 consistently outperforms GPT-3.5-turbo in accuracy. This enhanced accuracy of GPT-4 can be attributed to its refined reasoning and comprehension capabilities, particularly during the planning phase. Figure S3 illustrates the distinct approaches that GPT-4 and GPT-3.5-turbo take when presented with the same query:
"How does the pore limiting diameter of YUSGID_clean compare with other materials?" While GPT-3.5-turbo seeks the values for all materials mentioned in the query, leading to a token error and a subsequent inability to provide an answer, GPT-4 adopts a more holistic strategy. It assesses the distribution of all materials, leveraging metrics such as the mean, variance, and quartile values of the property in question. This approach enables GPT-4 to determine the relative position of the target material in the overall distribution, thus delivering a more informative response to the user. For the "search task", the writing of code utilizing the pandas library significantly impacts the accuracy. 'Token Limit Exceeded' generally occurs when the output code surpasses the permissible token count. This frequently arises when all relevant materials that satisfy a given condition are returned (for example, when a list of materials with a particular property is listed), or when the question contains a comparative clause such as "compared to other materials". 'Logic Error' typically surfaces when there is a flawed strategic approach or a code error.
An instance of this would be when a request to provide 10 specific items is met with a misguided strategy that solely aims to "extract high values", failing to retrieve the specified number of items. During the "prediction task", difficulties often occur in the interpretation of the values observed using machine learning techniques. Both 'Token Limit Exceeded' and 'Logic Error' occurrences can stem from the effort to draw the correct answer from the table based on the estimated values. 'Logic Errors' can manifest not only during the table-search phase but also during the strategy-formulation stage. An erroneous strategy could either lead to the loading of an unsuitable model or to the generation of an input that is incompatible with the intended model. The "generation task" presents a more intricate set of challenges, inviting a variety of errors. A frequently observed 'Logic Error' appears when no parent genes can be retrieved from the database. If the objective function aims for maximum or minimum values, a satisfying parent gene can always be found. However, if the goal is to get close to a certain value or to fall within a specific range, the selected range might not yield any satisfying parent genes. In such scenarios, the strategy is adapted to incorporate more data. However, if no suitable parent genes are found even after modifying the strategy, the run results in an error. Further, both 'Token Limit Exceeded' and 'Logic Error' might occur during the extraction of the most suitable MOF from the generated MOFs, aligning with the objective function.
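Putting the reported percentages and sample sizes together, the per-label counts behind Figure 6 can be reconstructed; the counts below are inferred from the quoted figures (96.9%, 95.7%, 87.5%, and the token-limit exclusions), not read from raw data:

```python
# Accuracy as reported: 'Token Limit Exceeded' cases are excluded, so the
# denominator counts only the questions that produced an answer.
def accuracy(n_true: int, n_logic_error: int) -> float:
    return 100 * n_true / (n_true + n_logic_error)

# Inferred counts: search 93 true / 3 logic errors (100 - 4 excluded),
# prediction 90 / 4 (100 - 6 excluded), generation 7 / 1 (10 - 2 excluded).
print(f"search:     {accuracy(93, 3):.1f}%")   # -> 96.9%
print(f"prediction: {accuracy(90, 4):.1f}%")   # -> 95.7%
print(f"generation: {accuracy(7, 1):.1f}%")    # -> 87.5%
```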
[Figure 6: bar chart of the percentage of True, False (exceed token limit), and False (logic error) labels for the search, prediction, and generation tasks]

Figure 6. Depiction of accuracies for the three tasks, search, prediction, and generation, using the GPT-4 model. Accuracies were evaluated based on three labels: True, False (exceeding token limit), and False (logic error). The numbers in the bars represent the count of each label.
# Inverse Design Validation

One notable observation is that with each generation, the genetic algorithm refines the distribution of material properties to better align with the target value. Figure 7 illustrates the outcomes of the generation task for two different scenarios. Figure 7(a) shows the structures generated in response to the question "Can you generate structures with the largest surface area?". In this case, ChatMOF interprets the property as accessible surface area, with the objective of maximizing this parameter. The initial MOF generation (0th generation) displays a broad distribution of surface area with an average value of 3,748 m^2/g. However, with each subsequent generation, the peak at a higher position amplifies. By the third generation, the offspring MOFs exhibit a significantly elevated average peak value of 5,554 m^2/g. Analogously, Figure 7(b) portrays the distribution of structures generated to fulfill the request "I'm looking for structures with a hydrogen uptake of about 500 cm^3/cm^3 at 100 bar and 77 K, can you generate those?". Here, ChatMOF sets the property to hydrogen uptake at 100 bar and 77 K, with the objective of achieving close proximity to 500 cm^3/cm^3. The distribution of the initial structures spans evenly from 250 cm^3/cm^3 to 650 cm^3/cm^3, whereas the structures created in the final generation display the most pronounced and narrow peak at 500 cm^3/cm^3. This indicates the efficiency of a genetic algorithm utilizing LLMs. Figures 7(c) and 7(d) depict the final structures for the queries in 7(a) and 7(b). The optimal structure in 7(c), rtl+N535+N234, boasts the highest surface area among the generated MOFs, with a predicted value of 6411.28 m^2/g. Upon performing a geometry optimization and calculating the accessible surface area using Zeo++[59], the surface area is revealed to be 7647.62 m^2/g. This value is notably high when compared to the CoRE MOF database.
Figure S1 illustrates the distribution of accessible surface areas within CoRE MOF; this particular structure's surface area ranks third-highest in the CoRE MOF ranking. In a similar vein, the optimal structure dia+N719+E186, showcased in Figure 7(d), possesses a predicted hydrogen uptake of 499.998 cm^3/cm^3, mirroring the stipulated target of 500 cm^3/cm^3. Following geometry optimization of this structure, its uptake was calculated using RASPA, yielding a value strikingly close to the goal, 495.823 cm^3/cm^3. Despite its successes, the generation task of ChatMOF does present some limitations. Chief among these is the decrease in gene diversity due to constraints on input and output tokens. The token count restricts the number of parent and child structures to around 100, a fraction compared to inverse design studies that employ conventional genetic algorithm procedures generating upwards of 100,000 structures per generation. Other constraints, such as the limited number of topologies and cycles, stem from resource and time restrictions. Yet, despite these limitations, ChatMOF excels at generating MOFs that fit the objective function, attesting to its efficacy.
[Figure 7: (a) histograms of accessible surface area (m^2/g) for the initial and generated structures for the question "Can you generate the structures with the largest surface area?"; (b) histograms of hydrogen uptake at 100 bar and 77 K (cm^3/cm^3) for the question "I'm looking for structures with a hydrogen uptake of about 500 cm^3/cm^3 at 100 bar and 77 K, can you generate those?"; (c) rtl+N535+N234: predicted ASA 6411.28 m^2/g, calculated ASA after optimization 7647.62 m^2/g; (d) dia+N719+E186: predicted H2 uptake 499.998 cm^3/cm^3, calculated H2 uptake after optimization 495.823 cm^3/cm^3]

Figure 7. (a) Histogram depicting the initial structures and the generated structures for a question concerning the maximum value of surface area. (b) Histogram showing the initial structures and the generated structures for a query where the value of hydrogen uptake is set close to 500. (c) Illustration of the MOF with the largest surface area as generated by ChatMOF; ASA stands for accessible surface area. (d) Representation of the MOF with an H2 uptake value closest to 500 cm^3/cm^3 at 77 K and 100 bar, as generated by ChatMOF.
# Collaborative Online Platforms

One limiting factor of ChatMOF is that its performance relies on the number of pre-trained weights available to the MOFTransformer used in the prediction task. A larger set of fine-tuned weights allows more properties to be predicted, thereby enabling more active prediction and generation processes. However, each user faces constraints on the number of models that can be utilized, given that it is unrealistic for one individual to possess all the data. To train a model, experimental data must be collected or computational simulations executed. While some calculations, such as the pore limiting diameter or surface area, demand little time, others, such as band gap, HOMO, and LUMO calculations, are considerably more computationally demanding. The generation and training of data for these complex tasks can be quite cumbersome. To address these issues, there is a need for an online platform akin to HuggingFace, where users can freely post their trained weights. An example of this model is HuggingGPT[10], which functions by selecting the most appropriate model among those posted on HuggingFace. If users upload models trained on their own data to such a platform, other users will be able to access them. Upon the posting of new weights online, ChatMOF will review them, and if the required data is available online, the model will be downloaded automatically (a sketch of this workflow follows this paragraph). The existence of this online platform would reinforce ChatMOF as a potent toolkit for predicting MOF properties. Moreover, pre-calculated data, such as those from text mining, could also be employed for table searches. If data sharing is executed effectively, superior results can be achieved collectively.
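A hedged sketch of what such automatic weight retrieval could look like, assuming community checkpoints hosted on the HuggingFace Hub; the repository id and file layout below are hypothetical placeholders, not an existing resource:

```python
from huggingface_hub import hf_hub_download

def fetch_community_weights(property_name: str) -> str:
    """Download community-trained MOFTransformer weights for a property.

    The repo_id and filename scheme are placeholders for illustration.
    """
    return hf_hub_download(
        repo_id="community/moftransformer-weights",  # hypothetical repo
        filename=f"{property_name}/best.ckpt",       # hypothetical layout
    )

# e.g. fetch_community_weights("bandgap") would return a local checkpoint path.
```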
# Conclusion

The investigation into the role of generative AI in materials science, specifically through the lens of ChatMOF, unveils substantial potential for predicting and generating MOFs. This unique system, which bridges the transformative capabilities of AI and the intricate facets of materials science, demonstrates exceptional performance across various tasks. The accuracy analysis reports high success rates, notably 96.9% and 95.7% for the search and prediction tasks, respectively. Meanwhile, the more complex structure-generation task, despite its intricacy, yields a notable accuracy rate of 87.5%. These promising results underline the efficacy of ChatMOF, even when confronted with the most demanding tasks. Despite certain limitations, such as the dependence on the number of pre-trained weights, ChatMOF symbolizes a significant stride towards fully autonomous AI in the realm of materials science. As the technology evolves, and with a systematic enhancement of the model's capacity and data sharing across an online platform, ChatMOF's performance could be further optimized, paving the way for unprecedented advancements in MOF research.
# Method

ChatMOF operates via the LangChain[57] library. LangChain serves as a platform for accessing diverse modules within a large language model (LLM), streamlining prompt engineering in an LLM environment. ChatMOF integrates various toolkits from LangChain alongside its own distinct toolkit. For the roles of agent, evaluator, and toolkit within ChatMOF, OpenAI's chat models GPT-4 and GPT-3.5-turbo are employed. During the experiments, the temperature parameter was set to 0.1.
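A minimal sketch of how such an agent can be wired up with LangChain in the ReAct style shown in Supplementary Note S6; the tool bodies here are stubs, and the exact ChatMOF implementation (available in its GitHub repository) may differ:

```python
from langchain.agents import AgentType, Tool, initialize_agent
from langchain.chat_models import ChatOpenAI

# Chat model with the temperature used in the paper's experiments.
llm = ChatOpenAI(model_name="gpt-4", temperature=0.1)

# Stub tools standing in for ChatMOF's searcher and predictor toolkits.
tools = [
    Tool(name="search_csv",
         func=lambda q: "stub: look up pre-computed MOF properties",
         description="Search the pre-calculated MOF look-up table."),
    Tool(name="predictor",
         func=lambda q: "stub: run a fine-tuned MOFTransformer",
         description="Predict a MOF property with machine learning."),
]

agent = initialize_agent(tools, llm,
                         agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
                         verbose=True)
# agent.run("What is the surface area and bandgap of ACOGEF?")
```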
The searcher component of ChatMOF adopts the CoRE MOF structures, enriched by geometric features derived through Zeo++[59]. In instances of code discrepancies, corrections are made up to a threshold of three attempts. The predictor module within ChatMOF leans on MOFTransformer (version 2.1.2), trained on insights from four academic articles. The generative aspect of ChatMOF is structured around three iterative cycles. The generator employs a genetic algorithm across nine unique topologies, namely pcu, dia, acs, rtl, cds, srs, ths, bcu, and fsc. For every topology, a batch of 100 offspring genes arises from a set of 100 parental genes, chosen from a foundational group of 2,000 MOFs. Structures are then formulated based on these newly minted genes, followed by value computation via the predictor. This cycle refines the pool of parental genes, and after the designated cycles, an optimized target structure is procured from the cumulative data.
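Read as pseudocode, the cycle just described might be orchestrated as below; `propose_children` stands in for the LLM call driven by the prompt in Supplementary Note S5, and `predict` for a fine-tuned MOFTransformer, both hypothetical helpers rather than ChatMOF's actual function names:

```python
def run_generation(seed_pool, propose_children, predict,
                   n_cycles=3, n_parents=100):
    """Sketch of the generator loop: select parents, breed, score, repeat."""
    history = list(seed_pool)                 # e.g. 2,000 database MOF genes
    for _ in range(n_cycles):
        # Keep the n_parents genes that best satisfy the objective (here: max).
        parents = sorted(history, key=predict, reverse=True)[:n_parents]
        # The LLM proposes new children genes from the textual parent genes.
        children = propose_children(parents)
        # Score the children with the machine-learning predictor and keep them.
        history.extend(children)
    return max(history, key=predict)          # best structure overall

# run_generation(pool, llm_children, moftransformer_score)
# could return e.g. 'rtl+N535+N234'
```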
# Conflicts of interest

There are no conflicts to declare.

# Author Contributions

Y.K. developed ChatMOF and wrote the manuscript with J.K. The manuscript was written through the contributions of all authors. All authors have given approval to the final version of the manuscript.

# Code availability

The ChatMOF library is available at https://github.com/Yeonghun1675/ChatMOF.git.

# Acknowledgements

Y.K. and J.K. acknowledge funding from the National Research Foundation of Korea (NRF) under Project Numbers 2021M3A7C208974513 and 2021R1A2C2003583. This work was supported by the National Supercomputing Center with supercomputing resources including technical support (KSC-2022-CRE-0515).

# References

1. Devlin, J., Chang, M.-W., Lee, K. & Toutanova, K. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018).
2. Bommasani, R. et al. On the opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258 (2021).
3. Brown, T. et al. Language models are few-shot learners. Advances in Neural Information Processing Systems 33, 1877-1901 (2020).
4. Touvron, H. et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971 (2023).
5. Bubeck, S. et al. Sparks of artificial general intelligence: Early experiments with GPT-4. arXiv preprint arXiv:2303.12712 (2023).
6. Vaswani, A. et al. Attention is all you need. Advances in Neural Information Processing Systems 30 (2017).
7. Liu, P. et al. Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. ACM Computing Surveys 55, 1-35 (2023).
8. https://github.com/yoheinakajima/babyagi.
9. https://github.com/Significant-Gravitas/Auto-GPT.
10. Shen, Y. et al. HuggingGPT: Solving AI tasks with ChatGPT and its friends in HuggingFace. arXiv preprint arXiv:2303.17580 (2023).
11. Khan, R. A., Jawaid, M., Khan, A. R. & Sajjad, M. ChatGPT - Reshaping medical education and clinical management. Pakistan Journal of Medical Sciences 39, 605 (2023).
12. Taylor, R. et al. Galactica: A large language model for science. arXiv preprint arXiv:2211.09085 (2022).
13. Hendrycks, D. et al. Aligning AI with shared human values. arXiv preprint arXiv:2008.02275 (2020).
14. Hendrycks, D. et al. Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300 (2020).
15. Bran, A. M., Cox, S., White, A. D. & Schwaller, P. ChemCrow: Augmenting large-language models with chemistry tools. arXiv preprint arXiv:2304.05376 (2023).
16. Guo, T. et al. What indeed can GPT models do in chemistry? A comprehensive benchmark on eight tasks. arXiv preprint arXiv:2305.18365 (2023).
17. Bucior, B. J. et al. Identification schemes for metal-organic frameworks to enable rapid search and cheminformatics analysis. Crystal Growth & Design 19, 6682-6697 (2019).
18. Hu, T., Song, H., Jiang, T. & Li, S. Learning representations of inorganic materials from generative adversarial networks. Symmetry 12, 1889 (2020).
19. Ward, L., Agrawal, A., Choudhary, A. & Wolverton, C. A general-purpose machine learning framework for predicting properties of inorganic materials. npj Computational Materials 2, 1-7 (2016).
20. Calfa, B. A. & Kitchin, J. R. Property prediction of crystalline solids from composition and crystal structure. AIChE Journal 62, 2605-2613 (2016).
21. Jablonka, K. M. et al. 14 examples of how LLMs can transform materials science and chemistry: A reflection on a large language model hackathon. arXiv preprint arXiv:2306.06283 (2023).
22. Zheng, Z., Zhang, O., Borgs, C., Chayes, J. T. & Yaghi, O. M. ChatGPT chemistry assistant for text mining and prediction of MOF synthesis. arXiv preprint arXiv:2306.11296 (2023).
23. Dunn, A. et al. Structured information extraction from complex scientific text with fine-tuned large language models. arXiv preprint arXiv:2212.05238 (2022).
24. James, S. L. Metal-organic frameworks. Chemical Society Reviews 32, 276-288 (2003).
25. Kitagawa, S. Metal-organic frameworks (MOFs). Chemical Society Reviews 43, 5415-5418 (2014).
26. Zhou, H.-C., Long, J. R. & Yaghi, O. M. Vol. 112, 673-674 (ACS Publications, 2012).
27. Freund, R. et al. The current status of MOF and COF applications. Angewandte Chemie International Edition 60, 23975-24001 (2021).
28. Kumar, S. et al. Green synthesis of metal-organic frameworks: A state-of-the-art review of potential environmental and medical applications. Coordination Chemistry Reviews 420, 213407 (2020).
29. Qian, Q. et al. MOF-based membranes for gas separations. Chemical Reviews 120, 8161-8266 (2020).
30. Lee, J. et al. Metal-organic framework materials as catalysts. Chemical Society Reviews 38, 1450-1459 (2009).
31. Zhang, X. et al. A historical overview of the activation and porosity of metal-organic frameworks. Chemical Society Reviews 49, 7406-7427 (2020).
32. Deng, H. et al. Large-pore apertures in a series of metal-organic frameworks. Science 336, 1018-1023 (2012).
33. Furukawa, H., Cordova, K. E., O'Keeffe, M. & Yaghi, O. M. The chemistry and applications of metal-organic frameworks. Science 341, 1230444 (2013).
34. Wang, C., Liu, D. & Lin, W. Metal-organic frameworks as a tunable platform for designing functional molecular materials. Journal of the American Chemical Society 135, 13222-13234 (2013).
35. Wei, J. et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems 35, 24824-24837 (2022).
36. Yao, S. et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:2305.10601 (2023).
37. Yao, S. et al. ReAct: Synergizing reasoning and acting in language models. arXiv preprint arXiv:2210.03629 (2022).
38. Karpas, E. et al. MRKL systems: A modular, neuro-symbolic architecture that combines large language models, external knowledge sources and discrete reasoning. arXiv preprint arXiv:2205.00445 (2022).
39. Chung, Y. G. et al. Computation-ready, experimental metal-organic frameworks: A tool to enable high-throughput screening of nanoporous crystals. Chemistry of Materials 26, 6185-6192 (2014).
40. Chung, Y. G. et al. Advances, updates, and analytics for the computation-ready, experimental metal-organic framework database: CoRE MOF 2019. Journal of Chemical & Engineering Data 64, 5985-5998 (2019).
41. Rosen, A. S. et al. Machine learning the quantum-chemical properties of metal-organic frameworks for accelerated materials discovery. Matter 4, 1578-1597 (2021).
42. Moghadam, P. Z. et al. Development of a Cambridge Structural Database subset: a collection of metal-organic frameworks for past, present, and future. Chemistry of Materials 29, 2618-2625 (2017).
43. McKinney, W. pandas: a foundational Python library for data analysis and statistics. Python for High Performance and Scientific Computing 14, 1-9 (2011).
44. Glasby, L. T. et al. DigiMOF: A database of metal-organic framework synthesis information generated via text mining. Chemistry of Materials (2023).
45. Altintas, C., Altundal, O. F., Keskin, S. & Yildirim, R. Machine learning meets with metal organic frameworks for gas storage and separation. Journal of Chemical Information and Modeling 61, 2131-2146 (2021).
46. Kang, Y., Park, H., Smit, B. & Kim, J. A multi-modal pre-training transformer for universal transfer learning in metal-organic frameworks. Nature Machine Intelligence 5, 309-318 (2023).
47. Park, H., Kang, Y. & Kim, J. PMTransformer: Universal transfer learning and cross-material few-shot learning in porous materials. (2023).
48. Kim, B., Lee, S. & Kim, J. Inverse design of porous materials using artificial neural networks. Science Advances 6, eaax9324 (2020).
49. Long, T. et al. Constrained crystals deep convolutional generative adversarial network for the inverse design of crystal structures. npj Computational Materials 7, 66 (2021).
50. Xie, T., Fu, X., Ganea, O.-E., Barzilay, R. & Jaakkola, T. Crystal diffusion variational autoencoder for periodic material generation. arXiv preprint arXiv:2110.06197 (2021).
51. Lee, S. et al. Computational screening of trillions of metal-organic frameworks for high-performance methane storage. ACS Applied Materials & Interfaces 13, 23647-23654 (2021).
52. Lim, Y., Park, J., Lee, S. & Kim, J. Finely tuned inverse design of metal-organic frameworks with user-desired Xe/Kr selectivity. Journal of Materials Chemistry A 9, 21175-21183 (2021).
53. Park, J., Lim, Y., Lee, S. & Kim, J. Computational design of metal-organic frameworks with unprecedented high hydrogen working capacity and high synthesizability. Chemistry of Materials 35, 9-16 (2022).
54. Yao, Z. et al. Inverse design of nanoporous crystalline reticular materials with deep generative models. Nature Machine Intelligence 3, 76-86 (2021).
55. Park, H., Majumdar, S., Zhang, X., Kim, J. & Smit, B. Inverse design of metal-organic frameworks for direct air capture of CO2 via deep reinforcement learning. (2023).
56. Katoch, S., Chauhan, S. S. & Kumar, V. A review on genetic algorithm: past, present, and future. Multimedia Tools and Applications 80, 8091-8126 (2021).
57. Chase, H. LangChain. https://github.com/hwchase17/langchain (2022).
58. Larsen, A. H. et al. The atomic simulation environment - a Python library for working with atoms. Journal of Physics: Condensed Matter 29, 273002 (2017).
59. Willems, T. F., Rycroft, C. H., Kazi, M., Meza, J. C. & Haranczyk, M. Algorithms and tools for high-throughput geometry-based analysis of crystalline porous materials. Microporous and Mesoporous Materials 149, 134-141 (2012).

Supplementary information for:

ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks

Yeonghun Kang, Jihan Kim*

Department of Chemical and Biomolecular Engineering, Korea Advanced Institute of Science and Technology (KAIST), 291, Daehak-ro, Yuseong-gu, Daejeon 34141, Republic of Korea
# Table of Contents

Supplementary Note S1. Prompt for planning in searcher
Supplementary Note S2. Prompt for planning in predictor
Supplementary Note S3. Prompt for single data retrieval in predictor
Supplementary Note S4. Prompt for planning in generator
Supplementary Note S5. Prompt for genetic algorithm in generator
Supplementary Note S6. Prompt for planning in agent
Supplementary Figure S1. Histogram for accessible surface area of MOF in the CoREMOF database
Supplementary Figure S2. Depiction of accuracies for three tasks using gpt-3.5-turbo model
Supplementary Figure S3. ChatMOF's answer to the question using gpt-4 and gpt-3.5-turbo
Supplementary Table S1. 100 questions in search task
Supplementary Table S2. 100 questions in prediction task
Supplementary Table S3. 10 questions in generation task
Supplementary Note S1. Prompt for planning in the searcher

You are working with a pandas dataframe in Python. The name of the dataframe is `df`. You should make a valid python command as input. You must print the output using the `print` function at the end. You should use the `to_markdown` function when you print a pandas object.

Use the following format:

Question: the input question you must answer
Thought: you should always think about what to do
Input: the valid python code only using the Pandas library
Observation: the result of python code
... (this Thought/Input/Observation can repeat N times)
Final Thought: you should think about how to answer the question based on your observation
Final Answer: the final answer to the original input question. If you can't answer the question, say `nothing`

The index of the dataframe must be one of {df_index}. If it's not in the index you want, skip straight to Final Thought.
{information}

Begin!

Question: What is the head of df? If you extracted successfully, derive 'success' as the final answer
Thought: To get the head of a DataFrame, we can use the pandas function head(), which will return the first N rows. By default, it returns the first 5 rows.
Input:
```
import pandas as pd
import json
print(df.head().to_markdown())
```
Observation: {df_head}
Final Thought: The head() function in pandas provides the first 5 rows of the DataFrame.
Final Answer: success

Question: {question}
{agent_scratchpad}
Supplementary Note S2. Prompt for planning in the predictor

Plan to use machine learning to predict the properties of matter. To answer the question, you have to fill in the following format:

Question: the input question you must answer
Thought: you should always think about what to do
Property: the property you can predict, should be one of [{model_names}]
Material: names of materials separated using comma. If you need to proceed for all material, write *. To proceed for a specific topology, append the topology name with an * (ex. pcu*)
... (this Property/Material can repeat N times)
Run Machine Learning: nothing to do
Final Thought: you should think about how you will derive a final answer from the results of machine learning.

Begin!

Question: predict the surface area and hydrogen uptake of ACOPEN and ELOBPE.
Thought: I need to gather data on ACOPEN and train a machine learning model to predict its surface area and hydrogen uptake.
Property: surface_area
Material: ACOPEN, ELOBPE
Property: hydrogen_uptake_100bar_77K
Material: ACOPEN, ELOBPE
Final Thought: Based on the result, answer the question using the predicted surface area and the predicted hydrogen uptake at 100 bar.

Question: which MOF has a highest band-gap?
Thought: I need to gather data on the band-gap of different structures and compare them to find the one with the highest value.
Property: bandgap
Material: *
Final Thought: Based on the result, find the structure with the highest predicted band-gap value.

Question: Predict surface area and save results in csv format, only pcu topology.
Thought: I need to gather data on the surface area of materials with pcu topology and train a machine learning model to predict their surface area. Then, I can use the model to make predictions and save the results in a csv format.
Property: surface_area
Material: pcu*
Final Thought: The predicted surface area values for materials with pcu topology can be found in the saved csv file.

Question: {question}
Supplementary Note S3. Prompt for single data retrieval in predictor

You need to answer the question from the markdown table below

Markdown Table: {table}
{information}

Question: {question}
Answer:

Supplementary Note S4. Prompt for planning in generator

Create a plan to generate material based on the following question. Use the following format:

Question: the input question you must answer
Thought: you should always think about what to do
Property: the property you can predict, must be one of [{model_names}]
Objective: you should decide what criteria you want to generate by.
Search look-up table: plan to extract 100 material for the purpose from the look-up table where the property is pre-calculated.
Genetic algorithm: plan to create new materials using the 100 extracted materials.
Final thought: get a final answer based on the structures you generate.

Begin!

Question: generate a material with a porosity of 0.5 and surface area of 120 m2/g
Thought: I need to generate a material with a porosity value of 0.5 and surface area of 120 m2/g.
Property: void_fraction, accessible_surface_area
Objective: near 0.5, near 120
Search look-up table: extract cif_id, void_fraction, and accessible_surface_area of 100 materials with porosity close to 0.5 and surface area near 120 m2/g from look-up tables.
Genetic algorithm: create a new material with a porosity close to 0.5 and surface area near 120 m2/g from 100 materials
Final Thought: Based on the generated CIF, find the material that is closest to a porosity of 0.5 and a surface area of 120 m2/g.

Question: generate a material with a highest band-gap
Thought: I need to generate a material with a highest band-gap.
Property: bandgap
Objective: max
Search look-up table: extract cif_id and bandgap of 100 materials with high band-gap value from look-up tables.
Genetic algorithm: generate 100 new materials with the highest band gap from the 100 materials.
Final Thought: Based on the generated CIF, find the material that has the highest band-gap.

Question: {question}

Supplementary Note S5. Prompt for genetic algorithm in generator

You should act as a generator to find the optimal material. A substance consists of a block1 and block2, and must maintain the order. I will give you 100 parent materials. Based on these, you must answer as many new children as you expect to answer the question. The block1 and block2 used in the child must be the blocks used in the parent, and you must not create blocks that don't exist. You must generate diverse children. The children must not be duplicates of existing parents or already created children. You output children only and nothing else.

Begin.

Question: {question}
Parent: V12+T31, V24+T32, V7+T12
4 new Children: V12+T12, V24+T31, V7+T31, V7+T32
Parent: {parents}
100 new Children:

Supplementary Note S6. Prompt for planning in agent
You act like a material scientist answering a question. Answer the following questions as best you can. You have access to the following tools:

{tools}

You must not use tool `generator` when `Question` does not require you to generate.

Use the following format:

Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question

Begin!

Question: What is the surface area and bandgap of ACOGEF?
Thought: I need to find the surface area of ACOGEF.
Action: search_csv
Action Input: "Search name ACOGEF and provide information on its surface area"
Observation: The surface area of material "ACOGEF" is 1138.35 m^2/g. Check to see if this answer can be your final answer, and if so, you should submit your final answer.
Thought: The search_csv tool provided the surface area of ACOGEF, but not the bandgap. I need to find the bandgap.
Action: search_csv
Action Input: "Search name ACOGEF and provide information on its bandgap"
Observation: The search_csv tool did not provide any information on the bandgap of ACOGEF. I need to find another way to obtain this information.
Thought: The search_csv tool provided the surface area of ACOGEF, but not the bandgap. I need to find the bandgap.
Action: predictor
Action Input: "Predict the bandgap of ACOGEF"
Observation: The bandgap of material "ACOGEF" is 3.41139 eV. Check to see if this answer can be your final answer, and if so, you should submit your final answer.
Thought: I now know the final answer
Final Answer: The bandgap and surface area of ACOGEF are 3.41139 eV and 1138.35 m^2/g.

Question: What is the highest bandgap MOF?
Thought: I need to find the MOF with the highest bandgap.
Action: search_csv
Action Input: "find for MOFs with highest bandgap and provide its name and bandgap"
Observation: There are no data in database
Thought: The search_csv tool did not provide any information on the bandgaps of MOFs. I need to find another way to obtain this information.
Action: predictor
Action Input: "predict the bandgaps of MOFs and find the name of MOF with highest bandgaps"
Observation: The highest bandgap MOF is ACOGEF.
Thought: I now know the final answer
Final Answer: The highest bandgap MOF is ACOGEF.

Question: {input}
Thought: {agent_scratchpad}
[Figure S1: histogram of accessible surface area (x-axis, cm^3/cm^3) versus number of MOFs (y-axis)]

Figure S1. Histogram for accessible surface area of MOF in the CoREMOF database.

[Figure S2: bar chart of the percentage of True, False (exceed token limit), and False (logic error) labels for the search, prediction, and generation tasks]

Figure S2. Depiction of accuracies for the three tasks, search, prediction, and generation, using the gpt-3.5-turbo model. Accuracies were evaluated based on three labels: True, False (exceeding token limit), and False (logic error). The numbers in the bars represent the count of each label.

[gpt-4]
######### Question #########
How does the pore limiting diameter of YUSGID_clean compare with other materials?

> Entering new AgentExecutor chain...
I need to find the pore limiting diameter of YUSGID_clean and compare it with other materials.
Action: search_csv
Action Input: "Search name YUSGID_clean and provide information on its pore limiting diameter"

> Entering new TableSearcher chain...
[Table Searcher] Thought: To find the information for a specific name in the DataFrame, we can use the loc function in pandas. This function allows us to access a group of rows and columns by label(s) or a boolean array.
[Table Searcher] Input: print(df.loc[df['name'] == 'YUSGID', 'Pore limiting diameter (A)'].to_markdown())
[Table Searcher] Observation:
|       | Pore limiting diameter (A) |
|------:|---------------------------:|
| 11739 |                    3.71515 |
[Table Searcher] Final Thought: The 'Pore limiting diameter (A)' for the material with the name 'YUSGID' is 3.71515 A.
> Finished chain.
Observation: The 'Pore limiting diameter (A)' for 'YUSGID' is 3.71515 A
Thought: I have the pore limiting diameter for YUSGID_clean. Now I need to compare this with the pore limiting diameters of other materials.
Action: search_csv
Action Input: "Provide information on the pore limiting diameters of all materials"
> Entering new TableSearcher chain...
[Table Searcher] Thought: To get information on the pore limiting diameters of all materials, we can use the pandas function describe(), which provides descriptive statistics of the DataFrame or Series.
[Table Searcher] Input: print(df['Pore limiting diameter (A)'].describe().to_markdown())
[Table Searcher] Observation:
|       | Pore limiting diameter (A) |
|:------|---------------------------:|
| count |                      12020 |
| mean  |                    4.87758 |
| std   |                    2.78185 |
| min   |                    2.40006 |
| 25%   |                    3.24153 |
| 50%   |                    4.10336 |
| 75%   |                    5.64609 |
| max   |                     71.502 |
[Table Searcher] Final Thought: The describe() function in pandas provides a statistical summary of the 'Pore limiting diameter (A)' column in the DataFrame. This includes the count, mean, standard deviation, minimum, 25th percentile, median (50th percentile), 75th percentile, and maximum.
> Finished chain.
Observation: The pore limiting diameters of all materials range from approximately 2.40 A to 71.50 A, with an average diameter of approximately 4.88 A.
Thought: I now know the final answer
Final Answer: The pore limiting diameter of YUSGID_clean is 3.71515 A, which is below the average pore limiting diameter of approximately 4.88 A for all materials. The range of pore limiting diameters for all materials is approximately 2.40 A to 71.50 A.
> Finished chain.

######### Output #########
The pore limiting diameter of YUSGID_clean is 3.71515 A, which is below the average pore limiting diameter of approximately 4.88 A for all materials. The range of pore limiting diameters for all materials is approximately 2.40 A to 71.50 A.

[gpt-3.5-turbo]
######### Question #########
How does the pore limiting diameter of YUSGID_clean compare with other materials?

> Entering new AgentExecutor chain...
I need to compare the pore limiting diameter of YUSGID_clean with other materials.
Action: search_csv
Action Input: "Search name YUSGID_clean and provide information on its pore limiting diameter"
2308.01423#62 | ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks | I need to compare the pore limiting diameter of YUSGID_clean with other materials. Action: search_csv Action Input: "Search name YUSGID_clean and provide information on its pore limiting diameter" > Entering new TableSearcher chain... [Table Searcher] Thought: To search for a specific value in the 'name' column, we can use the pandas function loc[], which allows us to access a group of rows and columns by label(s) or a boolean array. We can then extract the information on the pore limiting diameter for the material with the name 'YUSGID'. [Table Searcher] Input: print(df.loc[df['name'] == 'YUSGID', 'Pore limiting diameter (A)'].to_markdown()) [Table Searcher] Observation: | | Pore limiting diameter (A) | |-------|-------| | 11739 | 3.71515 | [Table Searcher] Final Thought: The pore limiting diameter for the material with the name 'YUSGID' is 3.71515 A. > Finished chain. Observation: The answer for the question "Search name YUSGID and provide information on its pore limiting diameter" is 3.71515 A. Thought: The search_csv tool provided the pore limiting diameter of YUSGID_clean. Now I need to find the pore limiting diameters of other materials for comparison. Action: search_csv Action Input: "Search for other materials and provide information on their pore limiting diameter" > Entering new TableSearcher chain... [Table Searcher] Thought: To search for other materials and provide information on their pore limiting diameter, we can use the pandas function loc[] to filter the dataframe based on a condition. We can then select the 'name' and 'Pore limiting diameter (A)' columns to display the information. [Table Searcher] Input: print(df.loc[:, ['name', 'Pore limiting diameter (A)']].to_markdown()) ValueError: | 2308.01423#61 | 2308.01423#63 | 2308.01423 | [
"2302.13971"
]
|
2308.01423#63 | ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks | The number of tokens has been exceeded. Figure S3. ChatMOF's answer to the question "How does the pore limiting diameter of YUSGID_clean compare with other materials?" using gpt-4 and gpt-3.5-turbo. The answer is successfully derived for gpt-4, but gpt-3.5-turbo raises a token-limit error. | 2308.01423#62 | 2308.01423#64 | 2308.01423 | [
"2302.13971"
]
|
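The Table Searcher steps shown in Figure S3 reduce to a few pandas operations. Below is a minimal, self-contained sketch of the same lookup-and-summarize logic; the file name coremof.csv is a hypothetical stand-in for the pre-computed lookup table, and only the column names visible in the transcripts are assumed.

```python
import pandas as pd

# Hypothetical CSV of pre-computed CoREMOF descriptors; the file name and
# column names mirror the transcripts above.
df = pd.read_csv("coremof.csv")

# Step 1: look up a single material by name, as the Table Searcher does.
pld = float(df.loc[df["name"] == "YUSGID", "Pore limiting diameter (A)"].iloc[0])

# Step 2: summarize the whole column instead of printing every row,
# which avoids the token-limit failure seen with gpt-3.5-turbo.
stats = df["Pore limiting diameter (A)"].describe()

print(f"YUSGID: {pld:.5f} A")
print(f"mean: {stats['mean']:.2f} A, range: {stats['min']:.2f}-{stats['max']:.2f} A")
print("below average" if pld < stats["mean"] else "above average")
```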
2308.01423#64 | ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks | What is the largest cavity diameter of ASOVEL_clean? What's the pore limiting diameter of XEXLUJ_clean? How large is the free pore diameter for DIBQUC? What's the density of ROLCEC19? Can you tell me the accessible surface area of ALIBUT? What is the non-accessible surface area of ARAHIM02? What is the accessible volume fraction for EREGOY_clean? Can you tell me about the accessible volume of UZANOZ? What's the non-accessible volume for NAHFOU? | 2308.01423#63 | 2308.01423#65 | 2308.01423 | [
"2302.13971"
]
|
2308.01423#65 | ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks | What type of metal is present in PITPEP_clean? Does DUBKAO have an open metal site? Can you tell me the type of open metal in FECYUJ? Can you provide the top 5 materials with the highest accessible surface area? Which materials have the highest non-accessible volume? What's the largest cavity diameter in materials with a density greater than a 7.0? Can you provide a list of materials with an open metal site? What's the accessible volume fraction for materials with a density less than a 0.25? How does the pore limiting diameter of YUSGID_clean compare with other materials? | 2308.01423#64 | 2308.01423#66 | 2308.01423 | [
"2302.13971"
]
|
2308.01423#66 | ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks | What materials have the largest free pore diameter? Is there an open metal site in NELVAC? Can you provide a list of materials with the highest pore limiting diameter? What materials have the largest accessible volume? What's the non-accessible surface area for WAWGOQ_clean? What is the type of open metal in YUNJIB? Can you list the top 10 materials with the highest non-accessible surface area? What's the largest cavity diameter in materials with a density less than a 0.8? Can cg901114e_si_002 have an open metal site? What materials have the largest accessible volume fraction? What is the pore limiting diameter of LOLREL? How does the accessible volume of OSIXEU compare with other materials? Can you provide the top 10 materials with the highest density? What's the largest free pore diameter in materials with a density less than a 0.8? How high is the non-accessible volume for FIJDIM05? What materials have the largest cavity diameter? What is the metal type in FOTNIN? How high is the accessible volume fraction for WOBCAT? Can you list materials with a pore limiting diameter greater than a 6.0? What is the density of materials with an accessible surface area above a 3000 m2/g? | 2308.01423#65 | 2308.01423#67 | 2308.01423 | [
"2302.13971"
]
|
2308.01423#67 | ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks | What is the type of open metal in IYAFIX? Can JILXOS have an open metal site? What materials have the largest non-accessible surface area? Can you list the top 5 materials with the highest largest free pore diameter? What is the accessible volume for MOJJUR? Can you provide the top 10 materials with the highest largest cavity diameter? What is the accessible surface area for materials with a pore limiting diameter below a 2.0? What is the non-accessible volume for WABTOK? What materials have the largest pore limiting diameter? What's the type of open metal in QOWRAV11? Can PODVUB have an open metal site? Can you list the top 10 materials with the highest accessible volume fraction? What is the metal type in QULMIU? How does the non-accessible surface area of YICTUZ compare with other materials? Can you provide the top 10 materials with the highest accessible volume? What's the pore limiting diameter in materials with a density greater than a 2.5? How high is the largest free pore diameter for DUFKAS? What materials have the largest density? What is the largest cavity diameter for TONTIB_clean? Can you list the top 5 materials with the highest non-accessible volume? What is the accessible surface area for UXUPEJ? What is the density of NATXIR? How does the largest free pore diameter of LITDAV compare with other materials? | 2308.01423#66 | 2308.01423#68 | 2308.01423 | [
"2302.13971"
]
|
2308.01423#68 | ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks | Can you provide the top 10 materials with the highest largest free pore diameter? What's the pore limiting diameter for CSMNOX? How high is the non-accessible surface area for MUZKAV_clean? What materials have the largest accessible surface area? What's the type of open metal in cm503311x_F-mil-53-Al-lp-mono? Can TAHYUZ have an open metal site? Can you list the top 5 materials with the highest density? What is the metal type in YUBTUM? How does the accessible surface area of TATFOL compare with other materials? Can you provide the top 5 materials with the highest accessible surface area? What's the largest cavity diameter for NELVAC? How high is the accessible volume for EPUJIJ_clean? What materials have the largest non-accessible surface area? What's the type of open metal in WAWGOQ_clean? Can YUNJIB have an open metal site? Can you list the top 5 materials with the highest accessible volume fraction? What is the metal type in JUKPAI_clean? How does the non-accessible surface area of HUZFIS compare with other materials? Can you provide the top 5 materials with the highest largest cavity diameter? What's the pore limiting diameter for XIPTAS_clean? How high is the largest free pore diameter for LOLREL? What materials have the largest density? What is the largest cavity diameter for YAVVUP? | 2308.01423#67 | 2308.01423#69 | 2308.01423 | [
"2302.13971"
]
|
2308.01423#69 | ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks | Can you list the top 5 materials with the highest non-accessible volume? What is the accessible surface area for FIJDIM05? What is the density of TUFTAR? How does the largest free pore diameter of FOTNIN compare with other materials? Can you provide the top 5 materials with the highest largest free pore diameter? What's the pore limiting diameter for LAXGOH02? How high is the non-accessible surface area for GEDQOX? What materials have the largest accessible surface area? What's the type of open metal in JILXOS? Can VAHSON have an open metal site? Can you list the top 5 materials with the highest density? What is the metal type in MOJJUR? How does the accessible surface area of REHVEH_clean compare with other materials? Can you provide the top 5 materials with the highest accessible surface area? What's the largest cavity diameter for NELVAC? Table S1. 100 questions in search task. At 77 K, what's the hydrogen uptake for ZAXQOG_clean at 100bar? Under standard temperature conditions of 298K and dilute system, can you list the materials with the highest O2 diffusivity? Is XEGKUR_clean thermally stable? What's the thermal stability of QINSUD_clean? | 2308.01423#68 | 2308.01423#70 | 2308.01423 | [
"2302.13971"
]
|
2308.01423#70 | ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks | Which materials have a bandgap between 1 and 2 eV? At room temperature (298K), what's the CO2 Henry coefficient for XEGKUR_clean? Can you provide the top 10 materials with the highest N2 diffusivity at 298K and infinity dilute system? Is the solvent removal stability of XEGKUR_clean strong? Can you list 5 materials with a bandgap near 1.6 eV when evaluated at 298K and 1bar? Under 77K and infinity dilute conditions, what's the hydrogen diffusivity for NISPEL_charged? What materials show the best performance in hydrogen uptake at 100bar considering a temperature of 77 K? How high is the O2 uptake of LOQLIN_SL at 298K and normal pressure? Can XAVDUV_clean stably store hydrogen under 100bar pressure when observed at 77K? At 298K and standard atmospheric pressure, how high is the N2 uptake for UXABUR_clean? What's the O2 diffusivity for OCUVUF_clean under conditions of 298K and dilute system? Is XAHWAG_clean stable when the solvent is removed? What are the top 5 materials with the highest CO2 Henry coefficient at room temperature? At 298K and dilute system, how high is the N2 diffusivity for XEGKUR_clean? What materials are known for their high O2 uptake at a temperature of 298K and 1 bar pressure? How high is the bandgap for XEGKUR_clean? | 2308.01423#69 | 2308.01423#71 | 2308.01423 | [
"2302.13971"
]
|
2308.01423#71 | ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks | What materials have the highest thermal stability? Can you show materials with a bandgap greater than 3.0 eV? At 298K and standard atmospheric pressure, how much O2 can XEGKUR_clean take up? What is the CO2 Henry coefficient for materials with bandgap less than 1.5 eV at 298K? At 77K, how does the hydrogen uptake of OCUVUF_clean compare with other materials at 100bar? Can you list materials with hydrogen diffusivity greater than 0.1 cm2/s at 77K and dilute system? What is the N2 diffusivity in XEGKUR_clean under 298K and dilute conditions? What is the thermal stability of materials with bandgap above 2.5 eV? Which materials demonstrate a high level of O2 diffusivity at 298K and dilute condition? How does the N2 uptake of XEGKUR_clean compare with others at 298K and 1 bar? Can QINSUD_clean remove solvents stably? How high is the thermal stability for NISPEL_charged? At 298K and 1 bar pressure, what is the O2 uptake of MIFROK_clean? Under standard conditions of 298K and 1 bar, which materials have the highest N2 uptake? Can you list materials that show solvent removal stability? How does the hydrogen diffusivity for QINSUD_clean compare with others at 77K and dilute system? Under conditions of 298K, what is the CO2 Henry coefficient for WOYJOL_clean? What materials have the best N2 diffusivity when measured at 298K and dilute system? | 2308.01423#70 | 2308.01423#72 | 2308.01423 | [
"2302.13971"
]
|
2308.01423#72 | ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks | Is the bandgap of XEGKUR_clean within the range 0.5-1.5 for photovoltaic applications? At standard conditions of 77K and 100 bar, which materials have the highest hydrogen uptake? How high is the bandgap for XEGKUR_clean when measured at 298K and under a pressure of 1 bar? Can you provide a list of materials with high thermal stability? At 77K and 100bar pressure, what is the hydrogen uptake of OCUVUF_clean? What is the solvent removal stability of XAHWAG_clean? Can GUCJAQ_clean diffusively handle N2 when evaluated at 298K and dilute system? What materials show high O2 uptake capabilities under conditions of 298K and 1bar? At 298K and dilute system, what is the diffusivity of O2 in XEGKUR_clean? How much N2 can DUVNIS01_charged take up at 298K and 1 bar? Under conditions of 298K, which materials have a high CO2 Henry coefficient? How much hydrogen can XEGKUR_clean take up at 100bar pressure when observed at 77 K? Under 298K and dilute conditions, how high is the O2 diffusivity for DUVNIS01_charged? At standard temperature (298K) and 1 bar pressure, can IZEHAX_clean hold a high volume of N2? At 298K and 1bar, how stable is JALCAD_clean when the solvent is removed? What is the thermal stability of COKMUM_clean? Can AYOYOE_clean handle a high volume of O2 when tested at 298K and 1 bar? At 298K and dilute system, what is the diffusivity of N2 in IHAJIL_clean? What materials show a high level of N2 diffusivity under conditions of 298K and 1 bar pressure? Is the bandgap of XEGKUR_clean higher or lower than 1.5 eV? At a temperature of 298K and a pressure of 1 bar, how does the O2 uptake of DUVNIS01_charged compare with others? Under standard ambient temperature (298K), what is the CO2 Henry coefficient for XEGKUR_clean? | 2308.01423#71 | 2308.01423#73 | 2308.01423 | [
"2302.13971"
]
|
2308.01423#73 | ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks | At 77K and 100 bar, can DUVNIS01_charged take up a significant volume of hydrogen? Considering conditions of 298K and 1 bar, how much N2 can IZEHAX_clean hold? At 77 K and dilute system, what is the hydrogen diffusivity for JALCAD_clean? Can COKMUM_clean remain stable when the solvent is removed? Which materials exhibit the best O2 diffusivity at 298K and dilute system? How high is the bandgap for XEGKUR_clean? What is the thermal stability of DUVNIS01_charged? Under a temperature of 298K and pressure of 1 bar, how high is the O2 uptake for IZEHAX_clean? At 298K and standard pressure, what materials have the highest N2 uptake? Can you list the top materials that demonstrate thermal stability? Given the conditions of 298K and dilute condition, what is the N2 diffusivity for QINSUD_clean? What materials are thermally stable? For a metal-organic framework at 298K and 1 bar, what is the CO2 Henry coefficient for KUGQIN_clean? | 2308.01423#72 | 2308.01423#74 | 2308.01423 | [
"2302.13971"
]
|
2308.01423#74 | ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks | What materials have a high bandgap? Considering 77K and 100bar conditions, how high is the hydrogen uptake of XEGKUR_clean? Under ambient conditions of 298K and 1 bar, what is the N2 uptake for DUVNIS01_charged? Can OCUVUF_clean remove solvents without compromising its stability? At a standardized temperature of 77K and dilute system, how does XAHWAG_clean's hydrogen diffusivity measure? What is the O2 uptake of GUCJAQ_clean at the typical conditions of 298K and 1 bar? Can GAJTUI_clean take up a significant volume of N2 under conditions of 298K and 1 bar? How stable is JEDJUY_clean at elevated temperatures? At standard ambient conditions, which materials exhibit a high CO2 Henry coefficient? Under the set conditions of 77K and dilute, how high is the hydrogen diffusivity of XEGKUR_clean? What is the solvent removal stability of DUVNIS01_charged? At 298K and a infinity dilute condition, how does OCUVUF_clean's N2 diffusivity compare to other materials? Which materials show a pronounced hydrogen uptake at 77K and 100 bar? Can XEGKUR_clean stably take up a large volume of O2 when examined at 298K and 1 bar? What is the thermal stability of DUVNIS01_charged? At standard conditions (298K and 1 bar), which materials are known for high O2 uptake? Can XEGKUR_clean stably remove solvents? | 2308.01423#73 | 2308.01423#75 | 2308.01423 | [
"2302.13971"
]
|
2308.01423#75 | ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks | How high is the bandgap of DUVNIS01_charged? Which materials exhibit the highest N2 diffusivity at a temperature of 298K and infinity dilute condition? Under conditions of 298K and 1 bar, how elevated is the N2 uptake for XEGKUR_clean? At the standardized 298K and infinity dilute system, what is the O2 diffusivity of DUVNIS01_charged? Under standard conditions, how high is the CO2 Henry coefficient for IZEHAX_clean? At 77K and 100 bar pressure, can JALCAD_clean stably accommodate a large volume of hydrogen? Under the given conditions of 77K and dilute system, which materials have the highest hydrogen diffusivity? Is XEGKUR_clean thermally stable? Can you provide a list of materials with a bandgap exceeding 2 eV? | 2308.01423#74 | 2308.01423#76 | 2308.01423 | [
"2302.13971"
]
|
2308.01423#76 | ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks | Table S2. 100 questions in prediction task. Can you generate the structures with the largest surface area? I'm looking for structures with a hydrogen uptake of about 500 cm3/cm3 at 100bar and 77K, can you generate those? Can you generate structures with a density close to 0.4? What structures can you generate with a smallest accessible pore volume? I'm interested in structures with a low O2 uptake at 298K and 1bar, can you generate these? I'm curious about structures where accessible volume fraction is less than 0.4, can the model generate these? Can the model generate structures where the largest cavity diameter is in the bottom quartile? I am curious about structures where the thermal stability is the highest among all, can you generate those? What structures can be generated with a N2 uptake in the 3rd quartile range? Can you create a MOF with a void fraction greater than 0.5 that has the highest CO2 Henry coefficient at 298 K? Table S3. 10 questions in generation task. | 2308.01423#75 | 2308.01423 | [
"2302.13971"
]
|
|
2308.00675#0 | Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models | arXiv:2308.00675v1 [cs.CL] 1 Aug 2023 # Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models Cheng-Yu Hsieh1*, Si-An Chen2*, Chun-Liang Li3, Yasuhisa Fujii4, Alexander Ratner1, Chen-Yu Lee3, Ranjay Krishna1†, Tomas Pfister3† 1University of Washington, 2National Taiwan University, 3Google Cloud AI Research, 4Google Research [email protected] # Abstract Today, large language models (LLMs) are taught to use new tools by providing a few demonstrations of the tool's usage. Unfortunately, demonstrations are hard to acquire, and can result in undesirable biased usage if the wrong demonstration is chosen. Even in the rare scenario that demonstrations are readily available, there is no principled selection protocol to determine how many and which ones to provide. As tasks grow more complex, the selection search grows combinatorially and invariably becomes intractable. Our work provides an alternative to demonstrations: tool documentation. We advocate the use of tool documentation, i.e., descriptions of individual tool usage, over demonstrations. We substantiate our claim through three main empirical findings on 6 tasks across both vision and language modalities. First, on existing benchmarks, zero-shot prompts with only tool documentation are sufficient for eliciting proper tool usage, achieving performance on par with few-shot prompts. Second, on a newly collected realistic tool-use dataset with hundreds of available tool APIs, we show that tool documentation is significantly more valuable than demonstrations, with zero-shot documentation significantly outperforming few-shot without documentation. Third, we highlight the benefits of tool documentation by tackling image generation and video tracking using just-released, unseen state-of-the-art models as tools. Finally, we highlight the possibility of using tool documentation to automatically enable new applications: by using nothing more than the documentation of GroundingDINO, Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the just-released Grounded-SAM [23] and Track Anything [70] models. # 1 Introduction | 2308.00675#1 | 2308.00675 | [
"2302.13971"
]
|
|
2308.00675#1 | Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models | Today, large language models (LLMs) summon the imagery of a craftsman: when asked to solve a complex task, they decompose the task into simpler sub-tasks and assemble the best possible tools to tackle each sub-task [51, 72]. For example, consider the complex task of question answering given the image in Figure 1. To answer "whether the two magnets will attract or repel each other", the LLM needs to identify the positions of the magnets in the image and extract general knowledge explaining that "opposite (same) poles attract (repel)" | 2308.00675#0 | 2308.00675#2 | 2308.00675 | [
"2302.13971"
]
|
2308.00675#2 | Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models | . Just like a competent craftsman who knows what each of their tools is capable of, an LLM with such knowledge of its tools will be able to invoke one tool (e.g., its Text Detector) to identify the north and south poles and a second tool (e.g., Knowledge Retriever) to extract pertinent background knowledge about magnetic forces. But how does an LLM know which tool is capable of what? *Work done as student researchers at Google Cloud AI Research. †The authors contributed equally to this work. Preprint. Under review. | 2308.00675#1 | 2308.00675#3 | 2308.00675 | [
"2302.13971"
]
|
2308.00675#3 | Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models | [Figure 1 (schematic): Input: the question "Will these magnets attract or repel each other?" with choices [repel, attract] and an image of two bar magnets. Plan: the Large Language Model selects from the tool set (Text Detector, Knowledge Retriever, Search Engine, Image Captioner, Calculator, Solution Generator); the Text Detector reads the pole labels ("S", "N") and their positions; the Knowledge Retriever recalls that a magnet has two poles, that different poles closest together attract, and that the same poles closest together repel; the Solution Generator concludes that the magnet on the left has a north pole facing the south pole of the magnet on the right, so opposite poles attract.] | 2308.00675#2 | 2308.00675#4 | 2308.00675 | [
"2302.13971"
]
|
2308.00675#4 | Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models | Figure 1: Example workflow of tool-using with LLMs to solve a multi-modal question answering task. Given the input question with an image, the LLM selects appropriate tools from the tool set and generates an execution plan to answer the question correctly. Here, the LLM outlines a plan to first use Text Detector to understand the positioning of the magnets in the image, then leverage Knowledge Retriever to obtain relevant background knowledge about magnets, and finally generate the solution based on the previous steps. Currently, LLM tool-usage provides LLMs with few-shot demonstrations (demos) of what its tools can do, hoping that these demos will help generalize the model's behavior to newer complex tasks. This process has been rather successful so far. These few-shot demos contain one or several exemplars of <input, output> mappings [68] on given instructions and their corresponding tool-use plans (illustrated in Figure 2). LLMs are expected to find patterns within these demos and generalize them for new tasks. On textual tasks, LLMs presented with demos of calculators [15, 47, 56], Python interpreters [13, 18], and search engines [62, 43, 50, 56, 40] can perform logical and arithmetic operations to obtain more accurate and factual knowledge. On visual tasks, LLMs with demos of pretrained vision models can do complex visual reasoning [37, 40, 57, 16, 73], and can generate and even edit images [19, 9]. On embodied robotic tasks, LLMs can similarly be used to reason and plan [75, 21, 1, 17]. We argue that this reliance on demos in tool use is unnecessary in some cases, and might even be limiting. In fact, recent work finds that LLMs tend to be sensitive to demos [81], and carefully selecting demos is needed to avoid biasing or overfitting to a particular usage [12]. This leads to the follow-up question: how do we choose which few-shot demos to use? There are no known principled approaches to select demos without human intervention, or to even efficiently enable humans to choose or create them. To make matters worse, when we scale up the number of tools that LLMs have access to, this few-shot selection process becomes combinatorially intractable. | 2308.00675#3 | 2308.00675#5 | 2308.00675 | [
"2302.13971"
]
|
2308.00675#5 | Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models | Just as a craftsman does not need to see a new tool being demonstrated and can instead discern its capabilities from reading a user manual for the tool, we seek to enable LLMs to learn how to use tools without seeing any demos. Our work provides an alternative to demonstrations: tool documentation (doc). Similar to the metaphor of a manual indicating a physical tool's capabilities, a software tool's docs outline what the tool can and cannot be used for and how to invoke it. Docs provide relatively neutral instruction about the tools' functionalities and how individual tools should be used (illustrated in Figure 2), and they are usually conveniently available through the creation of the tools organically. Intuitively, just as the craftsman learns to use a new tool by reading the manual, we provide LLMs with README files when encountering a new tool/repository. With docs, an LLM may not necessarily need demos to use a new tool. Distinct from existing work that relies mostly on few-shot demos for tool-learning, in this work, we study whether LLMs can instead solely rely on docs to use tools. We study the tool-learning performance of LLMs as we include or exclude docs, and vary the number of demos from few-shot down to zero-shot. We conduct the experiments on 6 tasks across vision and text modalities. Our experiments show that: | 2308.00675#4 | 2308.00675#6 | 2308.00675 | [
"2302.13971"
]
|
2308.00675#6 | Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models | • Surprisingly, when provided with tool docs, LLMs' zero-shot tool-using performance is on par with or even better than their few-shot counterparts, showing that including docs is an effective way to sidestep the few-shot demos needed. [Figure 2 (schematic): a demonstration consists of example questions (e.g., "Which property do these objects have in common?" over images of chips, a pretzel, and fries) paired with tool-use plans such as Text Detector → Knowledge Retriever → Solution Generator; documentation instead lists the available tools and their functionalities, e.g., Text Detector: detects the text in an image; Knowledge Retriever: retrieves relevant knowledge; Search Engine: searches the web for relevant info; Image Captioner: generates a caption for an image.] Figure 2: Two types of knowledge for prompting LLMs for tool-use: Demonstrations (demos) and Documentations (docs). Demos consist of <input, output> pairs on input instructions and their corresponding output tool-use plans. They require manual efforts for careful curation on every new task, and the model performance can be sensitive to which demos are used [81, 12]. Many demos may also be necessary for good coverage when the number of tools scales up. On the other hand, docs provide descriptions for the tool functionality, and are usually organically available for tools. | 2308.00675#5 | 2308.00675#7 | 2308.00675 | [
"2302.13971"
]
|
2308.00675#7 | Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models | • Building on the above finding, we relax the few-shot demo constraint, and show that we can efficiently scale up to a significantly larger tool set, on a newly collected API usage dataset, by simply providing the LLMs with docs. • We show how to seamlessly add new tools along with their docs to a tool set for LLMs to solve unseen tasks on image editing and video tracking, all without any further demos, in a plug-and-play manner. • Finally, with recently developed, unseen tools as building blocks, we showcase that LLMs are capable of re-inventing popular yet even more recent works, Grounded-SAM [23] and Track Anything [70], which suggests a potential path from zero-shot tool usage to automatic knowledge discovery. # 2 Related work LLMs with retrieval augmentation and tools. Despite the remarkable achievements demonstrated by LLMs, their performance can be further boosted with external tool usage to become more accurate, efficient, or versatile for wider applications. The authors in [51] detailed the cognitive origins, the paradigm shift of foundation models, and the complementary roles of tools and models to LLMs. Example tool usage ranges from knowledge retrieval [6, 20, 33, 74, 77] to search engines [43, 31, 32, 62, 58, 46, 40], QA systems [56], calculators [15, 47, 56], the Python interpreter [18, 13, 65, 24, 46, 16], simulation engines [37], machine learning models [57, 73, 69, 40, 16], and even tools created by LLMs [11]. Pioneering works of LLMs with tools often rely on human supervision [62, 31] or additional self-supervised learning techniques [56], which pose challenges for practical plug-and-play usage. Recent advancements eliminate additional training by using example demos in the prompt [19, 75, 73, 57, 40, 46]. Our work further simplifies prompt design by only leveraging documentation for individual tools, while maintaining competitive performance. Planning with LLMs. Language models have been shown to be capable of planning for complex tasks, or of decomposing complex tasks into sub-problems, when prompted properly. | 2308.00675#6 | 2308.00675#8 | 2308.00675 | [
"2302.13971"
]
|
2308.00675#8 | Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models | [21, 22] retrieve demos at test-time with large knowledge space coverage to generate admissible actions. [28] relies on pre-designed demos for task decomposition. Similarly, recent works on tool use with LLMs leverage example demonstrations of solving example tasks with tool-use plans [13, 19, 75, 73, 57, 40, 46]. However, crafting demos of interactions between tools may be challenging in practice when the number of tools surges. Concurrent work [48, 52, 71] tackles the challenge by using strong LLMs such as GPT-4 [45] to create large instruction-following datasets that cover diverse instructions and corresponding tool-use plans, typically through mechanisms like | 2308.00675#7 | 2308.00675#9 | 2308.00675 | [
"2302.13971"
]
|
2308.00675#9 | Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models | self-instruct [66]. The resultant datasets can then be used to finetune and equip other LLMs (e.g., LLaMA [63] and OPT [79]) with the ability to use a large collection of tools for unseen instructions. On the other hand, our work showcases the potential for LLMs to utilize any unseen new tools by reading their tool docs. Demonstration and Documentation. Learning from demonstration is popular in reinforcement learning [49, 4, 44, 55]. [8] propose the in-context learning algorithm for efficient and effective downstream task adaptation through showing example demonstrations. Inspired by this success, most existing LLM tool-using works rely on few-shot demonstrations [13, 19, 75, 73, 57, 40, 46]. However, [12] show that having more example demonstrations might counter-intuitively degrade performance, and that a careful selection might be needed. [35] proposes a retrieval method for demo selection, which implicitly requires a larger set of examples to select from. Using documentation to improve algorithms is relatively under-explored. [7, 82] propose document reading algorithms for specific games. [83] introduced DocPrompting, which employs a retriever trained on the given training data to boost code generation by retrieving relevant documents. In this work, we take a step towards exploring zero-shot tool planning in LLMs solely with the aid of documentation, and investigate a wide range of diverse tasks from language to vision domains. While [64, 42] showcase the pure zero-shot planning capability of LLMs, they study neither tool usage nor scenarios unseen to the language models. ViperGPT [16] is a concurrent work, which focuses on visual programming in Python and uses function implementations and specifications as documentation. Lastly, while AutoGPT [3] provides several demos that showcase the LLM's capability of tool use through documentation reading, our study focuses on a systematic exploration ranging from real-world use cases to academic benchmarks. # 3 Experimental setup # 3.1 General workflow We follow the general framework of tool-using with LLMs in [51], which encompasses many of the recent works [75, 27, 19, 57, 73, 69, 40]. | 2308.00675#8 | 2308.00675#10 | 2308.00675 | [
"2302.13971"
]
|
2308.00675#10 | Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models | Specifically, given a natural language instruction, an LLM planner generates a program to be sequentially executed, where each step of the program may rely on using tools selected from a tool set. After the program is generated, it is executed by an environment, which finally returns the execution results. Here, the program extends beyond conventional coding practice [76, 53, 25] and is more closely associated with automata theory [59]: a set of instructions over automations (e.g., tools in our case). Therefore, the tool set can be libraries in specific programming languages (e.g., Python), or general computations with properly defined input-output, such as trained models, API calls, and beyond. # 3.2 Tool-use prompting methods As discussed in Section 1, two main types of information are considered in prompting LLMs for tool-use plans: demonstrations (demos) and documentations (docs). Demos showcase how tool interactions can accomplish specific tasks, while docs describe individual tool functionalities without task-specific ties, as shown in Figure 2. In the experiments, we explore combinations of including/excluding docs and demos in prompts, as well as varying numbers of demos. # 3.3 Evaluation tasks We conduct our experiments on 6 tasks across multiple modalities with a variety of tool sets. We describe the setup and the tool set for each task below; a minimal sketch of the general planner-executor workflow follows the first task description. Except for specific cases where it is explicitly specified, the LLM planner is ChatGPT (gpt-3.5-turbo). Multi-modal question answering on ScienceQA. ScienceQA [39] consists of multi-modal multiple-choice science questions that require language and visual understanding as well as domain-specific knowledge to answer correctly. On ScienceQA, we follow the setup used in Chameleon [40] and employ the same tool set with 7 tools, such as the search engine and the image text detector. | 2308.00675#9 | 2308.00675#11 | 2308.00675 | [
"2302.13971"
]
|
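As referenced above, here is a minimal sketch of the Section 3.1 planner-executor loop. It is illustrative only: call_llm is a hypothetical stand-in for any chat-completion API, and the one-step-per-line program format (VAR = TOOL(args)) is an assumption modeled on VisProg-style programs, not the authors' exact implementation.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM client (e.g., gpt-3.5-turbo)."""
    raise NotImplementedError("plug in a chat-completion API here")

def build_prompt(instruction, docs=(), demos=()):
    """Assemble the planner prompt from tool docs and/or few-shot demos
    (the prompt combinations studied in Section 3.2)."""
    parts = []
    if docs:
        parts.append("Tool documentation:\n" + "\n".join(docs))
    if demos:
        parts.append("Demonstrations:\n" + "\n".join(demos))
    parts.append(f"Instruction: {instruction}\nProgram:")
    return "\n\n".join(parts)

def execute(program: str, tools: dict):
    """Environment: run each generated step in order, storing intermediate
    results so later steps can reference earlier outputs by name."""
    env = {}
    for line in filter(None, map(str.strip, program.splitlines())):
        var, call = (s.strip() for s in line.split("=", 1))
        name, argstr = call.rstrip(")").split("(", 1)
        kwargs = {}
        for pair in filter(None, argstr.split(",")):
            key, val = (s.strip() for s in pair.split("="))
            kwargs[key] = env.get(val, val.strip("'\""))  # variable or literal
        env[var] = tools[name](**kwargs)
    return env
```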
2308.00675#11 | Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models | Tabular math reasoning on TabMWP. TabMWP [41] is a math reasoning dataset with various forms of tables. It requires a model to understand structured or domain-specific tables, and utilize the [Figure 3, first part (schematic): Question: "Here is a new cloud service called LLMVM, which provides its own SDK CLI tool (llmcloud). Create a firewall allowing port 8000. Please touch a file my_file, then copy it to the server with port 8000." Answer (in GCP): gcloud compute firewall-rules create NAME --allow tcp:8000; touch my_file; gcloud compute scp --port 8000 my_file my_vm:./ ...] | 2308.00675#10 | 2308.00675#12 | 2308.00675 | [
"2302.13971"
]
|
2308.00675#12 | Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models | [Figure 3, continued (schematic): the question finally asks to publish a message "Hi" to the project; the GCP answer is gcloud pubsub topics create TOPIC followed by gcloud pubsub topics publish PROJ --message "hi". In the comparison, the zero-shot and few-shot columns hallucinate or misconfigure commands (e.g., the wrong command scp -P 8000 /path/to/my_file user@server_ip:~, the wrong flag --P 8000, and a missing topic-creation step), while the documentation column produces the correct commands: llmcloud comp firewall-rules create NAME --allow tcp:8000; touch my_file; llmcloud comp scp --port 8000 my_file my_vm:./; llmcloud pubsub topics create TOPIC; llmcloud pubsub topics publish PROJ --message "Hi".] Figure 3: The new LLM Cloud Platform command-line toolkit, which is an unseen toolset to existing LLMs, based on real-world Google Cloud command-line tools through renaming. | 2308.00675#11 | 2308.00675#13 | 2308.00675 | [
"2302.13971"
]
|
2308.00675#13 | Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models | information to answer corresponding math questions. On TabMWP, we also follow Chameleon [40] with the same tool set with 9 tools, such as the program generator and column lookup. Multi-modal reasoning on NLVRv2. NLVRv2 [60] requires the model to verify whether a statement is true on a pair of images, requiring compositional understanding of both texts and images. On NLVRv2, we follow the setup used in Visual Programming (VisProg) [19] with 20 vision modules (tools) for image understanding and manipulation. Since VisProg relies only on few-shot demonstrations and does not utilize documentation for the modules, we generate the documentation for each module by including a description of the module's functionality and the function signature. We provide the full documentation we use for each module in the appendix; a sketch of what such an entry can look like is given below. | 2308.00675#12 | 2308.00675#14 | 2308.00675 | [
"2302.13971"
]
|
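For illustration, here is what a generated module-doc entry pairing a functionality description with a function signature can look like. The module names, wording, and signatures below are hypothetical sketches in the VisProg style, not the verbatim docs from the paper's appendix.

```python
# Hypothetical documentation entries for VisProg-style vision modules.
LOC_DOC = (
    "LOC: localizes objects in an image given a category name.\n"
    "Usage: BOX = LOC(image=IMAGE, object='dog')\n"
    "Returns: bounding boxes [x1, y1, x2, y2] for the matched objects."
)

COUNT_DOC = (
    "COUNT: counts the number of boxes in a box list.\n"
    "Usage: ANSWER = COUNT(box=BOX)\n"
    "Returns: an integer count."
)

# The per-module docs are concatenated into the planner prompt.
MODULE_DOCS = [LOC_DOC, COUNT_DOC]  # ... plus the remaining modules
```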
2308.00675#14 | Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models | Unseen API usage on a newly collected dataset. Existing benchmarks used in the literature come with a limited set of tools. To explore real-world use cases involving a large number of tools, we collect a new benchmark called the LLM Cloud CLI, which consists of 200 commands representing the functionalities of the Google Cloud Platform (GCP) command-line interface (CLI). Each command in our CLI is renamed from its corresponding GCP command, preserving the semantics and logic of the original tools while being unseen to the language models. For instance, the command gcloud compute create NAME, responsible for creating a virtual machine, is renamed llmvm compute make NAME. The renaming conventions also allow us to utilize authentic GCP examples as few-shot demos and to leverage the corresponding GCP documentation. The benchmark comprises 50 questions, each focused on creating and configuring specific cloud services using command-line tools. Each question requires at least two commands to complete the task. | 2308.00675#13 | 2308.00675#15 | 2308.00675 | [
"2302.13971"
]
|
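Such a renaming scheme can be realized as a simple token-level substitution. The sketch below is an assumption about the mechanics: only the gcloud compute create NAME → llmvm compute make NAME pair comes from the text; the rest of the mapping is hypothetical.

```python
# Hypothetical token-level rename map from real GCP CLI vocabulary to the
# unseen LLM-Cloud vocabulary; only the gcloud/llmvm pair is from the paper.
RENAMES = {
    "gcloud": "llmvm",
    "create": "make",
}

def rename_command(cmd: str) -> str:
    """Rewrite a GCP command into its LLM-Cloud counterpart, preserving
    the semantics and argument structure of the original tool."""
    return " ".join(RENAMES.get(tok, tok) for tok in cmd.split())

assert rename_command("gcloud compute create NAME") == "llmvm compute make NAME"
```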
2308.00675#15 | Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models | We show an example in Figure 3, and include more in the appendix. Due to the length constraints of the LLM we use, we cannot fit the documentation of 200 tools in a single prompt. Therefore, we employ a simple TF-IDF search, using the questions as queries, to retrieve the most relevant documentations and truncate them to fit within the prompt length (a sketch follows below). More details can be found in the appendix. Image editing with natural language. We consider image editing as a form of qualitative evaluation. This process calls for the model to plan and use different vision modules to handle complex natural language instructions. For instance, to execute an instruction like "replace the red bus with a green bicycle", the model must localize the red bus, generate its segmentation mask, and then inpaint the masked area. We use the tool sets from VisProg. Unlike VisProg, which depends on few-shot demonstrations, our model only looks at the module documentation. We further include the recently released image understanding works Segment Anything (SAM) [30] and Grounding DINO [38] to expand the tool set, testing the zero-shot capability on new and unseen tools in a plug-and-play fashion. Video tracking. Video tracking is also utilized in this study as a qualitative evaluation. This task aims to acquire the masks of a tracked object in each frame of a video, necessitating the deployment of processes such as object localization, segmentation, and tracking. In addition to SAM and Grounding DINO, we incorporate the documentation of an unseen object tracking module, XMem [14], into the VisProg framework to showcase the model's ability to adapt and employ new tools without the need for explicit demonstrations, again on a different task. | 2308.00675#14 | 2308.00675#16 | 2308.00675 | [
"2302.13971"
]
|
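The TF-IDF retrieval step can be sketched with scikit-learn as below. This is a minimal illustration of the described recipe (query with the question, rank docs by similarity, truncate to fit the prompt); the top-k value and the 600-word truncation default are assumptions, the latter loosely motivated by the length ablation in Section 4.4.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def retrieve_docs(question: str, docs: list[str], k: int = 5,
                  max_words: int = 600) -> list[str]:
    """Rank tool docs by TF-IDF similarity to the question, keep the top-k,
    and truncate each to fit within the prompt length."""
    matrix = TfidfVectorizer().fit_transform(docs + [question])
    sims = cosine_similarity(matrix[-1], matrix[:-1])[0]
    top = sims.argsort()[::-1][:k]
    return [" ".join(docs[i].split()[:max_words]) for i in top]
```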
2308.00675#16 | Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models | [Figure 4: line plots of tool-using performance versus the number of demos on ScienceQA, TabMWP, and NLVRv2, with and without documentation.] Figure 4: Tool-using performance with gpt-3.5-turbo on different benchmarks, covering language to vision modalities. We report results with and without documentation (doc) and demonstrations (demo), and their combinations. Clearly, documentation only (upper-left blue dot) shows competitive performance across all datasets. # 4 Empirical findings We showcase the importance of tool documentation in three ways: First, we show that tool documentation reduces the need for demonstrations (Section 4.1). Second, based on this finding, we further show that relying on documentation rather than demonstrations provides a more scalable solution to equip LLMs with a large number of available tools (Section 4.2). Finally, we show that with tool documentation alone, LLMs are able to comprehend and utilize the most recent vision models to accomplish impressive results on image editing and video tracking tasks, on which existing results are achieved either with human-crafted demos or predefined procedures (Section 4.3). # 4.1 Documentations sidestep the need for demonstrations In this section, we show how tool documentation reduces the need for demonstrations. | 2308.00675#15 | 2308.00675#17 | 2308.00675 | [
"2302.13971"
]
|
2308.00675#17 | Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models | We present the findings on three datasets: ScienceQA, TabMWP, and NLVRv2. We evaluate the model performance, with and without tool documentation, across varying numbers of demonstrations (demos) on each dataset. In Figure 4, we see that when provided with tool docs, the model is able to maintain stable performance as we reduce the number of demos used. In fact, without using any demos (i.e., 0-shot), the model is able to achieve performance on par with using 16-shot on TabMWP, and with using 12-shot on NLVRv2. On ScienceQA, the model can even achieve better performance solely with docs compared to additionally using 10-shot demos. On the other hand, without tool docs, the model performance is very sensitive to the number of demos used. As we decrease the number of demos, we see a significant performance drop on all three datasets. This highlights the importance of tool docs and shows that they provide an effective way to reduce the reliance on demos. In Table 1, when compared to existing baseline methods, we also see that with docs, even 0-shot can perform very competitively. By sidestepping the need for demos, we are able to alleviate the effort needed to carefully curate these demos. For example, aligned with recent studies [81, 12], we observe in Figure 4 that the model performance is sensitive to which demos are used, shown by the large performance variances under 5-shot on ScienceQA and 2-shot on NLVRv2. # 4.2 Documentations enable efficient scaling on tool-using The findings in Section 4.1 show that one can in fact reduce the reliance on few-shot demos with tool docs. By relaxing this constraint, we study whether tool docs enable a more scalable way to equip LLMs with a large number of tools, wherein few-shot demos can specifically fall short in covering limited tool-use cases. We present our findings in this section on the newly collected LLM Cloud CLI dataset with 200 available tools. Qualitative walk-through result. Figure 3 serves as a qualitative example illustrating the limitations of the LLMs with different information. As expected, the zero-shot LLM successfully identifies and responds to the touch command, which is familiar and well-known. However, when faced with the | 2308.00675#16 | 2308.00675#18 | 2308.00675 | [
"2302.13971"
]
|
2308.00675#18 | Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models | Table 1: Comparisons to existing baseline methods on different benchmarks. We follow [40, 19] to select the baseline methods for each benchmark task. We see that 0-shot with doc performs competitively, outperforming CoT and PoT on ScienceQA and TabMWP. On NLVRv2, ViLT-NLVR is finetuned on the dataset, while the LLM performs in a zero-shot fashion. | Benchmark | Method | Performance | |---|---|---| | ScienceQA | CoT [67] | 78.54 | | ScienceQA | without doc (0-shot) | 78.25 | | ScienceQA | with doc (0-shot) | 79.91 | | TabMWP | PoT [13] | 89.28 | | TabMWP | without doc (0-shot) | 84.13 | | TabMWP | with doc (0-shot) | 92.69 | | NLVRv2 | ViLT-NLVR [29] | 76.30 | | NLVRv2 | without doc (0-shot) | 0.00 | | NLVRv2 | with doc (0-shot) | 63.40 | [Figure 5: bar charts of average F1 versus the number of demos (0, 5, 10, 15) for text-davinci-002 and gpt-3.5-turbo, with and without documentation.] Figure 5: Command planning of LLM Cloud Platform CLI with and without documentation (doc) and demonstrations (demo), and their combinations. Few-shot demonstration without documentation results in unsatisfactory performance due to low coverage of the large number of tools, while reading documentation significantly boosts performance. unseen LLM-Cloud command lines, the zero-shot LLM fails to generate accurate responses involving these unfamiliar tools due to its lack of knowledge regarding their syntax and usage. While few-shot demonstrations have the potential to enhance model performance, it is important to acknowledge that their coverage is limited due to the vast number of command-line tools. Consequently, certain commands or flags may not be adequately covered. In Figure 3, although data copying commonly appears in the few-shot examples, the model encounters difficulties in correctly configuring the less common flag --port, instead hallucinating the use of -P based on familiarity with the scp -P command in Linux. | 2308.00675#17 | 2308.00675#19 | 2308.00675 | [
"2302.13971"
]
|
2308.00675#19 | Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models | â e- Withdoc â *â Without doc text-davinci-002 gpt-3.5-turbo 0.5 05 gas © Wee Of O42 04 O37 035 035 o35 Of = A £8: +o 5 0.3 0.3 a 4 0.2 0.2 0,18 0,19 018 he Od ee 0.1 0.1 oloz oloz 0,05 0,05 0.0 0.0 0 5 10 15 0 5 10 15 Number of demos Number of demos Figure 5: Command planning of LLM Cloud Platform CLI with and without documentation (doc) and demonstations (demo), and their combinations. Few-shot demonstration without documentation results in unsatisfactory performance due to low coverage of large number of tools, while reading documentation significantly boosts the performance. unseen LLM-Cloud command lines, the zero-shot LLM fails to generate accurate responses involving these unfamiliar tools due to its lack of knowledge regarding their syntax and usage. While few-shot demonstrations have the potential to enhance model performance, it is important to acknowledge that the coverage of these demonstrations is limited due to the vast number of command-line tools. Consequently, certain commands or flags may not be adequately covered. In Figure 3, although we observe data copying is commonly appeared the few-shot examples, however, the model encounters difficulties in correctly configuring the less common flag --port, instead hallucinating the use of -P based on familiarity with the scp -P command in Linux. Conversely, in the same example illustrated in Figure 3, by solely utilizing the provided documentation, the language models not only successfully discern the steps required for utilizing tools (such as a hidden step of creating a topic before sending messages), but also possess the ability to accurately configure flags (e.g., --port) by leveraging information extracted from the documentation. Quantitative comparisons. We calculate the command-line level F1 score of each example and report the average F1 across 50 examples. Figure 5 showcases the performance of various LLMs in the zero-shot setting, where they have no prior exposure to the LLM-Cloud command-line tools we create. As anticipated, all zero-shot LLMs demonstrate low F1 scores. | 2308.00675#18 | 2308.00675#20 | 2308.00675 | [
"2302.13971"
]
|
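A command-line-level F1 can be computed as below. This sketch treats each command line as a multiset element and is an assumption about the exact matching granularity, which the text does not fully specify.

```python
from collections import Counter

def command_f1(pred: list[str], ref: list[str]) -> float:
    """F1 between predicted and reference command lines (multiset overlap)."""
    overlap = sum((Counter(pred) & Counter(ref)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(pred), overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

# Averaged over the 50 benchmark questions:
# mean_f1 = sum(command_f1(p, r) for p, r in examples) / len(examples)
```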
2308.00675#21 | Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models | [Figure 6 (schematic): given the documentation of the newly added tools (DINO_LOC: open-set object detection by Grounding DINO, usage BOX = DINO_LOC(...); SAM_SEG: object segmentation by SAM, usage OBJ = SAM_SEG(...); TRACK: video object tracking by XMem, usage OBJ = TRACK(...); plus the documentation of the other APIs), the LLM generates programs such as, for "Replace the bench with a blue sofa": BOX0 = DINO_LOC(image=IMAGE, object='bench'); OBJ0 = SAM_SEG(image=IMAGE, box=BOX0); IMAGE0 = REPLACE(image=IMAGE, object=OBJ0, prompt='blue sofa'); FINAL_ANS = RESULT(var=IMAGE0), and analogously for "Change the hair color" with prompt='red hair', re-inventing Grounded-SAM; and for "Track the cat in the video": IMAGE = EVAL(expr="{VIDEO}[0]"); BOX0 = DINO_LOC(image=IMAGE, object='cat'); OBJ0 = SAM_SEG(image=IMAGE, box=BOX0); VIDEO0 = TRACK(video=VIDEO, object=OBJ0); FINAL_ANS = RESULT(var=VIDEO0), re-inventing Track Anything.] | 2308.00675#20 | 2308.00675#22 | 2308.00675 | [
"2302.13971"
]
|
2308.00675#22 | Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models | Figure 6: Plug-and-play new vision tools without demonstration. We add GroundingDINO [38], Segment Anything (SAM) [30], and XMem [14] as new tools for VisProg. Solely with the documentations of the new tools, the LLM is able to automatically " | 2308.00675#21 | 2308.00675#23 | 2308.00675 | [
"2302.13971"
]
|
2308.00675#23 | Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models | re-invent" recent Grounded-SAM [23] and Track Anything [70] without knowing these derivatives, taking a further step toward automatic knowledge discovery. We further compare the performance of documentation reading with that of documentation supplemented with few-shot demonstrations. In the case of text-davinci-002, with documentation only, it achieves an F1 score of 0.37. Conversely, the documentation augmented with different shots yields an average F1 score of 0.35. Similarly, in the gpt-3.5-turbo experiment, the performance with different shot demonstrations (0.44, 0.44, 0.42) is consistently lower than the documentation-only performance (0.45). These results highlight two observations. First, the performance of the model is highly sensitive to the selection of few-shot demonstrations. This observation aligns with the finding in [12] that more few-shot demos might be redundant and even degrade performance due to spurious correlations. It emphasizes the importance of careful selection and design, which may involve more human effort. Second, the zero-shot documentation-reading baseline exhibits remarkable robustness and delivers competitive performance across both examples. This highlights the potential value and reliability of relying solely on the documentation, which is usually easy to obtain for many packages and tools. # 4.3 Plug-and-play with new image and video tools In this section, we validate that one can equip LLMs with unseen tools to solve novel tasks solely with tool docs, and without any further demos. We present our results on image editing and video tracking tasks. We show that LLMs can effectively re-invent existing human-programmed image editing and video tracking pipelines, backed by state-of-the-art vision models, to achieve impressive results. Recent advancements in vision models, including GroundingDINO [38], an advanced open-set object detector; Segment Anything (SAM) [30], a cutting-edge image segmentation tool; and XMem [14], a | 2308.00675#22 | 2308.00675#24 | 2308.00675 | [
"2302.13971"
]
|
2308.00675#24 | Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models | state-of-the-art video object segmentation tool, accompany the progress of language models. These breakthroughs, emerging in the past year, serve as additional tools that are yet unfamiliar to our LLM (gpt-3.5-turbo). By expanding VisProg to include these new tools, we embark on the intriguing exploration of whether LLMs can effortlessly comprehend the documentation associated with these new models, and combine these tools in a plug-and-play manner, enabling a wide range of applications. | 2308.00675#23 | 2308.00675#25 | 2308.00675 | [
"2302.13971"
]
|
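In practice, this plug-and-play extension amounts to appending new doc entries to the planner prompt. The entries below are hedged paraphrases of the kind of documentation shown in Figure 6, not the verbatim docs used in the experiments.

```python
# Hypothetical doc entries for the three newly added tools (cf. Figure 6).
NEW_TOOL_DOCS = {
    "DINO_LOC": ("Open-set object detection by Grounding DINO. "
                 "Usage: BOX = DINO_LOC(image=IMAGE, object='cat')"),
    "SAM_SEG": ("Object segmentation by Segment Anything (SAM). "
                "Usage: OBJ = SAM_SEG(image=IMAGE, box=BOX)"),
    "TRACK": ("Video object tracking by XMem. "
              "Usage: VIDEO0 = TRACK(video=VIDEO, object=OBJ)"),
}

def extend_doc_prompt(base_docs: str, new_docs: dict[str, str]) -> str:
    """Plug-and-play: append the new tools' docs to the existing doc prompt,
    with no few-shot demos required."""
    return base_docs + "\n" + "\n".join(new_docs.values())
```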
2308.00675#25 | Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models | In Figure 6, when performing an image editing request "replace the bench with a blue sofa", the LLM generates a VisProg program that harnesses the power of GroundingDINO and SAM from the expanded tool set to segment the bench, and applies Stable Diffusion [54] to synthesize the sofa. This program replicates the behavior of the recent popular project Grounded-SAM [23] without prior knowledge of that repository. Similarly, when tasked with video tracking "track the cat in the video", the VisProg program generated by the LLM incorporates GroundingDINO together with SAM for first-frame segmentation as the initialization for XMem to do video tracking. It again re-invents the results obtained in the contemporary work Track Anything [70]. We note that TaskMatrix [69] also has an updated approach with Grounded-SAM. However, they pre-program the entire Grounded-SAM editing pipeline as an image editing function, allowing the LLM to control it, rather than enabling the LLM to generate the editing program from the building tools alone, as we present here. By successfully re-inventing the functionalities of Grounded-SAM and Track Anything without prior knowledge, relying solely on the available building blocks, the LLM not only demonstrates its capacity to effortlessly comprehend and combine new tools with documentation only, but also highlights its potential for automatic knowledge discovery: it discovers new insights by leveraging its existing knowledge alone, without further demonstration. # 4.4 Performance v.s. documentation quality We investigate the impact of documentation quality on performance. | 2308.00675#24 | 2308.00675#26 | 2308.00675 | [
"2302.13971"
]
|
2308.00675#26 | Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models | To assess the LLM's capability to comprehend realistic documentation, we refrain from engineering or curating the content of the documentation. Instead, we vary the document length by truncating the documents and keeping the first n words, using it as a proxy for assessing thoroughness and quality. In this ablation, we consider the LLM-Cloud benchmark, which has long documentation based on real-world GCP CLI manuals. We illustrate the result in Figure 7. [Figure 7: F1 versus documentation length (200 to 800 words) for gpt-3.5-turbo (doc) and text-davinci-002 (doc), with the best 15-shot results of both models shown as references.] Figure 7: Performance of the zero-shot documentation LLM when varying the input document length. | 2308.00675#25 | 2308.00675#27 | 2308.00675 | [
"2302.13971"
]
|
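The truncation proxy described above is simple enough to state in a few lines. The sketch below assumes whitespace word-splitting, which the paper does not specify, and the variable gcp_cli_manual is a hypothetical stand-in for a real GCP CLI manual page.

```python
def truncate_doc(doc: str, n_words: int) -> str:
    # Keep only the first n_words words of a tool document, as a proxy
    # for documentation thoroughness; whitespace splitting is an assumption.
    return " ".join(doc.split()[:n_words])

# Hypothetical stand-in for one real GCP CLI manual page.
gcp_cli_manual = "gcloud compute instances create NAME ... (full manual text)"

# Sweep the lengths used in the ablation (roughly 200 to 800 words).
for n in range(200, 900, 100):
    truncated = truncate_doc(gcp_cli_manual, n)
    # ... place `truncated` in the LLM prompt and measure task accuracy ...
```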
2308.00675#26 | Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models | # 5 Conclusion In this paper, we examined the effectiveness of tool docs in enabling zero-shot tool usage with LLMs. We first showed that, when provided with tool docs, LLMs achieve performance on par with or better than their few-shot counterparts. We then scaled up to a significantly larger tool set, on a newly collected API, through docs alone. By simply plugging in new tools along with their docs, LLMs are able to tackle unseen tasks in image editing and video tracking without further demos, and to replicate the functionalities of recent popular projects, suggesting a potential for automatic knowledge discovery. Overall, we shed light on a new perspective on tool usage with LLMs by focusing on their internal planning and reasoning capabilities with docs, rather than explicitly guiding their behaviors with demos. | 2308.00675#25 | 2308.00675#27 | 2308.00675 | [
"2302.13971"
]
|
2308.00675#27 | Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models | # References [1] Michael Ahn, Anthony Brohan, Noah Brown, Yevgen Chebotar, Omar Cortes, Byron David, Chelsea Finn, Chuyuan Fu, Keerthana Gopalakrishnan, Karol Hausman, Alex Herzog, Daniel Ho, Jasmine Hsu, Julian Ibarz, Brian Ichter, Alex Irpan, Eric Jang, Rosario Jauregui Ruano, Kyle Jeffrey, Sally Jesmonth, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Kuang-Huei Lee, Sergey Levine, Yao Lu, Linda Luu, Carolina Parada, Peter Pastor, Jornell Quiambao, Kanishka Rao, Jarek Rettinghouse, Diego Reyes, Pierre Sermanet, Nicolas Sievers, Clayton Tan, Alexander Toshev, Vincent Vanhoucke, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Mengyuan Yan, and Andy Zeng. | 2308.00675#26 | 2308.00675#28 | 2308.00675 | [
"2302.13971"
]
|
2308.00675#28 | Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models | Do as I can, not as I say: Grounding language in robotic affordances. arXiv preprint arXiv:2204.01691, 2022. [2] Anthropic. 100k context windows. https://www.anthropic.com/index/100k-context-windows, 2023. Accessed: 05/15/2023. [3] AutoGPT. Auto GPT. https://autogpt.net/category/chatgpt-tools/autogpt/, 2023. Accessed: 05/15/2023. | 2308.00675#27 | 2308.00675#29 | 2308.00675 | [
"2302.13971"
]
|
2308.00675#29 | Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models | [4] Michael Bain and Claude Sammut. A framework for behavioural cloning. In Machine Intelligence 15, pages 103–129, 1995. [5] Amanda Bertsch, Uri Alon, Graham Neubig, and Matthew R Gormley. Unlimiformer: Long-range transformers with unlimited length input. arXiv preprint arXiv:2305.01625, 2023. [6] Sebastian Borgeaud, Arthur Mensch, Jordan Hoffmann, Trevor Cai, Eliza Rutherford, Katie Millican, George Bm Van Den Driessche, Jean-Baptiste Lespiau, Bogdan Damoc, Aidan Clark, et al. Improving language models by retrieving from trillions of tokens. In International Conference on Machine Learning, pages 2206–2240. PMLR, 2022. [7] SRK Branavan, David Silver, and Regina Barzilay. Learning to win by reading manuals in a Monte-Carlo framework. Journal of Artificial Intelligence Research, 43:661–704, 2012. [8] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. | 2308.00675#28 | 2308.00675#30 | 2308.00675 | [
"2302.13971"
]
|
2308.00675#30 | Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models | Language models are few-shot learners. Advances in Neural Information Processing Systems, 33:1877–1901, 2020. [9] Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. Sparks of artificial general intelligence: Early experiments with GPT-4. arXiv preprint arXiv:2303.12712, 2023. | 2308.00675#29 | 2308.00675#31 | 2308.00675 | [
"2302.13971"
]
|