doi
stringlengths
10
10
chunk-id
int64
0
936
chunk
stringlengths
401
2.02k
id
stringlengths
12
14
title
stringlengths
8
162
summary
stringlengths
228
1.92k
source
stringlengths
31
31
authors
stringlengths
7
6.97k
categories
stringlengths
5
107
comment
stringlengths
4
398
journal_ref
stringlengths
8
194
primary_category
stringlengths
5
17
published
stringlengths
8
8
updated
stringlengths
8
8
references
list
2306.06283
210
There is no unique approach for both tasks. Nevertheless, highly qualified scientists and engineers generate such abstracts and titles and can act as a valuable benchmark to evaluate model performance. Moreover, developing metrics for text generation is a difficult task. Here, for the sake of simplicity, we choose the Recall- Oriented Understudy for Gisting Evaluation (ROUGE) metrics for evaluating the model performance. ChemNLP uses models such as a pre-trained Text-to-Text Transfer Transformer (T5) developed by Google for text summarization. The pre-trained model is further fine-tuned to produce summaries of abstracts and analyze the performance by analyzing its closeness to the title of the given paper. For the text generation, ChemNLP fine-tunes LLMs such as the pre-trained GPT-2-medium model from the HuggingFace library. For these tasks, the community can contribute their models on a benchmarking platform called the JARVIS-Leaderboard (https: //pages.nist.gov/jarvis_leaderboard/). Specifically, the TextGen task is available at: https://pages.nist.gov/jarvis_leaderboard/AI/TextSummary/arxiv_summary_text/ and TextSummary task is available at https://pages.nist.gov/jarvis_leaderboard/AI/TextGen/ arxiv_gen_text/ # 38. Example
2306.06283#210
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
Large-language models (LLMs) such as GPT-4 caught the interest of many scientists. Recent studies suggested that these models could be useful in chemistry and materials science. To explore these possibilities, we organized a hackathon. This article chronicles the projects built as part of this hackathon. Participants employed LLMs for various applications, including predicting properties of molecules and materials, designing novel interfaces for tools, extracting knowledge from unstructured data, and developing new educational applications. The diverse topics and the fact that working prototypes could be generated in less than two days highlight that LLMs will profoundly impact the future of our fields. The rich collection of ideas and projects also indicates that the applications of LLMs are not limited to materials science and chemistry but offer potential benefits to a wide range of scientific disciplines.
http://arxiv.org/pdf/2306.06283
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
null
null
cond-mat.mtrl-sci
20230609
20230714
[ { "id": "2209.08203" }, { "id": "2212.04450" } ]
2306.06283
211
# 38. Example Title to Abstract example for cond-mat/0105402. [76] # Question MgB2 under pressure: phonon calculations, Raman spectroscopy, and optical reflectance can be described as # Fine-tuned GPT-2-medium model output The amplitude of phonon vibrations in a material exhibiting an unconventional lattice structure has been measured in the low pressure region of the granular MgB2 phase. The peak in the high pressure region is the first measurement of phonon amplitude ever presented in a granular superconducting material, and it indicates a potential source of superconductivity. Superconducting Transition from Fluctuations to Brillouin T-Wings in CeFo Josephson Junctions can be described as We report electronic and numerical calculations for solving the electron-wave (E-W) conduction spectra of a quasi-two-dimensional superconducting monolith CeFo Josephson junctions: The amplitude of 36 phonon vibrations in the low pressure region (KPSM), and in the high pressure region of the superconducting density wave (SDW) densities, are used to understand the superconductivity. # Actual abstract
2306.06283#211
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
Large-language models (LLMs) such as GPT-4 caught the interest of many scientists. Recent studies suggested that these models could be useful in chemistry and materials science. To explore these possibilities, we organized a hackathon. This article chronicles the projects built as part of this hackathon. Participants employed LLMs for various applications, including predicting properties of molecules and materials, designing novel interfaces for tools, extracting knowledge from unstructured data, and developing new educational applications. The diverse topics and the fact that working prototypes could be generated in less than two days highlight that LLMs will profoundly impact the future of our fields. The rich collection of ideas and projects also indicates that the applications of LLMs are not limited to materials science and chemistry but offer potential benefits to a wide range of scientific disciplines.
http://arxiv.org/pdf/2306.06283
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
null
null
cond-mat.mtrl-sci
20230609
20230714
[ { "id": "2209.08203" }, { "id": "2212.04450" } ]
2306.06283
212
# Actual abstract The effect of pressure on optical phonon frequencies of MgBg has been calculated using the frozen- phonon approach based on a pseudopotential method. Gruneisen parameters of the harmonic mode requencies are reported for the high-frequency zone-center Ey, and By, and the zone-boundary E>, and Bz, modes at A. Anharmonic effects of phonon frequencies and the implications of the calculated phonon frequency shifts for the pressure dependence of the superconducting transition temperature of MgB» are discussed. Also reported are Raman and optical reflectance spectra of gBz measured at high pressures. The experimental observations in combination with calculated results indicate that broad spectral features we observed in the Raman spectra at frequencies etween 500 and 900 cm7! cannot be attributed to first-order scattering by zone-center modes, ut originate in part from a chemical species other than MgB» at the sample surface and in part rom a maximum in the MgB» phonon density of states. Low-temperature Raman spectra taken at ambient pressure showed increased scattering intensity in the region below 300 cm7!. Interestingly, the generated abstract contains grammatically and syntactically incorrect sentences. We suspect that this is due to our use of a small, outdated, base model. However, more systematic analysis will need to be performed in future work. # One sentence summaries
2306.06283#212
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
Large-language models (LLMs) such as GPT-4 caught the interest of many scientists. Recent studies suggested that these models could be useful in chemistry and materials science. To explore these possibilities, we organized a hackathon. This article chronicles the projects built as part of this hackathon. Participants employed LLMs for various applications, including predicting properties of molecules and materials, designing novel interfaces for tools, extracting knowledge from unstructured data, and developing new educational applications. The diverse topics and the fact that working prototypes could be generated in less than two days highlight that LLMs will profoundly impact the future of our fields. The rich collection of ideas and projects also indicates that the applications of LLMs are not limited to materials science and chemistry but offer potential benefits to a wide range of scientific disciplines.
http://arxiv.org/pdf/2306.06283
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
null
null
cond-mat.mtrl-sci
20230609
20230714
[ { "id": "2209.08203" }, { "id": "2212.04450" } ]
2306.06283
213
# One sentence summaries a. Problem/Task Text summarization and generation, in specific, a summary of an abstract into a title and generation of an abstract conditioned on a title. b. Approach Fine-tuning of transformer models such as T-5 and GPT-2 on data from arXiv. c. Results and Impact Initial exploration indicates that transformer models might be suitable for this task. d. Challenges and Future Work More systematic analysis, including rating of the generated titles and abstracts by domain experts is required to identify the limitations of this approach. 37
2306.06283#213
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
Large-language models (LLMs) such as GPT-4 caught the interest of many scientists. Recent studies suggested that these models could be useful in chemistry and materials science. To explore these possibilities, we organized a hackathon. This article chronicles the projects built as part of this hackathon. Participants employed LLMs for various applications, including predicting properties of molecules and materials, designing novel interfaces for tools, extracting knowledge from unstructured data, and developing new educational applications. The diverse topics and the fact that working prototypes could be generated in less than two days highlight that LLMs will profoundly impact the future of our fields. The rich collection of ideas and projects also indicates that the applications of LLMs are not limited to materials science and chemistry but offer potential benefits to a wide range of scientific disciplines.
http://arxiv.org/pdf/2306.06283
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
null
null
cond-mat.mtrl-sci
20230609
20230714
[ { "id": "2209.08203" }, { "id": "2212.04450" } ]
2306.06283
214
38 IV. Education A. i-Digest a. Problem Over the last few years, especially during the Covid period, most of us had to switch to the online mode of working in our day-to-day jobs. And even today, the online mode of working has, to some extent, stayed on as it turned out to be convenient for both employers and employees. One clear example can be found in the field of education, where the use of video lectures became the norm for teaching students in universities and schools. Likewise, podcasts and three-minute thesis videos, which communicate important scientific information to society at large, have grown tremendously [77, 78]. This has led to a situation where, at present, we have an enormous amount of important scientific information stored in the form of videos and audio all over the internet. A current challenge is to summarize and make use of this knowledge efficiently. Some efforts in this direction have been made by using AI Youtube summarizers and QnA Bots [79]. We would like to build upon such efforts and create a tool for the field of education. b. Solution We present a tool that self-guides students and other users toward a better understanding of the content of a video lecture or a podcast. In order to accomplish this, we used
2306.06283#214
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
Large-language models (LLMs) such as GPT-4 caught the interest of many scientists. Recent studies suggested that these models could be useful in chemistry and materials science. To explore these possibilities, we organized a hackathon. This article chronicles the projects built as part of this hackathon. Participants employed LLMs for various applications, including predicting properties of molecules and materials, designing novel interfaces for tools, extracting knowledge from unstructured data, and developing new educational applications. The diverse topics and the fact that working prototypes could be generated in less than two days highlight that LLMs will profoundly impact the future of our fields. The rich collection of ideas and projects also indicates that the applications of LLMs are not limited to materials science and chemistry but offer potential benefits to a wide range of scientific disciplines.
http://arxiv.org/pdf/2306.06283
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
null
null
cond-mat.mtrl-sci
20230609
20230714
[ { "id": "2209.08203" }, { "id": "2212.04450" } ]
2306.06283
215
a tool that self-guides students and other users toward a better understanding of the content of a video lecture or a podcast. In order to accomplish this, we used publicly available LLMs like Open AI’s Whisper [80] and GPT-3.5-turbo model. All the user needs to do is provide a link to the lecture video or audio file. After only a short time, the overview page shows some technical keywords on which the video is based, a short but comprehensive summary, and some questions for the user to assess his or her understanding of the concepts discussed in the video/audio (Figure 19). Additionally, for chemistry enthusiasts, if some chemical elements/molecules are discussed in the content, we link them to online databases. At the backend, we first convert the video to audio using Pytube (In the case of a podcas this step is not needed). Then we use the Whisper model to transcribe the audio to text. Next, we mak use of the OpenAl GPT-3.5-turbo model to obtain a short summary and a set of questions based on th text. Finally, we extract the name of chemical elements/molecules and list the PubChem database entry fo that element /molecule on the
2306.06283#215
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
Large-language models (LLMs) such as GPT-4 caught the interest of many scientists. Recent studies suggested that these models could be useful in chemistry and materials science. To explore these possibilities, we organized a hackathon. This article chronicles the projects built as part of this hackathon. Participants employed LLMs for various applications, including predicting properties of molecules and materials, designing novel interfaces for tools, extracting knowledge from unstructured data, and developing new educational applications. The diverse topics and the fact that working prototypes could be generated in less than two days highlight that LLMs will profoundly impact the future of our fields. The rich collection of ideas and projects also indicates that the applications of LLMs are not limited to materials science and chemistry but offer potential benefits to a wide range of scientific disciplines.
http://arxiv.org/pdf/2306.06283
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
null
null
cond-mat.mtrl-sci
20230609
20230714
[ { "id": "2209.08203" }, { "id": "2212.04450" } ]
2306.06283
216
set of questions based on th text. Finally, we extract the name of chemical elements/molecules and list the PubChem database entry fo that element /molecule on the overview page. [81-83] The web interface was made using the open-source a framework Streamlit [84]. Bs 2 oO Large Language Model ' r Key Words —? “Suggest three 1) Monte Carlo » ‘Simulation keywords 2) Metropois algorithm 3) Importance ‘Sampling > | =) Transcript r Summary — Tati a ae Give a summary The lecture is about the Monte Pubchem Search ° Carlo simulations and its algorithm. The speaker discusses Chemicals —— r Questions —®& “Come up with 1) Could you explain the . acceptance rule? questions 2) Why. sit important to select a particle at random for displacement? 3) Figure 19. A schematic of the i-digest interface. On providing a link to an online video or audio, i-digest generates some technical keywords, a short but comprehensive summary, and a list of questions based on the content in the video/audio. Additionally, chemicals discussed in the content are linked to online databases such as PubChem. c. Impact We strongly believe that extracting important scientific information
2306.06283#216
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
Large-language models (LLMs) such as GPT-4 caught the interest of many scientists. Recent studies suggested that these models could be useful in chemistry and materials science. To explore these possibilities, we organized a hackathon. This article chronicles the projects built as part of this hackathon. Participants employed LLMs for various applications, including predicting properties of molecules and materials, designing novel interfaces for tools, extracting knowledge from unstructured data, and developing new educational applications. The diverse topics and the fact that working prototypes could be generated in less than two days highlight that LLMs will profoundly impact the future of our fields. The rich collection of ideas and projects also indicates that the applications of LLMs are not limited to materials science and chemistry but offer potential benefits to a wide range of scientific disciplines.
http://arxiv.org/pdf/2306.06283
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
null
null
cond-mat.mtrl-sci
20230609
20230714
[ { "id": "2209.08203" }, { "id": "2212.04450" } ]
2306.06283
217
video/audio. Additionally, chemicals discussed in the content are linked to online databases such as PubChem. c. Impact We strongly believe that extracting important scientific information in terms of short lecture notes and questions would help to push forward the field of education towards creating and using resources more efficiently. Moreover, by providing additional links to resources, e.g., databases, journals, and books,
2306.06283#217
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
Large-language models (LLMs) such as GPT-4 caught the interest of many scientists. Recent studies suggested that these models could be useful in chemistry and materials science. To explore these possibilities, we organized a hackathon. This article chronicles the projects built as part of this hackathon. Participants employed LLMs for various applications, including predicting properties of molecules and materials, designing novel interfaces for tools, extracting knowledge from unstructured data, and developing new educational applications. The diverse topics and the fact that working prototypes could be generated in less than two days highlight that LLMs will profoundly impact the future of our fields. The rich collection of ideas and projects also indicates that the applications of LLMs are not limited to materials science and chemistry but offer potential benefits to a wide range of scientific disciplines.
http://arxiv.org/pdf/2306.06283
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
null
null
cond-mat.mtrl-sci
20230609
20230714
[ { "id": "2209.08203" }, { "id": "2212.04450" } ]
2306.06283
218
we provide an opportunity for the user to go beyond the content of the lecture and spark interest in a more detailed understanding of the topic. Specifically, this would help researchers/teachers/professors to create new course content or to update/modify already available content. In general, our tool covers a broad range of users, from the youngest learner to the chemistry novice who wants to kickstart his research, all the way to professors, course creators, and lifetime learners. d. Lessons learned Working together with colleagues can be fun and enriching and often help to solve big problems. This hackathon taught us that even in one day, coming together can help achieve something significant. # One sentence summaries e. Problem/Task Provide students with automatically generated active learning tasks for lecture record- ings. f. Approach ‘Transcription of videos using OpenAl’s Whisper model, prompting of OpenAI’s GPT-3.5- turbo model to produce a short summary and questions based on the transcript, as well as to extract mentions of chemicals in the text. g. Results and Impact The system can transcribe the text, generate meaningful questions, and success- fully extract mentions of chemicals.
2306.06283#218
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
Large-language models (LLMs) such as GPT-4 caught the interest of many scientists. Recent studies suggested that these models could be useful in chemistry and materials science. To explore these possibilities, we organized a hackathon. This article chronicles the projects built as part of this hackathon. Participants employed LLMs for various applications, including predicting properties of molecules and materials, designing novel interfaces for tools, extracting knowledge from unstructured data, and developing new educational applications. The diverse topics and the fact that working prototypes could be generated in less than two days highlight that LLMs will profoundly impact the future of our fields. The rich collection of ideas and projects also indicates that the applications of LLMs are not limited to materials science and chemistry but offer potential benefits to a wide range of scientific disciplines.
http://arxiv.org/pdf/2306.06283
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
null
null
cond-mat.mtrl-sci
20230609
20230714
[ { "id": "2209.08203" }, { "id": "2212.04450" } ]
2306.06283
219
g. Results and Impact The system can transcribe the text, generate meaningful questions, and success- fully extract mentions of chemicals. h. Challenges and Future Work It is difficult to systematically evaluate the performance of this system due to the lack of suitable benchmarks/eval. An obvious extension of this approach is to condition it on further material (e.g., lecture notes and books). In addition, one might automatically score the answers and show them at the beginning and at the end of the video. This would allow us to evaluate the learning of the students and to guide them to the relevant material in case a question was not answered correctly. 39 40 V. Meta analysis of the workshop contributions We have a female/male ratio of about 30% among the workshop participants who co-authored this paper. We have participants from 22 different institutions in 8 countries. Most teams combine expertise from different institutions (Figure 21), in several cases beyond academia (Figure 22). Around 20% of the teams are international, with participants from two countries (Figure 23). 5 10 15 20 Number of participants Figure 20. Worldmap (Robin projection) with the number of participants shown in color. f | | aE || = 1 2 3 4 5 Number of unique affiliations Number of teams Figure 21. Histogram of the number of unique affiliations per team. Number of teams , = no academia only academia mixed
2306.06283#219
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
Large-language models (LLMs) such as GPT-4 caught the interest of many scientists. Recent studies suggested that these models could be useful in chemistry and materials science. To explore these possibilities, we organized a hackathon. This article chronicles the projects built as part of this hackathon. Participants employed LLMs for various applications, including predicting properties of molecules and materials, designing novel interfaces for tools, extracting knowledge from unstructured data, and developing new educational applications. The diverse topics and the fact that working prototypes could be generated in less than two days highlight that LLMs will profoundly impact the future of our fields. The rich collection of ideas and projects also indicates that the applications of LLMs are not limited to materials science and chemistry but offer potential benefits to a wide range of scientific disciplines.
http://arxiv.org/pdf/2306.06283
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
null
null
cond-mat.mtrl-sci
20230609
20230714
[ { "id": "2209.08203" }, { "id": "2212.04450" } ]
2306.06283
220
Number of teams , = no academia only academia mixed Figure 22. Number of teams with participants only from academia or academia and industry/nonprofit, respectively. We counted national labs as “academia”. Number of unique countries Number of teams ol Figure 23. Histogram of the number of unique countries per team. 41 Ward, L.; Blaiszik, B.; Foster, .; Assary, R. $.; Narayanan, B.; Curtiss, L. Machine learning prediction of accurate atomization energies of organic molecules from low-fidelity quantum chemical calculations. MRS Commun. 2019, 9, 891-899. Curtiss, L. A.; Redfern, P. C.; Raghavachari, K. Gaussian-4 theory using reduced order perturbation theory. J. Chem. Phys. 2007, 127, 124105. Ramakrishnan, R.; Dral, P. O.; Rupp, M.; Von Lilienfeld, O. A. Quantum chemistry structures and properties of 134 kilo molecules. Sci. Data 2014, 1, 1-7. arayanan, B.; Redfern, P. C.; Assary, R. S.; Curtiss, L. A. Accurate quantum chemical energies for 133000 organic molecules. Chem. Sci. 2019, 10, 7449-7455.
2306.06283#220
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
Large-language models (LLMs) such as GPT-4 caught the interest of many scientists. Recent studies suggested that these models could be useful in chemistry and materials science. To explore these possibilities, we organized a hackathon. This article chronicles the projects built as part of this hackathon. Participants employed LLMs for various applications, including predicting properties of molecules and materials, designing novel interfaces for tools, extracting knowledge from unstructured data, and developing new educational applications. The diverse topics and the fact that working prototypes could be generated in less than two days highlight that LLMs will profoundly impact the future of our fields. The rich collection of ideas and projects also indicates that the applications of LLMs are not limited to materials science and chemistry but offer potential benefits to a wide range of scientific disciplines.
http://arxiv.org/pdf/2306.06283
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
null
null
cond-mat.mtrl-sci
20230609
20230714
[ { "id": "2209.08203" }, { "id": "2212.04450" } ]
2306.06283
221
Weininger, D. SMILES, a chemical language and information system. 1. Introduction to methodology and en- coding rules. J. Chem. Inf. Comput. Sci. 1988, 28, 31-36. Krenn, M.; Hase, F.; Nigam, A.; Friederich, P.; Aspuru-Guzik, A. Self-referencing embedded strings (SELFIES): A 100% robust molecular string representation. Mach. Learn.: Sci. Technol. 2020, 1, 045024. Krenn, M.; Ai, Q.; Barthel, S.; Carson, N.; Frei, A.; Frey, N. C.; Friederich, P.; Gaudin, T.; Gayle, A. A.; Jablonka, K. M., et al. SELFIES and the future of molecular string representations. Patterns 2022, 3, 100588. Jablonka, K. M.; Schwaller, P.; Ortega-Guerrero, A.; Smit, B. Is GPT-3 all you need for low-data discovery in chemistry? ChemRaiv preprint 10.26434/chemrziv-2023-fw8n4 2023,
2306.06283#221
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
Large-language models (LLMs) such as GPT-4 caught the interest of many scientists. Recent studies suggested that these models could be useful in chemistry and materials science. To explore these possibilities, we organized a hackathon. This article chronicles the projects built as part of this hackathon. Participants employed LLMs for various applications, including predicting properties of molecules and materials, designing novel interfaces for tools, extracting knowledge from unstructured data, and developing new educational applications. The diverse topics and the fact that working prototypes could be generated in less than two days highlight that LLMs will profoundly impact the future of our fields. The rich collection of ideas and projects also indicates that the applications of LLMs are not limited to materials science and chemistry but offer potential benefits to a wide range of scientific disciplines.
http://arxiv.org/pdf/2306.06283
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
null
null
cond-mat.mtrl-sci
20230609
20230714
[ { "id": "2209.08203" }, { "id": "2212.04450" } ]
2306.06283
222
Brown, T.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J. D.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A., et al. Language models are few-shot learners. Advances in neural information processing systems 2020, 33, 1877-1901. 10 Ramakrishnan, R.; Dral, P. O.; Rupp, M.; Von Lilienfeld, O. A. Big data meets quantum chemistry approxima- tions: the A-machine learning approach. J. Chem. Theory Comput. 2015, 11, 2087-2096. 11 Gupta, A. K.; Raghavachari, K. Three-Dimensional Convolutional Neural Networks Utilizing Molecular Topo- logical Features for Accurate Atomization Energy Predictions. J. Chem. Theory Comput. 2022, 18, 2132-2143. 12 {angrulkar, S.; Gugger, S.; Debut, L.; Belkada, Y.; Paul, S. PEFT: State-of-the-art Parameter-Efficient Fine- Tuning methods. https: //github.com/huggingface/peft, 2022.
2306.06283#222
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
Large-language models (LLMs) such as GPT-4 caught the interest of many scientists. Recent studies suggested that these models could be useful in chemistry and materials science. To explore these possibilities, we organized a hackathon. This article chronicles the projects built as part of this hackathon. Participants employed LLMs for various applications, including predicting properties of molecules and materials, designing novel interfaces for tools, extracting knowledge from unstructured data, and developing new educational applications. The diverse topics and the fact that working prototypes could be generated in less than two days highlight that LLMs will profoundly impact the future of our fields. The rich collection of ideas and projects also indicates that the applications of LLMs are not limited to materials science and chemistry but offer potential benefits to a wide range of scientific disciplines.
http://arxiv.org/pdf/2306.06283
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
null
null
cond-mat.mtrl-sci
20230609
20230714
[ { "id": "2209.08203" }, { "id": "2212.04450" } ]
2306.06283
223
13 Hu, E. J.; Shen, Y.; Wallis, P.; Allen-Zhu, Z.; Li, Y.; Wang, S.; Wang, L.; Chen, W. LoRA: Low-Rank Adaptation of Large Language Models. arXiv preprint: Arxiv-2106.09685 2021, 14 Radford, A.; Wu, J.; Child, R.; Luan, D.; Amodei, D.; Sutskever, I. Language Models are Unsupervised Mul- titask Learners. 2019, https: //d4mucfpksywv.cloudfront .net/better-language-models/language_models_ are_unsupervised_multitask_learners.pdf. 15 Kojima, T.; Gu, S. S.; Reid, M.; Matsuo, Y.; Iwasawa, Y. Large Language Models are Zero-Shot Reasoners. 2023. 16 Touvron, H.; Lavril, T.; Izacard, G.; Martinet, X.; Lachaux, M.-A.; Lacroix, T.; Roziére, B.; Goyal, N.; Ham- ro, E.; Azhar, F., et al. Llama: Open and efficient foundation language models. arXiv preprint:2302.13971 2023,
2306.06283#223
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
Large-language models (LLMs) such as GPT-4 caught the interest of many scientists. Recent studies suggested that these models could be useful in chemistry and materials science. To explore these possibilities, we organized a hackathon. This article chronicles the projects built as part of this hackathon. Participants employed LLMs for various applications, including predicting properties of molecules and materials, designing novel interfaces for tools, extracting knowledge from unstructured data, and developing new educational applications. The diverse topics and the fact that working prototypes could be generated in less than two days highlight that LLMs will profoundly impact the future of our fields. The rich collection of ideas and projects also indicates that the applications of LLMs are not limited to materials science and chemistry but offer potential benefits to a wide range of scientific disciplines.
http://arxiv.org/pdf/2306.06283
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
null
null
cond-mat.mtrl-sci
20230609
20230714
[ { "id": "2209.08203" }, { "id": "2212.04450" } ]
2306.06283
224
17 Lin, Z.; Akin, H.; Rao, R.; Hie, B.; Zhu, Z.; Lu, W.; Smetanin, N.; Verkuil, R.; Kabeli, O.; Shmueli, Y., et al. Evolutionary-scale prediction of atomic-level protein structure with a language model. Science 2023, 379, 1123-1130. 18 Andrew, R. Global Co2 Emissions From Cement Production. 2017; https: //zenodo.org/record/831455. 19 Lookman, T.; Balachandran, P. V.; Xue, D.; Yuan, R. Active learning in materials science with emphasis on adaptive sampling using uncertainties for targeted design. npj Comput. Mater. 2019, 5. 20. Vélker, C.; Firdous, R.; Stephan, D.; Kruschwitz, $. Sequential learning to accelerate discovery of alkali-activated inders. Journal of Materials Science 2021, 56, 15859-15881.
2306.06283#224
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
Large-language models (LLMs) such as GPT-4 caught the interest of many scientists. Recent studies suggested that these models could be useful in chemistry and materials science. To explore these possibilities, we organized a hackathon. This article chronicles the projects built as part of this hackathon. Participants employed LLMs for various applications, including predicting properties of molecules and materials, designing novel interfaces for tools, extracting knowledge from unstructured data, and developing new educational applications. The diverse topics and the fact that working prototypes could be generated in less than two days highlight that LLMs will profoundly impact the future of our fields. The rich collection of ideas and projects also indicates that the applications of LLMs are not limited to materials science and chemistry but offer potential benefits to a wide range of scientific disciplines.
http://arxiv.org/pdf/2306.06283
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
null
null
cond-mat.mtrl-sci
20230609
20230714
[ { "id": "2209.08203" }, { "id": "2212.04450" } ]
2306.06283
225
21 Volker, C.; Benjami Moreno Torres,; Tehseen Rug,; Firdous, R.; Ghezal Ahmad,; Zia, J.; Liiders, S.; Scaffino, H. L.; Hépler, M.; Bohmer, F.; Pfaff, M.; Stephan, D.; Kruschwitz, S. Green building materials: a new frontier in data-driven sustainable concrete design. Preprint 10.13140/RG.2.2.29079.85925. 2023. 22 Ramos, M. C.; Michtavy, $8. S.; Porosoff, M. D.; White, A. D. Bayesian Optimization of Catalysts With In-context Learning. arXiv preprint: Araiv-2304.05341 2023, 23 Rao, G. M.; Rao, T. D. G. A quantitative method of approach in designing the mix proportions of fly ash and GGBS-based geopolymer concrete. Aust. J. Civ. Eng. 2018, 16, 53-63. 24 OpenAI, Text-davinci-003. https: //platform. openai .com/models/text-davinci-003. 25 Bousquet, A. lolopy. https://pypi.org/project/lolopy/, 2017; Accessed: 2023-02-27.
2306.06283#225
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
Large-language models (LLMs) such as GPT-4 caught the interest of many scientists. Recent studies suggested that these models could be useful in chemistry and materials science. To explore these possibilities, we organized a hackathon. This article chronicles the projects built as part of this hackathon. Participants employed LLMs for various applications, including predicting properties of molecules and materials, designing novel interfaces for tools, extracting knowledge from unstructured data, and developing new educational applications. The diverse topics and the fact that working prototypes could be generated in less than two days highlight that LLMs will profoundly impact the future of our fields. The rich collection of ideas and projects also indicates that the applications of LLMs are not limited to materials science and chemistry but offer potential benefits to a wide range of scientific disciplines.
http://arxiv.org/pdf/2306.06283
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
null
null
cond-mat.mtrl-sci
20230609
20230714
[ { "id": "2209.08203" }, { "id": "2212.04450" } ]
2306.06283
226
25 Bousquet, A. lolopy. https://pypi.org/project/lolopy/, 2017; Accessed: 2023-02-27. 26 Heinisch, O. Steel, R. G. D., and J. H. Torrie: Principles and Procedures of Statistics. (With special Reference to the Biological Sciences.) McGraw-Hill Book Company, New York, Toronto, London 1960, 481 S., 15 Abb. 81 s 6d. Biometrische Zeitschrift 1962, 4, 207-208. 27 Dinh, T.; Zeng, Y.; Zhang, R.; Lin, Z.; Gira, M.; Rajput, S.; Sohn, J.-Y.; Papailiopoulos, D.; Lee, K. LIFT: Language-Interfaced Fine-Tuning for Non-Language Machine Learning Tasks. arXiv preprint: Arxiv-2206.06565. 2022. 42 28 Herhold, P.; Farnworth, E. The Net-Zero Challenge: Fast-Forward to Decisive Climate Action. World Economic Forum, available at: https://www3. weforum. org/docs/WEF _The_Net_Zero_ Challenge. pdf (accessed 4 October 2021). 2020.
2306.06283#226
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
Large-language models (LLMs) such as GPT-4 caught the interest of many scientists. Recent studies suggested that these models could be useful in chemistry and materials science. To explore these possibilities, we organized a hackathon. This article chronicles the projects built as part of this hackathon. Participants employed LLMs for various applications, including predicting properties of molecules and materials, designing novel interfaces for tools, extracting knowledge from unstructured data, and developing new educational applications. The diverse topics and the fact that working prototypes could be generated in less than two days highlight that LLMs will profoundly impact the future of our fields. The rich collection of ideas and projects also indicates that the applications of LLMs are not limited to materials science and chemistry but offer potential benefits to a wide range of scientific disciplines.
http://arxiv.org/pdf/2306.06283
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
null
null
cond-mat.mtrl-sci
20230609
20230714
[ { "id": "2209.08203" }, { "id": "2212.04450" } ]
2306.06283
227
29 Hong, Z.; Ajith, A.; Pauloski, G.; Duede, E.; Malamud, C.; Magoulas, R.; Chard, K.; Foster, I. ScholarBERT: Bigger is Not Always Better. arXiv preprint: Arxiv-2205.11342. 2022. 30. Kim, S.; Thiessen, P. A.; Bolton, E. E.; Chen, J.; Fu, G.; Gindulyte, A.; Han, L.; He, J.; He, S.; Shoemaker, B. A., et al. PubChem substance and compound databases. Nucleic acids research 2016, 44, D1202—D1213. 31 Dai, H. et al. AugGPT: Leveraging ChatGPT for Text Data Augmentation. arXiv preprint: Arxiv-2302.13007. 2023. 32 Wolf, T. et al. Transformers: State-of-the-Art Natural Language Processing. Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations. Online, 2020; pp 38-45. 33, edregosa, F. et al. Scikit-learn: Machine Learning in Python. Journal of Machine Learning Research 2011, 12, 2825-2830.
2306.06283#227
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
Large-language models (LLMs) such as GPT-4 caught the interest of many scientists. Recent studies suggested that these models could be useful in chemistry and materials science. To explore these possibilities, we organized a hackathon. This article chronicles the projects built as part of this hackathon. Participants employed LLMs for various applications, including predicting properties of molecules and materials, designing novel interfaces for tools, extracting knowledge from unstructured data, and developing new educational applications. The diverse topics and the fact that working prototypes could be generated in less than two days highlight that LLMs will profoundly impact the future of our fields. The rich collection of ideas and projects also indicates that the applications of LLMs are not limited to materials science and chemistry but offer potential benefits to a wide range of scientific disciplines.
http://arxiv.org/pdf/2306.06283
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
null
null
cond-mat.mtrl-sci
20230609
20230714
[ { "id": "2209.08203" }, { "id": "2212.04450" } ]
2306.06283
228
33, edregosa, F. et al. Scikit-learn: Machine Learning in Python. Journal of Machine Learning Research 2011, 12, 2825-2830. 34 Rajpurkar, P.; Jia, R.; Liang, P. Know What You Don’t Know: Unanswerable Questions for SQuAD. 2018, 35, Zhang, J.; Chang, W.-C.; Yu, H.-F.; Dhillon, I. Fast multi-resolution transformer fine-tuning for extreme multi- abel text classification. Adv. Neural Inf. Process. Syst. 2021, 34, 7267-7280. 36 White, A. D.; Hocky, G. M.; Gandhi, H. A.; Ansari, M.; Cox, S.; Wellawatte, G. P.; Sasmal, S.; Yang, Z.; Liu, K.; Singh, Y., et al. Assessment of chemistry knowledge in large language models that generate code. Digital Discovery 2023, 37 Schwaller, P.; Laino, T.; Gaudin, T.; Bolgar, P.; Hunter, C. A.; Bekas, C.; Lee, A. A. Molecular transformer: A model for uncertainty-calibrated chemical reaction prediction. AC'S Central Science 2019, 5, 1572-1583.
2306.06283#228
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
Large-language models (LLMs) such as GPT-4 caught the interest of many scientists. Recent studies suggested that these models could be useful in chemistry and materials science. To explore these possibilities, we organized a hackathon. This article chronicles the projects built as part of this hackathon. Participants employed LLMs for various applications, including predicting properties of molecules and materials, designing novel interfaces for tools, extracting knowledge from unstructured data, and developing new educational applications. The diverse topics and the fact that working prototypes could be generated in less than two days highlight that LLMs will profoundly impact the future of our fields. The rich collection of ideas and projects also indicates that the applications of LLMs are not limited to materials science and chemistry but offer potential benefits to a wide range of scientific disciplines.
http://arxiv.org/pdf/2306.06283
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
null
null
cond-mat.mtrl-sci
20230609
20230714
[ { "id": "2209.08203" }, { "id": "2212.04450" } ]
2306.06283
229
38 Schwabe, T.; Grimme, S. Theoretical thermodynamics for large molecules: walking the thin line between accuracy and computational cost. Acc. Chem. Res. 2008, 41, 569-579. 39 Skyner, R. E.; McDonagh, J. L.; Groom, C. R.; van Mourik, T.; Mitchell, J. B. O. A review of methods for the calculation of solution free energies and the modelling of systems in solution. Phys. Chem. Chem. Phys. 2015, 17, 6174-6191. 40 Schleder, G. R.; Padilha, A. C. M.; Acosta, C. M.; Costa, M.; Fazzio, A. From DFT to machine learning: recent approaches to materials science-a review. J. Phys. Mater. 2019, 2, 032001. 41 Chase, H. LangChain. 2022; https: //github.com/hwchase17/langchain. 42 Bran, A. M.; Cox, S.; White, A. D.; Schwaller, P. ChemCrow: Augmenting large-language models with chemistry tools. arXiv preprint: Arxiv-2304.05376 2023,
2306.06283#229
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
Large-language models (LLMs) such as GPT-4 caught the interest of many scientists. Recent studies suggested that these models could be useful in chemistry and materials science. To explore these possibilities, we organized a hackathon. This article chronicles the projects built as part of this hackathon. Participants employed LLMs for various applications, including predicting properties of molecules and materials, designing novel interfaces for tools, extracting knowledge from unstructured data, and developing new educational applications. The diverse topics and the fact that working prototypes could be generated in less than two days highlight that LLMs will profoundly impact the future of our fields. The rich collection of ideas and projects also indicates that the applications of LLMs are not limited to materials science and chemistry but offer potential benefits to a wide range of scientific disciplines.
http://arxiv.org/pdf/2306.06283
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
null
null
cond-mat.mtrl-sci
20230609
20230714
[ { "id": "2209.08203" }, { "id": "2212.04450" } ]
2306.06283
230
43 Jain, A.; Ong, S. P.; Hautier, G.; Chen, W.; Richards, W. D.; Dacek, S.; Cholia, S.; Gunter, D.; Skinner, D.; Ceder, G.; Persson, K. A. Commentary: The Materials Project: A materials genome approach to accelerating materials innovation. APL Materials 2013, 1, 011002. 44 {cDermott, M. J.; Dwaraknath, S$. S.; Persson, K. A. A Graph-Based Network for Predicting Chemical Reaction ‘athways in Solid-State Materials Synthesis. Nat. Commun. 2021, 12, 3097. 45 Shao, Z.; Gong, Y.; Shen, Y.; Huang, M.; Duan, N.; Chen, W. Synthetic Prompting: Generating Chain-of- Thought Demonstrations for Large Language Models. 2023, 46 Gao, L.; Schulman, J.; Hilton, J. Scaling Laws for Reward Model Overoptimization. ARXIV.ORG 2022, AT Rego, N.; Koes, D. 3Dmol.js: molecular visualization with WebGL. Bioinformatics 2014, 31, 1322-1324. 48 Schrédinger, L.; DeLano, W. PyMOL. http: //www.pymol.org/pymol.
2306.06283#230
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
Large-language models (LLMs) such as GPT-4 caught the interest of many scientists. Recent studies suggested that these models could be useful in chemistry and materials science. To explore these possibilities, we organized a hackathon. This article chronicles the projects built as part of this hackathon. Participants employed LLMs for various applications, including predicting properties of molecules and materials, designing novel interfaces for tools, extracting knowledge from unstructured data, and developing new educational applications. The diverse topics and the fact that working prototypes could be generated in less than two days highlight that LLMs will profoundly impact the future of our fields. The rich collection of ideas and projects also indicates that the applications of LLMs are not limited to materials science and chemistry but offer potential benefits to a wide range of scientific disciplines.
http://arxiv.org/pdf/2306.06283
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
null
null
cond-mat.mtrl-sci
20230609
20230714
[ { "id": "2209.08203" }, { "id": "2212.04450" } ]
2306.06283
231
48 Schrédinger, L.; DeLano, W. PyMOL. http: //www.pymol.org/pymol. 49 Sehnal, D.; Bittrich, S.; Deshpande, M.; Svobodova, R.; Berka, K.; Bazgier, V.; Velankar, S.; Burley, S. K.; Koéa, J.; Rose, A. S. Mol* Viewer: modern web app for 3D visualization and analysis of large biomolecular structures. Nucleic Acids Res. 2021, 49, W431-W437. 50. Yao, S.; Zhao, J.; Yu, D.; Du, N.; Shafran, L; Narasimhan, K.; Cao, Y. ReAct: Synergizing Reasoning and Acting in Language Models. arXiv preprint: Arxiv-2210.03629 2023,
2306.06283#231
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
Large-language models (LLMs) such as GPT-4 caught the interest of many scientists. Recent studies suggested that these models could be useful in chemistry and materials science. To explore these possibilities, we organized a hackathon. This article chronicles the projects built as part of this hackathon. Participants employed LLMs for various applications, including predicting properties of molecules and materials, designing novel interfaces for tools, extracting knowledge from unstructured data, and developing new educational applications. The diverse topics and the fact that working prototypes could be generated in less than two days highlight that LLMs will profoundly impact the future of our fields. The rich collection of ideas and projects also indicates that the applications of LLMs are not limited to materials science and chemistry but offer potential benefits to a wide range of scientific disciplines.
http://arxiv.org/pdf/2306.06283
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
null
null
cond-mat.mtrl-sci
20230609
20230714
[ { "id": "2209.08203" }, { "id": "2212.04450" } ]
2306.06283
232
51 Thompson, A. P.; Aktulga, H. M.; Berger, R.; Bolintineanu, D. $.; Brown, W. M.; Crozier, P. S.; in ’t Veld, P. J.; Kohlmeyer, A.; Moore, S. G.; Nguyen, T. D.; Shan, R.; Stevens, M. J.; Tranchida, J.; Trott, C.; Plimpton, S. J. LAMMPS - a flexible simulation tool for particle-based materials modeling at the atomic, meso, and continuum scales. Comp. Phys. Comm. 2022, 271, 108171. 52 Abraham, M. J.; Murtola, T.; Schulz, R.; Pall, S.; Smith, J. C.; Hess, B.; Lindahl, E. GROMACS: High performance molecular simulations through multi-level parallelism from laptops to supercomputers. SoftwareX 2015, 1-2, 19-25. 53, Volk, A. A.; Epps, R. W.; Yonemoto, D. T.; Masters, B. S.; Castellano, F. N.; Reyes, K. G.; Abolhasani, M. AlphaFlow: autonomous discovery and optimization of multi-step chemistry using a self-driven fluidic lab guided by reinforcement learning. Nat. Commun. 2023, 14, 1403.
2306.06283#232
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
Large-language models (LLMs) such as GPT-4 caught the interest of many scientists. Recent studies suggested that these models could be useful in chemistry and materials science. To explore these possibilities, we organized a hackathon. This article chronicles the projects built as part of this hackathon. Participants employed LLMs for various applications, including predicting properties of molecules and materials, designing novel interfaces for tools, extracting knowledge from unstructured data, and developing new educational applications. The diverse topics and the fact that working prototypes could be generated in less than two days highlight that LLMs will profoundly impact the future of our fields. The rich collection of ideas and projects also indicates that the applications of LLMs are not limited to materials science and chemistry but offer potential benefits to a wide range of scientific disciplines.
http://arxiv.org/pdf/2306.06283
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
null
null
cond-mat.mtrl-sci
20230609
20230714
[ { "id": "2209.08203" }, { "id": "2212.04450" } ]
2306.06283
233
54 Griffiths, R.-R. et al. GAUCHE: A Library for Gaussian Processes in Chemistry. 2022; http: //arxiv.org/abs/ 2212.04450, arXiv:2212.04450 [cond-mat, physics:physics]. 55, Shields, B. J.; Stevens, J.; Li, J.; Parasram, M.; Damani, F.; Alvarado, J. I. M.; Janey, J. M.; Adams, R. P.; Doyle, A. G. Bayesian reaction optimization as a tool for chemical synthesis. Nature 2021, 590, 89-96. 43 56 Rankovié, B.; Griffiths, R.-R.; Moss, H. B.; Schwaller, P. Bayesian optimisation for additive screening and yield improvements in chemical reactions — beyond one-hot encodings. ChemRxiv preprint 10.26434/chemrxiv-2022- nll2j. 2022. 57 e04j, Neo4j - The World’s Leading Graph Database. 2012; http: //neo4j.org/.
2306.06283#233
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
Large-language models (LLMs) such as GPT-4 caught the interest of many scientists. Recent studies suggested that these models could be useful in chemistry and materials science. To explore these possibilities, we organized a hackathon. This article chronicles the projects built as part of this hackathon. Participants employed LLMs for various applications, including predicting properties of molecules and materials, designing novel interfaces for tools, extracting knowledge from unstructured data, and developing new educational applications. The diverse topics and the fact that working prototypes could be generated in less than two days highlight that LLMs will profoundly impact the future of our fields. The rich collection of ideas and projects also indicates that the applications of LLMs are not limited to materials science and chemistry but offer potential benefits to a wide range of scientific disciplines.
http://arxiv.org/pdf/2306.06283
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
null
null
cond-mat.mtrl-sci
20230609
20230714
[ { "id": "2209.08203" }, { "id": "2212.04450" } ]
2306.06283
234
57 e04j, Neo4j - The World’s Leading Graph Database. 2012; http: //neo4j.org/. 58 Venugopal, V.; Pai, S.; Olivetti, E. MatKG: The Largest Knowledge Graph in Materials Science—Entities, Rela- tions, and Link Prediction through Graph Representation Learning. arXiv preprint:2210.17340 2022, 59 IcCusker, J. P.; Deagen, M.; Fateye, T.; Wallace, A.; Rashid, S. M.; McGuinness, D. L. Creating and Visualizing the Materials Science Knowledge Graph with Whyis. ISWC (Posters/Demos/Industry). 2021. 60. Dunn, A.; Dagdelen, J.; Walker, N.; Lee, S.; Rosen, A. S.; Ceder, G.; Persson, K. A.; Jain, A. Structured information extraction from complex scientific text with fine-tuned large language models. arXiv preprint: Arziv- 2212.05238 2022, 61 Badhwar, S. Smart Manufacturing - A Case for Creating a Knowledge Network Using Data Mining. 2022.
2306.06283#234
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
Large-language models (LLMs) such as GPT-4 caught the interest of many scientists. Recent studies suggested that these models could be useful in chemistry and materials science. To explore these possibilities, we organized a hackathon. This article chronicles the projects built as part of this hackathon. Participants employed LLMs for various applications, including predicting properties of molecules and materials, designing novel interfaces for tools, extracting knowledge from unstructured data, and developing new educational applications. The diverse topics and the fact that working prototypes could be generated in less than two days highlight that LLMs will profoundly impact the future of our fields. The rich collection of ideas and projects also indicates that the applications of LLMs are not limited to materials science and chemistry but offer potential benefits to a wide range of scientific disciplines.
http://arxiv.org/pdf/2306.06283
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
null
null
cond-mat.mtrl-sci
20230609
20230714
[ { "id": "2209.08203" }, { "id": "2212.04450" } ]
2306.06283
235
61 Badhwar, S. Smart Manufacturing - A Case for Creating a Knowledge Network Using Data Mining. 2022. 62 IcCusker, J. P.; Keshan, N.; Rashid, S.; Deagen, M.; Brinson, C.; McGuinness, D. L. NanoMine: A knowledge graph for nanocomposite materials science. The Semantic Web-ISWC 2020: 19th International Semantic Web Conference, Athens, Greece, November 2-6, 2020, Proceedings, Part II. 2020; pp 144-159. 63 Kearnes, S. M.; Maser, M. R.; Wleklinski, M.; Kast, A.; Doyle, A. G.; Dreher, S. D.; Hawkins, J. M.; Jensen, K. F.; Coley, C. W. The Open Reaction Database. J. Am. Chem. Soc. 143, 18820-18826. 64 Taori, R.; Gulrajani, I.; Zhang, T.; Dubois, Y.; Li, X.; Guestrin, C.; Liang, P.; Hashimoto, T. B. Stanford Alpaca: An Instruction-following LLaMA model. https: //github.com/tatsu-lab/stanford_alpaca, 2023.
2306.06283#235
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
Large-language models (LLMs) such as GPT-4 caught the interest of many scientists. Recent studies suggested that these models could be useful in chemistry and materials science. To explore these possibilities, we organized a hackathon. This article chronicles the projects built as part of this hackathon. Participants employed LLMs for various applications, including predicting properties of molecules and materials, designing novel interfaces for tools, extracting knowledge from unstructured data, and developing new educational applications. The diverse topics and the fact that working prototypes could be generated in less than two days highlight that LLMs will profoundly impact the future of our fields. The rich collection of ideas and projects also indicates that the applications of LLMs are not limited to materials science and chemistry but offer potential benefits to a wide range of scientific disciplines.
http://arxiv.org/pdf/2306.06283
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
null
null
cond-mat.mtrl-sci
20230609
20230714
[ { "id": "2209.08203" }, { "id": "2212.04450" } ]
2306.06283
236
65, Alpaca-LoRA. https: //github.com/tloen/alpaca-lora. 66 Colter, Z.; Fayazi, M.; Youbi, Z. B.-E.; Kamp, S.; Yu, $.; Dreslinski, R. Tablext: A combined neural network and heuristic based table extractor. Array 2022, 15, 100220. 67 Mamaghani, Z. G.; Hawboldt, K. A.; MacQuarrie, 8. Adsorption of CO2 using biochar - Review of the impact of gas mixtures and water on adsorption. J. Environ. Chem. Eng. 2023, 11, 109643. 68 Peng, Y.; Krungleviciute, V.; Eryazici, 1; Hupp, J. T.; Farha, O. K.; Yildirim, T. Methane Storage in Metal—Organic Frameworks: Current Records, Surprise Findings, and Challenges. J. Am. Chem. Soc. 2013, 135, 11887-11894.
2306.06283#236
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
Large-language models (LLMs) such as GPT-4 caught the interest of many scientists. Recent studies suggested that these models could be useful in chemistry and materials science. To explore these possibilities, we organized a hackathon. This article chronicles the projects built as part of this hackathon. Participants employed LLMs for various applications, including predicting properties of molecules and materials, designing novel interfaces for tools, extracting knowledge from unstructured data, and developing new educational applications. The diverse topics and the fact that working prototypes could be generated in less than two days highlight that LLMs will profoundly impact the future of our fields. The rich collection of ideas and projects also indicates that the applications of LLMs are not limited to materials science and chemistry but offer potential benefits to a wide range of scientific disciplines.
http://arxiv.org/pdf/2306.06283
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
null
null
cond-mat.mtrl-sci
20230609
20230714
[ { "id": "2209.08203" }, { "id": "2212.04450" } ]
2306.06283
237
69 Sahoo, B.; Pandey, V.; Dogonchi, A.; Mohapatra, P.; Thatoi, D.; Nayak, N.; Nayak, M. A state-of-art review on 2D material-boosted metal oxide nanoparticle electrodes: Supercapacitor applications. J. Energy Storage 2023, 65, 107335. 70 Suppiah, D. D.; Daud, W. M. A. W.; Johan, M. R. Supported Metal Oxide Catalysts for CO2 Fischer—Tropsch Conversion to Liquid Fuels-A Review. Energy Fuels. 2021, 35, 17261-17278. 71 Gonzalez-Vazquez, M.; Garcia, R.; Gil, M.; Pevida, C.; Rubiera, F. Comparison of the gasification performance of multiple biomass types in a bubbling fluidized bed. Energy Convers. Manag. 2018, 176, 309-323. 72 Johsin, M.; Farhan, S.; Ahmad, N.; Raza, A. H.; Kayani, Z. N.; Jafri, S. H. M.; Raza, R. The electrochemical study of NixCe1-.O2-¢ electrodes using natural gas as a fuel. New J. Chem. 2023, 47, 8679-8692.
2306.06283#237
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
Large-language models (LLMs) such as GPT-4 caught the interest of many scientists. Recent studies suggested that these models could be useful in chemistry and materials science. To explore these possibilities, we organized a hackathon. This article chronicles the projects built as part of this hackathon. Participants employed LLMs for various applications, including predicting properties of molecules and materials, designing novel interfaces for tools, extracting knowledge from unstructured data, and developing new educational applications. The diverse topics and the fact that working prototypes could be generated in less than two days highlight that LLMs will profoundly impact the future of our fields. The rich collection of ideas and projects also indicates that the applications of LLMs are not limited to materials science and chemistry but offer potential benefits to a wide range of scientific disciplines.
http://arxiv.org/pdf/2306.06283
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
null
null
cond-mat.mtrl-sci
20230609
20230714
[ { "id": "2209.08203" }, { "id": "2212.04450" } ]
2306.06283
238
73 Kaur, P.; Singh, K. Review of perovskite-structure related cathode materials for solid oxide fuel cells. Ceram. Int. 2020, 46, 5521-5535. Sengottuvelu, R. jsonformer. https://github.com/1rgs/jsonformer, 2018. 74 75 Choudhary, K.; Kelley, M. L. ChemNLP: A Natural Language Processing based Library for Materials Chemistry Text Data. arXiv preprint arXiv:2209.08203 2022, 76 Kune, K.; Loa, I.; Syassen, K.; Kremer, R.; Ahn, K. MgB2 under pressure: phonon calculations, Raman spec- troscopy, and optical reflectance. arXiv preprint cond-mat/0105402 77 FameLab International — Cheltenham Festivals. https: //www.cheltenhamfestivals.com/famelab,last accessed 2023-05-30. 78 IT 180 - My Thesis in 180 Seconds. https://www.epfl.ch/campus/events/events/public-events/ my-thesis-in-180-seconds,last accessed 2023-07-07. 79 CUPDIGEST. https://clipdigest.com/, last accessed 2023-05-30.
2306.06283#238
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
Large-language models (LLMs) such as GPT-4 caught the interest of many scientists. Recent studies suggested that these models could be useful in chemistry and materials science. To explore these possibilities, we organized a hackathon. This article chronicles the projects built as part of this hackathon. Participants employed LLMs for various applications, including predicting properties of molecules and materials, designing novel interfaces for tools, extracting knowledge from unstructured data, and developing new educational applications. The diverse topics and the fact that working prototypes could be generated in less than two days highlight that LLMs will profoundly impact the future of our fields. The rich collection of ideas and projects also indicates that the applications of LLMs are not limited to materials science and chemistry but offer potential benefits to a wide range of scientific disciplines.
http://arxiv.org/pdf/2306.06283
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
null
null
cond-mat.mtrl-sci
20230609
20230714
[ { "id": "2209.08203" }, { "id": "2212.04450" } ]
2306.06283
239
79 CUPDIGEST. https://clipdigest.com/, last accessed 2023-05-30. Radford, A.; Kim, J. W.; Xu, T.; Brockman, G.; McLeavey, C.; Sutskever, I. Robust speech recognition via large-scale weak supervision. arXiv preprint: ArXiv-2212.04356. 2022. 81 Kim, S.; Chen, J.; Cheng, T.; Gindulyte, A.; He, J.; He, S.; Li, Q.; Shoemaker, B. A.; Thiessen, P. A.; Yu, B.; Zaslavsky, L.; Zhang, J.; Bolton, E. E. PubChem 2023 update. Nucleic Acids Res. 2022, 51, D1373—D1380. 82 Kim, S.; Chen, J.; Cheng, T.; Gindulyte, A.; He, J.; He, S.; Li, Q.; Shoemaker, B. A.; Thiessen, P. A.; Yu, B.; Zaslavsky, L.; Zhang, J.; Bolton, E. E. PubChem 2019 update: improved access to chemical data. Nucleic Acids Res. 2018, 47, D1102-D1109.
2306.06283#239
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
Large-language models (LLMs) such as GPT-4 caught the interest of many scientists. Recent studies suggested that these models could be useful in chemistry and materials science. To explore these possibilities, we organized a hackathon. This article chronicles the projects built as part of this hackathon. Participants employed LLMs for various applications, including predicting properties of molecules and materials, designing novel interfaces for tools, extracting knowledge from unstructured data, and developing new educational applications. The diverse topics and the fact that working prototypes could be generated in less than two days highlight that LLMs will profoundly impact the future of our fields. The rich collection of ideas and projects also indicates that the applications of LLMs are not limited to materials science and chemistry but offer potential benefits to a wide range of scientific disciplines.
http://arxiv.org/pdf/2306.06283
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
null
null
cond-mat.mtrl-sci
20230609
20230714
[ { "id": "2209.08203" }, { "id": "2212.04450" } ]
2306.06283
240
83, Kim, S.; Thiessen, P. A.; Cheng, T.; Yu, B.; Bolton, E. E. An update on PUG-REST: RESTful interface for programmatic access to PubChem. Nucleic Acids Res. 2018, 46, W563-W570. 84 Streamlit. https: //streamlit.io/. 44 # Acronyms AI: artificial intelligence. API: application programming interface. BO: Bayesian optimization. CAS: Chemical Abstract Services. COT: chain of thought. DFT: density functional theory. DOI: digital object identifier. ELN: electronic lab notebook. GA: genetic algorithm. GPR: Gaussian process regression. GPT: generative pretrained transformer. GUI: graphical user interface. HTML: HyperText Markup Language. ICL: in-context learning. ID: inverse design. InChl: international chemical identifier. JSON: JavaScript object notation. LIFT: language-interfaced fine-tuning. LIMS: laboratory information system. LLM: large language model. LoRA: low-rank adaptors. MAD: median absolute deviation. MAE: mean absolute error. MAPI: Materials Project API. ML: machine learning. NER: named entity recognition. NLM: national library of medicine. NLP: natural language processing. OCR: optical character recognition. # Acronyms # Acronyms
2306.06283#240
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
Large-language models (LLMs) such as GPT-4 caught the interest of many scientists. Recent studies suggested that these models could be useful in chemistry and materials science. To explore these possibilities, we organized a hackathon. This article chronicles the projects built as part of this hackathon. Participants employed LLMs for various applications, including predicting properties of molecules and materials, designing novel interfaces for tools, extracting knowledge from unstructured data, and developing new educational applications. The diverse topics and the fact that working prototypes could be generated in less than two days highlight that LLMs will profoundly impact the future of our fields. The rich collection of ideas and projects also indicates that the applications of LLMs are not limited to materials science and chemistry but offer potential benefits to a wide range of scientific disciplines.
http://arxiv.org/pdf/2306.06283
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
null
null
cond-mat.mtrl-sci
20230609
20230714
[ { "id": "2209.08203" }, { "id": "2212.04450" } ]
2306.06283
241
NER: named entity recognition. NLM: national library of medicine. NLP: natural language processing. OCR: optical character recognition. # Acronyms # Acronyms ORD: Open Reaction Database. PDB: protein data bank. PEFT: parameter efficient fine-tuning. RF: random forest. RLHF: reinforcement learning from human feedback. ROUGE: Recall-Oriented Understudy for Gisting Evaluation. SELFIES: self-referencing embedded strings. SMILES: simplified molecular-input line-entry system. SVM: support vector machine. UI: user interface. 46 84
2306.06283#241
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
Large-language models (LLMs) such as GPT-4 caught the interest of many scientists. Recent studies suggested that these models could be useful in chemistry and materials science. To explore these possibilities, we organized a hackathon. This article chronicles the projects built as part of this hackathon. Participants employed LLMs for various applications, including predicting properties of molecules and materials, designing novel interfaces for tools, extracting knowledge from unstructured data, and developing new educational applications. The diverse topics and the fact that working prototypes could be generated in less than two days highlight that LLMs will profoundly impact the future of our fields. The rich collection of ideas and projects also indicates that the applications of LLMs are not limited to materials science and chemistry but offer potential benefits to a wide range of scientific disciplines.
http://arxiv.org/pdf/2306.06283
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
null
null
cond-mat.mtrl-sci
20230609
20230714
[ { "id": "2209.08203" }, { "id": "2212.04450" } ]
2306.04926
0
# covLLM: Large Language Models for COVID-19 Biomedical Literature Yousuf A. Khan*,1,2,3,4,5 Clarisse Hokia*,1,6 Jennifer Xu*,6,7 Ben Ehlert*,1 All authors contributed equally to this work 1Department of Biomedical Data Science, Stanford University, CA, USA 2Department of Molecular and Cellular Physiology, Stanford University, Stanford University, CA, USA 3Department of Neurology and Neurological Sciences, Stanford University, Stanford, CA, USA 4Department of Structural Biology, Stanford University, Stanford, CA, USA 5Department of Photon Science, Stanford University, Stanford, CA, USA 6Department of Computer Science, Stanford University, Stanford, CA, USA 7Department of Bioengineering, Stanford University, Stanford, CA, USA Emails: [email protected], [email protected], [email protected], [email protected]
2306.04926#0
covLLM: Large Language Models for COVID-19 Biomedical Literature
The COVID-19 pandemic led to 1.1 million deaths in the United States, despite the explosion of coronavirus research. These new findings are slow to translate to clinical interventions, leading to poorer patient outcomes and unnecessary deaths. One reason is that clinicians, overwhelmed by patients, struggle to keep pace with the rate of new coronavirus literature. A potential solution is developing a tool for evaluating coronavirus literature using large language models (LLMs) -- neural networks that are deployed for natural language processing. LLMs can be used to summarize and extract user-specified information. The greater availability and advancement of LLMs and pre-processed coronavirus literature databases provide the opportunity to assist clinicians in evaluating coronavirus literature through a coronavirus literature specific LLM (covLLM), a tool that directly takes an inputted research article and a user query to return an answer. Using the COVID-19 Open Research Dataset (CORD-19), we produced two datasets: (1) synCovid, which uses a combination of handwritten prompts and synthetic prompts generated using OpenAI, and (2) real abstracts, which contains abstract and title pairs. covLLM was trained with LLaMA 7B as a baseline model to produce three models trained on (1) the Alpaca and synCovid datasets, (2) the synCovid dataset, and (3) the synCovid and real abstract datasets. These models were evaluated by two human evaluators and ChatGPT. Results demonstrate that training covLLM on the synCovid and abstract pairs datasets performs competitively with ChatGPT and outperforms covLLM trained primarily using the Alpaca dataset.
http://arxiv.org/pdf/2306.04926
Yousuf A. Khan, Clarisse Hokia, Jennifer Xu, Ben Ehlert
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230608
20230608
[]
2306.05087
0
3 2 0 2 n u J 8 ] L C . s c [ 1 v 7 8 0 5 0 . 6 0 3 2 : v i X r a # PandaLM: An Automatic Evaluation Benchmark for LLM Instruction Tuning Optimization Yidong Wang1,2∗, Zhuohao Yu1∗, Zhengran Zeng1, Linyi Yang2, Cunxiang Wang2, Hao Chen3, Chaoya Jiang1, Rui Xie1, Jindong Wang3, Xing Xie3, Wei Ye1†, Shikun Zhang1†, Yue Zhang2† 1Peking University 2Westlake University 3Microsoft Research Asia # Abstract
2306.05087#0
PandaLM: An Automatic Evaluation Benchmark for LLM Instruction Tuning Optimization
Instruction tuning large language models (LLMs) remains a challenging task, owing to the complexity of hyperparameter selection and the difficulty involved in evaluating the tuned models. To determine the optimal hyperparameters, an automatic, robust, and reliable evaluation benchmark is essential. However, establishing such a benchmark is not a trivial task due to the challenges associated with evaluation accuracy and privacy protection. In response to these challenges, we introduce a judge large language model, named PandaLM, which is trained to distinguish the superior model given several LLMs. PandaLM's focus extends beyond just the objective correctness of responses, which is the main focus of traditional evaluation datasets. It addresses vital subjective factors such as relative conciseness, clarity, adherence to instructions, comprehensiveness, and formality. To ensure the reliability of PandaLM, we collect a diverse human-annotated test dataset, where all contexts are generated by humans and labels are aligned with human preferences. Our results indicate that PandaLM-7B achieves 93.75% of GPT-3.5's evaluation ability and 88.28% of GPT-4's in terms of F1-score on our test dataset. PandaLM enables the evaluation of LLM to be fairer but with less cost, evidenced by significant improvements achieved by models tuned through PandaLM compared to their counterparts trained with default Alpaca's hyperparameters. In addition, PandaLM does not depend on API-based evaluations, thus avoiding potential data leakage. All resources of PandaLM are released at https://github.com/WeOpenML/PandaLM.
http://arxiv.org/pdf/2306.05087
Yidong Wang, Zhuohao Yu, Zhengran Zeng, Linyi Yang, Cunxiang Wang, Hao Chen, Chaoya Jiang, Rui Xie, Jindong Wang, Xing Xie, Wei Ye, Shikun Zhang, Yue Zhang
cs.CL, cs.AI
null
null
cs.CL
20230608
20230608
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "1803.05457" }, { "id": "2305.10403" }, { "id": "1807.05118" }, { "id": "2211.05100" }, { "id": "2302.10198" }, { "id": "2205.01068" }, { "id": "2003.05689" }, { "id": "1806.03822" }, { "id": "1711.05101" }, { "id": "2304.03208" }, { "id": "2304.01373" }, { "id": "2303.14742" }, { "id": "2303.04673" }, { "id": "2212.10560" }, { "id": "2211.08073" }, { "id": "2210.02414" }, { "id": "2304.03277" }, { "id": "2002.06305" }, { "id": "2305.13412" }, { "id": "2304.01196" } ]
2306.05212
0
3 2 0 2 n u J 8 ] R I . s c [ 1 v 2 1 2 5 0 . 6 0 3 2 : v i X r a # RETA-LLM: A Retrieval-Augmented Large Language Model Toolkit Jiongnan Liu1, Jiajie Jin2, Zihan Wang1, Jiehan Cheng1, Zhicheng Dou1∗, and Ji-Rong Wen1 1Gaoling School of Artificial Intelligence, Renmin University of China 2University of Science and Technology of China 1{liujn, wangzihan0527, jiehan_cheng, dou, jrwen}@ruc.edu.cn [email protected] # Abstract
2306.05212#0
RETA-LLM: A Retrieval-Augmented Large Language Model Toolkit
Although Large Language Models (LLMs) have demonstrated extraordinary capabilities in many domains, they still have a tendency to hallucinate and generate fictitious responses to user requests. This problem can be alleviated by augmenting LLMs with information retrieval (IR) systems (also known as retrieval-augmented LLMs). Applying this strategy, LLMs can generate more factual texts in response to user input according to the relevant content retrieved by IR systems from external corpora as references. In addition, by incorporating external knowledge, retrieval-augmented LLMs can answer in-domain questions that cannot be answered by solely relying on the world knowledge stored in parameters. To support research in this area and facilitate the development of retrieval-augmented LLM systems, we develop RETA-LLM, a {RET}reival-{A}ugmented LLM toolkit. In RETA-LLM, we create a complete pipeline to help researchers and users build their customized in-domain LLM-based systems. Compared with previous retrieval-augmented LLM systems, RETA-LLM provides more plug-and-play modules to support better interaction between IR systems and LLMs, including {request rewriting, document retrieval, passage extraction, answer generation, and fact checking} modules. Our toolkit is publicly available at https://github.com/RUC-GSAI/YuLan-IR/tree/main/RETA-LLM.
http://arxiv.org/pdf/2306.05212
Jiongnan Liu, Jiajie Jin, Zihan Wang, Jiehan Cheng, Zhicheng Dou, Ji-Rong Wen
cs.IR
Technical Report for RETA-LLM
null
cs.IR
20230608
20230608
[ { "id": "2210.02414" }, { "id": "2208.05753" } ]
2306.05301
0
3 2 0 2 p e S 7 ] L C . s c [ 2 v 1 0 3 5 0 . 6 0 3 2 : v i X r a ToolAlpaca: Generalized Tool Learning for Language Models with 3000 Simulated Cases Qiaoyu Tang1,3, Ziliang Deng1,3, Hongyu Lin1*, Xianpei Han1,2*, Qiao Liang1,3, Boxi Cao1,3, Le Sun1,2 1Chinese Information Processing Laboratory 2State Key Laboratory of Computer Science Institute of Software, Chinese Academy of Sciences, Beijing, China 3University of Chinese Academy of Sciences, Beijing, China {tangqiaoyu2020,dengziliang2021,hongyu,xianpei}@iscas.ac.cn {liangqiao2022,boxi2020,sunle}@iscas.ac.cn # Abstract intelligence holds great significance in advancing the devel- opment of general intelligent systems.
2306.05301#0
ToolAlpaca: Generalized Tool Learning for Language Models with 3000 Simulated Cases
Enabling large language models to utilize real-world tools effectively is crucial for achieving embodied intelligence. Existing approaches to tool learning have either primarily relied on extremely large language models, such as GPT-4, to attain generalized tool-use abilities in a zero-shot manner, or utilized supervised learning to train limited scopes of tools on compact models. However, it remains uncertain whether smaller language models can achieve generalized tool-use abilities without tool-specific training. To address this question, this paper introduces ToolAlpaca, a novel framework designed to automatically generate a diverse tool-use corpus and learn generalized tool-use abilities on compact language models with minimal human intervention. Specifically, ToolAlpaca first automatically creates a highly diversified tool-use corpus by building a multi-agent simulation environment. The corpus contains 3938 tool-use instances from more than 400 real-world tool APIs spanning 50 distinct categories. Subsequently, the constructed corpus is employed to fine-tune compact language models, resulting in two models, namely ToolAlpaca-7B and ToolAlpaca-13B, respectively. Finally, we evaluate the ability of these models to utilize previously unseen tools without specific training. Experimental results demonstrate that ToolAlpaca achieves effective generalized tool-use capabilities comparable to those of extremely large language models like GPT-3.5, demonstrating that learning generalized tool-use ability is feasible for compact language models.
http://arxiv.org/pdf/2306.05301
Qiaoyu Tang, Ziliang Deng, Hongyu Lin, Xianpei Han, Qiao Liang, Boxi Cao, Le Sun
cs.CL
null
null
cs.CL
20230608
20230907
[ { "id": "2305.16504" }, { "id": "2305.13691" }, { "id": "2304.08244" }, { "id": "2303.08774" }, { "id": "2211.08264" }, { "id": "2304.08354" }, { "id": "2305.18752" }, { "id": "2212.14024" }, { "id": "2211.10435" }, { "id": "2210.03629" }, { "id": "2212.09689" }, { "id": "2306.06624" }, { "id": "2212.10560" }, { "id": "2211.12588" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2307.16789" }, { "id": "2304.09842" }, { "id": "2305.11206" }, { "id": "2302.07842" } ]
2306.05499
0
3 2 0 2 n u J 8 ] R C . s c [ 1 v 9 9 4 5 0 . 6 0 3 2 : v i X r a # Prompt Injection attack against LLM-integrated Applications Yi Liu1, Gelei Deng1, Yuekang Li2, Kailong Wang3, Tianwei Zhang1, Yepang Liu4, Haoyu Wang3, Yan Zheng5, and Yang Liu1 1Nanyang Technological University, 2University of New South Wales, 3Huazhong University of Science and Technology, 4Southern University of Science and Technology, 5Tianjin University {yi009, gelei.deng, yli044, tianwei.zhang, yangliu}@ntu.edu.sg, [email protected], [email protected], [email protected], [email protected] # Abstract
2306.05499#0
Prompt Injection attack against LLM-integrated Applications
Large Language Models (LLMs), renowned for their superior proficiency in language comprehension and generation, stimulate a vibrant ecosystem of applications around them. However, their extensive assimilation into various services introduces significant security risks. This study deconstructs the complexities and implications of prompt injection attacks on actual LLM-integrated applications. Initially, we conduct an exploratory analysis on ten commercial applications, highlighting the constraints of current attack strategies in practice. Prompted by these limitations, we subsequently formulate HouYi, a novel black-box prompt injection attack technique, which draws inspiration from traditional web injection attacks. HouYi is compartmentalized into three crucial elements: a seamlessly-incorporated pre-constructed prompt, an injection prompt inducing context partition, and a malicious payload designed to fulfill the attack objectives. Leveraging HouYi, we unveil previously unknown and severe attack outcomes, such as unrestricted arbitrary LLM usage and uncomplicated application prompt theft. We deploy HouYi on 36 actual LLM-integrated applications and discern 31 applications susceptible to prompt injection. 10 vendors have validated our discoveries, including Notion, which has the potential to impact millions of users. Our investigation illuminates both the possible risks of prompt injection attacks and the possible tactics for mitigation.
http://arxiv.org/pdf/2306.05499
Yi Liu, Gelei Deng, Yuekang Li, Kailong Wang, Tianwei Zhang, Yepang Liu, Haoyu Wang, Yan Zheng, Yang Liu
cs.CR, cs.AI, cs.CL, cs.SE
null
null
cs.CR
20230608
20230608
[]
2306.04926
1
Abstract The COVID-19 pandemic led to 1.1 million deaths in the United States, despite the explosion of coronavirus research. These new findings are slow to translate to clinical interventions, leading to poorer patient outcomes and unnecessary deaths. One reason is that clinicians, overwhelmed by patients, struggle to keep pace with the rate of new coronavirus literature. A potential solution is developing a tool for evaluating coronavirus literature using large language models (LLMs) – neural networks that are deployed for natural language processing. LLMs can be used to summarize and extract user-specified information. The greater availability and advancement of LLMs and pre- processed coronavirus literature databases provide the opportunity to assist clinicians in evaluating coronavirus literature through a coronavirus literature specific LLM (covLLM), a tool that directly takes an inputted research article and a user query to return an answer. Using the COVID-19 Open Research Dataset (CORD-19), we produced two datasets: (1) synCovid, which uses a combination of handwritten prompts and synthetic prompts generated using OpenAI, and (2) real abstracts, which
2306.04926#1
covLLM: Large Language Models for COVID-19 Biomedical Literature
The COVID-19 pandemic led to 1.1 million deaths in the United States, despite the explosion of coronavirus research. These new findings are slow to translate to clinical interventions, leading to poorer patient outcomes and unnecessary deaths. One reason is that clinicians, overwhelmed by patients, struggle to keep pace with the rate of new coronavirus literature. A potential solution is developing a tool for evaluating coronavirus literature using large language models (LLMs) -- neural networks that are deployed for natural language processing. LLMs can be used to summarize and extract user-specified information. The greater availability and advancement of LLMs and pre-processed coronavirus literature databases provide the opportunity to assist clinicians in evaluating coronavirus literature through a coronavirus literature specific LLM (covLLM), a tool that directly takes an inputted research article and a user query to return an answer. Using the COVID-19 Open Research Dataset (CORD-19), we produced two datasets: (1) synCovid, which uses a combination of handwritten prompts and synthetic prompts generated using OpenAI, and (2) real abstracts, which contains abstract and title pairs. covLLM was trained with LLaMA 7B as a baseline model to produce three models trained on (1) the Alpaca and synCovid datasets, (2) the synCovid dataset, and (3) the synCovid and real abstract datasets. These models were evaluated by two human evaluators and ChatGPT. Results demonstrate that training covLLM on the synCovid and abstract pairs datasets performs competitively with ChatGPT and outperforms covLLM trained primarily using the Alpaca dataset.
http://arxiv.org/pdf/2306.04926
Yousuf A. Khan, Clarisse Hokia, Jennifer Xu, Ben Ehlert
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230608
20230608
[]
2306.05087
1
Instruction tuning large language models (LLMs) remains a challenging task, owing to the complexity of hyperparameter selection and the difficulty involved in evaluating the tuned models. To determine the optimal hyperparameters, an automatic, robust, and reliable evaluation benchmark is essential. However, es- tablishing such a benchmark is not a trivial task due to the challenges associated with evaluation accuracy and privacy protection. In response to these challenges, we introduce a judge large language model, named PandaLM, which is trained to distinguish the superior model given several LLMs. PandaLM’s focus extends beyond just the objective correctness of responses, which is the main focus of traditional evaluation datasets. It addresses vital subjective factors such as relative conciseness, clarity, adherence to instructions, comprehensiveness, and formality. To ensure the reliability of PandaLM, we collect a diverse human-annotated test dataset, where all contexts are generated by humans and labels are aligned with human preferences. Our results indicate that PandaLM-7B achieves 93.75% of GPT-3.5’s evaluation ability and 88.28% of GPT-4’s in
2306.05087#1
PandaLM: An Automatic Evaluation Benchmark for LLM Instruction Tuning Optimization
Instruction tuning large language models (LLMs) remains a challenging task, owing to the complexity of hyperparameter selection and the difficulty involved in evaluating the tuned models. To determine the optimal hyperparameters, an automatic, robust, and reliable evaluation benchmark is essential. However, establishing such a benchmark is not a trivial task due to the challenges associated with evaluation accuracy and privacy protection. In response to these challenges, we introduce a judge large language model, named PandaLM, which is trained to distinguish the superior model given several LLMs. PandaLM's focus extends beyond just the objective correctness of responses, which is the main focus of traditional evaluation datasets. It addresses vital subjective factors such as relative conciseness, clarity, adherence to instructions, comprehensiveness, and formality. To ensure the reliability of PandaLM, we collect a diverse human-annotated test dataset, where all contexts are generated by humans and labels are aligned with human preferences. Our results indicate that PandaLM-7B achieves 93.75% of GPT-3.5's evaluation ability and 88.28% of GPT-4's in terms of F1-score on our test dataset. PandaLM enables the evaluation of LLM to be fairer but with less cost, evidenced by significant improvements achieved by models tuned through PandaLM compared to their counterparts trained with default Alpaca's hyperparameters. In addition, PandaLM does not depend on API-based evaluations, thus avoiding potential data leakage. All resources of PandaLM are released at https://github.com/WeOpenML/PandaLM.
http://arxiv.org/pdf/2306.05087
Yidong Wang, Zhuohao Yu, Zhengran Zeng, Linyi Yang, Cunxiang Wang, Hao Chen, Chaoya Jiang, Rui Xie, Jindong Wang, Xing Xie, Wei Ye, Shikun Zhang, Yue Zhang
cs.CL, cs.AI
null
null
cs.CL
20230608
20230608
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "1803.05457" }, { "id": "2305.10403" }, { "id": "1807.05118" }, { "id": "2211.05100" }, { "id": "2302.10198" }, { "id": "2205.01068" }, { "id": "2003.05689" }, { "id": "1806.03822" }, { "id": "1711.05101" }, { "id": "2304.03208" }, { "id": "2304.01373" }, { "id": "2303.14742" }, { "id": "2303.04673" }, { "id": "2212.10560" }, { "id": "2211.08073" }, { "id": "2210.02414" }, { "id": "2304.03277" }, { "id": "2002.06305" }, { "id": "2305.13412" }, { "id": "2304.01196" } ]
2306.05152
1
# Juyeon Yoon KAIST [email protected] # Shin Yoo KAIST [email protected] Abstract—Software testing is an important part of the devel- opment cycle, yet it requires specialized expertise and substantial developer effort to adequately test software. Recent discoveries of the capabilities of large language models (LLMs) suggest that they can be used as automated testing assistants, and thus provide helpful information and even drive the testing process. To highlight the potential of this technology, we present a taxonomy of LLM-based testing agents based on their level of autonomy, and describe how a greater level of autonomy can benefit developers in practice. An example use of LLMs as a testing assistant is provided to demonstrate how a conversational framework for testing can help developers. This also highlights how the often criticized “hallucination” of LLMs can be beneficial for testing. We identify other tangible benefits that LLM-driven testing agents can bestow, and also discuss potential limitations. Index Terms—software testing, machine learning, large language model, artificial intelligence, test automation # I. INTRODUCTION
2306.05152#1
Towards Autonomous Testing Agents via Conversational Large Language Models
Software testing is an important part of the development cycle, yet it requires specialized expertise and substantial developer effort to adequately test software. Recent discoveries of the capabilities of large language models (LLMs) suggest that they can be used as automated testing assistants, and thus provide helpful information and even drive the testing process. To highlight the potential of this technology, we present a taxonomy of LLM-based testing agents based on their level of autonomy, and describe how a greater level of autonomy can benefit developers in practice. An example use of LLMs as a testing assistant is provided to demonstrate how a conversational framework for testing can help developers. This also highlights how the often criticized hallucination of LLMs can be beneficial for testing. We identify other tangible benefits that LLM-driven testing agents can bestow, and also discuss potential limitations.
http://arxiv.org/pdf/2306.05152
Robert Feldt, Sungmin Kang, Juyeon Yoon, Shin Yoo
cs.SE
null
null
cs.SE
20230608
20230905
[ { "id": "2305.10601" }, { "id": "2303.17580" }, { "id": "2305.16291" }, { "id": "2201.09305" }, { "id": "2210.03629" }, { "id": "2211.10435" }, { "id": "2303.12712" }, { "id": "2302.03287" }, { "id": "2209.11515" } ]
2306.05171
1
1)~6) South China University of Technology, School of Computer Science and Engineering, Guang Zhou 510006 7)University of Washington, Human centered design and engineering, Seattle ,98195 Abstract — Traditional robot task planning methods face challenges when dealing with highly unstructured environments and complex tasks. We propose a task planning method that combines human expertise with an LLM and have designed an LLM prompt template, Think_Net_Prompt, with stronger expressive power to represent structured professional knowledge. We further propose a method to progressively decompose tasks and generate a task tree to reduce the planning volume for each task, and we have designed a strategy to decouple robot task planning. By dividing different planning entities and separating the task from the actual machine binding process, the task planning process becomes more flexible. Research results show that our method performs well in handling specified code formats, understanding the relationship between tasks and subtasks, and extracting parameters from text descriptions. However, there are also problems such as limited complexity of task logic handling, ambiguity in the quantity of parts and the precise location of assembly. Improving the precision of task description and cognitive improvements. certain https://github.com/NOMIzy/Think_Net_Prompt unstructured environments, directly specifying a complete behavioral strategy for robots is impractical due to the excessive complexity of the required strategies.
2306.05171#1
Robot Task Planning Based on Large Language Model Representing Knowledge with Directed Graph Structures
Traditional robot task planning methods face challenges when dealing with highly unstructured environments and complex tasks. We propose a task planning method that combines human expertise with an LLM and have designed an LLM prompt template, Think_Net_Prompt, with stronger expressive power to represent structured professional knowledge. We further propose a method to progressively decompose tasks and generate a task tree to reduce the planning volume for each task, and we have designed a strategy to decouple robot task planning. By dividing different planning entities and separating the task from the actual machine binding process, the task planning process becomes more flexible. Research results show that our method performs well in handling specified code formats, understanding the relationship between tasks and subtasks, and extracting parameters from text descriptions. However, there are also problems such as limited complexity of task logic handling, ambiguity in the quantity of parts and the precise location of assembly. Improving the precision of task description and cognitive structure can bring certain improvements. https://github.com/NOMIzy/Think_Net_Prompt
http://arxiv.org/pdf/2306.05171
Yue Zhen, Sheng Bi, Lu Xing-tong, Pan Wei-qin, Shi Hai-peng, Chen Zi-rui, Fang Yi-shu
cs.RO, cs.AI
null
null
cs.RO
20230608
20230608
[ { "id": "2302.12927" }, { "id": "2212.06817" }, { "id": "2006.05398" }, { "id": "2209.05451" }, { "id": "2209.11302" }, { "id": "2210.12250" }, { "id": "2204.01691" }, { "id": "2201.07207" }, { "id": "2303.12153" } ]
2306.05212
1
Although Large Language Models (LLMs) have demonstrated extraordinary capabilities in many domains, they still have a tendency to hallucinate and generate fictitious responses to user requests. This problem can be allevi- ated by augmenting LLMs with information retrieval (IR) systems (also known as retrieval- augmented LLMs). Applying this strategy, LLMs can generate more factual texts in re- sponse to user input according to the relevant content retrieved by IR systems from external corpora as references. In addition, by incorpo- rating external knowledge, retrieval-augmented LLMs can answer in-domain questions that cannot be answered by solely relying on the world knowledge stored in parameters. To sup- port research in this area and facilitate the de- velopment of retrieval-augmented LLM sys- tems, we develop RETA-LLM, a RETreival- Augmented LLM toolkit. In RETA-LLM, we create a complete pipeline to help researchers and users build their customized in-domain LLM-based systems. Compared with previ- ous retrieval-augmented LLM systems, RETA- LLM provides more plug-and-play modules to support
2306.05212#1
RETA-LLM: A Retrieval-Augmented Large Language Model Toolkit
Although Large Language Models (LLMs) have demonstrated extraordinary capabilities in many domains, they still have a tendency to hallucinate and generate fictitious responses to user requests. This problem can be alleviated by augmenting LLMs with information retrieval (IR) systems (also known as retrieval-augmented LLMs). Applying this strategy, LLMs can generate more factual texts in response to user input according to the relevant content retrieved by IR systems from external corpora as references. In addition, by incorporating external knowledge, retrieval-augmented LLMs can answer in-domain questions that cannot be answered by solely relying on the world knowledge stored in parameters. To support research in this area and facilitate the development of retrieval-augmented LLM systems, we develop RETA-LLM, a {RET}reival-{A}ugmented LLM toolkit. In RETA-LLM, we create a complete pipeline to help researchers and users build their customized in-domain LLM-based systems. Compared with previous retrieval-augmented LLM systems, RETA-LLM provides more plug-and-play modules to support better interaction between IR systems and LLMs, including {request rewriting, document retrieval, passage extraction, answer generation, and fact checking} modules. Our toolkit is publicly available at https://github.com/RUC-GSAI/YuLan-IR/tree/main/RETA-LLM.
http://arxiv.org/pdf/2306.05212
Jiongnan Liu, Jiajie Jin, Zihan Wang, Jiehan Cheng, Zhicheng Dou, Ji-Rong Wen
cs.IR
Technical Report for RETA-LLM
null
cs.IR
20230608
20230608
[ { "id": "2210.02414" }, { "id": "2208.05753" } ]
2306.05301
1
Enabling large language models to utilize real-world tools ef- fectively is crucial for achieving embodied intelligence. Ex- isting approaches to tool learning have either primarily relied on extremely large language models, such as GPT-4, to at- tain generalized tool-use abilities in a zero-shot manner, or utilized supervised learning to train limited scopes of tools on compact models. However, it remains uncertain whether smaller language models can achieve generalized tool-use abilities without tool-specific training. To address this ques- tion, this paper introduces ToolAlpaca, a novel framework designed to automatically generate a diverse tool-use cor- pus and learn generalized tool-use abilities on compact lan- guage models with minimal human intervention. Specifically, ToolAlpaca first automatically creates a highly diversified tool-use corpus by building a multi-agent simulation environ- ment. The corpus contains 3938 tool-use instances from more than 400 real-world tool APIs spanning 50 distinct categories. Subsequently, the constructed corpus is employed to fine-tune compact language models, resulting in two models, namely ToolAlpaca-7B and ToolAlpaca-13B, respectively. Finally, we evaluate the ability of these
2306.05301#1
ToolAlpaca: Generalized Tool Learning for Language Models with 3000 Simulated Cases
Enabling large language models to utilize real-world tools effectively is crucial for achieving embodied intelligence. Existing approaches to tool learning have either primarily relied on extremely large language models, such as GPT-4, to attain generalized tool-use abilities in a zero-shot manner, or utilized supervised learning to train limited scopes of tools on compact models. However, it remains uncertain whether smaller language models can achieve generalized tool-use abilities without tool-specific training. To address this question, this paper introduces ToolAlpaca, a novel framework designed to automatically generate a diverse tool-use corpus and learn generalized tool-use abilities on compact language models with minimal human intervention. Specifically, ToolAlpaca first automatically creates a highly diversified tool-use corpus by building a multi-agent simulation environment. The corpus contains 3938 tool-use instances from more than 400 real-world tool APIs spanning 50 distinct categories. Subsequently, the constructed corpus is employed to fine-tune compact language models, resulting in two models, namely ToolAlpaca-7B and ToolAlpaca-13B, respectively. Finally, we evaluate the ability of these models to utilize previously unseen tools without specific training. Experimental results demonstrate that ToolAlpaca achieves effective generalized tool-use capabilities comparable to those of extremely large language models like GPT-3.5, demonstrating that learning generalized tool-use ability is feasible for compact language models.
http://arxiv.org/pdf/2306.05301
Qiaoyu Tang, Ziliang Deng, Hongyu Lin, Xianpei Han, Qiao Liang, Boxi Cao, Le Sun
cs.CL
null
null
cs.CL
20230608
20230907
[ { "id": "2305.16504" }, { "id": "2305.13691" }, { "id": "2304.08244" }, { "id": "2303.08774" }, { "id": "2211.08264" }, { "id": "2304.08354" }, { "id": "2305.18752" }, { "id": "2212.14024" }, { "id": "2211.10435" }, { "id": "2210.03629" }, { "id": "2212.09689" }, { "id": "2306.06624" }, { "id": "2212.10560" }, { "id": "2211.12588" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2307.16789" }, { "id": "2304.09842" }, { "id": "2305.11206" }, { "id": "2302.07842" } ]
2306.05424
1
# Abstract Conversation agents fueled by Large Language Models (LLMs) are providing a new way to interact with visual data. While there have been initial attempts for image-based conversation models, this work addresses the underexplored field of video-based conversation by introducing Video-ChatGPT. It is a multimodal model that merges a video-adapted visual encoder with a LLM. The resulting model is capable of understanding and generating detailed conversations about videos. We introduce a new dataset of 100,000 video-instruction pairs used to train Video- ChatGPT acquired via manual and semi-automated pipeline that is easily scalable and robust to label noise. We also develop a quantitative evaluation framework for video-based dialogue models to objectively analyse the strengths and weaknesses of video-based dialogue models. Our code, models, instruction set and demo are released at https://github.com/mbzuai-oryx/Video-ChatGPT.
2306.05424#1
Video-ChatGPT: Towards Detailed Video Understanding via Large Vision and Language Models
Conversation agents fueled by Large Language Models (LLMs) are providing a new way to interact with visual data. While there have been initial attempts for image-based conversation models, this work addresses the underexplored field of video-based conversation by introducing Video-ChatGPT. It is a multimodal model that merges a video-adapted visual encoder with a LLM. The model is capable of understanding and generating human-like conversations about videos. We introduce a new dataset of 100,000 video-instruction pairs used to train Video-ChatGPT acquired via manual and semi-automated pipeline that is easily scalable and robust to label noise. We also develop a quantiative evaluation framework for video-based dialogue models to objectively analyse the strengths and weaknesses of proposed models. Our code, models, instruction-sets and demo are released at https://github.com/mbzuai-oryx/Video-ChatGPT.
http://arxiv.org/pdf/2306.05424
Muhammad Maaz, Hanoona Rasheed, Salman Khan, Fahad Shahbaz Khan
cs.CV
null
null
cs.CV
20230608
20230608
[ { "id": "2103.07461" }, { "id": "2302.13971" }, { "id": "2109.08472" }, { "id": "2303.05657" }, { "id": "2212.00280" }, { "id": "2305.06355" }, { "id": "2206.08155" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2005.14165" }, { "id": "2305.16355" }, { "id": "2212.03191" }, { "id": "2205.01068" } ]
2306.05499
1
# Abstract Large Language Models (LLMs), renowned for their supe- rior proficiency in language comprehension and generation, stimulate a vibrant ecosystem of applications around them. However, their extensive assimilation into various services introduces significant security risks. This study deconstructs the complexities and implications of prompt injection attacks on actual LLM-integrated applications. Initially, we conduct an exploratory analysis on ten commercial applications, high- lighting the constraints of current attack strategies in practice. Prompted by these limitations, we subsequently formulate HOUYI, a novel black-box prompt injection attack technique, which draws inspiration from traditional web injection attacks. HOUYI is compartmentalized into three crucial elements: a seamlessly-incorporated pre-constructed prompt, an injection prompt inducing context partition, and a malicious payload designed to fulfill the attack objectives. Leveraging HOUYI, we unveil previously unknown and severe attack outcomes, such as unrestricted arbitrary LLM usage and uncomplicated application prompt theft. We deploy HOUYI on 36 actual LLM-integrated applica- tions and discern 31 applications susceptible to prompt in- jection. 10 vendors have validated our discoveries, including Notion, which has the potential to impact millions of users. Our investigation illuminates both the possible risks of prompt injection attacks and the possible tactics for mitigation. # 1 Introduction
2306.05499#1
Prompt Injection attack against LLM-integrated Applications
Large Language Models (LLMs), renowned for their superior proficiency in language comprehension and generation, stimulate a vibrant ecosystem of applications around them. However, their extensive assimilation into various services introduces significant security risks. This study deconstructs the complexities and implications of prompt injection attacks on actual LLM-integrated applications. Initially, we conduct an exploratory analysis on ten commercial applications, highlighting the constraints of current attack strategies in practice. Prompted by these limitations, we subsequently formulate HouYi, a novel black-box prompt injection attack technique, which draws inspiration from traditional web injection attacks. HouYi is compartmentalized into three crucial elements: a seamlessly-incorporated pre-constructed prompt, an injection prompt inducing context partition, and a malicious payload designed to fulfill the attack objectives. Leveraging HouYi, we unveil previously unknown and severe attack outcomes, such as unrestricted arbitrary LLM usage and uncomplicated application prompt theft. We deploy HouYi on 36 actual LLM-integrated applications and discern 31 applications susceptible to prompt injection. 10 vendors have validated our discoveries, including Notion, which has the potential to impact millions of users. Our investigation illuminates both the possible risks of prompt injection attacks and the possible tactics for mitigation.
http://arxiv.org/pdf/2306.05499
Yi Liu, Gelei Deng, Yuekang Li, Kailong Wang, Tianwei Zhang, Yepang Liu, Haoyu Wang, Yan Zheng, Yang Liu
cs.CR, cs.AI, cs.CL, cs.SE
null
null
cs.CR
20230608
20230608
[]
2306.04926
2
synCovid, which uses a combination of handwritten prompts and synthetic prompts generated using OpenAI, and (2) real abstracts, which contains abstract and title pairs. covLLM was trained with LLaMA 7B as a baseline model to produce three models trained on (1) the Alpaca and synCovid datasets, (2) the synCovid dataset, and (3) the synCovid and real abstract datasets. These models were evaluated by two human evaluators and ChatGPT. Results demonstrate that training covLLM on the synCovid and abstract pairs datasets performs competitively with ChatGPT and outperforms covLLM trained primarily using the Alpaca dataset.
2306.04926#2
covLLM: Large Language Models for COVID-19 Biomedical Literature
The COVID-19 pandemic led to 1.1 million deaths in the United States, despite the explosion of coronavirus research. These new findings are slow to translate to clinical interventions, leading to poorer patient outcomes and unnecessary deaths. One reason is that clinicians, overwhelmed by patients, struggle to keep pace with the rate of new coronavirus literature. A potential solution is developing a tool for evaluating coronavirus literature using large language models (LLMs) -- neural networks that are deployed for natural language processing. LLMs can be used to summarize and extract user-specified information. The greater availability and advancement of LLMs and pre-processed coronavirus literature databases provide the opportunity to assist clinicians in evaluating coronavirus literature through a coronavirus literature specific LLM (covLLM), a tool that directly takes an inputted research article and a user query to return an answer. Using the COVID-19 Open Research Dataset (CORD-19), we produced two datasets: (1) synCovid, which uses a combination of handwritten prompts and synthetic prompts generated using OpenAI, and (2) real abstracts, which contains abstract and title pairs. covLLM was trained with LLaMA 7B as a baseline model to produce three models trained on (1) the Alpaca and synCovid datasets, (2) the synCovid dataset, and (3) the synCovid and real abstract datasets. These models were evaluated by two human evaluators and ChatGPT. Results demonstrate that training covLLM on the synCovid and abstract pairs datasets performs competitively with ChatGPT and outperforms covLLM trained primarily using the Alpaca dataset.
http://arxiv.org/pdf/2306.04926
Yousuf A. Khan, Clarisse Hokia, Jennifer Xu, Ben Ehlert
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230608
20230608
[]
2306.05087
2
that PandaLM-7B achieves 93.75% of GPT-3.5’s evaluation ability and 88.28% of GPT-4’s in terms of F1-score on our test dataset. PandaLM enables the evaluation of LLM to be fairer but with less cost, evidenced by significant improvements achieved by models tuned through PandaLM compared to their counterparts trained with default Alpaca’s hyper- parameters. In addition, PandaLM does not depend on API-based evaluations, thus avoiding potential data leakage. All resources of PandaLM are released at https://github.com/WeOpenML/PandaLM.
2306.05087#2
PandaLM: An Automatic Evaluation Benchmark for LLM Instruction Tuning Optimization
Instruction tuning large language models (LLMs) remains a challenging task, owing to the complexity of hyperparameter selection and the difficulty involved in evaluating the tuned models. To determine the optimal hyperparameters, an automatic, robust, and reliable evaluation benchmark is essential. However, establishing such a benchmark is not a trivial task due to the challenges associated with evaluation accuracy and privacy protection. In response to these challenges, we introduce a judge large language model, named PandaLM, which is trained to distinguish the superior model given several LLMs. PandaLM's focus extends beyond just the objective correctness of responses, which is the main focus of traditional evaluation datasets. It addresses vital subjective factors such as relative conciseness, clarity, adherence to instructions, comprehensiveness, and formality. To ensure the reliability of PandaLM, we collect a diverse human-annotated test dataset, where all contexts are generated by humans and labels are aligned with human preferences. Our results indicate that PandaLM-7B achieves 93.75% of GPT-3.5's evaluation ability and 88.28% of GPT-4's in terms of F1-score on our test dataset. PandaLM enables the evaluation of LLM to be fairer but with less cost, evidenced by significant improvements achieved by models tuned through PandaLM compared to their counterparts trained with default Alpaca's hyperparameters. In addition, PandaLM does not depend on API-based evaluations, thus avoiding potential data leakage. All resources of PandaLM are released at https://github.com/WeOpenML/PandaLM.
http://arxiv.org/pdf/2306.05087
Yidong Wang, Zhuohao Yu, Zhengran Zeng, Linyi Yang, Cunxiang Wang, Hao Chen, Chaoya Jiang, Rui Xie, Jindong Wang, Xing Xie, Wei Ye, Shikun Zhang, Yue Zhang
cs.CL, cs.AI
null
null
cs.CL
20230608
20230608
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "1803.05457" }, { "id": "2305.10403" }, { "id": "1807.05118" }, { "id": "2211.05100" }, { "id": "2302.10198" }, { "id": "2205.01068" }, { "id": "2003.05689" }, { "id": "1806.03822" }, { "id": "1711.05101" }, { "id": "2304.03208" }, { "id": "2304.01373" }, { "id": "2303.14742" }, { "id": "2303.04673" }, { "id": "2212.10560" }, { "id": "2211.08073" }, { "id": "2210.02414" }, { "id": "2304.03277" }, { "id": "2002.06305" }, { "id": "2305.13412" }, { "id": "2304.01196" } ]
2306.05152
2
# I. INTRODUCTION Software testing, an integral part of the development cycle, enables quality assurance and bug detection prior to deploy- ment, for example via continuous integration practices [1]. However, automated software testing can be challenging, and necessitates a high level of technical acumen. There is significant expertise required to appropriately test software, as evidenced by the existence of test engineers/architects. Meanwhile, as Arcuri [2] notes, existing automated software test generation tools may also require significant expertise in the tool, in addition to being difficult to apply in indus- try.Furthermore, software testing and the writing of software tests can be repetitive, as Hass et al. [3] note. A more positive attribute of test cases is that their syntax is often significantly simpler than production software [4].
2306.05152#2
Towards Autonomous Testing Agents via Conversational Large Language Models
Software testing is an important part of the development cycle, yet it requires specialized expertise and substantial developer effort to adequately test software. Recent discoveries of the capabilities of large language models (LLMs) suggest that they can be used as automated testing assistants, and thus provide helpful information and even drive the testing process. To highlight the potential of this technology, we present a taxonomy of LLM-based testing agents based on their level of autonomy, and describe how a greater level of autonomy can benefit developers in practice. An example use of LLMs as a testing assistant is provided to demonstrate how a conversational framework for testing can help developers. This also highlights how the often criticized hallucination of LLMs can be beneficial for testing. We identify other tangible benefits that LLM-driven testing agents can bestow, and also discuss potential limitations.
http://arxiv.org/pdf/2306.05152
Robert Feldt, Sungmin Kang, Juyeon Yoon, Shin Yoo
cs.SE
null
null
cs.SE
20230608
20230905
[ { "id": "2305.10601" }, { "id": "2303.17580" }, { "id": "2305.16291" }, { "id": "2201.09305" }, { "id": "2210.03629" }, { "id": "2211.10435" }, { "id": "2303.12712" }, { "id": "2302.03287" }, { "id": "2209.11515" } ]
2306.05171
2
unstructured environments, directly specifying a complete behavioral strategy for robots is impractical due to the excessive complexity of the required strategies. The latest developments in Large Language Models (LLMs) provide a potential direction to improve the generality of robot task generation. LLMs are neural language models with a large number of parameters and trained with a large amount of data. These LLMs have shown strong universality in many natural language processing (NLP) tasks. Since the introduction of the GPT-3 model in 2020, LLMs have become an emerging research field in natural language processing and have also attracted the attention of robotics researchers. Our goal is to combine the task semantic understanding capability of language models with the cognitive framework of humans to provide professional knowledge for language models, and even to train professional models, to improve their performance in professional task planning and apply it to robot task planning problems. Index Terms — GPT-3, GPT-4, LLM, Prompt Engineering, Task and Motion Planning # I. INTRODUCTION
2306.05171#2
Robot Task Planning Based on Large Language Model Representing Knowledge with Directed Graph Structures
Traditional robot task planning methods face challenges when dealing with highly unstructured environments and complex tasks. We propose a task planning method that combines human expertise with an LLM and have designed an LLM prompt template, Think_Net_Prompt, with stronger expressive power to represent structured professional knowledge. We further propose a method to progressively decompose tasks and generate a task tree to reduce the planning volume for each task, and we have designed a strategy to decouple robot task planning. By dividing different planning entities and separating the task from the actual machine binding process, the task planning process becomes more flexible. Research results show that our method performs well in handling specified code formats, understanding the relationship between tasks and subtasks, and extracting parameters from text descriptions. However, there are also problems such as limited complexity of task logic handling, ambiguity in the quantity of parts and the precise location of assembly. Improving the precision of task description and cognitive structure can bring certain improvements. https://github.com/NOMIzy/Think_Net_Prompt
http://arxiv.org/pdf/2306.05171
Yue Zhen, Sheng Bi, Lu Xing-tong, Pan Wei-qin, Shi Hai-peng, Chen Zi-rui, Fang Yi-shu
cs.RO, cs.AI
null
null
cs.RO
20230608
20230608
[ { "id": "2302.12927" }, { "id": "2212.06817" }, { "id": "2006.05398" }, { "id": "2209.05451" }, { "id": "2209.11302" }, { "id": "2210.12250" }, { "id": "2204.01691" }, { "id": "2201.07207" }, { "id": "2303.12153" } ]
2306.05301
2
compact language models, resulting in two models, namely ToolAlpaca-7B and ToolAlpaca-13B, respectively. Finally, we evaluate the ability of these models to utilize previously unseen tools without specific training. Experimental results demonstrate that ToolAlpaca achieves effective generalized tool-use capabilities comparable to those of extremely large language models like GPT-3.5, demonstrating that learning generalized tool-use ability is feasible for compact language models. 1
2306.05301#2
ToolAlpaca: Generalized Tool Learning for Language Models with 3000 Simulated Cases
Enabling large language models to utilize real-world tools effectively is crucial for achieving embodied intelligence. Existing approaches to tool learning have either primarily relied on extremely large language models, such as GPT-4, to attain generalized tool-use abilities in a zero-shot manner, or utilized supervised learning to train limited scopes of tools on compact models. However, it remains uncertain whether smaller language models can achieve generalized tool-use abilities without tool-specific training. To address this question, this paper introduces ToolAlpaca, a novel framework designed to automatically generate a diverse tool-use corpus and learn generalized tool-use abilities on compact language models with minimal human intervention. Specifically, ToolAlpaca first automatically creates a highly diversified tool-use corpus by building a multi-agent simulation environment. The corpus contains 3938 tool-use instances from more than 400 real-world tool APIs spanning 50 distinct categories. Subsequently, the constructed corpus is employed to fine-tune compact language models, resulting in two models, namely ToolAlpaca-7B and ToolAlpaca-13B, respectively. Finally, we evaluate the ability of these models to utilize previously unseen tools without specific training. Experimental results demonstrate that ToolAlpaca achieves effective generalized tool-use capabilities comparable to those of extremely large language models like GPT-3.5, demonstrating that learning generalized tool-use ability is feasible for compact language models.
http://arxiv.org/pdf/2306.05301
Qiaoyu Tang, Ziliang Deng, Hongyu Lin, Xianpei Han, Qiao Liang, Boxi Cao, Le Sun
cs.CL
null
null
cs.CL
20230608
20230907
[ { "id": "2305.16504" }, { "id": "2305.13691" }, { "id": "2304.08244" }, { "id": "2303.08774" }, { "id": "2211.08264" }, { "id": "2304.08354" }, { "id": "2305.18752" }, { "id": "2212.14024" }, { "id": "2211.10435" }, { "id": "2210.03629" }, { "id": "2212.09689" }, { "id": "2306.06624" }, { "id": "2212.10560" }, { "id": "2211.12588" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2307.16789" }, { "id": "2304.09842" }, { "id": "2305.11206" }, { "id": "2302.07842" } ]
2306.05424
2
1 The surge of deep learning applications for video understanding has lead to major advancements in video-related tasks. However, the current video understanding models are still unable to hold an open-ended conversation about the video content in a coherent manner. A video-based dialogue model can revolutionize video search, surveillance operations and help summarize key events and abnormal event detection. Above all, it can provide a unified human-understandable interface to video-related tasks such as action recognition, localization, detection, segmentation, retrieval, and tracking. Further, such a capability is of great interest as it will demonstrate the model’s ability to encode temporal and spatial cues, contextual relationships and long-term dependencies. Recent advancements in multimodal understanding are largely based on the combination of pretrained image models with Large Language Models (LLMs) but generally do not consider video inputs [1–5]. It is therefore interesting to leverage the vast capabilities of LLMs for video understanding tasks in a way that would not only maintain the temporal and spatial characteristics but also be adept at generating human-like conversations about videos. In this paper, we introduce Video-ChatGPT, a novel multimodal model that merges the representational abilities of a pretrained visual encoder and the generative powers of an LLM, capable of understanding and conversing about videos.
2306.05424#2
Video-ChatGPT: Towards Detailed Video Understanding via Large Vision and Language Models
Conversation agents fueled by Large Language Models (LLMs) are providing a new way to interact with visual data. While there have been initial attempts for image-based conversation models, this work addresses the underexplored field of video-based conversation by introducing Video-ChatGPT. It is a multimodal model that merges a video-adapted visual encoder with a LLM. The model is capable of understanding and generating human-like conversations about videos. We introduce a new dataset of 100,000 video-instruction pairs used to train Video-ChatGPT acquired via manual and semi-automated pipeline that is easily scalable and robust to label noise. We also develop a quantiative evaluation framework for video-based dialogue models to objectively analyse the strengths and weaknesses of proposed models. Our code, models, instruction-sets and demo are released at https://github.com/mbzuai-oryx/Video-ChatGPT.
http://arxiv.org/pdf/2306.05424
Muhammad Maaz, Hanoona Rasheed, Salman Khan, Fahad Shahbaz Khan
cs.CV
null
null
cs.CV
20230608
20230608
[ { "id": "2103.07461" }, { "id": "2302.13971" }, { "id": "2109.08472" }, { "id": "2303.05657" }, { "id": "2212.00280" }, { "id": "2305.06355" }, { "id": "2206.08155" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2005.14165" }, { "id": "2305.16355" }, { "id": "2212.03191" }, { "id": "2205.01068" } ]
2306.05499
2
# 1 Introduction Large Language Models (LLMs) like GPT-4 [39], LLaMA [37], and PaLM2 [18], have dramatically transformed a wide array of applications with their exceptional ability to generate human-like texts. Their integration spans various applications, from digital assistants to AI-powered journalism. However, this expanded usage is accompanied by heightened security vulnerabilities, manifested by a broad spectrum of adversarial tactics such as jailbreak [15, 41, 60] and backdoor [7, 36, 68], and complex data poisoning [32, 38, 67]. Among these security threats, prompt injection where harm- ful prompts are used by malicious users to override the origi- nal instructions of LLMs, is a particular concern. This type of attack, most potent in LLM-integrated applications, has been recently listed as the top LLM-related hazard by OWASP [40]. Existing prompt injection methods [6, 20, 44] manipulate the LLM output for individual users. A recent variant [48] aims to recover previously input prompts at the service provider end. Unfortunately, comprehending the prompt patterns that initiate such attacks remains a significant challenge. Early attempts to exploit this vulnerability used heuristic prompts, discovered through the "trial and error" manner, exploiting the initial unawareness of developers. A thorough understand- ing of the mechanisms underlying prompt injection attacks, however, is still elusive.
2306.05499#2
Prompt Injection attack against LLM-integrated Applications
Large Language Models (LLMs), renowned for their superior proficiency in language comprehension and generation, stimulate a vibrant ecosystem of applications around them. However, their extensive assimilation into various services introduces significant security risks. This study deconstructs the complexities and implications of prompt injection attacks on actual LLM-integrated applications. Initially, we conduct an exploratory analysis on ten commercial applications, highlighting the constraints of current attack strategies in practice. Prompted by these limitations, we subsequently formulate HouYi, a novel black-box prompt injection attack technique, which draws inspiration from traditional web injection attacks. HouYi is compartmentalized into three crucial elements: a seamlessly-incorporated pre-constructed prompt, an injection prompt inducing context partition, and a malicious payload designed to fulfill the attack objectives. Leveraging HouYi, we unveil previously unknown and severe attack outcomes, such as unrestricted arbitrary LLM usage and uncomplicated application prompt theft. We deploy HouYi on 36 actual LLM-integrated applications and discern 31 applications susceptible to prompt injection. 10 vendors have validated our discoveries, including Notion, which has the potential to impact millions of users. Our investigation illuminates both the possible risks of prompt injection attacks and the possible tactics for mitigation.
http://arxiv.org/pdf/2306.05499
Yi Liu, Gelei Deng, Yuekang Li, Kailong Wang, Tianwei Zhang, Yepang Liu, Haoyu Wang, Yan Zheng, Yang Liu
cs.CR, cs.AI, cs.CL, cs.SE
null
null
cs.CR
20230608
20230608
[]
2306.04926
3
# 1. Introduction # 1.1. Covid-19 COVID-19 and over 1.1 million people in the United States died due to COVID-19 complications1. COVID-19 is a highly infectious viral disease caused by SARS-CoV-2. It can cause a wide range of symptoms, most commonly fever, chills, and sore throat. Depending on the severity of the symptoms, several patients require immediate medical attention for severe difficulty in breathing, confusion, chest pain, or other symptoms of severe illness. Additionally, certain populations with pre-existing health conditions, those over age 60, and unvaccinated individuals are at increased risk for severe illness, hospitalization, and death, though anyone can become sick with COVID-19. People infected with COVID-19 are also at risk of long COVID, which occurs when they experience prolonged fatigue, respiratory, and neurological symptoms2.
2306.04926#3
covLLM: Large Language Models for COVID-19 Biomedical Literature
The COVID-19 pandemic led to 1.1 million deaths in the United States, despite the explosion of coronavirus research. These new findings are slow to translate to clinical interventions, leading to poorer patient outcomes and unnecessary deaths. One reason is that clinicians, overwhelmed by patients, struggle to keep pace with the rate of new coronavirus literature. A potential solution is developing a tool for evaluating coronavirus literature using large language models (LLMs) -- neural networks that are deployed for natural language processing. LLMs can be used to summarize and extract user-specified information. The greater availability and advancement of LLMs and pre-processed coronavirus literature databases provide the opportunity to assist clinicians in evaluating coronavirus literature through a coronavirus literature specific LLM (covLLM), a tool that directly takes an inputted research article and a user query to return an answer. Using the COVID-19 Open Research Dataset (CORD-19), we produced two datasets: (1) synCovid, which uses a combination of handwritten prompts and synthetic prompts generated using OpenAI, and (2) real abstracts, which contains abstract and title pairs. covLLM was trained with LLaMA 7B as a baseline model to produce three models trained on (1) the Alpaca and synCovid datasets, (2) the synCovid dataset, and (3) the synCovid and real abstract datasets. These models were evaluated by two human evaluators and ChatGPT. Results demonstrate that training covLLM on the synCovid and abstract pairs datasets performs competitively with ChatGPT and outperforms covLLM trained primarily using the Alpaca dataset.
http://arxiv.org/pdf/2306.04926
Yousuf A. Khan, Clarisse Hokia, Jennifer Xu, Ben Ehlert
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230608
20230608
[]
2306.05087
3
# Introduction Large language models (LLMs) have attracted increasing attention in the field of artificial intelli- gence [1, 2, 3, 4, 5, 6], with various applications from question answering [7, 8], machine transla- tion [9, 10] to content creation [11, 12]. The Alpaca project [13] has been a pioneering effort in instruction tuning of LLaMA [14], setting a precedent for instruction tuning LLMs, followed by Vicunna [15]. Subsequent research [16, 17, 18] have typically adopted Alpaca’s hyperparameters as a standard for training their LLMs. Given the necessity of instruction tuning for these pre-trained mod- els to effectively understand and follow natural language instructions [19, 13, 20], optimizing their tuning hyperparameters is crucial for peak performance. Critical factors such as optimizer selection, learning rate, number of training epochs, and quality and size of training data significantly influence the model’s performance [21, 22]. However, a research gap remains in the area of hyperparameter optimization specifically designed for instruction tuning LLMs. To address this issue, we aim to
2306.05087#3
PandaLM: An Automatic Evaluation Benchmark for LLM Instruction Tuning Optimization
Instruction tuning large language models (LLMs) remains a challenging task, owing to the complexity of hyperparameter selection and the difficulty involved in evaluating the tuned models. To determine the optimal hyperparameters, an automatic, robust, and reliable evaluation benchmark is essential. However, establishing such a benchmark is not a trivial task due to the challenges associated with evaluation accuracy and privacy protection. In response to these challenges, we introduce a judge large language model, named PandaLM, which is trained to distinguish the superior model given several LLMs. PandaLM's focus extends beyond just the objective correctness of responses, which is the main focus of traditional evaluation datasets. It addresses vital subjective factors such as relative conciseness, clarity, adherence to instructions, comprehensiveness, and formality. To ensure the reliability of PandaLM, we collect a diverse human-annotated test dataset, where all contexts are generated by humans and labels are aligned with human preferences. Our results indicate that PandaLM-7B achieves 93.75% of GPT-3.5's evaluation ability and 88.28% of GPT-4's in terms of F1-score on our test dataset. PandaLM enables the evaluation of LLM to be fairer but with less cost, evidenced by significant improvements achieved by models tuned through PandaLM compared to their counterparts trained with default Alpaca's hyperparameters. In addition, PandaLM does not depend on API-based evaluations, thus avoiding potential data leakage. All resources of PandaLM are released at https://github.com/WeOpenML/PandaLM.
http://arxiv.org/pdf/2306.05087
Yidong Wang, Zhuohao Yu, Zhengran Zeng, Linyi Yang, Cunxiang Wang, Hao Chen, Chaoya Jiang, Rui Xie, Jindong Wang, Xing Xie, Wei Ye, Shikun Zhang, Yue Zhang
cs.CL, cs.AI
null
null
cs.CL
20230608
20230608
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "1803.05457" }, { "id": "2305.10403" }, { "id": "1807.05118" }, { "id": "2211.05100" }, { "id": "2302.10198" }, { "id": "2205.01068" }, { "id": "2003.05689" }, { "id": "1806.03822" }, { "id": "1711.05101" }, { "id": "2304.03208" }, { "id": "2304.01373" }, { "id": "2303.14742" }, { "id": "2303.04673" }, { "id": "2212.10560" }, { "id": "2211.08073" }, { "id": "2210.02414" }, { "id": "2304.03277" }, { "id": "2002.06305" }, { "id": "2305.13412" }, { "id": "2304.01196" } ]
2306.05152
3
These unique characteristics of test code naturally bring about a distinction between testing experts and domain experts, which existing literature on developer expertise [5] supports by identifying distinct types of expertise: “understanding the vision of the project” and “knowledge about tools”. Under this framework, an ideal setup would be one in which a testing expert and a domain expert collaborate to write tests for a project. The domain expert may lay out the specifications of a project, while the testing expert may convert those specifications into concrete tests, based on the testing expert’s experience. A great strength of this process is that as a result of such a dialogue, initially unexpected, yet nuanced issues with the specification may arise, which provide opportunities to clarify the desired behavior. Indeed, handling such unexpected behavior is one of the virtues of software testing [6], [7].
2306.05152#3
Towards Autonomous Testing Agents via Conversational Large Language Models
Software testing is an important part of the development cycle, yet it requires specialized expertise and substantial developer effort to adequately test software. Recent discoveries of the capabilities of large language models (LLMs) suggest that they can be used as automated testing assistants, and thus provide helpful information and even drive the testing process. To highlight the potential of this technology, we present a taxonomy of LLM-based testing agents based on their level of autonomy, and describe how a greater level of autonomy can benefit developers in practice. An example use of LLMs as a testing assistant is provided to demonstrate how a conversational framework for testing can help developers. This also highlights how the often criticized hallucination of LLMs can be beneficial for testing. We identify other tangible benefits that LLM-driven testing agents can bestow, and also discuss potential limitations.
http://arxiv.org/pdf/2306.05152
Robert Feldt, Sungmin Kang, Juyeon Yoon, Shin Yoo
cs.SE
null
null
cs.SE
20230608
20230905
[ { "id": "2305.10601" }, { "id": "2303.17580" }, { "id": "2305.16291" }, { "id": "2201.09305" }, { "id": "2210.03629" }, { "id": "2211.10435" }, { "id": "2303.12712" }, { "id": "2302.03287" }, { "id": "2209.11515" } ]
2306.05171
3
Index Terms — GPT-3, GPT-4, LLM, Prompt Engineering, Task and Motion Planning # I. INTRODUCTION OBOTS play an increasingly important role in society, and rapidly expanding. their Traditionally, these applications were mainly concentrated in structured environments such as factories, where robot behavior is relatively fixed and often directly designated by humans. In highly unstructured human environments, such as homes, restaurants, or hospitals, robots are usually given specific goals, such as classifying and organizing specified objects, but the actions needed to achieve these goals change according to different environmental conditions. For example, to tidy up stacked bowls and chopsticks and protect them, the robot needs to pick them up in a reasonable order. In these
2306.05171#3
Robot Task Planning Based on Large Language Model Representing Knowledge with Directed Graph Structures
Traditional robot task planning methods face challenges when dealing with highly unstructured environments and complex tasks. We propose a task planning method that combines human expertise with an LLM and have designed an LLM prompt template, Think_Net_Prompt, with stronger expressive power to represent structured professional knowledge. We further propose a method to progressively decompose tasks and generate a task tree to reduce the planning volume for each task, and we have designed a strategy to decouple robot task planning. By dividing different planning entities and separating the task from the actual machine binding process, the task planning process becomes more flexible. Research results show that our method performs well in handling specified code formats, understanding the relationship between tasks and subtasks, and extracting parameters from text descriptions. However, there are also problems such as limited complexity of task logic handling, ambiguity in the quantity of parts and the precise location of assembly. Improving the precision of task description and cognitive structure can bring certain improvements. https://github.com/NOMIzy/Think_Net_Prompt
http://arxiv.org/pdf/2306.05171
Yue Zhen, Sheng Bi, Lu Xing-tong, Pan Wei-qin, Shi Hai-peng, Chen Zi-rui, Fang Yi-shu
cs.RO, cs.AI
null
null
cs.RO
20230608
20230608
[ { "id": "2302.12927" }, { "id": "2212.06817" }, { "id": "2006.05398" }, { "id": "2209.05451" }, { "id": "2209.11302" }, { "id": "2210.12250" }, { "id": "2204.01691" }, { "id": "2201.07207" }, { "id": "2303.12153" } ]
2306.05212
3
# Introduction Large language models (LLMs) have attracted in- creasing attention from both research community and industry (Brown et al., 2020; OpenAI, 2023; Ouyang et al., 2022; Touvron et al., 2023; Chowdh- ery et al., 2022; Zhao et al., 2023; Zeng et al., 2022). With tremendous world knowledge stored in pa- rameters (Petroni et al., 2019; Roberts et al., 2020; Jiang et al., 2020) and the Reinforcement Learning
2306.05212#3
RETA-LLM: A Retrieval-Augmented Large Language Model Toolkit
Although Large Language Models (LLMs) have demonstrated extraordinary capabilities in many domains, they still have a tendency to hallucinate and generate fictitious responses to user requests. This problem can be alleviated by augmenting LLMs with information retrieval (IR) systems (also known as retrieval-augmented LLMs). Applying this strategy, LLMs can generate more factual texts in response to user input according to the relevant content retrieved by IR systems from external corpora as references. In addition, by incorporating external knowledge, retrieval-augmented LLMs can answer in-domain questions that cannot be answered by solely relying on the world knowledge stored in parameters. To support research in this area and facilitate the development of retrieval-augmented LLM systems, we develop RETA-LLM, a {RET}reival-{A}ugmented LLM toolkit. In RETA-LLM, we create a complete pipeline to help researchers and users build their customized in-domain LLM-based systems. Compared with previous retrieval-augmented LLM systems, RETA-LLM provides more plug-and-play modules to support better interaction between IR systems and LLMs, including {request rewriting, document retrieval, passage extraction, answer generation, and fact checking} modules. Our toolkit is publicly available at https://github.com/RUC-GSAI/YuLan-IR/tree/main/RETA-LLM.
http://arxiv.org/pdf/2306.05212
Jiongnan Liu, Jiajie Jin, Zihan Wang, Jiehan Cheng, Zhicheng Dou, Ji-Rong Wen
cs.IR
Technical Report for RETA-LLM
null
cs.IR
20230608
20230608
[ { "id": "2210.02414" }, { "id": "2208.05753" } ]
2306.05301
3
# 1 Introduction Embodied intelligence, the ability to meaningfully interact with the environment, stands as a core attribute of advanced cognitive systems and a crucial advancement in artificial in- telligence. The ability to create and use tools has expanded human beings’ physical capabilities to interact with environ- ments and augmented cognitive functions. Such evolution- ary milestone has not only broadened our range of physical actions, but also brought about transformative changes in our problem-solving abilities and innovative thinking. The pursuit of incorporating tool-use capabilities into artificial Corresponding Authors 1Our code and data are available at https://github.com/ tangqiaoyu/ToolAlpaca.
2306.05301#3
ToolAlpaca: Generalized Tool Learning for Language Models with 3000 Simulated Cases
Enabling large language models to utilize real-world tools effectively is crucial for achieving embodied intelligence. Existing approaches to tool learning have either primarily relied on extremely large language models, such as GPT-4, to attain generalized tool-use abilities in a zero-shot manner, or utilized supervised learning to train limited scopes of tools on compact models. However, it remains uncertain whether smaller language models can achieve generalized tool-use abilities without tool-specific training. To address this question, this paper introduces ToolAlpaca, a novel framework designed to automatically generate a diverse tool-use corpus and learn generalized tool-use abilities on compact language models with minimal human intervention. Specifically, ToolAlpaca first automatically creates a highly diversified tool-use corpus by building a multi-agent simulation environment. The corpus contains 3938 tool-use instances from more than 400 real-world tool APIs spanning 50 distinct categories. Subsequently, the constructed corpus is employed to fine-tune compact language models, resulting in two models, namely ToolAlpaca-7B and ToolAlpaca-13B, respectively. Finally, we evaluate the ability of these models to utilize previously unseen tools without specific training. Experimental results demonstrate that ToolAlpaca achieves effective generalized tool-use capabilities comparable to those of extremely large language models like GPT-3.5, demonstrating that learning generalized tool-use ability is feasible for compact language models.
http://arxiv.org/pdf/2306.05301
Qiaoyu Tang, Ziliang Deng, Hongyu Lin, Xianpei Han, Qiao Liang, Boxi Cao, Le Sun
cs.CL
null
null
cs.CL
20230608
20230907
[ { "id": "2305.16504" }, { "id": "2305.13691" }, { "id": "2304.08244" }, { "id": "2303.08774" }, { "id": "2211.08264" }, { "id": "2304.08354" }, { "id": "2305.18752" }, { "id": "2212.14024" }, { "id": "2211.10435" }, { "id": "2210.03629" }, { "id": "2212.09689" }, { "id": "2306.06624" }, { "id": "2212.10560" }, { "id": "2211.12588" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2307.16789" }, { "id": "2304.09842" }, { "id": "2305.11206" }, { "id": "2302.07842" } ]
2306.05424
3
Video-ChatGPT leverages an adapted LLM [1] that integrates the visual encoder of CLIP [6] with Vicuna [7] as a language decoder, fine-tuned on generated instructional image-text pairs. Our approach further adapts the desgin for spatiotemporal video modeling and fine-tunes the model on video-instruction data to capture temporal dynamics and frame-to-frame consistency relationships available in video data. In contrast to other concurrent works for video-based conversation [8, 9], Video-ChatGPT excels at temporal understanding, spatial consistency and contextual comprehension as demonstrated by our extensive evaluations. A fundamental contribution of this work is the creation of a dataset of 100,000 video-instruction pairs using a combination of human-assisted and semi-automatic annotation methods. Each pair consists of Equally contributing first authors Preprint. Under review. a video and its associated instruction in the form of a question-answer. This provides Video-ChatGPT with a large and diverse dataset to learn from, increasing its video-specific understanding, attention to temporal relationships and conversation capabilities. Moreover, we introduce the first quantitative video conversation evaluation framework for bench- marking, allowing for a more accurate evaluation of the performance of video conversation models. This framework evaluates models on a variety of capabilities, such as correctness of information, detail orientation, contextual understanding, temporal understanding, and consistency. The contributions of this work are as follows,
2306.05424#3
Video-ChatGPT: Towards Detailed Video Understanding via Large Vision and Language Models
Conversation agents fueled by Large Language Models (LLMs) are providing a new way to interact with visual data. While there have been initial attempts for image-based conversation models, this work addresses the underexplored field of video-based conversation by introducing Video-ChatGPT. It is a multimodal model that merges a video-adapted visual encoder with a LLM. The model is capable of understanding and generating human-like conversations about videos. We introduce a new dataset of 100,000 video-instruction pairs used to train Video-ChatGPT acquired via manual and semi-automated pipeline that is easily scalable and robust to label noise. We also develop a quantiative evaluation framework for video-based dialogue models to objectively analyse the strengths and weaknesses of proposed models. Our code, models, instruction-sets and demo are released at https://github.com/mbzuai-oryx/Video-ChatGPT.
http://arxiv.org/pdf/2306.05424
Muhammad Maaz, Hanoona Rasheed, Salman Khan, Fahad Shahbaz Khan
cs.CV
null
null
cs.CV
20230608
20230608
[ { "id": "2103.07461" }, { "id": "2302.13971" }, { "id": "2109.08472" }, { "id": "2303.05657" }, { "id": "2212.00280" }, { "id": "2305.06355" }, { "id": "2206.08155" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2005.14165" }, { "id": "2305.16355" }, { "id": "2212.03191" }, { "id": "2205.01068" } ]
2306.05499
3
To decipher these attack mechanisms, we initiate a pilot study on 10 real-world black-box LLM-integrated applica- tions, all of which are currently prevalent commercial ser- vices in the market. We implement existing prompt injection techniques [6, 20, 44] on them, and only achieve partially successful exploits on two out of the ten targets. The rea- sons for the unsuccessful attempts are three-pronged. Firstly, the interpretation of prompt usage diverges among applica- tions. While some applications perceive prompts as parts of the queries, others identify them as analytical data payloads, rendering the applications resistant to traditional prompt in- jection strategies. Secondly, numerous applications enforce specific format prerequisites on both inputs and outputs, in- advertently providing a defensive mechanism against prompt injection, similar to syntax-based sanitization. Finally, appli- cations often adopt multi-step processes with time constraints on responses, rendering potentially successful prompt injec- tions to fail in displaying results due to extended generation duration. Based on our findings, we find that a successful prompt attack hinges on tricking the LLM to interpret the malicious payload as a question, rather than a data payload. This is inspired by traditional injection attacks such as SQL injec- tion [10, 14, 25] and XSS attacks [23, 27, 63], where specially 1
2306.05499#3
Prompt Injection attack against LLM-integrated Applications
Large Language Models (LLMs), renowned for their superior proficiency in language comprehension and generation, stimulate a vibrant ecosystem of applications around them. However, their extensive assimilation into various services introduces significant security risks. This study deconstructs the complexities and implications of prompt injection attacks on actual LLM-integrated applications. Initially, we conduct an exploratory analysis on ten commercial applications, highlighting the constraints of current attack strategies in practice. Prompted by these limitations, we subsequently formulate HouYi, a novel black-box prompt injection attack technique, which draws inspiration from traditional web injection attacks. HouYi is compartmentalized into three crucial elements: a seamlessly-incorporated pre-constructed prompt, an injection prompt inducing context partition, and a malicious payload designed to fulfill the attack objectives. Leveraging HouYi, we unveil previously unknown and severe attack outcomes, such as unrestricted arbitrary LLM usage and uncomplicated application prompt theft. We deploy HouYi on 36 actual LLM-integrated applications and discern 31 applications susceptible to prompt injection. 10 vendors have validated our discoveries, including Notion, which has the potential to impact millions of users. Our investigation illuminates both the possible risks of prompt injection attacks and the possible tactics for mitigation.
http://arxiv.org/pdf/2306.05499
Yi Liu, Gelei Deng, Yuekang Li, Kailong Wang, Tianwei Zhang, Yepang Liu, Haoyu Wang, Yan Zheng, Yang Liu
cs.CR, cs.AI, cs.CL, cs.SE
null
null
cs.CR
20230608
20230608
[]
2306.04926
4
In addition to the COVID-19 vaccines3,4, these include over-the-counter medication, prescription medication, and in-patient treatments. Healthcare providers may prescribe Paxlovid or Lagevrio for high risk individuals infected with COVID-19. Evusheld monoclonal antibodies are prescribed to immunocompromised individuals exposed to COVID-19. Patients with severe illness due to COVID-19 who require hospitalization may be treated with the antiviral medication remdesivir and medications to counteract overactive immune systems or to treat complications. Through clinical trials and other research, these treatments were eventually developed and made available to both adult and pediatric patients who qualify. The NIH’s Accelerating COVID-19 Therapeutic Interventions and Vaccines (ACTIV) initiative has promoted additional research on treatments such as immune modulators, monoclonal and polyclonal antibodies, and blood thinners and on the uses of medications used to treat other conditions5. However, this research has been slow to translate to clinical treatments.
2306.04926#4
covLLM: Large Language Models for COVID-19 Biomedical Literature
The COVID-19 pandemic led to 1.1 million deaths in the United States, despite the explosion of coronavirus research. These new findings are slow to translate to clinical interventions, leading to poorer patient outcomes and unnecessary deaths. One reason is that clinicians, overwhelmed by patients, struggle to keep pace with the rate of new coronavirus literature. A potential solution is developing a tool for evaluating coronavirus literature using large language models (LLMs) -- neural networks that are deployed for natural language processing. LLMs can be used to summarize and extract user-specified information. The greater availability and advancement of LLMs and pre-processed coronavirus literature databases provide the opportunity to assist clinicians in evaluating coronavirus literature through a coronavirus literature specific LLM (covLLM), a tool that directly takes an inputted research article and a user query to return an answer. Using the COVID-19 Open Research Dataset (CORD-19), we produced two datasets: (1) synCovid, which uses a combination of handwritten prompts and synthetic prompts generated using OpenAI, and (2) real abstracts, which contains abstract and title pairs. covLLM was trained with LLaMA 7B as a baseline model to produce three models trained on (1) the Alpaca and synCovid datasets, (2) the synCovid dataset, and (3) the synCovid and real abstract datasets. These models were evaluated by two human evaluators and ChatGPT. Results demonstrate that training covLLM on the synCovid and abstract pairs datasets performs competitively with ChatGPT and outperforms covLLM trained primarily using the Alpaca dataset.
http://arxiv.org/pdf/2306.04926
Yousuf A. Khan, Clarisse Hokia, Jennifer Xu, Ben Ehlert
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230608
20230608
[]
2306.05087
4
∗Equal contribution. Yidong did this work during his internship at Westlake University. †Corresponding to [email protected]; [email protected]; [email protected]. Preprint. Under review. construct an automated, reliable, and robust evaluation method, which can be integrated into any open-sourced LLMs and used as the judging basis for hyperparameter optimization. The development of such an evaluation method presents its own challenges, including ensuring evaluation reliability and privacy protection. Current methods often involve either crowd-sourcing work or API usage, which could be costly, and time-consuming. Besides, these methods face challenges in terms of consistency and reproducibility. This is primarily due to the lack of transparency regarding language model change logs and the inherent subjectivity of human annotations. Note that utilizing API-based evaluations carries the risk of potentially high costs associated with addressing data leaks. Although open-sourced LLMs can be alternative evaluators, they are not specifically designed for assessment, thus making it difficult to deploy them directly as evaluators.
2306.05087#4
PandaLM: An Automatic Evaluation Benchmark for LLM Instruction Tuning Optimization
Instruction tuning large language models (LLMs) remains a challenging task, owing to the complexity of hyperparameter selection and the difficulty involved in evaluating the tuned models. To determine the optimal hyperparameters, an automatic, robust, and reliable evaluation benchmark is essential. However, establishing such a benchmark is not a trivial task due to the challenges associated with evaluation accuracy and privacy protection. In response to these challenges, we introduce a judge large language model, named PandaLM, which is trained to distinguish the superior model given several LLMs. PandaLM's focus extends beyond just the objective correctness of responses, which is the main focus of traditional evaluation datasets. It addresses vital subjective factors such as relative conciseness, clarity, adherence to instructions, comprehensiveness, and formality. To ensure the reliability of PandaLM, we collect a diverse human-annotated test dataset, where all contexts are generated by humans and labels are aligned with human preferences. Our results indicate that PandaLM-7B achieves 93.75% of GPT-3.5's evaluation ability and 88.28% of GPT-4's in terms of F1-score on our test dataset. PandaLM enables the evaluation of LLM to be fairer but with less cost, evidenced by significant improvements achieved by models tuned through PandaLM compared to their counterparts trained with default Alpaca's hyperparameters. In addition, PandaLM does not depend on API-based evaluations, thus avoiding potential data leakage. All resources of PandaLM are released at https://github.com/WeOpenML/PandaLM.
http://arxiv.org/pdf/2306.05087
Yidong Wang, Zhuohao Yu, Zhengran Zeng, Linyi Yang, Cunxiang Wang, Hao Chen, Chaoya Jiang, Rui Xie, Jindong Wang, Xing Xie, Wei Ye, Shikun Zhang, Yue Zhang
cs.CL, cs.AI
null
null
cs.CL
20230608
20230608
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "1803.05457" }, { "id": "2305.10403" }, { "id": "1807.05118" }, { "id": "2211.05100" }, { "id": "2302.10198" }, { "id": "2205.01068" }, { "id": "2003.05689" }, { "id": "1806.03822" }, { "id": "1711.05101" }, { "id": "2304.03208" }, { "id": "2304.01373" }, { "id": "2303.14742" }, { "id": "2303.04673" }, { "id": "2212.10560" }, { "id": "2211.08073" }, { "id": "2210.02414" }, { "id": "2304.03277" }, { "id": "2002.06305" }, { "id": "2305.13412" }, { "id": "2304.01196" } ]
2306.05152
4
In this paper, we argue that Large Language Models (LLMs), which have been trained with a large quantity of code including software test data [8] may eventually be capable of providing such testing knowledge, and that humans may act as domain experts and specify or clarify to the LLM what the intended behavior is. Specifically, we argue that LLMs are sufficiently well-trained with software tests to ‘fill in’ lower- level details of the intention of the developer. They also exhibit some ‘knowledge’ about testing methodologies,and can adapt them to new situations [9]. Going further, LLMs appear suffi- ciently capable in dialogue to converse about the results with a prospective software tester so that they could engage in a ‘Socratic’ manner: that is, they could provide counterexamples to help the developer to think their specification through, and thus uncover unexpected issues with the desired behavior, in this process clarifying what would be ideal. Equipped with appropriate ‘middleware’ which provides tools that the LLM could interact with, our eventual vision is that we can grant the LLM ‘autonomy’, in which it
2306.05152#4
Towards Autonomous Testing Agents via Conversational Large Language Models
Software testing is an important part of the development cycle, yet it requires specialized expertise and substantial developer effort to adequately test software. Recent discoveries of the capabilities of large language models (LLMs) suggest that they can be used as automated testing assistants, and thus provide helpful information and even drive the testing process. To highlight the potential of this technology, we present a taxonomy of LLM-based testing agents based on their level of autonomy, and describe how a greater level of autonomy can benefit developers in practice. An example use of LLMs as a testing assistant is provided to demonstrate how a conversational framework for testing can help developers. This also highlights how the often criticized hallucination of LLMs can be beneficial for testing. We identify other tangible benefits that LLM-driven testing agents can bestow, and also discuss potential limitations.
http://arxiv.org/pdf/2306.05152
Robert Feldt, Sungmin Kang, Juyeon Yoon, Shin Yoo
cs.SE
null
null
cs.SE
20230608
20230905
[ { "id": "2305.10601" }, { "id": "2303.17580" }, { "id": "2305.16291" }, { "id": "2201.09305" }, { "id": "2210.03629" }, { "id": "2211.10435" }, { "id": "2303.12712" }, { "id": "2302.03287" }, { "id": "2209.11515" } ]
2306.05171
4
Current task planning work based on LLM [21,22,23,24,25] focuses on exploring the possibility and structure of generating specific content, but does not carefully consider the structure of input knowledge and further consider optimizations towards actual engineering. The work most conceptually similar to ours is [24], which considers providing a task behavior tree knowledge base to generate robot behavior trees across domains. They utilize a knowledge base storing a series of robot task behavior trees. By automatically querying the knowledge base and selecting the behavior tree most similar to the required task description as a prompt, they generate behavior trees for the required tasks in new domains. Although the focus of their work is on generating hierarchical, state machine-like structured outputs, from their work, we found the possibility of enhancing the LLM planning ability with that structured knowledge. However, describes robot behavior similar to a state machine, such as a behavior tree, still has a lot of possiblilty for improvement in terms of expressing the universal structure of knowledge for tasks in other domains. For example, a tree-thinking structure F. A. Yue Zhen, South China University Of Technology, (e-mail: ZhenYue1614@ gmail.com/[email protected]). 1
2306.05171#4
Robot Task Planning Based on Large Language Model Representing Knowledge with Directed Graph Structures
Traditional robot task planning methods face challenges when dealing with highly unstructured environments and complex tasks. We propose a task planning method that combines human expertise with an LLM and have designed an LLM prompt template, Think_Net_Prompt, with stronger expressive power to represent structured professional knowledge. We further propose a method to progressively decompose tasks and generate a task tree to reduce the planning volume for each task, and we have designed a strategy to decouple robot task planning. By dividing different planning entities and separating the task from the actual machine binding process, the task planning process becomes more flexible. Research results show that our method performs well in handling specified code formats, understanding the relationship between tasks and subtasks, and extracting parameters from text descriptions. However, there are also problems such as limited complexity of task logic handling, ambiguity in the quantity of parts and the precise location of assembly. Improving the precision of task description and cognitive structure can bring certain improvements. https://github.com/NOMIzy/Think_Net_Prompt
http://arxiv.org/pdf/2306.05171
Yue Zhen, Sheng Bi, Lu Xing-tong, Pan Wei-qin, Shi Hai-peng, Chen Zi-rui, Fang Yi-shu
cs.RO, cs.AI
null
null
cs.RO
20230608
20230608
[ { "id": "2302.12927" }, { "id": "2212.06817" }, { "id": "2006.05398" }, { "id": "2209.05451" }, { "id": "2209.11302" }, { "id": "2210.12250" }, { "id": "2204.01691" }, { "id": "2201.07207" }, { "id": "2303.12153" } ]
2306.05212
4
from Human Feedback (RLHF) techniques (Chris- tiano et al., 2017; Ziegler et al., 2019), LLMs can generate helpful, detailed, and polite texts in re- sponse to user inputs. Many studies have demon- strated LLMs’ extraordinary abilities in various ar- eas, including nature language processing (Moslem et al., 2023), information retrieval (Sun et al., 2023; Wang et al., 2023; Mao et al., 2023), and recom- mendation (Hou et al., 2023; Zhang et al., 2023). However, LLMs still tend to hallucinate and sometimes generate texts opposite to facts (Zhou et al., 2021; Zhao et al., 2023). To tackle these prob- lems, researchers have proposed a new paradigm to strengthen LLMs with information retrieval systems (retrieval-augmented LLMs) (Shi et al., 2023; Jiang et al., 2023; Nakano et al., 2022), which enables LLMs to retrieve relevant contents from an external repository (knowledge corpus) to generate texts based on them. It has been verified that retrieval-augmented LLMs can gen- erate
2306.05212#4
RETA-LLM: A Retrieval-Augmented Large Language Model Toolkit
Although Large Language Models (LLMs) have demonstrated extraordinary capabilities in many domains, they still have a tendency to hallucinate and generate fictitious responses to user requests. This problem can be alleviated by augmenting LLMs with information retrieval (IR) systems (also known as retrieval-augmented LLMs). Applying this strategy, LLMs can generate more factual texts in response to user input according to the relevant content retrieved by IR systems from external corpora as references. In addition, by incorporating external knowledge, retrieval-augmented LLMs can answer in-domain questions that cannot be answered by solely relying on the world knowledge stored in parameters. To support research in this area and facilitate the development of retrieval-augmented LLM systems, we develop RETA-LLM, a {RET}reival-{A}ugmented LLM toolkit. In RETA-LLM, we create a complete pipeline to help researchers and users build their customized in-domain LLM-based systems. Compared with previous retrieval-augmented LLM systems, RETA-LLM provides more plug-and-play modules to support better interaction between IR systems and LLMs, including {request rewriting, document retrieval, passage extraction, answer generation, and fact checking} modules. Our toolkit is publicly available at https://github.com/RUC-GSAI/YuLan-IR/tree/main/RETA-LLM.
http://arxiv.org/pdf/2306.05212
Jiongnan Liu, Jiajie Jin, Zihan Wang, Jiehan Cheng, Zhicheng Dou, Ji-Rong Wen
cs.IR
Technical Report for RETA-LLM
null
cs.IR
20230608
20230608
[ { "id": "2210.02414" }, { "id": "2208.05753" } ]
2306.05301
4
Corresponding Authors 1Our code and data are available at https://github.com/ tangqiaoyu/ToolAlpaca. Recent advancements in enhancing large language mod- els (LLMs) such as GPT-4 (OpenAI 2023) with tool-use abilities have made significant progress in this area. These models have shown their ability to effectively employ ex- ternal tools through integrated plugins, thereby expanding their versatility and enhancing the precision and quality of their outputs. Unfortunately, due to a lack of under- standing of how existing large language models acquire the general tool-use capability, currently compact language models still do not possess such general ability. Conse- quently, substantial research efforts are dedicated to fine- tuning smaller language models to acquire the capacity for tool usage (Komeili, Shuster, and Weston 2022; Parisi, Zhao, and Fiedel 2022; Schick et al. 2023) on a limited range of tools, which lacks the ability to generalize to unseen tools. This discrepancy between the generalized tool-use abilities of larger models and the more constrained capabilities of compact models presents an intriguing question: Can these compact language models learn to generalize their tool-use abilities, thus enabling interaction with a broader spectrum of tools?
2306.05301#4
ToolAlpaca: Generalized Tool Learning for Language Models with 3000 Simulated Cases
Enabling large language models to utilize real-world tools effectively is crucial for achieving embodied intelligence. Existing approaches to tool learning have either primarily relied on extremely large language models, such as GPT-4, to attain generalized tool-use abilities in a zero-shot manner, or utilized supervised learning to train limited scopes of tools on compact models. However, it remains uncertain whether smaller language models can achieve generalized tool-use abilities without tool-specific training. To address this question, this paper introduces ToolAlpaca, a novel framework designed to automatically generate a diverse tool-use corpus and learn generalized tool-use abilities on compact language models with minimal human intervention. Specifically, ToolAlpaca first automatically creates a highly diversified tool-use corpus by building a multi-agent simulation environment. The corpus contains 3938 tool-use instances from more than 400 real-world tool APIs spanning 50 distinct categories. Subsequently, the constructed corpus is employed to fine-tune compact language models, resulting in two models, namely ToolAlpaca-7B and ToolAlpaca-13B, respectively. Finally, we evaluate the ability of these models to utilize previously unseen tools without specific training. Experimental results demonstrate that ToolAlpaca achieves effective generalized tool-use capabilities comparable to those of extremely large language models like GPT-3.5, demonstrating that learning generalized tool-use ability is feasible for compact language models.
http://arxiv.org/pdf/2306.05301
Qiaoyu Tang, Ziliang Deng, Hongyu Lin, Xianpei Han, Qiao Liang, Boxi Cao, Le Sun
cs.CL
null
null
cs.CL
20230608
20230907
[ { "id": "2305.16504" }, { "id": "2305.13691" }, { "id": "2304.08244" }, { "id": "2303.08774" }, { "id": "2211.08264" }, { "id": "2304.08354" }, { "id": "2305.18752" }, { "id": "2212.14024" }, { "id": "2211.10435" }, { "id": "2210.03629" }, { "id": "2212.09689" }, { "id": "2306.06624" }, { "id": "2212.10560" }, { "id": "2211.12588" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2307.16789" }, { "id": "2304.09842" }, { "id": "2305.11206" }, { "id": "2302.07842" } ]
2306.05424
4
The contributions of this work are as follows, • We propose Video-ChatGPT, a video conversation model capable of generating meaningful conversations about videos. It combines the capabilities of LLMs with a pretrained visual encoder adapted for spatiotemporal video representations. We introduce 100,000 high-quality video instruction pairs together with a novel annotation framework that is scalable and generates a diverse range of video-specific instruction sets. • We develop the first quantitative video conversation evaluation framework for benchmarking video conversation models. We demonstrate Video-ChatGPT to perform well compared to concurrent conversational engines for videos such as Video Chat [8]. # 2 Related Work
2306.05424#4
Video-ChatGPT: Towards Detailed Video Understanding via Large Vision and Language Models
Conversation agents fueled by Large Language Models (LLMs) are providing a new way to interact with visual data. While there have been initial attempts for image-based conversation models, this work addresses the underexplored field of video-based conversation by introducing Video-ChatGPT. It is a multimodal model that merges a video-adapted visual encoder with a LLM. The model is capable of understanding and generating human-like conversations about videos. We introduce a new dataset of 100,000 video-instruction pairs used to train Video-ChatGPT acquired via manual and semi-automated pipeline that is easily scalable and robust to label noise. We also develop a quantiative evaluation framework for video-based dialogue models to objectively analyse the strengths and weaknesses of proposed models. Our code, models, instruction-sets and demo are released at https://github.com/mbzuai-oryx/Video-ChatGPT.
http://arxiv.org/pdf/2306.05424
Muhammad Maaz, Hanoona Rasheed, Salman Khan, Fahad Shahbaz Khan
cs.CV
null
null
cs.CV
20230608
20230608
[ { "id": "2103.07461" }, { "id": "2302.13971" }, { "id": "2109.08472" }, { "id": "2303.05657" }, { "id": "2212.00280" }, { "id": "2305.06355" }, { "id": "2206.08155" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2005.14165" }, { "id": "2305.16355" }, { "id": "2212.03191" }, { "id": "2205.01068" } ]
2306.05499
4
1 crafted payloads disturb the routine execution of a program by encapsulating previous commands and misinterpreting malev- olent input as a new command. This understanding underpins the formulation of our distinct payload generation strategy for black-box prompt injection attacks. To optimize the effectiveness, an injected prompt should ac- count for the previous context to instigate a substantial context separation. The payloads we devise consist of three pivotal components: (1) Framework Component, which seamlessly integrates a pre-constructed prompt with the original appli- cation; (2) Separator Component, which triggers a context separation between preset prompts and user inputs; (3) Dis- ruptor Component, a malicious question aimed to achieve the adversary’s objective. We define a set of generative strategies for each of these components to enhance the potency of the prompt injection attack.
2306.05499#4
Prompt Injection attack against LLM-integrated Applications
Large Language Models (LLMs), renowned for their superior proficiency in language comprehension and generation, stimulate a vibrant ecosystem of applications around them. However, their extensive assimilation into various services introduces significant security risks. This study deconstructs the complexities and implications of prompt injection attacks on actual LLM-integrated applications. Initially, we conduct an exploratory analysis on ten commercial applications, highlighting the constraints of current attack strategies in practice. Prompted by these limitations, we subsequently formulate HouYi, a novel black-box prompt injection attack technique, which draws inspiration from traditional web injection attacks. HouYi is compartmentalized into three crucial elements: a seamlessly-incorporated pre-constructed prompt, an injection prompt inducing context partition, and a malicious payload designed to fulfill the attack objectives. Leveraging HouYi, we unveil previously unknown and severe attack outcomes, such as unrestricted arbitrary LLM usage and uncomplicated application prompt theft. We deploy HouYi on 36 actual LLM-integrated applications and discern 31 applications susceptible to prompt injection. 10 vendors have validated our discoveries, including Notion, which has the potential to impact millions of users. Our investigation illuminates both the possible risks of prompt injection attacks and the possible tactics for mitigation.
http://arxiv.org/pdf/2306.05499
Yi Liu, Gelei Deng, Yuekang Li, Kailong Wang, Tianwei Zhang, Yepang Liu, Haoyu Wang, Yan Zheng, Yang Liu
cs.CR, cs.AI, cs.CL, cs.SE
null
null
cs.CR
20230608
20230608
[]
2306.04926
5
acceleration in research and treatments. Between January 1, 2020 and June 30, 2020, researchers published over 23,500 coronavirus articles, letters, reviews, notes, and editorials to major databases6. By August 1, 2021, the number of publications increased to 210,183, with 720,801 unique authors from all scientific subfields7. This vast involvement of the scientific research community was unlike trends from other infectious diseases, including HIV/AIDS, Zika, and tuberculosis. The United States, China, and Italy were the countries that published the most papers by volume, while BMJ, Journal of Medical Virology, and The Lancet were the journals that published the most papers by volume. Of the articles published on Scopus and Web of Science, 48% and 37%, respectively, were research papers. Findings have involved topics such as data reporting quality, mental health impacts of the pandemic, conflicts of interest, quality of research publications and studies, impacts of the pandemic on academia, and the uses of technology to learn more about COVID-196. Additionally, at least one author from each of the 21 major scientific fields and 174 scientific subfields published on COVID-197.
2306.04926#5
covLLM: Large Language Models for COVID-19 Biomedical Literature
The COVID-19 pandemic led to 1.1 million deaths in the United States, despite the explosion of coronavirus research. These new findings are slow to translate to clinical interventions, leading to poorer patient outcomes and unnecessary deaths. One reason is that clinicians, overwhelmed by patients, struggle to keep pace with the rate of new coronavirus literature. A potential solution is developing a tool for evaluating coronavirus literature using large language models (LLMs) -- neural networks that are deployed for natural language processing. LLMs can be used to summarize and extract user-specified information. The greater availability and advancement of LLMs and pre-processed coronavirus literature databases provide the opportunity to assist clinicians in evaluating coronavirus literature through a coronavirus literature specific LLM (covLLM), a tool that directly takes an inputted research article and a user query to return an answer. Using the COVID-19 Open Research Dataset (CORD-19), we produced two datasets: (1) synCovid, which uses a combination of handwritten prompts and synthetic prompts generated using OpenAI, and (2) real abstracts, which contains abstract and title pairs. covLLM was trained with LLaMA 7B as a baseline model to produce three models trained on (1) the Alpaca and synCovid datasets, (2) the synCovid dataset, and (3) the synCovid and real abstract datasets. These models were evaluated by two human evaluators and ChatGPT. Results demonstrate that training covLLM on the synCovid and abstract pairs datasets performs competitively with ChatGPT and outperforms covLLM trained primarily using the Alpaca dataset.
http://arxiv.org/pdf/2306.04926
Yousuf A. Khan, Clarisse Hokia, Jennifer Xu, Ben Ehlert
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230608
20230608
[]
2306.05087
5
On the other hand, the labels of previous evaluation methods [23, 24] simply definite answers and fail to consider the language complexity in practice. The evaluation metrics of these procedures are typically accuracy and F1-score, without considering the subjective evaluation metrics that autoregressive generative language models should pay attention to, thus does not reflect the potential of such models to generate contextually relevant text. The appropriate subjective evaluation metrics can be relative conciseness, clarity, adherence to instructions, comprehensiveness, formality, and context relevance.
2306.05087#5
PandaLM: An Automatic Evaluation Benchmark for LLM Instruction Tuning Optimization
Instruction tuning large language models (LLMs) remains a challenging task, owing to the complexity of hyperparameter selection and the difficulty involved in evaluating the tuned models. To determine the optimal hyperparameters, an automatic, robust, and reliable evaluation benchmark is essential. However, establishing such a benchmark is not a trivial task due to the challenges associated with evaluation accuracy and privacy protection. In response to these challenges, we introduce a judge large language model, named PandaLM, which is trained to distinguish the superior model given several LLMs. PandaLM's focus extends beyond just the objective correctness of responses, which is the main focus of traditional evaluation datasets. It addresses vital subjective factors such as relative conciseness, clarity, adherence to instructions, comprehensiveness, and formality. To ensure the reliability of PandaLM, we collect a diverse human-annotated test dataset, where all contexts are generated by humans and labels are aligned with human preferences. Our results indicate that PandaLM-7B achieves 93.75% of GPT-3.5's evaluation ability and 88.28% of GPT-4's in terms of F1-score on our test dataset. PandaLM enables the evaluation of LLM to be fairer but with less cost, evidenced by significant improvements achieved by models tuned through PandaLM compared to their counterparts trained with default Alpaca's hyperparameters. In addition, PandaLM does not depend on API-based evaluations, thus avoiding potential data leakage. All resources of PandaLM are released at https://github.com/WeOpenML/PandaLM.
http://arxiv.org/pdf/2306.05087
Yidong Wang, Zhuohao Yu, Zhengran Zeng, Linyi Yang, Cunxiang Wang, Hao Chen, Chaoya Jiang, Rui Xie, Jindong Wang, Xing Xie, Wei Ye, Shikun Zhang, Yue Zhang
cs.CL, cs.AI
null
null
cs.CL
20230608
20230608
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "1803.05457" }, { "id": "2305.10403" }, { "id": "1807.05118" }, { "id": "2211.05100" }, { "id": "2302.10198" }, { "id": "2205.01068" }, { "id": "2003.05689" }, { "id": "1806.03822" }, { "id": "1711.05101" }, { "id": "2304.03208" }, { "id": "2304.01373" }, { "id": "2303.14742" }, { "id": "2303.04673" }, { "id": "2212.10560" }, { "id": "2211.08073" }, { "id": "2210.02414" }, { "id": "2304.03277" }, { "id": "2002.06305" }, { "id": "2305.13412" }, { "id": "2304.01196" } ]
2306.05171
5
1 does not support the expression of a recursive, that is, a command with a cycle. Therefore, similar to the paradigm for describing entity relationships in the database field and the syntax paradigm in the principles of compilation, we attempt to discuss this question: Can we provide a method to better describe structured knowledge in professional fields? If progress can be made on this question, it may be possible to train general artificial intelligence in specific professional fields more efficiently. For the issue of optimization biased towards actual engineering, we have summarized our experience in the feasibility verification of Prompt and proposed some ideas for improvement and optimization. The main contributions of this article include: (1) Proposing an LLM prompt template, Think_Net_Prompt, which has stronger capabilities in expressing structured professional knowledge and is easy to configure, and trying to assess its feasibility. We successfully verifying the possibility of LLM using the same command to recursively layer tasks, which means that complex tasks can be analyzed in a simpler way and reduce the difficulty of professional knowledge design; (2) Proposing a method to decompose tasks layer by layer, generating a task tree to reduce the volume of task planning each time. Proposing an executable task sequence generation algorithm which regenerates the task description and task goal according to a given precision format each time a subtask is generated, enabling LLM to perform better in single tasks .
2306.05171#5
Robot Task Planning Based on Large Language Model Representing Knowledge with Directed Graph Structures
Traditional robot task planning methods face challenges when dealing with highly unstructured environments and complex tasks. We propose a task planning method that combines human expertise with an LLM and have designed an LLM prompt template, Think_Net_Prompt, with stronger expressive power to represent structured professional knowledge. We further propose a method to progressively decompose tasks and generate a task tree to reduce the planning volume for each task, and we have designed a strategy to decouple robot task planning. By dividing different planning entities and separating the task from the actual machine binding process, the task planning process becomes more flexible. Research results show that our method performs well in handling specified code formats, understanding the relationship between tasks and subtasks, and extracting parameters from text descriptions. However, there are also problems such as limited complexity of task logic handling, ambiguity in the quantity of parts and the precise location of assembly. Improving the precision of task description and cognitive structure can bring certain improvements. https://github.com/NOMIzy/Think_Net_Prompt
http://arxiv.org/pdf/2306.05171
Yue Zhen, Sheng Bi, Lu Xing-tong, Pan Wei-qin, Shi Hai-peng, Chen Zi-rui, Fang Yi-shu
cs.RO, cs.AI
null
null
cs.RO
20230608
20230608
[ { "id": "2302.12927" }, { "id": "2212.06817" }, { "id": "2006.05398" }, { "id": "2209.05451" }, { "id": "2209.11302" }, { "id": "2210.12250" }, { "id": "2204.01691" }, { "id": "2201.07207" }, { "id": "2303.12153" } ]
2306.05212
5
from an external repository (knowledge corpus) to generate texts based on them. It has been verified that retrieval-augmented LLMs can gen- erate texts in response to user input with fewer hallucinations (Nakano et al., 2022). Further- more, by incorporating customized private data resources, retrieval-augmented LLMs can respond to in-domain queries that cannot be answered by LLMs trained with public data.
2306.05212#5
RETA-LLM: A Retrieval-Augmented Large Language Model Toolkit
Although Large Language Models (LLMs) have demonstrated extraordinary capabilities in many domains, they still have a tendency to hallucinate and generate fictitious responses to user requests. This problem can be alleviated by augmenting LLMs with information retrieval (IR) systems (also known as retrieval-augmented LLMs). Applying this strategy, LLMs can generate more factual texts in response to user input according to the relevant content retrieved by IR systems from external corpora as references. In addition, by incorporating external knowledge, retrieval-augmented LLMs can answer in-domain questions that cannot be answered by solely relying on the world knowledge stored in parameters. To support research in this area and facilitate the development of retrieval-augmented LLM systems, we develop RETA-LLM, a {RET}reival-{A}ugmented LLM toolkit. In RETA-LLM, we create a complete pipeline to help researchers and users build their customized in-domain LLM-based systems. Compared with previous retrieval-augmented LLM systems, RETA-LLM provides more plug-and-play modules to support better interaction between IR systems and LLMs, including {request rewriting, document retrieval, passage extraction, answer generation, and fact checking} modules. Our toolkit is publicly available at https://github.com/RUC-GSAI/YuLan-IR/tree/main/RETA-LLM.
http://arxiv.org/pdf/2306.05212
Jiongnan Liu, Jiajie Jin, Zihan Wang, Jiehan Cheng, Zhicheng Dou, Ji-Rong Wen
cs.IR
Technical Report for RETA-LLM
null
cs.IR
20230608
20230608
[ { "id": "2210.02414" }, { "id": "2208.05753" } ]
2306.05301
5
In this paper, we explore whether it is feasible for com- pact language models to learn generalized tool-use abilities. Intuitively, previous studies have demonstrated the possi- bility of equipping compact language models with gener- alized instruction-following abilities by fine-tuning them on diversified instruction datasets (Taori et al. 2023; Zhou et al. 2023). Hence, a promising strategy for equipping language models with generalized tool-use abilities would involve fine-tuning them on a corpus containing highly-diversified tool-use instances. Unfortunately, such a diversified corpus is currently unavailable. This absence can be attributed to several crucial factors. First, the absence of a set of available tool APIs that can accommodate various tool usage scenar- ios for language models presents a considerable challenge in assembling a diverse collection of tools. Second, real-world tool-use instances often entail complex, intricate, and multi- turn interactions between the language model, users, and tools. This complexity significantly heightens the difficulty and manual effort involved in creating instances encompass- ing a wide array of tools on a large scale. Consequently,
2306.05301#5
ToolAlpaca: Generalized Tool Learning for Language Models with 3000 Simulated Cases
Enabling large language models to utilize real-world tools effectively is crucial for achieving embodied intelligence. Existing approaches to tool learning have either primarily relied on extremely large language models, such as GPT-4, to attain generalized tool-use abilities in a zero-shot manner, or utilized supervised learning to train limited scopes of tools on compact models. However, it remains uncertain whether smaller language models can achieve generalized tool-use abilities without tool-specific training. To address this question, this paper introduces ToolAlpaca, a novel framework designed to automatically generate a diverse tool-use corpus and learn generalized tool-use abilities on compact language models with minimal human intervention. Specifically, ToolAlpaca first automatically creates a highly diversified tool-use corpus by building a multi-agent simulation environment. The corpus contains 3938 tool-use instances from more than 400 real-world tool APIs spanning 50 distinct categories. Subsequently, the constructed corpus is employed to fine-tune compact language models, resulting in two models, namely ToolAlpaca-7B and ToolAlpaca-13B, respectively. Finally, we evaluate the ability of these models to utilize previously unseen tools without specific training. Experimental results demonstrate that ToolAlpaca achieves effective generalized tool-use capabilities comparable to those of extremely large language models like GPT-3.5, demonstrating that learning generalized tool-use ability is feasible for compact language models.
http://arxiv.org/pdf/2306.05301
Qiaoyu Tang, Ziliang Deng, Hongyu Lin, Xianpei Han, Qiao Liang, Boxi Cao, Le Sun
cs.CL
null
null
cs.CL
20230608
20230907
[ { "id": "2305.16504" }, { "id": "2305.13691" }, { "id": "2304.08244" }, { "id": "2303.08774" }, { "id": "2211.08264" }, { "id": "2304.08354" }, { "id": "2305.18752" }, { "id": "2212.14024" }, { "id": "2211.10435" }, { "id": "2210.03629" }, { "id": "2212.09689" }, { "id": "2306.06624" }, { "id": "2212.10560" }, { "id": "2211.12588" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2307.16789" }, { "id": "2304.09842" }, { "id": "2305.11206" }, { "id": "2302.07842" } ]
2306.05424
5
# 2 Related Work Vision Language Models: Significant advancements in the field of computer vision have recently been observed due to the development of many foundational vision-language models. These models represent a significant leap towards creating general-purpose vision models capable of tackling various tasks simultaneously [6, 10–12]. A prime example is CLIP [6], which is trained on 400M image-text pairs and has demonstrated impressive zero-shot performance on numerous benchmarks. It has been employed in various downstream applications, from image-based object detection and segmentation [13, 14] to 3D applications [15, 16]. Numerous attempts have also been made to adapt CLIP for video applications [17, 16]. Similar to our design, ViFi-CLIP [18] suggests employing temporal pooling across video frames to adapt the image-based CLIP model for video-based tasks. Large Language Models: The field of natural language processing has witnessed a paradigm shift with the advent of pretrained Large Language Models (LLMs) such as GPT [19], LLaMA [20], OPT [21], and MOSS [22]. These models exhibit extraordinary abilities like language generation and in-context learning, and their knack for understanding intricate tasks given user prompts in a zero-shot manner reflects their impressive adaptability and generalization. The proven capabilities of LLMs have encouraged researchers to fine-tune them to maximize their proficiency.
2306.05424#5
Video-ChatGPT: Towards Detailed Video Understanding via Large Vision and Language Models
Conversation agents fueled by Large Language Models (LLMs) are providing a new way to interact with visual data. While there have been initial attempts for image-based conversation models, this work addresses the underexplored field of video-based conversation by introducing Video-ChatGPT. It is a multimodal model that merges a video-adapted visual encoder with a LLM. The model is capable of understanding and generating human-like conversations about videos. We introduce a new dataset of 100,000 video-instruction pairs used to train Video-ChatGPT acquired via manual and semi-automated pipeline that is easily scalable and robust to label noise. We also develop a quantiative evaluation framework for video-based dialogue models to objectively analyse the strengths and weaknesses of proposed models. Our code, models, instruction-sets and demo are released at https://github.com/mbzuai-oryx/Video-ChatGPT.
http://arxiv.org/pdf/2306.05424
Muhammad Maaz, Hanoona Rasheed, Salman Khan, Fahad Shahbaz Khan
cs.CV
null
null
cs.CV
20230608
20230608
[ { "id": "2103.07461" }, { "id": "2302.13971" }, { "id": "2109.08472" }, { "id": "2303.05657" }, { "id": "2212.00280" }, { "id": "2305.06355" }, { "id": "2206.08155" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2005.14165" }, { "id": "2305.16355" }, { "id": "2212.03191" }, { "id": "2205.01068" } ]
2306.05499
5
Utilizing these insights, we introduce HOUYI1, a ground- breaking black-box prompt injection attack methodology, no- table for its versatility and adaptability when targeting LLM- integrated service providers. To our knowledge, our work represents the pioneering efforts towards a systematic per- spective of such threat, capable of manipulating LLMs across various platforms and contexts without direct access to the in- ternals of the system. HOUYI employs an LLM to deduce the semantics of the target application from user interactions and applies different strategies to construct the injected prompt. Notably, HOUYI comprises three distinct phases. In the Con- text Inference phase, we engage with the target application to grasp its inherent context and input-output relationships. In the Payload Generation phase, we devise a prompt gen- eration plan based on the obtained application context and prompt injection guidelines. In the Feedback phase, we gauge the effectiveness of our attack by scrutinizing the LLM’s re- sponses to the injected prompts. We then refine our strategy to enhance the success rate, enabling iterative improvement of the payload until it achieves optimal injection outcome. This three-phase approach constitutes a comprehensive and adapt- able strategy, effective across diverse real-world applications and scenarios.
2306.05499#5
Prompt Injection attack against LLM-integrated Applications
Large Language Models (LLMs), renowned for their superior proficiency in language comprehension and generation, stimulate a vibrant ecosystem of applications around them. However, their extensive assimilation into various services introduces significant security risks. This study deconstructs the complexities and implications of prompt injection attacks on actual LLM-integrated applications. Initially, we conduct an exploratory analysis on ten commercial applications, highlighting the constraints of current attack strategies in practice. Prompted by these limitations, we subsequently formulate HouYi, a novel black-box prompt injection attack technique, which draws inspiration from traditional web injection attacks. HouYi is compartmentalized into three crucial elements: a seamlessly-incorporated pre-constructed prompt, an injection prompt inducing context partition, and a malicious payload designed to fulfill the attack objectives. Leveraging HouYi, we unveil previously unknown and severe attack outcomes, such as unrestricted arbitrary LLM usage and uncomplicated application prompt theft. We deploy HouYi on 36 actual LLM-integrated applications and discern 31 applications susceptible to prompt injection. 10 vendors have validated our discoveries, including Notion, which has the potential to impact millions of users. Our investigation illuminates both the possible risks of prompt injection attacks and the possible tactics for mitigation.
http://arxiv.org/pdf/2306.05499
Yi Liu, Gelei Deng, Yuekang Li, Kailong Wang, Tianwei Zhang, Yepang Liu, Haoyu Wang, Yan Zheng, Yang Liu
cs.CR, cs.AI, cs.CL, cs.SE
null
null
cs.CR
20230608
20230608
[]
2306.04926
6
Through the race to publish on the COVID-19 pandemic, scientists have highlighted the volume of articles and questioned the quality of clinical trials. Ioannidis, Salholz-Hillel, et. al underscore the number of researchers and breadth of disciplines that published on COVID-19, stating that 28% of COVID-19 publication authors published in a subfield that was different from their subfield of expertise. They express concern that some COVID-19
2306.04926#6
covLLM: Large Language Models for COVID-19 Biomedical Literature
The COVID-19 pandemic led to 1.1 million deaths in the United States, despite the explosion of coronavirus research. These new findings are slow to translate to clinical interventions, leading to poorer patient outcomes and unnecessary deaths. One reason is that clinicians, overwhelmed by patients, struggle to keep pace with the rate of new coronavirus literature. A potential solution is developing a tool for evaluating coronavirus literature using large language models (LLMs) -- neural networks that are deployed for natural language processing. LLMs can be used to summarize and extract user-specified information. The greater availability and advancement of LLMs and pre-processed coronavirus literature databases provide the opportunity to assist clinicians in evaluating coronavirus literature through a coronavirus literature specific LLM (covLLM), a tool that directly takes an inputted research article and a user query to return an answer. Using the COVID-19 Open Research Dataset (CORD-19), we produced two datasets: (1) synCovid, which uses a combination of handwritten prompts and synthetic prompts generated using OpenAI, and (2) real abstracts, which contains abstract and title pairs. covLLM was trained with LLaMA 7B as a baseline model to produce three models trained on (1) the Alpaca and synCovid datasets, (2) the synCovid dataset, and (3) the synCovid and real abstract datasets. These models were evaluated by two human evaluators and ChatGPT. Results demonstrate that training covLLM on the synCovid and abstract pairs datasets performs competitively with ChatGPT and outperforms covLLM trained primarily using the Alpaca dataset.
http://arxiv.org/pdf/2306.04926
Yousuf A. Khan, Clarisse Hokia, Jennifer Xu, Ben Ehlert
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230608
20230608
[]
2306.05087
6
To tackle these challenges, we introduce a judge language model, aiming for Reproducible and Automated Language Model Assessment (PandaLM). Tuned from LLaMA-7B, PandaLM is used to distinguish the most superior model among various candidates, each fine-tuned with different hyperparameters, and is also capable of providing the rationale behind its choice based on the reference response for the context. PandaLM surpasses the limitations of traditional evaluation methods and focuses on more subjective aspects, such as relative conciseness, clarity, comprehensiveness, formality, and adherence to instructions. Furthermore, the robustness of PandaLM is strengthened by its ability to identify and rectify problems such as logical fallacies, unnecessary repetitions, grammatical inaccuracies, and context irrelevance. By considering these diverse aspects, we leverage PandaLM’s ability to distinguish the most superior model among candidates on the validation set and then provide insights for facilitating hyperparameter optimization of instruction tuning.
2306.05087#6
PandaLM: An Automatic Evaluation Benchmark for LLM Instruction Tuning Optimization
Instruction tuning large language models (LLMs) remains a challenging task, owing to the complexity of hyperparameter selection and the difficulty involved in evaluating the tuned models. To determine the optimal hyperparameters, an automatic, robust, and reliable evaluation benchmark is essential. However, establishing such a benchmark is not a trivial task due to the challenges associated with evaluation accuracy and privacy protection. In response to these challenges, we introduce a judge large language model, named PandaLM, which is trained to distinguish the superior model given several LLMs. PandaLM's focus extends beyond just the objective correctness of responses, which is the main focus of traditional evaluation datasets. It addresses vital subjective factors such as relative conciseness, clarity, adherence to instructions, comprehensiveness, and formality. To ensure the reliability of PandaLM, we collect a diverse human-annotated test dataset, where all contexts are generated by humans and labels are aligned with human preferences. Our results indicate that PandaLM-7B achieves 93.75% of GPT-3.5's evaluation ability and 88.28% of GPT-4's in terms of F1-score on our test dataset. PandaLM enables the evaluation of LLM to be fairer but with less cost, evidenced by significant improvements achieved by models tuned through PandaLM compared to their counterparts trained with default Alpaca's hyperparameters. In addition, PandaLM does not depend on API-based evaluations, thus avoiding potential data leakage. All resources of PandaLM are released at https://github.com/WeOpenML/PandaLM.
http://arxiv.org/pdf/2306.05087
Yidong Wang, Zhuohao Yu, Zhengran Zeng, Linyi Yang, Cunxiang Wang, Hao Chen, Chaoya Jiang, Rui Xie, Jindong Wang, Xing Xie, Wei Ye, Shikun Zhang, Yue Zhang
cs.CL, cs.AI
null
null
cs.CL
20230608
20230608
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "1803.05457" }, { "id": "2305.10403" }, { "id": "1807.05118" }, { "id": "2211.05100" }, { "id": "2302.10198" }, { "id": "2205.01068" }, { "id": "2003.05689" }, { "id": "1806.03822" }, { "id": "1711.05101" }, { "id": "2304.03208" }, { "id": "2304.01373" }, { "id": "2303.14742" }, { "id": "2303.04673" }, { "id": "2212.10560" }, { "id": "2211.08073" }, { "id": "2210.02414" }, { "id": "2304.03277" }, { "id": "2002.06305" }, { "id": "2305.13412" }, { "id": "2304.01196" } ]
2306.05152
6
To illustrate this idea, we organize the paper as follows. In Section II, we present literature on LLMs and how they can be used to emulate cognitive models for human behavior, thus providing a way of implementing our vision of testing LLMs that interact with tools and have agency while interacting with humans. In Section III, we provide a taxonomy of LLM-based software testing systems based on whether the LLMs are used in an interactive way, and the degree of ‘autonomy’, i.e. formu- lating and executing its own plans. In Section IV we present an example interaction with the GPT-4 model [10], demonstrating that even without significant autonomy, developers gain an opportunity to ponder fine-grained semantics of their code via dialogue. The benefits of (autonomous) conversational testing agents are given in Section V, and we argue that greater autonomy confers greater benefits. Potential limitations are given in Section VI, and conclude in Section VII. # II. BACKGROUND With the recent advancements in large language models, the possibility of having a personal agent capable of assisting with general tasks is greater than ever. One popular ongoing
2306.05152#6
Towards Autonomous Testing Agents via Conversational Large Language Models
Software testing is an important part of the development cycle, yet it requires specialized expertise and substantial developer effort to adequately test software. Recent discoveries of the capabilities of large language models (LLMs) suggest that they can be used as automated testing assistants, and thus provide helpful information and even drive the testing process. To highlight the potential of this technology, we present a taxonomy of LLM-based testing agents based on their level of autonomy, and describe how a greater level of autonomy can benefit developers in practice. An example use of LLMs as a testing assistant is provided to demonstrate how a conversational framework for testing can help developers. This also highlights how the often criticized hallucination of LLMs can be beneficial for testing. We identify other tangible benefits that LLM-driven testing agents can bestow, and also discuss potential limitations.
http://arxiv.org/pdf/2306.05152
Robert Feldt, Sungmin Kang, Juyeon Yoon, Shin Yoo
cs.SE
null
null
cs.SE
20230608
20230905
[ { "id": "2305.10601" }, { "id": "2303.17580" }, { "id": "2305.16291" }, { "id": "2201.09305" }, { "id": "2210.03629" }, { "id": "2211.10435" }, { "id": "2303.12712" }, { "id": "2302.03287" }, { "id": "2209.11515" } ]
2306.05171
6
(3) Proposing a method to decouple robot task planning at a higher level and design a method to split subtasks: a. Divide planning entities with different professional knowledge to cooperate in generating the overall executable task sequence b. Separate the task of binding the executable task entities according to the number of actual robots and work status and hand it over to another type of entity. # II. BACKGROUD A. Robot task and action planning problem The problem of planning for a robot in an environment with a large number of objects, enabling it to execute actions by changing object states and its own movement in the world, is known as Task and Motion Planning (TAMP). The TAMP problem includes elements of discrete task planning, discrete continuous mathematical planning, and continuous motion planning [1]. In the most common series of solutions, the task planner uses symbolic domain statements and goals to generate candidate plan frameworks, while the action planner verifies the geometric feasibility of the plan framework and returns an action plan when successful. This process is repeated until a solution is found or the planner times out. General TAMP methods have three core components: 1) Pre-set dynamic models and overall task state information. 2) Carefully defined symbolic planning domains, which can be adjusted for the capabilities, environment, and tasks of specific robots to perform task planning. 3) A process for testing the geometric feasibility of task plans.
2306.05171#6
Robot Task Planning Based on Large Language Model Representing Knowledge with Directed Graph Structures
Traditional robot task planning methods face challenges when dealing with highly unstructured environments and complex tasks. We propose a task planning method that combines human expertise with an LLM and have designed an LLM prompt template, Think_Net_Prompt, with stronger expressive power to represent structured professional knowledge. We further propose a method to progressively decompose tasks and generate a task tree to reduce the planning volume for each task, and we have designed a strategy to decouple robot task planning. By dividing different planning entities and separating the task from the actual machine binding process, the task planning process becomes more flexible. Research results show that our method performs well in handling specified code formats, understanding the relationship between tasks and subtasks, and extracting parameters from text descriptions. However, there are also problems such as limited complexity of task logic handling, ambiguity in the quantity of parts and the precise location of assembly. Improving the precision of task description and cognitive structure can bring certain improvements. https://github.com/NOMIzy/Think_Net_Prompt
http://arxiv.org/pdf/2306.05171
Yue Zhen, Sheng Bi, Lu Xing-tong, Pan Wei-qin, Shi Hai-peng, Chen Zi-rui, Fang Yi-shu
cs.RO, cs.AI
null
null
cs.RO
20230608
20230608
[ { "id": "2302.12927" }, { "id": "2212.06817" }, { "id": "2006.05398" }, { "id": "2209.05451" }, { "id": "2209.11302" }, { "id": "2210.12250" }, { "id": "2204.01691" }, { "id": "2201.07207" }, { "id": "2303.12153" } ]
2306.05212
6
To support research in this area and help users build their own in-domain LLM-based systems, we devise RETA-LLM, a RETreival-Augmented LLM toolkit. Different from previous general LLM- enhanced toolkits such as LangChain,1 RETA- LLM focuses on the retrieval-augmented LLMs and provides more plug-in modules. Typically, retrieval-augmented LLMs use a retrieve-and- generate strategy with two modules: First, they retrieve documents or passages based on user re- quest (document retrieval module); then, they gen- erate answers utilizing these relevant documents as references (answer generation module). In addi*Corresponding author. 1LangChain, langchain https://github.com/hwchase17/ History Requests “Introduce the majors in School of Information?” User Request “How about School of Economics” Top-K Relevant Documents Systems Dy: Economics major’s web page. Dz: Digital economics major’s web page. D3: International economics and trade major’s web page. Dx: School of Applied Economics’ web page Final Response Figure 1: The RETA-LLM framework. Examples are taken from an intelligent university information seeking system powered by RETA-LLM.
2306.05212#6
RETA-LLM: A Retrieval-Augmented Large Language Model Toolkit
Although Large Language Models (LLMs) have demonstrated extraordinary capabilities in many domains, they still have a tendency to hallucinate and generate fictitious responses to user requests. This problem can be alleviated by augmenting LLMs with information retrieval (IR) systems (also known as retrieval-augmented LLMs). Applying this strategy, LLMs can generate more factual texts in response to user input according to the relevant content retrieved by IR systems from external corpora as references. In addition, by incorporating external knowledge, retrieval-augmented LLMs can answer in-domain questions that cannot be answered by solely relying on the world knowledge stored in parameters. To support research in this area and facilitate the development of retrieval-augmented LLM systems, we develop RETA-LLM, a {RET}reival-{A}ugmented LLM toolkit. In RETA-LLM, we create a complete pipeline to help researchers and users build their customized in-domain LLM-based systems. Compared with previous retrieval-augmented LLM systems, RETA-LLM provides more plug-and-play modules to support better interaction between IR systems and LLMs, including {request rewriting, document retrieval, passage extraction, answer generation, and fact checking} modules. Our toolkit is publicly available at https://github.com/RUC-GSAI/YuLan-IR/tree/main/RETA-LLM.
http://arxiv.org/pdf/2306.05212
Jiongnan Liu, Jiajie Jin, Zihan Wang, Jiehan Cheng, Zhicheng Dou, Ji-Rong Wen
cs.IR
Technical Report for RETA-LLM
null
cs.IR
20230608
20230608
[ { "id": "2210.02414" }, { "id": "2208.05753" } ]
2306.05301
6
PO Toolset Construction —_—_—_——_aairiaiaias \ Gomer | openap 3.00 wr tile Public Holidays |) | Sescripuon | servers: | I getHolidays: Get alist of holidays for a particular country with dates, descriptions, and types. searchHoliday: Search for holidays based on keywords, country, and date - url: paths: Iholidays!{country}: range. Inolidays/search: I'm planning a trip to Japan next year, and | want to avoid any major holidays, so can you tell me the list of holidays in Japan for 2024? ; oo | need to get the list of holidays in \ i i i i ' Japan for 2024. | Action: getHolidays i ‘Action Input: (‘country*: “lapan’, I "year": 2024) i Status Code: 200. Response: i {Cholidays":[(‘name i Day’, “dat 4-01 ; | tNational’, “description ToolAlpaca Training the interactions the tools by leveraging agents. In this way, ate a substantial manual intervention. clusive tool-use fectively showcasing distinct tools. To verify whether guage models with duct experiments to ang et al.
2306.05301#6
ToolAlpaca: Generalized Tool Learning for Language Models with 3000 Simulated Cases
Enabling large language models to utilize real-world tools effectively is crucial for achieving embodied intelligence. Existing approaches to tool learning have either primarily relied on extremely large language models, such as GPT-4, to attain generalized tool-use abilities in a zero-shot manner, or utilized supervised learning to train limited scopes of tools on compact models. However, it remains uncertain whether smaller language models can achieve generalized tool-use abilities without tool-specific training. To address this question, this paper introduces ToolAlpaca, a novel framework designed to automatically generate a diverse tool-use corpus and learn generalized tool-use abilities on compact language models with minimal human intervention. Specifically, ToolAlpaca first automatically creates a highly diversified tool-use corpus by building a multi-agent simulation environment. The corpus contains 3938 tool-use instances from more than 400 real-world tool APIs spanning 50 distinct categories. Subsequently, the constructed corpus is employed to fine-tune compact language models, resulting in two models, namely ToolAlpaca-7B and ToolAlpaca-13B, respectively. Finally, we evaluate the ability of these models to utilize previously unseen tools without specific training. Experimental results demonstrate that ToolAlpaca achieves effective generalized tool-use capabilities comparable to those of extremely large language models like GPT-3.5, demonstrating that learning generalized tool-use ability is feasible for compact language models.
http://arxiv.org/pdf/2306.05301
Qiaoyu Tang, Ziliang Deng, Hongyu Lin, Xianpei Han, Qiao Liang, Boxi Cao, Le Sun
cs.CL
null
null
cs.CL
20230608
20230907
[ { "id": "2305.16504" }, { "id": "2305.13691" }, { "id": "2304.08244" }, { "id": "2303.08774" }, { "id": "2211.08264" }, { "id": "2304.08354" }, { "id": "2305.18752" }, { "id": "2212.14024" }, { "id": "2211.10435" }, { "id": "2210.03629" }, { "id": "2212.09689" }, { "id": "2306.06624" }, { "id": "2212.10560" }, { "id": "2211.12588" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2307.16789" }, { "id": "2304.09842" }, { "id": "2305.11206" }, { "id": "2302.07842" } ]
2306.05424
6
A key strategy in this pursuit is instruction tuning. This approach focuses on improving the model’s alignment with user intentions and optimizing their output quality. For instance, InstructGPT [23] and ChatGPT [24] significantly benefit from this technique, showcasing improvements in diverse conver- sational interaction capabilities and their aptitude to answer a broad range of complex questions. This effective approach has recently been employed in open-source models like Alpaca [25] and Vicuna [7], both developed using the LLaMA [20] framework, resulting in performance improvements. Pre-trained LLMs in Vision-Language Tasks: The recent strides in multimodal understanding have primarily been driven by the integration of image-based vision models with LLMs. Seminal contributions such as Flamingo [10] and BLIP-2 [4] have demonstrated the power of utilizing web-scale image-text data, as well as pioneering techniques in cross-modal alignment, to exhibit dynamic abilities in conversational and few-shot learning contexts. Building on this foundation, MiniGPT-4 [2] allows image-based conversations by integrating BLIP-2 and Vicuna for zero-shot image comprehension.
2306.05424#6
Video-ChatGPT: Towards Detailed Video Understanding via Large Vision and Language Models
Conversation agents fueled by Large Language Models (LLMs) are providing a new way to interact with visual data. While there have been initial attempts for image-based conversation models, this work addresses the underexplored field of video-based conversation by introducing Video-ChatGPT. It is a multimodal model that merges a video-adapted visual encoder with a LLM. The model is capable of understanding and generating human-like conversations about videos. We introduce a new dataset of 100,000 video-instruction pairs used to train Video-ChatGPT acquired via manual and semi-automated pipeline that is easily scalable and robust to label noise. We also develop a quantiative evaluation framework for video-based dialogue models to objectively analyse the strengths and weaknesses of proposed models. Our code, models, instruction-sets and demo are released at https://github.com/mbzuai-oryx/Video-ChatGPT.
http://arxiv.org/pdf/2306.05424
Muhammad Maaz, Hanoona Rasheed, Salman Khan, Fahad Shahbaz Khan
cs.CV
null
null
cs.CV
20230608
20230608
[ { "id": "2103.07461" }, { "id": "2302.13971" }, { "id": "2109.08472" }, { "id": "2303.05657" }, { "id": "2212.00280" }, { "id": "2305.06355" }, { "id": "2206.08155" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2005.14165" }, { "id": "2305.16355" }, { "id": "2212.03191" }, { "id": "2205.01068" } ]
2306.05499
6
To substantiate HOUYI, we devise a comprehensive toolkit and apply it across all the 36 real-world LLM-integrated ser- vices. Impressively, the toolkit registers an 86.1% success rate in launching attacks. We further highlight the potentially severe ramifications of these attacks. Specifically, we demon- strate that via prompt injection attacks, we can purloin the original service prompts, thereby imitating the service at zero cost, and freely exploit the LLM’s computational power for our own purposes. This could potentially result in the financial loss of millions of US dollars to the service providers, impact- ing millions of users. During these experiments, we strictly confine our experiments to avert any real-world damage. We have responsibly disclosed our findings to the respective ven- dors and ensured no unauthorized disclosure of information 1HOUYI is a mythological Chinese archer 2 related to the original prompts. Thwarting prompt injection attacks can pose a significant challenge. To evaluate the efficacy of existing countermea- sures, we apply common defensive mechanisms [46, 50, 52] to some open-source LLM-integrated projects. Our assess- ments reveal that while these defenses can mitigate traditional prompt injection attacks, they are still vulnerable to malicious payloads generated by HOUYI. We hope our work will in- spire additional research into the development of more robust defenses against prompt injection attacks.
2306.05499#6
Prompt Injection attack against LLM-integrated Applications
Large Language Models (LLMs), renowned for their superior proficiency in language comprehension and generation, stimulate a vibrant ecosystem of applications around them. However, their extensive assimilation into various services introduces significant security risks. This study deconstructs the complexities and implications of prompt injection attacks on actual LLM-integrated applications. Initially, we conduct an exploratory analysis on ten commercial applications, highlighting the constraints of current attack strategies in practice. Prompted by these limitations, we subsequently formulate HouYi, a novel black-box prompt injection attack technique, which draws inspiration from traditional web injection attacks. HouYi is compartmentalized into three crucial elements: a seamlessly-incorporated pre-constructed prompt, an injection prompt inducing context partition, and a malicious payload designed to fulfill the attack objectives. Leveraging HouYi, we unveil previously unknown and severe attack outcomes, such as unrestricted arbitrary LLM usage and uncomplicated application prompt theft. We deploy HouYi on 36 actual LLM-integrated applications and discern 31 applications susceptible to prompt injection. 10 vendors have validated our discoveries, including Notion, which has the potential to impact millions of users. Our investigation illuminates both the possible risks of prompt injection attacks and the possible tactics for mitigation.
http://arxiv.org/pdf/2306.05499
Yi Liu, Gelei Deng, Yuekang Li, Kailong Wang, Tianwei Zhang, Yepang Liu, Haoyu Wang, Yan Zheng, Yang Liu
cs.CR, cs.AI, cs.CL, cs.SE
null
null
cs.CR
20230608
20230608
[]
2306.04926
7
authors’ fields of expertise were “remote” from COVID-19, including “fisheries, ornithology, entomology, or architecture”. They also cite that some scientists had participated in “epistemic trespassing,” where scientists publish on health and medical questions, despite being experts in other fields. Moreover, surveys on the quality of COVID-19 research consistently found a high prevalence of low-quality studies7. Park, Mogg, et. al argue that clinical trials focused primarily on treatments for severe disease, rather than pre-exposure, post-exposure, or outpatient treatments and identified shortcomings including overlaps in proposed trials, having sample sizes smaller than 100 participants, and not identifying dose ranges. The translation of relevant findings to clinical practice has been slow and inconsistent, resulting in poorer quality of care to patients8. This exponential rise in coronavirus research creates the opportunity for computational methods that enable clinicians to efficiently filter through papers and rapidly translate these findings into treatments. # 1.2. Large Language Models
2306.04926#7
covLLM: Large Language Models for COVID-19 Biomedical Literature
The COVID-19 pandemic led to 1.1 million deaths in the United States, despite the explosion of coronavirus research. These new findings are slow to translate to clinical interventions, leading to poorer patient outcomes and unnecessary deaths. One reason is that clinicians, overwhelmed by patients, struggle to keep pace with the rate of new coronavirus literature. A potential solution is developing a tool for evaluating coronavirus literature using large language models (LLMs) -- neural networks that are deployed for natural language processing. LLMs can be used to summarize and extract user-specified information. The greater availability and advancement of LLMs and pre-processed coronavirus literature databases provide the opportunity to assist clinicians in evaluating coronavirus literature through a coronavirus literature specific LLM (covLLM), a tool that directly takes an inputted research article and a user query to return an answer. Using the COVID-19 Open Research Dataset (CORD-19), we produced two datasets: (1) synCovid, which uses a combination of handwritten prompts and synthetic prompts generated using OpenAI, and (2) real abstracts, which contains abstract and title pairs. covLLM was trained with LLaMA 7B as a baseline model to produce three models trained on (1) the Alpaca and synCovid datasets, (2) the synCovid dataset, and (3) the synCovid and real abstract datasets. These models were evaluated by two human evaluators and ChatGPT. Results demonstrate that training covLLM on the synCovid and abstract pairs datasets performs competitively with ChatGPT and outperforms covLLM trained primarily using the Alpaca dataset.
http://arxiv.org/pdf/2306.04926
Yousuf A. Khan, Clarisse Hokia, Jennifer Xu, Ben Ehlert
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230608
20230608
[]
2306.05087
7
In practice, we generate paired responses from a diverse set of similarly sized foundation models including LLaMA-7B [14], Bloom-7B [25], Cerebras-GPT-6.7B [26], OPT-7B [27], and Pythia- 6.9B [28]. Each of these models is fine-tuned using the same data and hyperparameters as Alpaca [13]. The paired responses from these tuned LLMs constitute the input of training data for PandaLM. The most straightforward approach to generate the corresponding target of training data is through human annotation, but this method can be costly and time-consuming [29]. Considering that GPT-3.5 has the ability to provide reliable evaluation to some extent, to reduce costs, we follow self-instruct [19] to distill data from GPT-3.5 and apply heuristic data filtering strategies to mitigate noise.
2306.05087#7
PandaLM: An Automatic Evaluation Benchmark for LLM Instruction Tuning Optimization
Instruction tuning large language models (LLMs) remains a challenging task, owing to the complexity of hyperparameter selection and the difficulty involved in evaluating the tuned models. To determine the optimal hyperparameters, an automatic, robust, and reliable evaluation benchmark is essential. However, establishing such a benchmark is not a trivial task due to the challenges associated with evaluation accuracy and privacy protection. In response to these challenges, we introduce a judge large language model, named PandaLM, which is trained to distinguish the superior model given several LLMs. PandaLM's focus extends beyond just the objective correctness of responses, which is the main focus of traditional evaluation datasets. It addresses vital subjective factors such as relative conciseness, clarity, adherence to instructions, comprehensiveness, and formality. To ensure the reliability of PandaLM, we collect a diverse human-annotated test dataset, where all contexts are generated by humans and labels are aligned with human preferences. Our results indicate that PandaLM-7B achieves 93.75% of GPT-3.5's evaluation ability and 88.28% of GPT-4's in terms of F1-score on our test dataset. PandaLM enables the evaluation of LLM to be fairer but with less cost, evidenced by significant improvements achieved by models tuned through PandaLM compared to their counterparts trained with default Alpaca's hyperparameters. In addition, PandaLM does not depend on API-based evaluations, thus avoiding potential data leakage. All resources of PandaLM are released at https://github.com/WeOpenML/PandaLM.
http://arxiv.org/pdf/2306.05087
Yidong Wang, Zhuohao Yu, Zhengran Zeng, Linyi Yang, Cunxiang Wang, Hao Chen, Chaoya Jiang, Rui Xie, Jindong Wang, Xing Xie, Wei Ye, Shikun Zhang, Yue Zhang
cs.CL, cs.AI
null
null
cs.CL
20230608
20230608
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "1803.05457" }, { "id": "2305.10403" }, { "id": "1807.05118" }, { "id": "2211.05100" }, { "id": "2302.10198" }, { "id": "2205.01068" }, { "id": "2003.05689" }, { "id": "1806.03822" }, { "id": "1711.05101" }, { "id": "2304.03208" }, { "id": "2304.01373" }, { "id": "2303.14742" }, { "id": "2303.04673" }, { "id": "2212.10560" }, { "id": "2211.08073" }, { "id": "2210.02414" }, { "id": "2304.03277" }, { "id": "2002.06305" }, { "id": "2305.13412" }, { "id": "2304.01196" } ]
2306.05152
7
# II. BACKGROUND With the recent advancements in large language models, the possibility of having a personal agent capable of assisting with general tasks is greater than ever. One popular ongoing project is AutoGPT [11], which aims to implement a fully autonomous system that works towards achieving user-defined goals. Beyond the basic capabilities of GPT-4 model, the pro- posed framework supports a high level of autonomy by giving access to external tools such as search engines, complementary neural network models, and file systems. Moreover, AutoGPT retains both short-term and long-term memory management to cope with complex tasks that exceed the context length limitation of currently available language models.
2306.05152#7
Towards Autonomous Testing Agents via Conversational Large Language Models
Software testing is an important part of the development cycle, yet it requires specialized expertise and substantial developer effort to adequately test software. Recent discoveries of the capabilities of large language models (LLMs) suggest that they can be used as automated testing assistants, and thus provide helpful information and even drive the testing process. To highlight the potential of this technology, we present a taxonomy of LLM-based testing agents based on their level of autonomy, and describe how a greater level of autonomy can benefit developers in practice. An example use of LLMs as a testing assistant is provided to demonstrate how a conversational framework for testing can help developers. This also highlights how the often criticized hallucination of LLMs can be beneficial for testing. We identify other tangible benefits that LLM-driven testing agents can bestow, and also discuss potential limitations.
http://arxiv.org/pdf/2306.05152
Robert Feldt, Sungmin Kang, Juyeon Yoon, Shin Yoo
cs.SE
null
null
cs.SE
20230608
20230905
[ { "id": "2305.10601" }, { "id": "2303.17580" }, { "id": "2305.16291" }, { "id": "2201.09305" }, { "id": "2210.03629" }, { "id": "2211.10435" }, { "id": "2303.12712" }, { "id": "2302.03287" }, { "id": "2209.11515" } ]
2306.05171
7
3) A process for testing the geometric feasibility of task plans. Hierarchical methods [2] are characteristic of the most common solutions. They typically use: 1) AI task planners to derive candidate plan frameworks 2) Action planners to obtain action trajectories that satisfy robot and environmental constraints; for example, through sample-based planning [3] or constraint optimization [4]. Current general ideas to accelerate TAMP include: learning sampling distributions [5], visual feasibility heuristics [6,7,8], low-level controllers [9,10], or state sparsifiers [11,12]. However, these methods learn solutions computed by classic TAMP solvers, so they also rely on carefully designed symbolic planning domains specific to the task. While methods have been proposed for learning symbolic representations for TAMP [13,14], these methods usually require symbolic transformation prior knowledge specific to the task. B. Application of language conditioned policy in robot task and action planning problem Language, as a medium for solving TAMP, has attracted a lot of attention. Language conditioned policies (LCP) can now be applied to manipulate robots. Many methods have been proposed for short-term tasks [15,16,17], and some focus on long-term tasks [18,19].
2306.05171#7
Robot Task Planning Based on Large Language Model Representing Knowledge with Directed Graph Structures
Traditional robot task planning methods face challenges when dealing with highly unstructured environments and complex tasks. We propose a task planning method that combines human expertise with an LLM and have designed an LLM prompt template, Think_Net_Prompt, with stronger expressive power to represent structured professional knowledge. We further propose a method to progressively decompose tasks and generate a task tree to reduce the planning volume for each task, and we have designed a strategy to decouple robot task planning. By dividing different planning entities and separating the task from the actual machine binding process, the task planning process becomes more flexible. Research results show that our method performs well in handling specified code formats, understanding the relationship between tasks and subtasks, and extracting parameters from text descriptions. However, there are also problems such as limited complexity of task logic handling, ambiguity in the quantity of parts and the precise location of assembly. Improving the precision of task description and cognitive structure can bring certain improvements. https://github.com/NOMIzy/Think_Net_Prompt
http://arxiv.org/pdf/2306.05171
Yue Zhen, Sheng Bi, Lu Xing-tong, Pan Wei-qin, Shi Hai-peng, Chen Zi-rui, Fang Yi-shu
cs.RO, cs.AI
null
null
cs.RO
20230608
20230608
[ { "id": "2302.12927" }, { "id": "2212.06817" }, { "id": "2006.05398" }, { "id": "2209.05451" }, { "id": "2209.11302" }, { "id": "2210.12250" }, { "id": "2204.01691" }, { "id": "2201.07207" }, { "id": "2303.12153" } ]
2306.05212
7
Figure 1: The RETA-LLM framework. Examples are taken from an intelligent university information seeking system powered by RETA-LLM. tion to these two basic modules, our RETA-LLM provides three optional modules: (1) a request rewriting module to make user’s current request more complete and clear; (2) a passage extraction module to extract relevant passages or fragments from the whole retrieved document contents; and (3) a fact checking module to verify whether there exist factual errors in the generated answers. These optional modules can make the interaction between IR systems and LLMs more effective and smooth. The disentanglement between LLMs and IR sys- tems in our RETA-LLM is more thorough, which makes the customization of search engines and LLMs more convenient. Furthermore, to make the usage easier, we provide a complete and ready-to- use pipeline for researchers and users to build their RETA-LLM toolkits based on their own repository for in-domain LLM-based systems from scratch. RETA-LLM is part of YuLan, a open source LLM initiative proposed by Gaoling School of Ar- tificial Intelligence, Renmin University of China. RETA-LLM is still under development and there are many issues that need to be solved with great efforts. We sincerely welcome contributions on this open source toolkit. # 2 RETA-LLM Framework
2306.05212#7
RETA-LLM: A Retrieval-Augmented Large Language Model Toolkit
Although Large Language Models (LLMs) have demonstrated extraordinary capabilities in many domains, they still have a tendency to hallucinate and generate fictitious responses to user requests. This problem can be alleviated by augmenting LLMs with information retrieval (IR) systems (also known as retrieval-augmented LLMs). Applying this strategy, LLMs can generate more factual texts in response to user input according to the relevant content retrieved by IR systems from external corpora as references. In addition, by incorporating external knowledge, retrieval-augmented LLMs can answer in-domain questions that cannot be answered by solely relying on the world knowledge stored in parameters. To support research in this area and facilitate the development of retrieval-augmented LLM systems, we develop RETA-LLM, a {RET}reival-{A}ugmented LLM toolkit. In RETA-LLM, we create a complete pipeline to help researchers and users build their customized in-domain LLM-based systems. Compared with previous retrieval-augmented LLM systems, RETA-LLM provides more plug-and-play modules to support better interaction between IR systems and LLMs, including {request rewriting, document retrieval, passage extraction, answer generation, and fact checking} modules. Our toolkit is publicly available at https://github.com/RUC-GSAI/YuLan-IR/tree/main/RETA-LLM.
http://arxiv.org/pdf/2306.05212
Jiongnan Liu, Jiajie Jin, Zihan Wang, Jiehan Cheng, Zhicheng Dou, Ji-Rong Wen
cs.IR
Technical Report for RETA-LLM
null
cs.IR
20230608
20230608
[ { "id": "2210.02414" }, { "id": "2208.05753" } ]
2306.05424
7
Equally significant is the emergence of LLaVA [1], a model derived from the LLaMa architecture, leveraging GPT-4’s language proficiency to generate multimodal instruction-following data. With instruction tuning applied on the derived data, LLaVA has displayed interesting multimodal chat capability, hinting at the scalability potential of such a methodology. In addition, InstructBLIP [5] model has demonstrated strong image-based dialogue capabilities via vision-language instruction tuning by innovating with instruction-aware visual feature extraction. More closely related to our work, VideoChat [8] employs selective components of video foundational models [26] and image foundation models [4], and integrates them with LLMs [7] in conjunction ii \) Video-ChatGPT Response ~~ This video is taken in New York City, especially in the vicinity of the Statue of Liberty. The 2 \)- wpa Statue is shown in the background, and the video also shows the city skyline in the background. +) Video-ChatGPT rs a | Large Language Model (Vicuna, v1.1) © ~ System Command Linear Layer & User Query You are Video-ChatGPT, a . Where is this video large ‘ vision- Langauage Temporal Features Fe Spatial Features ‘taken from? model trained with video Zi L nstruction data. ) . Vid Spatial Pooling ~ Temporal Pooling Video Frames
2306.05424#7
Video-ChatGPT: Towards Detailed Video Understanding via Large Vision and Language Models
Conversation agents fueled by Large Language Models (LLMs) are providing a new way to interact with visual data. While there have been initial attempts for image-based conversation models, this work addresses the underexplored field of video-based conversation by introducing Video-ChatGPT. It is a multimodal model that merges a video-adapted visual encoder with a LLM. The model is capable of understanding and generating human-like conversations about videos. We introduce a new dataset of 100,000 video-instruction pairs used to train Video-ChatGPT acquired via manual and semi-automated pipeline that is easily scalable and robust to label noise. We also develop a quantiative evaluation framework for video-based dialogue models to objectively analyse the strengths and weaknesses of proposed models. Our code, models, instruction-sets and demo are released at https://github.com/mbzuai-oryx/Video-ChatGPT.
http://arxiv.org/pdf/2306.05424
Muhammad Maaz, Hanoona Rasheed, Salman Khan, Fahad Shahbaz Khan
cs.CV
null
null
cs.CV
20230608
20230608
[ { "id": "2103.07461" }, { "id": "2302.13971" }, { "id": "2109.08472" }, { "id": "2303.05657" }, { "id": "2212.00280" }, { "id": "2305.06355" }, { "id": "2206.08155" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2005.14165" }, { "id": "2305.16355" }, { "id": "2212.03191" }, { "id": "2205.01068" } ]
2306.05499
7
In conclusion, our contributions are as follows: A comprehensive investigation into the prompt injec- tion risks of real-world LLM-integrated applications. Our study has detected vulnerabilities to prompt injection attacks and identified key obstacles to their effectiveness. • A pioneering methodology for black-box prompt injec- tion attacks. Drawing from SQL injection and XSS attacks, we are the first to apply a systematic approach to prompt injection on LLM-integrated applications, accompanied by innovative generative strategies for boosting attack success rates. • Significant outcomes. We develop our methodology into a toolkit and assess it across 36 LLM-integrated applications. The toolkit exhibits a high success rate of 86.1% in purloin- ing the original prompt and/or utilizing the computational power across services, demonstrating significant potential impacts on millions of users and financial losses amounting to millions of US dollars. # 2 Background # 2.1 LLM-integrated Applications LLMs have expanded their scope, transcending the realm of impressive independent functions to integral components in a broad array of applications, thus offering a diverse spec- trum of services. These LLM-integrated applications affords users the convenience of dynamic responses produced by the underlying LLMs, thereby expediting and streamlining user interactions and augmenting their experience.
2306.05499#7
Prompt Injection attack against LLM-integrated Applications
Large Language Models (LLMs), renowned for their superior proficiency in language comprehension and generation, stimulate a vibrant ecosystem of applications around them. However, their extensive assimilation into various services introduces significant security risks. This study deconstructs the complexities and implications of prompt injection attacks on actual LLM-integrated applications. Initially, we conduct an exploratory analysis on ten commercial applications, highlighting the constraints of current attack strategies in practice. Prompted by these limitations, we subsequently formulate HouYi, a novel black-box prompt injection attack technique, which draws inspiration from traditional web injection attacks. HouYi is compartmentalized into three crucial elements: a seamlessly-incorporated pre-constructed prompt, an injection prompt inducing context partition, and a malicious payload designed to fulfill the attack objectives. Leveraging HouYi, we unveil previously unknown and severe attack outcomes, such as unrestricted arbitrary LLM usage and uncomplicated application prompt theft. We deploy HouYi on 36 actual LLM-integrated applications and discern 31 applications susceptible to prompt injection. 10 vendors have validated our discoveries, including Notion, which has the potential to impact millions of users. Our investigation illuminates both the possible risks of prompt injection attacks and the possible tactics for mitigation.
http://arxiv.org/pdf/2306.05499
Yi Liu, Gelei Deng, Yuekang Li, Kailong Wang, Tianwei Zhang, Yepang Liu, Haoyu Wang, Yan Zheng, Yang Liu
cs.CR, cs.AI, cs.CL, cs.SE
null
null
cs.CR
20230608
20230608
[]
2306.04926
8
# 1.2. Large Language Models Large language models (LLMs) are deep learning algorithms that can engage with linguistic and language components, such as text, for natural language processing and other artificial intelligence applications. LLMs learn from large datasets that typically include almost everything available on the internet, where algorithms define metrics of similarity and use those to group inputs. Following training, LLMs are then able to use the given knowledge to generate desired outputs. They can also be trained or fine-tuned with smaller batches of data for specific applications, such as biomedical research. One commonly known example of an LLM application is ChatGPT, which can perform natural language processing functions. Despite the potential applications of LLMs, some challenges of using LLMs for specific fields include domain constraints, dataset availability, and technical skillset of the developers9. Branching from current technologies, emerging areas of LLMs include developing models that can check their own outputs. A current shortcoming of existing generative language models is models’ tendencies to “hallucinate”, which occurs when LLMs present false or inaccurate information as facts. Potential solutions to this problem include having a model provide citations, allowing the model to access external information sources, or asking the model to identify aspects of the output that it feels are the weakest10.
2306.04926#8
covLLM: Large Language Models for COVID-19 Biomedical Literature
The COVID-19 pandemic led to 1.1 million deaths in the United States, despite the explosion of coronavirus research. These new findings are slow to translate to clinical interventions, leading to poorer patient outcomes and unnecessary deaths. One reason is that clinicians, overwhelmed by patients, struggle to keep pace with the rate of new coronavirus literature. A potential solution is developing a tool for evaluating coronavirus literature using large language models (LLMs) -- neural networks that are deployed for natural language processing. LLMs can be used to summarize and extract user-specified information. The greater availability and advancement of LLMs and pre-processed coronavirus literature databases provide the opportunity to assist clinicians in evaluating coronavirus literature through a coronavirus literature specific LLM (covLLM), a tool that directly takes an inputted research article and a user query to return an answer. Using the COVID-19 Open Research Dataset (CORD-19), we produced two datasets: (1) synCovid, which uses a combination of handwritten prompts and synthetic prompts generated using OpenAI, and (2) real abstracts, which contains abstract and title pairs. covLLM was trained with LLaMA 7B as a baseline model to produce three models trained on (1) the Alpaca and synCovid datasets, (2) the synCovid dataset, and (3) the synCovid and real abstract datasets. These models were evaluated by two human evaluators and ChatGPT. Results demonstrate that training covLLM on the synCovid and abstract pairs datasets performs competitively with ChatGPT and outperforms covLLM trained primarily using the Alpaca dataset.
http://arxiv.org/pdf/2306.04926
Yousuf A. Khan, Clarisse Hokia, Jennifer Xu, Ben Ehlert
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230608
20230608
[]
2306.05087
8
To ensure the reliability of PandaLM, we develop a test dataset that aligns with human preference and covers a wide range of tasks and contexts. The instructions and inputs of test data are sampled from the human evaluation dataset of self-instruct [19], with responses generated by different LLMs and each label independently provided by three different human evaluators. Samples with significant divergences are excluded to ensure the Inter Annotator Agreement (IAA) of each annotator remains larger than 0.85. PandaLM-7B demonstrates highly competitive performance, achieving 93.75% of GPT-3.5’s evaluation ability and 88.28% of GPT4’s in terms of F1-score on our diverse human- annotated test dataset.
2306.05087#8
PandaLM: An Automatic Evaluation Benchmark for LLM Instruction Tuning Optimization
Instruction tuning large language models (LLMs) remains a challenging task, owing to the complexity of hyperparameter selection and the difficulty involved in evaluating the tuned models. To determine the optimal hyperparameters, an automatic, robust, and reliable evaluation benchmark is essential. However, establishing such a benchmark is not a trivial task due to the challenges associated with evaluation accuracy and privacy protection. In response to these challenges, we introduce a judge large language model, named PandaLM, which is trained to distinguish the superior model given several LLMs. PandaLM's focus extends beyond just the objective correctness of responses, which is the main focus of traditional evaluation datasets. It addresses vital subjective factors such as relative conciseness, clarity, adherence to instructions, comprehensiveness, and formality. To ensure the reliability of PandaLM, we collect a diverse human-annotated test dataset, where all contexts are generated by humans and labels are aligned with human preferences. Our results indicate that PandaLM-7B achieves 93.75% of GPT-3.5's evaluation ability and 88.28% of GPT-4's in terms of F1-score on our test dataset. PandaLM enables the evaluation of LLM to be fairer but with less cost, evidenced by significant improvements achieved by models tuned through PandaLM compared to their counterparts trained with default Alpaca's hyperparameters. In addition, PandaLM does not depend on API-based evaluations, thus avoiding potential data leakage. All resources of PandaLM are released at https://github.com/WeOpenML/PandaLM.
http://arxiv.org/pdf/2306.05087
Yidong Wang, Zhuohao Yu, Zhengran Zeng, Linyi Yang, Cunxiang Wang, Hao Chen, Chaoya Jiang, Rui Xie, Jindong Wang, Xing Xie, Wei Ye, Shikun Zhang, Yue Zhang
cs.CL, cs.AI
null
null
cs.CL
20230608
20230608
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "1803.05457" }, { "id": "2305.10403" }, { "id": "1807.05118" }, { "id": "2211.05100" }, { "id": "2302.10198" }, { "id": "2205.01068" }, { "id": "2003.05689" }, { "id": "1806.03822" }, { "id": "1711.05101" }, { "id": "2304.03208" }, { "id": "2304.01373" }, { "id": "2303.14742" }, { "id": "2303.04673" }, { "id": "2212.10560" }, { "id": "2211.08073" }, { "id": "2210.02414" }, { "id": "2304.03277" }, { "id": "2002.06305" }, { "id": "2305.13412" }, { "id": "2304.01196" } ]
2306.05152
8
The operation of AutoGPT can be interpreted from the perspective of existing cognitive architecture frameworks. In fact, modelling the human mind has been a longstanding re- search interest, driven by both objectives of explaining human behaviors and devising artificial intelligent agents. Several influential architectures such as ACT-R [12], and SOAR [13] have been developed so far, and their core components contain associative memory structures linked with perception and actu- ation (“motor”) [14]. This bears resemblance with AutoGPT’s architecture: i.e., incorporating external tools to perceive new information (e.g., via search engine results) or perform an action (e.g., writing a Python script) may be viewed as building the perception and actuation modules into the architecture. On the other hand, LLMs can strengthen classical cognitive architectures by deriving plausible actions using the relevant memory and current state as prompting context.
2306.05152#8
Towards Autonomous Testing Agents via Conversational Large Language Models
Software testing is an important part of the development cycle, yet it requires specialized expertise and substantial developer effort to adequately test software. Recent discoveries of the capabilities of large language models (LLMs) suggest that they can be used as automated testing assistants, and thus provide helpful information and even drive the testing process. To highlight the potential of this technology, we present a taxonomy of LLM-based testing agents based on their level of autonomy, and describe how a greater level of autonomy can benefit developers in practice. An example use of LLMs as a testing assistant is provided to demonstrate how a conversational framework for testing can help developers. This also highlights how the often criticized hallucination of LLMs can be beneficial for testing. We identify other tangible benefits that LLM-driven testing agents can bestow, and also discuss potential limitations.
http://arxiv.org/pdf/2306.05152
Robert Feldt, Sungmin Kang, Juyeon Yoon, Shin Yoo
cs.SE
null
null
cs.SE
20230608
20230905
[ { "id": "2305.10601" }, { "id": "2303.17580" }, { "id": "2305.16291" }, { "id": "2201.09305" }, { "id": "2210.03629" }, { "id": "2211.10435" }, { "id": "2303.12712" }, { "id": "2302.03287" }, { "id": "2209.11515" } ]
2306.05171
8
C. Related work on task planning with LLM The emergence of Large Language Models (LLM) as a task-independent reasoning module provides a promising path to achieve universal robot planning capabilities. Large language models can utilize a wealth of knowledge learned from a large amount of text, but they may not necessarily be able to decompose high-level commands into low-level instructions suitable for robot execution. To make the language model adapt to the problem statement and give the expected output, it needs to decompose high-level commands into a sequence of usable low-level skills. Several recent works utilize the generative features of LLM by prompting them to generate long-term plans: [20] confines the LLM planner to a feasible set of actions, exploring the potential of language models applied to TAMP problems. Related work translates plans generated by LLM from natural language into code [21]. Utilizing LLM's ability to perform robot system planning without manually specifying the symbolic planning domain, the SayCan framework [22] their combines corresponding robot basic tasks into prompts, ProgPrompt [23] represents robot tasks as Pythonic programs, and then uses Pythonic code as prompts. Paper [24] uses a large language model to generate a three-layer behavior tree for robot task planning, demonstrating the feasibility of LLM generating structured content. Paper [25] proposed Text2Motion, based on previous works,, 2
2306.05171#8
Robot Task Planning Based on Large Language Model Representing Knowledge with Directed Graph Structures
Traditional robot task planning methods face challenges when dealing with highly unstructured environments and complex tasks. We propose a task planning method that combines human expertise with an LLM and have designed an LLM prompt template, Think_Net_Prompt, with stronger expressive power to represent structured professional knowledge. We further propose a method to progressively decompose tasks and generate a task tree to reduce the planning volume for each task, and we have designed a strategy to decouple robot task planning. By dividing different planning entities and separating the task from the actual machine binding process, the task planning process becomes more flexible. Research results show that our method performs well in handling specified code formats, understanding the relationship between tasks and subtasks, and extracting parameters from text descriptions. However, there are also problems such as limited complexity of task logic handling, ambiguity in the quantity of parts and the precise location of assembly. Improving the precision of task description and cognitive structure can bring certain improvements. https://github.com/NOMIzy/Think_Net_Prompt
http://arxiv.org/pdf/2306.05171
Yue Zhen, Sheng Bi, Lu Xing-tong, Pan Wei-qin, Shi Hai-peng, Chen Zi-rui, Fang Yi-shu
cs.RO, cs.AI
null
null
cs.RO
20230608
20230608
[ { "id": "2302.12927" }, { "id": "2212.06817" }, { "id": "2006.05398" }, { "id": "2209.05451" }, { "id": "2209.11302" }, { "id": "2210.12250" }, { "id": "2204.01691" }, { "id": "2201.07207" }, { "id": "2303.12153" } ]
2306.05212
8
# 2 RETA-LLM Framework As aforementioned, compared with Langchain, which is a common LLM-augmented toolkit, our RETA-LLM toolkit focuses specifically on retrieval-augmented LLMs. We provide five plug- in modules in RETA-LLM to interact with LLMs and IR systems. The modules include request rewriting, document retrieval, passage extraction, answer generation, and fact checking modules. The framework of our RETA-LLM is shown in Figure 1. The workflow of RETA-LLM is as follows: First, RETA-LLM uses the request rewriting module to revise the current user request to make it complete and clear. Because users can issue a se- ries of questions to the RETA-LLM, the semantics of the current user request may be incomplete. For example, A user may ask “How about the School of Economics?” while the historical request is “In- troduce the majors in School of Information”. In this case, the precise meaning of the user is “Intro- duce the majors in School of Economics”. Since LLMs have shown remarkable abilities in rewriting queries in conversational dense retrieval (Mao et al., 2023), we feed the current user request and the pre- vious conversation histories to LLMs to perform rewriting.
2306.05212#8
RETA-LLM: A Retrieval-Augmented Large Language Model Toolkit
Although Large Language Models (LLMs) have demonstrated extraordinary capabilities in many domains, they still have a tendency to hallucinate and generate fictitious responses to user requests. This problem can be alleviated by augmenting LLMs with information retrieval (IR) systems (also known as retrieval-augmented LLMs). Applying this strategy, LLMs can generate more factual texts in response to user input according to the relevant content retrieved by IR systems from external corpora as references. In addition, by incorporating external knowledge, retrieval-augmented LLMs can answer in-domain questions that cannot be answered by solely relying on the world knowledge stored in parameters. To support research in this area and facilitate the development of retrieval-augmented LLM systems, we develop RETA-LLM, a {RET}reival-{A}ugmented LLM toolkit. In RETA-LLM, we create a complete pipeline to help researchers and users build their customized in-domain LLM-based systems. Compared with previous retrieval-augmented LLM systems, RETA-LLM provides more plug-and-play modules to support better interaction between IR systems and LLMs, including {request rewriting, document retrieval, passage extraction, answer generation, and fact checking} modules. Our toolkit is publicly available at https://github.com/RUC-GSAI/YuLan-IR/tree/main/RETA-LLM.
http://arxiv.org/pdf/2306.05212
Jiongnan Liu, Jiajie Jin, Zihan Wang, Jiehan Cheng, Zhicheng Dou, Ji-Rong Wen
cs.IR
Technical Report for RETA-LLM
null
cs.IR
20230608
20230608
[ { "id": "2210.02414" }, { "id": "2208.05753" } ]
2306.05301
8
the interactions between the model, users, and the APIs of the tools by leveraging LLMs to serve as different kinds of agents. In this way, our simulation environment can gener- ate a substantial volume of tool-use instances without any manual intervention. Consequently, we have crafted an in- clusive tool-use dataset that comprises 3938 instances, ef- fectively showcasing the practical application of over 400 distinct tools. To verify whether our corpus can empower compact lan- guage models with the generalized tool-use ability, we con- duct experiments to train ToolAlpaca model on Vicuna (Chi- ang et al. 2023), a representative compact language model, and subsequently evaluate its performance on various un- seen tools. Through machine evaluation with GPT-4, we find that ToolAlpaca can effectively equip numerous unseen tools, ranging from real-world APIs to multi-modal tools, and it exhibits competitive performance with GPT-3.5. Fur- thermore, we investigate the effect of diversity. It is observed that even with the same number of instances, the model trained on more varied toolsets will achieve better perfor- mance. This underscores that diversity is a pivotal factor for ToolAlpaca to generalize tool learning with 3000 simulated cases. In summary, the main contributions of this paper are:
2306.05301#8
ToolAlpaca: Generalized Tool Learning for Language Models with 3000 Simulated Cases
Enabling large language models to utilize real-world tools effectively is crucial for achieving embodied intelligence. Existing approaches to tool learning have either primarily relied on extremely large language models, such as GPT-4, to attain generalized tool-use abilities in a zero-shot manner, or utilized supervised learning to train limited scopes of tools on compact models. However, it remains uncertain whether smaller language models can achieve generalized tool-use abilities without tool-specific training. To address this question, this paper introduces ToolAlpaca, a novel framework designed to automatically generate a diverse tool-use corpus and learn generalized tool-use abilities on compact language models with minimal human intervention. Specifically, ToolAlpaca first automatically creates a highly diversified tool-use corpus by building a multi-agent simulation environment. The corpus contains 3938 tool-use instances from more than 400 real-world tool APIs spanning 50 distinct categories. Subsequently, the constructed corpus is employed to fine-tune compact language models, resulting in two models, namely ToolAlpaca-7B and ToolAlpaca-13B, respectively. Finally, we evaluate the ability of these models to utilize previously unseen tools without specific training. Experimental results demonstrate that ToolAlpaca achieves effective generalized tool-use capabilities comparable to those of extremely large language models like GPT-3.5, demonstrating that learning generalized tool-use ability is feasible for compact language models.
http://arxiv.org/pdf/2306.05301
Qiaoyu Tang, Ziliang Deng, Hongyu Lin, Xianpei Han, Qiao Liang, Boxi Cao, Le Sun
cs.CL
null
null
cs.CL
20230608
20230907
[ { "id": "2305.16504" }, { "id": "2305.13691" }, { "id": "2304.08244" }, { "id": "2303.08774" }, { "id": "2211.08264" }, { "id": "2304.08354" }, { "id": "2305.18752" }, { "id": "2212.14024" }, { "id": "2211.10435" }, { "id": "2210.03629" }, { "id": "2212.09689" }, { "id": "2306.06624" }, { "id": "2212.10560" }, { "id": "2211.12588" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2307.16789" }, { "id": "2304.09842" }, { "id": "2305.11206" }, { "id": "2302.07842" } ]
2306.05424
8
Figure 1: Architecture of Video-ChatGPT. Video-ChatGPT leverages the CLIP-L/14 visual encoder to extract both spatial and temporal video features. This is accomplished by averaging frame-level features across temporal and spatial dimensions respectively. The computed spatiotemporal features are then fed into a learnable linear layer, which projects them into the LLMs input space. In our approach, we utilize the Vicuna-v1.1 model, comprised of 7B parameters, and initialize it with weights from LLaVA [1]. with few learnable layers, tuned using a two-stage lightweight training. Additionally, they construct a video-specific dataset using off-the-shelf vision-language models [27, 4, 28, 26] for generating noisy detailed textual descriptions to enhance the training of video-centric conversational models. Different from VideoChat, we propose a novel human assisted and semi-automatic annotation framework for generation high quality instruction data for videos (see Sec. 4). Our simple and scalable architecture design utilizes pretrained CLIP [6] to generate spatiotemporal features which help Video-ChatGPT in generating meaningful video conversation. Further, we are the first to propose quantitative framework for evaluating video conversation tasks (see Sec. 4). # 3 Video-ChatGPT
2306.05424#8
Video-ChatGPT: Towards Detailed Video Understanding via Large Vision and Language Models
Conversation agents fueled by Large Language Models (LLMs) are providing a new way to interact with visual data. While there have been initial attempts for image-based conversation models, this work addresses the underexplored field of video-based conversation by introducing Video-ChatGPT. It is a multimodal model that merges a video-adapted visual encoder with a LLM. The model is capable of understanding and generating human-like conversations about videos. We introduce a new dataset of 100,000 video-instruction pairs used to train Video-ChatGPT acquired via manual and semi-automated pipeline that is easily scalable and robust to label noise. We also develop a quantiative evaluation framework for video-based dialogue models to objectively analyse the strengths and weaknesses of proposed models. Our code, models, instruction-sets and demo are released at https://github.com/mbzuai-oryx/Video-ChatGPT.
http://arxiv.org/pdf/2306.05424
Muhammad Maaz, Hanoona Rasheed, Salman Khan, Fahad Shahbaz Khan
cs.CV
null
null
cs.CV
20230608
20230608
[ { "id": "2103.07461" }, { "id": "2302.13971" }, { "id": "2109.08472" }, { "id": "2303.05657" }, { "id": "2212.00280" }, { "id": "2305.06355" }, { "id": "2206.08155" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2005.14165" }, { "id": "2305.16355" }, { "id": "2212.03191" }, { "id": "2205.01068" } ]
2306.05499
8
The architecture of an LLM-integrated application is il- lustrated in the top part of Figure 1. The service provider typically creates an assortment of predefined prompts tailored to their specific needs (e.g., “Answer the following question as a kind assistant: <PLACE_HOLDER>”). The design pro- cedure meticulously takes into account how user inputs will be integrated with these prompts (for instance, the user’s ques- tion is placed into the placeholder), culminating in a combined prompt. When this combined prompt is fed to the LLM, it effectively generates output corresponding to the designated task. The output may undergo further processing by the appli- cation. This could trigger additional actions or services on the user’s behalf, such as invoking external APIs. Ultimately, the final output is presented to the user. This robust architecture Fe Benign user prompt Normal User Malicious User i f= Combined prompts Maw Predefined er prompt 4 iv er prompt question as a” PLACE, HOLDER>./ print "hello world" Figure 1: An LLM-integrated application with normal usage (top) and prompt injection (bottom). underpins a seamless and interactive user experience, foster- ing a dynamic exchange of information and services between the user and the LLM-integrated application. # 2.2 Prompt Injection
2306.05499#8
Prompt Injection attack against LLM-integrated Applications
Large Language Models (LLMs), renowned for their superior proficiency in language comprehension and generation, stimulate a vibrant ecosystem of applications around them. However, their extensive assimilation into various services introduces significant security risks. This study deconstructs the complexities and implications of prompt injection attacks on actual LLM-integrated applications. Initially, we conduct an exploratory analysis on ten commercial applications, highlighting the constraints of current attack strategies in practice. Prompted by these limitations, we subsequently formulate HouYi, a novel black-box prompt injection attack technique, which draws inspiration from traditional web injection attacks. HouYi is compartmentalized into three crucial elements: a seamlessly-incorporated pre-constructed prompt, an injection prompt inducing context partition, and a malicious payload designed to fulfill the attack objectives. Leveraging HouYi, we unveil previously unknown and severe attack outcomes, such as unrestricted arbitrary LLM usage and uncomplicated application prompt theft. We deploy HouYi on 36 actual LLM-integrated applications and discern 31 applications susceptible to prompt injection. 10 vendors have validated our discoveries, including Notion, which has the potential to impact millions of users. Our investigation illuminates both the possible risks of prompt injection attacks and the possible tactics for mitigation.
http://arxiv.org/pdf/2306.05499
Yi Liu, Gelei Deng, Yuekang Li, Kailong Wang, Tianwei Zhang, Yepang Liu, Haoyu Wang, Yan Zheng, Yang Liu
cs.CR, cs.AI, cs.CL, cs.SE
null
null
cs.CR
20230608
20230608
[]
2306.04926
9
In addition to the development of powerful large language models, several LLMs have also been made available to researchers. For example, in February 2023, Meta publicly released Large Language Model Meta AI (LLaMA), an LLM that works with less compute and resources and provides researchers access to studying and fine-tuning LLMs. This allows researchers to further understand how LLMs work, to improve LLMs, and to reduce issues like bias and misinformation11. One application of LLaMA is Stanford Alpaca, a fine-tuned model that can behave similarly to OpenAI’s text-davinci-003 and follow instructions. However, due to ethical issues, safety concerns, and companies’ policies, both are only available for academic research, and LLaMA is released under a non-commercial license12. Opportunities that make the creation of a COVID-19-specific LLM are the availability of biomedical literature datasets and machine learning applications that process COVID-19 literature. For instance, in June 2020, the Machine Learning Google Developer Experts group (ML GDEs) released the first version of the Biomedical Research Extensive Archive To Help Everyone (BREATHE), which is a large-scale database with over 16 million biomedical articles from different repositories and hosted on Google BigQuery. This publicly accessible database contains titles, abstracts, and some full body texts and allows biomedical researchers
2306.04926#9
covLLM: Large Language Models for COVID-19 Biomedical Literature
The COVID-19 pandemic led to 1.1 million deaths in the United States, despite the explosion of coronavirus research. These new findings are slow to translate to clinical interventions, leading to poorer patient outcomes and unnecessary deaths. One reason is that clinicians, overwhelmed by patients, struggle to keep pace with the rate of new coronavirus literature. A potential solution is developing a tool for evaluating coronavirus literature using large language models (LLMs) -- neural networks that are deployed for natural language processing. LLMs can be used to summarize and extract user-specified information. The greater availability and advancement of LLMs and pre-processed coronavirus literature databases provide the opportunity to assist clinicians in evaluating coronavirus literature through a coronavirus literature specific LLM (covLLM), a tool that directly takes an inputted research article and a user query to return an answer. Using the COVID-19 Open Research Dataset (CORD-19), we produced two datasets: (1) synCovid, which uses a combination of handwritten prompts and synthetic prompts generated using OpenAI, and (2) real abstracts, which contains abstract and title pairs. covLLM was trained with LLaMA 7B as a baseline model to produce three models trained on (1) the Alpaca and synCovid datasets, (2) the synCovid dataset, and (3) the synCovid and real abstract datasets. These models were evaluated by two human evaluators and ChatGPT. Results demonstrate that training covLLM on the synCovid and abstract pairs datasets performs competitively with ChatGPT and outperforms covLLM trained primarily using the Alpaca dataset.
http://arxiv.org/pdf/2306.04926
Yousuf A. Khan, Clarisse Hokia, Jennifer Xu, Ben Ehlert
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230608
20230608
[]
2306.05087
9
Moreover, as illustrated in Figure 1, adopting PandaLM’s selected optimal hyperparameters covering optimizer selection, learning rate, number of training epochs, and learning rate scheduler brings noteworthy improvements. When assessed using GPT-4 with a set of 170 instructions, a group of five open language models, tuned with optimal hyperparameters selected by PandaLM, achieves an average of 47.0 superior responses and 26.2 inferior responses, outperforming those trained using Alpaca’s hyperparameters. Note that the training data remains the same for conducting fair comparisons. Moreover, when these LLMs are evaluated by human experts, using the same set of 170 instructions, they exhibit an average of 79.8 superior responses and 25.2 inferior responses, once again surpassing the performance of models trained with Alpaca’s hyperparameters. The experimental results underline the effectiveness of PandaLM in determining optimal hyperparameters for choosing the best LLMs. In addition, when the fine-tuned LLMs are assessed using tasks from the lm-eval[24], 2 50. mm Win mm Lose 240 330 20 10 ©" Tlama bloom cerebras_ opt — pythia as mm Win mm Lose 240 330 20 10 ©" Tlama bloom cerebras_ opt — pythia 80 mm Win mm Lose 260 B40. 20 ©" Tlama bloom cerebras_ opt — pythia
2306.05087#9
PandaLM: An Automatic Evaluation Benchmark for LLM Instruction Tuning Optimization
Instruction tuning large language models (LLMs) remains a challenging task, owing to the complexity of hyperparameter selection and the difficulty involved in evaluating the tuned models. To determine the optimal hyperparameters, an automatic, robust, and reliable evaluation benchmark is essential. However, establishing such a benchmark is not a trivial task due to the challenges associated with evaluation accuracy and privacy protection. In response to these challenges, we introduce a judge large language model, named PandaLM, which is trained to distinguish the superior model given several LLMs. PandaLM's focus extends beyond just the objective correctness of responses, which is the main focus of traditional evaluation datasets. It addresses vital subjective factors such as relative conciseness, clarity, adherence to instructions, comprehensiveness, and formality. To ensure the reliability of PandaLM, we collect a diverse human-annotated test dataset, where all contexts are generated by humans and labels are aligned with human preferences. Our results indicate that PandaLM-7B achieves 93.75% of GPT-3.5's evaluation ability and 88.28% of GPT-4's in terms of F1-score on our test dataset. PandaLM enables the evaluation of LLM to be fairer but with less cost, evidenced by significant improvements achieved by models tuned through PandaLM compared to their counterparts trained with default Alpaca's hyperparameters. In addition, PandaLM does not depend on API-based evaluations, thus avoiding potential data leakage. All resources of PandaLM are released at https://github.com/WeOpenML/PandaLM.
http://arxiv.org/pdf/2306.05087
Yidong Wang, Zhuohao Yu, Zhengran Zeng, Linyi Yang, Cunxiang Wang, Hao Chen, Chaoya Jiang, Rui Xie, Jindong Wang, Xing Xie, Wei Ye, Shikun Zhang, Yue Zhang
cs.CL, cs.AI
null
null
cs.CL
20230608
20230608
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "1803.05457" }, { "id": "2305.10403" }, { "id": "1807.05118" }, { "id": "2211.05100" }, { "id": "2302.10198" }, { "id": "2205.01068" }, { "id": "2003.05689" }, { "id": "1806.03822" }, { "id": "1711.05101" }, { "id": "2304.03208" }, { "id": "2304.01373" }, { "id": "2303.14742" }, { "id": "2303.04673" }, { "id": "2212.10560" }, { "id": "2211.08073" }, { "id": "2210.02414" }, { "id": "2304.03277" }, { "id": "2002.06305" }, { "id": "2305.13412" }, { "id": "2304.01196" } ]