Fields per record: doi, chunk-id, chunk, id, title, summary, source, authors, categories, comment, journal_ref, primary_category, published, updated, references
2306.05949
149
[174] R. Parasuraman and V. Riley. Humans and Automation: Use, Misuse, Disuse, Abuse. Human Factors, 39(2):230–253, June 1997. ISSN 0018-7208. doi: 10.1518/001872097778543886. URL https://doi.org/10.1518/001872097778543886. Publisher: SAGE Publications Inc. [175] A. Parrish, A. Chen, N. Nangia, V. Padmakumar, J. Phang, J. Thompson, P. M. Htut, and S. Bowman. BBQ: A hand-built bias benchmark for question answering. In Findings of the Association for Computational Linguistics: ACL 2022, pages 2086–2105. Association for Computational Linguistics, 2022. doi: 10.18653/v1/2022.findings-acl.165. URL https://aclanthology.org/2022.findings-acl.165. [176] Partnership on AI. ABOUT ML Resources Library. URL https://partnershiponai.org/about-ml-resources-library/.
2306.05949#149
Evaluating the Social Impact of Generative AI Systems in Systems and Society
Generative AI systems across modalities, ranging from text, image, audio, and video, have broad social impacts, but there exists no official standard for means of evaluating those impacts and which impacts should be evaluated. We move toward a standard approach in evaluating a generative AI system for any modality, in two overarching categories: what is able to be evaluated in a base system that has no predetermined application and what is able to be evaluated in society. We describe specific social impact categories and how to approach and conduct evaluations in the base technical system, then in people and society. Our framework for a base system defines seven categories of social impact: bias, stereotypes, and representational harms; cultural values and sensitive content; disparate performance; privacy and data protection; financial costs; environmental costs; and data and content moderation labor costs. Suggested methods for evaluation apply to all modalities and analyses of the limitations of existing evaluations serve as a starting point for necessary investment in future evaluations. We offer five overarching categories for what is able to be evaluated in society, each with their own subcategories: trustworthiness and autonomy; inequality, marginalization, and violence; concentration of authority; labor and creativity; and ecosystem and environment. Each subcategory includes recommendations for mitigating harm. We are concurrently crafting an evaluation repository for the AI research community to contribute existing evaluations along the given categories. This version will be updated following a CRAFT session at ACM FAccT 2023.
http://arxiv.org/pdf/2306.05949
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
cs.CY, cs.AI
null
null
cs.CY
20230609
20230612
[ { "id": "2007.04068" }, { "id": "2305.09800" }, { "id": "1908.09203" }, { "id": "2202.05520" }, { "id": "2302.10329" }, { "id": "2107.03374" }, { "id": "2210.06245" }, { "id": "2211.02001" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2301.10226" }, { "id": "2202.02647" }, { "id": "2112.10752" }, { "id": "2206.04615" }, { "id": "2202.00885" }, { "id": "2010.15581" }, { "id": "2305.09941" }, { "id": "2301.04246" }, { "id": "2304.12298" }, { "id": "2203.09509" }, { "id": "2207.14157" }, { "id": "2102.09692" }, { "id": "1804.10999" }, { "id": "2303.11156" }, { "id": "2104.06390" }, { "id": "2002.05651" } ]
2306.05949
150
[177] Investigating the emotional appeal of fake news using artificial intelligence and human contributions. Journal of Product & Brand Management, 29, May 2019. doi: 10.1108/JPBM-12-2018-2179. [178] S. Passi and M. Vorvoreanu. Overreliance on AI: Literature Review. Technical Report MSR-TR-2022-12, Microsoft, June 2022. URL https://www.microsoft.com/en-us/research/publication/overreliance-on-ai-literature-review/. [179] G. Pennycook, T. D. Cannon, and D. G. Rand. Prior exposure increases perceived accuracy of fake news. Journal of Experimental Psychology: General, 147(12):1865–1880, Dec. 2018. ISSN 1939-2222. doi: 10.1037/xge0000465.
2306.05949#150
2306.06283
150
B. sMolTalk. Figure 9 shows the sMolTalk interface: based on few-shot prompting, LLMs can create code for visualization tools such as 3dmol.js. Since the advent of 3D visualization methods, chemists have used computers to display molecules of interest to better understand their underlying structure and properties. Nevertheless, many chemists lack the coding skills needed to use and customize such visualizations; depending on the package and its documentation, they may spend hours to days learning the details of a specific visualization tool. We developed a natural-language interface that generates code for 3dmol.js, an open-source JavaScript visualization library [47], meaning the visualizations run in a web browser (Figure 9). The user input is fed into the ChatGPT API, using the GPT-3.5-turbo model. We use in-context learning (few-shot prompting), giving several examples of user input paired with the expected JavaScript code that manipulates the 3dmol.js viewer. Before the user submits further commands, we update the prompt with the current state of the viewer.
2306.06283#150
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
Large-language models (LLMs) such as GPT-4 caught the interest of many scientists. Recent studies suggested that these models could be useful in chemistry and materials science. To explore these possibilities, we organized a hackathon. This article chronicles the projects built as part of this hackathon. Participants employed LLMs for various applications, including predicting properties of molecules and materials, designing novel interfaces for tools, extracting knowledge from unstructured data, and developing new educational applications. The diverse topics and the fact that working prototypes could be generated in less than two days highlight that LLMs will profoundly impact the future of our fields. The rich collection of ideas and projects also indicates that the applications of LLMs are not limited to materials science and chemistry but offer potential benefits to a wide range of scientific disciplines.
http://arxiv.org/pdf/2306.06283
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
null
null
cond-mat.mtrl-sci
20230609
20230714
[ { "id": "2209.08203" }, { "id": "2212.04450" } ]
2306.05949
151
[180] G. Pennycook, Z. Epstein, M. Mosleh, A. A. Arechar, D. Eckles, and D. G. Rand. Shifting attention to accuracy can reduce misinformation online. Nature, 592(7855):590–595, Apr. 2021. ISSN 1476-4687. doi: 10.1038/s41586-021-03344-2. URL https://www.nature.com/articles/s41586-021-03344-2. Number: 7855 Publisher: Nature Publishing Group. [181] B. Perrigo. Inside Facebook’s African Sweatshop, 2022. URL https://time.com/6147458/facebook-africa-content-moderation-employee-treatment/. [182] Perspective API. Perspective API. URL https://www.perspectiveapi.com/#/home. [183] A. Piktus, C. Akiki, P. Villegas, H. Laurençon, G. Dupont, A. S. Luccioni, Y. Jernite, and A. Rogers. The ROOTS Search Tool: Data Transparency for LLMs, 2023-02-27. URL http://arxiv.org/abs/2302.14035.
2306.05949#151
2306.06283
151
The current implementation might grow into a one-stop solution for visualizing and retrieving properties of molecules, which would accelerate chemists’ workflows for querying information about molecules. Furthermore, if an LLM is able to control structural software, it might become possible to perform reasoning on the molecular structure itself. For instance, in drug discovery, one may ask which functional group of a ligand needs to be changed to increase binding affinity to a protein. Another example involves proteins: which amino acid residues could be mutated to cysteines in order to create new disulfide bonds between chains. This would presumably require specific fine-tuning and equipping the LLM with more tools. The approach of generating code for structural reasoning is similar, but will most likely require a different set of tools developed specifically for protein-structure manipulation (such as PyMOL [48] or MolStar [49]), together with a set of highly accurate tools for binding-affinity prediction or protein folding. The major problem we encountered is prompt leakage, where examples from in-context learning would leak into the actual LLM output.
2306.06283#151
2306.05949
152
[184] V. Polonski. AI trust and AI fears: A media debate that could divide society. Jan. URL ai-trust-and-ai-fears-a-media-debate-that-could-divide-society. [185] L. Pozzobon, B. Ermis, P. Lewis, and S. Hooker. On the Challenges of Using Black-Box APIs for Toxicity Evaluation in Research, 2023-04-24. URL http://arxiv.org/abs/2304.12397. [186] V. Prabhakaran, Z. Waseem, S. Akiwowo, and B. Vidgen. Online Abuse and Human Rights: WOAH Satellite Session at RightsCon 2020. In Proceedings of the Fourth Workshop on Online Abuse and Harms, pages 1–6, Online, Nov. 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.alw-1.1. URL https://aclanthology.org/2020.alw-1.1.
2306.05949#152
2306.06283
152
For the best evaluation, it is best to have as few, and as varied, examples as possible. Moreover, although OpenAI’s GPT models can sometimes correctly recall Protein Data Bank (PDB) IDs of proteins or Chemical Abstracts Service (CAS) numbers of compounds, this recall is not reliable, so tooling the models with API calls to PubChem or the PDB is much more robust. We are currently developing an agent based on the ReAct approach [50] equipped with these APIs so that correct structures are always retrieved (i.e., the LLM does not need to remember all such IDs internally). This framework would
2306.06283#152
2306.05949
153
[187] V. Prabhakaran, A. Mostafazadeh Davani, and M. Diaz. On Releasing Annotator-Level Labels and Information in Datasets. In Proceedings of The Joint 15th Linguistic Annotation Workshop (LAW) and 3rd Designing Meaning Representations (DMR) Workshop, pages 133–138. Association for Computational Linguistics, 2021. doi: 10.18653/v1/2021.law-1.14. URL https://aclanthology.org/2021.law-1.14. [188] J. Prassl and M. Risak. The Legal Protection of Crowdworkers: Four Avenues for Workers’ Rights in the Virtual Realm. In P. Meil and V. Kirov, editors, Policy Implications of Virtual Work, Dynamics of Virtual Work, pages 273–295. Springer International Publishing, Cham, 2017. ISBN 978-3-319-52057-5. doi: 10.1007/978-3-319-52057-5_11. URL https://doi.org/10.1007/978-3-319-52057-5_11.
2306.05949#153
2306.06283
153
then help us iteratively add tools to the agent, creating a chatbot one can query about any molecule of interest, including the structural-reasoning task mentioned above. Lastly, we hypothesize that the generation of 3dmol.js code could be improved by self-instruct fine-tuning: an external LLM with access to the documentation would create a dataset that could then be used for fine-tuning. The same approach might be used to generate code for any other type of software, not just visualization packages; such an LLM could therefore control molecular dynamics software such as LAMMPS [51] or GROMACS [52]. One-sentence summaries: a. Problem/Task: Making bioinformatics tools, in particular the visualization software 3dmol.js, accessible to non-experts. b. Approach: A chat interface that prompts an LLM to produce 3dmol.js commands, which are then passed to the software. c. Results and Impact: Without consulting the documentation, the LLM can generate code that often successfully performs the requested actions, demonstrating that LLMs might make tools more accessible by exposing complex interfaces via natural language. d. Challenges and Future Work: The approach suffers from robustness issues, which might be overcome by conditioning on a knowledge base, more advanced decoding strategies, or error handling (e.g., giving the agent access to error messages and logs and prompting it to reflect).
2306.06283#153
2306.05949
154
[189] S. Quach, P. Thaichon, K. D. Martin, S. Weaven, and R. W. Palmatier. Digital technologies: tensions in privacy and data. Journal of the Academy of Marketing Science, 50(6):1299–1323, Nov. 2022. ISSN 1552-7824. doi: 10.1007/s11747-022-00845-y. URL https://doi.org/10.1007/s11747-022-00845-y. [190] I. D. Raji, A. Smart, R. N. White, M. Mitchell, T. Gebru, B. Hutchinson, J. Smith-Loud, D. Theron, and P. Barnes. Closing the AI accountability gap: Defining an end-to-end framework for internal algorithmic auditing. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pages 33–44. ACM, 2020-01-27. ISBN 978-1-4503-6936-7. doi: 10.1145/3351095.3372873. URL https://dl.acm.org/doi/10.1145/3351095.3372873.
2306.05949#154
2306.05949
155
[191] I. D. Raji, E. M. Bender, A. Paullada, E. Denton, and A. Hanna. AI and the Everything in the Whole Wide World Benchmark. In Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks. Curran, 2021. URL https://datasets-benchmarks-proceedings.neurips.cc/paper_files/paper/2021/file/084b6fbb10729ed4da8c3d3f5a3ae7c9-Paper-round2.pdf. [192] A. Rapp, L. Curti, and A. Boldi. The human side of human-chatbot interaction: A systematic literature review of ten years of research on text-based chatbots. International Journal of Human-Computer Studies, 151:102630, July 2021. ISSN 1071-5819. doi: 10.1016/j.ijhcs.2021.102630. URL https://www.sciencedirect.com/science/article/pii/S1071581921000483.
2306.05949#155
Evaluating the Social Impact of Generative AI Systems in Systems and Society
Generative AI systems across modalities, ranging from text, image, audio, and video, have broad social impacts, but there exists no official standard for means of evaluating those impacts and which impacts should be evaluated. We move toward a standard approach in evaluating a generative AI system for any modality, in two overarching categories: what is able to be evaluated in a base system that has no predetermined application and what is able to be evaluated in society. We describe specific social impact categories and how to approach and conduct evaluations in the base technical system, then in people and society. Our framework for a base system defines seven categories of social impact: bias, stereotypes, and representational harms; cultural values and sensitive content; disparate performance; privacy and data protection; financial costs; environmental costs; and data and content moderation labor costs. Suggested methods for evaluation apply to all modalities and analyses of the limitations of existing evaluations serve as a starting point for necessary investment in future evaluations. We offer five overarching categories for what is able to be evaluated in society, each with their own subcategories: trustworthiness and autonomy; inequality, marginalization, and violence; concentration of authority; labor and creativity; and ecosystem and environment. Each subcategory includes recommendations for mitigating harm. We are concurrently crafting an evaluation repository for the AI research community to contribute existing evaluations along the given categories. This version will be updated following a CRAFT session at ACM FAccT 2023.
http://arxiv.org/pdf/2306.05949
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
cs.CY, cs.AI
null
null
cs.CY
20230609
20230612
[ { "id": "2007.04068" }, { "id": "2305.09800" }, { "id": "1908.09203" }, { "id": "2202.05520" }, { "id": "2302.10329" }, { "id": "2107.03374" }, { "id": "2210.06245" }, { "id": "2211.02001" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2301.10226" }, { "id": "2202.02647" }, { "id": "2112.10752" }, { "id": "2206.04615" }, { "id": "2202.00885" }, { "id": "2010.15581" }, { "id": "2305.09941" }, { "id": "2301.04246" }, { "id": "2304.12298" }, { "id": "2203.09509" }, { "id": "2207.14157" }, { "id": "2102.09692" }, { "id": "1804.10999" }, { "id": "2303.11156" }, { "id": "2104.06390" }, { "id": "2002.05651" } ]
2306.06283
155
Laboratory data management systems such as ELNs and laboratory information systems (LIMS) can record rich data about experimental samples and the experiments that have been performed on them. The user interfaces for these systems typically allow researchers to explore their samples with various pre-programmed visualizations. However, it is difficult for a traditional user interface to have enough flexibility to capture the richness and diversity of real, interconnected, experimental data. Here, our aim was to investigate whether an LLM could process digital laboratory data and serve as a flexible, conversational interface to a data management system. As a proof-of-concept for this approach, we introduced an automated data management assistant into the open-sourced "Datalab" software, which we have been developing to address needs in experimental materials chemistry (https://github.com/the-grey-group/datalab). Datalab can be used to capture data and metadata about materials chemistry samples, their synthesis constituents, and the resulting devices. The system tracks each sample's provenance through the research lifecycle and makes this data available via a JavaScript object notation (JSON) API that can be explored programmatically or with a Vue.js-based web user interface (UI). During the hackathon, we embedded a prompted LLM into the UI and gave it access to raw responses from datalab's JSON API. We used the most recent publicly available models from OpenAI (gpt-3.5-turbo) via their Python API. We found that the LLM does a very good job at interpreting JSON
2306.06283#155
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
Large-language models (LLMs) such as GPT-4 caught the interest of many scientists. Recent studies suggested that these models could be useful in chemistry and materials science. To explore these possibilities, we organized a hackathon. This article chronicles the projects built as part of this hackathon. Participants employed LLMs for various applications, including predicting properties of molecules and materials, designing novel interfaces for tools, extracting knowledge from unstructured data, and developing new educational applications. The diverse topics and the fact that working prototypes could be generated in less than two days highlight that LLMs will profoundly impact the future of our fields. The rich collection of ideas and projects also indicates that the applications of LLMs are not limited to materials science and chemistry but offer potential benefits to a wide range of scientific disciplines.
http://arxiv.org/pdf/2306.06283
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
null
null
cond-mat.mtrl-sci
20230609
20230714
[ { "id": "2209.08203" }, { "id": "2212.04450" } ]
2306.05949
156
[193] M. Rauh, J. Mellor, J. Uesato, P.-S. Huang, J. Welbl, L. Weidinger, S. Dathathri, A. Glaese, G. Irving, I. Gabriel, W. Isaac, and L. A. Hendricks. Characteristics of harmful text: Towards rigorous benchmarking of language models, 2022. [194] T. Ray. Common but Different Futures: AI Inequity and Climate Change. URL https://www.orfonline.org/research/common-but-different-futures/. [195] Read. What is the difference between Scope 1, 2 and 3 emissions, and what are companies doing to cut all three?, Sept. 2022. URL https://www.weforum.org/agenda/2022/09/scope-emissions-climate-greenhouse-business/. [196] Republic of Korea. Input by the Government of the Republic of Korea on the Themes of an Expert Consultation on the Practical Application of the United Nations Guiding Principles on Business and Human Rights to the Activities of Technology Companies, 2022. [197] F. Richter. English Is the Internet's Universal Language, Feb. 2022. URL https://www.statista.com/chart/26884/languages-on-the-internet.
2306.05949#156
Evaluating the Social Impact of Generative AI Systems in Systems and Society
Generative AI systems across modalities, ranging from text, image, audio, and video, have broad social impacts, but there exists no official standard for means of evaluating those impacts and which impacts should be evaluated. We move toward a standard approach in evaluating a generative AI system for any modality, in two overarching categories: what is able to be evaluated in a base system that has no predetermined application and what is able to be evaluated in society. We describe specific social impact categories and how to approach and conduct evaluations in the base technical system, then in people and society. Our framework for a base system defines seven categories of social impact: bias, stereotypes, and representational harms; cultural values and sensitive content; disparate performance; privacy and data protection; financial costs; environmental costs; and data and content moderation labor costs. Suggested methods for evaluation apply to all modalities and analyses of the limitations of existing evaluations serve as a starting point for necessary investment in future evaluations. We offer five overarching categories for what is able to be evaluated in society, each with their own subcategories: trustworthiness and autonomy; inequality, marginalization, and violence; concentration of authority; labor and creativity; and ecosystem and environment. Each subcategory includes recommendations for mitigating harm. We are concurrently crafting an evaluation repository for the AI research community to contribute existing evaluations along the given categories. This version will be updated following a CRAFT session at ACM FAccT 2023.
http://arxiv.org/pdf/2306.05949
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
cs.CY, cs.AI
null
null
cs.CY
20230609
20230612
[ { "id": "2007.04068" }, { "id": "2305.09800" }, { "id": "1908.09203" }, { "id": "2202.05520" }, { "id": "2302.10329" }, { "id": "2107.03374" }, { "id": "2210.06245" }, { "id": "2211.02001" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2301.10226" }, { "id": "2202.02647" }, { "id": "2112.10752" }, { "id": "2206.04615" }, { "id": "2202.00885" }, { "id": "2010.15581" }, { "id": "2305.09941" }, { "id": "2301.04246" }, { "id": "2304.12298" }, { "id": "2203.09509" }, { "id": "2207.14157" }, { "id": "2102.09692" }, { "id": "1804.10999" }, { "id": "2303.11156" }, { "id": "2104.06390" }, { "id": "2002.05651" } ]
2306.06283
156
…metadata about materials chemistry samples, their synthesis constituents, and the resulting devices. The system tracks each sample's provenance through the research lifecycle and makes this data available via a JavaScript object notation (JSON) API that can be explored programmatically or with a Vue.js-based web user interface (UI). During the hackathon, we embedded a prompted LLM into the UI and gave it access to raw responses from datalab's JSON API. We used the most recent publicly available models from OpenAI (gpt-3.5-turbo) via their Python API. We found that the LLM does a very good job at interpreting JSON and inferring the meaning of fields (including linked data) and can:
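The mechanism described here — handing the model a raw JSON API response together with a task prompt — can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the actual Datalab implementation: the `sample_record` fields and the system-prompt wording are invented for the example.

```python
import json

# Hypothetical sample record, shaped loosely like a JSON API response
# from a lab data management system (all field names invented here).
sample_record = {
    "item_id": "sample-001",
    "chemform": "NaCoO2",
    "synthesis_constituents": ["Na2CO3", "Co3O4"],
    "blocks": [{"type": "xrd", "file": "sample-001_xrd.xy"}],
}

def build_messages(record, question):
    """Assemble a chat prompt that gives the LLM the raw JSON as context."""
    system = (
        "You are a virtual assistant for a materials chemistry lab. "
        "Answer questions using only the JSON record provided."
    )
    context = "JSON record:\n" + json.dumps(record, indent=2)
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": context + "\n\n" + question},
    ]

messages = build_messages(sample_record, "Summarize this experiment.")
# The actual call would then be something like (openai<1.0 style API):
#   openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
```

Keeping the JSON serialization separate from the question makes it easy to swap in a different record, or a different downstream task, without changing the prompt scaffolding.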
2306.06283#156
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
Large-language models (LLMs) such as GPT-4 caught the interest of many scientists. Recent studies suggested that these models could be useful in chemistry and materials science. To explore these possibilities, we organized a hackathon. This article chronicles the projects built as part of this hackathon. Participants employed LLMs for various applications, including predicting properties of molecules and materials, designing novel interfaces for tools, extracting knowledge from unstructured data, and developing new educational applications. The diverse topics and the fact that working prototypes could be generated in less than two days highlight that LLMs will profoundly impact the future of our fields. The rich collection of ideas and projects also indicates that the applications of LLMs are not limited to materials science and chemistry but offer potential benefits to a wide range of scientific disciplines.
http://arxiv.org/pdf/2306.06283
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
null
null
cond-mat.mtrl-sci
20230609
20230714
[ { "id": "2209.08203" }, { "id": "2212.04450" } ]
2306.05949
157
[198] R. Righi, S. Samoili, M. López Cobo, M. Vázquez-Prada Baillet, M. Cardona, and G. De Prato. The AI techno-economic complex system: Worldwide landscape, thematic subdomains and technological collaborations. Telecommunications Policy, 44(6):101943, July 2020. ISSN 0308-5961. doi: 10.1016/j.telpol.2020.101943. URL https://www.sciencedirect.com/science/article/pii/S0308596120300355. [199] S. T. Roberts. Behind the Screen: Content Moderation in the Shadows of Social Media. 2021. ISBN 978-0-300-26147-9. URL https://yalebooks.yale.edu/9780300261479/behind-the-screen. [200] R. Rombach, A. Blattmann, D. Lorenz, P. Esser, and B. Ommer. High-Resolution Image Synthesis with Latent Diffusion Models, Apr. 2022. URL http://arxiv.org/abs/2112.10752. arXiv:2112.10752 [cs].
2306.05949#157
Evaluating the Social Impact of Generative AI Systems in Systems and Society
Generative AI systems across modalities, ranging from text, image, audio, and video, have broad social impacts, but there exists no official standard for means of evaluating those impacts and which impacts should be evaluated. We move toward a standard approach in evaluating a generative AI system for any modality, in two overarching categories: what is able to be evaluated in a base system that has no predetermined application and what is able to be evaluated in society. We describe specific social impact categories and how to approach and conduct evaluations in the base technical system, then in people and society. Our framework for a base system defines seven categories of social impact: bias, stereotypes, and representational harms; cultural values and sensitive content; disparate performance; privacy and data protection; financial costs; environmental costs; and data and content moderation labor costs. Suggested methods for evaluation apply to all modalities and analyses of the limitations of existing evaluations serve as a starting point for necessary investment in future evaluations. We offer five overarching categories for what is able to be evaluated in society, each with their own subcategories: trustworthiness and autonomy; inequality, marginalization, and violence; concentration of authority; labor and creativity; and ecosystem and environment. Each subcategory includes recommendations for mitigating harm. We are concurrently crafting an evaluation repository for the AI research community to contribute existing evaluations along the given categories. This version will be updated following a CRAFT session at ACM FAccT 2023.
http://arxiv.org/pdf/2306.05949
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
cs.CY, cs.AI
null
null
cs.CY
20230609
20230612
[ { "id": "2007.04068" }, { "id": "2305.09800" }, { "id": "1908.09203" }, { "id": "2202.05520" }, { "id": "2302.10329" }, { "id": "2107.03374" }, { "id": "2210.06245" }, { "id": "2211.02001" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2301.10226" }, { "id": "2202.02647" }, { "id": "2112.10752" }, { "id": "2206.04615" }, { "id": "2202.00885" }, { "id": "2010.15581" }, { "id": "2305.09941" }, { "id": "2301.04246" }, { "id": "2304.12298" }, { "id": "2203.09509" }, { "id": "2207.14157" }, { "id": "2102.09692" }, { "id": "1804.10999" }, { "id": "2303.11156" }, { "id": "2104.06390" }, { "id": "2002.05651" } ]
2306.06283
157
1. Contextualize existing data: Explain a set of related experiments from linked JSON responses, resolve acronyms/short-hand used by experimentalists, or create simple text summaries.

Contextualizing existing data

Assistant: Hello! I am your virtual assistant and I am here to help you manage your experimental data. How can I assist you today?

User: Can you provide a brief summary of this experiment?

Assistant: Based on the provided JSON data, this experiment involves the synthesis and characterization of NaCoO2 as a battery cathode material. The synthesis method used was a solid-state reaction, and the starting materials used were sodium carbonate and cobalt (II,III) oxide. The synthesized NaCoO2 was then used to make an electrode, which was characterized using X-ray diffraction. The electrode was then incorporated into a coin cell battery, which was cycled electrochemically and characterized using various techniques.

2. Generate text and graphical representations of the data: Aggregate data from different fields in the JSON response and convert to formats such as CSV, tables (potentially formatted for a particular journal), and even mermaid.js graph visualizations.

3. Suggest future experiments: The assistant readily identifies related materials to study, additives to try, additional characterization techniques, and synthesis protocols.

Suggest experiments
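Capability 2 — aggregating fields from JSON responses into tabular formats — amounts to projecting selected keys out of each record and serializing the rows. A minimal sketch, with record fields invented for the example:

```python
import csv
import io

# Invented JSON records standing in for API responses about related samples.
records = [
    {"item_id": "jdb1-1", "chemform": "NaCoO2", "method": "solid-state"},
    {"item_id": "jdb1-2", "chemform": "LiCoO2", "method": "solid-state"},
]

def records_to_csv(records, fields):
    """Project the chosen fields out of each JSON record and emit CSV text."""
    buf = io.StringIO()
    # extrasaction="ignore" silently drops record keys we did not ask for.
    writer = csv.DictWriter(buf, fieldnames=fields, extrasaction="ignore")
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()

print(records_to_csv(records, ["item_id", "chemform"]))
```

In the assistant, the LLM effectively performs this projection from the prompt alone; the deterministic version above is useful as a fallback or for validating the model's output.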
2306.06283#157
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
Large-language models (LLMs) such as GPT-4 caught the interest of many scientists. Recent studies suggested that these models could be useful in chemistry and materials science. To explore these possibilities, we organized a hackathon. This article chronicles the projects built as part of this hackathon. Participants employed LLMs for various applications, including predicting properties of molecules and materials, designing novel interfaces for tools, extracting knowledge from unstructured data, and developing new educational applications. The diverse topics and the fact that working prototypes could be generated in less than two days highlight that LLMs will profoundly impact the future of our fields. The rich collection of ideas and projects also indicates that the applications of LLMs are not limited to materials science and chemistry but offer potential benefits to a wide range of scientific disciplines.
http://arxiv.org/pdf/2306.06283
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
null
null
cond-mat.mtrl-sci
20230609
20230714
[ { "id": "2209.08203" }, { "id": "2212.04450" } ]
2306.05949
158
[201] K. Roose. A Conversation With Bing's Chatbot Left Me Deeply Unsettled. The New York Times, Feb. 2023. ISSN 0362-4331. URL https://www.nytimes.com/2023/02/16/technology/bing-chatbot-microsoft-chatgpt.html. [202] D. Rozado. RightWingGPT – An AI Manifesting the Opposite Political Biases of ChatGPT, Feb. 2023. URL https://davidrozado.substack.com/p/rightwinggpt. [203] J. Sablosky. "Dangerous organizations: Facebook's content moderation decisions and ethnic visibility in Myanmar". 43(6):1017–1042, Sept. 2021. ISSN 0163-4437, 1460-3675. doi: 10.1177/0163443720987751. URL http://journals.sagepub.com/doi/10.1177/0163443720987751.
2306.05949#158
Evaluating the Social Impact of Generative AI Systems in Systems and Society
Generative AI systems across modalities, ranging from text, image, audio, and video, have broad social impacts, but there exists no official standard for means of evaluating those impacts and which impacts should be evaluated. We move toward a standard approach in evaluating a generative AI system for any modality, in two overarching categories: what is able to be evaluated in a base system that has no predetermined application and what is able to be evaluated in society. We describe specific social impact categories and how to approach and conduct evaluations in the base technical system, then in people and society. Our framework for a base system defines seven categories of social impact: bias, stereotypes, and representational harms; cultural values and sensitive content; disparate performance; privacy and data protection; financial costs; environmental costs; and data and content moderation labor costs. Suggested methods for evaluation apply to all modalities and analyses of the limitations of existing evaluations serve as a starting point for necessary investment in future evaluations. We offer five overarching categories for what is able to be evaluated in society, each with their own subcategories: trustworthiness and autonomy; inequality, marginalization, and violence; concentration of authority; labor and creativity; and ecosystem and environment. Each subcategory includes recommendations for mitigating harm. We are concurrently crafting an evaluation repository for the AI research community to contribute existing evaluations along the given categories. This version will be updated following a CRAFT session at ACM FAccT 2023.
http://arxiv.org/pdf/2306.05949
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
cs.CY, cs.AI
null
null
cs.CY
20230609
20230612
[ { "id": "2007.04068" }, { "id": "2305.09800" }, { "id": "1908.09203" }, { "id": "2202.05520" }, { "id": "2302.10329" }, { "id": "2107.03374" }, { "id": "2210.06245" }, { "id": "2211.02001" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2301.10226" }, { "id": "2202.02647" }, { "id": "2112.10752" }, { "id": "2206.04615" }, { "id": "2202.00885" }, { "id": "2010.15581" }, { "id": "2305.09941" }, { "id": "2301.04246" }, { "id": "2304.12298" }, { "id": "2203.09509" }, { "id": "2207.14157" }, { "id": "2102.09692" }, { "id": "1804.10999" }, { "id": "2303.11156" }, { "id": "2104.06390" }, { "id": "2002.05651" } ]
2306.06283
158
3. Suggest future experiments: The assistant readily identifies related materials to study, additives to try, additional characterization techniques, and synthesis protocols.

Suggest experiments

User: What NMR nuclei could I probe in this sample?

Assistant: Based on the information provided in the JSON file, the sample is NaCoO2, which contains sodium, cobalt, and oxygen. Therefore, the NMR nuclei that could be probed in this sample are: (a) Sodium-23 (23Na) NMR: This can provide information about the local environment and coordination of sodium ions in the NaCoO2 lattice. (b) Cobalt-59 (59Co) NMR: This can provide information about the local environment and coordination of cobalt ions in the NaCoO2 lattice. (c) Oxygen-17 (17O) NMR: This can provide information about the local environment and coordination of oxygen ions in the NaCoO2 lattice.
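The element-to-nucleus reasoning in this exchange can also be approximated deterministically: parse the element symbols out of the formula and look up a commonly probed NMR-active isotope for each. A toy sketch (the isotope table is an abridged, illustrative subset, not a complete reference):

```python
import re

# Commonly probed NMR-active isotopes (illustrative subset only).
NMR_NUCLEI = {"H": "1H", "Li": "7Li", "Na": "23Na", "Co": "59Co", "O": "17O"}

def suggest_nmr_nuclei(formula):
    """Extract element symbols from a formula and map them to NMR nuclei."""
    elements = re.findall(r"[A-Z][a-z]?", formula)  # e.g. NaCoO2 -> Na, Co, O
    return [NMR_NUCLEI[el] for el in elements if el in NMR_NUCLEI]

print(suggest_nmr_nuclei("NaCoO2"))  # → ['23Na', '59Co', '17O']
```

The LLM's value over such a lookup is the surrounding context it supplies — what each nucleus can reveal about the lattice — but a rule-based check like this is a cheap way to sanity-test its suggestions.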
2306.06283#158
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
Large-language models (LLMs) such as GPT-4 caught the interest of many scientists. Recent studies suggested that these models could be useful in chemistry and materials science. To explore these possibilities, we organized a hackathon. This article chronicles the projects built as part of this hackathon. Participants employed LLMs for various applications, including predicting properties of molecules and materials, designing novel interfaces for tools, extracting knowledge from unstructured data, and developing new educational applications. The diverse topics and the fact that working prototypes could be generated in less than two days highlight that LLMs will profoundly impact the future of our fields. The rich collection of ideas and projects also indicates that the applications of LLMs are not limited to materials science and chemistry but offer potential benefits to a wide range of scientific disciplines.
http://arxiv.org/pdf/2306.06283
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
null
null
cond-mat.mtrl-sci
20230609
20230714
[ { "id": "2209.08203" }, { "id": "2212.04450" } ]
2306.05949
159
[204] V. S. Sadasivan, A. Kumar, S. Balasubramanian, W. Wang, and S. Feizi. Can AI-Generated Text be Reliably Detected?, Mar. 2023. URL http://arxiv.org/abs/2303.11156. arXiv:2303.11156 [cs]. [205] C. Saharia, W. Chan, S. Saxena, L. Li, J. Whang, E. Denton, S. K. S. Ghasemipour, B. K. Ayan, S. S. Mahdavi, R. G. Lopes, T. Salimans, J. Ho, D. J. Fleet, and M. Norouzi. Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding, May 2022. URL http://arxiv.org/abs/2205.11487.
2306.05949#159
Evaluating the Social Impact of Generative AI Systems in Systems and Society
Generative AI systems across modalities, ranging from text, image, audio, and video, have broad social impacts, but there exists no official standard for means of evaluating those impacts and which impacts should be evaluated. We move toward a standard approach in evaluating a generative AI system for any modality, in two overarching categories: what is able to be evaluated in a base system that has no predetermined application and what is able to be evaluated in society. We describe specific social impact categories and how to approach and conduct evaluations in the base technical system, then in people and society. Our framework for a base system defines seven categories of social impact: bias, stereotypes, and representational harms; cultural values and sensitive content; disparate performance; privacy and data protection; financial costs; environmental costs; and data and content moderation labor costs. Suggested methods for evaluation apply to all modalities and analyses of the limitations of existing evaluations serve as a starting point for necessary investment in future evaluations. We offer five overarching categories for what is able to be evaluated in society, each with their own subcategories: trustworthiness and autonomy; inequality, marginalization, and violence; concentration of authority; labor and creativity; and ecosystem and environment. Each subcategory includes recommendations for mitigating harm. We are concurrently crafting an evaluation repository for the AI research community to contribute existing evaluations along the given categories. This version will be updated following a CRAFT session at ACM FAccT 2023.
http://arxiv.org/pdf/2306.05949
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
cs.CY, cs.AI
null
null
cs.CY
20230609
20230612
[ { "id": "2007.04068" }, { "id": "2305.09800" }, { "id": "1908.09203" }, { "id": "2202.05520" }, { "id": "2302.10329" }, { "id": "2107.03374" }, { "id": "2210.06245" }, { "id": "2211.02001" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2301.10226" }, { "id": "2202.02647" }, { "id": "2112.10752" }, { "id": "2206.04615" }, { "id": "2202.00885" }, { "id": "2010.15581" }, { "id": "2305.09941" }, { "id": "2301.04246" }, { "id": "2304.12298" }, { "id": "2203.09509" }, { "id": "2207.14157" }, { "id": "2102.09692" }, { "id": "1804.10999" }, { "id": "2303.11156" }, { "id": "2104.06390" }, { "id": "2002.05651" } ]
2306.06283
159
LLM Chat Block with contextual data (powered by GPT-3.5-turbo)

Assistant: Hello! I am your virtual assistant and I am here to help you manage your experimental data. What can I help you with today?

User: Please make me a mermaid flowchart showing how this cell was made from other items in the database.

Assistant: Certainly! Here is a Mermaid flowchart that shows how the cell with item_id jdb1-1_e1_cl was made from other items in the database: In this flowchart, each rectangular box represents an item in the database, and the arrows between the boxes represent the relationships between the items. For example, SM1 and SM2 were used to create sample JDB1-1, which was then used to create sample E1, which was finally used to create the cell.

Figure 10. The chatbot can generate mermaid.js markdown code, which our interface automatically recognizes and translates into the visualization.

It is worth noting that NMR spectroscopy is a powerful technique for probing the local environment of atoms in a material, but it requires a relatively large amount of sample and may not be sensitive to all types of nuclei. Therefore, it may be necessary to use other techniques in conjunction with NMR to fully
2306.06283#159
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
Large-language models (LLMs) such as GPT-4 caught the interest of many scientists. Recent studies suggested that these models could be useful in chemistry and materials science. To explore these possibilities, we organized a hackathon. This article chronicles the projects built as part of this hackathon. Participants employed LLMs for various applications, including predicting properties of molecules and materials, designing novel interfaces for tools, extracting knowledge from unstructured data, and developing new educational applications. The diverse topics and the fact that working prototypes could be generated in less than two days highlight that LLMs will profoundly impact the future of our fields. The rich collection of ideas and projects also indicates that the applications of LLMs are not limited to materials science and chemistry but offer potential benefits to a wide range of scientific disciplines.
http://arxiv.org/pdf/2306.06283
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
null
null
cond-mat.mtrl-sci
20230609
20230714
[ { "id": "2209.08203" }, { "id": "2212.04450" } ]
2306.05949
160
[206] N. Sambasivan, E. Arnesen, B. Hutchinson, T. Doshi, and V. Prabhakaran. Re-imagining Algorithmic Fairness in India and Beyond. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, pages 315–328. ACM, 2021-03-03. ISBN 978-1-4503-8309-7. doi: 10.1145/3442188.3445896. URL https://dl.acm.org/doi/10.1145/3442188.3445896. [207] S. Santurkar, E. Durmus, F. Ladhak, C. Lee, P. Liang, and T. Hashimoto. Whose opinions do language models reflect?, 2023. [208] M. Sap, D. Card, S. Gabriel, Y. Choi, and N. A. Smith. The Risk of Racial Bias in Hate Speech Detection. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1668–1678. Association for Computational Linguistics, 2019. doi: 10.18653/v1/P19-1163. URL https://www.aclweb.org/anthology/P19-1163.
2306.05949#160
Evaluating the Social Impact of Generative AI Systems in Systems and Society
Generative AI systems across modalities, ranging from text, image, audio, and video, have broad social impacts, but there exists no official standard for means of evaluating those impacts and which impacts should be evaluated. We move toward a standard approach in evaluating a generative AI system for any modality, in two overarching categories: what is able to be evaluated in a base system that has no predetermined application and what is able to be evaluated in society. We describe specific social impact categories and how to approach and conduct evaluations in the base technical system, then in people and society. Our framework for a base system defines seven categories of social impact: bias, stereotypes, and representational harms; cultural values and sensitive content; disparate performance; privacy and data protection; financial costs; environmental costs; and data and content moderation labor costs. Suggested methods for evaluation apply to all modalities and analyses of the limitations of existing evaluations serve as a starting point for necessary investment in future evaluations. We offer five overarching categories for what is able to be evaluated in society, each with their own subcategories: trustworthiness and autonomy; inequality, marginalization, and violence; concentration of authority; labor and creativity; and ecosystem and environment. Each subcategory includes recommendations for mitigating harm. We are concurrently crafting an evaluation repository for the AI research community to contribute existing evaluations along the given categories. This version will be updated following a CRAFT session at ACM FAccT 2023.
http://arxiv.org/pdf/2306.05949
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
cs.CY, cs.AI
null
null
cs.CY
20230609
20230612
[ { "id": "2007.04068" }, { "id": "2305.09800" }, { "id": "1908.09203" }, { "id": "2202.05520" }, { "id": "2302.10329" }, { "id": "2107.03374" }, { "id": "2210.06245" }, { "id": "2211.02001" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2301.10226" }, { "id": "2202.02647" }, { "id": "2112.10752" }, { "id": "2206.04615" }, { "id": "2202.00885" }, { "id": "2010.15581" }, { "id": "2305.09941" }, { "id": "2301.04246" }, { "id": "2304.12298" }, { "id": "2203.09509" }, { "id": "2207.14157" }, { "id": "2102.09692" }, { "id": "1804.10999" }, { "id": "2303.11156" }, { "id": "2104.06390" }, { "id": "2002.05651" } ]
2306.06283
160
a relatively large amount of sample and may not be sensitive to all types of nuclei. Therefore, it may be necessary to use other techniques in conjunction with NMR to fully characterize the sample. User: What techniques (that I haven’t already used) should I use in conjunction with the NMR? Assistant: In addition to the techniques I mentioned earlier, you could use the following techniques in conjunction with NMR to fully characterize the NaCoO2 sample: (a) X-ray photoelectron spectroscopy (XPS): This can provide information about the chemical composition and oxidation state of the NaCoO2 sample, which can help to interpret the NMR spectra. (b) Transmission electron microscopy (TEM): This can provide information about the crystal structure, morphology, and defects of the NaCoO2 sample, which can help to interpret the NMR spectra. (c) Electrochemical impedance spectroscopy (EIS): This can provide information about the electrical properties and performance of the NaCoO2 sample as an electrode material, which can help to interpret the NMR spectra. (d) Thermogravimetric analysis (TGA): This can provide information about the thermal stability and decomposition
2306.06283#160
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
Large-language models (LLMs) such as GPT-4 caught the interest of many scientists. Recent studies suggested that these models could be useful in chemistry and materials science. To explore these possibilities, we organized a hackathon. This article chronicles the projects built as part of this hackathon. Participants employed LLMs for various applications, including predicting properties of molecules and materials, designing novel interfaces for tools, extracting knowledge from unstructured data, and developing new educational applications. The diverse topics and the fact that working prototypes could be generated in less than two days highlight that LLMs will profoundly impact the future of our fields. The rich collection of ideas and projects also indicates that the applications of LLMs are not limited to materials science and chemistry but offer potential benefits to a wide range of scientific disciplines.
http://arxiv.org/pdf/2306.06283
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
null
null
cond-mat.mtrl-sci
20230609
20230714
[ { "id": "2209.08203" }, { "id": "2212.04450" } ]
2306.05949
161
[209] M. Sap, S. Swayamdipta, L. Vianna, X. Zhou, Y. Choi, and N. Smith. Annotators with Attitudes: How Annotator Beliefs And Identities Bias Toxic Language Detection. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5884–5906. Association for Computational Linguistics, 2022. doi: 10.18653/v1/2022.naacl-main.431. URL https://aclanthology.org/2022.naacl-main.431. [210] M. K. Scheuerman, A. Hanna, and E. Denton. Do Datasets Have Politics? Disciplinary Values in Computer Vision Dataset Development. 5:1–37, 2021-10-13. ISSN 2573-0142. doi: 10.1145/3476058. URL https://dl.acm.org/doi/10.1145/3476058.
2306.05949#161
Evaluating the Social Impact of Generative AI Systems in Systems and Society
Generative AI systems across modalities, ranging from text, image, audio, and video, have broad social impacts, but there exists no official standard for means of evaluating those impacts and which impacts should be evaluated. We move toward a standard approach in evaluating a generative AI system for any modality, in two overarching categories: what is able to be evaluated in a base system that has no predetermined application and what is able to be evaluated in society. We describe specific social impact categories and how to approach and conduct evaluations in the base technical system, then in people and society. Our framework for a base system defines seven categories of social impact: bias, stereotypes, and representational harms; cultural values and sensitive content; disparate performance; privacy and data protection; financial costs; environmental costs; and data and content moderation labor costs. Suggested methods for evaluation apply to all modalities and analyses of the limitations of existing evaluations serve as a starting point for necessary investment in future evaluations. We offer five overarching categories for what is able to be evaluated in society, each with their own subcategories: trustworthiness and autonomy; inequality, marginalization, and violence; concentration of authority; labor and creativity; and ecosystem and environment. Each subcategory includes recommendations for mitigating harm. We are concurrently crafting an evaluation repository for the AI research community to contribute existing evaluations along the given categories. This version will be updated following a CRAFT session at ACM FAccT 2023.
http://arxiv.org/pdf/2306.05949
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
cs.CY, cs.AI
null
null
cs.CY
20230609
20230612
[ { "id": "2007.04068" }, { "id": "2305.09800" }, { "id": "1908.09203" }, { "id": "2202.05520" }, { "id": "2302.10329" }, { "id": "2107.03374" }, { "id": "2210.06245" }, { "id": "2211.02001" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2301.10226" }, { "id": "2202.02647" }, { "id": "2112.10752" }, { "id": "2206.04615" }, { "id": "2202.00885" }, { "id": "2010.15581" }, { "id": "2305.09941" }, { "id": "2301.04246" }, { "id": "2304.12298" }, { "id": "2203.09509" }, { "id": "2207.14157" }, { "id": "2102.09692" }, { "id": "1804.10999" }, { "id": "2303.11156" }, { "id": "2104.06390" }, { "id": "2002.05651" } ]
2306.06283
161
which can help to interpret the NMR spectra. (d) Thermogravimetric analysis (TGA): This can provide information about the thermal stability and decomposition behavior of the NaCoO2 sample, which can help to interpret the NMR spectra. By using a combination of these techniques, you can gain a more complete understanding of the NaCoO2 sample. The assistant performs quite well at certain tasks out-of-the-box, but we believe it should also be possible to further increase its general utility by fine-tuning the model on the domain knowledge present in a research
2306.06283#161
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
Large-language models (LLMs) such as GPT-4 caught the interest of many scientists. Recent studies suggested that these models could be useful in chemistry and materials science. To explore these possibilities, we organized a hackathon. This article chronicles the projects built as part of this hackathon. Participants employed LLMs for various applications, including predicting properties of molecules and materials, designing novel interfaces for tools, extracting knowledge from unstructured data, and developing new educational applications. The diverse topics and the fact that working prototypes could be generated in less than two days highlight that LLMs will profoundly impact the future of our fields. The rich collection of ideas and projects also indicates that the applications of LLMs are not limited to materials science and chemistry but offer potential benefits to a wide range of scientific disciplines.
http://arxiv.org/pdf/2306.06283
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
null
null
cond-mat.mtrl-sci
20230609
20230714
[ { "id": "2209.08203" }, { "id": "2212.04450" } ]
2306.05949
162
[211] R. Schwartz, A. Vassilev, K. Greene, L. Perine, A. Burt, and P. Hall. Towards a Standard for Identifying and Managing Bias in Artificial Intelligence, 2022-03-15. URL https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.1270.pdf. [212] A. See, A. Pappu, R. Saxena, A. Yerukola, and C. D. Manning. Do Massively Pretrained Language Models Make Better Storytellers? In Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), pages 843–861, Hong Kong, China, Nov. 2019. Association for Computational Linguistics. doi: 10.18653/v1/K19-1079. URL https://aclanthology.org/K19-1079.
2306.05949#162
Evaluating the Social Impact of Generative AI Systems in Systems and Society
Generative AI systems across modalities, ranging from text, image, audio, and video, have broad social impacts, but there exists no official standard for means of evaluating those impacts and which impacts should be evaluated. We move toward a standard approach in evaluating a generative AI system for any modality, in two overarching categories: what is able to be evaluated in a base system that has no predetermined application and what is able to be evaluated in society. We describe specific social impact categories and how to approach and conduct evaluations in the base technical system, then in people and society. Our framework for a base system defines seven categories of social impact: bias, stereotypes, and representational harms; cultural values and sensitive content; disparate performance; privacy and data protection; financial costs; environmental costs; and data and content moderation labor costs. Suggested methods for evaluation apply to all modalities and analyses of the limitations of existing evaluations serve as a starting point for necessary investment in future evaluations. We offer five overarching categories for what is able to be evaluated in society, each with their own subcategories: trustworthiness and autonomy; inequality, marginalization, and violence; concentration of authority; labor and creativity; and ecosystem and environment. Each subcategory includes recommendations for mitigating harm. We are concurrently crafting an evaluation repository for the AI research community to contribute existing evaluations along the given categories. This version will be updated following a CRAFT session at ACM FAccT 2023.
http://arxiv.org/pdf/2306.05949
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
cs.CY, cs.AI
null
null
cs.CY
20230609
20230612
[ { "id": "2007.04068" }, { "id": "2305.09800" }, { "id": "1908.09203" }, { "id": "2202.05520" }, { "id": "2302.10329" }, { "id": "2107.03374" }, { "id": "2210.06245" }, { "id": "2211.02001" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2301.10226" }, { "id": "2202.02647" }, { "id": "2112.10752" }, { "id": "2206.04615" }, { "id": "2202.00885" }, { "id": "2010.15581" }, { "id": "2305.09941" }, { "id": "2301.04246" }, { "id": "2304.12298" }, { "id": "2203.09509" }, { "id": "2207.14157" }, { "id": "2102.09692" }, { "id": "1804.10999" }, { "id": "2303.11156" }, { "id": "2104.06390" }, { "id": "2002.05651" } ]
2306.06283
162
group’s Wiki and papers. Furthermore, while we have only given the model data from a single project so far (up to 8 samples/starting materials, 1700 tokens of JSON), it would be of great interest to provide the model with a larger context across multiple projects to attempt to facilitate cross-fertilization of ideas. One notable challenge in this area is the limited context size of currently available LLM models (e.g., 4097 tokens for GPT-3.5-turbo). Therefore, future work will investigate larger models (e.g., GPT-4 with 30K token context), as well as approaches to give existing LLMs access to larger context (e.g., an embedding-based approach or allowing an LLM agent to query the OpenAPI directly as needed). At present, we note that the scientific usefulness of this assistant is highly task- and model-dependent; however, any additional interface that can lower the barrier to improving data capture and dissemination in the field should be investigated further and will be a future development target for Datalab. # One sentence summaries a. Problem/Task Providing very flexible access to data in ELNs/LIMS. b. Approach Prompting of a large language model with questions provided in a chat interface and context coming from the response of the API of the ELN/LIMS.
2306.06283#162
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
Large-language models (LLMs) such as GPT-4 caught the interest of many scientists. Recent studies suggested that these models could be useful in chemistry and materials science. To explore these possibilities, we organized a hackathon. This article chronicles the projects built as part of this hackathon. Participants employed LLMs for various applications, including predicting properties of molecules and materials, designing novel interfaces for tools, extracting knowledge from unstructured data, and developing new educational applications. The diverse topics and the fact that working prototypes could be generated in less than two days highlight that LLMs will profoundly impact the future of our fields. The rich collection of ideas and projects also indicates that the applications of LLMs are not limited to materials science and chemistry but offer potential benefits to a wide range of scientific disciplines.
http://arxiv.org/pdf/2306.06283
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
null
null
cond-mat.mtrl-sci
20230609
20230714
[ { "id": "2209.08203" }, { "id": "2212.04450" } ]
2306.05949
163
[213] F. Shahid and A. Vashistha. Decolonizing Content Moderation: Does Uniform Global Community Standard Resemble Utopian Equality or Western Power Hegemony? In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, CHI ’23, pages 1–18, New York, NY, USA, Apr. 2023. Association for Computing Machinery. ISBN 978-1-4503-9421-5. doi: 10.1145/3544548.3581538. URL https://doi.org/10.1145/3544548.3581538. [214] H. Shaiken. Work Transformed: Automation and Labor in the Computer Age. Holt, Rinehart and Winston, 1985. ISBN 978-0-03-042681-0. URL https://www.abebooks.com/first-edition/Work-transformed-automation-labor-computer-age/7635506864/bd. [215] S. Shan, J. Cryan, E. Wenger, H. Zheng, R. Hanocka, and B. Y. Zhao. Glaze: Protecting artists from style mimicry by text-to-image models, 2023.
2306.05949#163
Evaluating the Social Impact of Generative AI Systems in Systems and Society
Generative AI systems across modalities, ranging from text, image, audio, and video, have broad social impacts, but there exists no official standard for means of evaluating those impacts and which impacts should be evaluated. We move toward a standard approach in evaluating a generative AI system for any modality, in two overarching categories: what is able to be evaluated in a base system that has no predetermined application and what is able to be evaluated in society. We describe specific social impact categories and how to approach and conduct evaluations in the base technical system, then in people and society. Our framework for a base system defines seven categories of social impact: bias, stereotypes, and representational harms; cultural values and sensitive content; disparate performance; privacy and data protection; financial costs; environmental costs; and data and content moderation labor costs. Suggested methods for evaluation apply to all modalities and analyses of the limitations of existing evaluations serve as a starting point for necessary investment in future evaluations. We offer five overarching categories for what is able to be evaluated in society, each with their own subcategories: trustworthiness and autonomy; inequality, marginalization, and violence; concentration of authority; labor and creativity; and ecosystem and environment. Each subcategory includes recommendations for mitigating harm. We are concurrently crafting an evaluation repository for the AI research community to contribute existing evaluations along the given categories. This version will be updated following a CRAFT session at ACM FAccT 2023.
http://arxiv.org/pdf/2306.05949
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
cs.CY, cs.AI
null
null
cs.CY
20230609
20230612
[ { "id": "2007.04068" }, { "id": "2305.09800" }, { "id": "1908.09203" }, { "id": "2202.05520" }, { "id": "2302.10329" }, { "id": "2107.03374" }, { "id": "2210.06245" }, { "id": "2211.02001" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2301.10226" }, { "id": "2202.02647" }, { "id": "2112.10752" }, { "id": "2206.04615" }, { "id": "2202.00885" }, { "id": "2010.15581" }, { "id": "2305.09941" }, { "id": "2301.04246" }, { "id": "2304.12298" }, { "id": "2203.09509" }, { "id": "2207.14157" }, { "id": "2102.09692" }, { "id": "1804.10999" }, { "id": "2303.11156" }, { "id": "2104.06390" }, { "id": "2002.05651" } ]
2306.06283
163
b. Approach Prompting of a large language model with questions provided in a chat interface and context coming from the response of the API of the ELN/LIMS. c. Results and Impact The system can successfully provide a novel interface to the data and let users interact with it in a very flexible and personalized way, e.g., creating custom summaries or visuals for which the developers did not implement specific tools. d. Challenges and Future Work Since the current approach relies on incorporating the response of the ELN/LIMS into the prompt, this limits how much context (i.e., how many experiments/samples) the system can be aware of. One potential remedy is to use retrieval-augmented generation, where the entries are embedded in a vector store and the agent will be able to query this database and put (parts of) the most relevant entries into the prompt.
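The retrieval-augmented remedy described above can be sketched in a few lines of standard-library Python. This is only an illustration of the idea: the bag-of-words "embedding" and the example ELN entries below are stand-ins for a real neural embedding model and vector store.

```python
import math
from collections import Counter

def embed(text):
    # Toy "embedding": bag-of-words counts. A real system would call a
    # neural embedding model and store the vectors in a vector database.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, entries, k=2):
    # Rank entries by similarity to the query and keep only the top k,
    # so the limited prompt context is spent on the most relevant ones.
    q = embed(query)
    return sorted(entries, key=lambda e: cosine(q, embed(e)), reverse=True)[:k]

entries = [  # hypothetical ELN/LIMS entries
    "sample JDB1-1: NaCoO2 synthesized from starting materials SM1 and SM2",
    "cell jdb1-1_e1_c1 assembled from electrode E1",
    "NMR spectrum of the NaCoO2 sample collected at 300 K",
]
query = "what do we know about the NMR measurement?"
prompt = "Context:\n" + "\n".join(retrieve(query, entries)) + f"\n\nQuestion: {query}"
```

The assembled `prompt` would then be sent to the LLM, which answers from the retrieved entries rather than from the entire database.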
2306.06283#163
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
Large-language models (LLMs) such as GPT-4 caught the interest of many scientists. Recent studies suggested that these models could be useful in chemistry and materials science. To explore these possibilities, we organized a hackathon. This article chronicles the projects built as part of this hackathon. Participants employed LLMs for various applications, including predicting properties of molecules and materials, designing novel interfaces for tools, extracting knowledge from unstructured data, and developing new educational applications. The diverse topics and the fact that working prototypes could be generated in less than two days highlight that LLMs will profoundly impact the future of our fields. The rich collection of ideas and projects also indicates that the applications of LLMs are not limited to materials science and chemistry but offer potential benefits to a wide range of scientific disciplines.
http://arxiv.org/pdf/2306.06283
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
null
null
cond-mat.mtrl-sci
20230609
20230714
[ { "id": "2209.08203" }, { "id": "2212.04450" } ]
2306.05949
164
[216] S. Shankar, Y. Halpern, E. Breck, J. Atwood, J. Wilson, and D. Sculley. No Classification without Representation: Assessing Geodiversity Issues in Open Data Sets for the Developing World. arXiv, 2017-11-22. URL http://arxiv.org/abs/1711.08536. [217] M. ogy the-case-for-critical-caste-and-technology-studies-b987dcf20c8d. [218] O. Sharir, B. Peleg, and Y. Shoham. The Cost of Training NLP Models: A Concise Overview. 2020. doi: 10.48550/ARXIV.2004.08900. URL https://arxiv.org/abs/2004.08900.
2306.05949#164
Evaluating the Social Impact of Generative AI Systems in Systems and Society
Generative AI systems across modalities, ranging from text, image, audio, and video, have broad social impacts, but there exists no official standard for means of evaluating those impacts and which impacts should be evaluated. We move toward a standard approach in evaluating a generative AI system for any modality, in two overarching categories: what is able to be evaluated in a base system that has no predetermined application and what is able to be evaluated in society. We describe specific social impact categories and how to approach and conduct evaluations in the base technical system, then in people and society. Our framework for a base system defines seven categories of social impact: bias, stereotypes, and representational harms; cultural values and sensitive content; disparate performance; privacy and data protection; financial costs; environmental costs; and data and content moderation labor costs. Suggested methods for evaluation apply to all modalities and analyses of the limitations of existing evaluations serve as a starting point for necessary investment in future evaluations. We offer five overarching categories for what is able to be evaluated in society, each with their own subcategories: trustworthiness and autonomy; inequality, marginalization, and violence; concentration of authority; labor and creativity; and ecosystem and environment. Each subcategory includes recommendations for mitigating harm. We are concurrently crafting an evaluation repository for the AI research community to contribute existing evaluations along the given categories. This version will be updated following a CRAFT session at ACM FAccT 2023.
http://arxiv.org/pdf/2306.05949
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
cs.CY, cs.AI
null
null
cs.CY
20230609
20230612
[ { "id": "2007.04068" }, { "id": "2305.09800" }, { "id": "1908.09203" }, { "id": "2202.05520" }, { "id": "2302.10329" }, { "id": "2107.03374" }, { "id": "2210.06245" }, { "id": "2211.02001" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2301.10226" }, { "id": "2202.02647" }, { "id": "2112.10752" }, { "id": "2206.04615" }, { "id": "2202.00885" }, { "id": "2010.15581" }, { "id": "2305.09941" }, { "id": "2301.04246" }, { "id": "2304.12298" }, { "id": "2203.09509" }, { "id": "2207.14157" }, { "id": "2102.09692" }, { "id": "1804.10999" }, { "id": "2303.11156" }, { "id": "2104.06390" }, { "id": "2002.05651" } ]
2306.06283
164
D. BOLLaMa The field of chemistry is continuously evolving towards sustainability, with the optimization of chemical reactions being a key component [53]. The selection of optimal conditions, such as temperature, reagents, catalysts, and other additives, is challenging and time-consuming due to the vast search space and high cost of experiments [54]. Expert chemists typically rely on previous knowledge and intuition, leading to weeks or even months of experimentation [55]. Bayesian optimization (BO) has recently been applied to chemistry optimization tasks, outperforming humans in optimization speed and quality of solutions [55]. However, mainstream access to these tools remains limited due to requirements for programming knowledge and the numerous parameters these tools offer. To address this issue, we developed BOLLaMa. This artificial intelligence (AI)-powered chatbot simplifies BO for chemical reactions with an easy-to-use natural language interface, which facilitates access to a broader audience. [Screenshot of the BOLLaMa chat interface. Assistant: "Hi and welcome to BOLLaMa, the coolest llama for optimization in chemistry! How can I assist you today?" User: "Hi BOLLaMa! I want to optimize the additives to increase the yield of a"
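The suggest-measure-update loop that such a BO assistant drives can be sketched as below. This is a deliberately simplified illustration: the surrogate (value of the nearest observed point plus a distance-based exploration bonus) stands in for the Gaussian-process models and acquisition functions used by real BO tools, and `toy_yield` is a synthetic, non-physical yield function.

```python
def toy_yield(temp):
    # Synthetic "true" reaction yield, peaked at 80 degC (illustration
    # only; can go negative far from the optimum).
    return 100 - (temp - 80) ** 2 / 10

def suggest(candidates, observed, beta=5.0):
    """Suggest the next temperature: value at the nearest observed point
    plus an exploration bonus growing with distance from it (a crude
    stand-in for a GP-based acquisition function)."""
    def score(c):
        nearest = min(observed, key=lambda t: abs(t - c))
        return observed[nearest] + beta * abs(nearest - c) ** 0.5
    return max((c for c in candidates if c not in observed), key=score)

candidates = list(range(20, 141, 5))   # 20, 25, ..., 140 degC
observed = {40: toy_yield(40)}         # one initial experiment
for _ in range(8):                     # suggest -> measure -> update
    nxt = suggest(candidates, observed)
    observed[nxt] = toy_yield(nxt)     # "run" the experiment
best = max(observed, key=observed.get)
```

In a chat frontend like BOLLaMa, the LLM's role is to translate the user's natural-language request into calls to `suggest` and to report the results back, so the chemist never touches the optimizer's parameters directly.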
2306.06283#164
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
Large-language models (LLMs) such as GPT-4 caught the interest of many scientists. Recent studies suggested that these models could be useful in chemistry and materials science. To explore these possibilities, we organized a hackathon. This article chronicles the projects built as part of this hackathon. Participants employed LLMs for various applications, including predicting properties of molecules and materials, designing novel interfaces for tools, extracting knowledge from unstructured data, and developing new educational applications. The diverse topics and the fact that working prototypes could be generated in less than two days highlight that LLMs will profoundly impact the future of our fields. The rich collection of ideas and projects also indicates that the applications of LLMs are not limited to materials science and chemistry but offer potential benefits to a wide range of scientific disciplines.
http://arxiv.org/pdf/2306.06283
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
null
null
cond-mat.mtrl-sci
20230609
20230714
[ { "id": "2209.08203" }, { "id": "2212.04450" } ]
2306.05949
165
[219] R. Shelby, S. Rismani, K. Henne, A. Moon, N. Rostamzadeh, P. Nicholas, N. Yilla, J. Gallegos, A. Smart, E. Garcia, and G. Virk. Identifying Sociotechnical Harms of Algorithmic Systems: Scoping a Taxonomy for Harm Reduction, Feb. 2023. URL http://arxiv.org/abs/2210.05791. [220] J. Shi, Y. Liu, P. Zhou, and L. Sun. BadGPT: Exploring Security Vulnerabilities of ChatGPT via Backdoor Attacks to InstructGPT, Feb. 2023. URL http://arxiv.org/abs/2304.12298. arXiv:2304.12298 [cs]. [221] R. Shiller. Narrative Economics. Princeton University Press, Oct. 2019. ISBN 978-0-691-18229-2. URL https://press.princeton.edu/books/hardcover/9780691182292/narrative-economics.
2306.05949#165
Evaluating the Social Impact of Generative AI Systems in Systems and Society
Generative AI systems across modalities, ranging from text, image, audio, and video, have broad social impacts, but there exists no official standard for means of evaluating those impacts and which impacts should be evaluated. We move toward a standard approach in evaluating a generative AI system for any modality, in two overarching categories: what is able to be evaluated in a base system that has no predetermined application and what is able to be evaluated in society. We describe specific social impact categories and how to approach and conduct evaluations in the base technical system, then in people and society. Our framework for a base system defines seven categories of social impact: bias, stereotypes, and representational harms; cultural values and sensitive content; disparate performance; privacy and data protection; financial costs; environmental costs; and data and content moderation labor costs. Suggested methods for evaluation apply to all modalities and analyses of the limitations of existing evaluations serve as a starting point for necessary investment in future evaluations. We offer five overarching categories for what is able to be evaluated in society, each with their own subcategories: trustworthiness and autonomy; inequality, marginalization, and violence; concentration of authority; labor and creativity; and ecosystem and environment. Each subcategory includes recommendations for mitigating harm. We are concurrently crafting an evaluation repository for the AI research community to contribute existing evaluations along the given categories. This version will be updated following a CRAFT session at ACM FAccT 2023.
http://arxiv.org/pdf/2306.05949
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
cs.CY, cs.AI
null
null
cs.CY
20230609
20230612
[ { "id": "2007.04068" }, { "id": "2305.09800" }, { "id": "1908.09203" }, { "id": "2202.05520" }, { "id": "2302.10329" }, { "id": "2107.03374" }, { "id": "2210.06245" }, { "id": "2211.02001" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2301.10226" }, { "id": "2202.02647" }, { "id": "2112.10752" }, { "id": "2206.04615" }, { "id": "2202.00885" }, { "id": "2010.15581" }, { "id": "2305.09941" }, { "id": "2301.04246" }, { "id": "2304.12298" }, { "id": "2203.09509" }, { "id": "2207.14157" }, { "id": "2102.09692" }, { "id": "1804.10999" }, { "id": "2303.11156" }, { "id": "2104.06390" }, { "id": "2002.05651" } ]
2306.06283
165
[Figure: transcript of a BOLLaMa chat session. The user asks to optimize the additives to increase the yield of a decarboxylative amination reaction, starting with 4 experiments; the bot calls bo_init(n) and suggests four additives as SMILES strings. The user reports the resulting yields (4%, 12%, ...); the bot calls bo_step(data) and suggests the next additive, and the loop continues as results improve (e.g., 30%).] Figure 11. Schematic overview of BOLLaMa. An LLM can act as an interface to a BO algorithm. An experimental chemist can bootstrap an optimization and then, via a chat interface, update the state of the simulation, to which the bot responds with the recommended next steps. BOLLaMa combines LLMs with BO algorithms to assist chemical reaction optimization. The user-friendly interface allows even those with limited technical knowledge to engage with the tool.
2306.06283#165
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
Large-language models (LLMs) such as GPT-4 caught the interest of many scientists. Recent studies suggested that these models could be useful in chemistry and materials science. To explore these possibilities, we organized a hackathon. This article chronicles the projects built as part of this hackathon. Participants employed LLMs for various applications, including predicting properties of molecules and materials, designing novel interfaces for tools, extracting knowledge from unstructured data, and developing new educational applications. The diverse topics and the fact that working prototypes could be generated in less than two days highlight that LLMs will profoundly impact the future of our fields. The rich collection of ideas and projects also indicates that the applications of LLMs are not limited to materials science and chemistry but offer potential benefits to a wide range of scientific disciplines.
http://arxiv.org/pdf/2306.06283
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
null
null
cond-mat.mtrl-sci
20230609
20230714
[ { "id": "2209.08203" }, { "id": "2212.04450" } ]
2306.05949
166
[222] J. Shin and S. Chan-Olmsted. User perceptions and trust of explainable machine learning fake news detectors. International Journal of Communication, 17(0), 2022. ISSN 1932-8036. URL https://ijoc.org/index.php/ijoc/article/view/19534. [223] T. Simonite. When It Comes to Gorillas, Google Photos Remains Blind. Wired, 2018. ISSN 1059-1028. URL https://www.wired.com/story/when-it-comes-to-gorillas-google-photos-remains-blind/. [224] A. Simpson. On Ethnographic Refusal: Indigeneity, ‘Voice’ and Colonial Citizenship. Junctures, (9), 2007. [225] M. Skjuve, A. Følstad, K. I. Fostervold, and P. B. Brandtzaeg. My Chatbot Companion - a Study of Human-Chatbot Relationships. International Journal of Human-Computer Studies, 149:102601, May 2021. ISSN 1071-5819. doi: 10.1016/j.ijhcs.2021.102601. URL https://www.sciencedirect.com/science/article/pii/S1071581921000197.
2306.05949#166
Evaluating the Social Impact of Generative AI Systems in Systems and Society
Generative AI systems across modalities, ranging from text, image, audio, and video, have broad social impacts, but there exists no official standard for means of evaluating those impacts and which impacts should be evaluated. We move toward a standard approach in evaluating a generative AI system for any modality, in two overarching categories: what is able to be evaluated in a base system that has no predetermined application and what is able to be evaluated in society. We describe specific social impact categories and how to approach and conduct evaluations in the base technical system, then in people and society. Our framework for a base system defines seven categories of social impact: bias, stereotypes, and representational harms; cultural values and sensitive content; disparate performance; privacy and data protection; financial costs; environmental costs; and data and content moderation labor costs. Suggested methods for evaluation apply to all modalities and analyses of the limitations of existing evaluations serve as a starting point for necessary investment in future evaluations. We offer five overarching categories for what is able to be evaluated in society, each with their own subcategories: trustworthiness and autonomy; inequality, marginalization, and violence; concentration of authority; labor and creativity; and ecosystem and environment. Each subcategory includes recommendations for mitigating harm. We are concurrently crafting an evaluation repository for the AI research community to contribute existing evaluations along the given categories. This version will be updated following a CRAFT session at ACM FAccT 2023.
http://arxiv.org/pdf/2306.05949
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
cs.CY, cs.AI
null
null
cs.CY
20230609
20230612
[ { "id": "2007.04068" }, { "id": "2305.09800" }, { "id": "1908.09203" }, { "id": "2202.05520" }, { "id": "2302.10329" }, { "id": "2107.03374" }, { "id": "2210.06245" }, { "id": "2211.02001" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2301.10226" }, { "id": "2202.02647" }, { "id": "2112.10752" }, { "id": "2206.04615" }, { "id": "2202.00885" }, { "id": "2010.15581" }, { "id": "2305.09941" }, { "id": "2301.04246" }, { "id": "2304.12298" }, { "id": "2203.09509" }, { "id": "2207.14157" }, { "id": "2102.09692" }, { "id": "1804.10999" }, { "id": "2303.11156" }, { "id": "2104.06390" }, { "id": "2002.05651" } ]
2306.06283
166
combines LLMs with BO algorithms to assist chemical reaction optimization. The user-friendly interface allows even those with limited technical knowledge to engage with the tool. BOLLaMa’s current implementation provides two main tools, the initialization function and the optimization step function [56], which are retrieved on demand by the LLM as shown in Figure 11. The primary contribution of this project is democratizing access to advanced BO techniques in chemistry, promoting widespread adoption of sustainable optimization tools, and impacting sustainability efforts within the community. This approach can be further enhanced to provide a more comprehensive assistant experience, such as with additional recommendations or safety warnings, and to improve the explainability of the BO process to foster user trust and informed decision-making. Key insights gained from this project include the critical role of accessibility in developing expert tools and the potential of LLMs in chemistry through various agent architectures [50]. In addition, the initial BO tool adapted for BOLLaMa was designed for closed-loop automated laboratories, emphasizing the need for accessible tools catering to diverse user backgrounds. One-sentence summaries: a. Problem/Task Giving scientists without coding and machine learning expertise access
2306.06283#166
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
Large-language models (LLMs) such as GPT-4 caught the interest of many scientists. Recent studies suggested that these models could be useful in chemistry and materials science. To explore these possibilities, we organized a hackathon. This article chronicles the projects built as part of this hackathon. Participants employed LLMs for various applications, including predicting properties of molecules and materials, designing novel interfaces for tools, extracting knowledge from unstructured data, and developing new educational applications. The diverse topics and the fact that working prototypes could be generated in less than two days highlight that LLMs will profoundly impact the future of our fields. The rich collection of ideas and projects also indicates that the applications of LLMs are not limited to materials science and chemistry but offer potential benefits to a wide range of scientific disciplines.
http://arxiv.org/pdf/2306.06283
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
null
null
cond-mat.mtrl-sci
20230609
20230714
[ { "id": "2209.08203" }, { "id": "2212.04450" } ]
2306.05949
167
Internet Policy Review, 10(3), Sep 2021. URL https://policyreview.info/articles/analysis/ beyond-individual-governing-ais-societal-harm. [227] I. Solaiman. The Gradient of Generative AI Release: Methods and Considerations. 2023. doi: 10.48550/ARXIV.2302.04844. URL https://arxiv.org/abs/2302.04844. [228] I. Solaiman and C. Dennison. Process for Adapting Language Models to Society (PALMS) with Values-Targeted Datasets, 2021-11-23. URL http://arxiv.org/abs/2106.10328. [229] I. Solaiman, M. Brundage, J. Clark, A. Askell, A. Herbert-Voss, J. Wu, A. Radford, G. Krueger, J. W. Kim, S. Kreps, M. McCain, A. Newhouse, J. Blazakis, K. McGuffie, and J. Wang. Release Strategies and the Social Impacts of Language Models, Nov. 2019. URL http: //arxiv.org/abs/1908.09203. arXiv:1908.09203 [cs].
2306.05949#167
Evaluating the Social Impact of Generative AI Systems in Systems and Society
Generative AI systems across modalities, ranging from text, image, audio, and video, have broad social impacts, but there exists no official standard for means of evaluating those impacts and which impacts should be evaluated. We move toward a standard approach in evaluating a generative AI system for any modality, in two overarching categories: what is able to be evaluated in a base system that has no predetermined application and what is able to be evaluated in society. We describe specific social impact categories and how to approach and conduct evaluations in the base technical system, then in people and society. Our framework for a base system defines seven categories of social impact: bias, stereotypes, and representational harms; cultural values and sensitive content; disparate performance; privacy and data protection; financial costs; environmental costs; and data and content moderation labor costs. Suggested methods for evaluation apply to all modalities and analyses of the limitations of existing evaluations serve as a starting point for necessary investment in future evaluations. We offer five overarching categories for what is able to be evaluated in society, each with their own subcategories: trustworthiness and autonomy; inequality, marginalization, and violence; concentration of authority; labor and creativity; and ecosystem and environment. Each subcategory includes recommendations for mitigating harm. We are concurrently crafting an evaluation repository for the AI research community to contribute existing evaluations along the given categories. This version will be updated following a CRAFT session at ACM FAccT 2023.
http://arxiv.org/pdf/2306.05949
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
cs.CY, cs.AI
null
null
cs.CY
20230609
20230612
[ { "id": "2007.04068" }, { "id": "2305.09800" }, { "id": "1908.09203" }, { "id": "2202.05520" }, { "id": "2302.10329" }, { "id": "2107.03374" }, { "id": "2210.06245" }, { "id": "2211.02001" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2301.10226" }, { "id": "2202.02647" }, { "id": "2112.10752" }, { "id": "2206.04615" }, { "id": "2202.00885" }, { "id": "2010.15581" }, { "id": "2305.09941" }, { "id": "2301.04246" }, { "id": "2304.12298" }, { "id": "2203.09509" }, { "id": "2207.14157" }, { "id": "2102.09692" }, { "id": "1804.10999" }, { "id": "2303.11156" }, { "id": "2104.06390" }, { "id": "2002.05651" } ]
2306.05949
168
[230] D. J. Solove. A Taxonomy of Privacy. University of Pennsylvania Law Review, 154(3):477–564, 2006. ISSN 0041-9907. doi: 10.2307/40041279. URL https://www.jstor.org/stable/40041279. Publisher: The University of Pennsylvania Law Review. [231] A. Spirling. Why open-source generative AI models are an ethical way forward for science. Nature, 616(7957):413–413, Apr 2023. doi: https://doi.org/10.1038/d41586-023-01295-4.
2306.05949#168
Evaluating the Social Impact of Generative AI Systems in Systems and Society
Generative AI systems across modalities, ranging from text, image, audio, and video, have broad social impacts, but there exists no official standard for means of evaluating those impacts and which impacts should be evaluated. We move toward a standard approach in evaluating a generative AI system for any modality, in two overarching categories: what is able to be evaluated in a base system that has no predetermined application and what is able to be evaluated in society. We describe specific social impact categories and how to approach and conduct evaluations in the base technical system, then in people and society. Our framework for a base system defines seven categories of social impact: bias, stereotypes, and representational harms; cultural values and sensitive content; disparate performance; privacy and data protection; financial costs; environmental costs; and data and content moderation labor costs. Suggested methods for evaluation apply to all modalities and analyses of the limitations of existing evaluations serve as a starting point for necessary investment in future evaluations. We offer five overarching categories for what is able to be evaluated in society, each with their own subcategories: trustworthiness and autonomy; inequality, marginalization, and violence; concentration of authority; labor and creativity; and ecosystem and environment. Each subcategory includes recommendations for mitigating harm. We are concurrently crafting an evaluation repository for the AI research community to contribute existing evaluations along the given categories. This version will be updated following a CRAFT session at ACM FAccT 2023.
http://arxiv.org/pdf/2306.05949
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
cs.CY, cs.AI
null
null
cs.CY
20230609
20230612
[ { "id": "2007.04068" }, { "id": "2305.09800" }, { "id": "1908.09203" }, { "id": "2202.05520" }, { "id": "2302.10329" }, { "id": "2107.03374" }, { "id": "2210.06245" }, { "id": "2211.02001" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2301.10226" }, { "id": "2202.02647" }, { "id": "2112.10752" }, { "id": "2206.04615" }, { "id": "2202.00885" }, { "id": "2010.15581" }, { "id": "2305.09941" }, { "id": "2301.04246" }, { "id": "2304.12298" }, { "id": "2203.09509" }, { "id": "2207.14157" }, { "id": "2102.09692" }, { "id": "1804.10999" }, { "id": "2303.11156" }, { "id": "2104.06390" }, { "id": "2002.05651" } ]
2306.06283
168
b. Approach LLM as a chat interface for a Python package for Bayesian optimization, using a ReAct-like approach in which the LLM has access to text descriptions of relevant functions (such as initialization and stepping of the BO run). c. Results and Impact The chat interface can successfully initialize a BO run and then convert observations reported in natural language into calls to the stepping function of the BO tool. d. Challenges and Future Work As with most LLM agents, the tool suffers from robustness issues, and correct functioning cannot be guaranteed for all possible prompts.
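The two tools the agent retrieves can be sketched as plain Python functions. This is a minimal illustration only: the names `bo_init` and `bo_step` follow the figure, but the candidate pool and the suggestion logic (pick an untried additive) are stand-in assumptions, not the actual Bayesian optimizer behind BOLLaMa, which would fit a surrogate model and maximize an acquisition function.

```python
import random

# Hypothetical candidate pool of additive SMILES strings (stand-in data).
CANDIDATES = ["CCO", "CC(=O)O", "c1ccccc1", "CCN", "CC#N", "COC", "CCCl", "CCBr"]

def bo_init(n, seed=0):
    """Initialization tool: propose n starting additives to screen."""
    rng = random.Random(seed)
    return rng.sample(CANDIDATES, n)

def bo_step(data):
    """Stepping tool: given {smiles: yield} observations, suggest the next additive.

    A real BO implementation would fit a surrogate model here; this sketch
    simply returns the first untried candidate.
    """
    untried = [c for c in CANDIDATES if c not in data]
    return untried[0] if untried else max(data, key=data.get)

# The LLM would translate "my yields were 4% and 12%" into calls like these:
suggestions = bo_init(4)
next_additive = bo_step({suggestions[0]: 4, suggestions[1]: 12})
```

A ReAct-style agent would see only the docstrings of these two functions and decide, from the user's natural-language message, which one to call and with what arguments.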
2306.06283#168
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
Large-language models (LLMs) such as GPT-4 caught the interest of many scientists. Recent studies suggested that these models could be useful in chemistry and materials science. To explore these possibilities, we organized a hackathon. This article chronicles the projects built as part of this hackathon. Participants employed LLMs for various applications, including predicting properties of molecules and materials, designing novel interfaces for tools, extracting knowledge from unstructured data, and developing new educational applications. The diverse topics and the fact that working prototypes could be generated in less than two days highlight that LLMs will profoundly impact the future of our fields. The rich collection of ideas and projects also indicates that the applications of LLMs are not limited to materials science and chemistry but offer potential benefits to a wide range of scientific disciplines.
http://arxiv.org/pdf/2306.06283
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
null
null
cond-mat.mtrl-sci
20230609
20230714
[ { "id": "2209.08203" }, { "id": "2212.04450" } ]
2306.05949
169
[232] A. Srivastava, A. Rastogi, A. Rao, A. A. M. Shoeb, A. Abid, A. Fisch, A. R. Brown, A. Santoro, A. Gupta, A. Garriga-Alonso, A. Kluska, A. Lewkowycz, A. Agarwal, A. Power, A. Ray, A. Warstadt, A. W. Kocurek, A. Safaya, A. Tazarv, A. Xiang, A. Parrish, A. Nie, A. Hussain, A. Askell, A. Dsouza, A. Slone, A. Rahane, A. S. Iyer, A. Andreassen, A. Madotto, A. Santilli, A. Stuhlmüller, A. Dai, A. La, A. Lampinen, A. Zou, A. Jiang, A. Chen, A. Vuong, A. Gupta, A. Gottardi, A. Norelli, A. Venkatesh, A. Gholamidavoodi, A. Tabassum, A. Menezes, A. Kirubarajan, A. Mullokandov, A.
2306.05949#169
Evaluating the Social Impact of Generative AI Systems in Systems and Society
Generative AI systems across modalities, ranging from text, image, audio, and video, have broad social impacts, but there exists no official standard for means of evaluating those impacts and which impacts should be evaluated. We move toward a standard approach in evaluating a generative AI system for any modality, in two overarching categories: what is able to be evaluated in a base system that has no predetermined application and what is able to be evaluated in society. We describe specific social impact categories and how to approach and conduct evaluations in the base technical system, then in people and society. Our framework for a base system defines seven categories of social impact: bias, stereotypes, and representational harms; cultural values and sensitive content; disparate performance; privacy and data protection; financial costs; environmental costs; and data and content moderation labor costs. Suggested methods for evaluation apply to all modalities and analyses of the limitations of existing evaluations serve as a starting point for necessary investment in future evaluations. We offer five overarching categories for what is able to be evaluated in society, each with their own subcategories: trustworthiness and autonomy; inequality, marginalization, and violence; concentration of authority; labor and creativity; and ecosystem and environment. Each subcategory includes recommendations for mitigating harm. We are concurrently crafting an evaluation repository for the AI research community to contribute existing evaluations along the given categories. This version will be updated following a CRAFT session at ACM FAccT 2023.
http://arxiv.org/pdf/2306.05949
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
cs.CY, cs.AI
null
null
cs.CY
20230609
20230612
[ { "id": "2007.04068" }, { "id": "2305.09800" }, { "id": "1908.09203" }, { "id": "2202.05520" }, { "id": "2302.10329" }, { "id": "2107.03374" }, { "id": "2210.06245" }, { "id": "2211.02001" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2301.10226" }, { "id": "2202.02647" }, { "id": "2112.10752" }, { "id": "2206.04615" }, { "id": "2202.00885" }, { "id": "2010.15581" }, { "id": "2305.09941" }, { "id": "2301.04246" }, { "id": "2304.12298" }, { "id": "2203.09509" }, { "id": "2207.14157" }, { "id": "2102.09692" }, { "id": "1804.10999" }, { "id": "2303.11156" }, { "id": "2104.06390" }, { "id": "2002.05651" } ]
2306.06283
169
Knowledge Extraction: InsightGraph. [Figure content: an example JSON file generated from an abstract. Abstract: "15 nm diameter SiO2 nanoparticles with a grafted block copolymer consisting of a 5 nm rubbery polyhexylmethacrylate (PHMA) inner block and a 30 nm outer block of matrix-compatible polyglycidylmethacrylate (PGMA) were synthesized to toughen an epoxy. A systematic study of the effect of block copolymer graft density (from 0.07 to 0.7 chains/nm2) and block molecular weight (from 20 to 80 kg/mol) on the tensile behavior, fracture toughness, and fatigue properties was conducted. ..." Extracted nodes include "SiO2 nanoparticles" and "grafted block copolymer"; extracted edges include { "source": 1, "target": 6, "type": "properties" }.] Figure 12. The InsightGraph interface. A suitably prompted LLM can create knowledge-graph representations of scientific text that can be visualized using tools such as neo4j's visualization tools. The traditional method of performing a literature review [...] find crucial in[...]; knowledge graphs are sources of structured [...] downstream machine-learning tasks [...] broad materials science domains [...] empower material scientists to [...] [58] as well as more-focused
2306.06283#169
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
Large-language models (LLMs) such as GPT-4 caught the interest of many scientists. Recent studies suggested that these models could be useful in chemistry and materials science. To explore these possibilities, we organized a hackathon. This article chronicles the projects built as part of this hackathon. Participants employed LLMs for various applications, including predicting properties of molecules and materials, designing novel interfaces for tools, extracting knowledge from unstructured data, and developing new educational applications. The diverse topics and the fact that working prototypes could be generated in less than two days highlight that LLMs will profoundly impact the future of our fields. The rich collection of ideas and projects also indicates that the applications of LLMs are not limited to materials science and chemistry but offer potential benefits to a wide range of scientific disciplines.
http://arxiv.org/pdf/2306.06283
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
null
null
cond-mat.mtrl-sci
20230609
20230714
[ { "id": "2209.08203" }, { "id": "2212.04450" } ]
2306.05949
170
A. Gholamidavoodi, A. Tabassum, A. Menezes, A. Kirubarajan, A. Mullokandov, A. Sabharwal, A. Herrick, A. Efrat, A. Erdem, A. Karaka¸s, B. R. Roberts, B. S. Loe, B. Zoph, B. Bojanowski, B. Özyurt, B. Hedayatnia, B. Neyshabur, B. Inden, B. Stein, B. Ekmekci, B. Y. Lin, B. Howald, C. Diao, C. Dour, C. Stinson, C. Ar- gueta, C. F. Ramírez, C. Singh, C. Rathkopf, C. Meng, C. Baral, C. Wu, C. Callison-Burch, C. Waites, C. Voigt, C. D. Manning, C. Potts, C. Ramirez, C. E. Rivera, C. Siro, C. Raffel, C. Ashcraft, C. Garbacea, D. Sileo, D. Garrette, D. Hendrycks, D. Kilman, D. Roth, D. Free-
2306.05949#170
Evaluating the Social Impact of Generative AI Systems in Systems and Society
Generative AI systems across modalities, ranging from text, image, audio, and video, have broad social impacts, but there exists no official standard for means of evaluating those impacts and which impacts should be evaluated. We move toward a standard approach in evaluating a generative AI system for any modality, in two overarching categories: what is able to be evaluated in a base system that has no predetermined application and what is able to be evaluated in society. We describe specific social impact categories and how to approach and conduct evaluations in the base technical system, then in people and society. Our framework for a base system defines seven categories of social impact: bias, stereotypes, and representational harms; cultural values and sensitive content; disparate performance; privacy and data protection; financial costs; environmental costs; and data and content moderation labor costs. Suggested methods for evaluation apply to all modalities and analyses of the limitations of existing evaluations serve as a starting point for necessary investment in future evaluations. We offer five overarching categories for what is able to be evaluated in society, each with their own subcategories: trustworthiness and autonomy; inequality, marginalization, and violence; concentration of authority; labor and creativity; and ecosystem and environment. Each subcategory includes recommendations for mitigating harm. We are concurrently crafting an evaluation repository for the AI research community to contribute existing evaluations along the given categories. This version will be updated following a CRAFT session at ACM FAccT 2023.
http://arxiv.org/pdf/2306.05949
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
cs.CY, cs.AI
null
null
cs.CY
20230609
20230612
[ { "id": "2007.04068" }, { "id": "2305.09800" }, { "id": "1908.09203" }, { "id": "2202.05520" }, { "id": "2302.10329" }, { "id": "2107.03374" }, { "id": "2210.06245" }, { "id": "2211.02001" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2301.10226" }, { "id": "2202.02647" }, { "id": "2112.10752" }, { "id": "2206.04615" }, { "id": "2202.00885" }, { "id": "2010.15581" }, { "id": "2305.09941" }, { "id": "2301.04246" }, { "id": "2304.12298" }, { "id": "2203.09509" }, { "id": "2207.14157" }, { "id": "2102.09692" }, { "id": "1804.10999" }, { "id": "2303.11156" }, { "id": "2104.06390" }, { "id": "2002.05651" } ]
2306.06283
170
sources of structured information that enable data visualization, data discovery, insights, and downstream machine-learning tasks. Knowledge graphs extracted from published scientific literature covering broad materials science domains [58] as well as more-focused domains such as polymer nanocomposites [59] empower material scientists to discover new concepts and accelerate research. Performing a literature review involves months of reading relevant articles to extract information on material properties, structure, reaction pathways, and applications. An LLM can create knowledge graph representations of scientific text for use with visualization tools. [57] Until recently, capturing the complex and hierarchical relationships for a knowledge graph within the materials science literature was a time-consuming effort, often spanning multi-disciplinary collaborations and many Ph.D. years. By leveraging zero- to few-shot training and pre-trained LLMs, it is now possible to rapidly extract complex scientific entities with minimal technical expertise [58, 60, 61]. We envision that knowledge graphs built by LLMs based on scientific publications can offer a concise and visual means to launch a literature review. To demonstrate a proof of concept of zero-shot entity and relationship extraction, we identified 200
2306.06283#170
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
Large-language models (LLMs) such as GPT-4 caught the interest of many scientists. Recent studies suggested that these models could be useful in chemistry and materials science. To explore these possibilities, we organized a hackathon. This article chronicles the projects built as part of this hackathon. Participants employed LLMs for various applications, including predicting properties of molecules and materials, designing novel interfaces for tools, extracting knowledge from unstructured data, and developing new educational applications. The diverse topics and the fact that working prototypes could be generated in less than two days highlight that LLMs will profoundly impact the future of our fields. The rich collection of ideas and projects also indicates that the applications of LLMs are not limited to materials science and chemistry but offer potential benefits to a wide range of scientific disciplines.
http://arxiv.org/pdf/2306.06283
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
null
null
cond-mat.mtrl-sci
20230609
20230714
[ { "id": "2209.08203" }, { "id": "2212.04450" } ]
2306.05949
171
C. Garbacea, D. Sileo, D. Garrette, D. Hendrycks, D. Kilman, D. Roth, D. Freeman, D. Khashabi, D. Levy, D. M. González, D. Perszyk, D. Hernandez, D. Chen, D. Ippolito, D. Gilboa, D. Dohan, D. Drakard, D. Jurgens, D. Datta, D. Ganguli, D. Emelin, D. Kleyko, D. Yuret, D. Chen, D. Tam, D. Hupkes, D. Misra, D. Buzan, D. C. Mollo, D. Yang, D.-H. Lee, E. Shutova, E. D. Cubuk, E. Segal, E. Hagerman, E. Barnes, E. Donoway, E. Pavlick, E. Rodola, E. Lam, E. Chu, E. Tang, E. Erdem, E. Chang, E. A. Chi, E. Dyer, E. Jerzak, E. Kim, E. E. Manyasi, E. Zheltonozhskii, F. Xia, F. Siar, F.
2306.05949#171
Evaluating the Social Impact of Generative AI Systems in Systems and Society
Generative AI systems across modalities, ranging from text, image, audio, and video, have broad social impacts, but there exists no official standard for means of evaluating those impacts and which impacts should be evaluated. We move toward a standard approach in evaluating a generative AI system for any modality, in two overarching categories: what is able to be evaluated in a base system that has no predetermined application and what is able to be evaluated in society. We describe specific social impact categories and how to approach and conduct evaluations in the base technical system, then in people and society. Our framework for a base system defines seven categories of social impact: bias, stereotypes, and representational harms; cultural values and sensitive content; disparate performance; privacy and data protection; financial costs; environmental costs; and data and content moderation labor costs. Suggested methods for evaluation apply to all modalities and analyses of the limitations of existing evaluations serve as a starting point for necessary investment in future evaluations. We offer five overarching categories for what is able to be evaluated in society, each with their own subcategories: trustworthiness and autonomy; inequality, marginalization, and violence; concentration of authority; labor and creativity; and ecosystem and environment. Each subcategory includes recommendations for mitigating harm. We are concurrently crafting an evaluation repository for the AI research community to contribute existing evaluations along the given categories. This version will be updated following a CRAFT session at ACM FAccT 2023.
http://arxiv.org/pdf/2306.05949
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
cs.CY, cs.AI
null
null
cs.CY
20230609
20230612
[ { "id": "2007.04068" }, { "id": "2305.09800" }, { "id": "1908.09203" }, { "id": "2202.05520" }, { "id": "2302.10329" }, { "id": "2107.03374" }, { "id": "2210.06245" }, { "id": "2211.02001" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2301.10226" }, { "id": "2202.02647" }, { "id": "2112.10752" }, { "id": "2206.04615" }, { "id": "2202.00885" }, { "id": "2010.15581" }, { "id": "2305.09941" }, { "id": "2301.04246" }, { "id": "2304.12298" }, { "id": "2203.09509" }, { "id": "2207.14157" }, { "id": "2102.09692" }, { "id": "1804.10999" }, { "id": "2303.11156" }, { "id": "2104.06390" }, { "id": "2002.05651" } ]
2306.06283
171
entities with minimal technical expertise [58, 60, 61]. We envision that knowledge graphs built by LLMs based on scientific publications can offer a concise and visual means to launch a literature review. To demonstrate a proof of concept of zero-shot entity and relationship extraction, we identified 200 abstracts on polymer-nanocomposite materials for which detailed structured information was already available [62]. Each abstract was fed as a prompt to GPT-3.5-turbo, a language model powering the popular ChatGPT web application by OpenAI. The instructions in our prompt consisted of an example JSON containing a high-level schema and information on possible entities and pairwise relationships. The nodes and relationships in the output JSON response were then stored in a neo4j graph database using Cypher, a graph query language (Figure 12). [57] The zero-shot capabilities of the model allowed the specification of arbitrary entity and relationship types depending upon the information contained in the text. Given that this required a change in the neo4j pipeline every time the prompt changed, we found it necessary to constrain the JSON schema to a standard format. While large language models on their own are prone to hallucinations, leveraging them with guidance to create structured databases empowers chemists/materials scientists with no expertise in natural language processing to search and build on existing knowledge, leading to new insights. The speed at which LLMs can create structured graphs dramatically exceeds the years required for humans to manually curate data into existing knowledge graphs. Access to structured databases will accelerate the pace of data-driven materials science research, synthesizing details embedded in dispersed scientific publications. Additionally, other scientific fields could benefit from a similar use of LLMs to extract entities and relationships to build knowledge graphs. Owing to the non-deterministic nature of LLMs, we found that
2306.06283#171
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
Large-language models (LLMs) such as GPT-4 caught the interest of many scientists. Recent studies suggested that these models could be useful in chemistry and materials science. To explore these possibilities, we organized a hackathon. This article chronicles the projects built as part of this hackathon. Participants employed LLMs for various applications, including predicting properties of molecules and materials, designing novel interfaces for tools, extracting knowledge from unstructured data, and developing new educational applications. The diverse topics and the fact that working prototypes could be generated in less than two days highlight that LLMs will profoundly impact the future of our fields. The rich collection of ideas and projects also indicates that the applications of LLMs are not limited to materials science and chemistry but offer potential benefits to a wide range of scientific disciplines.
http://arxiv.org/pdf/2306.06283
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
null
null
cond-mat.mtrl-sci
20230609
20230714
[ { "id": "2209.08203" }, { "id": "2212.04450" } ]
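The zero-shot extraction workflow described in the chunk above — show the model an example JSON schema, feed an abstract as the prompt, and parse the JSON reply — can be sketched as follows. The schema, node labels, and canned reply are illustrative assumptions, not the hackathon team's exact prompt; a real run would send the output of `build_prompt(...)` to GPT-3.5-turbo instead of using the stubbed reply.

```python
import json

# Hypothetical, simplified example JSON shown to the model to fix the output schema.
EXAMPLE = {
    "nodes": [{"id": "epoxy", "label": "Matrix"},
              {"id": "silica", "label": "Filler"}],
    "relationships": [{"source": "silica", "target": "epoxy",
                       "type": "DISPERSED_IN"}],
}

def build_prompt(abstract: str) -> str:
    """Zero-shot instruction: constrain the reply to the example schema."""
    return (
        "Extract materials-science entities and pairwise relationships from "
        "the abstract below. Reply with JSON only, following exactly this "
        f"schema:\n{json.dumps(EXAMPLE)}\n\nAbstract:\n{abstract}"
    )

def parse_graph(reply: str):
    """Parse the model reply; a strict key check catches schema drift."""
    data = json.loads(reply)
    assert set(data) == {"nodes", "relationships"}, "schema violation"
    return data["nodes"], data["relationships"]

# Stubbed model reply so the sketch runs offline; in practice this string
# would come from a GPT-3.5-turbo chat-completion call.
canned_reply = json.dumps({
    "nodes": [{"id": "PMMA", "label": "Matrix"},
              {"id": "graphene", "label": "Filler"}],
    "relationships": [{"source": "graphene", "target": "PMMA",
                       "type": "DISPERSED_IN"}],
})
nodes, rels = parse_graph(canned_reply)
print(len(nodes), rels[0]["type"])
```

The strict key check mirrors the authors' observation that constraining the JSON schema was necessary to keep the downstream graph pipeline stable.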
2306.06283
172
The instructions in our prompt consisted of an example JSON containing a high-level schema and information on possible entities and pairwise relationships. The nodes and relationships in the output JSON response were then stored in a neo4j graph database using Cypher, a graph query language. The zero-shot capabilities of the model allowed the specification of arbitrary entity and relationship types depending upon the information contained in the text. Given that this required a change in the neo4j pipeline every time the prompt changed, we found it necessary to constrain the JSON schema to a standard format. While large language models on their own are prone to hallucinations, leveraging them with guidance to create structured databases empowers chemists/materials scientists with no expertise in natural language processing to search and build on existing knowledge, leading to new insights. The speed at which LLMs can create structured graphs dramatically exceeds the years required for humans to manually curate data into existing knowledge graphs. Access to structured databases will accelerate the pace of data-driven materials science research, synthesizing details embedded in dispersed scientific publications. Additionally, other scientific fields could benefit from a similar use of LLMs to extract entities and relationships to build knowledge graphs. Owing to the non-deterministic nature of LLMs, we found that the output response would vary even when the same prompt was provided. An instruction constraining the JSON schema minimized the variability. A systematic study comparing different foundation models, prompt techniques (zero-shot, one-shot, few-shot), prompt chaining, and the role of fine-tuning is needed to evaluate the precision and recall of extracted entities
2306.06283#172
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
Large-language models (LLMs) such as GPT-4 caught the interest of many scientists. Recent studies suggested that these models could be useful in chemistry and materials science. To explore these possibilities, we organized a hackathon. This article chronicles the projects built as part of this hackathon. Participants employed LLMs for various applications, including predicting properties of molecules and materials, designing novel interfaces for tools, extracting knowledge from unstructured data, and developing new educational applications. The diverse topics and the fact that working prototypes could be generated in less than two days highlight that LLMs will profoundly impact the future of our fields. The rich collection of ideas and projects also indicates that the applications of LLMs are not limited to materials science and chemistry but offer potential benefits to a wide range of scientific disciplines.
http://arxiv.org/pdf/2306.06283
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
null
null
cond-mat.mtrl-sci
20230609
20230714
[ { "id": "2209.08203" }, { "id": "2212.04450" } ]
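Loading the extracted nodes and relationships into neo4j, as the chunk above describes, amounts to emitting Cypher MERGE statements. A minimal sketch — the labels, property names, and statement shapes are assumptions for illustration, not the team's actual pipeline:

```python
def to_cypher(nodes, relationships):
    """Turn an extracted JSON graph into Cypher MERGE statements suitable
    for a neo4j database (node labels and properties are illustrative)."""
    stmts = [f'MERGE (:`{n["label"]}` {{id: "{n["id"]}"}})' for n in nodes]
    for r in relationships:
        stmts.append(
            f'MATCH (a {{id: "{r["source"]}"}}), (b {{id: "{r["target"]}"}}) '
            f'MERGE (a)-[:`{r["type"]}`]->(b)'
        )
    return stmts

stmts = to_cypher(
    [{"id": "silica", "label": "Filler"}, {"id": "epoxy", "label": "Matrix"}],
    [{"source": "silica", "target": "epoxy", "type": "DISPERSED_IN"}],
)
print("\n".join(stmts))
```

Because the statement templates are fixed, any change in the entity or relationship types the LLM emits would require regenerating these queries — which is why the authors found a constrained JSON schema necessary.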
2306.06283
173
and relationships. Notably, pairwise links between the nodes are often not enough to model the complex nature of materials, requiring improvement in the input schema.

# One sentence summaries
a. Problem/Task Extraction of entities and their relationships from text.
b. Approach GPT-3.5-turbo prompted with an abstract, an example JSON, and the task to extract entities and their relationships in the structure provided in the example.
c. Results and Impact The approach can successfully create meaningful JSON data structures with extracted entities and their relationships for hundreds of abstracts.
d. Challenges and Future Work The non-deterministic behavior of LLMs can lead to variability and fragile behavior. To better understand this as well as the performance of this approach, more systematic benchmarking is needed.
2306.06283#173
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
Large-language models (LLMs) such as GPT-4 caught the interest of many scientists. Recent studies suggested that these models could be useful in chemistry and materials science. To explore these possibilities, we organized a hackathon. This article chronicles the projects built as part of this hackathon. Participants employed LLMs for various applications, including predicting properties of molecules and materials, designing novel interfaces for tools, extracting knowledge from unstructured data, and developing new educational applications. The diverse topics and the fact that working prototypes could be generated in less than two days highlight that LLMs will profoundly impact the future of our fields. The rich collection of ideas and projects also indicates that the applications of LLMs are not limited to materials science and chemistry but offer potential benefits to a wide range of scientific disciplines.
http://arxiv.org/pdf/2306.06283
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
null
null
cond-mat.mtrl-sci
20230609
20230714
[ { "id": "2209.08203" }, { "id": "2212.04450" } ]
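The variability noted in the summary above — different replies to the same prompt — is commonly handled by validating each reply against the instructed schema and re-prompting on failure. A stdlib-only sketch, using an illustrative two-key schema rather than the authors' exact format:

```python
import json

# Illustrative required top-level keys and their expected JSON types.
REQUIRED = {"nodes": list, "relationships": list}

def valid(reply: str) -> bool:
    """Reject replies that drift from the instructed schema; a caller can
    re-prompt on failure, one pragmatic answer to LLM non-determinism."""
    try:
        data = json.loads(reply)
    except json.JSONDecodeError:
        return False
    return (set(data) == set(REQUIRED)
            and all(isinstance(data[k], t) for k, t in REQUIRED.items()))

assert valid('{"nodes": [], "relationships": []}')
assert not valid('Sure! Here is the graph: {"nodes": []}')  # chatty preamble
```

Checking structure rather than exact content tolerates benign run-to-run differences while still catching the schema violations that would break a fixed graph-loading pipeline.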
2306.05949
174
F. Chollet, F. Rong, G. Mishra, G. I. Winata, G. de Melo, G. Kruszewski, G. Parascandolo, G. Mariani, G. Wang, G. Jaimovitch-López, G. Betz, G. Gur-Ari, H. Galijasevic, H. Kim, H. Rashkin, H. Hajishirzi, H. Mehta, H. Bogar, H. Shevlin, H. Schütze, H. Yakura, H. Zhang, H. M. Wong, I. Ng, I. Noble, J. Jumelet, J. Geissinger, J. Kernion, J. Hilton, J. Lee, J. F. Fisac, J. B. Simon, J. Koppel, J. Zheng, J. Zou, J. Kocoń, J. Thompson, J. Kaplan, J. Radom, J. Sohl-Dickstein, J. Phang, J. Wei, J. Yosinski, J. Novikova, J. Bosscher, J. Marsh, J. Kim, J. Taal, J. Engel, J. Alabi, J. Xu, J.
2306.05949#174
Evaluating the Social Impact of Generative AI Systems in Systems and Society
Generative AI systems across modalities, ranging from text, image, audio, and video, have broad social impacts, but there exists no official standard for means of evaluating those impacts and which impacts should be evaluated. We move toward a standard approach in evaluating a generative AI system for any modality, in two overarching categories: what is able to be evaluated in a base system that has no predetermined application and what is able to be evaluated in society. We describe specific social impact categories and how to approach and conduct evaluations in the base technical system, then in people and society. Our framework for a base system defines seven categories of social impact: bias, stereotypes, and representational harms; cultural values and sensitive content; disparate performance; privacy and data protection; financial costs; environmental costs; and data and content moderation labor costs. Suggested methods for evaluation apply to all modalities and analyses of the limitations of existing evaluations serve as a starting point for necessary investment in future evaluations. We offer five overarching categories for what is able to be evaluated in society, each with their own subcategories: trustworthiness and autonomy; inequality, marginalization, and violence; concentration of authority; labor and creativity; and ecosystem and environment. Each subcategory includes recommendations for mitigating harm. We are concurrently crafting an evaluation repository for the AI research community to contribute existing evaluations along the given categories. This version will be updated following a CRAFT session at ACM FAccT 2023.
http://arxiv.org/pdf/2306.05949
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
cs.CY, cs.AI
null
null
cs.CY
20230609
20230612
[ { "id": "2007.04068" }, { "id": "2305.09800" }, { "id": "1908.09203" }, { "id": "2202.05520" }, { "id": "2302.10329" }, { "id": "2107.03374" }, { "id": "2210.06245" }, { "id": "2211.02001" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2301.10226" }, { "id": "2202.02647" }, { "id": "2112.10752" }, { "id": "2206.04615" }, { "id": "2202.00885" }, { "id": "2010.15581" }, { "id": "2305.09941" }, { "id": "2301.04246" }, { "id": "2304.12298" }, { "id": "2203.09509" }, { "id": "2207.14157" }, { "id": "2102.09692" }, { "id": "1804.10999" }, { "id": "2303.11156" }, { "id": "2104.06390" }, { "id": "2002.05651" } ]
2306.06283
174
B. Extracting Structured Data from Free-form Organic Synthesis Text

A suspension of compound 63 (0.22 g, 0.93 mmol) and 64 (0.33 g, 0.92 mmol) in THF/triethylamine (11 mL, 10/1) was stirred at room temperature for 48 h. After this time, a clear solution was formed. The solvent was removed under reduced pressure and the residue was purified by flash chromatography (silica gel, 10:1:0.1 chloroform/methanol/concentrated ammonium hydroxide) to provide the guanidine 65 (0.3 g, 60%) as a yellow solid. 1H NMR (300 MHz, DMSO-d6) δ 1.42 (s, 9H), 1.55 (m, 4H), 2.19 (s, 6H), 2.58 (m, 4H), 3.99 (m, 2H), 6.83 (d, 2H), 7.12 (d, 2H), 7.40 (br s, 2H), 9.02 (m, 2H)
2306.06283#174
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
Large-language models (LLMs) such as GPT-4 caught the interest of many scientists. Recent studies suggested that these models could be useful in chemistry and materials science. To explore these possibilities, we organized a hackathon. This article chronicles the projects built as part of this hackathon. Participants employed LLMs for various applications, including predicting properties of molecules and materials, designing novel interfaces for tools, extracting knowledge from unstructured data, and developing new educational applications. The diverse topics and the fact that working prototypes could be generated in less than two days highlight that LLMs will profoundly impact the future of our fields. The rich collection of ideas and projects also indicates that the applications of LLMs are not limited to materials science and chemistry but offer potential benefits to a wide range of scientific disciplines.
http://arxiv.org/pdf/2306.06283
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
null
null
cond-mat.mtrl-sci
20230609
20230714
[ { "id": "2209.08203" }, { "id": "2212.04450" } ]
2306.05949
175
Novikova, J. Bosscher, J. Marsh, J. Kim, J. Taal, J. Engel, J. Alabi, J. Xu, J. Song, J. Tang, J. Waweru, J. Burden, J. Miller, J. U. Balis, J. Berant, J. Frohberg, J. Rozen, J. Hernandez-Orallo, J. Boudeman, J. Jones, J. B. Tenenbaum, J. S. Rule, J. Chua, K. Kanclerz, K. Livescu, K. Krauth, K. Gopalakrishnan, K. Ignatyeva, K. Markert, K. D. Dhole, K. Gimpel, K. Omondi, K. Mathewson, K. Chiafullo, K. Shkaruta, K. Shridhar, K. McDonell, K. Richardson, L. Reynolds, L. Gao, L. Zhang, L. Dugan, L. Qin, L. Contreras-Ochando, L.-P. Morency, L. Moschella, L. Lam, L. Noble, L. Schmidt, L. He, L. O. Colón, L. Metz,
2306.05949#175
Evaluating the Social Impact of Generative AI Systems in Systems and Society
Generative AI systems across modalities, ranging from text, image, audio, and video, have broad social impacts, but there exists no official standard for means of evaluating those impacts and which impacts should be evaluated. We move toward a standard approach in evaluating a generative AI system for any modality, in two overarching categories: what is able to be evaluated in a base system that has no predetermined application and what is able to be evaluated in society. We describe specific social impact categories and how to approach and conduct evaluations in the base technical system, then in people and society. Our framework for a base system defines seven categories of social impact: bias, stereotypes, and representational harms; cultural values and sensitive content; disparate performance; privacy and data protection; financial costs; environmental costs; and data and content moderation labor costs. Suggested methods for evaluation apply to all modalities and analyses of the limitations of existing evaluations serve as a starting point for necessary investment in future evaluations. We offer five overarching categories for what is able to be evaluated in society, each with their own subcategories: trustworthiness and autonomy; inequality, marginalization, and violence; concentration of authority; labor and creativity; and ecosystem and environment. Each subcategory includes recommendations for mitigating harm. We are concurrently crafting an evaluation repository for the AI research community to contribute existing evaluations along the given categories. This version will be updated following a CRAFT session at ACM FAccT 2023.
http://arxiv.org/pdf/2306.05949
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
cs.CY, cs.AI
null
null
cs.CY
20230609
20230612
[ { "id": "2007.04068" }, { "id": "2305.09800" }, { "id": "1908.09203" }, { "id": "2202.05520" }, { "id": "2302.10329" }, { "id": "2107.03374" }, { "id": "2210.06245" }, { "id": "2211.02001" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2301.10226" }, { "id": "2202.02647" }, { "id": "2112.10752" }, { "id": "2206.04615" }, { "id": "2202.00885" }, { "id": "2010.15581" }, { "id": "2305.09941" }, { "id": "2301.04246" }, { "id": "2304.12298" }, { "id": "2203.09509" }, { "id": "2207.14157" }, { "id": "2102.09692" }, { "id": "1804.10999" }, { "id": "2303.11156" }, { "id": "2104.06390" }, { "id": "2002.05651" } ]
2306.06283
175
Figure 13. The Organic Synthesis Parser interface. The top part shows text describing an organic reaction (https://open-reaction-database.org/client/id/ord-1f99b308e17340cb8e0e3080c270fd08), which the finetuned LLM converts into structured JSON (bottom). A demo application can be found at https://qai222.github.io/LLM_organic_synthesis/.

a. Problem As data-driven approaches and machine learning (ML) techniques gain traction in the field of organic chemistry and its various subfields, it is becoming clear that, as most data in chemistry is represented by unstructured text, the predictive power of these approaches is limited by the lack of structured, well-curated data. Due to the large corpus of organic chemistry literature, manual conversion from unstructured text to structured
2306.06283#175
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
Large-language models (LLMs) such as GPT-4 caught the interest of many scientists. Recent studies suggested that these models could be useful in chemistry and materials science. To explore these possibilities, we organized a hackathon. This article chronicles the projects built as part of this hackathon. Participants employed LLMs for various applications, including predicting properties of molecules and materials, designing novel interfaces for tools, extracting knowledge from unstructured data, and developing new educational applications. The diverse topics and the fact that working prototypes could be generated in less than two days highlight that LLMs will profoundly impact the future of our fields. The rich collection of ideas and projects also indicates that the applications of LLMs are not limited to materials science and chemistry but offer potential benefits to a wide range of scientific disciplines.
http://arxiv.org/pdf/2306.06283
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
null
null
cond-mat.mtrl-sci
20230609
20230714
[ { "id": "2209.08203" }, { "id": "2212.04450" } ]
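The structured JSON produced by the Organic Synthesis Parser (Figure 13) carries per-component fields such as `identifiers`, `amount`, and `reactionRole`. A hypothetical, pared-down Python mirror of one such record — field names follow the fragments visible in the figure, not the full Open Reaction Database schema:

```python
from dataclasses import dataclass

@dataclass
class Component:
    """One reaction component, pared down from the ORD-style JSON."""
    name: str
    role: str      # e.g. "REACTANT", "SOLVENT"
    amount: float
    units: str

def parse_component(d: dict) -> Component:
    """Convert one JSON component as emitted by the finetuned LLM
    (illustrative field names: identifiers / reactionRole / amount)."""
    return Component(
        name=d["identifiers"][0]["value"],
        role=d["reactionRole"],
        amount=d["amount"]["value"],
        units=d["amount"]["units"],
    )

thf = parse_component({
    "identifiers": [{"type": "NAME", "value": "THF"}],
    "reactionRole": "SOLVENT",
    "amount": {"value": 11, "units": "MILLILITER"},
})
print(thf.role, thf.amount, thf.units)
```

Typed records like this are what make the extracted text usable for downstream ML: the "11 mL THF/triethylamine" of the free-form procedure becomes a queryable quantity with an explicit role.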
2306.05949
176
L. Moschella, L. Lam, L. Noble, L. Schmidt, L. He, L. O. Colón, L. Metz, L. K. Şenel, M. Bosma, M. Sap, M. ter Hoeve, M. Farooqi, M. Faruqui, M. Mazeika, M. Baturan, M. Marelli, M. Maru, M. J. R. Quintana, M. Tolkiehn, M. Giulianelli, M. Lewis, M. Potthast, M. L. Leavitt, M. Hagen, M. Schubert, M. O. Baitemirova, M. Arnaud, M. McElrath, M. A. Yee, M. Cohen, M. Gu, M. Ivanitskiy, M. Starritt, M. Strube, M. Swędrowski, M. Bevilacqua, M. Yasunaga, M. Kale, M. Cain, M. Xu, M. Suzgun, M. Tiwari, M. Bansal, M. Aminnaseri, M. Geva, M. Gheini, M. V. T, N. Peng, N.
2306.05949#176
Evaluating the Social Impact of Generative AI Systems in Systems and Society
Generative AI systems across modalities, ranging from text, image, audio, and video, have broad social impacts, but there exists no official standard for means of evaluating those impacts and which impacts should be evaluated. We move toward a standard approach in evaluating a generative AI system for any modality, in two overarching categories: what is able to be evaluated in a base system that has no predetermined application and what is able to be evaluated in society. We describe specific social impact categories and how to approach and conduct evaluations in the base technical system, then in people and society. Our framework for a base system defines seven categories of social impact: bias, stereotypes, and representational harms; cultural values and sensitive content; disparate performance; privacy and data protection; financial costs; environmental costs; and data and content moderation labor costs. Suggested methods for evaluation apply to all modalities and analyses of the limitations of existing evaluations serve as a starting point for necessary investment in future evaluations. We offer five overarching categories for what is able to be evaluated in society, each with their own subcategories: trustworthiness and autonomy; inequality, marginalization, and violence; concentration of authority; labor and creativity; and ecosystem and environment. Each subcategory includes recommendations for mitigating harm. We are concurrently crafting an evaluation repository for the AI research community to contribute existing evaluations along the given categories. This version will be updated following a CRAFT session at ACM FAccT 2023.
http://arxiv.org/pdf/2306.05949
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
cs.CY, cs.AI
null
null
cs.CY
20230609
20230612
[ { "id": "2007.04068" }, { "id": "2305.09800" }, { "id": "1908.09203" }, { "id": "2202.05520" }, { "id": "2302.10329" }, { "id": "2107.03374" }, { "id": "2210.06245" }, { "id": "2211.02001" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2301.10226" }, { "id": "2202.02647" }, { "id": "2112.10752" }, { "id": "2206.04615" }, { "id": "2202.00885" }, { "id": "2010.15581" }, { "id": "2305.09941" }, { "id": "2301.04246" }, { "id": "2304.12298" }, { "id": "2203.09509" }, { "id": "2207.14157" }, { "id": "2102.09692" }, { "id": "1804.10999" }, { "id": "2303.11156" }, { "id": "2104.06390" }, { "id": "2002.05651" } ]
2306.06283
176
by the lack of structured, well-curated data. Due to the large corpus of organic chemistry literature, manual conversion from unstructured text to structured data is unrealistic, making software tools for this task necessary to improve or enable downstream applications, such as reaction prediction and condition recommendation. b. Solution In this project, we leverage the power of fine-tuned LLMs to extract reactant information from organic synthesis text to structured data. 350 reaction entries were randomly selected from the Open Reaction Database (ORD) [63]. The field of reaction.notes.procedure_details is used as the input (prompt), and the field of reaction.inputs is used as the output (completion). 300 of these prompt-completion pairs were used to fine-tune a GPT-3 (OpenAI Davinci) model using the OpenAI command line interface (version 0.27.2), and the rest were used for evaluation. In addition to this, we also explored fine-tuning the Alpaca-LoRA model [16, 64, 65] for this task. All data and scripts used in this project are available in the GitHub repository. c. Results and Discussion Surprisingly, the pre-trained language model (OpenAI Davinci),
2306.06283#176
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
Large-language models (LLMs) such as GPT-4 caught the interest of many scientists. Recent studies suggested that these models could be useful in chemistry and materials science. To explore these possibilities, we organized a hackathon. This article chronicles the projects built as part of this hackathon. Participants employed LLMs for various applications, including predicting properties of molecules and materials, designing novel interfaces for tools, extracting knowledge from unstructured data, and developing new educational applications. The diverse topics and the fact that working prototypes could be generated in less than two days highlight that LLMs will profoundly impact the future of our fields. The rich collection of ideas and projects also indicates that the applications of LLMs are not limited to materials science and chemistry but offer potential benefits to a wide range of scientific disciplines.
http://arxiv.org/pdf/2306.06283
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
null
null
cond-mat.mtrl-sci
20230609
20230714
[ { "id": "2209.08203" }, { "id": "2212.04450" } ]
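The fine-tuning workflow described in the chunk above (procedure text as prompt, structured reaction inputs as completion, serialized for the OpenAI CLI) can be sketched as follows. This is a minimal illustration: the simplified field names, the toy reaction entry, and the `###` prompt separator are assumptions for the example, not taken from the project's actual scripts.

```python
import json


def make_finetune_records(entries):
    """Convert ORD-style reaction entries into prompt-completion pairs.

    Each entry is assumed to carry the free-text procedure under
    'procedure_details' and the structured inputs under 'inputs'
    (hypothetical, flattened stand-ins for the ORD field paths
    reaction.notes.procedure_details and reaction.inputs).
    """
    records = []
    for entry in entries:
        # A separator marks the end of the prompt; the completion is the
        # structured data serialized as JSON.
        prompt = entry["procedure_details"].strip() + "\n\n###\n\n"
        completion = " " + json.dumps(entry["inputs"], sort_keys=True)
        records.append({"prompt": prompt, "completion": completion})
    return records


def to_jsonl(records):
    """Serialize records to JSONL, one training example per line."""
    return "\n".join(json.dumps(r) for r in records)


# One toy reaction entry, purely illustrative.
entries = [{
    "procedure_details": "Dissolve 1.0 g NaCl in 10 mL water.",
    "inputs": {"NaCl": {"mass_g": 1.0}, "water": {"volume_mL": 10}},
}]
jsonl = to_jsonl(make_finetune_records(entries))
```

The resulting JSONL file is the format the OpenAI fine-tuning tooling consumes; with 300 such pairs one would fine-tune, holding the rest out for evaluation as the project did.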
2306.05949
177
M. Bansal, M. Aminnaseri, M. Geva, M. Gheini, M. V. T, N. Peng, N. Chi, N. Lee, N. G.-A. Krakover, N. Cameron, N. Roberts, N. Doiron, N. Nangia, N. Deckers, N. Muennighoff, N. S. Keskar, N. S. Iyer, N. Constant, N. Fiedel, N. Wen, O. Zhang, O. Agha, O. Elbaghdadi, O. Levy, O. Evans, P. A. M. Casares, P. Doshi, P. Fung, P. P. Liang, P. Vicol, P. Alipoormolabashi, P. Liao, P. Liang, P. Chang, P. Eckersley, P. M. Htut, P. Hwang, P. Miłkowski, P. Patil, P. Pezeshkpour, P. Oli, Q. Mei, Q. Lyu, Q. Chen, R. Banjade, R. E. Rudolph, R. Gabriel, R. Habacker, R. R. Delgado, R.
2306.05949#177
Evaluating the Social Impact of Generative AI Systems in Systems and Society
Generative AI systems across modalities, ranging from text, image, audio, and video, have broad social impacts, but there exists no official standard for means of evaluating those impacts and which impacts should be evaluated. We move toward a standard approach in evaluating a generative AI system for any modality, in two overarching categories: what is able to be evaluated in a base system that has no predetermined application and what is able to be evaluated in society. We describe specific social impact categories and how to approach and conduct evaluations in the base technical system, then in people and society. Our framework for a base system defines seven categories of social impact: bias, stereotypes, and representational harms; cultural values and sensitive content; disparate performance; privacy and data protection; financial costs; environmental costs; and data and content moderation labor costs. Suggested methods for evaluation apply to all modalities and analyses of the limitations of existing evaluations serve as a starting point for necessary investment in future evaluations. We offer five overarching categories for what is able to be evaluated in society, each with their own subcategories: trustworthiness and autonomy; inequality, marginalization, and violence; concentration of authority; labor and creativity; and ecosystem and environment. Each subcategory includes recommendations for mitigating harm. We are concurrently crafting an evaluation repository for the AI research community to contribute existing evaluations along the given categories. This version will be updated following a CRAFT session at ACM FAccT 2023.
http://arxiv.org/pdf/2306.05949
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
cs.CY, cs.AI
null
null
cs.CY
20230609
20230612
[ { "id": "2007.04068" }, { "id": "2305.09800" }, { "id": "1908.09203" }, { "id": "2202.05520" }, { "id": "2302.10329" }, { "id": "2107.03374" }, { "id": "2210.06245" }, { "id": "2211.02001" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2301.10226" }, { "id": "2202.02647" }, { "id": "2112.10752" }, { "id": "2206.04615" }, { "id": "2202.00885" }, { "id": "2010.15581" }, { "id": "2305.09941" }, { "id": "2301.04246" }, { "id": "2304.12298" }, { "id": "2203.09509" }, { "id": "2207.14157" }, { "id": "2102.09692" }, { "id": "1804.10999" }, { "id": "2303.11156" }, { "id": "2104.06390" }, { "id": "2002.05651" } ]
2306.06283
177
All data and scripts used in this project are available in the GitHub repository. c. Results and Discussion Surprisingly, the pre-trained language model (OpenAI Davinci), fine-tuned with only 300 prompt-completion pairs, is capable of generating valid JSON complying with the ORD data model. For the 50 prompt-completion pairs in evaluation, 93% of the components in reaction inputs were correctly extracted from the free text reaction description by the GPT-3 based model. The model also associates existing properties, such as volume or mass used in the reaction, to these components. In addition to recognizing in-text chemical entities (such as molecule names), as shown in Figure 13, tokens referencing external chemical entities (compound numbers) can also be captured by the model. On the other hand, while completing the prompts with extracted chemical information, the fine-tuned Alpaca-LoRA model was unable to properly construct a valid JSON complying with the ORD data model.
2306.06283#177
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
Large-language models (LLMs) such as GPT-4 caught the interest of many scientists. Recent studies suggested that these models could be useful in chemistry and materials science. To explore these possibilities, we organized a hackathon. This article chronicles the projects built as part of this hackathon. Participants employed LLMs for various applications, including predicting properties of molecules and materials, designing novel interfaces for tools, extracting knowledge from unstructured data, and developing new educational applications. The diverse topics and the fact that working prototypes could be generated in less than two days highlight that LLMs will profoundly impact the future of our fields. The rich collection of ideas and projects also indicates that the applications of LLMs are not limited to materials science and chemistry but offer potential benefits to a wide range of scientific disciplines.
http://arxiv.org/pdf/2306.06283
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
null
null
cond-mat.mtrl-sci
20230609
20230714
[ { "id": "2209.08203" }, { "id": "2212.04450" } ]
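A component-level accuracy like the 93% figure reported in the chunk above can be computed with a simple exact-match comparison between gold and predicted reaction inputs. The dictionary layout below is a hypothetical simplification of the ORD reaction-inputs structure, used only to illustrate the metric.

```python
def component_accuracy(gold_list, pred_list):
    """Fraction of gold reaction-input components recovered exactly.

    Each element of gold_list/pred_list is a dict mapping a component
    name (e.g. a molecule name) to its extracted properties. A component
    counts as correct only if its name and all properties match.
    """
    correct = 0
    total = 0
    for gold, pred in zip(gold_list, pred_list):
        for name, props in gold.items():
            total += 1
            if name in pred and pred[name] == props:
                correct += 1
    return correct / total if total else 0.0


# Toy evaluation: NaCl is extracted correctly, the water volume is not.
gold = [{"NaCl": {"mass_g": 1.0}, "water": {"volume_mL": 10}}]
pred = [{"NaCl": {"mass_g": 1.0}, "water": {"volume_mL": 9}}]
acc = component_accuracy(gold, pred)  # 1 of 2 components match -> 0.5
```

Exact match is a strict criterion; a real evaluation might also credit partial matches (correct entity, wrong unit or amount) separately.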
2306.05949
178
Q. Chen, R. Banjade, R. E. Rudolph, R. Gabriel, R. Habacker, R. R. Delgado, R. Millière, R. Garg, R. Barnes, R. A. Saurous, R. Arakawa, R. Raymaekers, R. Frank, R. Sikand, R. Novak, R. Sitelew, R. LeBras, R. Liu, R. Jacobs, R. Zhang, R. Salakhutdinov, R. Chi, R. Lee, R. Stovall, R. Teehan, R. Yang, S. Singh, S. M. Mohammad, S. Anand, S. Dillavou, S. Shleifer, S. Wiseman, S. Gruetter, S. R. Bowman, S. S. Schoenholz, S. Han, S. Kwatra, S. A. Rous, S. Ghazarian, S. Ghosh, S. Casey, S. Bischoff, S. Gehrmann, S. Schuster, S. Sadeghi, S. Hamdan, S. Zhou, S. Srivastava, S. Shi, S. Singh, S. Asaadi, S.
2306.05949#178
Evaluating the Social Impact of Generative AI Systems in Systems and Society
Generative AI systems across modalities, ranging from text, image, audio, and video, have broad social impacts, but there exists no official standard for means of evaluating those impacts and which impacts should be evaluated. We move toward a standard approach in evaluating a generative AI system for any modality, in two overarching categories: what is able to be evaluated in a base system that has no predetermined application and what is able to be evaluated in society. We describe specific social impact categories and how to approach and conduct evaluations in the base technical system, then in people and society. Our framework for a base system defines seven categories of social impact: bias, stereotypes, and representational harms; cultural values and sensitive content; disparate performance; privacy and data protection; financial costs; environmental costs; and data and content moderation labor costs. Suggested methods for evaluation apply to all modalities and analyses of the limitations of existing evaluations serve as a starting point for necessary investment in future evaluations. We offer five overarching categories for what is able to be evaluated in society, each with their own subcategories: trustworthiness and autonomy; inequality, marginalization, and violence; concentration of authority; labor and creativity; and ecosystem and environment. Each subcategory includes recommendations for mitigating harm. We are concurrently crafting an evaluation repository for the AI research community to contribute existing evaluations along the given categories. This version will be updated following a CRAFT session at ACM FAccT 2023.
http://arxiv.org/pdf/2306.05949
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
cs.CY, cs.AI
null
null
cs.CY
20230609
20230612
[ { "id": "2007.04068" }, { "id": "2305.09800" }, { "id": "1908.09203" }, { "id": "2202.05520" }, { "id": "2302.10329" }, { "id": "2107.03374" }, { "id": "2210.06245" }, { "id": "2211.02001" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2301.10226" }, { "id": "2202.02647" }, { "id": "2112.10752" }, { "id": "2206.04615" }, { "id": "2202.00885" }, { "id": "2010.15581" }, { "id": "2305.09941" }, { "id": "2301.04246" }, { "id": "2304.12298" }, { "id": "2203.09509" }, { "id": "2207.14157" }, { "id": "2102.09692" }, { "id": "1804.10999" }, { "id": "2303.11156" }, { "id": "2104.06390" }, { "id": "2002.05651" } ]
2306.06283
178
Despite these encouraging preliminary results, there are still challenges to a robust synthesis text parser. One of them is the ambiguous and often artificial boundary between descriptions of reactions and workups, which leads to misplaced chemical entities in the structured data, e.g., a solvent used in the extraction of products is instead labeled as a reaction solvent. The aforementioned external reference problem, where a compound number in the procedure is only explicitly identified in an earlier section of the manuscript, can only be solved by prompting the LLM with multiple paragraphs or even the entire document, adding more irrelevant tokens to the prompt. It is also important to prevent the LLM from “auto-completing” extracted named entities with information outside the prompt, e.g., the chemical is extracted as “sodium chloride” in the completion while it is only specified as “chloride” in the prompt. One sentence summaries d. Problem/Task Extraction of structured reaction condition and procedure data from text. e. Approach Fine-tuning of LLMs on hundreds of prompt (unstructured text)-completion (extracted structured data) pairs.
2306.06283#178
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
Large-language models (LLMs) such as GPT-4 caught the interest of many scientists. Recent studies suggested that these models could be useful in chemistry and materials science. To explore these possibilities, we organized a hackathon. This article chronicles the projects built as part of this hackathon. Participants employed LLMs for various applications, including predicting properties of molecules and materials, designing novel interfaces for tools, extracting knowledge from unstructured data, and developing new educational applications. The diverse topics and the fact that working prototypes could be generated in less than two days highlight that LLMs will profoundly impact the future of our fields. The rich collection of ideas and projects also indicates that the applications of LLMs are not limited to materials science and chemistry but offer potential benefits to a wide range of scientific disciplines.
http://arxiv.org/pdf/2306.06283
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
null
null
cond-mat.mtrl-sci
20230609
20230714
[ { "id": "2209.08203" }, { "id": "2212.04450" } ]
2306.05949
179
Sadeghi, S. Hamdan, S. Zhou, S. Srivastava, S. Shi, S. Singh, S. Asaadi, S. S. Gu, S. Pachchigar, S. Toshniwal, S. Upadhyay, Shyamolima, Debnath, S. Shakeri, S. Thormeyer, S. Melzi, S. Reddy, S. P. Makini, S.-H. Lee, S. Torene, S. Hatwar, S. Dehaene, S. Divic, S. Ermon, S. Biderman, S. Lin, S. Prasad, S. T. Piantadosi, S. M. Shieber, S. Misherghi, S. Kiritchenko, S. Mishra, T. Linzen, T. Schuster, T. Li, T. Yu, T. Ali, T. Hashimoto, T.-L. Wu, T. Desbordes, T. Rothschild, T. Phan, T. Wang, T. Nkinyili, T. Schick, T. Kornev, T. Telleen-Lawton, T. Tunduny, T. Gerstenberg, T. Chang, T. Neeraj,
2306.05949#179
Evaluating the Social Impact of Generative AI Systems in Systems and Society
Generative AI systems across modalities, ranging from text, image, audio, and video, have broad social impacts, but there exists no official standard for means of evaluating those impacts and which impacts should be evaluated. We move toward a standard approach in evaluating a generative AI system for any modality, in two overarching categories: what is able to be evaluated in a base system that has no predetermined application and what is able to be evaluated in society. We describe specific social impact categories and how to approach and conduct evaluations in the base technical system, then in people and society. Our framework for a base system defines seven categories of social impact: bias, stereotypes, and representational harms; cultural values and sensitive content; disparate performance; privacy and data protection; financial costs; environmental costs; and data and content moderation labor costs. Suggested methods for evaluation apply to all modalities and analyses of the limitations of existing evaluations serve as a starting point for necessary investment in future evaluations. We offer five overarching categories for what is able to be evaluated in society, each with their own subcategories: trustworthiness and autonomy; inequality, marginalization, and violence; concentration of authority; labor and creativity; and ecosystem and environment. Each subcategory includes recommendations for mitigating harm. We are concurrently crafting an evaluation repository for the AI research community to contribute existing evaluations along the given categories. This version will be updated following a CRAFT session at ACM FAccT 2023.
http://arxiv.org/pdf/2306.05949
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
cs.CY, cs.AI
null
null
cs.CY
20230609
20230612
[ { "id": "2007.04068" }, { "id": "2305.09800" }, { "id": "1908.09203" }, { "id": "2202.05520" }, { "id": "2302.10329" }, { "id": "2107.03374" }, { "id": "2210.06245" }, { "id": "2211.02001" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2301.10226" }, { "id": "2202.02647" }, { "id": "2112.10752" }, { "id": "2206.04615" }, { "id": "2202.00885" }, { "id": "2010.15581" }, { "id": "2305.09941" }, { "id": "2301.04246" }, { "id": "2304.12298" }, { "id": "2203.09509" }, { "id": "2207.14157" }, { "id": "2102.09692" }, { "id": "1804.10999" }, { "id": "2303.11156" }, { "id": "2104.06390" }, { "id": "2002.05651" } ]
2306.05949
180
T. Kornev, T. Telleen-Lawton, T. Tunduny, T. Gerstenberg, T. Chang, T. Neeraj, T. Khot, T. Shultz, U. Shaham, V. Misra, V. Demberg, V. Nyamai, V. Raunak, V. Ramasesh, V. U. Prabhu, V. Padmakumar, V. Srikumar, W. Fedus, W. Saunders, W. Zhang, W. Vossen, X. Ren, X. Tong, X. Zhao, X. Wu, X. Shen, Y. Yaghoobzadeh, Y. Lakretz, Y. Song, Y. Bahri, Y. Choi, Y. Yang, Y. Hao, Y. Chen, Y. Belinkov, Y. Hou, Y. Hou, Y. Bai, Z. Seid, Z. Zhao, Z. Wang, Z. J. Wang, Z. Wang, and Z. Wu. Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models, June 2022. URL http://arxiv.org/abs/2206.04615.
2306.05949#180
Evaluating the Social Impact of Generative AI Systems in Systems and Society
Generative AI systems across modalities, ranging from text, image, audio, and video, have broad social impacts, but there exists no official standard for means of evaluating those impacts and which impacts should be evaluated. We move toward a standard approach in evaluating a generative AI system for any modality, in two overarching categories: what is able to be evaluated in a base system that has no predetermined application and what is able to be evaluated in society. We describe specific social impact categories and how to approach and conduct evaluations in the base technical system, then in people and society. Our framework for a base system defines seven categories of social impact: bias, stereotypes, and representational harms; cultural values and sensitive content; disparate performance; privacy and data protection; financial costs; environmental costs; and data and content moderation labor costs. Suggested methods for evaluation apply to all modalities and analyses of the limitations of existing evaluations serve as a starting point for necessary investment in future evaluations. We offer five overarching categories for what is able to be evaluated in society, each with their own subcategories: trustworthiness and autonomy; inequality, marginalization, and violence; concentration of authority; labor and creativity; and ecosystem and environment. Each subcategory includes recommendations for mitigating harm. We are concurrently crafting an evaluation repository for the AI research community to contribute existing evaluations along the given categories. This version will be updated following a CRAFT session at ACM FAccT 2023.
http://arxiv.org/pdf/2306.05949
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
cs.CY, cs.AI
null
null
cs.CY
20230609
20230612
[ { "id": "2007.04068" }, { "id": "2305.09800" }, { "id": "1908.09203" }, { "id": "2202.05520" }, { "id": "2302.10329" }, { "id": "2107.03374" }, { "id": "2210.06245" }, { "id": "2211.02001" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2301.10226" }, { "id": "2202.02647" }, { "id": "2112.10752" }, { "id": "2206.04615" }, { "id": "2202.00885" }, { "id": "2010.15581" }, { "id": "2305.09941" }, { "id": "2301.04246" }, { "id": "2304.12298" }, { "id": "2203.09509" }, { "id": "2207.14157" }, { "id": "2102.09692" }, { "id": "1804.10999" }, { "id": "2303.11156" }, { "id": "2104.06390" }, { "id": "2002.05651" } ]
2306.06283
180
C. TableToJson: Extracting structured information from tables in scientific papers Much of the scientific information published in research articles is presented in an unstructured format, primarily as free text, making it a difficult input for computational processing. However, relevant information in scientific literature is not only found in text form. Tables are commonly employed in scientific articles, e.g., to collect precursors and raw materials’ characteristics, synthesis conditions, synthesized materials’ properties, or chemical process results. Converting this information into a structured data format is usually a manual, time-consuming, and tedious task. Neural-network-based table extraction methods and optical character recognition (OCR) [66], which can convert typed, handwritten, or printed documents into machine-encoded text, can be used to extract information from tables in PDF files. However, it is often not straightforward to extract the data in the desired structured format. Nonetheless, structured data is essential for creating databases that aggregate research results and enable data integration, comparison, and analysis. In this context, JSON is a widely adopted structured data format due to its simplicity, flexibility, and compatibility with different programming languages and systems. However, obtaining structured data following a specific JSON schema with models can be challenging. The generated JSON needs to be syntactically correct and conform to a schema that defines the JSON’s structure. Models typically do not provide structured output that perfectly matches the
2306.06283#180
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
Large-language models (LLMs) such as GPT-4 caught the interest of many scientists. Recent studies suggested that these models could be useful in chemistry and materials science. To explore these possibilities, we organized a hackathon. This article chronicles the projects built as part of this hackathon. Participants employed LLMs for various applications, including predicting properties of molecules and materials, designing novel interfaces for tools, extracting knowledge from unstructured data, and developing new educational applications. The diverse topics and the fact that working prototypes could be generated in less than two days highlight that LLMs will profoundly impact the future of our fields. The rich collection of ideas and projects also indicates that the applications of LLMs are not limited to materials science and chemistry but offer potential benefits to a wide range of scientific disciplines.
http://arxiv.org/pdf/2306.06283
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
null
null
cond-mat.mtrl-sci
20230609
20230714
[ { "id": "2209.08203" }, { "id": "2212.04450" } ]
2306.06283
181
format due to its simplicity, flexibility, and compatibility with different programming languages and systems. However, obtaining structured data following a specific JSON schema with models can be challenging. The generated JSON needs to be syntactically correct and conform to a schema that defines the JSON’s structure. Models typically do not provide structured output that perfectly matches the desired JSON schema. Some manual post-processing or data transformation is often necessary to map the extracted information to the appropriate schema fields. In this work, we have studied two approaches to generate structured JSON from data contained in tables of scientific papers focused on different research topics within the field of chemistry [67–73]. The Python json module was used to parse JSON data and validate the outputs. The input to the models was the HTML tokens of the tables; the second approach made use of jsonformer (https://github.com/1rgs/jsonformer), which implements a data processing pipeline that combines model generation with a fixed JSON schema. As a first approach, the OpenAI text-davinci-003 model was used to
2306.06283#181
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
Large-language models (LLMs) such as GPT-4 caught the interest of many scientists. Recent studies suggested that these models could be useful in chemistry and materials science. To explore these possibilities, we organized a hackathon. This article chronicles the projects built as part of this hackathon. Participants employed LLMs for various applications, including predicting properties of molecules and materials, designing novel interfaces for tools, extracting knowledge from unstructured data, and developing new educational applications. The diverse topics and the fact that working prototypes could be generated in less than two days highlight that LLMs will profoundly impact the future of our fields. The rich collection of ideas and projects also indicates that the applications of LLMs are not limited to materials science and chemistry but offer potential benefits to a wide range of scientific disciplines.
http://arxiv.org/pdf/2306.06283
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
null
null
cond-mat.mtrl-sci
20230609
20230714
[ { "id": "2209.08203" }, { "id": "2212.04450" } ]
2306.05949
182
[233] E. Strubell, A. Ganesh, and A. McCallum. Energy and Policy Considerations for Deep Learning in NLP. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3645–3650. Association for Computational Linguistics, 2019. doi: 10.18653/v1/P19-1355. URL https://www.aclweb.org/anthology/P19-1355. [234] Z. Talat. Are You a Racist or Am I Seeing Things? Annotator Influence on Hate Speech Detection on Twitter. In Proceedings of the First Workshop on NLP and Computational Social Science, pages 138–142. Association for Computational Linguistics, 2016. doi: 10.18653/v1/W16-5618. URL http://aclweb.org/anthology/W16-5618. [235] Z. Talat and A. Lauscher. Back to the Future: On Potential Histories in NLP, Oct. 2022. URL http://arxiv.org/abs/2210.06245. arXiv:2210.06245 [cs].
2306.05949#182
Evaluating the Social Impact of Generative AI Systems in Systems and Society
Generative AI systems across modalities, ranging from text, image, audio, and video, have broad social impacts, but there exists no official standard for means of evaluating those impacts and which impacts should be evaluated. We move toward a standard approach in evaluating a generative AI system for any modality, in two overarching categories: what is able to be evaluated in a base system that has no predetermined application and what is able to be evaluated in society. We describe specific social impact categories and how to approach and conduct evaluations in the base technical system, then in people and society. Our framework for a base system defines seven categories of social impact: bias, stereotypes, and representational harms; cultural values and sensitive content; disparate performance; privacy and data protection; financial costs; environmental costs; and data and content moderation labor costs. Suggested methods for evaluation apply to all modalities and analyses of the limitations of existing evaluations serve as a starting point for necessary investment in future evaluations. We offer five overarching categories for what is able to be evaluated in society, each with their own subcategories: trustworthiness and autonomy; inequality, marginalization, and violence; concentration of authority; labor and creativity; and ecosystem and environment. Each subcategory includes recommendations for mitigating harm. We are concurrently crafting an evaluation repository for the AI research community to contribute existing evaluations along the given categories. This version will be updated following a CRAFT session at ACM FAccT 2023.
http://arxiv.org/pdf/2306.05949
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
cs.CY, cs.AI
null
null
cs.CY
20230609
20230612
[ { "id": "2007.04068" }, { "id": "2305.09800" }, { "id": "1908.09203" }, { "id": "2202.05520" }, { "id": "2302.10329" }, { "id": "2107.03374" }, { "id": "2210.06245" }, { "id": "2211.02001" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2301.10226" }, { "id": "2202.02647" }, { "id": "2112.10752" }, { "id": "2206.04615" }, { "id": "2202.00885" }, { "id": "2010.15581" }, { "id": "2305.09941" }, { "id": "2301.04246" }, { "id": "2304.12298" }, { "id": "2203.09509" }, { "id": "2207.14157" }, { "id": "2102.09692" }, { "id": "1804.10999" }, { "id": "2303.11156" }, { "id": "2104.06390" }, { "id": "2002.05651" } ]
2306.06283
182
the field of chemistry [67-73]. The Python json module was used to parse JSON data and validate the outputs. As a first approach, the OpenAI text-davinci-003 model was used to generate structured JSON from data in tables. The input to the LLM is the HyperText Markup Language (HTML) code of the table, obtained directly from the digital object identifier (DOI) of the article using the Python selenium library, while the output of the model is the data extracted in JSON form (Figure 14). The OpenAI text-curie-001 model, although not tested in this work, can also be utilized if the number of input tokens, considering both the text of the table and the schema, meets the requirements of this model (maximum 2049 input tokens, compared to 4097 for text-davinci-003). The use of the OpenAI model to generate structured JSON was compared with a second approach, i.e., the use of jsonformer (https://github.com/1rgs/jsonformer), which implements a data processing pipeline that combines the model generation with appropriate data transformation. This method introduces an efficient way to generate structured JSON using LLMs by generating only the content tokens and filling in the fixed tokens. This avoids generating a complete JSON string and parsing it. This approach ensures that the produced JSON is always syntactically correct and aligns with the specified schema. [74]
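The jsonformer idea described here (generate only the content tokens; emit the fixed JSON tokens yourself) can be sketched without the library. `mock_generate` is a stand-in for the per-field LLM call, and its sample values are hypothetical:

```python
import json

def mock_generate(prompt: str, field: str, ftype: type):
    # Stand-in for an LLM call that returns only one field's value.
    # The sample values below are hypothetical.
    samples = {"name": "NDC1", "crystallite_size": 31.23}
    return samples[field]

def fill_schema(fields: dict, prompt: str) -> str:
    # Emit the fixed JSON tokens (braces, quotes, keys) ourselves and ask
    # the model only for the content tokens, so the result always parses
    # and always matches the schema.
    parts = []
    for field, ftype in fields.items():
        value = mock_generate(prompt, field, ftype)
        parts.append(f'"{field}": {json.dumps(value)}')
    return "{" + ", ".join(parts) + "}"

out = fill_schema({"name": str, "crystallite_size": float}, "table text ...")
print(out)
```

Because the fixed tokens are never generated by the model, no post-hoc parsing or repair of a free-form JSON string is needed, which is the efficiency gain the chunk describes.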
2306.06283#182
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
Large-language models (LLMs) such as GPT-4 caught the interest of many scientists. Recent studies suggested that these models could be useful in chemistry and materials science. To explore these possibilities, we organized a hackathon. This article chronicles the projects built as part of this hackathon. Participants employed LLMs for various applications, including predicting properties of molecules and materials, designing novel interfaces for tools, extracting knowledge from unstructured data, and developing new educational applications. The diverse topics and the fact that working prototypes could be generated in less than two days highlight that LLMs will profoundly impact the future of our fields. The rich collection of ideas and projects also indicates that the applications of LLMs are not limited to materials science and chemistry but offer potential benefits to a wide range of scientific disciplines.
http://arxiv.org/pdf/2306.06283
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
null
null
cond-mat.mtrl-sci
20230609
20230714
[ { "id": "2209.08203" }, { "id": "2212.04450" } ]
2306.05949
183
[236] Z. Talat, T. Davidson, D. Warmsley, and I. Weber. Understanding Abuse: A Typology of Abusive Language Detection Subtasks. In Proceedings of the First Workshop on Abusive Language Online, pages 78–84. Association for Computational Linguistics, 2017. doi: 10.18653/v1/W17-3012. URL http://aclweb.org/anthology/W17-3012. [237] Z. Talat, S. Lulz, J. Bingel, and I. Augenstein. Disembodied Machine Learning: On the Illusion of Objectivity in NLP. Jan. 2021. URL http://arxiv.org/abs/2101.11974.
2306.05949#183
Evaluating the Social Impact of Generative AI Systems in Systems and Society
Generative AI systems across modalities, ranging from text, image, audio, and video, have broad social impacts, but there exists no official standard for means of evaluating those impacts and which impacts should be evaluated. We move toward a standard approach in evaluating a generative AI system for any modality, in two overarching categories: what is able to be evaluated in a base system that has no predetermined application and what is able to be evaluated in society. We describe specific social impact categories and how to approach and conduct evaluations in the base technical system, then in people and society. Our framework for a base system defines seven categories of social impact: bias, stereotypes, and representational harms; cultural values and sensitive content; disparate performance; privacy and data protection; financial costs; environmental costs; and data and content moderation labor costs. Suggested methods for evaluation apply to all modalities and analyses of the limitations of existing evaluations serve as a starting point for necessary investment in future evaluations. We offer five overarching categories for what is able to be evaluated in society, each with their own subcategories: trustworthiness and autonomy; inequality, marginalization, and violence; concentration of authority; labor and creativity; and ecosystem and environment. Each subcategory includes recommendations for mitigating harm. We are concurrently crafting an evaluation repository for the AI research community to contribute existing evaluations along the given categories. This version will be updated following a CRAFT session at ACM FAccT 2023.
http://arxiv.org/pdf/2306.05949
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
cs.CY, cs.AI
null
null
cs.CY
20230609
20230612
[ { "id": "2007.04068" }, { "id": "2305.09800" }, { "id": "1908.09203" }, { "id": "2202.05520" }, { "id": "2302.10329" }, { "id": "2107.03374" }, { "id": "2210.06245" }, { "id": "2211.02001" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2301.10226" }, { "id": "2202.02647" }, { "id": "2112.10752" }, { "id": "2206.04615" }, { "id": "2202.00885" }, { "id": "2010.15581" }, { "id": "2305.09941" }, { "id": "2301.04246" }, { "id": "2304.12298" }, { "id": "2203.09509" }, { "id": "2207.14157" }, { "id": "2102.09692" }, { "id": "1804.10999" }, { "id": "2303.11156" }, { "id": "2104.06390" }, { "id": "2002.05651" } ]
2306.06283
183
approach ensures that the produced JSON is always syntactically correct and aligns with the specified schema. [74] [Figure 14 residue: garbled HTML table markup and screenshot text from the example table (crystallite size, lattice strain, and lattice constant values for M-doped ceria); not recoverable as text.]
2306.06283#183
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
Large-language models (LLMs) such as GPT-4 caught the interest of many scientists. Recent studies suggested that these models could be useful in chemistry and materials science. To explore these possibilities, we organized a hackathon. This article chronicles the projects built as part of this hackathon. Participants employed LLMs for various applications, including predicting properties of molecules and materials, designing novel interfaces for tools, extracting knowledge from unstructured data, and developing new educational applications. The diverse topics and the fact that working prototypes could be generated in less than two days highlight that LLMs will profoundly impact the future of our fields. The rich collection of ideas and projects also indicates that the applications of LLMs are not limited to materials science and chemistry but offer potential benefits to a wide range of scientific disciplines.
http://arxiv.org/pdf/2306.06283
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
null
null
cond-mat.mtrl-sci
20230609
20230714
[ { "id": "2209.08203" }, { "id": "2212.04450" } ]
2306.05949
184
[238] Z. Talat, A. Névéol, S. Biderman, M. Clinciu, M. Dey, S. Longpre, S. Luccioni, M. Masoud, M. Mitchell, D. Radev, S. Sharma, A. Subramonian, J. Tae, S. Tan, D. Tunuguntla, and O. Van Der Wal. You reap what you sow: On the challenges of bias evaluation under multilingual settings. In Proceedings of BigScience Episode #5 – Workshop on Challenges & Perspectives in Creating Large Language Models, pages 26–41. Association for Computational Linguistics, May 2022. URL https://aclanthology.org/2022.bigscience-1.3. [239] team. Generative AI and the protection of intellectual property rights, May 2023. URL https://www.dreyfus.fr/en/2023/05/22/generative-ai-balancing-innovation-and-intellectual-property-rights-protection/. [240] The Ministry of Economy, Trade and Industry. Governance guidelines for implementation of AI principles ver. 1.1, 2022. URL https://www.meti.go.jp/press/2021/01/20220125001/20220124003.html.
2306.05949#184
Evaluating the Social Impact of Generative AI Systems in Systems and Society
Generative AI systems across modalities, ranging from text, image, audio, and video, have broad social impacts, but there exists no official standard for means of evaluating those impacts and which impacts should be evaluated. We move toward a standard approach in evaluating a generative AI system for any modality, in two overarching categories: what is able to be evaluated in a base system that has no predetermined application and what is able to be evaluated in society. We describe specific social impact categories and how to approach and conduct evaluations in the base technical system, then in people and society. Our framework for a base system defines seven categories of social impact: bias, stereotypes, and representational harms; cultural values and sensitive content; disparate performance; privacy and data protection; financial costs; environmental costs; and data and content moderation labor costs. Suggested methods for evaluation apply to all modalities and analyses of the limitations of existing evaluations serve as a starting point for necessary investment in future evaluations. We offer five overarching categories for what is able to be evaluated in society, each with their own subcategories: trustworthiness and autonomy; inequality, marginalization, and violence; concentration of authority; labor and creativity; and ecosystem and environment. Each subcategory includes recommendations for mitigating harm. We are concurrently crafting an evaluation repository for the AI research community to contribute existing evaluations along the given categories. This version will be updated following a CRAFT session at ACM FAccT 2023.
http://arxiv.org/pdf/2306.05949
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
cs.CY, cs.AI
null
null
cs.CY
20230609
20230612
[ { "id": "2007.04068" }, { "id": "2305.09800" }, { "id": "1908.09203" }, { "id": "2202.05520" }, { "id": "2302.10329" }, { "id": "2107.03374" }, { "id": "2210.06245" }, { "id": "2211.02001" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2301.10226" }, { "id": "2202.02647" }, { "id": "2112.10752" }, { "id": "2206.04615" }, { "id": "2202.00885" }, { "id": "2010.15581" }, { "id": "2305.09941" }, { "id": "2301.04246" }, { "id": "2304.12298" }, { "id": "2203.09509" }, { "id": "2207.14157" }, { "id": "2102.09692" }, { "id": "1804.10999" }, { "id": "2303.11156" }, { "id": "2104.06390" }, { "id": "2002.05651" } ]
2306.06283
184
[Figure 14 residue: garbled screenshot text of the example table (lattice strain, dislocation density, lattice constant, and crystallite size values for M-doped ceria NPs) from the source article, https://doi.org/10.1039/D3NJ00316G; not recoverable as text.]
2306.06283#184
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
Large-language models (LLMs) such as GPT-4 caught the interest of many scientists. Recent studies suggested that these models could be useful in chemistry and materials science. To explore these possibilities, we organized a hackathon. This article chronicles the projects built as part of this hackathon. Participants employed LLMs for various applications, including predicting properties of molecules and materials, designing novel interfaces for tools, extracting knowledge from unstructured data, and developing new educational applications. The diverse topics and the fact that working prototypes could be generated in less than two days highlight that LLMs will profoundly impact the future of our fields. The rich collection of ideas and projects also indicates that the applications of LLMs are not limited to materials science and chemistry but offer potential benefits to a wide range of scientific disciplines.
http://arxiv.org/pdf/2306.06283
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
null
null
cond-mat.mtrl-sci
20230609
20230714
[ { "id": "2209.08203" }, { "id": "2212.04450" } ]
2306.05949
185
[241] Thorn. Generative AI: Now is the Time for Safety by Design, May 2023. URL https://www.thorn.org/blog/now-is-the-time-for-safety-by-design/. [242] N. Thylstrup and Z. Talat. Detecting ‘Dirt’ and ‘Toxicity’: Rethinking Content Moderation as Pollution Behaviour. SSRN Electronic Journal, 2020. ISSN 1556-5068. doi: 10.2139/ssrn.3709719. URL https://www.ssrn.com/abstract=3709719. [243] N. Todoric and A. Chaudhuri. Using AI to help organizations detect and report child sexual abuse material online, Sept. 2018. URL https://blog.google/around-the-globe/google-europe/using-ai-help-organizations-detect-and-report-child-sexual-abuse-material-online/. [244] J. A. Tomain. Online Privacy and the First Amendment: An Opt-In Approach to Data Processing, Feb. 2014. URL https://papers.ssrn.com/abstract=2573206.
2306.05949#185
Evaluating the Social Impact of Generative AI Systems in Systems and Society
Generative AI systems across modalities, ranging from text, image, audio, and video, have broad social impacts, but there exists no official standard for means of evaluating those impacts and which impacts should be evaluated. We move toward a standard approach in evaluating a generative AI system for any modality, in two overarching categories: what is able to be evaluated in a base system that has no predetermined application and what is able to be evaluated in society. We describe specific social impact categories and how to approach and conduct evaluations in the base technical system, then in people and society. Our framework for a base system defines seven categories of social impact: bias, stereotypes, and representational harms; cultural values and sensitive content; disparate performance; privacy and data protection; financial costs; environmental costs; and data and content moderation labor costs. Suggested methods for evaluation apply to all modalities and analyses of the limitations of existing evaluations serve as a starting point for necessary investment in future evaluations. We offer five overarching categories for what is able to be evaluated in society, each with their own subcategories: trustworthiness and autonomy; inequality, marginalization, and violence; concentration of authority; labor and creativity; and ecosystem and environment. Each subcategory includes recommendations for mitigating harm. We are concurrently crafting an evaluation repository for the AI research community to contribute existing evaluations along the given categories. This version will be updated following a CRAFT session at ACM FAccT 2023.
http://arxiv.org/pdf/2306.05949
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
cs.CY, cs.AI
null
null
cs.CY
20230609
20230612
[ { "id": "2007.04068" }, { "id": "2305.09800" }, { "id": "1908.09203" }, { "id": "2202.05520" }, { "id": "2302.10329" }, { "id": "2107.03374" }, { "id": "2210.06245" }, { "id": "2211.02001" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2301.10226" }, { "id": "2202.02647" }, { "id": "2112.10752" }, { "id": "2206.04615" }, { "id": "2202.00885" }, { "id": "2010.15581" }, { "id": "2305.09941" }, { "id": "2301.04246" }, { "id": "2304.12298" }, { "id": "2203.09509" }, { "id": "2207.14157" }, { "id": "2102.09692" }, { "id": "1804.10999" }, { "id": "2303.11156" }, { "id": "2104.06390" }, { "id": "2002.05651" } ]
2306.06283
185
[Figure 14 residue: extracted JSON values from the example table; not recoverable as text.] Figure 14. TableToJson. Extraction of structured information from scientific data in tables using LLMs. The input to the LLM model is the HTML code of a table contained in a scientific paper. The output of the LLM model is data structured in JSON form. Results can be visualized in this demo app: https://vgvinter-tabletojson-app-kt5aiv.streamlit.app/. In our first approach, we directly asked the OpenAI text-davinci-003 model to generate a JSON object according to a desired JSON schema provided in the model prompt. The table content was also included in
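A minimal sketch of the prompt construction for this first approach. The chunk only states that the desired JSON schema and the table content were both placed in the model prompt; the exact wording, the example schema, and the table values below are assumptions.

```python
import json

# Hypothetical HTML table (values echo the Figure 14 example).
TABLE_HTML = ("<table><tr><th>T (K)</th><th>dG (kJ/mol)</th></tr>"
              "<tr><td>450</td><td>-13.928</td></tr></table>")

# Hypothetical JSON schema describing the desired output structure.
SCHEMA = {
    "type": "object",
    "properties": {
        "rows": {
            "type": "array",
            "items": {"type": "object",
                      "properties": {"T": {"type": "number"},
                                     "dG": {"type": "number"}}},
        }
    },
}

def build_prompt(table_html: str, schema: dict) -> str:
    # Combine schema and table HTML into one completion prompt; the
    # phrasing here is an assumption, not the paper's actual prompt.
    return ("Generate a JSON object that follows this JSON schema:\n"
            + json.dumps(schema, indent=2)
            + "\n\nusing the data in this HTML table:\n"
            + table_html + "\n\nJSON:")

print(build_prompt(TABLE_HTML, SCHEMA))
```

The resulting string would be sent as the prompt of a completions call to text-davinci-003, and the completion parsed with `json.loads`.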
2306.06283#185
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
Large-language models (LLMs) such as GPT-4 caught the interest of many scientists. Recent studies suggested that these models could be useful in chemistry and materials science. To explore these possibilities, we organized a hackathon. This article chronicles the projects built as part of this hackathon. Participants employed LLMs for various applications, including predicting properties of molecules and materials, designing novel interfaces for tools, extracting knowledge from unstructured data, and developing new educational applications. The diverse topics and the fact that working prototypes could be generated in less than two days highlight that LLMs will profoundly impact the future of our fields. The rich collection of ideas and projects also indicates that the applications of LLMs are not limited to materials science and chemistry but offer potential benefits to a wide range of scientific disciplines.
http://arxiv.org/pdf/2306.06283
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
null
null
cond-mat.mtrl-sci
20230609
20230714
[ { "id": "2209.08203" }, { "id": "2212.04450" } ]
2306.05949
186
[245] J. Tomlinson. Cultural Imperialism. In The Wiley-Blackwell Encyclopedia of Globalization. John Wiley & Sons, Ltd, 2012. ISBN 978-0-470-67059-0. doi: 10.1002/9780470670590.wbeog129. URL https://onlinelibrary.wiley.com/doi/abs/10.1002/9780470670590.wbeog129. [246] B. Toole. On standpoint epistemology and epistemic peerhood: A defense of epistemic privilege. Journal of the American Philosophical Association, forthcoming. [247] Treasury Board of Canada Secretariat. Algorithmic Impact Assessment Tool, 2021-03-22. URL https://www.canada.ca/en/government/system/digital-government/digital-government-innovations/responsible-use-ai/algorithmic-impact-assessment.html. [248] UNICEF. Policy guidance on AI for children 2.0, 2021. URL UNICEF-Global-Insight-policy-guidance-AI-children-2.0-2021.pdf.
2306.05949#186
Evaluating the Social Impact of Generative AI Systems in Systems and Society
Generative AI systems across modalities, ranging from text, image, audio, and video, have broad social impacts, but there exists no official standard for means of evaluating those impacts and which impacts should be evaluated. We move toward a standard approach in evaluating a generative AI system for any modality, in two overarching categories: what is able to be evaluated in a base system that has no predetermined application and what is able to be evaluated in society. We describe specific social impact categories and how to approach and conduct evaluations in the base technical system, then in people and society. Our framework for a base system defines seven categories of social impact: bias, stereotypes, and representational harms; cultural values and sensitive content; disparate performance; privacy and data protection; financial costs; environmental costs; and data and content moderation labor costs. Suggested methods for evaluation apply to all modalities and analyses of the limitations of existing evaluations serve as a starting point for necessary investment in future evaluations. We offer five overarching categories for what is able to be evaluated in society, each with their own subcategories: trustworthiness and autonomy; inequality, marginalization, and violence; concentration of authority; labor and creativity; and ecosystem and environment. Each subcategory includes recommendations for mitigating harm. We are concurrently crafting an evaluation repository for the AI research community to contribute existing evaluations along the given categories. This version will be updated following a CRAFT session at ACM FAccT 2023.
http://arxiv.org/pdf/2306.05949
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
cs.CY, cs.AI
null
null
cs.CY
20230609
20230612
[ { "id": "2007.04068" }, { "id": "2305.09800" }, { "id": "1908.09203" }, { "id": "2202.05520" }, { "id": "2302.10329" }, { "id": "2107.03374" }, { "id": "2210.06245" }, { "id": "2211.02001" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2301.10226" }, { "id": "2202.02647" }, { "id": "2112.10752" }, { "id": "2206.04615" }, { "id": "2202.00885" }, { "id": "2010.15581" }, { "id": "2305.09941" }, { "id": "2301.04246" }, { "id": "2304.12298" }, { "id": "2203.09509" }, { "id": "2207.14157" }, { "id": "2102.09692" }, { "id": "1804.10999" }, { "id": "2303.11156" }, { "id": "2104.06390" }, { "id": "2002.05651" } ]
2306.06283
186
the prompt as HTML code. The accuracy in the prediction, calculated as the percentage of schema values generated correctly, is shown in Figure 15. In all examples, the OpenAI model was queried with a simple prompt, and it correctly extracted all the data in the table and inserted every value into the corresponding position in the schema, with 100% accuracy, providing as output a JSON object. This model also correctly generated both string and number values according to the type assigned in the schema. However, in two of the examples, the OpenAI model did not generate the JSON object name specified in the schema when the corresponding name was not found in the table, generating only the list of components. This was solved by modifying the object name in the schema to a term that more closely aligned with the content of the table. It appears that when the model could not establish a clear relationship between the provided name and the table content, it disregards that part of the schema during generation. These results indicate that the OpenAI text-davinci-003 model is able to convert scientific data from tables of research papers to a structured format following the approach used in this work, where the desired JSON schema was included in the model prompt. Nevertheless, the model retains a certain
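The accuracy metric described here (percentage of schema values generated correctly) can be sketched as a comparison of flattened JSON objects. `schema_value_accuracy` is a hypothetical helper, not code from the paper:

```python
def schema_value_accuracy(predicted: dict, reference: dict) -> float:
    """Percentage of reference schema values reproduced at the right path."""
    def flatten(obj, prefix=""):
        # Turn nested dicts/lists into (path, leaf value) pairs.
        if isinstance(obj, dict):
            for k, v in obj.items():
                yield from flatten(v, f"{prefix}/{k}")
        elif isinstance(obj, list):
            for i, v in enumerate(obj):
                yield from flatten(v, f"{prefix}[{i}]")
        else:
            yield prefix, obj

    ref = dict(flatten(reference))
    pred = dict(flatten(predicted))
    correct = sum(1 for path, value in ref.items() if pred.get(path) == value)
    return 100.0 * correct / len(ref)

ref = {"sample": "NDC1", "size_nm": 31.23}
pred = {"sample": "NDC1", "size_nm": 31.0}
print(schema_value_accuracy(pred, ref))  # 50.0: one of two values matches
```

Counting matches per schema path (rather than comparing whole strings) credits the model for every value placed in the correct position, which mirrors how the chunk defines accuracy.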
2306.06283#186
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
Large-language models (LLMs) such as GPT-4 caught the interest of many scientists. Recent studies suggested that these models could be useful in chemistry and materials science. To explore these possibilities, we organized a hackathon. This article chronicles the projects built as part of this hackathon. Participants employed LLMs for various applications, including predicting properties of molecules and materials, designing novel interfaces for tools, extracting knowledge from unstructured data, and developing new educational applications. The diverse topics and the fact that working prototypes could be generated in less than two days highlight that LLMs will profoundly impact the future of our fields. The rich collection of ideas and projects also indicates that the applications of LLMs are not limited to materials science and chemistry but offer potential benefits to a wide range of scientific disciplines.
http://arxiv.org/pdf/2306.06283
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
null
null
cond-mat.mtrl-sci
20230609
20230714
[ { "id": "2209.08203" }, { "id": "2212.04450" } ]
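The chunk above describes prompting text-davinci-003 with a table's HTML plus the desired JSON schema, then scoring the percentage of schema values generated correctly. A minimal, self-contained sketch of that workflow is below; it builds the prompt and scores a model reply against the schema types without calling any API (all data and names here are hypothetical illustrations, not values from the paper):

```python
import json

def build_prompt(html_table: str, json_schema: dict) -> str:
    # Prompt template paraphrased from the TableToJson experiment described
    # in the text: table HTML plus the desired JSON schema, JSON-only output.
    return (
        "Generate a JSON object extracting the information from this table "
        f"in html code: {html_table} "
        "Generate the JSON result with the following JSON schema and give "
        f"only the JSON as output: {json.dumps(json_schema)}"
    )

def schema_accuracy(schema_props: dict, output: dict) -> float:
    """Fraction of schema fields present in the output with the right type."""
    type_map = {"string": str, "number": (int, float)}
    ok = sum(
        1 for key, spec in schema_props.items()
        if isinstance(output.get(key), type_map[spec["type"]])
    )
    return ok / len(schema_props)

# Toy example with a stand-in model reply (no API call is made here).
schema = {"properties": {"material": {"type": "string"},
                         "capacity": {"type": "number"}}}
model_reply = '{"material": "MOF-5", "capacity": 3.2}'
parsed = json.loads(model_reply)
print(schema_accuracy(schema["properties"], parsed))  # -> 1.0
```

In the paper's actual runs, this prompt string would be sent to the completion endpoint; the scoring step is the same regardless of which model produced the reply.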
2306.05949
187
[248] UNICEF. URL UNICEF-Global-Insight-policy-guidance-AI-children-2.0-2021.pdf.
[249] United Nations Human Rights Office of the High Commissioner. Enhancing equality and countering discrimination. URL http://romena.ohchr.org/en/node/188.
[250] United States Congress. Algorithmic Accountability Act of 2022, 2022-02-04. URL http://www.congress.gov/.
[251] C. Vaccari and A. Chadwick. Deepfakes and Disinformation: Exploring the Impact of Synthetic Political Video on Deception, Uncertainty, and Trust in News. Social Media + Society, 6(1): 205630512090340, Jan. 2020. ISSN 2056-3051. doi: 10.1177/2056305120903408. URL http://journals.sagepub.com/doi/10.1177/2056305120903408. 39
2306.05949#187
Evaluating the Social Impact of Generative AI Systems in Systems and Society
Generative AI systems across modalities, ranging from text, image, audio, and video, have broad social impacts, but there exists no official standard for means of evaluating those impacts and which impacts should be evaluated. We move toward a standard approach in evaluating a generative AI system for any modality, in two overarching categories: what is able to be evaluated in a base system that has no predetermined application and what is able to be evaluated in society. We describe specific social impact categories and how to approach and conduct evaluations in the base technical system, then in people and society. Our framework for a base system defines seven categories of social impact: bias, stereotypes, and representational harms; cultural values and sensitive content; disparate performance; privacy and data protection; financial costs; environmental costs; and data and content moderation labor costs. Suggested methods for evaluation apply to all modalities and analyses of the limitations of existing evaluations serve as a starting point for necessary investment in future evaluations. We offer five overarching categories for what is able to be evaluated in society, each with their own subcategories: trustworthiness and autonomy; inequality, marginalization, and violence; concentration of authority; labor and creativity; and ecosystem and environment. Each subcategory includes recommendations for mitigating harm. We are concurrently crafting an evaluation repository for the AI research community to contribute existing evaluations along the given categories. This version will be updated following a CRAFT session at ACM FAccT 2023.
http://arxiv.org/pdf/2306.05949
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
cs.CY, cs.AI
null
null
cs.CY
20230609
20230612
[ { "id": "2007.04068" }, { "id": "2305.09800" }, { "id": "1908.09203" }, { "id": "2202.05520" }, { "id": "2302.10329" }, { "id": "2107.03374" }, { "id": "2210.06245" }, { "id": "2211.02001" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2301.10226" }, { "id": "2202.02647" }, { "id": "2112.10752" }, { "id": "2206.04615" }, { "id": "2202.00885" }, { "id": "2010.15581" }, { "id": "2305.09941" }, { "id": "2301.04246" }, { "id": "2304.12298" }, { "id": "2203.09509" }, { "id": "2207.14157" }, { "id": "2102.09692" }, { "id": "1804.10999" }, { "id": "2303.11156" }, { "id": "2104.06390" }, { "id": "2002.05651" } ]
2306.06283
187
papers to a structured format following the approach used in this work, where the desired JSON schema was included in the model prompt. Nevertheless, the model retains a certain degree of freedom to modify the requested schema if it considers that something may be wrong.

Columns: text-davinci-003 (schema in prompt) | text-davinci-003 + jsonformer
prompt = "Generate a JSON object extracting the information from this table in html code: " + HTML_table + "Generate the JSON result with the following JSON schema and give only the JSON as output: " + JSON_schema

Results (JSON follows schema / % values extracted ok):
- carbon materials for CO2 adsorption: ✓ 100% | ✓ 100%
- MOFs properties: ✓ 100% | ✓ 100%
- supercapacitor performance: ✓ 100% | ✓ 100%
- catalysts for CO2 conversion: ✗→✓ (a) 100% | ✓ 94% (b)
- biomass properties: ✓ 100% | ✓ 100%
- anode materials for SOFCs: ✗→✓ (a) 100% | ✓ 80%→100% (c)
- perovskite cathodes for SOFCs: ✓ 100% (d) | ✓ 46%→60%→86–100%
- MOFs properties, providing a wrong schema to the model: ✓ 100% (new schema is created | ✓ (values in table are inserted
2306.06283#187
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
Large-language models (LLMs) such as GPT-4 caught the interest of many scientists. Recent studies suggested that these models could be useful in chemistry and materials science. To explore these possibilities, we organized a hackathon. This article chronicles the projects built as part of this hackathon. Participants employed LLMs for various applications, including predicting properties of molecules and materials, designing novel interfaces for tools, extracting knowledge from unstructured data, and developing new educational applications. The diverse topics and the fact that working prototypes could be generated in less than two days highlight that LLMs will profoundly impact the future of our fields. The rich collection of ideas and projects also indicates that the applications of LLMs are not limited to materials science and chemistry but offer potential benefits to a wide range of scientific disciplines.
http://arxiv.org/pdf/2306.06283
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
null
null
cond-mat.mtrl-sci
20230609
20230714
[ { "id": "2209.08203" }, { "id": "2212.04450" } ]
2306.05949
188
39 [252] M. Veale and R. Binns. Fairer machine learning in the real world: Mitigating discrimination without collecting sensitive data. 4(2):205395171743530, 2017-12. ISSN 2053-9517. doi: 10.1177/2053951717743530. URL http://journals.sagepub.com/doi/10.1177/2053951717743530. [253] A. Venigalla and L. Li. Mosaic LLMs (Part 2): GPT-3 quality for < $500k, 2022. URL https://www.mosaicml.com/blog/gpt-3-quality-for-500k. [254] J. Vincent. AI art tools Stable Diffusion and Midjourney targeted with copyright lawsuit - The Verge, 2023. URL https://www.theverge.com/2023/1/16/23557098/generative-ai-art-copyright-legal-lawsuit-stable-diffusion-midjourney-deviantart.
2306.05949#188
Evaluating the Social Impact of Generative AI Systems in Systems and Society
Generative AI systems across modalities, ranging from text, image, audio, and video, have broad social impacts, but there exists no official standard for means of evaluating those impacts and which impacts should be evaluated. We move toward a standard approach in evaluating a generative AI system for any modality, in two overarching categories: what is able to be evaluated in a base system that has no predetermined application and what is able to be evaluated in society. We describe specific social impact categories and how to approach and conduct evaluations in the base technical system, then in people and society. Our framework for a base system defines seven categories of social impact: bias, stereotypes, and representational harms; cultural values and sensitive content; disparate performance; privacy and data protection; financial costs; environmental costs; and data and content moderation labor costs. Suggested methods for evaluation apply to all modalities and analyses of the limitations of existing evaluations serve as a starting point for necessary investment in future evaluations. We offer five overarching categories for what is able to be evaluated in society, each with their own subcategories: trustworthiness and autonomy; inequality, marginalization, and violence; concentration of authority; labor and creativity; and ecosystem and environment. Each subcategory includes recommendations for mitigating harm. We are concurrently crafting an evaluation repository for the AI research community to contribute existing evaluations along the given categories. This version will be updated following a CRAFT session at ACM FAccT 2023.
http://arxiv.org/pdf/2306.05949
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
cs.CY, cs.AI
null
null
cs.CY
20230609
20230612
[ { "id": "2007.04068" }, { "id": "2305.09800" }, { "id": "1908.09203" }, { "id": "2202.05520" }, { "id": "2302.10329" }, { "id": "2107.03374" }, { "id": "2210.06245" }, { "id": "2211.02001" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2301.10226" }, { "id": "2202.02647" }, { "id": "2112.10752" }, { "id": "2206.04615" }, { "id": "2202.00885" }, { "id": "2010.15581" }, { "id": "2305.09941" }, { "id": "2301.04246" }, { "id": "2304.12298" }, { "id": "2203.09509" }, { "id": "2207.14157" }, { "id": "2102.09692" }, { "id": "1804.10999" }, { "id": "2303.11156" }, { "id": "2104.06390" }, { "id": "2002.05651" } ]
2306.06283
188
46%→60%→86–100%
- MOFs properties, providing a wrong schema to the model: ✓ 100% (new schema is created following the table) | ✓ (values in table are inserted in the provided wrong schema)

a | the OpenAI model did not generate the JSON object name provided in the schema when this name was not included in the table, generating only the list of components; this was solved by using an object name closer to the table contents
b | errors in the generation of compound formulas due to the "-" character (e.g., "Fe\u2013" and "MnFe\u2013N2" instead of "Fe-PYL" and "MnFe-N2")
c | errors in generating numbers with powers (e.g., 9.161 × 10^-); this was solved with an explanation in the prompt: "if you find numbers as 1.025 x 10<sup>-3</sup>, this means 1.025e-3"
d | for this table (below) the OpenAI model generated all values correctly; jsonformer failed to generate molecule names (e.g., "Pr1-xSrxCo1-"), strings with the "." character (e.g.,
2306.06283#188
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
Large-language models (LLMs) such as GPT-4 caught the interest of many scientists. Recent studies suggested that these models could be useful in chemistry and materials science. To explore these possibilities, we organized a hackathon. This article chronicles the projects built as part of this hackathon. Participants employed LLMs for various applications, including predicting properties of molecules and materials, designing novel interfaces for tools, extracting knowledge from unstructured data, and developing new educational applications. The diverse topics and the fact that working prototypes could be generated in less than two days highlight that LLMs will profoundly impact the future of our fields. The rich collection of ideas and projects also indicates that the applications of LLMs are not limited to materials science and chemistry but offer potential benefits to a wide range of scientific disciplines.
http://arxiv.org/pdf/2306.06283
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
null
null
cond-mat.mtrl-sci
20230609
20230714
[ { "id": "2209.08203" }, { "id": "2212.04450" } ]
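Footnote (c) in the chunk above notes that numbers written with HTML-superscript powers (e.g., "1.025 x 10<sup>-3</sup>") confused generation until the prompt explained they mean "1.025e-3". An alternative, shown here as a hedged sketch rather than anything from the paper, is to normalize such cells before prompting; the regex and function name are illustrative assumptions:

```python
import re

# Matches a mantissa, an 'x'/'×'/'*' multiplication sign, '10', and an
# HTML-superscript exponent such as <sup>-3</sup>.
SUP_POWER = re.compile(
    r"(?P<mant>\d+(?:\.\d+)?)\s*[x×*]\s*10\s*<sup>(?P<exp>-?\d+)</sup>"
)

def normalize_powers(cell: str) -> str:
    """Rewrite '1.025 x 10<sup>-3</sup>' as '1.025e-3' before prompting,
    so the model (or a plain JSON parser) sees an unambiguous number."""
    return SUP_POWER.sub(lambda m: f"{m.group('mant')}e{m.group('exp')}", cell)

print(normalize_powers("1.025 x 10<sup>-3</sup>"))  # -> 1.025e-3
print(float(normalize_powers("9.161 x 10<sup>-7</sup>")))  # parses cleanly
```

Pre-normalizing the HTML this way would make the prompt-level explanation unnecessary for this particular error class, though it does not address the "-" character issue described in footnote (b).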
2306.05949
189
[255] J. Vincent. Twitter taught Microsoft’s AI chatbot to be a racist asshole in less than a day - The Verge, 2016. URL https://www.theverge.com/2016/3/24/11297050/ tay-microsoft-chatbot-racist. [256] A. Wang, A. Singh, J. Michael, F. Hill, O. Levy, and S. Bowman. GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353–355. Association for Computational Linguistics, 2018. doi: 10.18653/v1/W18-5446. URL http://aclweb.org/anthology/W18-5446.
2306.05949#189
Evaluating the Social Impact of Generative AI Systems in Systems and Society
Generative AI systems across modalities, ranging from text, image, audio, and video, have broad social impacts, but there exists no official standard for means of evaluating those impacts and which impacts should be evaluated. We move toward a standard approach in evaluating a generative AI system for any modality, in two overarching categories: what is able to be evaluated in a base system that has no predetermined application and what is able to be evaluated in society. We describe specific social impact categories and how to approach and conduct evaluations in the base technical system, then in people and society. Our framework for a base system defines seven categories of social impact: bias, stereotypes, and representational harms; cultural values and sensitive content; disparate performance; privacy and data protection; financial costs; environmental costs; and data and content moderation labor costs. Suggested methods for evaluation apply to all modalities and analyses of the limitations of existing evaluations serve as a starting point for necessary investment in future evaluations. We offer five overarching categories for what is able to be evaluated in society, each with their own subcategories: trustworthiness and autonomy; inequality, marginalization, and violence; concentration of authority; labor and creativity; and ecosystem and environment. Each subcategory includes recommendations for mitigating harm. We are concurrently crafting an evaluation repository for the AI research community to contribute existing evaluations along the given categories. This version will be updated following a CRAFT session at ACM FAccT 2023.
http://arxiv.org/pdf/2306.05949
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
cs.CY, cs.AI
null
null
cs.CY
20230609
20230612
[ { "id": "2007.04068" }, { "id": "2305.09800" }, { "id": "1908.09203" }, { "id": "2202.05520" }, { "id": "2302.10329" }, { "id": "2107.03374" }, { "id": "2210.06245" }, { "id": "2211.02001" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2301.10226" }, { "id": "2202.02647" }, { "id": "2112.10752" }, { "id": "2206.04615" }, { "id": "2202.00885" }, { "id": "2010.15581" }, { "id": "2305.09941" }, { "id": "2301.04246" }, { "id": "2304.12298" }, { "id": "2203.09509" }, { "id": "2207.14157" }, { "id": "2102.09692" }, { "id": "1804.10999" }, { "id": "2303.11156" }, { "id": "2104.06390" }, { "id": "2002.05651" } ]
2306.06283
189
correctly; jsonformer failed to generate molecule names (e.g., "Pr1-xSrxCo1-"), strings with the "." character (e.g., "129\u2013369d") and numbers with powers (e.g., "5.93\u00d710"); after solving the generation of wrong names the accuracy increased from 46% to 60%; after solving the generation of numbers with powers it increased up to 86%; but the issues with the "-" character could not be solved systematically, and the resulting accuracy varied between 86% and 100% over several attempts of JSON generation.

Composition | σ (S cm⁻¹) | σi (S cm⁻¹) | CTE (10⁻⁶ K⁻¹) | References
La1-xSrxMnO3 | 130–300 | 5.93 × 10⁻⁷ | 11–13 | [90]
La1-xSrxCoO3 | 1200–1600 | 0.22 | 19–20 | [91,92]
La1-xSrxFeO3 | 129–369 | 0.205–5.6 × 10⁻³ | 12.2–16.3 | [93,94,95]
La1-xSrxCo1-yFeyO3 | 87–1050 | 0.058–8 × 10⁻³ | 14.8–21.4 | [96,97]
Pr1-xSrxCo1-yFeyO3 | 76–950 | 1.5 ×
2306.06283#189
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
Large-language models (LLMs) such as GPT-4 caught the interest of many scientists. Recent studies suggested that these models could be useful in chemistry and materials science. To explore these possibilities, we organized a hackathon. This article chronicles the projects built as part of this hackathon. Participants employed LLMs for various applications, including predicting properties of molecules and materials, designing novel interfaces for tools, extracting knowledge from unstructured data, and developing new educational applications. The diverse topics and the fact that working prototypes could be generated in less than two days highlight that LLMs will profoundly impact the future of our fields. The rich collection of ideas and projects also indicates that the applications of LLMs are not limited to materials science and chemistry but offer potential benefits to a wide range of scientific disciplines.
http://arxiv.org/pdf/2306.06283
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
null
null
cond-mat.mtrl-sci
20230609
20230714
[ { "id": "2209.08203" }, { "id": "2212.04450" } ]
2306.05949
190
[257] A. Wang, V. V. Ramaswamy, and O. Russakovsky. Towards Intersectionality in Machine Learning: Including More Identities, Handling Underrepresentation, and Performing Evaluation. In 2022 ACM Conference on Fairness, Accountability, and Transparency, pages 336–349. ACM, 2022-06-21. ISBN 978-1-4503-9352-2. doi: 10.1145/3531146.3533101. URL https://dl.acm.org/doi/10.1145/3531146.3533101. [258] L. Weidinger, J. Mellor, M. Rauh, C. Griffin, J. Uesato, P.-S. Huang, M. Cheng, M. Glaese, B. Balle, A. Kasirzadeh, Z. Kenton, S. Brown, W. Hawkins, T. Stepleton, C. Biles, A. Birhane, J. Haas, L. Rimell, L. A. Hendricks, W. Isaac, S. Legassick, G. Irving, and I. Gabriel. Ethical and social risks of harm from Language Models, 2021-12-08. URL http://arxiv.org/abs/2112.04359.
2306.05949#190
Evaluating the Social Impact of Generative AI Systems in Systems and Society
Generative AI systems across modalities, ranging from text, image, audio, and video, have broad social impacts, but there exists no official standard for means of evaluating those impacts and which impacts should be evaluated. We move toward a standard approach in evaluating a generative AI system for any modality, in two overarching categories: what is able to be evaluated in a base system that has no predetermined application and what is able to be evaluated in society. We describe specific social impact categories and how to approach and conduct evaluations in the base technical system, then in people and society. Our framework for a base system defines seven categories of social impact: bias, stereotypes, and representational harms; cultural values and sensitive content; disparate performance; privacy and data protection; financial costs; environmental costs; and data and content moderation labor costs. Suggested methods for evaluation apply to all modalities and analyses of the limitations of existing evaluations serve as a starting point for necessary investment in future evaluations. We offer five overarching categories for what is able to be evaluated in society, each with their own subcategories: trustworthiness and autonomy; inequality, marginalization, and violence; concentration of authority; labor and creativity; and ecosystem and environment. Each subcategory includes recommendations for mitigating harm. We are concurrently crafting an evaluation repository for the AI research community to contribute existing evaluations along the given categories. This version will be updated following a CRAFT session at ACM FAccT 2023.
http://arxiv.org/pdf/2306.05949
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
cs.CY, cs.AI
null
null
cs.CY
20230609
20230612
[ { "id": "2007.04068" }, { "id": "2305.09800" }, { "id": "1908.09203" }, { "id": "2202.05520" }, { "id": "2302.10329" }, { "id": "2107.03374" }, { "id": "2210.06245" }, { "id": "2211.02001" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2301.10226" }, { "id": "2202.02647" }, { "id": "2112.10752" }, { "id": "2206.04615" }, { "id": "2202.00885" }, { "id": "2010.15581" }, { "id": "2305.09941" }, { "id": "2301.04246" }, { "id": "2304.12298" }, { "id": "2203.09509" }, { "id": "2207.14157" }, { "id": "2102.09692" }, { "id": "1804.10999" }, { "id": "2303.11156" }, { "id": "2104.06390" }, { "id": "2002.05651" } ]
2306.06283
190
87–1050 | 0.058–8 × 10⁻³ | 14.8–21.4 | [96,97]
Pr1-xSrxCo1-yFeyO3 | 76–950 | 1.5 × 10⁻⁹–4.4 × 10⁻⁵ | 12.8–21.3 | [95,98]

Figure 15. TableToJson. Results of the structured JSON generation of tables contained in scientific articles. Two approaches are compared: (i) the use of an OpenAI model prompted with the desired JSON schema, and (ii) the use of an OpenAI model together with jsonformer. The second approach used to generate structured information was a version of the jsonformer approach adapted for use with OpenAI LLMs (https://github.com/martinezp1/jsonformer/tree/add-openai), with the inclusion of the table text as an input parameter to the jsonformer function.
2306.06283#190
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
Large-language models (LLMs) such as GPT-4 caught the interest of many scientists. Recent studies suggested that these models could be useful in chemistry and materials science. To explore these possibilities, we organized a hackathon. This article chronicles the projects built as part of this hackathon. Participants employed LLMs for various applications, including predicting properties of molecules and materials, designing novel interfaces for tools, extracting knowledge from unstructured data, and developing new educational applications. The diverse topics and the fact that working prototypes could be generated in less than two days highlight that LLMs will profoundly impact the future of our fields. The rich collection of ideas and projects also indicates that the applications of LLMs are not limited to materials science and chemistry but offer potential benefits to a wide range of scientific disciplines.
http://arxiv.org/pdf/2306.06283
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
null
null
cond-mat.mtrl-sci
20230609
20230714
[ { "id": "2209.08203" }, { "id": "2212.04450" } ]
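The Figure 15 caption above contrasts schema-in-prompt generation with the jsonformer approach, in which the caller emits all structural JSON tokens itself and asks the model only for individual values. A toy re-implementation of that idea is sketched below; the `ask_model` callback is a hypothetical stand-in for the LLM call, and the schema walk covers only flat string/number properties rather than jsonformer's full feature set:

```python
import json

def generate_structured(schema: dict, ask_model) -> dict:
    """Toy sketch of the jsonformer idea: the scaffolding walks the JSON
    schema and writes every key and delimiter itself, so the output is
    schema-valid by construction; `ask_model` supplies only the values."""
    out = {}
    for key, spec in schema["properties"].items():
        raw = ask_model(key)           # the model fills in one value
        if spec["type"] == "number":
            out[key] = float(raw)      # coerce to the declared type
        else:
            out[key] = str(raw)
    return out

# Stand-in "model": answers drawn from a table row parsed elsewhere.
row = {"composition": "La1-xSrxMnO3", "cte": "11"}
result = generate_structured(
    {"properties": {"composition": {"type": "string"},
                    "cte": {"type": "number"}}},
    lambda key: row[key],
)
print(json.dumps(result))  # valid JSON that follows the schema
```

This illustrates why the jsonformer column in Figure 15 always "follows schema": malformed structure is impossible, and the remaining errors (molecule names, powers, the "-" character) are all value-level generation failures.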
2306.05949
191
[259] L. Weidinger, K. McKee, R., R. Everett, S. Huang, T. Zhu, M. Chadwick, C. Summerfield, and I. Gabriel. Using the Veil of Ignorance to align AI systems with principles of justice, 2023. URL https://www.pnas.org/doi/10.1073/pnas.2213709120. [260] B. Weiser. Here’s What Happens When Your Lawyer Uses ChatGPT. The New York Times, May 2023. ISSN 0362-4331. URL https://www.nytimes.com/2023/05/27/nyregion/ avianca-airline-lawsuit-chatgpt.html. [261] R. Weitzer. Racial discrimination in the criminal justice system: Findings and problems in the literature. Journal of Criminal Justice, 24(4):309–322, Jan. 1996. ISSN 0047-2352. doi: 10.1016/0047-2352(96)00015-3. URL https://www.sciencedirect.com/science/ article/pii/0047235296000153.
2306.05949#191
Evaluating the Social Impact of Generative AI Systems in Systems and Society
Generative AI systems across modalities, ranging from text, image, audio, and video, have broad social impacts, but there exists no official standard for means of evaluating those impacts and which impacts should be evaluated. We move toward a standard approach in evaluating a generative AI system for any modality, in two overarching categories: what is able to be evaluated in a base system that has no predetermined application and what is able to be evaluated in society. We describe specific social impact categories and how to approach and conduct evaluations in the base technical system, then in people and society. Our framework for a base system defines seven categories of social impact: bias, stereotypes, and representational harms; cultural values and sensitive content; disparate performance; privacy and data protection; financial costs; environmental costs; and data and content moderation labor costs. Suggested methods for evaluation apply to all modalities and analyses of the limitations of existing evaluations serve as a starting point for necessary investment in future evaluations. We offer five overarching categories for what is able to be evaluated in society, each with their own subcategories: trustworthiness and autonomy; inequality, marginalization, and violence; concentration of authority; labor and creativity; and ecosystem and environment. Each subcategory includes recommendations for mitigating harm. We are concurrently crafting an evaluation repository for the AI research community to contribute existing evaluations along the given categories. This version will be updated following a CRAFT session at ACM FAccT 2023.
http://arxiv.org/pdf/2306.05949
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
cs.CY, cs.AI
null
null
cs.CY
20230609
20230612
[ { "id": "2007.04068" }, { "id": "2305.09800" }, { "id": "1908.09203" }, { "id": "2202.05520" }, { "id": "2302.10329" }, { "id": "2107.03374" }, { "id": "2210.06245" }, { "id": "2211.02001" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2301.10226" }, { "id": "2202.02647" }, { "id": "2112.10752" }, { "id": "2206.04615" }, { "id": "2202.00885" }, { "id": "2010.15581" }, { "id": "2305.09941" }, { "id": "2301.04246" }, { "id": "2304.12298" }, { "id": "2203.09509" }, { "id": "2207.14157" }, { "id": "2102.09692" }, { "id": "1804.10999" }, { "id": "2303.11156" }, { "id": "2104.06390" }, { "id": "2002.05651" } ]
2306.06283
191
Columns: text-davinci-003 (schema in prompt) | text-davinci-003 + jsonformer
prompt = "Generate a JSON object extracting the information from this table in html code: " + HTML_table + "Generate the JSON result with the following JSON schema and give only the JSON as output: " + JSON_schema

Results (JSON follows schema / % values extracted ok):
- carbon materials for CO2 adsorption: ✓ 100% | ✓ 100%
- MOFs properties: ✓ 100% | ✓ 100%
- supercapacitor performance: ✓ 100% | ✓ 100%
- catalysts for CO2 conversion: ✗→✓ (a) 100% | ✓ 94% (b)
- biomass properties: ✓ 100% | ✓ 100%
- anode materials for SOFCs: ✗→✓ (a) 100% | ✓ 80%→100% (c)
- perovskite cathodes for SOFCs: ✓ 100% (d) | ✓ 46%→60%→86–100%
- MOFs properties, providing a wrong schema to the model: ✓ 100% (new schema is created following the table) | ✓ (values in table are inserted in the provided wrong schema)
2306.06283#191
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
Large-language models (LLMs) such as GPT-4 caught the interest of many scientists. Recent studies suggested that these models could be useful in chemistry and materials science. To explore these possibilities, we organized a hackathon. This article chronicles the projects built as part of this hackathon. Participants employed LLMs for various applications, including predicting properties of molecules and materials, designing novel interfaces for tools, extracting knowledge from unstructured data, and developing new educational applications. The diverse topics and the fact that working prototypes could be generated in less than two days highlight that LLMs will profoundly impact the future of our fields. The rich collection of ideas and projects also indicates that the applications of LLMs are not limited to materials science and chemistry but offer potential benefits to a wide range of scientific disciplines.
http://arxiv.org/pdf/2306.06283
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
null
null
cond-mat.mtrl-sci
20230609
20230714
[ { "id": "2209.08203" }, { "id": "2212.04450" } ]
2306.05949
192
[262] D. Wen, S. M. Khan, A. Ji Xu, H. Ibrahim, L. Smith, J. Caballero, L. Zepeda, C. de Blas Perez, A. K. Denniston, X. Liu, and R. N. Matin. Characteristics of publicly available skin cancer image datasets: a systematic review. The Lancet. Digital Health, 4(1):e64–e74, Jan. 2022. ISSN 2589-7500. doi: 10.1016/S2589-7500(21)00252-1. [263] F. Westin and S. Chiasson. Opt out of privacy or "go home": understanding reluctant privacy behaviours through the FoMO-centric design paradigm. In Proceedings of the New Security Paradigms Workshop, NSPW ’19, pages 57–67, New York, NY, USA, Jan. 2020. Association for Computing Machinery. ISBN 978-1-4503-7647-1. doi: 10.1145/3368860.3368865. URL https://doi.org/10.1145/3368860.3368865.
2306.05949#192
Evaluating the Social Impact of Generative AI Systems in Systems and Society
Generative AI systems across modalities, ranging from text, image, audio, and video, have broad social impacts, but there exists no official standard for means of evaluating those impacts and which impacts should be evaluated. We move toward a standard approach in evaluating a generative AI system for any modality, in two overarching categories: what is able to be evaluated in a base system that has no predetermined application and what is able to be evaluated in society. We describe specific social impact categories and how to approach and conduct evaluations in the base technical system, then in people and society. Our framework for a base system defines seven categories of social impact: bias, stereotypes, and representational harms; cultural values and sensitive content; disparate performance; privacy and data protection; financial costs; environmental costs; and data and content moderation labor costs. Suggested methods for evaluation apply to all modalities and analyses of the limitations of existing evaluations serve as a starting point for necessary investment in future evaluations. We offer five overarching categories for what is able to be evaluated in society, each with their own subcategories: trustworthiness and autonomy; inequality, marginalization, and violence; concentration of authority; labor and creativity; and ecosystem and environment. Each subcategory includes recommendations for mitigating harm. We are concurrently crafting an evaluation repository for the AI research community to contribute existing evaluations along the given categories. This version will be updated following a CRAFT session at ACM FAccT 2023.
http://arxiv.org/pdf/2306.05949
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
cs.CY, cs.AI
null
null
cs.CY
20230609
20230612
[ { "id": "2007.04068" }, { "id": "2305.09800" }, { "id": "1908.09203" }, { "id": "2202.05520" }, { "id": "2302.10329" }, { "id": "2107.03374" }, { "id": "2210.06245" }, { "id": "2211.02001" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2301.10226" }, { "id": "2202.02647" }, { "id": "2112.10752" }, { "id": "2206.04615" }, { "id": "2202.00885" }, { "id": "2010.15581" }, { "id": "2305.09941" }, { "id": "2301.04246" }, { "id": "2304.12298" }, { "id": "2203.09509" }, { "id": "2207.14157" }, { "id": "2102.09692" }, { "id": "1804.10999" }, { "id": "2303.11156" }, { "id": "2104.06390" }, { "id": "2002.05651" } ]
2306.05949
193
[264] C. C. Williams and A. Efendic. Evaluating the relationship between marginalization and participation in undeclared work: lessons from Bosnia and Herzegovina. Southeast European and Black Sea Studies, 21(3):481–499, July 2021. ISSN 1468-3857. doi: 10.1080/14683857.2021.1928419. URL https://doi.org/10.1080/14683857.2021.1928419. Publisher: Routledge. [265] L. Winner. Do Artifacts Have Politics? Daedalus, 109(1), 1980. URL http://www.jstor.org/stable/20024652. [266] A. Wolfers. "National Security" as an Ambiguous Symbol. Political Science Quarterly, 67(4):481–502, 1952. ISSN 0032-3195. doi: 10.2307/2145138. URL https://www.jstor.org/stable/2145138. Publisher: [Academy of Political Science, Wiley].
2306.05949#193
Evaluating the Social Impact of Generative AI Systems in Systems and Society
Generative AI systems across modalities, ranging from text, image, audio, and video, have broad social impacts, but there exists no official standard for means of evaluating those impacts and which impacts should be evaluated. We move toward a standard approach in evaluating a generative AI system for any modality, in two overarching categories: what is able to be evaluated in a base system that has no predetermined application and what is able to be evaluated in society. We describe specific social impact categories and how to approach and conduct evaluations in the base technical system, then in people and society. Our framework for a base system defines seven categories of social impact: bias, stereotypes, and representational harms; cultural values and sensitive content; disparate performance; privacy and data protection; financial costs; environmental costs; and data and content moderation labor costs. Suggested methods for evaluation apply to all modalities and analyses of the limitations of existing evaluations serve as a starting point for necessary investment in future evaluations. We offer five overarching categories for what is able to be evaluated in society, each with their own subcategories: trustworthiness and autonomy; inequality, marginalization, and violence; concentration of authority; labor and creativity; and ecosystem and environment. Each subcategory includes recommendations for mitigating harm. We are concurrently crafting an evaluation repository for the AI research community to contribute existing evaluations along the given categories. This version will be updated following a CRAFT session at ACM FAccT 2023.
http://arxiv.org/pdf/2306.05949
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
cs.CY, cs.AI
null
null
cs.CY
20230609
20230612
[ { "id": "2007.04068" }, { "id": "2305.09800" }, { "id": "1908.09203" }, { "id": "2202.05520" }, { "id": "2302.10329" }, { "id": "2107.03374" }, { "id": "2210.06245" }, { "id": "2211.02001" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2301.10226" }, { "id": "2202.02647" }, { "id": "2112.10752" }, { "id": "2206.04615" }, { "id": "2202.00885" }, { "id": "2010.15581" }, { "id": "2305.09941" }, { "id": "2301.04246" }, { "id": "2304.12298" }, { "id": "2203.09509" }, { "id": "2207.14157" }, { "id": "2102.09692" }, { "id": "1804.10999" }, { "id": "2303.11156" }, { "id": "2104.06390" }, { "id": "2002.05651" } ]
2306.06283
193
Detection of strings indicating missing values was also added when the schema type is number, as “nan”, “NaN”, “NA”, and “NAN” entries are common in research data tables. The OpenAI text-davinci-003 model was used. In this case, the model was prompted with the desired JSON schema and the HTML code of the studied table. Jsonformer reads the keys from the JSON schema and only delegates the generation of the value tokens to the language model, ensuring that a valid JSON is generated by the LLM. For this approach, the accuracy in the prediction is also shown in Figure 15. The use of the OpenAI text-davinci-003 model together with jsonformer generated valid JSON objects with 100% accuracy for most of the tables evaluated using a simple prompt. Figure 16 shows the results of one of the examples studied, where using a simple descriptive prompt denoting the type of input text, this approach correctly generated structured JSON data from a table with a complex header. However, it was detected that when the values to be generated contain special characters or specific texts, a more detailed prompt with some simple examples, but without finetuning, can be necessary to provide good results, as
2306.06283#193
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
Large-language models (LLMs) such as GPT-4 caught the interest of many scientists. Recent studies suggested that these models could be useful in chemistry and materials science. To explore these possibilities, we organized a hackathon. This article chronicles the projects built as part of this hackathon. Participants employed LLMs for various applications, including predicting properties of molecules and materials, designing novel interfaces for tools, extracting knowledge from unstructured data, and developing new educational applications. The diverse topics and the fact that working prototypes could be generated in less than two days highlight that LLMs will profoundly impact the future of our fields. The rich collection of ideas and projects also indicates that the applications of LLMs are not limited to materials science and chemistry but offer potential benefits to a wide range of scientific disciplines.
http://arxiv.org/pdf/2306.06283
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
null
null
cond-mat.mtrl-sci
20230609
20230714
[ { "id": "2209.08203" }, { "id": "2212.04450" } ]
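The schema-in-prompt setup described in the chunk above can be sketched as follows. This is a minimal illustration, not code from the paper: the toy HTML table, the schema contents, and the `build_prompt` helper are assumptions; only the two quoted prompt fragments come from the text.

```python
import json

# Hypothetical inputs: a tiny HTML table and a target JSON schema.
# The study passed full tables taken from scientific articles.
html_table = (
    "<table><tr><th>Sample</th><th>C (wt%)</th></tr>"
    "<tr><td>AS</td><td>49.44</td></tr></table>"
)
json_schema = {
    "type": "object",
    "properties": {
        "Sample": {"type": "string"},
        "C (wt%)": {"type": "number"},
    },
}

def build_prompt(table_html: str, schema: dict) -> str:
    """Assemble the two-part prompt quoted in the text:
    table extraction instruction + schema instruction."""
    return (
        "Generate a JSON object extracting the information "
        f"from this table in html code: {table_html} "
        "Generate the JSON result with the following JSON schema "
        f"and give only the JSON as output: {json.dumps(schema)}"
    )

prompt = build_prompt(html_table, json_schema)
```

In the jsonformer variant, this prompt is handed to the library, which walks the schema keys itself and asks the model only for the value tokens, so the output is valid JSON by construction.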
2306.05949
194
[267] C.-J. Wu, R. Raghavendra, U. Gupta, B. Acun, N. Ardalani, K. Maeng, G. Chang, F. A. Behram, J. Huang, C. Bai, M. Gschwind, A. Gupta, M. Ott, A. Melnikov, S. Candido, D. Brooks, G. Chauhan, B. Lee, H.-H. S. Lee, B. Akyildiz, M. Balandat, J. Spisak, R. Jain, M. Rabbat, and K. Hazelwood. Sustainable AI: Environmental Implications, Challenges and Opportunities. URL http://arxiv.org/abs/2111.00364. [268] ’He Would Still Be Here’: Man Dies by Suicide After Talking with AI Chatbot, Widow Says, Mar. 2023. URL https://www.vice.com/en/article/pkadgm/man-dies-by-suicide-after-talking-with-ai-chatbot-widow-says.
2306.05949#194
Evaluating the Social Impact of Generative AI Systems in Systems and Society
Generative AI systems across modalities, ranging from text, image, audio, and video, have broad social impacts, but there exists no official standard for means of evaluating those impacts and which impacts should be evaluated. We move toward a standard approach in evaluating a generative AI system for any modality, in two overarching categories: what is able to be evaluated in a base system that has no predetermined application and what is able to be evaluated in society. We describe specific social impact categories and how to approach and conduct evaluations in the base technical system, then in people and society. Our framework for a base system defines seven categories of social impact: bias, stereotypes, and representational harms; cultural values and sensitive content; disparate performance; privacy and data protection; financial costs; environmental costs; and data and content moderation labor costs. Suggested methods for evaluation apply to all modalities and analyses of the limitations of existing evaluations serve as a starting point for necessary investment in future evaluations. We offer five overarching categories for what is able to be evaluated in society, each with their own subcategories: trustworthiness and autonomy; inequality, marginalization, and violence; concentration of authority; labor and creativity; and ecosystem and environment. Each subcategory includes recommendations for mitigating harm. We are concurrently crafting an evaluation repository for the AI research community to contribute existing evaluations along the given categories. This version will be updated following a CRAFT session at ACM FAccT 2023.
http://arxiv.org/pdf/2306.05949
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
cs.CY, cs.AI
null
null
cs.CY
20230609
20230612
[ { "id": "2007.04068" }, { "id": "2305.09800" }, { "id": "1908.09203" }, { "id": "2202.05520" }, { "id": "2302.10329" }, { "id": "2107.03374" }, { "id": "2210.06245" }, { "id": "2211.02001" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2301.10226" }, { "id": "2202.02647" }, { "id": "2112.10752" }, { "id": "2206.04615" }, { "id": "2202.00885" }, { "id": "2010.15581" }, { "id": "2305.09941" }, { "id": "2301.04246" }, { "id": "2304.12298" }, { "id": "2203.09509" }, { "id": "2207.14157" }, { "id": "2102.09692" }, { "id": "1804.10999" }, { "id": "2303.11156" }, { "id": "2104.06390" }, { "id": "2002.05651" } ]
2306.06283
194
be generated contain special characters or specific texts, a more detailed prompt with some simple examples, but without finetuning, can be necessary to provide good results, as shown in Figure 17 for a special numeric notation that included power numbers. [Figure residue: a biomass-properties table (ultimate analysis in wt% db, proximate analysis, HHV in MJ/kg db, H/O ratio, He density in g/cm3 for samples AS, CHE, CHET, and others) shown alongside the JSON objects generated for samples "CHET" and "AS".]
2306.06283#194
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
Large-language models (LLMs) such as GPT-4 caught the interest of many scientists. Recent studies suggested that these models could be useful in chemistry and materials science. To explore these possibilities, we organized a hackathon. This article chronicles the projects built as part of this hackathon. Participants employed LLMs for various applications, including predicting properties of molecules and materials, designing novel interfaces for tools, extracting knowledge from unstructured data, and developing new educational applications. The diverse topics and the fact that working prototypes could be generated in less than two days highlight that LLMs will profoundly impact the future of our fields. The rich collection of ideas and projects also indicates that the applications of LLMs are not limited to materials science and chemistry but offer potential benefits to a wide range of scientific disciplines.
http://arxiv.org/pdf/2306.06283
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
null
null
cond-mat.mtrl-sci
20230609
20230714
[ { "id": "2209.08203" }, { "id": "2212.04450" } ]
2306.05949
195
[269] A. Xu, E. Pathak, E. Wallace, S. Gururangan, M. Sap, and D. Klein. Detoxifying Language Models Risks Marginalizing Minority Voices, Apr. 2021. URL http://arxiv.org/abs/ 2104.06390. arXiv:2104.06390 [cs]. [270] J. Zaller and S. Feldman. A Simple Theory of the Survey Response: Answering Questions ver- sus Revealing Preferences. American Journal of Political Science, 36(3):579–616, 1992. ISSN 0092-5853. doi: 10.2307/2111583. URL https://www.jstor.org/stable/2111583. Publisher: [Midwest Political Science Association, Wiley].
2306.05949#195
Evaluating the Social Impact of Generative AI Systems in Systems and Society
Generative AI systems across modalities, ranging from text, image, audio, and video, have broad social impacts, but there exists no official standard for means of evaluating those impacts and which impacts should be evaluated. We move toward a standard approach in evaluating a generative AI system for any modality, in two overarching categories: what is able to be evaluated in a base system that has no predetermined application and what is able to be evaluated in society. We describe specific social impact categories and how to approach and conduct evaluations in the base technical system, then in people and society. Our framework for a base system defines seven categories of social impact: bias, stereotypes, and representational harms; cultural values and sensitive content; disparate performance; privacy and data protection; financial costs; environmental costs; and data and content moderation labor costs. Suggested methods for evaluation apply to all modalities and analyses of the limitations of existing evaluations serve as a starting point for necessary investment in future evaluations. We offer five overarching categories for what is able to be evaluated in society, each with their own subcategories: trustworthiness and autonomy; inequality, marginalization, and violence; concentration of authority; labor and creativity; and ecosystem and environment. Each subcategory includes recommendations for mitigating harm. We are concurrently crafting an evaluation repository for the AI research community to contribute existing evaluations along the given categories. This version will be updated following a CRAFT session at ACM FAccT 2023.
http://arxiv.org/pdf/2306.05949
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
cs.CY, cs.AI
null
null
cs.CY
20230609
20230612
[ { "id": "2007.04068" }, { "id": "2305.09800" }, { "id": "1908.09203" }, { "id": "2202.05520" }, { "id": "2302.10329" }, { "id": "2107.03374" }, { "id": "2210.06245" }, { "id": "2211.02001" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2301.10226" }, { "id": "2202.02647" }, { "id": "2112.10752" }, { "id": "2206.04615" }, { "id": "2202.00885" }, { "id": "2010.15581" }, { "id": "2305.09941" }, { "id": "2301.04246" }, { "id": "2304.12298" }, { "id": "2203.09509" }, { "id": "2207.14157" }, { "id": "2102.09692" }, { "id": "1804.10999" }, { "id": "2303.11156" }, { "id": "2104.06390" }, { "id": "2002.05651" } ]
2306.06283
195
[Figure residue: the remainder of the biomass-properties table (ultimate and proximate analyses, HHV, H/O ratio, He density for samples CHET, CS, GP, OS, PCL) and the JSON objects generated from it, using the prompt "Generate an object with the following schema extracting the information from the provided table in html code:".]
2306.06283#195
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
Large-language models (LLMs) such as GPT-4 caught the interest of many scientists. Recent studies suggested that these models could be useful in chemistry and materials science. To explore these possibilities, we organized a hackathon. This article chronicles the projects built as part of this hackathon. Participants employed LLMs for various applications, including predicting properties of molecules and materials, designing novel interfaces for tools, extracting knowledge from unstructured data, and developing new educational applications. The diverse topics and the fact that working prototypes could be generated in less than two days highlight that LLMs will profoundly impact the future of our fields. The rich collection of ideas and projects also indicates that the applications of LLMs are not limited to materials science and chemistry but offer potential benefits to a wide range of scientific disciplines.
http://arxiv.org/pdf/2306.06283
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
null
null
cond-mat.mtrl-sci
20230609
20230714
[ { "id": "2209.08203" }, { "id": "2212.04450" } ]
2306.05949
196
[271] S. Zhang, S. Roller, N. Goyal, M. Artetxe, M. Chen, S. Chen, C. Dewan, M. Diab, X. Li, X. V. Lin, T. Mihaylov, M. Ott, S. Shleifer, K. Shuster, D. Simig, P. S. Koura, A. Sridhar, T. Wang, and L. Zettlemoyer. OPT: Open Pre-trained Transformer Language Models, June 2022. URL http://arxiv.org/abs/2205.01068. [272] J. Zhao, T. Wang, M. Yatskar, R. Cotterell, V. Ordonez, and K. Chang. Gender bias in contextualized word embeddings. CoRR, abs/1904.03310, 2019. URL http://arxiv.org/abs/1904.03310. [273] D. Zhuang, X. Zhang, S. L. Song, and S. Hooker. Randomness In Neural Network Training: Characterizing The Impact of Tooling, June 2021. URL http://arxiv.org/abs/2106.11872.
2306.05949#196
Evaluating the Social Impact of Generative AI Systems in Systems and Society
Generative AI systems across modalities, ranging from text, image, audio, and video, have broad social impacts, but there exists no official standard for means of evaluating those impacts and which impacts should be evaluated. We move toward a standard approach in evaluating a generative AI system for any modality, in two overarching categories: what is able to be evaluated in a base system that has no predetermined application and what is able to be evaluated in society. We describe specific social impact categories and how to approach and conduct evaluations in the base technical system, then in people and society. Our framework for a base system defines seven categories of social impact: bias, stereotypes, and representational harms; cultural values and sensitive content; disparate performance; privacy and data protection; financial costs; environmental costs; and data and content moderation labor costs. Suggested methods for evaluation apply to all modalities and analyses of the limitations of existing evaluations serve as a starting point for necessary investment in future evaluations. We offer five overarching categories for what is able to be evaluated in society, each with their own subcategories: trustworthiness and autonomy; inequality, marginalization, and violence; concentration of authority; labor and creativity; and ecosystem and environment. Each subcategory includes recommendations for mitigating harm. We are concurrently crafting an evaluation repository for the AI research community to contribute existing evaluations along the given categories. This version will be updated following a CRAFT session at ACM FAccT 2023.
http://arxiv.org/pdf/2306.05949
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
cs.CY, cs.AI
null
null
cs.CY
20230609
20230612
[ { "id": "2007.04068" }, { "id": "2305.09800" }, { "id": "1908.09203" }, { "id": "2202.05520" }, { "id": "2302.10329" }, { "id": "2107.03374" }, { "id": "2210.06245" }, { "id": "2211.02001" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2301.10226" }, { "id": "2202.02647" }, { "id": "2112.10752" }, { "id": "2206.04615" }, { "id": "2202.00885" }, { "id": "2010.15581" }, { "id": "2305.09941" }, { "id": "2301.04246" }, { "id": "2304.12298" }, { "id": "2203.09509" }, { "id": "2207.14157" }, { "id": "2102.09692" }, { "id": "1804.10999" }, { "id": "2303.11156" }, { "id": "2104.06390" }, { "id": "2002.05651" } ]
2306.06283
196
[Figure residue: generated JSON fragments (ultimate and proximate analysis values, HHV, H/O ratio, He density) for two biomass samples.] Figure 16. TableToJson. Structured JSON generation of tables contained in scientific articles using a prompt with a simple description of the type of input text. One example is shown for a table that contains data on properties of biomass materials [71]. As shown in Figure 15, in one of these examples, an accuracy of 94% was obtained from a table containing a few catalyst names that included the
2306.06283#196
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
Large-language models (LLMs) such as GPT-4 caught the interest of many scientists. Recent studies suggested that these models could be useful in chemistry and materials science. To explore these possibilities, we organized a hackathon. This article chronicles the projects built as part of this hackathon. Participants employed LLMs for various applications, including predicting properties of molecules and materials, designing novel interfaces for tools, extracting knowledge from unstructured data, and developing new educational applications. The diverse topics and the fact that working prototypes could be generated in less than two days highlight that LLMs will profoundly impact the future of our fields. The rich collection of ideas and projects also indicates that the applications of LLMs are not limited to materials science and chemistry but offer potential benefits to a wide range of scientific disciplines.
http://arxiv.org/pdf/2306.06283
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
null
null
cond-mat.mtrl-sci
20230609
20230714
[ { "id": "2209.08203" }, { "id": "2212.04450" } ]
2306.06283
197
materials [71]. As shown in Figure 15, in one of these examples, an accuracy of 94% was obtained from a table containing a few catalyst names that included the “-” character, and those values were erroneously generated. In another example, an accuracy of 80% was initially obtained due to errors in the generation of numbers with powers (e.g., 9.161 x 10*), which could be solved by adding an explanation in the prompt: "if you find numbers as 1.025 x 10<sup>-3</sup>, this means 1.025e-3", increasing the accuracy to 100%. Next, a table with more complex content (long molecule names, hyphens, power numbers, subscripts, and superscripts...) was selected (Figure 15), resulting in an accuracy of 46% in the JSON generation, meaning that only 46% of the schema values were correctly generated. The erroneous generation of long formula or molecule names with a mixture of letters and numbers as subscripts could be solved by increasing the value of the max_string_token_length argument of the jsonformer function to get a longer response where the end of the string can be detected more easily, which increased the accuracy to 60%. Jsonformer also showed some issues in
2306.06283#197
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
Large-language models (LLMs) such as GPT-4 caught the interest of many scientists. Recent studies suggested that these models could be useful in chemistry and materials science. To explore these possibilities, we organized a hackathon. This article chronicles the projects built as part of this hackathon. Participants employed LLMs for various applications, including predicting properties of molecules and materials, designing novel interfaces for tools, extracting knowledge from unstructured data, and developing new educational applications. The diverse topics and the fact that working prototypes could be generated in less than two days highlight that LLMs will profoundly impact the future of our fields. The rich collection of ideas and projects also indicates that the applications of LLMs are not limited to materials science and chemistry but offer potential benefits to a wide range of scientific disciplines.
http://arxiv.org/pdf/2306.06283
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
null
null
cond-mat.mtrl-sci
20230609
20230714
[ { "id": "2209.08203" }, { "id": "2212.04450" } ]
2306.06283
198
function to get a longer response where the end of the string can be detected more easily, which increased the accuracy to 60%. Jsonformer also showed some issues in this example in generating power numbers, which are represented as 10<sup>-n</sup> in the input HTML text. As mentioned above, this was solved by adding a specific explanation in the prompt, increasing the accuracy to 86%. A specific explanation was also included in the prompt to address the issues related to the presence of hyphens in the text. Still, this problem could not be solved systematically, and the resulting accuracy varied between 86% and 100% over several JSON generation attempts. In this particular case, the generated value provided by the model included Unicode text instead of the “-” character (and usually several “\” characters). An instruction to "decode Unicode characters in your response"
[Figure: structured JSON generation from the “biomass” example table (ultimate and proximate analysis data), using the prompt “Generate an object with the following schema extracting the information from the provided table in html code:”.]
was then included in the prompt. Although this solution sometimes yielded satisfactory results, it did not systematically guarantee correct output. These results indicate that the OpenAI model combined with jsonformer can provide wrong outputs when the values to be generated contain some special characters, such as the “-” character in this example. This issue requires further investigation to be improved.
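The power-number issue described above can also be handled deterministically in post-processing, rather than relying only on prompt instructions. The snippet below is a minimal illustrative sketch (not part of the TableToJson code; `normalize_power_number` is a hypothetical helper) that converts HTML-style power notation such as `9.161 x 10<sup>-4</sup>` into a plain float:

```python
import re

# Matches HTML-style power notation such as "9.161 x 10<sup>-4</sup>".
# Illustrative sketch only; this helper is hypothetical, not TableToJson code.
_POWER = re.compile(
    r"(?P<mantissa>[-+]?\d+(?:\.\d+)?)\s*[x×]\s*10<sup>(?P<exp>[-+−]?\d+)</sup>"
)

def normalize_power_number(text: str) -> float:
    """Convert '<m> x 10<sup><n></sup>' notation to a plain float."""
    match = _POWER.search(text)
    if match is None:
        raise ValueError(f"no power notation found in {text!r}")
    exp = match.group("exp").replace("−", "-")  # Unicode minus -> ASCII hyphen
    return float(match.group("mantissa")) * 10 ** int(exp)

print(normalize_power_number("9.161 x 10<sup>-4</sup>"))  # ~9.161e-04
```

Such a normalizer could be applied to cell contents before (or after) the constrained generation step, sidestepping the model's difficulty with superscript exponents.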
Figure 17. TableToJson. Structured JSON generation of a table contained in a scientific article using a standard prompt and a prompt with a few simple examples of the special numeric notation found in some of the cells of the input table [72].

Lastly, for one of the examples, a test was performed by providing a wrong schema to the model (Figure 15). In this
case, as expected, jsonformer inserted the values contained in the table into the given wrong schema in a more or less ordered fashion, generating an invalid output. However, the OpenAI model created a new schema according to the table structure and headers, providing a valid result and confirming its freedom to decide what may be wrong with the user’s query. An example of these results is shown in Figure 18. The two approaches used in this work showed good performance in the generation of JSON format when the data contained in the table are regular strings or numbers, with an accuracy of 100% in most of the examples. The results of this work show that, although the OpenAI text-davinci-003 model is able to easily extract structured information from tables and give a valid JSON output, this approach cannot guarantee that the outputs will always follow a specific schema. On the other hand, although jsonformer may present problems when special characters need to be generated, some of these issues have been solved with careful prompting, and others could probably be solved with further research. It can be concluded that jsonformer can be a
powerful tool for the generation of structured data from unstructured information in most tables, ensuring the generation of valid JSON syntax as the output of LLMs that always complies with the provided schema. The use of jsonformer could facilitate and promote the creation of databases and datasets for numerous topics within the field of chemistry, especially in experimental domains, where the availability of structured data is very scarce.

One sentence summaries

a. Problem/Task Extracting structured data in a JSON-schema-compliant form from HTML tables.
Figure 18. TableToJson. Results of the structured JSON generation of a table after providing the model with a wrong schema. The output generated using the OpenAI model together with jsonformer is shown on the left (values in the table are inserted in the provided wrong schema), while the output generated using the OpenAI model directly is shown on the right (a new schema is created following
the table content).

b. Approach Two approaches were compared: direct prompting of OpenAI’s text-davinci-003 model with the input table and the JSON schema, and the Jsonformer approach, which only samples from a subset of tokens in field-wise generation steps.

c. Results and Impact Both approaches can extract data in a schema-compliant form from tables with high success rates. Due to hard-coded decoding rules, Jsonformer failed in some cases.

d. Challenges and Future Work While the Jsonformer approach can guarantee valid syntax, it can fail in cases that were not considered in the development of the decoding rules. Hence, future work is needed to increase the general applicability of constrained decoding strategies.
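Because direct prompting cannot guarantee schema compliance, outputs from that approach can at least be checked after the fact. The sketch below is illustrative only (`matches_schema` and the mini-schema are hypothetical, not the project's code); it recursively verifies that a parsed JSON object follows a JSON-schema-like specification and flags a value of the wrong type:

```python
import json

def matches_schema(value, schema) -> bool:
    """Minimal illustrative check that `value` follows a JSON-schema-like
    dict using 'type', 'properties', and 'items'. Not a full validator."""
    t = schema.get("type")
    if t == "object":
        if not isinstance(value, dict):
            return False
        return all(
            key in value and matches_schema(value[key], sub)
            for key, sub in schema.get("properties", {}).items()
        )
    if t == "array":
        return isinstance(value, list) and all(
            matches_schema(item, schema["items"]) for item in value
        )
    if t == "string":
        return isinstance(value, str)
    if t == "number":
        return isinstance(value, (int, float)) and not isinstance(value, bool)
    return True  # types not covered by this sketch are accepted

# Hypothetical mini-schema in the spirit of the "MOFs properties" example
schema = {
    "type": "object",
    "properties": {
        "MOFs": {"type": "string"},
        "BET (m2/g)": {"type": "number"},
    },
}

good = json.loads('{"MOFs": "HKUST-1", "BET (m2/g)": 1850}')
bad = json.loads('{"MOFs": "HKUST-1", "BET (m2/g)": "1850"}')
print(matches_schema(good, schema), matches_schema(bad, schema))  # True False
```

A check of this kind separates the two failure modes discussed above: the free-form model produces fluent but possibly non-compliant JSON, whereas constrained decoding enforces the schema by construction.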
# D. AbstractToTitle & TitleToAbstract: text summarization and text generation

# 1. Problem

Text summarization and text generation are some of the most common tasks in natural language processing (NLP). It is often tricky to obtain well-defined and curated datasets for these tasks. Also, evaluating the performance of an NLP model is challenging because there is no unique way to summarize and generate text. Luckily, many manuscripts for chemistry and materials science are publicly available on open-access platforms such as arXiv and PubChem. These datasets can be used along with LLMs to solve problems such as: 1) given the title of an article, generate an abstract; 2) given an abstract, generate a title, which should be similar to the one available in the literature. Such models can greatly help in technical writing, especially with a human expert in the loop.

# 2. Solution

The above challenging tasks require an organized infrastructure for curated data and tools. JARVIS-ChemNLP [75] is an integrated library for materials chemistry text data for NLP tasks such as text classification, token classification, abstractive summarization, text generation, and integration with existing DFT databases. ChemNLP uses transformer models to generate titles from abstracts and vice versa for arXiv condensed-matter category articles.
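Since there is no unique correct title for a given abstract, evaluation typically falls back on token-overlap scores such as ROUGE, which compare a generated title against the one published in the literature. A minimal set-based ROUGE-1 F1 sketch (illustrative only; not the ChemNLP implementation) is:

```python
def rouge1_f1(candidate: str, reference: str) -> float:
    """Unigram-overlap F1 between a generated and a reference title.
    Simplified sketch: whitespace tokens, no stemming or stopword removal."""
    cand = set(candidate.lower().split())
    ref = set(reference.lower().split())
    if not cand or not ref:
        return 0.0
    overlap = len(cand & ref)
    if overlap == 0:
        return 0.0
    precision = overlap / len(cand)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

print(rouge1_f1("llms for materials chemistry",
                "large language models for materials chemistry"))  # 0.6
```

Overlap metrics like this are only a proxy for quality, which is why a human expert in the loop remains important for judging generated titles and abstracts.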