| Column | Dtype | Range / values |
| --- | --- | --- |
| bibtex_url | null | |
| proceedings | stringlengths | 42–42 |
| bibtext | stringlengths | 197–848 |
| abstract | stringlengths | 303–3.45k |
| title | stringlengths | 10–159 |
| authors | sequencelengths | 1–34 |
| id | stringclasses | 44 values |
| arxiv_id | stringlengths | 0–10 |
| GitHub | sequencelengths | 1–1 |
| paper_page | stringclasses | 899 values |
| n_linked_authors | int64 | -1 to 13 |
| upvotes | int64 | -1 to 109 |
| num_comments | int64 | -1 to 13 |
| n_authors | int64 | -1 to 92 |
| Models | sequencelengths | 0–100 |
| Datasets | sequencelengths | 0–19 |
| Spaces | sequencelengths | 0–100 |
| old_Models | sequencelengths | 0–100 |
| old_Datasets | sequencelengths | 0–19 |
| old_Spaces | sequencelengths | 0–100 |
| paper_page_exists_pre_conf | int64 | 0 to 1 |
| type | stringclasses | 2 values |
null
https://openreview.net/forum?id=QyRganPqPz
@inproceedings{ tian2023using, title={Using Chain-of-Thought Prompting for Interpretable Recognition of Social Bias}, author={Jacob-Junqi Tian and Omkar Dige and D. Emerson and Faiza Khattak}, booktitle={Socially Responsible Language Modelling Research}, year={2023}, url={https://openreview.net/forum?id=QyRganPqPz} }
Given that language models are trained on vast datasets that may contain inherent biases, there is a potential danger of inadvertently perpetuating systemic discrimination. Consequently, it becomes essential to examine and address biases in language models, integrating fairness into their development to ensure that these models are equitable and free of bias. In this work, we demonstrate the importance of reasoning in zero-shot stereotype identification based on Vicuna-13B & -33B and LLaMA-2-chat-13B & -70B. Although we observe improved accuracy by scaling from 13B to larger models, we show that the performance gain from reasoning significantly exceeds the gain from scaling up. Our findings suggest that reasoning is a key factor that enables LLMs to transcend the scaling law on out-of-domain tasks such as stereotype identification. Additionally, through a qualitative analysis of select reasoning traces, we highlight how reasoning improves not just accuracy, but also the interpretability of the decision.
Using Chain-of-Thought Prompting for Interpretable Recognition of Social Bias
[ "Jacob-Junqi Tian", "Omkar Dige", "D. Emerson", "Faiza Khattak" ]
Workshop/SoLaR
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
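To make the zero-shot chain-of-thought setup from the abstract above concrete, here is a minimal sketch in Python. The prompt wording, the `generate` callable, and the yes/no parsing rule are illustrative assumptions, not the authors' exact protocol.

```python
# Illustrative zero-shot CoT stereotype classifier; works with any
# text-generation callable (e.g., a wrapper around Vicuna or LLaMA-2-chat).

COT_TEMPLATE = """Consider the following statement:

"{statement}"

Question: Does this statement express or reinforce a social stereotype?
Let's think step by step, then answer "yes" or "no" on the final line."""

def classify_stereotype(statement: str, generate) -> tuple[str, str]:
    """Return (reasoning_trace, label); the trace doubles as a rationale."""
    trace = generate(COT_TEMPLATE.format(statement=statement))
    label = "yes" if "yes" in trace.strip().splitlines()[-1].lower() else "no"
    return trace, label
```

Keeping the reasoning trace alongside the label is what the abstract credits for the improved interpretability of the decision.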
null
https://openreview.net/forum?id=QbXf5BqyXp
@inproceedings{ chan2023hazards, title={Hazards from Increasingly Accessible Fine-Tuning of Downloadable Foundation Models}, author={Alan Chan and Benjamin Bucknall and Herbie Bradley and David Krueger}, booktitle={Socially Responsible Language Modelling Research}, year={2023}, url={https://openreview.net/forum?id=QbXf5BqyXp} }
Public release of the weights of pretrained foundation models, otherwise known as downloadable access \citep{solaiman_gradient_2023}, enables fine-tuning without the prohibitive expense of pretraining. Our work argues that increasingly accessible fine-tuning of downloadable models may increase hazards. First, we highlight research to improve the accessibility of fine-tuning. We split our discussion into research that A) reduces the computational cost of fine-tuning and B) improves the ability to share that cost across more actors. Second, we argue that increasingly accessible fine-tuning methods may increase hazards by facilitating malicious use and by making oversight of models with potentially dangerous capabilities more difficult. Third, we discuss potential mitigatory measures, as well as benefits of more accessible fine-tuning. Given substantial remaining uncertainty about hazards, we conclude by emphasizing the urgent need for the development of mitigations.
Hazards from Increasingly Accessible Fine-Tuning of Downloadable Foundation Models
[ "Alan Chan", "Benjamin Bucknall", "Herbie Bradley", "David Krueger" ]
Workshop/SoLaR
2312.14751
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=QSPHfgw5fp
@inproceedings{ diaz2023developing, title={Developing A Conceptual Framework for Analyzing People in Unstructured Data}, author={Mark Diaz and Sunipa Dev and Emily Reif and Emily Denton and Vinodkumar Prabhakaran}, booktitle={Socially Responsible Language Modelling Research}, year={2023}, url={https://openreview.net/forum?id=QSPHfgw5fp} }
The unstructured data used in foundation model development poses a challenge for the systematic analyses that inform data use and documentation decisions. From a Responsible AI perspective, these decisions often rely upon understanding how people are represented in data. We propose a framework to guide analysis of human representation in unstructured data and to identify downstream risks.
Developing A Conceptual Framework for Analyzing People in Unstructured Data
[ "Mark Diaz", "Sunipa Dev", "Emily Reif", "Emily Denton", "Vinodkumar Prabhakaran" ]
Workshop/SoLaR
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=PjSx6xAUOu
@inproceedings{ zhao2023breaking, title={Breaking Physical and Linguistic Borders: Privacy-Preserving Multilingual Prompt Tuning for Low-Resource Languages}, author={Wanru Zhao and Yihong Chen}, booktitle={Socially Responsible Language Modelling Research}, year={2023}, url={https://openreview.net/forum?id=PjSx6xAUOu} }
Pretrained large language models (LLMs) have emerged as a cornerstone in modern natural language processing, with their utility expanding to various applications and languages. However, the fine-tuning of multilingual LLMs, particularly for low-resource languages, is fraught with challenges stemming from data-sharing restrictions (the physical border) and from the inherent linguistic differences (the linguistic border). These barriers hinder users of various languages, especially those in low-resource regions, from fully benefiting from the advantages of LLMs. To address these challenges, we propose the Federated Prompt Tuning Paradigm for multilingual scenarios, which utilizes parameter-efficient fine-tuning while adhering to privacy restrictions. We have designed a comprehensive set of experiments and analyzed them using a novel notion of language distance to underscore the strengths of this paradigm: Even under computational constraints, our method not only bolsters data efficiency but also facilitates mutual enhancements across languages, particularly benefiting low-resource ones. Compared to traditional local cross-lingual transfer tuning methods, our approach achieves 6.9\% higher accuracy, reduces the training parameters by over 99\%, and demonstrates stronger cross-lingual generalization. Such findings underscore the potential of our approach to promote social equality, ensure user privacy, and champion linguistic diversity.
Breaking Physical and Linguistic Borders: Privacy-Preserving Multilingual Prompt Tuning for Low-Resource Languages
[ "Wanru Zhao", "Yihong Chen" ]
Workshop/SoLaR
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
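As a rough illustration of the federated prompt-tuning paradigm described in the abstract above, the sketch below tunes only a small soft-prompt tensor per language client and aggregates with federated averaging. The shapes, the dummy loss, and the `local_step` helper are assumptions for illustration; the frozen LLM through which the real task loss would flow is elided.

```python
# Federated soft-prompt tuning sketch: each client updates only its prompt
# embeddings; the server averages them (FedAvg) each communication round.
import torch

n_clients, prompt_len, d_model = 4, 16, 768
prompts = [torch.randn(prompt_len, d_model, requires_grad=True)
           for _ in range(n_clients)]

def local_step(prompt, loss_fn, lr=1e-3):
    """One local update: only the prompt is trained; the LLM stays frozen."""
    loss = loss_fn(prompt)                 # client task loss through frozen LLM
    grad, = torch.autograd.grad(loss, prompt)
    return (prompt - lr * grad).detach().requires_grad_(True)

for rnd in range(10):                      # communication rounds
    prompts = [local_step(p, lambda q: q.pow(2).mean()) for p in prompts]  # dummy loss
    global_prompt = torch.stack([p.detach() for p in prompts]).mean(0)     # FedAvg
    prompts = [global_prompt.clone().requires_grad_(True) for _ in range(n_clients)]
```

Only the prompt tensors cross the network, which is how such a scheme can cut communicated parameters by orders of magnitude while keeping raw data on-device.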
null
https://openreview.net/forum?id=OOetzM2riA
@inproceedings{ deng2023measuring, title={Measuring Feature Sparsity in Language Models}, author={Mingyang Deng and Lucas Tao and Joe Benton}, booktitle={Socially Responsible Language Modelling Research}, year={2023}, url={https://openreview.net/forum?id=OOetzM2riA} }
Recent works have proposed that activations in language models can be modelled as sparse linear combinations of vectors corresponding to features of input text. Under this assumption, these works aimed to reconstruct feature directions using sparse coding. We develop metrics to assess the success of these sparse coding techniques and test the validity of the linearity and sparsity assumptions. We show our metrics can predict the level of sparsity on synthetic sparse linear activations, and can distinguish between sparse linear data and several other distributions. We use our metrics to measure levels of sparsity in several language models. We find evidence that language model activations can be accurately modelled by sparse linear combinations of features, significantly more so than control datasets. We also show that model activations appear to be sparsest in the first and final layers.
Measuring Feature Sparsity in Language Models
[ "Mingyang Deng", "Lucas Tao", "Joe Benton" ]
Workshop/SoLaR
2310.07837
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
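A small self-contained analogue of the synthetic test the abstract above describes: construct activations that are, by construction, sparse linear combinations of feature directions, recover a dictionary by sparse coding, and report reconstruction error at a fixed nonzero budget. The specific metric here is an illustrative stand-in for the paper's metrics.

```python
# Synthetic sparse-linear activations + sparse coding recovery check.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
n_samples, d_act, n_feats, k = 500, 64, 128, 4

features = rng.normal(size=(n_feats, d_act))       # ground-truth feature dirs
codes = np.zeros((n_samples, n_feats))
for row in codes:                                  # exactly k active features/sample
    row[rng.choice(n_feats, k, replace=False)] = rng.normal(size=k)
X = codes @ features                               # sparse linear activations

dl = DictionaryLearning(n_components=n_feats, transform_algorithm="omp",
                        transform_n_nonzero_coefs=k, max_iter=20, random_state=0)
recovered = dl.fit_transform(X)
err = np.linalg.norm(recovered @ dl.components_ - X) / np.linalg.norm(X)
print(f"relative reconstruction error at {k} nonzeros: {err:.3f}")
```

Low error at a small nonzero budget on data known to be sparse-linear, versus higher error on control distributions, is the kind of contrast the paper's metrics are built to detect.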
null
https://openreview.net/forum?id=OFqYErcNDO
@inproceedings{ wang2023beyond, title={Beyond Reverse {KL}: Generalizing Direct Preference Optimization with Diverse Divergence Constraints}, author={Chaoqi Wang and Yibo Jiang and Chenghao Yang and Han Liu and Yuxin Chen}, booktitle={Socially Responsible Language Modelling Research}, year={2023}, url={https://openreview.net/forum?id=OFqYErcNDO} }
The increasing capabilities of large language models (LLMs) raise opportunities for artificial general intelligence but concurrently amplify safety concerns, such as potential misuse of AI systems, necessitating effective AI alignment. Reinforcement Learning from Human Feedback (RLHF) has emerged as a promising pathway towards AI alignment but brings forth challenges due to its complexity and dependence on a separate reward model. Direct Preference Optimization (DPO) has been proposed as an alternative, and it remains equivalent to RLHF under the reverse KL regularization constraint. This paper presents $f$-DPO, a generalized approach to DPO that incorporates diverse divergence constraints. We show that under certain $f$-divergences, including Jensen-Shannon divergence, forward KL divergences and $\alpha$-divergences, the complex relationship between the reward and optimal policy can also be simplified by addressing the Karush–Kuhn–Tucker conditions. This eliminates the need for estimating the normalizing constant in the Bradley-Terry model and enables a tractable mapping between the reward function and the optimal policy. Our approach optimizes LLMs to align with human preferences in a more efficient and supervised manner under a broad set of divergence constraints. Empirically, adopting these divergences ensures a balance between alignment performance and generation diversity. Importantly, our $f$-DPO outperforms PPO-based methods in divergence efficiency, and divergence constraints directly influence expected calibration error (ECE).
Beyond Reverse KL: Generalizing Direct Preference Optimization with Diverse Divergence Constraints
[ "Chaoqi Wang", "Yibo Jiang", "Chenghao Yang", "Han Liu", "Yuxin Chen" ]
Workshop/SoLaR
2309.16240
[ "" ]
https://huggingface.co/papers/2309.16240
2
0
0
5
[]
[]
[]
[]
[]
[]
1
poster
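A hedged sketch of the loss family the abstract above describes, under my reading that the KKT analysis yields an implicit reward of the form beta * f'(pi/pi_ref): swapping the divergence then only changes the derivative applied to the log-ratios, and reverse KL recovers standard DPO. Treat this as an unverified reading of the paper, not its reference implementation.

```python
# f-DPO-style preference loss sketch (PyTorch).
import torch
import torch.nn.functional as F

def f_dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1, fprime=None):
    """logp_*: policy log-probs of chosen (w) / rejected (l) responses;
    ref_logp_*: frozen reference model's log-probs; fprime acts on log-ratios."""
    if fprime is None:
        # Reverse KL: f(u) = u log u, so f'(u) = log u + 1 -> standard DPO
        fprime = lambda log_ratio: log_ratio + 1.0
    margin = fprime(logp_w - ref_logp_w) - fprime(logp_l - ref_logp_l)
    return -F.logsigmoid(beta * margin).mean()

# Other divergences (Jensen-Shannon, forward KL, alpha) would plug in a
# different fprime without touching the rest of the training loop.
loss = f_dpo_loss(torch.tensor([-1.0]), torch.tensor([-2.0]),
                  torch.tensor([-1.5]), torch.tensor([-1.5]))
```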
null
https://openreview.net/forum?id=LGjqs5ILdN
@inproceedings{ fr{\"a}nken2023social, title={Social Contract {AI}: Aligning {AI} Assistants with Implicit Group Norms}, author={Jan-Philipp Fr{\"a}nken and Samuel Kwok and Peixuan Ye and Kanishk Gandhi and Dilip Arumugam and Jared Moore and Alex Tamkin and Tobias Gerstenberg and Noah Goodman}, booktitle={Socially Responsible Language Modelling Research}, year={2023}, url={https://openreview.net/forum?id=LGjqs5ILdN} }
We explore the idea of aligning an AI assistant by inverting a model of users' (unknown) preferences from observed interactions. To validate our proposal, we run proof-of-concept simulations in the economic ultimatum game, formalizing user preferences as policies that guide the actions of simulated players. We find that the AI assistant accurately aligns its behavior to match standard policies from the economic literature (e.g., selfish, altruistic). However, the assistant’s learned policies lack robustness and exhibit limited generalization in an out-of-distribution setting when confronted with a currency (e.g., grams of medicine) that was not included in the assistant's training distribution. Additionally, we find that when there is inconsistency in the relationship between language use and an unknown policy (e.g., an altruistic policy combined with rude language), the assistant's learning of the policy is slowed. Overall, our preliminary results suggest that developing simulation frameworks in which AI assistants need to infer preferences from diverse users can provide a valuable approach for studying practical alignment questions.
Social Contract AI: Aligning AI Assistants with Implicit Group Norms
[ "Jan-Philipp Fränken", "Samuel Kwok", "Peixuan Ye", "Kanishk Gandhi", "Dilip Arumugam", "Jared Moore", "Alex Tamkin", "Tobias Gerstenberg", "Noah Goodman" ]
Workshop/SoLaR
2310.17769
[ "https://github.com/janphilippfranken/scai" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=L2ZIcu5fxS
@inproceedings{ fluri2023evaluating, title={Evaluating Superhuman Models with Consistency Checks}, author={Lukas Fluri and Daniel Paleka and Florian Tram{\`e}r}, booktitle={Socially Responsible Language Modelling Research}, year={2023}, url={https://openreview.net/forum?id=L2ZIcu5fxS} }
If machine learning models were to achieve superhuman abilities at various reasoning or decision-making tasks, how would we go about evaluating such models, given that humans would necessarily be poor proxies for ground truth? In this paper, we propose a framework for evaluating superhuman models via consistency checks. Our premise is that while the correctness of superhuman decisions may be impossible to evaluate, we can still surface mistakes if the model's decisions fail to satisfy certain logical, human-interpretable rules. We investigate two tasks where correctness of decisions is hard to verify, due either to superhuman model abilities or to otherwise missing ground truth: evaluating chess positions and forecasting future events. Regardless of a model's (possibly superhuman) performance on these tasks, we can discover logical inconsistencies in decision-making: a chess engine assigning opposing valuations to semantically identical boards, or GPT-4 forecasting that sports records will evolve non-monotonically over time.
Evaluating Superhuman Models with Consistency Checks
[ "Lukas Fluri", "Daniel Paleka", "Florian Tramèr" ]
Workshop/SoLaR
2306.09983
[ "https://github.com/ethz-spylab/superhuman-ai-consistency" ]
https://huggingface.co/papers/2306.09983
0
0
0
3
[]
[]
[ "latticeflow/compl-ai-board" ]
[]
[]
[ "latticeflow/compl-ai-board" ]
1
oral
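In the spirit of the forecasting example in the abstract above, here is a tiny consistency check that needs no ground truth: forecasts of a cumulative quantity must be non-decreasing in time. The `ask_model` callable and the prompt wording are placeholders; any forecaster that returns a number can be plugged in.

```python
# Monotonicity consistency check: no ground truth needed, only logic.
def monotonicity_violations(ask_model, years=range(2025, 2031)):
    """Flag adjacent-year pairs where a cumulative-record forecast decreases.

    ask_model(prompt) must return a number, e.g. a predicted count of
    100m world records set by the given year (cumulative, so monotone).
    """
    preds = {y: ask_model(f"How many 100m world records will have been set by {y}?")
             for y in years}
    return [(a, b) for a, b in zip(years, years[1:]) if preds[b] < preds[a]]
```

Any returned pair is a logical inconsistency regardless of how capable the forecaster is, which is exactly the leverage the framework exploits.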
null
https://openreview.net/forum?id=Jct5Lup1DJ
@inproceedings{ naihin2023testing, title={Testing Language Model Agents Safely in the Wild}, author={Silen Naihin and David Atkinson and Marc Green and Merwane Hamadi and Craig Swift and Douglas Schonholtz and Adam Tauman Kalai and David Bau}, booktitle={Socially Responsible Language Modelling Research}, year={2023}, url={https://openreview.net/forum?id=Jct5Lup1DJ} }
A prerequisite for safe autonomy-in-the-wild is safe testing-in-the-wild. Yet real-world autonomous tests face several unique safety challenges, both due to the possibility of causing harm during a test and due to the risk of encountering new unsafe agent behavior through interactions with real-world and potentially malicious actors. We propose a framework for conducting safe autonomous agent tests on the open internet: agent actions are audited by a context-sensitive monitor that enforces a stringent safety boundary to stop an unsafe test, with suspect behavior ranked and logged to be examined by humans. We design a basic safety monitor that is flexible enough to monitor existing LLM agents, and, using an adversarial simulated agent, we measure its ability to identify and stop unsafe situations. We then apply the safety monitor to a battery of real-world tests of AutoGPT, and we identify several limitations and challenges that will face the creation of safe in-the-wild tests as autonomous agents grow more capable.
Testing Language Model Agents Safely in the Wild
[ "Silen Naihin", "David Atkinson", "Marc Green", "Merwane Hamadi", "Craig Swift", "Douglas Schonholtz", "Adam Tauman Kalai", "David Bau" ]
Workshop/SoLaR
2311.10538
[ "" ]
https://huggingface.co/papers/2311.10538
8
9
0
8
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=Inj5PhZfRn
@inproceedings{ choi2023komultitext, title={KoMultiText: Large-Scale Korean Text Dataset for Classifying Biased Speech in Real-World Online Services}, author={Dasol Choi and Jooyoung Song and Eunsun Lee and Seo Jin woo and HeeJune Park and Dongbin Na}, booktitle={Socially Responsible Language Modelling Research}, year={2023}, url={https://openreview.net/forum?id=Inj5PhZfRn} }
With the growth of online services, the need for advanced text classification algorithms, such as sentiment analysis and biased text detection, has become increasingly evident. The anonymous nature of online services often leads to the presence of biased and harmful language, posing challenges to maintaining the health of online communities. This phenomenon is especially relevant in South Korea, where large-scale hate speech detection algorithms have not yet been broadly explored. In this paper, we introduce "KoMultiText", a new comprehensive, large-scale dataset collected from a well-known South Korean SNS platform. Our proposed dataset provides annotations including (1) Preferences, (2) Profanities, and (3) Nine types of Bias for the text samples, enabling multi-task learning for simultaneous classification of user-generated texts. Leveraging state-of-the-art BERT-based language models, our approach surpasses human-level accuracy across diverse classification tasks, as measured by various metrics. Beyond academic contributions, our work can provide practical solutions for real-world hate speech and bias mitigation, contributing directly to the improvement of online community health. Our work provides a robust foundation for future research aiming to improve the quality of online discourse and foster societal well-being. All source codes and datasets are publicly accessible at https://github.com/Dasol-Choi/KoMultiText.
KoMultiText: Large-Scale Korean Text Dataset for Classifying Biased Speech in Real-World Online Services
[ "Dasol Choi", "Jooyoung Song", "Eunsun Lee", "Seo Jin woo", "HeeJune Park", "Dongbin Na" ]
Workshop/SoLaR
2310.04313
[ "https://github.com/dasol-choi/komultitext" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=HoIEKQhiRs
@inproceedings{ gruetzemacher2023an, title={An International Consortium for {AI} Risk Evaluations}, author={Ross Gruetzemacher and Alan Chan and {\v{S}}t{\v{e}}p{\'a}n Los and Kevin Frazier and Sim{\'e}on Campos and Matija Franklin and James Fox and Jose Hernandez-Orallo and Christin Manning and Philip Tomei and Kyle Kilian}, booktitle={Socially Responsible Language Modelling Research}, year={2023}, url={https://openreview.net/forum?id=HoIEKQhiRs} }
Given rapid progress in AI and potential risks from next-generation frontier AI systems, the urgency to create and implement AI governance and regulatory schemes is apparent. A regulatory gap has permitted labs to conduct research, development, and deployment with minimal oversight or guidance. In response, frontier AI evaluations have been proposed as a way of assessing risks from the development and deployment of frontier AI systems. Yet, the budding AI risk evaluation ecosystem faces significant present and future coordination challenges, such as a limited diversity of evaluators, suboptimal allocation of effort, and races to the bottom. As a solution, this paper proposes an international consortium for AI risk evaluations, comprising both AI developers and third-party AI risk evaluators. Such a consortium could play a critical role in international efforts to mitigate societal-scale risks from advanced AI. In this paper, we discuss the current evaluation ecosystem and its problems, introduce the proposed consortium, review existing organizations performing similar functions in other domains, and, finally, we recommend concrete steps toward establishing the proposed consortium.
An International Consortium for AI Risk Evaluations
[ "Ross Gruetzemacher", "Alan Chan", "Štěpán Los", "Kevin Frazier", "Siméon Campos", "Matija Franklin", "James Fox", "Jose Hernandez-Orallo", "Christin Manning", "Philip Tomei", "Kyle Kilian" ]
Workshop/SoLaR
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=HiMzlsYGZ8
@inproceedings{ huang2023citation, title={Citation: A Key to Building Responsible and Accountable Large Language Models}, author={Jie Huang and Kevin Chang}, booktitle={Socially Responsible Language Modelling Research}, year={2023}, url={https://openreview.net/forum?id=HiMzlsYGZ8} }
Large Language Models (LLMs) bring transformative benefits alongside unique challenges, including intellectual property (IP) and ethical concerns. This position paper explores a novel angle to mitigate these risks, drawing parallels between LLMs and established web systems. We identify "citation" as a crucial yet missing component in LLMs, which could enhance content transparency and verifiability while addressing IP and ethical dilemmas. We further propose that a comprehensive citation mechanism for LLMs should account for both non-parametric and parametric content. Despite the complexity of implementing such a mechanism, along with the inherent potential pitfalls, we advocate for its development. Building on this foundation, we outline several research problems in this area, aiming to guide future explorations towards building more responsible and accountable LLMs.
Citation: A Key to Building Responsible and Accountable Large Language Models
[ "Jie Huang", "Kevin Chang" ]
Workshop/SoLaR
2307.02185
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=Fc2FaS9mYJ
@inproceedings{ huang2023towards, title={Towards Optimal Statistical Watermarking}, author={Baihe Huang and Banghua Zhu and Hanlin Zhu and Jason Lee and Jiantao Jiao and Michael Jordan}, booktitle={Socially Responsible Language Modelling Research}, year={2023}, url={https://openreview.net/forum?id=Fc2FaS9mYJ} }
We study statistical watermarking by formulating it as a hypothesis testing problem, a general framework which subsumes all previous statistical watermarking methods. Key to our formulation is a coupling of the output tokens and the rejection region, realized by pseudo-random generators in practice, that allows a non-trivial trade-off between the Type I error and Type II error. We characterize the Uniformly Most Powerful (UMP) watermark in this context. In the most common scenario where the output is a sequence of $n$ tokens, we establish matching upper and lower bounds on the number of i.i.d. tokens required to guarantee small Type I and Type II errors. Our rate scales as $\Theta(h^{-1} \log (1/h))$ with respect to the average entropy per token $h$ and thus greatly improves the $O(h^{-2})$ rate in previous works. For scenarios where the detector lacks knowledge of the model's distribution, we introduce the concept of model-agnostic watermarking and establish the minimax bounds for the resultant increase in Type II error. Moreover, we formulate the robust watermarking problem, where the user is allowed to perform a class of perturbations on the generated texts, and characterize the optimal Type II error of robust UMP tests via a linear programming problem. To the best of our knowledge, this is the first systematic statistical treatment of the watermarking problem with near-optimal rates in the i.i.d. setting, and it might be of interest for future works.
Towards Optimal Statistical Watermarking
[ "Baihe Huang", "Banghua Zhu", "Hanlin Zhu", "Jason Lee", "Jiantao Jiao", "Michael Jordan" ]
Workshop/SoLaR
2312.07930
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
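Restating the hypothesis-testing framing of the abstract above in symbols (only quantities named in the abstract are used; the notation is mine):

```latex
% Watermark detection as a hypothesis test.
%   H0: the text was generated without the watermark key.
%   H1: the text was generated with it.
\[
\alpha = \Pr_{H_0}[\text{detector fires}] \quad \text{(Type I error)},
\qquad
\beta = \Pr_{H_1}[\text{detector silent}] \quad \text{(Type II error)}.
\]
% For i.i.d. tokens with average per-token entropy h, the abstract's matching
% upper and lower bounds give the token budget
\[
n = \Theta\!\bigl(h^{-1}\log(1/h)\bigr),
\]
% improving on the O(h^{-2}) rate of prior work.
```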
null
https://openreview.net/forum?id=FMINWxrHOJ
@inproceedings{ mukobi2023superhf, title={Super{HF}: Supervised Iterative Learning from Human Feedback}, author={Gabriel Mukobi and Peter Chatain and Su Fong and Robert Windesheim and Gitta Kutyniok and Kush Bhatia and Silas Alberti}, booktitle={Socially Responsible Language Modelling Research}, year={2023}, url={https://openreview.net/forum?id=FMINWxrHOJ} }
While large language models demonstrate remarkable capabilities, they often present challenges in terms of safety, alignment with human values, and stability during training. Here, we focus on two prevalent methods used to align these models, Supervised Fine-Tuning (SFT) and Reinforcement Learning from Human Feedback (RLHF). SFT is simple and robust, powering a host of open-source models, while RLHF is a more sophisticated method used in top-tier models like ChatGPT but one that also suffers from instability and susceptibility to reward hacking. We propose a novel approach, Supervised Iterative Learning from Human Feedback (SuperHF), which seeks to leverage the strengths of both methods. Our hypothesis is two-fold: we posit that the reward model used in RLHF is critical for efficient data use and model generalization, and that the use of Proximal Policy Optimization (PPO) in RLHF may not be necessary and could contribute to instability issues. SuperHF replaces PPO with a simple supervised loss and a Kullback-Leibler (KL) divergence prior. It creates its own training data by repeatedly sampling a batch of model outputs and filtering them through the reward model in an online learning regime. We then break down the reward optimization problem into three components: robustly optimizing the training rewards themselves, preventing reward hacking (exploitation of the reward model that can degrade model performance), as measured by a novel METEOR similarity metric, and maintaining good performance on downstream evaluations. Our experimental results show that SuperHF exceeds PPO-based RLHF on the training objective, easily and favorably trades off high reward against low reward hacking, improves downstream calibration, and performs comparably on our GPT-4-based qualitative evaluation scheme, all while being significantly simpler to implement, highlighting SuperHF's potential as a competitive language model alignment technique.
SuperHF: Supervised Iterative Learning from Human Feedback
[ "Gabriel Mukobi", "Peter Chatain", "Su Fong", "Robert Windesheim", "Gitta Kutyniok", "Kush Bhatia", "Silas Alberti" ]
Workshop/SoLaR
2310.16763
[ "https://github.com/openfeedback/superhf" ]
https://huggingface.co/papers/2310.16763
1
1
0
7
[]
[]
[]
[]
[]
[]
1
poster
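A schematic of one SuperHF iteration as the abstract above describes it: sample completions, filter the batch through the reward model, then apply a supervised loss with a KL prior to the base model. The top-k filtering rule and all callables here are illustrative assumptions, not the authors' exact code.

```python
# One SuperHF-style online iteration: sample -> reward-filter -> supervised update.
import torch

def superhf_step(policy, base, reward_model, prompts,
                 sample, nll, kl_to_base, top_k=4, kl_coef=0.1):
    completions = [sample(policy, p) for p in prompts]            # model's own data
    scores = torch.tensor([reward_model(p, c)
                           for p, c in zip(prompts, completions)])
    keep = scores.topk(min(top_k, len(prompts))).indices.tolist() # reward-model filter
    batch = [(prompts[i], completions[i]) for i in keep]
    sup = sum(nll(policy, p, c) for p, c in batch) / len(batch)   # supervised loss
    return sup + kl_coef * kl_to_base(policy, base, batch)        # + KL prior
    # caller backpropagates the returned loss and steps the optimizer
```

The point of the design is that the reward model still steers training, but the unstable PPO machinery is replaced by a plain filtered-supervised objective.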
null
https://openreview.net/forum?id=FKwtKzglFb
@inproceedings{ yu2023training, title={Training Private and Efficient Language Models with Synthetic Data from {LLM}s}, author={Da Yu and Arturs Backurs and Sivakanth Gopi and Huseyin Inan and Janardhan Kulkarni and Zinan Lin and Chulin Xie and Huishuai Zhang and Wanrong Zhang}, booktitle={Socially Responsible Language Modelling Research}, year={2023}, url={https://openreview.net/forum?id=FKwtKzglFb} }
Language models are pivotal in modern text-based applications, offering many productivity features like next-word prediction, smart composition, and summarization. In many applications, these models must be lightweight to meet inference time and computational cost requirements. Furthermore, due to the inherent sensitivity of their training data, it is essential to train those models in a privacy-preserving manner. While it is well established that training large models with differential privacy (DP) leads to favorable utility-vs-privacy trade-offs, training lightweight models with DP remains an open challenge. This paper explores the use of synthetic data generated from a DP fine-tuned large language model (LLM) to train lightweight models. The key insight behind our framework is that LLMs are better suited for private fine-tuning, and hence using the synthetic data is one way to transfer such capability to smaller models. Our framework can also be interpreted as doing {\em sampling-based} knowledge distillation in the DP setting. It is noteworthy that smaller models can be trained on synthetic data using non-private optimizers, thanks to the post-processing property of DP. We empirically demonstrate that our new approach significantly improves downstream performance compared to directly training lightweight models on real data with DP. For instance, using a model with just 4.4 million parameters, we achieve 97\% relative performance compared to the non-private counterparts on both medical and conversational corpora.
Training Private and Efficient Language Models with Synthetic Data from LLMs
[ "Da Yu", "Arturs Backurs", "Sivakanth Gopi", "Huseyin Inan", "Janardhan Kulkarni", "Zinan Lin", "Chulin Xie", "Huishuai Zhang", "Wanrong Zhang" ]
Workshop/SoLaR
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=DRk4bWKr41
@inproceedings{ laine2023towards, title={Towards a Situational Awareness Benchmark for {LLM}s}, author={Rudolf Laine and Alexander Meinke and Owain Evans}, booktitle={Socially Responsible Language Modelling Research}, year={2023}, url={https://openreview.net/forum?id=DRk4bWKr41} }
Among the facts that LLMs can learn is knowledge about themselves and their situation. This knowledge, and the ability to make inferences based on it, is called situational awareness. Situationally aware models can be more helpful, but also pose risks. For example, situationally aware models could game testing setups by knowing they are being tested and acting differently. We create a new benchmark, SAD (Situational Awareness Dataset), for LLM situational awareness in two categories that are especially relevant for future AI risks. SAD-influence tests whether LLMs can accurately assess how they can or cannot influence the world. SAD-stages tests whether LLMs can recognize that a particular input is likely to have come from a given stage of the LLM lifecycle (pretraining, supervised fine-tuning, testing, and deployment). Only the most capable models do better than chance. If the prompt tells the model that it is an LLM, scores on SAD-influence increase by 9-21 percentage points, while the effect on SAD-stages is mixed.
Towards a Situational Awareness Benchmark for LLMs
[ "Rudolf Laine", "Alexander Meinke", "Owain Evans" ]
Workshop/SoLaR
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=AW1T4xxZ6F
@inproceedings{ nitsure2023risk, title={Risk Assessment and Statistical Significance in the Age of Foundation Models}, author={Apoorva Nitsure and Youssef Mroueh and Mattia Rigotti and Kristjan Greenewald and Brian Belgodere and Mikhail Yurochkin and Jiri Navratil and Igor Melnyk and Jarret Ross}, booktitle={Socially Responsible Language Modelling Research}, year={2023}, url={https://openreview.net/forum?id=AW1T4xxZ6F} }
We propose a distributional framework for assessing socio-technical risks of foundation models with quantified statistical significance. Our approach hinges on a new statistical relative test based on first- and second-order stochastic dominance of real random variables. We show that the second-order statistics in this test are linked to mean-risk models commonly used in econometrics and mathematical finance to balance risk and utility when choosing between alternatives. Using this framework, we formally develop a risk-aware approach for foundation model selection given guardrails quantified by specified metrics. Inspired by portfolio optimization and selection theory in mathematical finance, we define a \emph{metrics portfolio} for each model as a means to aggregate a collection of metrics, and perform model selection based on the stochastic dominance of these portfolios. We use our framework to compare various large language models regarding risks related to drifting from instructions and outputting toxic content.
Risk Assessment and Statistical Significance in the Age of Foundation Models
[ "Apoorva Nitsure", "Youssef Mroueh", "Mattia Rigotti", "Kristjan Greenewald", "Brian Belgodere", "Mikhail Yurochkin", "Jiri Navratil", "Igor Melnyk", "Jarret Ross" ]
Workshop/SoLaR
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=9xhUufywBX
@inproceedings{ desai2023an, title={An Archival Perspective on Pretraining Data}, author={Meera Desai and Abigail Jacobs and Dallas Card}, booktitle={Socially Responsible Language Modelling Research}, year={2023}, url={https://openreview.net/forum?id=9xhUufywBX} }
Research in NLP on pretraining data has largely focused on identifying and mitigating downstream risks in models. We argue that more critical attention is needed to pretraining datasets and the systems that produce them. To highlight the broader range of impacts of pretraining corpora, we consider the analogy between pretraining datasets and archives. Within the broader ecosystem of datasets and models, we focus especially on the processes involved in the creation of pretraining data. By adopting an archives perspective, we surface impacts beyond directly shaping model behavior, including the role of pretraining data corpora as independent data artifacts and the ways that their collection shapes future practices. In particular, we explore research in NLP that parallels archival practices of appraisal: we consider the practice of filtering pretraining data and critically examine the problem formulations taken on by this work. In doing so, we underscore how choices about what is included in pretraining data are necessarily subjective decisions about values. We conclude by drawing on archival studies to offer insights on paths forward.
An Archival Perspective on Pretraining Data
[ "Meera Desai", "Abigail Jacobs", "Dallas Card" ]
Workshop/SoLaR
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=9O6hbiDLU7
@inproceedings{ yang2023bayesian, title={Bayesian low-rank adaptation for large language models}, author={Adam Yang and Maxime Robeyns and Xi Wang and Laurence Aitchison}, booktitle={Socially Responsible Language Modelling Research}, year={2023}, url={https://openreview.net/forum?id=9O6hbiDLU7} }
Low-rank adaptation (LoRA) has emerged as a new paradigm for cost-efficient fine-tuning of large language models (LLMs). However, fine-tuned LLMs often become overconfident especially when fine-tuned on small datasets. Bayesian methods, with their inherent ability to estimate uncertainty, serve as potent tools to mitigate overconfidence and enhance calibration. In this work, we introduce Laplace-LoRA, which applies a Bayesian approach to the LoRA parameters. Specifically, Laplace-LoRA applies a Laplace approximation to the posterior over the LoRA parameters, considerably improving the calibration of fine-tuned LLMs.
Bayesian low-rank adaptation for large language models
[ "Adam Yang", "Maxime Robeyns", "Xi Wang", "Laurence Aitchison" ]
Workshop/SoLaR
2308.13111
[ "https://github.com/maximerobeyns/bayesian_lora" ]
https://huggingface.co/papers/2308.13111
0
0
0
4
[]
[]
[]
[]
[]
[]
1
oral
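A minimal sketch of a post-hoc Laplace approximation restricted to LoRA parameters, in the spirit of the abstract above: treat the fine-tuned LoRA weights as the MAP estimate, build a diagonal Gaussian posterior from a curvature estimate, and average sampled predictions. The diagonal-Fisher choice and the `prior_prec` value are assumptions for illustration, not the paper's exact recipe.

```python
# Post-hoc Laplace over LoRA parameters only: MAP + diagonal curvature.
import torch

def diag_fisher(loss_fn, lora_params, data):
    """Diagonal Fisher estimate: mean squared gradients over the data.
    lora_params must be leaf tensors with requires_grad=True."""
    fisher = [torch.zeros_like(p) for p in lora_params]
    for batch in data:
        grads = torch.autograd.grad(loss_fn(batch), lora_params)
        fisher = [f + g.pow(2) for f, g in zip(fisher, grads)]
    return [f / len(data) for f in fisher]

def laplace_predict(model_fn, lora_params, fisher, x, n_samples=10, prior_prec=1.0):
    """Average predictions under the Gaussian posterior N(MAP, (F + prior)^-1)."""
    std = [(f + prior_prec).rsqrt() for f in fisher]
    preds = []
    for _ in range(n_samples):
        sampled = [p + s * torch.randn_like(p) for p, s in zip(lora_params, std)]
        preds.append(model_fn(x, sampled))
    return torch.stack(preds).mean(0)
```

Because only the low-rank adapters get a posterior, the curvature objects stay small even for billion-parameter backbones, which is what makes the approach tractable.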
null
https://openreview.net/forum?id=8iXdNXW34d
@inproceedings{ hebenstreit2023a, title={A collection of principles for guiding and evaluating large language models}, author={Konstantin Hebenstreit and Robert Praas and Matthias Samwald}, booktitle={Socially Responsible Language Modelling Research}, year={2023}, url={https://openreview.net/forum?id=8iXdNXW34d} }
Large language models (LLMs) demonstrate outstanding capabilities, but challenges remain regarding their ability to solve complex reasoning tasks, as well as their transparency, robustness, truthfulness, and ethical alignment. In this preliminary study, we compile a set of core principles for steering and evaluating the reasoning of LLMs by curating literature from several relevant strands of work: structured reasoning in LLMs, self-evaluation/self-reflection, explainability, AI system safety/security, guidelines for human critical thinking, and ethical/regulatory guidelines for AI. We identify and curate a list of 220 principles from literature, and derive a set of 37 core principles organized into seven categories: assumptions and perspectives, reasoning, information and evidence, robustness and security, ethics, utility, and implications. We conduct a small-scale expert survey, eliciting the subjective importance experts assign to different principles and lay out avenues for future work beyond our preliminary results. We envision that the development of a shared model of principles can serve multiple purposes: monitoring and steering models at inference time, improving model behavior during training, and guiding human evaluation of model reasoning.
A collection of principles for guiding and evaluating large language models
[ "Konstantin Hebenstreit", "Robert Praas", "Matthias Samwald" ]
Workshop/SoLaR
2312.10059
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=6pnRSD1xFe
@inproceedings{ bel{\'e}m2023are, title={Are Models Biased on Text without Gender-related Language?}, author={Catarina Bel{\'e}m and Preethi Seshadri and Yasaman Razeghi and Sameer Singh}, booktitle={Socially Responsible Language Modelling Research}, year={2023}, url={https://openreview.net/forum?id=6pnRSD1xFe} }
As large language models (LLMs) are increasingly deployed for a variety of applications, it is imperative to measure and understand how gender biases present in the training data influence model behavior. Previous works construct benchmarks around known stereotypes (e.g., occupations) and demonstrate high levels of gender bias in LLMs, raising serious concerns about models exhibiting undesirable behaviors. We expand on existing literature by asking the question: \textit{Do large language models still favor one gender over the other in non-stereotypical settings?} To tackle this question, we restrict LLM evaluation to a \textit{neutral} subset, in which sentences are free of pronounced word-gender associations. After quantifying these associations in terms of pretraining data statistics, we use them to (1) create a new benchmark and (2) adapt popular gender pronoun benchmarks -- Winobias and Winogender -- removing sentences with strongly gender-correlated words. Surprisingly, when assessing $20+$ models in the proposed benchmarks, we still detect critically high gender bias across all tested models. For instance, after adjusting for strong word-gender associations, we find that all models still exhibit clear gender preferences in about $60$%-$95$% of the sentences, representing a small change (up to $10$%) from the original benchmark.
Are Models Biased on Text without Gender-related Language?
[ "Catarina Belém", "Preethi Seshadri", "Yasaman Razeghi", "Sameer Singh" ]
Workshop/SoLaR
2405.00588
[ "https://github.com/ucinlp/unstereo-eval" ]
https://huggingface.co/papers/2405.00588
0
0
0
4
[]
[ "ucinlp/unstereo-eval" ]
[]
[]
[ "ucinlp/unstereo-eval" ]
[]
1
poster
null
https://openreview.net/forum?id=6mreYNKLKv
@inproceedings{ hazineh2023linear, title={Linear Latent World Models in Simple Transformers: A Case Study on Othello-{GPT}}, author={Dean Hazineh and Zechen Zhang and Jeffrey Chiu}, booktitle={Socially Responsible Language Modelling Research}, year={2023}, url={https://openreview.net/forum?id=6mreYNKLKv} }
Foundation models exhibit significant capabilities in decision-making and logical deductions. Nonetheless, a continuing discourse persists regarding their genuine understanding of the world as opposed to mere stochastic mimicry. This paper meticulously examines a simple transformer trained for Othello, extending prior research to enhance comprehension of the emergent world model of Othello-GPT. The investigation reveals that Othello-GPT encapsulates a linear representation of opposing pieces, a factor that causally steers its decision-making process. This paper further elucidates the interplay between the linear world representation and causal decision-making, and their dependence on layer depth and model complexity.
Linear Latent World Models in Simple Transformers: A Case Study on Othello-GPT
[ "Dean Hazineh", "Zechen Zhang", "Jeffrey Chiu" ]
Workshop/SoLaR
2310.07582
[ "https://github.com/deanhazineh/emergent-world-representations-othello" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=6mHKQkV8NY
@inproceedings{ kirk2023the, title={The Empty Signifier Problem: Towards Clearer Paradigms for Operationalising ''Alignment'' in Large Language Models}, author={Hannah Kirk and Bertie Vidgen and Paul Rottger and Scott Hale}, booktitle={Socially Responsible Language Modelling Research}, year={2023}, url={https://openreview.net/forum?id=6mHKQkV8NY} }
In this paper, we address the concept of ``alignment'' in large language models (LLMs) through the lens of post-structuralist socio-political theory, specifically examining its parallels to empty signifiers. To establish a shared vocabulary around how abstract concepts of alignment are operationalised in empirical datasets, we propose a framework that demarcates: 1) which dimensions of model behaviour are considered important, then 2) how meanings and definitions are ascribed to these dimensions, and by whom. We situate existing empirical literature and provide guidance on deciding which paradigm to follow. Through this framework, we aim to foster a culture of transparency and critical evaluation, aiding the community in navigating the complexities of aligning LLMs with human populations.
The Empty Signifier Problem: Towards Clearer Paradigms for Operationalising "Alignment" in Large Language Models
[ "Hannah Kirk", "Bertie Vidgen", "Paul Rottger", "Scott Hale" ]
Workshop/SoLaR
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=6U1HlBXezs
@inproceedings{ siththaranjan2023understanding, title={Understanding Hidden Context in Preference Learning: Consequences for {RLHF}}, author={Anand Siththaranjan and Cassidy Laidlaw and Dylan Hadfield-Menell}, booktitle={Socially Responsible Language Modelling Research}, year={2023}, url={https://openreview.net/forum?id=6U1HlBXezs} }
In practice, preference learning from human feedback depends on incomplete data with hidden context. Hidden context refers to data that affects the feedback received, but which is not represented in the data used to train a preference model. This captures common issues of data collection, such as having human annotators with varied preferences, cognitive processes that result in seemingly irrational behavior, and combining data labeled according to different criteria. We prove that standard applications of preference learning, including reinforcement learning from human feedback (RLHF), implicitly aggregate over hidden contexts according to a well-known voting rule called _Borda count_. We show this can produce counter-intuitive results that are very different from other methods which implicitly aggregate via expected utility. Furthermore, our analysis formalizes the way that preference learning from users with diverse values tacitly implements a social choice function. A key implication of this result is that annotators have an incentive to misreport their preferences in order to influence the learned model, leading to vulnerabilities in the deployment of RLHF. As a step towards mitigating these problems, we introduce a class of methods called _distributional preference learning_ (DPL). DPL methods estimate a distribution of possible score values for each alternative in order to better account for hidden context. Experimental results indicate that applying DPL to RLHF for LLM chatbots identifies hidden context in the data and significantly reduces subsequent jailbreak vulnerability.
Understanding Hidden Context in Preference Learning: Consequences for RLHF
[ "Anand Siththaranjan", "Cassidy Laidlaw", "Dylan Hadfield-Menell" ]
Workshop/SoLaR
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
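One concrete instance of the distributional preference learning (DPL) idea from the abstract above, assuming a Gaussian utility head (my assumption; the paper defines a class of DPL methods): the model outputs a mean and variance per alternative, the preference likelihood then has a closed form, and wide learned variances can flag hidden context.

```python
# Gaussian-head DPL sketch: utilities are distributions, not point scores.
import torch

def dpl_loss(mu_w, log_sigma_w, mu_l, log_sigma_l):
    """Negative log-likelihood of observing the preference w > l, where
    u_w ~ N(mu_w, sigma_w^2) and u_l ~ N(mu_l, sigma_l^2) independently."""
    var = torch.exp(2 * log_sigma_w) + torch.exp(2 * log_sigma_l)
    z = (mu_w - mu_l) / torch.sqrt(var)
    p_w_beats_l = torch.distributions.Normal(0.0, 1.0).cdf(z)
    return -torch.log(p_w_beats_l + 1e-8).mean()

# Large predicted sigmas on a prompt signal hidden context: annotators (or
# labeling criteria) that would have ranked the two responses differently.
```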
null
https://openreview.net/forum?id=5hY19x1hdq
@inproceedings{ richter2023subtle, title={Subtle Misogyny Detection and Mitigation: An Expert-Annotated Dataset}, author={Anna Richter and Brooklyn Sheppard and Allison Cohen and Elizabeth Smith and Tamara Kneese and Carolyne Pelletier and Ioana Baldini and Yue Dong}, booktitle={Socially Responsible Language Modelling Research}, year={2023}, url={https://openreview.net/forum?id=5hY19x1hdq} }
Using novel approaches to dataset development, the Biasly dataset captures the nuance and subtlety of misogyny in ways that are unique within the literature. Built in collaboration with multi-disciplinary experts and annotators themselves, the dataset contains annotations of movie subtitles, capturing colloquial expressions of misogyny in North American film. The dataset can be used for a range of NLP tasks, including classification, severity score regression, and text generation for rewrites. In this paper, we discuss the methodology used, analyze the annotations obtained, and provide baselines using common NLP algorithms in the context of misogyny detection and mitigation. We hope this work will promote AI for social good in NLP for bias detection, explanation and removal.
Subtle Misogyny Detection and Mitigation: An Expert-Annotated Dataset
[ "Anna Richter", "Brooklyn Sheppard", "Allison Cohen", "Elizabeth Smith", "Tamara Kneese", "Carolyne Pelletier", "Ioana Baldini", "Yue Dong" ]
Workshop/SoLaR
2311.09443
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=4JNa7bg4tn
@inproceedings{ anderljung2023towards, title={Towards Publicly Accountable Frontier {LLM}s}, author={Markus Anderljung and Everett Smith and Joe O'Brien and Lisa Soder and Benjamin Bucknall and Emma Bluemke and Jonas Schuett and Robert Trager and Lacey Strahm and Rumman Chowdhury}, booktitle={Socially Responsible Language Modelling Research}, year={2023}, url={https://openreview.net/forum?id=4JNa7bg4tn} }
With the increasing integration of frontier large language models (LLMs) into society and the economy, decisions related to their training, deployment, and use have far-reaching implications. These decisions should not be left solely in the hands of frontier LLM developers. LLM users, civil society and policymakers need trustworthy sources of information to steer such decisions for the better. Involving outside actors in the evaluation of these systems (external scrutiny) offers a solution: it can help provide information that is more accurate and complete. Despite encouraging signs of increasing external scrutiny of frontier LLMs, its success is not assured. In this paper, we survey six requirements for effective external scrutiny of frontier AI systems and organize them under the ASPIRE framework: Access, Searching attitude, Proportionality to the risks, Independence, Resources, and Expertise. We then illustrate how external scrutiny might function throughout the AI lifecycle.
Towards Publicly Accountable Frontier LLMs
[ "Markus Anderljung", "Everett Smith", "Joe O'Brien", "Lisa Soder", "Benjamin Bucknall", "Emma Bluemke", "Jonas Schuett", "Robert Trager", "Lacey Strahm", "Rumman Chowdhury" ]
Workshop/SoLaR
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=4Edaw1GjNU
@inproceedings{ gould2023successor, title={Successor Heads: Recurring, Interpretable Attention Heads In The Wild}, author={Rhys Gould and Euan Ong and George Ogden and Arthur Conmy}, booktitle={Socially Responsible Language Modelling Research}, year={2023}, url={https://openreview.net/forum?id=4Edaw1GjNU} }
In this work we present successor heads: attention heads that increment tokens with a natural ordering, such as numbers, months, and days. For example, successor heads increment ‘Monday’ into ‘Tuesday’. We explain the successor head behavior with an approach rooted in mechanistic interpretability, the field that aims to explain how models complete tasks in human-understandable terms. Existing research in this area has found interpretable language model components in small toy models. However, results in toy models have not yet led to insights that explain the internals of frontier models and little is currently understood about the internal operations of large language models. In this paper, we analyze the behavior of successor heads in large language models (LLMs) and find that they implement abstract representations that are common to different architectures. They form in LLMs with as few as 31 million parameters, and at least as many as 12 billion parameters, such as GPT-2, Pythia, and Llama-2. We find a set of ‘mod 10’ features that underlie how successor heads increment in LLMs across different architectures and sizes. We perform vector arithmetic with these features to edit head behavior and provide insights into numeric representations within LLMs. Additionally, we study the behavior of successor heads on natural language data, identifying interpretable polysemanticity in a Pythia successor head.
Successor Heads: Recurring, Interpretable Attention Heads In The Wild
[ "Rhys Gould", "Euan Ong", "George Ogden", "Arthur Conmy" ]
Workshop/SoLaR
2312.09230
[ "" ]
https://huggingface.co/papers/2312.09230
1
0
0
4
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=1kavETZX3Y
@inproceedings{ wang2023forbidden, title={Forbidden Facts: An Investigation of Competing Objectives in Llama 2}, author={Tony Wang and Miles Kai and Kaivalya Hariharan and Nir Shavit}, booktitle={Socially Responsible Language Modelling Research}, year={2023}, url={https://openreview.net/forum?id=1kavETZX3Y} }
LLMs often face competing pressures (for example, helpfulness vs. harmlessness). To understand how models resolve such conflicts, we study Llama-2-7b-chat on the \textit{forbidden fact} task. Specifically, we instruct Llama 2 to truthfully complete a factual recall statement while forbidding it from saying the correct answer. This often makes the model give incorrect answers. We decompose Llama 2 into 1057 different components, and rank each one with respect to how useful it is for forbidding the correct answer. We find that, in aggregate, 41 components are enough to reliably implement the full suppression behavior. However, we find that these components are fairly heterogeneous and that many operate using faulty heuristics. We find that one of these heuristics can be exploited via manually designed adversarial attacks, which we call California Attacks. Our results highlight some roadblocks standing in the way of being able to successfully interpret advanced ML systems.
Forbidden Facts: An Investigation of Competing Objectives in Llama 2
[ "Tony Wang", "Miles Kai", "Kaivalya Hariharan", "Nir Shavit" ]
Workshop/SoLaR
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=zZPICTs5gB
@inproceedings{ generale2023a, title={A Bayesian Approach to Designing Microstructures and Processing Pathways for Tailored Material Properties}, author={Adam P. Generale and Conlain Kelly and Grayson Harrington and Andreas Euan Robertson and Michael Buzzy and Surya Kalidindi}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=zZPICTs5gB} }
Inverse problems are central to material design. While numerous studies have focused on designing microstructures by inverting structure-property linkages for various material systems, such efforts stop short of providing realizable paths to manufacture such structures. Accomplishing the dual task of designing a microstructure and a feasible manufacturing pathway to achieve a target property requires inverting the complete process-structure-property linkage. However, this inversion is complicated by a variety of challenges such as inherent microstructure stochasticity, high-dimensionality, and ill-conditioning of the inversion. In this work, we propose a Bayesian framework leveraging a lightweight flow-based generative approach for the stochastic inversion of the complete process-structure-property linkage. This inversion identifies a solution distribution in the processing parameter space; utilizing these processing conditions realizes materials with the target property sets. Our modular framework readily incorporates the output of stochastic forward models as conditioning variables for a flow-based generative model, thereby learning the complete joint distribution over processing parameters and properties. We demonstrate its application to the multi-objective task of designing processing routes of heterogeneous materials given target sets of bulk elastic moduli and thermal conductivities.
A Bayesian Approach to Designing Microstructures and Processing Pathways for Tailored Material Properties
[ "Adam P. Generale", "Conlain Kelly", "Grayson Harrington", "Andreas Euan Robertson", "Michael Buzzy", "Surya Kalidindi" ]
Workshop/AI4Mat
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=yVLGEnHSEU
@inproceedings{ soares2023beyond, title={Beyond Chemical Language: A Multimodal Approach to Enhance Molecular Property Prediction}, author={Eduardo Soares and Emilio Vital Brazil and Karen Fiorella Aquino Gutierrez and Renato Cerqueira and Daniel P Sanders and Kristin Schmidt and Dmitry Zubarev}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=yVLGEnHSEU} }
We present a novel multimodal language model approach for predicting molecular properties by combining chemical language representation with physicochemical features. Our approach, Multimodal-MoLFormer, utilizes a causal multi-stage feature selection method that identifies physicochemical features based on their direct causal effect on a specific target property. These causal features are then integrated with the vector space generated by molecular embeddings from MoLFormer. In particular, we employ Mordred descriptors as physicochemical features and identify the Markov blanket of the target property, which theoretically contains the most relevant features for accurate prediction. Our results demonstrate a superior performance of our proposed approach compared to existing state-of-the-art algorithms, including the chemical language-based MoLFormer and graph neural networks, in predicting complex tasks such as biodegradability and PFAS toxicity estimation. Moreover, we demonstrate the effectiveness of our feature selection method in reducing the dimensionality of the Mordred feature space while maintaining or improving the model’s performance. Our approach opens up promising avenues for future research in molecular property prediction by harnessing the synergistic potential of both chemical language and physicochemical features, leading to enhanced performance and advancements in the field.
Beyond Chemical Language: A Multimodal Approach to Enhance Molecular Property Prediction
[ "Eduardo Soares", "Emilio Vital Brazil", "Karen Fiorella Aquino Gutierrez", "Renato Cerqueira", "Daniel P Sanders", "Kristin Schmidt", "Dmitry Zubarev" ]
Workshop/AI4Mat
2306.14919
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=xxyHjer00Y
@inproceedings{ fang2023phonon, title={Phonon predictions with E(3)-equivariant graph neural networks}, author={Shiang Fang and Mario Geiger and Joseph Checkelsky and Tess Smidt}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=xxyHjer00Y} }
We present an equivariant neural network for predicting the phonon modes of periodic crystals and molecules by evaluating the second-derivative Hessian matrices of an energy model that is first trained on energy and force data. Such efficient Hessian prediction enables us to predict the phonon dispersion and the density of states for inorganic crystal materials, and the model can be fine-tuned with additional datasets. For molecules, we also derive the symmetry constraints for infrared/Raman-active modes by analyzing the irreducible representations of the phonon modes. Our training paradigm further shows that Hessians can serve as a new type of higher-order training data to improve energy models beyond the lower-order energy and force data.
Phonon predictions with E(3)-equivariant graph neural networks
[ "Shiang Fang", "Mario Geiger", "Joseph Checkelsky", "Tess Smidt" ]
Workshop/AI4Mat
2403.11347
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
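The pipeline in the abstract above (energy model → Hessian → phonons) can be sketched with stock autograd. Here a toy quadratic energy stands in for the trained E(3)-equivariant network, and units and supercell details are ignored; the mass-weighting and eigenvalue step are standard lattice dynamics, not the paper's specific code.

```python
# Phonons from an energy model: Hessian -> mass-weighted dynamical matrix
# -> eigenvalues -> frequencies (omega = sqrt(eigenvalue) for stable modes).
import torch

def phonon_frequencies(energy_fn, positions, masses):
    """positions: (n_atoms, 3); masses: (n_atoms,). Returns mode frequencies."""
    flat = positions.reshape(-1)
    hess = torch.autograd.functional.hessian(
        lambda x: energy_fn(x.reshape(-1, 3)), flat)
    m = masses.repeat_interleave(3)                    # one mass per Cartesian DOF
    dyn = hess / torch.sqrt(m[:, None] * m[None, :])   # mass-weighted Hessian
    evals = torch.linalg.eigvalsh(dyn)
    return torch.sqrt(evals.clamp(min=0.0))            # near-zero modes ~ translations

pos = torch.randn(4, 3, dtype=torch.float64)
freqs = phonon_frequencies(lambda r: (r ** 2).sum(), pos,
                           torch.ones(4, dtype=torch.float64))
```

Training the energy model on forces constrains first derivatives; supervising Hessians, as the abstract proposes, constrains the second derivatives this routine consumes.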
null
https://openreview.net/forum?id=wug7i3O7y1
@inproceedings{ prein2023mtencoder, title={{MTENCODER}: A Multi-task Pretrained Transformer Encoder for Materials Representation Learning}, author={Thorben Prein and Elton Pan and Tom Doerr and Elsa Olivetti and Jennifer L.M. Rupp}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=wug7i3O7y1} }
Given the vast spectrum of material properties characterizing each compound, learning representations for inorganic materials is intricate. The prevailing trend within the materials informatics community leans towards designing specialized models that predict single properties. We introduce a \textit{multi-task} learning framework, wherein a transformer-based encoder is co-trained across diverse materials properties and a denoising objective, resulting in robust and generalizable materials representations. Our method not only improves over the performance observed in single-dataset pretraining, but also showcases scalability and adaptability toward multi-dataset pretraining. Experiments demonstrate that the trained encoder \textsc{MTEncoder} captures chemically meaningful representations, surpassing the performance of current structure-agnostic materials encoders. This approach paves the way for improvements in a multitude of materials informatics tasks, prominently including materials property prediction and synthesis planning for materials discovery.
MTENCODER: A Multi-task Pretrained Transformer Encoder for Materials Representation Learning
[ "Thorben Prein", "Elton Pan", "Tom Doerr", "Elsa Olivetti", "Jennifer L.M. Rupp" ]
Workshop/AI4Mat
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=waQPqifiuC
@inproceedings{ gonzales2023data, title={Data Efficient Training for Materials Property Prediction Using Active Learning Querying}, author={Carmelo Gonzales and Kin Long Kelvin Lee and Bin Mu and Mikhail Galkin and Santiago Miret}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=waQPqifiuC} }
The field of machine learning for materials property prediction and characterization is seeing rapid developments in models, datasets, and frameworks. While datasets and models grow in size, frameworks must mature concurrently to match the data requirements and quick development cycles required to support these growing workloads. The efficient training of models is one area where machine learning frameworks may be improved. Utilizing active learning querying strategies to train models from scratch using fewer data can lead to faster development cycles, faster model evaluations, and reduced training costs. Well-studied active learning querying strategies from computer vision and natural language processing are directly applied to train an E(n)-GNN model from scratch using a subset of the Materials Project Database and the Novel Materials Discovery (NOMAD) Database, with the results compared to data subset selection techniques and the standard training pipeline. In general, the models trained with active learning querying strategies meet or exceed the performance of standard trained models while using significantly less training data.
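A sketch of the querying loop such strategies share, assuming an ensemble of property models with a scikit-learn-style fit/predict interface; the E(n)-GNN is hidden behind `models`, and query-by-disagreement stands in for the specific strategies compared in the paper.

```python
import numpy as np

def query_batch(models, X_pool, batch_size=64):
    # ensemble disagreement as a proxy for predictive uncertainty
    preds = np.stack([m.predict(X_pool) for m in models])
    return np.argsort(preds.std(axis=0))[-batch_size:]

def active_learning_round(models, X_pool, y_pool, X_train, y_train):
    idx = query_batch(models, X_pool)
    X_train = np.concatenate([X_train, X_pool[idx]])
    y_train = np.concatenate([y_train, y_pool[idx]])
    X_pool, y_pool = np.delete(X_pool, idx, axis=0), np.delete(y_pool, idx)
    for m in models:
        m.fit(X_train, y_train)                # retrain on the enlarged subset
    return models, X_pool, y_pool, X_train, y_train
```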
Data Efficient Training for Materials Property Prediction Using Active Learning Querying
[ "Carmelo Gonzales", "Kin Long Kelvin Lee", "Bin Mu", "Mikhail Galkin", "Santiago Miret" ]
Workshop/AI4Mat
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=u6ndfkFRJC
@inproceedings{ hira2023reconstructing, title={Reconstructing Materials Tetrahedron: Challenges in Materials Information Extraction}, author={Kausik Hira and Mohd Zaki and Dhruvil Bhavesh Sheth and Mausam . and N M Anoop Krishnan}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=u6ndfkFRJC} }
Discovery of new materials has a documented history of propelling human progress for centuries and more. The behaviour of a material is a function of its composition, structure, and properties, which further depend on its processing and testing conditions. Recent developments in deep learning and natural language processing have enabled information extraction at scale from published literature such as peer-reviewed publications, books, and patents. However, this information is spread across multiple formats, such as tables, text, and images, with little or no uniformity in reporting style, giving rise to several machine learning challenges. Here, we discuss, quantify, and document these outstanding challenges in automated information extraction (IE) from materials science literature towards the creation of a large materials science knowledge base. Specifically, we focus on IE from text and tables and outline several challenges with examples. We hope the present work inspires researchers to address these challenges in a coherent fashion, providing a fillip to IE for the materials knowledge base.
Reconstructing Materials Tetrahedron: Challenges in Materials Information Extraction
[ "Kausik Hira", "Mohd Zaki", "Dhruvil Bhavesh Sheth", "Mausam .", "N M Anoop Krishnan" ]
Workshop/AI4Mat
2310.08383
[ "https://github.com/m3rg-iitd/matsci-ie-challanges" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=trnzZVhXj2
@inproceedings{ yang2023scalable, title={Scalable Diffusion for Materials Generation}, author={Sherry Yang and KwangHwan Cho and Amil Merchant and Pieter Abbeel and Dale Schuurmans and Igor Mordatch and Ekin Dogus Cubuk}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=trnzZVhXj2} }
Generative models trained on internet-scale data are capable of generating novel and realistic texts, images, and videos. A natural next question is whether these models can advance science, for example by generating novel stable materials. Traditionally, models with explicit structures (e.g., graphs) have been used in modeling structural relationships in scientific data (e.g., atoms and bonds in crystals), but generating structures can be difficult to scale to large and complex systems. Another challenge in generating materials is the mismatch between standard generative modeling metrics and downstream applications. For instance, common metrics such as the reconstruction error do not correlate well with the downstream goal of discovering novel stable materials. In this work, we tackle the scalability challenge by developing a unified crystal representation that can represent any crystal structure (UniMat), followed by training a diffusion probabilistic model on these UniMat representations. Our empirical results suggest that despite the lack of explicit structure modeling, UniMat can generate high fidelity crystal structures from larger and more complex chemical systems, outperforming previous graph-based approaches under various generative modeling metrics. To better connect the generation quality of materials to downstream applications, such as discovering novel stable materials, we propose additional metrics for evaluating generative models of materials, including per-composition formation energy and stability with respect to convex hulls through decomposition energy from Density Functional Theory (DFT). Lastly, we show that conditional generation with UniMat can scale to previously established crystal datasets with up to millions of crystal structures, outperforming random structure search (the current leading method for structure discovery) in discovering new stable materials.
Scalable Diffusion for Materials Generation
[ "Sherry Yang", "KwangHwan Cho", "Amil Merchant", "Pieter Abbeel", "Dale Schuurmans", "Igor Mordatch", "Ekin Dogus Cubuk" ]
Workshop/AI4Mat
2311.09235
[ "" ]
https://huggingface.co/papers/2311.09235
0
0
0
7
[]
[]
[]
[]
[]
[]
1
oral
null
https://openreview.net/forum?id=teN9tMyzCm
@inproceedings{ tiwari2023cono, title={Co{NO}: Complex Neural Operator for Continuous Dynamical Systems}, author={Karn Tiwari and N M Anoop Krishnan and Prathosh AP}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=teN9tMyzCm} }
Neural operators extend data-driven models to map between infinite-dimensional functional spaces. These models have successfully solved continuous dynamical systems represented by differential equations, such as weather forecasting, fluid flow, or solid mechanics. However, the existing operators still rely on real space, thereby losing rich representations potentially captured in the complex space by functional transforms. In this paper, we introduce a Complex Neural Operator (CoNO) that parameterizes the integral kernel in the complex fractional Fourier domain. Additionally, by employing a complex-valued neural network along with aliasing-free activation functions, the model preserves complex values and complex algebraic properties, thereby enabling improved representation, robustness to noise, and generalization. We show that the model effectively captures the underlying partial differential equation with a single complex fractional Fourier transform. We perform an extensive empirical evaluation of CoNO on several datasets and additional tasks such as zero-shot super-resolution, evaluation on out-of-distribution data, data efficiency, and robustness to noise. CoNO exhibits comparable or superior performance to all the state-of-the-art models in these tasks. Altogether, CoNO presents a robust and superior model for modeling continuous dynamical systems, providing a fillip to scientific machine learning. Our code implementation is available at https://anonymous.4open.science/r/anonymous-cono.
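To make the spectral-domain parameterization concrete, here is a simplified complex-valued spectral layer in the spirit of CoNO, using the ordinary FFT in place of the paper's fractional Fourier transform and omitting its aliasing-free activations; a sketch, not the paper's architecture.

```python
import torch
import torch.nn as nn

class ComplexSpectralConv1d(nn.Module):
    """Multiply the lowest `modes` Fourier coefficients by complex weights."""
    def __init__(self, channels, modes):
        super().__init__()
        self.modes = modes  # must not exceed length // 2 + 1
        self.weight = nn.Parameter(
            torch.randn(channels, channels, modes, dtype=torch.cfloat) / channels
        )

    def forward(self, x):                       # x: (batch, channels, length)
        xf = torch.fft.rfft(x)
        out = torch.zeros_like(xf)
        out[..., : self.modes] = torch.einsum(
            "bcm,com->bom", xf[..., : self.modes], self.weight
        )
        return torch.fft.irfft(out, n=x.size(-1))
```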
CoNO: Complex Neural Operator for Continuous Dynamical Systems
[ "Karn Tiwari", "N M Anoop Krishnan", "Prathosh AP" ]
Workshop/AI4Mat
2310.02094
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=rz7qRZqk9h
@inproceedings{ xie2023tokenizer, title={Tokenizer Effect on Functional Material Prediction: Investigating Contextual Word Embeddings for Knowledge Discovery}, author={Tong Xie and Yuwei Wan and Ke Lu and Wenjie Zhang and Chunyu Kit and Bram Hoex}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=rz7qRZqk9h} }
Exploring the predictive capabilities of natural language processing models in material science is a subject of ongoing interest. This study examines material property prediction, relying on models to extract latent knowledge from compound names and material properties. We assessed various methods for contextual embeddings and explored pre-trained models like BERT and GPT. Our findings indicate that using information-dense embeddings from the third layer of domain-specific BERT models, such as MatBERT, combined with the context-average method, is the optimal approach for utilizing unsupervised word embeddings from material science literature to identify material-property relationships. The stark contrast between the domain-specific MatBERT and the general BERT model emphasizes the value of domain-specific training and tokenization for material prediction. Our research identifies a "tokenizer effect", highlighting the importance of specialized tokenization techniques to capture material names effectively during the pretraining phase. We discovered that a tokenizer which preserves compound names entirely, while maintaining a consistent token count, enhances the efficacy of context-aware embeddings in functional material prediction.
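A sketch of the context-averaged, intermediate-layer embedding extraction described above, using the Hugging Face `transformers` API; the checkpoint name is a generic stand-in for the domain-specific MatBERT the authors found essential.

```python
import torch
from transformers import AutoModel, AutoTokenizer

name = "bert-base-uncased"   # stand-in; the paper's results favor MatBERT
tok = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name, output_hidden_states=True).eval()

def context_average(material, contexts, layer=3):
    mat_ids = torch.tensor(tok(material, add_special_tokens=False)["input_ids"])
    vecs = []
    for text in contexts:
        enc = tok(text, return_tensors="pt", truncation=True)
        with torch.no_grad():
            hidden = model(**enc).hidden_states[layer][0]   # (seq_len, dim)
        mask = torch.isin(enc["input_ids"][0], mat_ids)     # material's subwords
        if mask.any():
            vecs.append(hidden[mask].mean(0))
    return torch.stack(vecs).mean(0)  # average over all mention contexts
```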
Tokenizer Effect on Functional Material Prediction: Investigating Contextual Word Embeddings for Knowledge Discovery
[ "Tong Xie", "Yuwei Wan", "Ke Lu", "Wenjie Zhang", "Chunyu Kit", "Bram Hoex" ]
Workshop/AI4Mat
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=rTSMwxnY4f
@inproceedings{ carbonero2023on, title={On the importance of catalyst-adsorbate 3D interactions for relaxed energy predictions}, author={Alvaro Carbonero and Alexandre AGM Duval and Victor Schmidt and Santiago Miret and Alex Hern{\'a}ndez-Garc{\'\i}a and Yoshua Bengio and David Rolnick}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=rTSMwxnY4f} }
The use of machine learning for material property prediction and discovery has traditionally centered on graph neural networks that incorporate the geometric configuration of all atoms. However, in practice not all this information may be readily available, e.g., when evaluating the potentially unknown binding of adsorbates to a catalyst. In this paper, we investigate whether it is possible to predict a system's relaxed energy in the OC20 dataset while ignoring the relative position of the adsorbate with respect to the electro-catalyst. We consider SchNet, DimeNet++ and FAENet as base architectures and measure the impact of four modifications on model performance: removing edges in the input graph, pooling independent representations, not sharing the backbone weights, and using an attention mechanism to propagate non-geometric relative information. We find that while removing binding site information impairs accuracy as expected, the modified models are able to predict relaxed energies with remarkably decent MAE. Our work suggests future research directions in accelerated materials discovery where information on reactant configurations can be reduced or altogether omitted.
On the importance of catalyst-adsorbate 3D interactions for relaxed energy predictions
[ "Alvaro Carbonero", "Alexandre AGM Duval", "Victor Schmidt", "Santiago Miret", "Alex Hernández-García", "Yoshua Bengio", "David Rolnick" ]
Workshop/AI4Mat
2310.06682
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=qpQr8px2Pz
@inproceedings{ circi2023retrieval, title={Retrieval of synthesis parameters of polymer nanocomposites using {LLM}s}, author={Defne Circi and Ghazal Khalighinejad and Shruti Badhwar and Bhuwan Dhingra and L. Brinson}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=qpQr8px2Pz} }
Automated materials synthesis requires historical data, but extracting detailed data and metadata from publications is challenging. We developed initial strategies for using large language models for rapid, autonomous data extraction from materials science articles in a format curatable by a materials database. We used the sub-domain of polymer nanocomposites as our example use case and demonstrated a proof-of-concept case study via manual validation. We used Claude 2 chat and the OpenAI GPT-3.5 and GPT-4 APIs to extract characterization methods and general information about the samples, utilizing zero- and few-shot prompting to elicit more detailed and accurate responses. We achieved the best results, an F1 score of 0.88 on the sample extraction task, using Claude 2 chat. Our findings demonstrate the utility of language models for more effective and practical retrieval of synthesis parameters from the literature.
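A sketch of the zero-shot extraction setup, using the OpenAI chat API (one of the model families the study compares); the prompt wording and JSON schema are illustrative assumptions, not the paper's exact prompts.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# illustrative prompt; the paper's prompts and field set differ in detail
PROMPT = (
    "From the passage below, extract the polymer matrix, filler, filler "
    "loading, and characterization methods. Reply as JSON with keys "
    "'matrix', 'filler', 'loading', 'methods'.\n\nPassage:\n{passage}"
)

def extract(passage: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": PROMPT.format(passage=passage)}],
        temperature=0,
    )
    return resp.choices[0].message.content
```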
Retrieval of synthesis parameters of polymer nanocomposites using LLMs
[ "Defne Circi", "Ghazal Khalighinejad", "Shruti Badhwar", "Bhuwan Dhingra", "L. Brinson" ]
Workshop/AI4Mat
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=qBWcaz5pWd
@inproceedings{ lu2023out, title={Out of Domain Stress Prediction on a Dataset of Simulated 3D Polycrystalline Microstructures}, author={Thomas Lu and Aarti Singh}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=qBWcaz5pWd} }
Surrogate machine learning models for expensive material simulations can be an effective way to estimate relevant properties, which can help reduce the number of experiments needed. However, significant difficulties can occur when attempting to learn from small simulated datasets, particularly for samples outside the domain of the training data. This work explores training deep learning models on a dataset of 36 synthetic 3D equiaxed polycrystalline microstructures with different cubic textures, with a focus on out-of-domain accuracy, analyzing a number of transfer learning setups, domain adaptation methods, model architectures, and featurizations across two formulations of the problem. We develop an evaluation setup to validate our results, and report several methods that provide better results than our baseline of a simple U-Net architecture.
Out of Domain Stress Prediction on a Dataset of Simulated 3D Polycrystalline Microstructures
[ "Thomas Lu", "Aarti Singh" ]
Workshop/AI4Mat
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=pKmcMaULn1
@inproceedings{ yang2023curator, title={{CURATOR}: Autonomous Batch Active-Learning Workflow for Catalysts}, author={Xin Yang and Renata Sechi and Martin Hoffmann Petersen and Arghya Bhowmik and Heine Anton Hansen}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=pKmcMaULn1} }
Machine learning interatomic potentials (MLIPs) enable molecular simulations at longer time scales without compromising accuracy and at lower computational costs compared to electronic structure methods such as density functional theory (DFT). Application of MLIPs to complex functional-materials development can help to create new scientific insights, however, MLIPs need ad-hoc training for each new system. Reaching sufficient accuracy through large-scale training is data-intensive, and requires a high level of technical proficiency from the user. Reliable MLIP construction requires an appropriate selection of representative structures and calibrated model uncertainty while avoiding undersampling of the state space. Currently, there is a lack of end-to-end automated software to take this complexity away from the end user. In this tutorial, we show how to use CURATOR, an open-source software-based autonomous batch active learning workflow. CURATOR trains message-passing graph neural networks and enables management of model training, production testing, data selection based on uncertainty estimation, optimal batch choice, labeling via DFT-based simulations, and retraining in a user-friendly way.
CURATOR: Autonomous Batch Active-Learning Workflow for Catalysts
[ "Xin Yang", "Renata Sechi", "Martin Hoffmann Petersen", "Arghya Bhowmik", "Heine Anton Hansen" ]
Workshop/AI4Mat
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=oRKWhmtUG6
@inproceedings{ yang2023accurate, title={Accurate Prediction of Experimental Band Gaps from Large Language Model-Based Data Extraction}, author={Samuel J. Yang and Shutong Li and Subhashini Venugopalan and Vahe Tshitoyan and Muratahan Aykol and Amil Merchant and Ekin Dogus Cubuk and Gowoon Cheon}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=oRKWhmtUG6} }
Machine learning is transforming materials discovery by providing rapid predictions of material properties, which enables large-scale screening for target materials. However, such models require training data. While automated data extraction from scientific literature has potential, current auto-generated datasets often lack sufficient accuracy and the critical structural and processing details of materials that influence the properties. Using band gap as an example, we demonstrate that large language model (LLM) prompt-based extraction yields an order of magnitude lower error rate. Combined with additional prompts to select a subset of experimentally measured properties from pure, single-crystalline bulk materials, this results in an automatically extracted dataset that is larger and more diverse than the largest existing human-curated database of experimental band gaps. Compared to the existing human-curated database, we show that the model trained on our extracted database achieves a 19% reduction in the mean absolute error of predicted band gaps. Finally, we demonstrate that LLMs are able to train models predicting band gap on the extracted data, achieving an automated pipeline from data extraction to materials property prediction.
Accurate Prediction of Experimental Band Gaps from Large Language Model-Based Data Extraction
[ "Samuel J. Yang", "Shutong Li", "Subhashini Venugopalan", "Vahe Tshitoyan", "Muratahan Aykol", "Amil Merchant", "Ekin Dogus Cubuk", "Gowoon Cheon" ]
Workshop/AI4Mat
2311.13778
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=mlQMqXo083
@inproceedings{ nemati2023selfsupervised, title={Self-supervised Crack Detection in X-ray Computed Tomography Data of Additive Manufacturing Parts}, author={Saber Nemati and Seyedeh Shaghayegh Rabbanian and Hao Wang and Leslie Butler and Shengmin Guo}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=mlQMqXo083} }
Following the current trend toward minimizing human intervention in training intelligent architectures, this paper proposes a self-supervised method for quality control of Additive Manufacturing (AM) parts. An Inconel 939 sample is fabricated with the Laser Powder Bed Fusion (L-PBF) method and scanned using X-ray Computed Tomography (XCT) to reveal the internal cracks. A self-supervised approach is adopted by employing three modules that generate crack-like features for training a CycleGAN network. The proposed method generates random cracks based on a combination of uniform and normal random variables and outperforms the alternatives in fine-grained crack detection and in capturing narrow crack tips. A preliminary investigation of the training process shows that the algorithm is also capable of predicting the crack propagation direction.
Self-supervised Crack Detection in X-ray Computed Tomography Data of Additive Manufacturing Parts
[ "Saber Nemati", "Seyedeh Shaghayegh Rabbanian", "Hao Wang", "Leslie Butler", "Shengmin Guo" ]
Workshop/AI4Mat
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=mVttK4r1Zt
@inproceedings{ zhou2023active, title={Active learning for excited states dynamics simulations to discover molecular degradation pathways}, author={Chen Zhou and Prashant Kumar and Daniel Escudero and Pascal Friederich}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=mVttK4r1Zt} }
The demand for precise, data-efficient, and cost-effective exploration of chemical space has ignited growing interest in machine learning (ML), which exhibits remarkable capabilities in accelerating atomistic simulations of large systems over long time scales. Active learning is a technique widely used to reduce the cost of acquiring relevant ML training data. Here we present a modular, transferable, and broadly applicable parallel active learning orchestrator. Our workflow enables data and task parallelism for data generation, model training, and ML-enhanced simulations. We demonstrate its use in efficiently exploring multiple excited state potential energy surfaces and possible degradation pathways of an organic semiconductor used in organic light-emitting diodes. With our modular and adaptable workflow architecture, we expect our parallel active learning approach to be readily extended to explore other materials using state-of-the-art ML models, opening ways to AI-guided design and a better understanding of molecules and materials relevant to various applications, such as organic semiconductors or photocatalysts.
Active learning for excited states dynamics simulations to discover molecular degradation pathways
[ "Chen Zhou", "Prashant Kumar", "Daniel Escudero", "Pascal Friederich" ]
Workshop/AI4Mat
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=mE6ldawl0n
@inproceedings{ jung2023data, title={Data Distillation for Neural Network Potentials toward Foundational Dataset}, author={Gang Seob Jung and Sangkeun Lee and Jong Choi}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=mE6ldawl0n} }
Machine learning (ML) techniques and atomistic modeling have rapidly transformed materials design and discovery. Specifically, generative models can swiftly propose promising materials for targeted applications. However, the properties of materials predicted by generative models often do not match the properties calculated through ab initio calculations. This discrepancy can arise because the generated coordinates are not fully relaxed, whereas many properties are derived from relaxed structures. Neural network-based potentials (NNPs) can expedite the process by providing relaxed structures from the initially generated ones. Nevertheless, acquiring data to train NNPs for this purpose can be extremely challenging, as it needs to encompass previously unknown structures. This study utilized extended ensemble molecular dynamics (MD) to secure a broad range of liquid- and solid-phase configurations in a representative metallic system, nickel. We could then significantly reduce the data through active learning without losing much accuracy. We found that the NNP trained from the distilled data could predict different energy-minimized close-packed crystal structures even though those structures were not explicitly part of the initial data. Furthermore, the data can be translated to other metallic systems (aluminum and niobium) without repeating the sampling and distillation processes. Our approach to data acquisition and distillation has demonstrated the potential to expedite NNP development and enhance materials design and discovery by integrating generative models.
Data Distillation for Neural Network Potentials toward Foundational Dataset
[ "Gang Seob Jung", "Sangkeun Lee", "Jong Choi" ]
Workshop/AI4Mat
2311.05407
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=m0YBr6wFin
@inproceedings{ cunnington2023symbolic, title={Symbolic Learning for Material Discovery}, author={Daniel Cunnington and Flaviu Cipcigan and Rodrigo Neumann Barros Ferreira and Jonathan Booth}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=m0YBr6wFin} }
Discovering new materials is essential to solve challenges in climate change, sustainability and healthcare. A typical task in materials discovery is to search for a material in a database which maximises the value of a function. That function is often expensive to evaluate, and can rely upon a simulation or an experiment. Here, we introduce SyMDis, a sample efficient optimisation method based on symbolic learning, that discovers near-optimal materials in a large database. SyMDis performs comparably to a state-of-the-art optimiser, whilst learning interpretable rules to aid physical and chemical verification. Furthermore, the rules learned by SyMDis generalise to unseen datasets and return high performing candidates in a zero-shot evaluation, which is difficult to achieve with other approaches.
Symbolic Learning for Material Discovery
[ "Daniel Cunnington", "Flaviu Cipcigan", "Rodrigo Neumann Barros Ferreira", "Jonathan Booth" ]
Workshop/AI4Mat
2312.11487
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=lXHSXyyLhd
@inproceedings{ soares2023capturing, title={Capturing Formulation Design of Battery Electrolytes with Chemical Large Language Model}, author={Eduardo Soares and Vidushi Sharma and Emilio Vital Brazil and Renato Cerqueira and Young-Hye Na}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=lXHSXyyLhd} }
Recent progress in large transformer-based foundation models has demonstrated impressive capabilities in mastering complex chemical language representations. These models show promise in learning task-agnostic chemical language representations through a two-step process: pre-training on extensive unlabeled corpora and fine-tuning on specific downstream tasks. By utilizing self-supervised learning capabilities, foundation models have significantly reduced the reliance on labeled data and task-specific features, streamlining data acquisition and pushing the boundaries of chemical language representation. However, their practical implementation in further downstream tasks is still in its early stages and largely limited to sequencing problems. The proposed multimodal approach using MoLFormer, a chemical large language model, aims to demonstrate the capabilities of transformer-based models in non-sequencing applications such as capturing the design space of liquid formulations. Multimodal MoLFormer utilizes the extensive chemical information learned in pre-training from unlabeled corpora to predict the performance of battery electrolytes and showcases superior performance compared to state-of-the-art algorithms. The potential of foundation models in designing mixed material systems such as liquid formulations presents a groundbreaking opportunity to accelerate the discovery and optimization of new materials and formulations across various industries.
Capturing Formulation Design of Battery Electrolytes with Chemical Large Language Model
[ "Eduardo Soares", "Vidushi Sharma", "Emilio Vital Brazil", "Renato Cerqueira", "Young-Hye Na" ]
Workshop/AI4Mat
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=l3K28QS6R6
@inproceedings{ ruff2023connectivity, title={Connectivity Optimized Nested Line Graph Networks for Crystal Structures}, author={Robin Ruff and Patrick Reiser and Jan Stuehmer and Pascal Friederich}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=l3K28QS6R6} }
Graph neural networks (GNNs) have been applied to a large variety of applications in materials science and chemistry. Here, we systematically investigate graph construction for crystalline (periodic) materials and its impact on GNN model performance. We propose the asymmetric unit cell as a representation that reduces the number of nodes needed to represent periodic graphs by exploiting all symmetries of the system. Without any loss in accuracy, this substantially reduces the computational cost and thus the time needed to train large graph neural networks. For architecture exploration, we extend the original Graph Network framework (GN) of Battaglia et al. [1], introducing nested line graphs (Nested Line Graph Network, NLGN) to include more recent architectures. With a systematically built GNN architecture based on NLGN blocks, we thereby improve state-of-the-art results across all tasks within the MatBench benchmark. Further analysis shows that optimized connectivity and deeper message functions are responsible for the improvement. Asymmetric unit cells and connectivity optimization can be applied to (crystal) graph networks in general, while the suggested NLGN framework can be used as a template to compare and build further GNN architectures.
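A sketch of the asymmetric unit cell reduction using `spglib`, which the abstract proposes as a node-saving representation; dict-style dataset access is shown (recent spglib versions also expose the same fields as attributes), and the nested line graph construction itself is omitted.

```python
import numpy as np
import spglib

def asymmetric_unit(lattice, frac_positions, numbers, symprec=1e-5):
    """Keep one representative site per orbit of symmetry-equivalent atoms."""
    cell = (np.asarray(lattice), np.asarray(frac_positions), list(numbers))
    data = spglib.get_symmetry_dataset(cell, symprec=symprec)
    # `equivalent_atoms` maps each site to its orbit representative
    reps = np.unique(np.asarray(data["equivalent_atoms"]))
    return (np.asarray(frac_positions)[reps],
            np.asarray(numbers)[reps],
            data["number"])          # space group number, for reconstruction
```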
Connectivity Optimized Nested Line Graph Networks for Crystal Structures
[ "Robin Ruff", "Patrick Reiser", "Jan Stuehmer", "Pascal Friederich" ]
Workshop/AI4Mat
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=l167FjdPOv
@inproceedings{ mistal2023crystalgfn, title={Crystal-{GFN}: sampling materials with desirable properties and constraints}, author={Mistal and Alex Hern{\'a}ndez-Garc{\'\i}a and Alexandra Volokhova and Alexandre AGM Duval and Yoshua Bengio and Divya Sharma and Pierre Luc Carrier and Micha{\l} Koziarski and Victor Schmidt}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=l167FjdPOv} }
Accelerating material discovery holds the potential to greatly help mitigate the climate crisis. Discovering new solid-state materials such as electrocatalysts, super-ionic conductors or photovoltaic materials can have a crucial impact, for instance, in improving the efficiency of renewable energy production and storage. In this paper, we introduce Crystal-GFN, a generative model of crystal structures that sequentially samples structural properties of crystalline materials, namely the space group, composition and lattice parameters. This domain-inspired approach enables the flexible incorporation of physical and structural constraints, as well as the use of any available predictive model of a desired physico-chemical property as an objective function. To design stable materials, one must target the candidates with the lowest formation energy, which is used as an objective to evaluate the capabilities of Crystal-GFN. The formation energy of a crystal structure is predicted here by a new proxy model trained on MatBench. The results demonstrate that Crystal-GFN is able to sample diverse crystals with low formation energy.
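To illustrate the sequential, domain-structured action space (not the GFlowNet training itself), here is a toy sampler that builds a crystal state in the same order the abstract describes: space group, then composition, then lattice parameters; the element list and value ranges are arbitrary placeholders.

```python
import random

def sample_crystal(rng=random):
    state = {"space_group": rng.randint(1, 230)}                 # step 1
    elements = rng.sample(["Li", "O", "Fe", "P"], k=2)           # step 2
    state["composition"] = {el: rng.randint(1, 4) for el in elements}
    state["lengths"] = {p: round(rng.uniform(3.0, 12.0), 2)      # step 3
                        for p in ("a", "b", "c")}
    state["angles"] = {p: rng.choice([60.0, 90.0, 120.0])
                       for p in ("alpha", "beta", "gamma")}
    return state  # a GFlowNet samples each step from a learned policy instead
```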
Crystal-GFN: sampling materials with desirable properties and constraints
[ "Mistal", "Alex Hernández-García", "Alexandra Volokhova", "Alexandre AGM Duval", "Yoshua Bengio", "Divya Sharma", "Pierre Luc Carrier", "Michał Koziarski", "Victor Schmidt" ]
Workshop/AI4Mat
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=kupYlLLGdf
@inproceedings{ majumdar2023pihlora, title={{PIHL}o{RA}: Physics-informed hypernetworks for low-ranked adaptation}, author={Ritam Majumdar and Vishal Sudam Jadhav and Anirudh Deodhar and Shirish Karande and Lovekesh Vig and Venkataramana Runkana}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=kupYlLLGdf} }
Physics-informed neural networks (PINNs) have been widely used to develop neural surrogates for solutions of Partial Differential Equations. A drawback of PINNs is that they have to be retrained with every change in initial-boundary conditions and PDE coefficients. The Hypernetwork, a model-based meta-learning technique, takes a parameterized task embedding as input and predicts the weights of a PINN as output. Predicting the weights of a neural network, however, is a high-dimensional regression problem, and hypernetworks are observed to perform sub-optimally when predicting parameters for large base networks. In this work, we investigate whether we can circumvent this issue with the use of low-ranked adaptation (LoRA). Specifically, we use low-ranked adaptation to decompose every layer of the base network into low-ranked tensors and use hypernetworks to predict the low-ranked tensors. However, we observe that the reduced dimensionality of the resulting weight-regression problem does not suffice to train the hypernetwork well. Nevertheless, the addition of a physics-informed loss (HyperPINN) drastically improves the generalization capabilities. To show the efficacy of our proposed methods, we consider PDEs widely used in the domain of materials science, such as Maxwell's equations, the elasticity equation, Burgers' equation, and the Navier-Stokes equations. We observe that LoRA-based HyperPINN (PIHLoRA) training allows us to learn fast solutions while achieving an 8x reduction in prediction parameters on average, without compromising on accuracy when compared to all other baselines.
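A minimal PyTorch sketch of the core idea: a hypernetwork maps a task embedding to low-rank factors A and B whose product adapts a frozen base layer. Dimensions are placeholders, and the physics-informed loss is not shown.

```python
import torch
import torch.nn as nn

class LoRAHyperLayer(nn.Module):
    def __init__(self, d_in, d_out, rank=4, task_dim=16):
        super().__init__()
        self.base = nn.Linear(d_in, d_out)
        for p in self.base.parameters():       # base network stays frozen
            p.requires_grad_(False)
        self.hyper = nn.Linear(task_dim, rank * (d_in + d_out))
        self.rank, self.d_in, self.d_out = rank, d_in, d_out

    def forward(self, x, task_emb):
        # task_emb: (task_dim,) -> predict only the low-rank factors
        theta = self.hyper(task_emb)
        A = theta[: self.rank * self.d_in].view(self.rank, self.d_in)
        B = theta[self.rank * self.d_in:].view(self.d_out, self.rank)
        return self.base(x) + x @ (B @ A).T    # W_base + B A adaptation
```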
PIHLoRA: Physics-informed hypernetworks for low-ranked adaptation
[ "Ritam Majumdar", "Vishal Sudam Jadhav", "Anirudh Deodhar", "Shirish Karande", "Lovekesh Vig", "Venkataramana Runkana" ]
Workshop/AI4Mat
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=kjUylvko18
@inproceedings{ murakumo2023llm, title={{LLM} Drug Discovery Challenge: A Contest as a Feasibility Study on the Utilization of Large Language Models in Medicinal Chemistry}, author={Kusuri Murakumo and Naruki Yoshikawa and Kentaro Rikimaru and Shogo Nakamura and Kairi Furui and Takamasa Suzuki and Hiroyuki Yamasaki and Yuki Nishigaya and Yuzo Takagi and Masahito Ohue}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=kjUylvko18} }
The ultimate ideal in AI-driven drug discovery is the automatic design of specific drugs for individual diseases, yet this goal remains technically distant at present. However, recent advancements in large language models (LLMs) have significantly broadened the scope of applications with various tasks being explored in the chemistry domain. To probe the potential of utilizing LLMs in drug discovery, we organized a contest: the LLM Drug Discovery Challenge. Participants were tasked with proposing molecular structures of active compound candidates for a designated drug target using LLM-based workflows. The proposed chemical structures were evaluated comprehensively through scoring by a panel of five judges with deep expertise in medicinal chemistry, structural biology, and computational chemistry. Nine participants tackled the challenge with their unique methodologies, exploring the possibilities and current limitations of leveraging LLMs in drug discovery. In this rapidly advancing field, we aim to discuss the directions of future developments and what is expected moving forward.
LLM Drug Discovery Challenge: A Contest as a Feasibility Study on the Utilization of Large Language Models in Medicinal Chemistry
[ "Kusuri Murakumo", "Naruki Yoshikawa", "Kentaro Rikimaru", "Shogo Nakamura", "Kairi Furui", "Takamasa Suzuki", "Hiroyuki Yamasaki", "Yuki Nishigaya", "Yuzo Takagi", "Masahito Ohue" ]
Workshop/AI4Mat
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=k3523F7PpW
@inproceedings{ wen2023search, title={Search Strategies for Self-driving Laboratories with Pending Experiments}, author={Hao Wen and Jakob Zeitler and Connor Rupnow}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=k3523F7PpW} }
Self-driving laboratories (SDLs) consist of multiple stations that perform material synthesis and characterisation tasks. To minimize station downtime and maximize experimental throughput, it is practical to run experiments in asynchronous parallel, in which multiple experiments are being performed at once in different stages. Asynchronous parallelization of experiments, however, introduces delayed feedback (i.e. “pending points”), which is known to reduce Bayesian optimizer performance. Here, we build a simulator for a multi-stage SDL and compare optimization strategies for dealing with delayed feedback and asynchronous parallelized operation. Using data from [1], we build a ground truth Bayesian optimization simulator from 177 previously run experiments for maximizing the conductivity of functional coatings. We then compare search strategies such as naive expected improvement, 4-mode exploration as proposed by the original authors, and asynchronous batching. We evaluate their performance in terms of number of stages, and short-, medium- and long-term optimization performance. Our simulation results showcase the trade-off between asynchronous parallel operation and delayed feedback.
Search Strategies for Self-driving Laboratories with Pending Experiments
[ "Hao Wen", "Jakob Zeitler", "Connor Rupnow" ]
Workshop/AI4Mat
2312.03466
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=jtAXitX6dh
@inproceedings{ stevenson2023machine, title={Machine learning force field ranking of candidate solid electrolyte interphase structures in Li-ion batteries}, author={James Minuse Stevenson}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=jtAXitX6dh} }
The Solid-Electrolyte Interphase (SEI) formed in lithium-ion batteries is a vital but poorly-understood class of materials, combining organic and inorganic components. An SEI allows a battery to function by protecting electrode materials from unwanted side reactions. We use a combination of classical sampling and a novel machine learning model to produce the first set of SEI candidate structures ranked by predicted energy, to be used in future machine learning applications and compared to experimental results. We hope that this work will be the start of a more quantitative understanding of lithium-ion battery interphases and an impetus to development of machine learning models for battery materials.
Machine learning force field ranking of candidate solid electrolyte interphase structures in Li-ion batteries
[ "James Minuse Stevenson" ]
Workshop/AI4Mat
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=josIqIStKs
@inproceedings{ lee2023matsciml, title={MatSci{ML}: A Broad, Multi-Task Benchmark for Solid-State Materials Modeling}, author={Kin Long Kelvin Lee and Carmelo Gonzales and Marcel Nassar and Matthew Spellings and Mikhail Galkin and Santiago Miret}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=josIqIStKs} }
We propose MatSci ML, a novel benchmark for modeling **Mat**erials **Sci**ence using **M**achine **L**earning methods focused on solid-state materials with periodic crystal structures. Applying machine learning methods to solid-state materials is a nascent field with substantial fragmentation largely driven by the great variety of datasets used to develop machine learning models. This fragmentation makes comparing the performance and generalizability of different methods difficult, thereby hindering overall research progress in the field. Building on top of open-source datasets, including large-scale datasets like the OpenCatalyst, OQMD, NOMAD, the Carolina Materials Database, and Materials Project, the MatSci ML benchmark provides a diverse set of materials systems and properties data for model training and evaluation, including simulated energies, atomic forces, material bandgaps, as well as classification data for crystal symmetries via space groups. The diversity of properties in MatSci ML makes the implementation and evaluation of multi-task learning algorithms for solid-state materials possible, while the diversity of datasets facilitates the development of new, more generalized algorithms and methods across multiple datasets. In the multi-dataset learning setting, MatSci ML enables researchers to combine observations from multiple datasets to perform joint prediction of common properties, such as energy and forces. Using MatSci ML, we evaluate the performance of different graph neural networks and equivariant point cloud networks on several benchmark tasks spanning single task, multitask, and multi-data learning scenarios. Our open-source code is available at \url{https://github.com/IntelLabs/matsciml}.
MatSciML: A Broad, Multi-Task Benchmark for Solid-State Materials Modeling
[ "Kin Long Kelvin Lee", "Carmelo Gonzales", "Marcel Nassar", "Matthew Spellings", "Mikhail Galkin", "Santiago Miret" ]
Workshop/AI4Mat
2309.05934
[ "https://github.com/intellabs/matsciml" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=jlZrTCccAb
@inproceedings{ cheng2023reflectionequivariant, title={Reflection-Equivariant Diffusion for 3D Structure Determination from Isotopologue Rotational Spectra in Natural Abundance}, author={Austin Henry Cheng and Alston Lo and Santiago Miret and Brooks Pate and Alan Aspuru-Guzik}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=jlZrTCccAb} }
Structure determination is necessary to identify unknown organic molecules, such as those in natural products, forensic samples, the interstellar medium, and laboratory syntheses. Rotational spectroscopy enables structure determination by providing accurate 3D information about small organic molecules via their moments of inertia. Kraitchman analysis uses these moments to determine isotopic substitution coordinates, which are the unsigned $|x|,|y|,|z|$ coordinates of all atoms with natural isotopic abundance, including carbon, nitrogen, and oxygen. While unsigned substitution coordinates can verify guesses of structures, the missing $+/-$ signs make it a hard computational problem to determine the actual structure from just the substitution coordinates. To tackle this inverse problem, we develop KREED (Kraitchman REflection-Equivariant Diffusion), a diffusion generative model which infers a molecule's all-atom 3D structure conditioned on the molecular formula, moments of inertia, and unsigned substitution coordinates of carbon and other heavy atoms. KREED's top-1 predictions identify the correct 3D structure with $>$98\% accuracy on the QM9 and GEOM datasets when provided with substitution coordinates of all heavy atoms with natural isotopic abundance. When substitution coordinates are restricted to only a subset of carbons, accuracy is retained at 91\% for QM9 and 32\% for GEOM. On a test set of experimentally measured substitution coordinates gathered from the literature, KREED can identify the correct all-atom 3D structure in 25 of 33 cases, demonstrating experimental applicability for context-free 3D structure determination with rotational spectroscopy.
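For context on why the inverse problem is hard, here is a sketch of the brute-force alternative KREED replaces: enumerate the 2^(3N) sign assignments of the unsigned substitution coordinates and keep those whose principal moments of inertia match the measured ones; this is feasible only for very small molecules.

```python
import itertools
import numpy as np

def principal_moments(coords, masses):
    com = np.average(coords, axis=0, weights=masses)
    r = coords - com
    gyr = np.einsum("i,ij,ik->jk", masses, r, r)
    inertia = np.trace(gyr) * np.eye(3) - gyr   # I = tr(G) * 1 - G
    return np.sort(np.linalg.eigvalsh(inertia))

def consistent_sign_choices(unsigned, masses, target, tol=1e-2):
    # exhaustive search over per-coordinate signs; KREED's conditional
    # diffusion model avoids exactly this exponential enumeration
    for signs in itertools.product((-1.0, 1.0), repeat=unsigned.size):
        coords = unsigned * np.asarray(signs).reshape(unsigned.shape)
        if np.allclose(principal_moments(coords, masses), target, atol=tol):
            yield coords
```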
Reflection-Equivariant Diffusion for 3D Structure Determination from Isotopologue Rotational Spectra in Natural Abundance
[ "Austin Henry Cheng", "Alston Lo", "Santiago Miret", "Brooks Pate", "Alan Aspuru-Guzik" ]
Workshop/AI4Mat
2310.11609
[ "https://github.com/aspuru-guzik-group/kreed" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=iSFsLFsGYX
@inproceedings{ shibata2023message, title={Message Passing Neural Network for Predicting Dipole Moment Dependent Core Electron Excitation Spectra}, author={Kiyou Shibata and Teruyasu Mizoguchi}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=iSFsLFsGYX} }
Absorption near edge structures in the core electron excitation spectra reflect the anisotropy of orbitals in the transition final state and can be used for analyzing local atomic environment including its orientation. So far, the analysis of fine structures is mainly based on a fingerprint-matching with high-cost experimental or simulated spectra. If core electron excitation spectra, including its anisotropy, can be predicted at low cost using machine learning, the application range of the core electron excitation spectra will be accelerated and extended for such as orientation and electronic structure analysis of liquid crystals and organic solar cells at high spatial resolution. In this study, we introduce a message-passing neural network for predicting core electron excitation spectra using a unit direction vector in addition to molecular graphs as input. Utilizing a database of calculated C K-edge spectra, we have confirmed that the network can predict core electron excitation spectra reflecting the anisotropy of molecules. Our model is expected to be expanded to other physical quantities in general that depend not only on molecular graphs but also on anisotropic vectors.
Message Passing Neural Network for Predicting Dipole Moment Dependent Core Electron Excitation Spectra
[ "Kiyou Shibata", "Teruyasu Mizoguchi" ]
Workshop/AI4Mat
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=hIqTGXFu0J
@inproceedings{ noutahi2023gotta, title={Gotta be {SAFE}: A new Framework for Molecular Design}, author={Emmanuel Noutahi and Cristian Gabellini and Michael Craig and Jonathan Siu Chi Lim and Prudencio Tossou}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=hIqTGXFu0J} }
Traditional molecular string representations, such as SMILES, often pose challenges for AI-driven molecular design due to their non-sequential depiction of molecular substructures. To address this issue, we introduce Sequential Attachment-based Fragment Embedding (SAFE), a novel line notation for chemical structures. SAFE reimagines SMILES strings as an unordered sequence of interconnected fragment blocks while maintaining full compatibility with existing SMILES parsers. It streamlines complex generative tasks, including scaffold decoration, fragment linking, polymer generation, and scaffold hopping, while facilitating autoregressive generation for fragment-constrained design, thereby eliminating the need for intricate decoding or graph-based models. We demonstrate the effectiveness of SAFE by training an 87-million-parameter GPT2-like model on a dataset containing 1.1 billion SAFE representations. Through extensive experimentation, we show that our SAFE-GPT model exhibits versatile and robust optimization performance. SAFE opens up new avenues for the rapid exploration of chemical space under various constraints, promising breakthroughs in AI-driven molecular design.
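The released `safe` package (https://github.com/datamol-io/safe) exposes the conversion directly; the snippet below follows the project README at the time of writing, so the exact function names may evolve.

```python
import safe

smiles = "CC(=O)Oc1ccccc1C(=O)O"             # aspirin
safe_str = safe.encode(smiles)                # SMILES -> fragment-block notation
print(safe_str)                               # dot-joined interconnected fragments
back = safe.decode(safe_str, canonical=True)  # SAFE stays SMILES-parser compatible
```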
Gotta be SAFE: A new Framework for Molecular Design
[ "Emmanuel Noutahi", "Cristian Gabellini", "Michael Craig", "Jonathan Siu Chi Lim", "Prudencio Tossou" ]
Workshop/AI4Mat
2310.10773
[ "https://github.com/datamol-io/safe" ]
https://huggingface.co/papers/2310.10773
2
0
0
5
[ "datamol-io/safe-gpt" ]
[ "datamol-io/safe-gpt", "datamol-io/safe-drugs", "anrilombard/safe-gpt-small" ]
[ "bcadkins01/beta_lactam_demo" ]
[ "datamol-io/safe-gpt" ]
[ "datamol-io/safe-gpt", "datamol-io/safe-drugs", "anrilombard/safe-gpt-small" ]
[ "bcadkins01/beta_lactam_demo" ]
1
poster
null
https://openreview.net/forum?id=ezuhvZAaUF
@inproceedings{ lingampalli2023ecocomp, title={Eco-Comp: Towards Responsible Computing in Materials Science}, author={Sai Lingampalli and El Tayeb Bentria and Fadwa El Mellouhi}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=ezuhvZAaUF} }
The use of large molecular dynamics (MD) simulations to bridge time and length scales in materials science is expected to surge in the next few years, partially due to the development of highly accurate machine learning interatomic potentials that enable the simulation of multi-million-atom systems. We also expect a high demand for materials science simulations using multiple nodes within high-performance computing facilities (HPCs) due to their computational intensity. Through the analysis of catalysis simulation setups consisting of bulk metallic systems with adsorbed molecular species on the surface, we identified various factors that affect parallel computing efficiency. To foster sustainable and ethical computing practices, this study employs the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) to find the optimal allocation of computing resources based on the simulation input. We thus propose guidelines to promote responsible computing within HPC architectures: Eco-Comp is a user-friendly automated Python tool that allows materials scientists to optimize the power consumption of their simulations with one command. This tutorial gives a broad overview of the Eco-Comp software and its potential use for the materials science community through an interactive guide.
Eco-Comp: Towards Responsible Computing in Materials Science
[ "Sai Lingampalli", "El Tayeb Bentria", "Fadwa El Mellouhi" ]
Workshop/AI4Mat
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=eDlEn1PPJw
@inproceedings{ guha2023hepom, title={{HEPOM}: A predictive framework for accelerated Hydrolysis Energy Predictions of Organic Molecules}, author={Rishabh Debraj Guha and Santiago Vargas and Evan Walter Clark Spotte-Smith and Alex R Epstein and Maxwell Christopher Venetos and Mingjian Wen and Ryan Kingsbury and Samuel M Blau and Kristin Persson}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=eDlEn1PPJw} }
Hydrolysis is a fundamental chemical reaction where water facilitates the cleavage of bonds in a reactant molecule. The process is ubiquitous in biological and chemical systems, owing to water's remarkable versatility as a solvent. However, accurately predicting the feasibility of hydrolysis through computational techniques is a difficult task, as subtle changes in reactant structure like heteroatom substitutions or neighboring functional groups can influence the reaction outcome. Furthermore, hydrolysis is sensitive to the pH of the aqueous medium, and the same reaction can have different reaction properties at different pH conditions. In this work, we have combined reaction templates and high-throughput ab-initio calculations to construct a diverse dataset of hydrolysis free energies. Subsequently, we use a Graph Neural Network (GNN) to predict the free energy changes ($\Delta$G) for all hydrolytic pathways within a subset of the QM9 molecular dataset. The framework automatically identifies reaction centers, generates hydrolysis products, and utilizes a trained GNN model to predict $\Delta$G values for all potential hydrolysis reactions in a given molecule. The long-term goal of the work is to develop a data-driven, computational tool for high-throughput screening of pH-specific hydrolytic stability and the rapid prediction of reaction products, which can then be applied in a wide array of applications including chemical recycling of polymers and ion-conducting membranes for clean energy generation and storage.
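A sketch of the reaction-template step using RDKit; the ester-hydrolysis SMARTS below is one illustrative template among the many such a framework would apply, and the trained GNN that predicts each product's ΔG is not shown.

```python
from rdkit import Chem
from rdkit.Chem import rdChemReactions

# illustrative template: water cleaves the ester C-O bond into acid + alcohol
ester_hydrolysis = rdChemReactions.ReactionFromSmarts(
    "[C:1](=[O:2])[O:3][C:4].[O:5]>>[C:1](=[O:2])[O:5].[O:3][C:4]"
)

mol, water = Chem.MolFromSmiles("CC(=O)OCC"), Chem.MolFromSmiles("O")
for prods in ester_hydrolysis.RunReactants((mol, water)):
    for p in prods:
        Chem.SanitizeMol(p)                   # fix implicit hydrogens
    print(".".join(Chem.MolToSmiles(p) for p in prods))  # acetic acid + ethanol
```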
HEPOM: A predictive framework for accelerated Hydrolysis Energy Predictions of Organic Molecules
[ "Rishabh Debraj Guha", "Santiago Vargas", "Evan Walter Clark Spotte-Smith", "Alex R Epstein", "Maxwell Christopher Venetos", "Mingjian Wen", "Ryan Kingsbury", "Samuel M Blau", "Kristin Persson" ]
Workshop/AI4Mat
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=dJuDv4MKLE
@inproceedings{ nguyen2023hierarchical, title={Hierarchical {GF}lowNet for Crystal Structure Generation}, author={Tri Minh Nguyen and Sherif Abdulkader Tawfik and Truyen Tran and Sunil Gupta and Santu Rana and Svetha Venkatesh}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=dJuDv4MKLE} }
Discovering new solid-state materials necessitates the ability to rapidly explore the vast space of crystal structures and locate stable regions. Generating stable materials with desired properties and composition is challenging because of (a) the exponentially large number of possibilities when elements from the periodic table are considered along with vast variations in their 3D arrangement and corresponding lattice parameters, and (b) the rarity of stable structures. Furthermore, materials discovery requires not only optimized solution structures but also diversity in the configuration of generated material structures. Existing methods have difficulty exploring large material spaces and generating significantly diverse samples with desired properties and requirements. We propose the Crystal Hierarchical Generative Flow Network (CHGlownet), a new generative model that employs a hierarchical exploration strategy with a Generative Flow Network to efficiently explore the material space while generating crystal structures with desired properties. Our model decomposes the large material space into a hierarchy of subspaces of space groups, lattice parameters, and atoms. We significantly outperform iterative generative methods such as the Generative Flow Network (GFlowNet) and the Physics Guided Crystal Generative Model (PGCGM) on crystal structure generation tasks in validity, diversity, and in generating stable structures with optimized properties and requirements.
Hierarchical GFlowNet for Crystal Structure Generation
[ "Tri Minh Nguyen", "Sherif Abdulkader Tawfik", "Truyen Tran", "Sunil Gupta", "Santu Rana", "Svetha Venkatesh" ]
Workshop/AI4Mat
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=cq2MJtq9iA
@inproceedings{ cipcigan2023discovery, title={Discovery of Novel Reticular Materials for Carbon Dioxide Capture using {GF}lowNets}, author={Flaviu Cipcigan and Jonathan Booth and Rodrigo Neumann Barros Ferreira and Carine Ribeiro Dos Santos and Mathias B Steiner}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=cq2MJtq9iA} }
Artificial intelligence holds promise to improve materials discovery. GFlowNets are an emerging deep learning algorithm with many applications in AI-assisted discovery. Using GFlowNets, we generate porous reticular materials, such as metal-organic frameworks and covalent organic frameworks, for applications in carbon dioxide capture. We introduce a new Python package (matgfn) to train and sample GFlowNets. We use matgfn to generate the matgfn-rm dataset of novel and diverse reticular materials with gravimetric surface area above 5000 $m^2/g$. We calculate single- and two-component gas adsorption isotherms for the top-100 candidates in matgfn-rm. These candidates are novel compared to the state-of-the-art ARC-MOF dataset and rank in the 90th percentile in terms of working capacity compared to the CoRE2019 dataset. We discover 15 hypothetical materials outperforming all materials in CoRE2019.
Discovery of Novel Reticular Materials for Carbon Dioxide Capture using GFlowNets
[ "Flaviu Cipcigan", "Jonathan Booth", "Rodrigo Neumann Barros Ferreira", "Carine Ribeiro Dos Santos", "Mathias B Steiner" ]
Workshop/AI4Mat
2310.07671
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=cSz69rFRvS
@inproceedings{ beeler2023demonstrating, title={Demonstrating ChemGym{RL}: An Interactive Framework for Reinforcement Learning for Digital Chemistry}, author={Chris Beeler and Sriram Ganapathi Subramanian and Kyle Sprague and Colin Bellinger and Mark Crowley and Isaac Tamblyn}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=cSz69rFRvS} }
This tutorial describes a simulated laboratory for making use of reinforcement learning (RL) for chemical discovery. A key advantage of the simulated environment is that it enables RL agents to be trained safely and efficiently. In addition, it offers an excellent test-bed for RL in general, with challenges that are uncommon in existing RL benchmarks. The simulated laboratory, denoted ChemGymRL, is open-source, implemented according to the standard Gymnasium API, and is highly customizable. It supports a series of interconnected virtual chemical \emph{benches} where RL agents can operate and train. Within this tutorial, we introduce the environment, demonstrate how to train off-the-shelf RL algorithms on the benches, and show how to modify the benches by adding additional reactions and other capabilities. In addition, we discuss future directions for ChemGymRL benches and for RL in laboratory automation and the discovery of novel synthesis pathways. The software, documentation and tutorials are available here: https://www.chemgymrl.com
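Because the benches follow the standard Gymnasium API, the usual interaction loop applies; the import and environment id below are assumptions patterned on the ChemGymRL documentation rather than verified identifiers.

```python
import gymnasium as gym
import chemgymrl  # assumed to register the bench environments on import

env = gym.make("WurtzReact-v2")               # hypothetical reaction-bench id
obs, info = env.reset(seed=0)
done = False
while not done:
    action = env.action_space.sample()        # replace with a trained RL policy
    obs, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
env.close()
```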
Demonstrating ChemGymRL: An Interactive Framework for Reinforcement Learning for Digital Chemistry
[ "Chris Beeler", "Sriram Ganapathi Subramanian", "Kyle Sprague", "Colin Bellinger", "Mark Crowley", "Isaac Tamblyn" ]
Workshop/AI4Mat
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=cR1iE6MQ1y
@inproceedings{ venugopal2023matkg, title={Mat{KG}-2: Unveiling precise material science ontology through autonomous committees of {LLM}s}, author={Vineeth Venugopal and Elsa Olivetti}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=cR1iE6MQ1y} }
This paper introduces MatKG-2, a Material Science knowledge graph autonomously generated through a Large Language Model (LLM) driven pipeline. Building on the groundwork of MatKG, MatKG-2 employs a novel 'committee of large language models' approach to extract and classify knowledge triples with an established ontology. Unlike the previous version, which relied on statistical co-occurrence, MatKG-2 offers more nuanced, ontology-based relationships. Using open LLMs such as Llama2 7b and Bloom 1b/7b, the study offers reproducibility and broad community engagement. By using 4-bit and 8-bit quantized versions for fine-tuning and inference, MatKG-2 is also more computationally tractable and therefore compatible with most commercially available GPUs. Our work highlights the potential of MatKG-2 in supporting Material Science data infrastructure and in contributing to the semantic web.
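A hedged sketch of the committee idea described above: query several LLMs for a relation label and keep a triple only when enough committee members agree. The `ask_model` wrapper, the `generate` interface, and the relation labels are hypothetical stand-ins, not MatKG-2's released code.

```python
# Majority-vote committee over multiple LLMs for ontology-based triple
# classification. Everything model-facing here is an assumed interface.
from collections import Counter

ONTOLOGY_RELATIONS = ["synthesized_by", "has_property", "used_in_application"]

def ask_model(model, head: str, tail: str) -> str:
    """Hypothetical: prompt one LLM to pick a relation label for (head, tail)."""
    prompt = (f"Classify the relation between '{head}' and '{tail}' "
              f"as one of {ONTOLOGY_RELATIONS}. Answer with the label only.")
    return model.generate(prompt).strip()  # assumed model interface

def committee_vote(models, head, tail, min_agreement=2):
    votes = Counter(ask_model(m, head, tail) for m in models)
    label, count = votes.most_common(1)[0]
    # Accept a knowledge triple only when enough committee members agree
    # on a label that exists in the established ontology.
    if count >= min_agreement and label in ONTOLOGY_RELATIONS:
        return (head, label, tail)
    return None
```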
MatKG-2: Unveiling precise material science ontology through autonomous committees of LLMs
[ "Vineeth Venugopal", "Elsa Olivetti" ]
Workshop/AI4Mat
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=b3RULuNZjQ
@inproceedings{ fu2023mofdiff, title={{MOFD}iff: Coarse-grained Diffusion for Metal-Organic Framework Design}, author={Xiang Fu and Tian Xie and Andrew Scott Rosen and Tommi Jaakkola and Jake Allen Smith}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=b3RULuNZjQ} }
Metal-organic frameworks (MOFs) are of immense interest in applications such as gas storage and carbon capture due to their exceptional porosity and tunable chemistry. Their modular nature has enabled the use of template-based methods to generate hypothetical MOFs by combining molecular building blocks in accordance with known network topologies. However, the ability of these methods to identify top-performing MOFs is often hindered by the limited diversity of the resulting chemical space. In this work, we propose MOFDiff: a coarse-grained (CG) diffusion model that generates CG MOF structures through a denoising diffusion process over the coordinates and identities of the building blocks. The all-atom MOF structure is then determined through a novel assembly algorithm. As the diffusion model generates 3D MOF structures by predicting scores in E(3), we employ equivariant graph neural networks that respect the permutational and roto-translational symmetries. We comprehensively evaluate our model's capability to generate valid and novel MOF structures and its effectiveness in designing outstanding MOF materials for carbon capture applications with molecular simulations.
MOFDiff: Coarse-grained Diffusion for Metal-Organic Framework Design
[ "Xiang Fu", "Tian Xie", "Andrew Scott Rosen", "Tommi Jaakkola", "Jake Allen Smith" ]
Workshop/AI4Mat
2310.10732
[ "" ]
https://huggingface.co/papers/2310.10732
1
0
0
5
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=akFqQokObE
@inproceedings{ alberts2023learning, title={Learning the Language of {NMR}: Structure Elucidation from {NMR} spectra using Transformer Models}, author={Marvin Alberts and Federico Zipoli and Alain Vaucher}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=akFqQokObE} }
The application of machine learning models in chemistry has made remarkable strides in recent years. Even though there is considerable interest in automating common procedures in analytical chemistry using machine learning, very few models have been adopted into everyday use. Among the analytical instruments available to chemists, Nuclear Magnetic Resonance (NMR) spectroscopy is one of the most important, offering insights into molecular structure unobtainable with other methods. However, most processing and analysis of NMR spectra is still performed manually, making the task tedious and time-consuming, especially for large quantities of spectra. We present a transformer-based machine learning model capable of predicting the molecular structure directly from the NMR spectrum. Our model is pretrained on synthetic NMR spectra, achieving a top-1 accuracy of 67.0% when predicting the structure from both the $^1$H and $^{13}$C spectrum. Additionally, we train a model which, given a spectrum and a set of likely compounds, selects the structure corresponding to the spectrum. This model achieves a top-1 accuracy of 98.28% when trained on both $^1$H and $^{13}$C spectra in selecting the correct structure.
Learning the Language of NMR: Structure Elucidation from NMR spectra using Transformer Models
[ "Marvin Alberts", "Federico Zipoli", "Alain Vaucher" ]
Workshop/AI4Mat
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=Ztku4ig4xM
@inproceedings{ coda2023impacts, title={Impacts of Data and Models on Unsupervised Pre-training for Molecular Property Prediction}, author={Elizabeth Coda and Gihan Uthpala Panapitiya and Emily Saldanha}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=Ztku4ig4xM} }
The available labeled data to support molecular property prediction are limited in size due to experimental time and cost requirements. However, unsupervised learning techniques can leverage vast databases of molecular structures, thus significantly expanding the scope of training data. We compare the effectiveness of pre-training data and modeling choices to support the downstream task of molecular aqueous solubility prediction. We also compare the global and local structure of the learned latent spaces to probe the properties of effective pre-training approaches. We find that the pre-training modeling choices affect predictive performance and the latent space structure much more than the data choices.
Impacts of Data and Models on Unsupervised Pre-training for Molecular Property Prediction
[ "Elizabeth Coda", "Gihan Uthpala Panapitiya", "Emily Saldanha" ]
Workshop/AI4Mat
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=ZoTcKpyL3p
@inproceedings{ chen2023automatic, title={Automatic Generation of Mechanistic Pathways of Organic Reactions with Dual Templates}, author={Shuan Chen and Ramil Babazade and Yousung Jung}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=ZoTcKpyL3p} }
Understanding organic reaction mechanisms is crucial for interpreting the formation of products at the atomic and electronic level, but still remains the domain of knowledgeable experts. The lack of a large-scale dataset with chemically reasonable mechanistic sequences also hinders the development of reliable machine learning models to predict organic reactions based on mechanisms as human chemists do. Here, we propose a method that automatically generates reaction mechanisms for a large dataset of organic reactions using autonomously extracted reaction templates and expert-coded mechanistic templates. By applying this method, we converted 94.8\% of 33k USPTO reactions into chemically reasonable arrow-pushing diagrams, validated by expert chemists. Our method is simple, flexible, and can be expanded to cover a wider range of reactions, regardless of type or complexity. We envision it becoming an invaluable tool for proposing reaction mechanisms, developing future reaction outcome prediction models, and discovering new reactions.
Automatic Generation of Mechanistic Pathways of Organic Reactions with Dual Templates
[ "Shuan Chen", "Ramil Babazade", "Yousung Jung" ]
Workshop/AI4Mat
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=XtEwUzaXJ3
@inproceedings{ schwarzer2023learning, title={Learning Silicon Dopant Transitions in Graphene using Scanning Transmission Electron Microscopy}, author={Max Schwarzer and Jesse Farebrother and Joshua Greaves and Kevin Roccapriore and Ekin Cubuk and Rishabh Agarwal and Aaron Courville and Marc Bellemare and Sergei Kalinin and Igor Mordatch and Pablo Castro}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=XtEwUzaXJ3} }
We introduce a machine learning approach to determine the transition rates of silicon atoms on a single layer of carbon atoms, when stimulated by the electron beam of a scanning transmission electron microscope (STEM). Our method is data-centric, leveraging data collected on a STEM. The data samples are processed and filtered to produce symbolic representations, which we use to train a neural network to predict transition rates. These rates are then applied to guide a single silicon atom throughout the lattice to pre-determined target destinations. We present empirical analyses that demonstrate the efficacy and generality of our approach.
Learning Silicon Dopant Transitions in Graphene using Scanning Transmission Electron Microscopy
[ "Max Schwarzer", "Jesse Farebrother", "Joshua Greaves", "Kevin Roccapriore", "Ekin Cubuk", "Rishabh Agarwal", "Aaron Courville", "Marc Bellemare", "Sergei Kalinin", "Igor Mordatch", "Pablo Castro" ]
Workshop/AI4Mat
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=VbjD8w2ctG
@inproceedings{ govindarajan2023learning, title={Learning Conditional Policies for Crystal Design Using Offline Reinforcement Learning}, author={Prashant Govindarajan and Santiago Miret and Jarrid Rector-Brooks and Mariano Phielipp and Janarthanan Rajendran and Sarath Chandar}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=VbjD8w2ctG} }
Navigating through the exponentially large chemical space to search for desirable materials is an extremely challenging task in material discovery. Recent developments in generative and geometric deep learning have shown promising results in molecule and material discovery but often lack evaluation with high-accuracy computational methods. This work aims to design novel and stable crystalline materials conditioned on a desired band gap. To achieve conditional generation, we: 1. Formulate crystal design as a sequential decision-making problem, create relevant trajectories based on high-quality materials data, and use conservative Q-learning to learn a conditional policy from these trajectories. To do so, we formulate a reward function that incorporates constraints for energetic and electronic properties obtained directly from density functional theory (DFT) calculations; 2. Evaluate the generated materials from the policy using DFT calculations for both energy and band gap; 3. Compare our results to relevant baselines, including a random policy, behavioral cloning, and unconditioned policy learning. Our experiments show that conditioned policies achieve targeted crystal design and demonstrate the capability to perform crystal discovery evaluated with accurate and computationally expensive DFT calculations.
Learning Conditional Policies for Crystal Design Using Offline Reinforcement Learning
[ "Prashant Govindarajan", "Santiago Miret", "Jarrid Rector-Brooks", "Mariano Phielipp", "Janarthanan Rajendran", "Sarath Chandar" ]
Workshop/AI4Mat
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=TLercqxR4f
@inproceedings{ song2023honeybee, title={HoneyBee: Progressive Instruction Finetuning of Large Language Models for Materials Science}, author={Yu Song and Santiago Miret and Huan Zhang and Bang Liu}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=TLercqxR4f} }
We propose an instruction-based process for trustworthy data curation in materials science (MatSci-Instruct), which we then apply to finetune a LLaMa-based language model targeted for materials science (HoneyBee). MatSci-Instruct helps alleviate the scarcity of relevant, high-quality materials science textual data available in the open literature, and HoneyBee is the first billion-parameter language model specialized to materials science. In MatSci-Instruct we improve the trustworthiness of generated data by prompting multiple commercially available large language models for generation with an Instructor module (e.g. Chat-GPT) and verification from an independent Verifier module (e.g. Claude). Using MatSci-Instruct, we construct a dataset of multiple tasks and measure the quality of our dataset along multiple dimensions, including accuracy against known facts, relevance to materials science, as well as completeness and reasonableness of the data. Moreover, we iteratively generate more targeted instructions in a finetuning-evaluation-feedback loop, leading to progressively better performance for our finetuned HoneyBee models. Our evaluation on the MatSci-NLP benchmark shows that HoneyBee outperforms existing language models on materials science tasks and improves iteratively across successive stages of instruction refinement. We study the quality of HoneyBee's language modeling through automatic evaluation and analyze case studies to further understand the model's capabilities and limitations.
HoneyBee: Progressive Instruction Finetuning of Large Language Models for Materials Science
[ "Yu Song", "Santiago Miret", "Huan Zhang", "Bang Liu" ]
Workshop/AI4Mat
2310.08511
[ "https://github.com/BangLab-UdeM-Mila/NLP4MatSci-HoneyBee" ]
https://huggingface.co/papers/2310.08511
1
0
0
4
[]
[]
[]
[]
[]
[]
1
oral
null
https://openreview.net/forum?id=TDhNb2Q9Xm
@inproceedings{ guo2023understanding, title={Understanding Experimental Data by Identifying Symmetries with Deep Learning}, author={Yichen Guo and Shuyu Qin and Joshua Agar}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=TDhNb2Q9Xm} }
Utilizing computational methods to extract actionable information from scientific data is essential due to the time-consuming and inaccurate nature of manual human analysis. To serve this purpose well, computational methods must be equipped with physical rules. Integrating deep learning models with symmetry awareness has emerged as a promising approach to significantly improve symmetry detection in experimental data, with techniques such as parameter sharing and novel convolutional layers enhancing symmetry recognition.[1,2,3,4,5,6] However, the challenge of integrating physical principles, such as symmetry, into these models persists. To address this, we have developed benchmarking datasets and training frameworks, exploring three perspectives to classify wallpaper group symmetries effectively. Our study demonstrates the limitations of deep learning models in understanding symmetry, as evidenced by benchmark results. A detailed analysis is provided with a hierarchical dataset and training outcomes, and a symmetry filter is designed to improve symmetry operation recognition. This endeavor aims to push the boundaries of deep learning models in comprehending symmetry and embed physical rules within them, ultimately unlocking new possibilities at the intersection of machine learning and physical symmetry, with valuable applications in materials science and beyond.
Understanding Experimental Data by Identifying Symmetries with Deep Learning
[ "Yichen Guo", "Shuyu Qin", "Joshua Agar" ]
Workshop/AI4Mat
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=SfEsK3O2KT
@inproceedings{ pekala2023evaluating, title={Evaluating {AI}-guided Design for Scientific Discovery}, author={Michael Pekala and Elizabeth Ann Pogue and Kyle McElroy and Alexander New and Gregory Bassen and Brandon Wilfong and Janna Domenico and Tyrel McQueen and Christopher D Stiles}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=SfEsK3O2KT} }
Machine learning has great potential to revolutionize experimental materials research; however, the degree to which these approaches accelerate novel discovery is rarely quantified. To this end, we propose a framework for characterizing the rate of “first discovery” of scientific hypotheses in the form of materials families. We use a combination of the SuperCon and Materials Project databases to simulate a scientific needle-in-a-haystack discovery problem as a motivating example. We use this approach to compare the ability of different adaptive sampling strategies to rediscover promising superconductor families, such as the Cuprates and iron-based superconductors. This methodology can be applied using various notions of novelty, making it applicable to discovery problems more broadly.
Evaluating AI-guided Design for Scientific Discovery
[ "Michael Pekala", "Elizabeth Ann Pogue", "Kyle McElroy", "Alexander New", "Gregory Bassen", "Brandon Wilfong", "Janna Domenico", "Tyrel McQueen", "Christopher D Stiles" ]
Workshop/AI4Mat
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=SeXGn7MeUr
@inproceedings{ bihani2023egraffbench, title={{EG}ra{FFB}ench: Evaluation of Equivariant Graph Neural Network Force Fields for Atomistic Simulations}, author={Vaibhav Bihani and Utkarsh Pratiush and Sajid Mannan and Tao Du and Zhimin Chen and Santiago Miret and Matthieu Micoulaut and Morten M Smedskjaer and Sayan Ranu and N M Anoop Krishnan}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=SeXGn7MeUr} }
Equivariant graph neural network force fields (EGraFFs) have shown great promise in modelling complex interactions in atomic systems by exploiting the graphs’ inherent symmetries. Recent works have led to a surge in the development of novel architectures that incorporate equivariance-based inductive biases alongside architectural innovations like graph transformers and message passing to model atomic interactions. However, a thorough evaluation of deploying EGraFFs for the downstream task of real-world atomistic simulations is lacking. To this end, here we perform a systematic benchmarking of 6 EGraFF algorithms (NequIP, Allegro, BOTNet, MACE, Equiformer, TorchMDNet), with the aim of understanding their capabilities and limitations for realistic atomistic simulations. In addition to our thorough evaluation and analysis on eight existing datasets based on the benchmarking literature, we release two new benchmark datasets, propose four new metrics, and introduce three challenging tasks. The new datasets and tasks evaluate the performance of EGraFFs on out-of-distribution data, in terms of different crystal structures, temperatures, and new molecules. Interestingly, evaluation of the EGraFF models based on dynamic simulations reveals that having a lower error on energy or force does not guarantee stable or reliable simulation or faithful replication of the atomic structures. Moreover, we find that no model clearly outperforms other models on all datasets and tasks. Importantly, we show that the performance of all the models on out-of-distribution datasets is unreliable, pointing to the need for the development of a foundation model for force fields that can be used in real-world simulations. In summary, this work establishes a rigorous framework for evaluating machine learning force fields in the context of atomic simulations and points to open research challenges within this domain.
EGraFFBench: Evaluation of Equivariant Graph Neural Network Force Fields for Atomistic Simulations
[ "Vaibhav Bihani", "Utkarsh Pratiush", "Sajid Mannan", "Tao Du", "Zhimin Chen", "Santiago Miret", "Matthieu Micoulaut", "Morten M Smedskjaer", "Sayan Ranu", "N M Anoop Krishnan" ]
Workshop/AI4Mat
2310.02428
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=SKo0sLvaJH
@inproceedings{ suresh2023multiobjective, title={Multi-objective Evolutionary Design of Microstructures using Diffusion Autoencoders}, author={Anirudh Suresh and Devesh Shah and Alemayehu S Admasu and Devesh Upadhyay and Kalyanmoy Deb}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=SKo0sLvaJH} }
Efficient design of microstructures with targeted properties has always been a challenging task owing to the expensive and time-consuming nature of the problem. In recent years, generative models have been used to accelerate this process. However, most of these methods are hindered by the choice of their generative model - either due to stability and usability issues, as with GANs, or by the flexibility of the model itself, such as the availability of a semantically meaningful latent space. We propose a diffusion autoencoder based generative design framework that not only provides the fidelity and stability benefits of diffusion models but also has a desirable latent space that can be exploited by evolutionary algorithms. We employ this framework to solve multiple simultaneous objectives to find a Pareto frontier of candidate microstructures. We also show that the search space of optimization can be drastically reduced by conditioning the model with target objective values. We demonstrate the efficacy of the proposed framework on a number of optimization and generative tasks based on a two-phase morphology dataset derived from Cahn-Hilliard equations.
Multi-objective Evolutionary Design of Microstructures using Diffusion Autoencoders
[ "Anirudh Suresh", "Devesh Shah", "Alemayehu S Admasu", "Devesh Upadhyay", "Kalyanmoy Deb" ]
Workshop/AI4Mat
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=QjnVVrixPW
@inproceedings{ mirauta2023high, title={High throughput decomposition of spectra}, author={Dumitru Mirauta and Vladimir Gusev and Michael W Gaultois and Matthew Rosseinsky}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=QjnVVrixPW} }
In order to fully utilise the potential throughput of automated synthesis and characterisation data collection, data analysis capabilities must have matching throughput; at present, analysis consumes excessive (human) expert time even for small datasets. One such analysis task is unmixing: being able to generally separate, from a sample consisting of multiple components, the individual patterns characteristic of the constituent parts. Such tasks are often complicated by variation of the basis patterns (e.g. peak shifting and broadening in PXRD). Conventional approaches focus on fitting a parameterised subset of transformations or utilising phase space relationships, and so one tuned for PXRD may require extensive modification or retraining before being suitable for another modality. This work aims to build a more robust foundation for unmixing, not specific to a particular spectral modality. A more robust optimisation can be achieved through a more robust cost, and distance/comparison is a vital component of such costs. We construct a non-regressive, distance-geometry-based framework, here leveraging Optimal Transport (OT) with a Euclidean ground cost, but lending itself to modification through the use of different distances. This provides a non-parametric approach that allows for arbitrary variation. We show through numerical experiments that our approach can handle fully blind basis discovery despite independent random peak shifting/broadening at various intensities, where matrix factorisation frameworks break down. We also showcase use in smaller data regimes through a laboratory discovery mockup, where our method can flag compositions containing an unknown trace component.
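As a minimal illustration of the comparison primitive described above, the snippet below computes an OT distance between two normalised 1-D spectra with a Euclidean ground cost using the POT library; a shifted peak changes this distance smoothly, unlike a bin-wise L2 comparison. This is our toy, not the authors' framework.

```python
# Optimal-transport distance between two synthetic 1-D spectra.
import numpy as np
import ot  # POT: pip install pot

x = np.linspace(0.0, 10.0, 200)            # shared axis (e.g. 2-theta)

def gaussian_peak(mu, sigma=0.15):
    s = np.exp(-0.5 * ((x - mu) / sigma) ** 2)
    return s / s.sum()                      # normalise to a probability vector

spec_a = gaussian_peak(3.0)
spec_b = gaussian_peak(3.5)                 # same peak, shifted

# Ground cost: squared Euclidean distance between axis positions.
M = ot.dist(x.reshape(-1, 1), x.reshape(-1, 1), metric="sqeuclidean")
w2 = ot.emd2(spec_a, spec_b, M)             # exact OT cost
l2 = np.sum((spec_a - spec_b) ** 2)
print(f"OT cost {w2:.4f} vs bin-wise L2 {l2:.4f}")
```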
High throughput decomposition of spectra
[ "Dumitru Mirauta", "Vladimir Gusev", "Michael W Gaultois", "Matthew Rosseinsky" ]
Workshop/AI4Mat
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=O8mZO2ri33
@inproceedings{ ghugare2023searching, title={Searching for High-Value Molecules Using Reinforcement Learning and Transformers}, author={Raj Ghugare and Santiago Miret and Adriana Hugessen and Mariano Phielipp and Glen Berseth}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=O8mZO2ri33} }
Reinforcement learning (RL) over text representations can be effective for finding high-value policies that can search over graphs. However, RL requires careful structuring of the search space and algorithm design to be effective in this challenge. Through extensive experiments, we explore how different design choices for text grammar and algorithmic choices for training can affect an RL policy's ability to generate molecules with desired properties. We arrive at a new RL-based molecular design algorithm (ChemRLformer) and perform a thorough analysis using 25 molecule design tasks, including computationally complex protein docking simulations. From this analysis, we discover unique insights in this problem space and show that ChemRLformer achieves state-of-the-art performance while being more straightforward than prior work by demystifying which design choices are actually helpful for text-based molecule design.
Searching for High-Value Molecules Using Reinforcement Learning and Transformers
[ "Raj Ghugare", "Santiago Miret", "Adriana Hugessen", "Mariano Phielipp", "Glen Berseth" ]
Workshop/AI4Mat
2310.02902
[ "" ]
https://huggingface.co/papers/2310.02902
0
0
0
5
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=NyXTjtFojv
@inproceedings{ hua2023accelerated, title={Accelerated Sampling of Rare Events using a Neural Network Bias Potential}, author={Xinru Hua and Rasool Ahmad and Jose Blanchet and Wei Cai}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=NyXTjtFojv} }
In the field of computational physics and material science, the efficient sampling of rare events occurring at the atomic scale is crucial. It aids in understanding mechanisms behind a wide range of important phenomena, including protein folding, conformational changes, chemical reactions and materials diffusion and deformation. Traditional simulation methods, such as Molecular Dynamics and Monte Carlo, often prove inefficient in capturing the timescale of these rare events by brute force. In this paper, we introduce a practical approach that combines the idea of importance sampling with deep neural networks (DNNs) to enhance the sampling of these rare events. In particular, we approximate the variance-free bias potential function with DNNs, which are trained to maximize the probability of rare event transition under the importance potential function. This method is easily scalable to high-dimensional problems and provides robust statistical guarantees on the accuracy of the estimated probability of rare event transition. Furthermore, our algorithm can actively generate and learn from any successful samples, which is a novel improvement over existing methods. Using a 2D system as a test bed, we provide comparisons between results obtained from different training strategies, traditional Monte Carlo sampling and numerically solved optimal bias potential functions under different temperatures. Our numerical results demonstrate the efficacy of the DNN-based importance sampling of rare events.
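A toy sketch of the core mechanism, under our own assumptions: sample a 2-D double well under a biased potential U + V, where V is a small neural network, and recover unbiased statistics with importance weights proportional to exp(V/kT). Training V to maximise the rare-event probability (the paper's contribution) is omitted; the network here is untrained.

```python
# Biased Metropolis sampling of a 2-D double well with a neural bias
# potential, reweighted back to the unbiased Boltzmann distribution.
import numpy as np
import torch
import torch.nn as nn

kT = 0.2

def U(xy):                                    # double well along x
    x, y = xy[..., 0], xy[..., 1]
    return (x**2 - 1.0)**2 + 0.5 * y**2

bias = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))

def V(xy_np):                                 # evaluate the neural bias
    with torch.no_grad():
        return bias(torch.as_tensor(xy_np, dtype=torch.float32)).item()

rng = np.random.default_rng(0)
xy = np.array([-1.0, 0.0])                    # start in the left well
samples, weights = [], []
for step in range(20_000):                    # Metropolis under U + V
    prop = xy + 0.15 * rng.standard_normal(2)
    dE = (U(prop) + V(prop)) - (U(xy) + V(xy))
    if dE <= 0 or rng.random() < np.exp(-dE / kT):
        xy = prop
    samples.append(xy.copy())
    weights.append(np.exp(V(xy) / kT))        # importance weight vs. e^{-U/kT}

samples, weights = np.array(samples), np.array(weights)
in_right_well = samples[:, 0] > 0.8
p_rare = np.sum(weights * in_right_well) / np.sum(weights)  # unbiased estimate
print(f"estimated P(right well) = {p_rare:.4f}")
```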
Accelerated Sampling of Rare Events using a Neural Network Bias Potential
[ "Xinru Hua", "Rasool Ahmad", "Jose Blanchet", "Wei Cai" ]
Workshop/AI4Mat
2401.06936
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=NIBiIAOvvr
@inproceedings{ bran2023exploring, title={Exploring Organic Syntheses through Natural Language}, author={Andres M Bran and CHENG-HUA HUANG and Philippe Schwaller}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=NIBiIAOvvr} }
Chemists employ a number of levels of abstraction for describing objects and communicating ideas. Most of this knowledge is in the form of natural language, through books, articles and oral explanations, due to its flexibility and capacity to connect the different levels of abstraction. Despite of this, machine-learning chemical models are typically limited to low-level abstractions like graph representations or dynamic point clouds that, although powerful, ignore important aspects like procedural details. In this work, we propose methods for exploring the chemical space at the rich level of natural language. In this setting, synthetic procedure paragraphs are split into segments in four possible classes, and are subsequently mapped into a latent space where they can be conveniently studied. We explore the structure of this space, and find interesting connections with experimental realisation that are beyond the scope of commonly used reaction SMILES. This work aims to draw a path towards LLM-based data processing and chemical space exploration, by analyzing chemical data in previously inaccessible ways that will ultimately allow for better understanding of materials design.
Exploring Organic Syntheses through Natural Language
[ "Andres M Bran", "CHENG-HUA HUANG", "Philippe Schwaller" ]
Workshop/AI4Mat
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=MNfVMjsL7S
@inproceedings{ lacombe2023adsorbrl, title={Adsorb{RL}: Deep Multi-Objective Reinforcement Learning for Inverse Catalysts Design}, author={Romain Lacombe and Lucas Hendren and Khalid El-Awady}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=MNfVMjsL7S} }
A central challenge of the clean energy transition is the development of catalysts for low-emissions technologies. Recent advances in Machine Learning for quantum chemistry drastically accelerate the computation of catalytic activity descriptors such as adsorption energies. Here we introduce AdsorbRL, a Deep Reinforcement Learning agent aiming to identify potential catalysts given a multi-objective binding energy target, trained using offline learning on the Open Catalyst 2020 and Materials Project data sets. We experiment with Deep Q-Network agents to traverse the space of all ~160,000 possible unary, binary and ternary compounds of 55 chemical elements, with very sparse rewards based on adsorption energy known for only between 2,000 and 3,000 catalysts per adsorbate. To constrain the actions space, we introduce Random Edge Traversal and train a single-objective DQN agent on the known states subgraph, which we find strengthens target binding energy by an average of 4.1 eV. We extend this approach to multi-objective, goal-conditioned learning, and train a DQN agent to identify materials with the highest (respectively lowest) adsorption energies for multiple simultaneous target adsorbates. We experiment with Objective Sub-Sampling, a novel training scheme aimed at encouraging exploration in the multi-objective setup, and demonstrate simultaneous adsorption energy improvement across all target adsorbates, by an average of 0.8 eV. Overall, our results suggest strong potential for Deep Reinforcement Learning applied to the inverse catalysts design problem.
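A deliberately tiny sketch, in the spirit of the setup described above rather than the paper's code: Q-learning over a hypothetical composition graph with sparse adsorption-energy rewards, where each move is restricted to a randomly drawn subset of edges (our reading of Random Edge Traversal). Compositions, energies, and the reward sign convention are illustrative.

```python
# Sparse-reward Q-learning over a toy known-states composition subgraph.
import random
from collections import defaultdict

graph = {"Pt": ["PtNi", "PtCu"], "PtNi": ["Pt", "PtNiCu"],
         "PtCu": ["Pt", "PtNiCu"], "PtNiCu": ["PtNi", "PtCu"]}
adsorption_energy = {"PtNiCu": -1.2, "PtNi": -0.4}   # eV, known for few nodes

Q = defaultdict(float)
alpha, gamma, eps = 0.5, 0.9, 0.2
for episode in range(500):
    state = random.choice(list(graph))
    for t in range(5):
        # Random Edge Traversal: only a random subset of edges is offered.
        candidates = random.sample(graph[state], k=min(2, len(graph[state])))
        if random.random() < eps:
            action = random.choice(candidates)
        else:
            action = max(candidates, key=lambda s: Q[(state, s)])
        reward = -adsorption_energy.get(action, 0.0)  # stronger binding, higher reward
        best_next = max(Q[(action, s)] for s in graph[action])
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = action
```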
AdsorbRL: Deep Multi-Objective Reinforcement Learning for Inverse Catalysts Design
[ "Romain Lacombe", "Lucas Hendren", "Khalid El-Awady" ]
Workshop/AI4Mat
2312.02308
[ "https://github.com/rlacombe/adsorbrl" ]
https://huggingface.co/papers/2312.02308
1
1
0
3
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=LkRfodp4tt
@inproceedings{ hu2023anisognn, title={Aniso{GNN}: physics-informed graph neural networks that generalize to anisotropic properties of polycrystals}, author={Guangyu Hu and Marat Latypov}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=LkRfodp4tt} }
We present AnisoGNNs -- graph neural networks (GNNs) that generalize predictions of anisotropic properties of polycrystals in arbitrary testing directions without the need for excessive training data. To this end, we develop GNNs with a physics-inspired combination of node attributes and aggregation function. We demonstrate the excellent generalization capabilities of AnisoGNNs in predicting anisotropic elastic and inelastic properties of two alloys.
AnisoGNN: physics-informed graph neural networks that generalize to anisotropic properties of polycrystals
[ "Guangyu Hu", "Marat Latypov" ]
Workshop/AI4Mat
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=L6AJmCkfNe
@inproceedings{ chen2023automated, title={Automated Diffraction Pattern Analysis for Identifying Crystal Systems Using Multiview Opinion Fusion Machine Learning}, author={Jie Chen and Hengrui Zhang and Carolin B Wahl and Wei Liu and Chad Mirkin and Vinayak Dravid and Daniel W Apley and Wei Chen}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=L6AJmCkfNe} }
A bottleneck in high-throughput nanomaterials discovery is the pace at which new materials can be structurally characterized. Although current machine learning (ML) methods show promise for the automated processing of electron diffraction patterns (DPs), they fail in high-throughput experiments where DPs are collected from crystals with random orientations. Inspired by the human decision-making process, a framework for automated crystal system classification from DPs with arbitrary orientations was developed. A convolutional neural network was trained using evidential deep learning, and the predictive uncertainties were quantified and leveraged to fuse multiview predictions. Using vector map representations of DPs, the framework achieves an unprecedented testing accuracy of 0.94 in the examples considered, is robust to noise, and retains remarkable accuracy using experimental data. This work highlights the ability of ML to accelerate experimental high-throughput materials data analytics.
Automated Diffraction Pattern Analysis for Identifying Crystal Systems Using Multiview Opinion Fusion Machine Learning
[ "Jie Chen", "Hengrui Zhang", "Carolin B Wahl", "Wei Liu", "Chad Mirkin", "Vinayak Dravid", "Daniel W Apley", "Wei Chen" ]
Workshop/AI4Mat
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=JwjkukC5Xl
@inproceedings{ tipton2023haldane, title={Haldane Bundles: A Dataset for Learning to Predict the Chern Number of Line Bundles on the Torus}, author={Cody Tipton and Elizabeth Coda and Davis Brown and Alyson Bittner and Jung H. Lee and Grayson Jorgenson and Tegan Emerson and Henry Kvinge}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=JwjkukC5Xl} }
Characteristic classes, which are abstract topological invariants associated with vector bundles, have become an important notion in modern physics with surprising real-world consequences. As a representative example, the incredible properties of topological insulators, which are insulators in their bulk but conductors on their surface, can be completely characterized by a specific characteristic class associated with their electronic band structure, the first Chern class. Given their importance to next generation computing and the computational challenge of calculating them using first-principles approaches, there is a need to develop machine learning approaches to predict the characteristic classes associated with a material system. To aid in this program we introduce the {\emph{Haldane bundle dataset}}, which consists of synthetically generated complex line bundles on the $2$-torus. We envision this dataset, which is not as challenging as noisy and sparsely measured real-world datasets but (as we show) still difficult for off-the-shelf architectures, to be a testing ground for architectures that incorporate the rich topological and geometric priors underlying characteristic classes.
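As a numerical companion to the dataset's task, the snippet below computes the first Chern number of a line bundle over the 2-torus with the standard Fukui-Hatsugai-Suzuki lattice method. The bundle is the lower band of the Qi-Wu-Zhang two-band model, our choice of a simple example rather than the paper's Haldane-bundle generator; for -2 < m < 0 the result is plus or minus 1 depending on sign conventions.

```python
# Fukui-Hatsugai-Suzuki Chern number of a line bundle on the torus.
import numpy as np

def lower_band_state(kx, ky, m=-1.0):
    d = np.array([np.sin(kx), np.sin(ky), m + np.cos(kx) + np.cos(ky)])
    H = (d[0] * np.array([[0, 1], [1, 0]])
         + d[1] * np.array([[0, -1j], [1j, 0]])
         + d[2] * np.array([[1, 0], [0, -1]]))
    vals, vecs = np.linalg.eigh(H)
    return vecs[:, 0]                        # lower-band eigenvector

N = 24
ks = 2 * np.pi * np.arange(N) / N
psi = np.array([[lower_band_state(kx, ky) for ky in ks] for kx in ks])

def link(a, b):                              # U(1) link variable (gauge invariant loop)
    z = np.vdot(a, b)
    return z / abs(z)

C = 0.0
for i in range(N):
    for j in range(N):
        u1 = link(psi[i, j], psi[(i + 1) % N, j])
        u2 = link(psi[(i + 1) % N, j], psi[(i + 1) % N, (j + 1) % N])
        u3 = link(psi[(i + 1) % N, (j + 1) % N], psi[i, (j + 1) % N])
        u4 = link(psi[i, (j + 1) % N], psi[i, j])
        C += np.angle(u1 * u2 * u3 * u4)     # Berry flux through the plaquette
print("Chern number:", round(C / (2 * np.pi)))
```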
Haldane Bundles: A Dataset for Learning to Predict the Chern Number of Line Bundles on the Torus
[ "Cody Tipton", "Elizabeth Coda", "Davis Brown", "Alyson Bittner", "Jung H. Lee", "Grayson Jorgenson", "Tegan Emerson", "Henry Kvinge" ]
Workshop/AI4Mat
2312.04600
[ "https://github.com/shadtome/haldane-bundles" ]
https://huggingface.co/papers/2312.04600
0
0
0
8
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=ITda7kqxSn
@inproceedings{ kaliyev2023rapid, title={Rapid Fitting of Band-Excitation Piezoresponse Force Microscopy Using Physics Constrained Unsupervised Neural Networks}, author={Alibek T Kaliyev and Ryan F Forelli and Shuyu Qin and Yichen Guo and Seda Memik and Michael W. Mahoney and Amir Gholami and Nhan Tran and Philip Harris and Martin Tak{\'a}{\v{c}} and Joshua Agar}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=ITda7kqxSn} }
Scanning probe spectroscopy generates high-dimensional data that is difficult to analyze in real time, hindering researcher creativity. Machine learning techniques like PCA accelerate analysis but are inefficient, sensitive to noise, and lack interpretability. We developed an unsupervised deep neural network constrained by a known empirical equation to enable real-time, robust fitting. Demonstrated on band-excitation piezoresponse force microscopy, our model fits the cantilever response to a simple harmonic oscillator more than 4 orders of magnitude faster than least squares while enhancing robustness. It performs well on noisy data where conventional methods fail. Quantization-aware training enables sub-millisecond streaming inference on an FPGA, orders of magnitude faster than data acquisition. This methodology applies broadly to spectroscopic fitting and provides a pathway for real-time control and interpretation.
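A schematic sketch of the physics-constrained idea, under our own assumptions (we take the standard simple-harmonic-oscillator amplitude response as the empirical equation; the authors' exact architecture and the FPGA deployment are not reproduced): an MLP encoder predicts four physical parameters, and the fixed "decoder" is the SHO response itself, so training is unsupervised reconstruction.

```python
# Unsupervised SHO fitting: encoder -> (A, w0, Q, phi) -> analytic decoder.
import torch
import torch.nn as nn

omega = torch.linspace(0.9, 1.1, 128)        # normalised drive frequencies

def sho_response(params):
    """params: (N, 4) -> complex response (N, 128); assumed SHO form."""
    A = nn.functional.softplus(params[:, 0:1])
    w0 = 0.9 + 0.2 * torch.sigmoid(params[:, 1:2])
    Q = 1.0 + nn.functional.softplus(params[:, 2:3])
    phi = params[:, 3:4]
    denom = (w0**2 - omega**2) + 1j * omega * w0 / Q
    return A * w0**2 * torch.exp(1j * phi) / denom

encoder = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 4))
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

def step(measured):                          # measured: (N, 128) complex response
    x = torch.cat([measured.real, measured.imag], dim=-1)   # (N, 256)
    recon = sho_response(encoder(x))
    loss = torch.mean(torch.abs(recon - measured) ** 2)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss

synthetic = sho_response(torch.randn(16, 4)).detach()       # toy "measured" data
for _ in range(100):
    step(synthetic)
```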
Rapid Fitting of Band-Excitation Piezoresponse Force Microscopy Using Physics Constrained Unsupervised Neural Networks
[ "Alibek T Kaliyev", "Ryan F Forelli", "Shuyu Qin", "Yichen Guo", "Seda Memik", "Michael W. Mahoney", "Amir Gholami", "Nhan Tran", "Philip Harris", "Martin Takáč", "Joshua Agar" ]
Workshop/AI4Mat
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=IC7b3EQ7wB
@inproceedings{ kishimoto2023mhggnn, title={{MHG}-{GNN}: Combination of Molecular Hypergraph Grammar with Graph Neural Network}, author={Akihiro Kishimoto and Hiroshi Kajino and Hirose Masataka and Junta Fuchiwaki and Indra Priyadarsini and Lisa Hamada and Hajime Shinohara and Daiju Nakano and Seiji Takeda}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=IC7b3EQ7wB} }
Property prediction plays an important role in material discovery. As an initial step to eventually develop a foundation model for material science, we introduce a new autoencoder called the MHG-GNN, which combines graph neural network (GNN) with Molecular Hypergraph Grammar (MHG). Results on a variety of property prediction tasks with diverse materials show that MHG-GNN is promising.
MHG-GNN: Combination of Molecular Hypergraph Grammar with Graph Neural Network
[ "Akihiro Kishimoto", "Hiroshi Kajino", "Hirose Masataka", "Junta Fuchiwaki", "Indra Priyadarsini", "Lisa Hamada", "Hajime Shinohara", "Daiju Nakano", "Seiji Takeda" ]
Workshop/AI4Mat
2309.16374
[ "" ]
https://huggingface.co/papers/2309.16374
0
0
0
9
[ "ibm/materials.mhg-ged" ]
[]
[ "ibm/FM4M-demo1", "ibm/FM4M-demo2", "itohtaka/my1stspace" ]
[ "ibm/materials.mhg-ged" ]
[]
[ "ibm/FM4M-demo1", "ibm/FM4M-demo2", "itohtaka/my1stspace" ]
1
poster
null
https://openreview.net/forum?id=HXGjCkp47o
@inproceedings{ jose2023treebased, title={Tree-based Quantile Active Learning for automated discovery of {MOF}s}, author={Ashna Jose and Emilie Devijver and Noel JAKSE and Val{\'e}rie Monbet and Roberta Poloni}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=HXGjCkp47o} }
Metal-organic frameworks (MOFs), formed through coordination bonds between metal ions and organic ligands, are promising materials for efficient gas adsorption, due to their ultrahigh porosity, chemical tunability and large surface area. Because over a hundred thousand hypothetical MOFs have been reported to date, brute-force discovery of the best-performing MOF for a specific application is not feasible. Recently, predicting material properties using machine learning algorithms has played a crucial role in scanning large databases, but this often requires large labeled training sets, which are not always available. To address this, active learning, where the training set is constructed iteratively by querying only informative labels, is necessary. Moreover, in most cases, a very specific range of the property of interest is desirable. We employ a novel regression-tree-based quantile active learning algorithm that uses partitions of a regression tree to select new samples to be added to the training set. It thereby limits the sample size while maximizing the prediction quality over a quantile of interest. Tests on benchmark MOF data sets demonstrate that focusing on a specific quantile is effective in learning regression models to predict electronic band gaps and CO$_2$ adsorption in the regions of interest, from a very limited labeled data set.
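A schematic sketch of tree-partition-guided querying focused on a quantile of interest, under our own simplifying assumptions (not the authors' exact algorithm): fit a regression tree on the labeled set, then query unlabeled candidates whose tree predictions (i.e., partition means) fall in the top quantile.

```python
# Quantile-focused active learning with regression-tree partitions.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X_pool = rng.uniform(-2, 2, size=(2000, 5))        # unlabeled candidate descriptors

def oracle(X):                                     # stand-in for an expensive label
    return np.sin(X[:, 0]) + X[:, 1] ** 2 + 0.05 * rng.standard_normal(len(X))

labeled = rng.choice(len(X_pool), size=20, replace=False).tolist()
y = {int(i): float(v) for i, v in zip(labeled, oracle(X_pool[labeled]))}

for it in range(10):
    Xl = X_pool[list(y)]
    yl = np.array(list(y.values()))
    tree = DecisionTreeRegressor(min_samples_leaf=5).fit(Xl, yl)
    # Each leaf of the tree is a partition of descriptor space; its
    # prediction is the partition mean. Target the top 20% quantile.
    threshold = np.quantile(tree.predict(Xl), 0.8)
    pool_idx = np.array([i for i in range(len(X_pool)) if i not in y])
    preds = tree.predict(X_pool[pool_idx])
    good = pool_idx[preds >= threshold]
    pick = int(rng.choice(good if len(good) else pool_idx))  # query one new label
    y[pick] = float(oracle(X_pool[[pick]])[0])
```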
Tree-based Quantile Active Learning for automated discovery of MOFs
[ "Ashna Jose", "Emilie Devijver", "Noel JAKSE", "Valérie Monbet", "Roberta Poloni" ]
Workshop/AI4Mat
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=GVIHjopgnR
@inproceedings{ fu2023learning, title={Learning Interatomic Potentials at Multiple Scales}, author={Xiang Fu and Albert Musaelian and Anders Johansson and Tommi Jaakkola and Boris Kozinsky}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=GVIHjopgnR} }
The need to use a short time step is a key limit on the speed of molecular dynamics (MD) simulations. Simulations governed by classical potentials are often accelerated by using a multiple-time-step (MTS) integrator that evaluates certain potential energy terms that vary more slowly than others less frequently. This approach is enabled by the simple but limiting analytic forms of classical potentials. Machine learning interatomic potentials (MLIPs), in particular recent equivariant neural networks, are much more broadly applicable than classical potentials and can faithfully reproduce the expensive but accurate reference electronic structure calculations used to train them. They still, however, require the use of a single short time step, as they lack the inherent term-by-term scale separation of classical potentials. This work introduces a method to learn a scale separation in complex interatomic interactions by co-training two MLIPs. Initially, a small and efficient model is trained to reproduce short-time-scale interactions. Subsequently, a large and expressive model is trained jointly to capture the remaining interactions not captured by the small model. When running MD, the MTS integrator then evaluates the smaller model for every time step and the larger model less frequently, accelerating simulation. Compared to a conventionally trained MLIP, our approach can achieve a significant speedup (~3x in our experiments) without a loss of accuracy on the potential energy or simulation-derived quantities.
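The scheme maps naturally onto a reversible-RESPA integrator: a cheap "fast" force is evaluated at every inner step, while an expensive "slow" residual force is applied as a half-kick once per outer step. Below is a generic 1-D sketch with placeholder force functions standing in for the co-trained small and large MLIPs; it is not the paper's code.

```python
# Reversible-RESPA multiple-time-step integration with a fast/slow force split.
import numpy as np

def mts_step(x, v, mass, f_fast, f_slow, dt_outer, n_inner):
    dt_inner = dt_outer / n_inner
    v = v + 0.5 * dt_outer * f_slow(x) / mass      # outer half-kick (slow force)
    for _ in range(n_inner):                       # inner velocity Verlet (fast force)
        v = v + 0.5 * dt_inner * f_fast(x) / mass
        x = x + dt_inner * v
        v = v + 0.5 * dt_inner * f_fast(x) / mass
    v = v + 0.5 * dt_outer * f_slow(x) / mass      # outer half-kick
    return x, v

# Toy demonstration with a stiff + soft 1-D potential split.
f_fast = lambda x: -100.0 * x                      # stiff harmonic part ("small model")
f_slow = lambda x: -0.1 * x**3                     # soft remainder ("large model")
x, v = np.array([1.0]), np.array([0.0])
for step in range(1000):
    x, v = mts_step(x, v, mass=1.0, f_fast=f_fast, f_slow=f_slow,
                    dt_outer=0.05, n_inner=5)
```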
Learning Interatomic Potentials at Multiple Scales
[ "Xiang Fu", "Albert Musaelian", "Anders Johansson", "Tommi Jaakkola", "Boris Kozinsky" ]
Workshop/AI4Mat
2310.13756
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=FfvByyoVAO
@inproceedings{ lee2023clcs, title={{CLCS} : Contrastive Learning between Compositions and Structures for practical Li-ion battery electrodes design}, author={Jaewan Lee and Changyoung Park and Hongjun Yang and Sehui Han and Woohyung Lim}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=FfvByyoVAO} }
Prediction of the average voltage of a cathode material, which is related to energy density, is an important task for batteries. However, it is difficult to develop a practical prediction model because the relevant data is scarce, and important information beyond composition, such as structure, regarded as a good modality for predicting properties of materials, is rarely available. Inspired by these points, we propose a pretraining method utilizing contrastive learning between compositions and structures (CLCS), which can improve the performance of the voltage prediction task using only the compositions of materials. First, we pretrained a composition encoder through contrastive learning between composition and structure representations, extracted by a transformer encoder and a graph neural network respectively, enabling the composition encoder to learn information associated with structures. Then, we transferred the composition encoder to a downstream task of predicting the average voltage from compositions. The performance of the transferred model exceeds that of a model without pretraining by 9.7%. Also, with attention score analysis, we discovered that the transferred composition encoder focuses on lithium more than other elements in lithium-transition metal-oxygen systems compared to the composition encoder without pretraining.
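A minimal sketch of the pretraining objective described above, assuming a standard symmetric InfoNCE formulation (the paper's exact loss may differ): matched (composition, structure) pairs are pulled together and mismatched pairs pushed apart. The encoder outputs are stand-ins.

```python
# Symmetric InfoNCE loss between composition and structure embeddings.
import torch
import torch.nn.functional as F

def clcs_loss(comp_emb, struct_emb, temperature=0.07):
    """comp_emb, struct_emb: (batch, dim) embeddings of paired materials."""
    c = F.normalize(comp_emb, dim=-1)
    s = F.normalize(struct_emb, dim=-1)
    logits = c @ s.t() / temperature            # pairwise similarities
    targets = torch.arange(len(c), device=c.device)
    # Matched (composition, structure) pairs sit on the diagonal.
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))

comp = torch.randn(8, 128, requires_grad=True)   # stand-ins for encoder outputs
struct = torch.randn(8, 128)
loss = clcs_loss(comp, struct)
loss.backward()
```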
CLCS : Contrastive Learning between Compositions and Structures for practical Li-ion battery electrodes design
[ "Jaewan Lee", "Changyoung Park", "Hongjun Yang", "Sehui Han", "Woohyung Lim" ]
Workshop/AI4Mat
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=Eu2k9La3RB
@inproceedings{ becker2023combinatorial, title={Combinatorial Optimization via Memory Metropolis: Template Networks for Proposal Distributions in Simulated Annealing applied to Nanophotonic Inverse Design}, author={Marlon Becker and Marco Butz and David Lemli and Carsten Schuck and Benjamin Risse}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=Eu2k9La3RB} }
We propose to utilize a neural network to build transition proposal distributions in simulated annealing (SA), which we use for combinatorial optimization on 2D binary grids and thereby direct convergence towards states of structurally clustered patterns. To accomplish this we introduce a novel class of network architectures called template networks. A template network learns a template to construct a proposal distribution for state transitions of the stochastic process of the Metropolis algorithm, which forms the basis of SA. Each network represents a single constant pattern and is trained on the evaluation results of intermediate states of a single optimization run, resulting in an architecture not requiring an input layer. Using this learning scheme we equip the Metropolis algorithm with the ability to utilize information about past states, intentionally violating the Markov property of memorylessness, and therefore call our method Memory Metropolis (MeMe). Moreover, the emergence of structural clusters is encouraged by incorporating layers with limited local connectivity in the template network, while the network depth controls the learnable cluster sizes. Viewing the optimization objective of the Metropolis algorithm as a reward maximization allows us to train the template network to find high-reward template patterns. We apply our algorithm to combinatorial optimization in nanophotonic inverse design and demonstrate that MeMe results in clustered design patterns suitable for direct optical chip fabrication which cannot be found by plain SA or regularized SA. Code is available at https://github.com/MarlonBecker/MeMe.
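A heavily simplified sketch of the proposal mechanism, under our own assumptions (the released code at the link above is authoritative): simulated annealing on a binary grid where the flip-site proposal comes from a grid of template logits instead of being uniform. Training the template from past evaluations (the memory component) and the asymmetric-proposal acceptance correction are omitted here.

```python
# Simulated annealing on a binary grid with a template-biased flip proposal.
import numpy as np

rng = np.random.default_rng(0)
H = W = 16
state = rng.integers(0, 2, size=(H, W))
template_logits = np.zeros((H, W))          # would come from the template network

def objective(s):                           # placeholder figure of merit
    return -np.abs(s.mean() - 0.5)

T = 1.0
for step in range(5000):
    p = np.exp(template_logits).ravel()
    p /= p.sum()
    idx = rng.choice(H * W, p=p)            # template-biased choice of flip site
    i, j = divmod(idx, W)
    proposal = state.copy()
    proposal[i, j] ^= 1
    dE = objective(state) - objective(proposal)   # we maximise the objective
    if dE <= 0 or rng.random() < np.exp(-dE / T):
        state = proposal
    T *= 0.999                              # annealing schedule
```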
Combinatorial Optimization via Memory Metropolis: Template Networks for Proposal Distributions in Simulated Annealing applied to Nanophotonic Inverse Design
[ "Marlon Becker", "Marco Butz", "David Lemli", "Carsten Schuck", "Benjamin Risse" ]
Workshop/AI4Mat
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=EiT2bLsfM9
@inproceedings{ takeda2023multimodal, title={Multi-modal Foundation Model for Material Design}, author={Seiji Takeda and Indra Priyadarsini and Akihiro Kishimoto and Hajime Shinohara and Lisa Hamada and Hirose Masataka and Junta Fuchiwaki and Daiju Nakano}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=EiT2bLsfM9} }
We propose a multi-modal foundation model for small molecules, a shift from traditional AI models that are tailored for individual tasks and modalities. This model uses a late fusion strategy to align and fuse three distinct modalities: SELFIES, DFT properties, and optical spectrum. The model is pre-trained with over 6 billion samples to provide two primary functions: generating fused feature representations across the three modalities, and performing cross-modal predictions and generations. As preliminary experiments, we demonstrate that the fused representation successfully improves the performance of property predictions for chromophore molecules, and showcase 6 distinct cross-modal inferences.
Multi-modal Foundation Model for Material Design
[ "Seiji Takeda", "Indra Priyadarsini", "Akihiro Kishimoto", "Hajime Shinohara", "Lisa Hamada", "Hirose Masataka", "Junta Fuchiwaki", "Daiju Nakano" ]
Workshop/AI4Mat
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=CjPCM6wXWb
@inproceedings{ cornet2023inversedesign, title={Inverse-design of organometallic catalysts with guided equivariant diffusion}, author={Fran{\c{c}}ois R J Cornet and Bardi Benediktsson and Bjarke Hastrup and Arghya Bhowmik and Mikkel N. Schmidt}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=CjPCM6wXWb} }
Organometallic complexes are ubiquitous in homogeneous catalysis, and their optimisation is of particular interest for many technologically relevant reactions. However, due to the large variety of possible metal-ligand and ligand-ligand interactions, finding the best combination of metal and ligands is an immensely challenging task. Here we present an inverse design framework based on a diffusion generative model for \textit{in-silico} design of such complexes. Given the importance of the spatial structure of a catalyst, the model directly operates on all-atom (including explicit \ch{H}) representations in $3$D space. To handle the symmetries inherent to that data representation, it combines an equivariant diffusion model and an equivariant property predictor to drive sampling at inference time. We illustrate the potential of the proposed framework by optimising catalysts for the Suzuki-Miyaura cross-coupling reaction, and validating a selection of novel proposed complexes with \textsc{DFT}.
Inverse-design of organometallic catalysts with guided equivariant diffusion
[ "François R J Cornet", "Bardi Benediktsson", "Bjarke Hastrup", "Arghya Bhowmik", "Mikkel N. Schmidt" ]
Workshop/AI4Mat
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=BTeWafMOyt
@inproceedings{ qi2023latent, title={Latent Conservative Objective Models for Data-Driven Crystal Structure Prediction}, author={Han Qi and Stefano Rando and Xinyang Geng and Iku Ohama and Aviral Kumar and Sergey Levine}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=BTeWafMOyt} }
In computational chemistry, crystal structure prediction (CSP) is an optimization problem that involves discovering the lowest energy stable crystal structure for a given chemical formula. This problem is challenging as it requires discovering globally optimal designs with the lowest energies on complex manifolds. One approach to tackle this problem involves building simulators based on density functional theory (DFT) followed by running search in simulation, but these simulators are painfully slow. In this paper, we present and study an alternative, data-driven approach to crystal structure prediction: instead of directly searching for the most stable structures in simulation, we train a surrogate model of the crystal formation energy from a database of existing crystal structures, and then optimize this model with respect to the parameters of the crystal structure. This surrogate model is trained to be conservative so as to prevent exploitation of its errors by the optimizer. To handle optimization in the non-Euclidean space of crystal structures, we first utilize a state-of-the-art graph diffusion auto-encoder (CD-VAE) to convert a crystal structure into a vector-based search space and then optimize a conservative surrogate model of the crystal energy, trained on top of this vector representation. We show that our approach, dubbed LCOMs (latent conservative objective models), performs comparably to the best current approaches in terms of success rate of structure prediction, while also drastically reducing computational cost.
Latent Conservative Objective Models for Data-Driven Crystal Structure Prediction
[ "Han Qi", "Stefano Rando", "Xinyang Geng", "Iku Ohama", "Aviral Kumar", "Sergey Levine" ]
Workshop/AI4Mat
2310.10056
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=BFIoxpFeZ2
@inproceedings{ volokhova2023towards, title={Towards equilibrium molecular conformation generation with {GF}lowNets}, author={Alexandra Volokhova and Micha{\l} Koziarski and Alex Hern{\'a}ndez-Garc{\'\i}a and Cheng-Hao Liu and Santiago Miret and Pablo Lemos and Luca Thiede and Zichao Yan and Alan Aspuru-Guzik and Yoshua Bengio}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=BFIoxpFeZ2} }
Sampling diverse, thermodynamically feasible molecular conformations plays a crucial role in predicting properties of a molecule. In this paper we propose to use GFlowNet for sampling conformations of small molecules from the Boltzmann distribution, as determined by the molecule's energy. The proposed approach can be used in combination with energy estimation methods of different fidelity and discovers a diverse set of low-energy conformations for highly flexible drug-like molecules. We demonstrate that GFlowNet can reproduce molecular potential energy surfaces by sampling proportionally to the Boltzmann distribution.
Towards equilibrium molecular conformation generation with GFlowNets
[ "Alexandra Volokhova", "Michał Koziarski", "Alex Hernández-García", "Cheng-Hao Liu", "Santiago Miret", "Pablo Lemos", "Luca Thiede", "Zichao Yan", "Alan Aspuru-Guzik", "Yoshua Bengio" ]
Workshop/AI4Mat
2310.14782
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=7yt3N0o0W9
@inproceedings{ qin2023extremely, title={Extremely Noisy 4D-{TEM} Strain Mapping Using Cycle Consistent Spatial Transforming Autoencoders}, author={Shuyu Qin and Joshua Agar and Nhan Tran}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=7yt3N0o0W9} }
Atomic-scale imaging of 2D and quantum materials benefits from precisely extracting crystallographic strain, shear, and rotation to understand their mechanical, optical and electronic properties. One powerful technique is 4D-TEM (4-dimensional transmission electron microscopy), where a convergent electron beam is scanned across a sample while measuring the resulting diffraction pattern with a direct electron detector. Extracting the crystallographic strain, shear, and rotation from this data relies either on cross-correlation of probe templates (e.g., implemented in py4DSTEM) or on determining the center of mass (CoM) of the diffraction peaks. These algorithms have limitations. They require manual preprocessing and hyperparameter tuning, are sensitive to the signal-to-noise ratio, and are generally difficult to automate. There is no one-size-fits-all algorithm. Recently, machine learning techniques have been used to assist in analyzing 4D-TEM data; however, these models do not possess the capacity to learn the strain, rotation, or translation. Instead, they learn an approximation that tends to be correct only as long as the test examples are within the training dataset distribution. We developed a novel neural network structure – the Cycle Consistent Spatial Transforming Autoencoder (CC-ST-AE). This model takes a set of diffraction images and trains a sparse autoencoder to classify an observed diffraction pattern against a dictionary of learned “averaged” diffraction patterns. Secondly, it learns the affine transformation matrix parameters that minimize the reconstruction error between the dictionary and the input diffraction pattern. Since the affine transformation includes translation, strain, shear, and rotation, we can parsimoniously learn the strain tensor. To ensure the model is physics conforming, we train the model cycle-consistently, by ensuring the inverse affine transformation from the dictionary results in the original diffraction pattern. We validated this model on two benchmarks: first, simulated 4D-TEM data of $WS_2$ and $WSe_2$ lateral heterostructures (noise free) with ground truth for the strain, rotation and shear parameters; second, experimental 4D-TEM data of 2D heterostructures of tungsten disulfide ($WS_2$) and tungsten diselenide ($WSe_2$). This model shows several significant improvements: 1. When tested on simulated data, the model can recover the ground truth with minimal error. 2. The model can learn the rotation and strain on noisy diffraction patterns where CoM fails, and significantly outperforms template matching (py4DSTEM). 3. Our model can accommodate large and continuous rotations that are difficult to obtain with other methods. 4. Our model is more robust to noisy data. 5. Our model can map the strain, shear and rotation; identify dislocations and ripples; and distinguish background and sample areas automatically. Ultimately, this work demonstrates how embedding physical concepts into unsupervised neural networks can simplify, automate, and accelerate analysis pipelines while simultaneously leveraging stochastic averaging that improves robustness on noisy data. This algorithmic concept can be extended to include other physical phenomena (e.g., polarization, sample tilt), can be used in automated experiments, and can be applied to other applications in materials characterization. Detailed information is attached in the PDF.
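A minimal sketch of the differentiable warp at the heart of such a model: apply a predicted 2x3 affine matrix (rotation, shear, scale, translation) to a dictionary pattern so it can be compared with an observed pattern. Only the warp is shown; the encoder, sparsity, and cycle-consistency training are omitted, and details may differ from the authors' implementation.

```python
# Differentiable affine warp of dictionary diffraction patterns in PyTorch.
import torch
import torch.nn.functional as F

def affine_warp(pattern, theta):
    """pattern: (N, 1, H, W) images; theta: (N, 2, 3) affine matrices."""
    grid = F.affine_grid(theta, pattern.shape, align_corners=False)
    return F.grid_sample(pattern, grid, align_corners=False)

dictionary = torch.randn(4, 1, 64, 64)               # learned "averaged" patterns
angle = torch.tensor(0.1)                            # would come from the encoder
row0 = torch.stack([torch.cos(angle), -torch.sin(angle), torch.tensor(0.0)])
row1 = torch.stack([torch.sin(angle),  torch.cos(angle), torch.tensor(0.0)])
theta = torch.stack([row0, row1]).unsqueeze(0).expand(4, -1, -1)
warped = affine_warp(dictionary, theta)
# Cycle consistency would then require that warping back with the inverse
# transform reconstructs the dictionary pattern (up to interpolation error).
```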
Extremely Noisy 4D-TEM Strain Mapping Using Cycle Consistent Spatial Transforming Autoencoders
[ "Shuyu Qin", "Joshua Agar", "Nhan Tran" ]
Workshop/AI4Mat
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
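As referenced in the abstract above, here is a minimal sketch of the cycle-consistent spatial-transforming autoencoder idea: an encoder selects a learned dictionary pattern and predicts a 2x3 affine matrix, the dictionary pattern is warped to reconstruct the input, and a cycle term checks that the inverse warp recovers the dictionary pattern. All module names, sizes, and the loss weighting are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CCSTAE(nn.Module):
    def __init__(self, n_dict=8, img_size=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * (img_size // 4) ** 2, 128), nn.ReLU(),
        )
        self.cls_head = nn.Linear(128, n_dict)   # selects a dictionary pattern
        self.affine_head = nn.Linear(128, 6)     # 2x3 affine parameters
        # learned "averaged" diffraction patterns (the dictionary)
        self.dictionary = nn.Parameter(torch.randn(n_dict, 1, img_size, img_size))
        # start the affine head at the identity transform
        nn.init.zeros_(self.affine_head.weight)
        self.affine_head.bias.data = torch.tensor([1., 0., 0., 0., 1., 0.])

    def forward(self, x):
        h = self.encoder(x)
        w = F.softmax(self.cls_head(h), dim=-1)                # (B, n_dict)
        theta = self.affine_head(h).view(-1, 2, 3)             # (B, 2, 3)
        base = torch.einsum("bk,kchw->bchw", w, self.dictionary)
        grid = F.affine_grid(theta, x.shape, align_corners=False)
        recon = F.grid_sample(base, grid, align_corners=False)
        return recon, base, theta, w

def inverse_affine(theta):
    # invert [A | t] -> [A^-1 | -A^-1 t]
    A, t = theta[:, :, :2], theta[:, :, 2:]
    A_inv = torch.inverse(A)
    return torch.cat([A_inv, -A_inv @ t], dim=-1)

def loss_fn(model, x, beta=1e-3):
    recon, base, theta, w = model(x)
    # cycle consistency: un-warping the input should recover the dictionary pattern
    grid_inv = F.affine_grid(inverse_affine(theta), x.shape, align_corners=False)
    cycle = F.grid_sample(x, grid_inv, align_corners=False)
    # low entropy pushes the dictionary selection toward one-hot (sparsity)
    entropy = -(w * (w + 1e-9).log()).sum(-1).mean()
    return F.mse_loss(recon, x) + F.mse_loss(cycle, base) + beta * entropy
```

Because the strain tensor is read directly off the learned affine parameters, no separate regression head is needed; the transform itself is the physical quantity of interest.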
null
https://openreview.net/forum?id=6E2qjEf7Rs
@inproceedings{ vogel2023graphtostring, title={Graph-to-String Variational Autoencoder for Synthetic Polymer Design}, author={Gabriel Vogel and Paolo Sortino and Jana Marie Weber}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=6E2qjEf7Rs} }
Generative molecular design is becoming an increasingly valuable approach to accelerate materials discovery. Besides the comparatively small amount of available polymer data, the complex higher-order structure of synthetic polymers also makes generative polymer design highly challenging. We build upon a recent polymer representation that includes stoichiometries and chain architectures of monomer ensembles and develop a novel variational autoencoder (VAE) architecture that encodes a graph and decodes a string (a minimal sketch of this idea follows this record). Most notably, our model learns a latent space (LS) that enables de-novo generation of copolymer structures including different monomer stoichiometries and chain architectures.
Graph-to-String Variational Autoencoder for Synthetic Polymer Design
[ "Gabriel Vogel", "Paolo Sortino", "Jana Marie Weber" ]
Workshop/AI4Mat
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
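A minimal sketch of the graph-in, string-out VAE idea referenced in the abstract above: a simple message-passing encoder pools node features into a latent code, and a GRU decodes the polymer string conditioned on that code. The encoder, vocabulary, and dimensions are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class GraphToStringVAE(nn.Module):
    def __init__(self, node_dim=16, latent=32, vocab=40, hidden=64):
        super().__init__()
        self.msg = nn.Linear(node_dim, node_dim)       # one message-passing step
        self.to_mu = nn.Linear(node_dim, latent)
        self.to_logvar = nn.Linear(node_dim, latent)
        self.embed = nn.Embedding(vocab, hidden)
        self.init_h = nn.Linear(latent, hidden)
        self.gru = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)

    def encode(self, node_feats, adj):
        # node_feats: (B, N, node_dim), adj: (B, N, N) adjacency matrices
        h = torch.relu(node_feats + adj @ self.msg(node_feats))
        g = h.mean(dim=1)                              # graph-level readout
        return self.to_mu(g), self.to_logvar(g)

    def decode(self, z, tokens):
        # teacher-forced decoding of the polymer string, conditioned on z
        h0 = torch.tanh(self.init_h(z)).unsqueeze(0)   # (1, B, hidden)
        out, _ = self.gru(self.embed(tokens), h0)
        return self.out(out)                           # (B, T, vocab) logits

    def forward(self, node_feats, adj, tokens):
        mu, logvar = self.encode(node_feats, adj)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        logits = self.decode(z, tokens)
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1).mean()
        return logits, kl
```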
null
https://openreview.net/forum?id=5ioOONby01
@inproceedings{ bompas2023a, title={A Generative Model for Accelerated Inverse Modelling Using a Novel Embedding for Continuous Variables}, author={Sebastien Bompas and Stefan Sandfeld}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=5ioOONby01} }
In materials science, the challenge of rapidly prototyping materials with desired properties often involves extensive experimentation to find suitable microstructures. Additionally, finding microstructures for given properties is typically an ill-posed problem where multiple solutions may exist. Generative machine learning models can be a viable solution that also reduces the computational cost. This comes with new challenges because, for example, a continuous property variable is required as conditioning input to the model. We investigate the shortcomings of an existing method and compare it to a novel embedding strategy for generative models that is based on the binary representation of floating-point numbers (a minimal sketch follows this record). This eliminates the need for normalization, preserves information, and creates a versatile embedding space for conditioning the generative model. This technique can be applied to condition a network on any number, providing fine control over generated microstructure images and thereby contributing to accelerated materials design.
A Generative Model for Accelerated Inverse Modelling Using a Novel Embedding for Continuous Variables
[ "Sebastien Bompas", "Stefan Sandfeld" ]
Workshop/AI4Mat
2311.11343
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
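A minimal sketch of the binary-representation embedding referenced in the abstract above: the IEEE-754 bit pattern of a float32 is unpacked into a fixed-length 0/1 vector, which preserves information and needs no normalization. The function name is hypothetical, not the authors' code.

```python
import numpy as np

def float32_to_bits(x: float) -> np.ndarray:
    """Return the 32-dim {0,1} embedding of a float32 value (MSB first)."""
    raw = np.frombuffer(np.float32(x).tobytes(), dtype=np.uint32)[0]  # reinterpret bits
    bits = (raw >> np.arange(31, -1, -1, dtype=np.uint32)) & 1
    return bits.astype(np.float32)

# e.g. condition a generator on a target property value:
emb = float32_to_bits(0.731)   # shape (32,), information-preserving, no normalization
```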
null
https://openreview.net/forum?id=57EslEJNOj
@inproceedings{ fox2023active, title={Active Causal Machine Learning for Molecular Property Prediction}, author={Zachary R Fox and Ayana Ghosh}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=57EslEJNOj} }
Predicting properties from molecular structures is paramount to design tasks in medicine, materials science, and environmental management. However, design rules derived from structure-property relationships using correlative data-driven methods fail to elucidate the underlying causal mechanisms controlling chemical phenomena. This preliminary work proposes a workflow to actively learn robust cause-effect relations between structural features and molecular properties across a broad chemical space, using smaller subsets that entail only partial information.
Active Causal Machine Learning for Molecular Property Prediction
[ "Zachary R Fox", "Ayana Ghosh" ]
Workshop/AI4Mat
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=511z1DGjPi
@inproceedings{ yi2023rvesimulator, title={rvesimulator: An automated representative volume element simulator for data-driven material discovery}, author={Jiaxiang Yi and Miguel Anibal Bessa}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=511z1DGjPi} }
The rvesimulator aims to provide a user-friendly, automated, Python-based framework for conducting Representative Volume Element (RVE) simulations via the powerful Finite Element Method (FEM) software Abaqus. With this repository, large amounts of reliable FEM data can be generated for RVEs encompassing materials from elastic to plastic composites. rvesimulator provides: (1) a cross-platform function to run arbitrary Python-Abaqus scripts without a graphical user interface (GUI), offering users a convenient way to run their own scripts (a minimal sketch of this pattern follows this record); (2) Python-Abaqus scripts to simulate RVEs with different designs of experiments, including various microstructures, material laws, and loadings; (3) benchmarks of prevalent RVEs covering elastic, hyper-elastic, and plastic materials, which illustrate the general pipeline (preprocessing, execution, and postprocessing) of the developed framework. By sharing this framework, we aim to reduce the labor-intensive process of generating massive amounts of simulation data for the discovery of new materials and structures, thereby facilitating the application and development of machine learning methods for new-material discovery.
rvesimulator: An automated representative volume element simulator for data-driven material discovery
[ "Jiaxiang Yi", "Miguel Anibal Bessa" ]
Workshop/AI4Mat
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
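A minimal sketch of the headless Python-Abaqus execution pattern the framework automates, as referenced in the abstract above. The script name, parameter file, and helper function are hypothetical; only the `abaqus cae noGUI=<script>` command-line idiom is standard Abaqus usage, and this is not rvesimulator's actual API.

```python
import json
import subprocess
from pathlib import Path

def run_rve_case(script: Path, params: dict, workdir: Path) -> None:
    """Run one RVE design-of-experiments sample through Abaqus without the GUI."""
    workdir.mkdir(parents=True, exist_ok=True)
    # hand the sample's parameters to the Abaqus-side script (hypothetical convention)
    (workdir / "params.json").write_text(json.dumps(params))
    # 'abaqus cae noGUI=<script>' executes the Python script headlessly
    subprocess.run(
        ["abaqus", "cae", f"noGUI={script.resolve()}"],
        cwd=workdir, check=True,
    )

# e.g. one case of a batch (requires a local Abaqus installation):
run_rve_case(Path("rve_model.py"),
             {"vol_fraction": 0.3, "strain": 0.05},
             Path("jobs/case_000"))
```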
null
https://openreview.net/forum?id=4zUU50Ddhc
@inproceedings{ nguyen2023expt, title={Ex{PT}: Synthetic Pretraining for Few-Shot Experimental Design}, author={Tung Nguyen and Sudhanshu Agrawal and Aditya Grover}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=4zUU50Ddhc} }
Experimental design for optimizing black-box functions is a fundamental problem in many science and engineering fields. In this problem, sample efficiency is crucial due to the time, money, and safety costs of real-world design evaluations. Existing approaches either rely on active data collection or access to large, labeled datasets of past experiments, making them impractical in many real-world scenarios. In this work, we address the more challenging yet realistic setting of few-shot experimental design, where only a few labeled data points of input designs and their corresponding values are available. We introduce Experiment Pretrained Transformers (ExPT), a foundation model for few-shot experimental design that combines unsupervised learning and in-context pretraining. In ExPT, we only assume knowledge of a finite collection of unlabeled data points from the input domain and pretrain a transformer neural network to optimize diverse synthetic functions defined over this domain. Unsupervised pretraining allows ExPT to adapt to any design task at test time in an in-context fashion by conditioning on a few labeled data points from the target task and generating the candidate optima (a minimal sketch of this conditioning step follows this record). We evaluate ExPT on few-shot experimental design in challenging domains and demonstrate its superior generality and performance compared to existing methods. The source code is available at https://github.com/tung-nd/ExPT.git.
ExPT: Synthetic Pretraining for Few-Shot Experimental Design
[ "Tung Nguyen", "Sudhanshu Agrawal", "Aditya Grover" ]
Workshop/AI4Mat
2310.19961
[ "https://github.com/tung-nd/expt" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
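A minimal sketch of the in-context conditioning step referenced in the abstract above: few-shot (x, y) pairs and a target-value query are embedded as tokens, passed through a transformer, and the query position is decoded into a candidate design. Shapes and module names are illustrative assumptions; the released repository contains the real ExPT.

```python
import torch
import torch.nn as nn

class InContextProposer(nn.Module):
    def __init__(self, x_dim=8, d_model=64):
        super().__init__()
        self.embed_pair = nn.Linear(x_dim + 1, d_model)  # one token per (x, y) pair
        self.embed_query = nn.Linear(1, d_model)         # token for the target y value
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, x_dim)            # decode a candidate design x

    def forward(self, xs, ys, y_target):
        # xs: (B, K, x_dim) few-shot designs, ys: (B, K, 1), y_target: (B, 1, 1)
        ctx = self.embed_pair(torch.cat([xs, ys], dim=-1))
        tokens = torch.cat([ctx, self.embed_query(y_target)], dim=1)
        h = self.backbone(tokens)
        return self.head(h[:, -1])                       # read out the query position

model = InContextProposer()
xs, ys = torch.randn(1, 5, 8), torch.randn(1, 5, 1)      # 5 labeled examples
candidate = model(xs, ys, torch.tensor([[[2.0]]]))       # ask for a high-value design
```

After pretraining on synthetic functions, adaptation to a new task needs no gradient steps: only the context tokens change.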
null
https://openreview.net/forum?id=4ilKwquW51
@inproceedings{ schilter2023unveiling, title={Unveiling the Secrets of \${\textasciicircum}1\$H-{NMR} Spectroscopy: A Novel Approach Utilizing Attention Mechanisms}, author={Oliver Schilter and Marvin Alberts and Federico Zipoli and Alain C. Vaucher and Philippe Schwaller and Teodoro Laino}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=4ilKwquW51} }
The significance of Nuclear Magnetic Resonance (NMR) spectroscopy in organic synthesis cannot be overstated, as it plays a pivotal role in deducing chemical structures from experimental data. While machine learning has predominantly been employed for predictive purposes in the analysis of spectral data, our study introduces a novel application of a transformer-based model's attention weights to unravel the underlying "language" that correlates spectral peaks with their corresponding atoms in the chemical structure. This attention-mapping technique proves beneficial for comprehending spectra, enabling accurate assignment of spectra to the respective molecules. Our approach consistently achieves correct assignment of $^1$H-NMR experimental spectra to the respective molecules in a reaction, with an accuracy exceeding 95\%. Furthermore, it reliably associates peaks with the correct atoms in the molecule, achieving a peak-to-atom match rate of 71\% for exact matches and 89\% for close shift matches ($\pm$0.59 ppm) (a minimal sketch of the attention-mapping step follows this record). This framework exemplifies the capability of harnessing the attention mechanism within transformer models to unveil the intricacies of spectroscopic data. Importantly, this approach can readily be extended to other types of spectra, showcasing its versatility and potential for broader applications in the field.
Unveiling the Secrets of ^1H-NMR Spectroscopy: A Novel Approach Utilizing Attention Mechanisms
[ "Oliver Schilter", "Marvin Alberts", "Federico Zipoli", "Alain C. Vaucher", "Philippe Schwaller", "Teodoro Laino" ]
Workshop/AI4Mat
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
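A minimal sketch of the attention-mapping step referenced in the abstract above: given a cross-attention matrix between spectrum-peak tokens and atom tokens (e.g., averaged over heads and layers), each peak is assigned to the atom it attends to most. The attention tensor here is random stand-in data, not output from the authors' model.

```python
import torch

def assign_peaks_to_atoms(attn: torch.Tensor) -> torch.Tensor:
    """attn: (n_peaks, n_atoms) attention weights; returns one atom index per peak."""
    probs = attn / attn.sum(dim=-1, keepdim=True)   # renormalize per peak
    return probs.argmax(dim=-1)

attn = torch.rand(6, 12)                 # e.g. 6 spectral peaks, 12 candidate atoms
print(assign_peaks_to_atoms(attn))       # one atom index per peak
```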
null
https://openreview.net/forum?id=4CJRmdFrnh
@inproceedings{ qin2023distributed, title={Distributed Reinforcement Learning for Molecular Design: Antioxidant case}, author={Huanyi Qin and Denis Akhiyarov and Kenneth Chiu and Mauricio Araya-Polo}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=4CJRmdFrnh} }
Deep reinforcement learning has been successfully applied to molecular discovery, as shown by the Molecule Deep Q-network (MolDQN) algorithm. This algorithm faces challenges when applied to optimizing new molecules: training such a model has limited scalability to larger datasets, and the trained model cannot be generalized to different molecules in the same dataset. In this paper, a distributed reinforcement learning algorithm for antioxidants, called DA-MolDQN, is proposed to address these problems. State-of-the-art bond dissociation energy (BDE) and ionization potential (IP) predictors are integrated into DA-MolDQN; these are critical chemical properties when optimizing antioxidants. Training time is reduced by algorithmic improvements for molecular modifications. The algorithm is distributed, scales to up to 512 molecules, and generalizes the model to a diverse set of molecules (a minimal sketch of the underlying Q-learning update follows this record). The proposed models are trained with a proprietary antioxidant dataset. The results have been reproduced with both proprietary and public datasets. The proposed molecules have been validated with DFT simulations, and a subset of them has been confirmed in public "unseen" datasets. In summary, DA-MolDQN is up to 100x faster than previous algorithms and can discover new optimized molecules from proprietary and public antioxidants.
Distributed Reinforcement Learning for Molecular Design: Antioxidant case
[ "Huanyi Qin", "Denis Akhiyarov", "Kenneth Chiu", "Mauricio Araya-Polo" ]
Workshop/AI4Mat
2312.01267
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
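A minimal sketch of the MolDQN-style value update underlying the method above: a Q-network scores candidate molecular modifications and is trained against a Bellman target, with the reward in this setting coming from property predictors such as BDE and IP. Featurization and the action set are hypothetical placeholders, not the DA-MolDQN implementation.

```python
import torch
import torch.nn as nn

# Q-network over (state, action) features of a candidate molecular modification
q_net = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-4)
gamma = 0.9  # discount factor

def q_update(chosen_feats, reward, next_action_feats):
    # chosen_feats: (1, 128) features of the modification that was applied;
    # next_action_feats: (A, 128) features of all modifications available next.
    # In DA-MolDQN the reward would come from the BDE/IP property predictors.
    with torch.no_grad():
        target = reward + gamma * q_net(next_action_feats).max()  # Bellman target
    loss = (q_net(chosen_feats) - target).pow(2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# e.g. one update with random stand-in features
loss = q_update(torch.randn(1, 128), reward=0.42, next_action_feats=torch.randn(20, 128))
```

Distributing this loop means many workers run such updates on different molecules in parallel and synchronize the Q-network's parameters, which is where the reported scalability gains come from.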