bibtex_url | proceedings | bibtext | abstract | title | authors | id | arxiv_id | GitHub | paper_page | n_linked_authors | upvotes | num_comments | n_authors | Models | Datasets | Spaces | old_Models | old_Datasets | old_Spaces | paper_page_exists_pre_conf | type |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
null | https://openreview.net/forum?id=kRKuJK7g4r | @inproceedings{
guo2023beam,
title={Beam Enumeration: Probabilistic Explainability For Sample Efficient Self-conditioned Molecular Design},
author={Jeff Guo and Philippe Schwaller},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=kRKuJK7g4r}
} | Generative molecular design has moved from proof-of-concept to real-world applicability, as marked by the surge in very recent papers reporting experimental validation. Key challenges in explainability and sample efficiency present opportunities to enhance generative design to directly optimize expensive high-fidelity oracles and provide actionable insights to domain experts. Here, we propose Beam Enumeration to exhaustively enumerate the most probable sub-sequences from language-based molecular generative models and show that molecular substructures can be extracted. When coupled with reinforcement learning, extracted substructures become meaningful, providing a source of explainability and improving sample efficiency through self-conditioned generation. Beam Enumeration is generally applicable to any language-based molecular generative model and notably further improves the performance of the recently reported Augmented Memory algorithm, which achieved the new state-of-the-art on the Practical Molecular Optimization benchmark for sample efficiency. The combined algorithm generates more high-reward molecules faster, given a fixed oracle budget. Beam Enumeration is the first method to jointly address explainability and sample efficiency for molecular design. | Beam Enumeration: Probabilistic Explainability For Sample Efficient Self-conditioned Molecular Design | [
"Jeff Guo",
"Philippe Schwaller"
] | Workshop/AI4Science | 2309.13957 | [
"https://github.com/schwallergroup/augmented_memory"
] | https://huggingface.co/papers/2309.13957 | 0 | 0 | 0 | 2 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=kQyP5u5ccw | @inproceedings{
song2023deepspeedscience,
title={DeepSpeed4Science Initiative: Enabling Large-Scale Scientific Discovery through Sophisticated {AI} System Technologies},
author={Shuaiwen Song and Bonnie Kruft and Minjia Zhang and Conglong Li and Shiyang Chen and Chengming Zhang and Masahiro Tanaka and Xiaoxia Wu and Mohammed AlQuraishi and Gustaf Ahdritz and Christina Floristean and Rick Stevens and Venkatram Vishwanath and Arvind Ramanathan and Sam Foreman and Kyle Hippe and Prasanna Balaprakash and Yuxiong He},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=kQyP5u5ccw}
} | In the next decade, deep learning may revolutionize the natural sciences, enhancing our capacity to model and predict natural occurrences. This could herald a new era of scientific exploration, bringing significant advancements across sectors from drug development to renewable energy. To answer this call, we present the DeepSpeed4Science initiative, which aims to build unique capabilities through AI system technology innovations to help domain experts unlock today’s biggest science mysteries. By leveraging DeepSpeed’s current technology pillars (training, inference and compression) as base technology enablers, DeepSpeed4Science will create a new set of AI system technologies tailored for accelerating scientific discoveries by addressing their unique complexity beyond the common technical approaches used for accelerating generic large language models (LLMs). In this paper, we showcase the early progress we made with DeepSpeed4Science in addressing two of the critical system challenges in structural biology research. | DeepSpeed4Science Initiative: Enabling Large-Scale Scientific Discovery through Sophisticated AI System Technologies | [
"Shuaiwen Leon Song",
"Bonnie Kruft",
"Minjia Zhang",
"Conglong Li",
"Shiyang Chen",
"Chengming Zhang",
"Masahiro Tanaka",
"Xiaoxia Wu",
"Mohammed AlQuraishi",
"Gustaf Ahdritz",
"Christina Floristean",
"Rick L. Stevens",
"Venkatram Vishwanath",
"Arvind Ramanathan",
"Sam Foreman",
"Kyle Hippe",
"Prasanna Balaprakash",
"Yuxiong He"
] | Workshop/AI4Science | 2310.04610 | [
""
] | https://huggingface.co/papers/2310.04610 | 3 | 1 | 0 | 92 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=kOhETZ7SXP | @inproceedings{
matchev2023seeking,
title={Seeking Truth and Beauty in Flavor Physics with Machine Learning},
author={Konstantin T. Matchev and Katia Matcheva and Pierre Ramond and Sarunas Verner},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=kOhETZ7SXP}
} | The discovery process of building new theoretical physics models involves the dual aspects of fitting the existing experimental data and satisfying abstract theorists' criteria like beauty, naturalness, etc. We design loss functions for performing both of those tasks with machine learning techniques. We use the Yukawa quark sector as a toy example to demonstrate that the optimization of these loss functions results in true and beautiful models. | Seeking Truth and Beauty in Flavor Physics with Machine Learning | [
"Konstantin T. Matchev",
"Katia Matcheva",
"Pierre Ramond",
"Sarunas Verner"
] | Workshop/AI4Science | 2311.00087 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=kFiMXnLH9x | @inproceedings{
zhu2023learning,
title={Learning Over Molecular Conformer Ensembles: Datasets and Benchmarks},
author={Yanqiao Zhu and Jeehyun Hwang and Keir Adams and Zhen Liu and Bozhao Nan and Brock Stenfors and Yuanqi Du and Jatin Chauhan and Olaf Wiest and Olexandr Isayev and Connor Coley and Yizhou Sun and Wei Wang},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=kFiMXnLH9x}
} | Molecular Representation Learning (MRL) has proven impactful in numerous biochemical applications such as drug discovery and enzyme design. While Graph Neural Networks (GNNs) are effective at learning molecular representations from a 2D molecular graph or a single 3D structure, existing works often overlook the flexible nature of molecules, which continuously interconvert across conformations via chemical bond rotations and minor vibrational perturbations. To better account for molecular flexibility, some recent works formulate MRL as an ensemble learning problem, focusing on explicitly learning from a set of conformer structures. However, most of these studies have limited datasets, tasks, and models. In this work, we introduce the first MoleculAR Conformer Ensemble Learning (MARCEL) benchmark to thoroughly evaluate the potential of learning on conformer ensembles and suggest promising research directions. MARCEL includes four datasets covering diverse molecule- and reaction-level properties of chemically diverse molecules including organocatalysts and transition-metal catalysts, extending beyond the scope of common GNN benchmarks that are confined to drug-like molecules. In addition, we conduct a comprehensive empirical study, which benchmarks representative 1D, 2D, and 3D molecular representation learning models, along with two strategies that explicitly incorporate conformer ensembles into 3D MRL models. Our findings reveal that direct learning from an accessible conformer space can improve performance on a variety of tasks and models. | Learning Over Molecular Conformer Ensembles: Datasets and Benchmarks | [
"Yanqiao Zhu",
"Jeehyun Hwang",
"Keir Adams",
"Zhen Liu",
"Bozhao Nan",
"Brock Stenfors",
"Yuanqi Du",
"Jatin Chauhan",
"Olaf Wiest",
"Olexandr Isayev",
"Connor Coley",
"Yizhou Sun",
"Wei Wang"
] | Workshop/AI4Science | 2310.00115 | [
"https://github.com/sxkdz/marcel"
] | https://huggingface.co/papers/2310.00115 | 4 | 0 | 0 | 13 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=jwlMPXW0pa | @inproceedings{
masliaev2023towards,
title={Towards stable real-world equation discovery with assessing differentiating quality influence},
author={Mikhail Masliaev and Ilya Markov and Alexander Hvatov},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=jwlMPXW0pa}
} | This paper explores the critical role of differentiation approaches for data-driven differential equation discovery. Accurate derivatives of the input data are essential for reliable algorithmic operation, particularly in real-world scenarios where measurement quality is inevitably compromised. We propose alternatives to the commonly used finite differences-based method, notorious for its instability in the presence of noise, which can exacerbate random errors in the data. Our analysis covers four distinct methods: Savitzky-Golay filtering, spectral differentiation, smoothing based on artificial neural networks, and the regularization of derivative variation. We evaluate these methods in terms of their applicability to problems similar to real ones and their ability to ensure the convergence of equation discovery algorithms, providing valuable insights for robust modeling of real-world processes. | Towards stable real-world equation discovery with assessing differentiating quality influence | [
"Mikhail Masliaev",
"Ilya Markov",
"Alexander Hvatov"
] | Workshop/AI4Science | 2311.05787 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=iazHKyT4YK | @inproceedings{
sch{\"o}nle2023optimizing,
title={Optimizing Markov Chain Monte Carlo Convergence with Normalizing Flows and Gibbs Sampling},
author={Christoph Sch{\"o}nle and Marylou Gabri{\'e}},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=iazHKyT4YK}
} | Generative models have started to integrate into the scientific computing toolkit. One notable instance of this integration is the utilization of normalizing flows (NF) in the development of sampling and variational inference algorithms. This work introduces a novel algorithm, GflowMC, which relies on a Metropolis-within-Gibbs framework within the latent space of NFs. This approach addresses the challenge of vanishing acceptance probabilities often encountered when using NF-generated independent proposals, while retaining non-local updates, enhancing its suitability for sampling multi-modal distributions. We assess GflowMC's performance, concentrating on the $\phi^4$ model from statistical mechanics.
Our results demonstrate that by identifying an optimal size for partial updates, convergence of the Markov Chain Monte Carlo (MCMC) can be achieved faster than with full updates. Additionally, we explore the adaptability of GflowMC for biasing proposals towards increasing the update frequency of critical coordinates, such as coordinates highly correlated with mode switching in multi-modal targets. | Optimizing Markov Chain Monte Carlo Convergence with Normalizing Flows and Gibbs Sampling | [
"Christoph Schönle",
"Marylou Gabrié"
] | Workshop/AI4Science | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=iJfPFUvFfy | @inproceedings{
li2023latent,
title={Latent Neural {PDE} Solver for Time-dependent Systems},
author={Zijie Li and Saurabh Patil and Dule Shu and Amir Barati Farimani},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=iJfPFUvFfy}
} | Neural networks have shown promising potential in accelerating the numerical simulation of systems governed by partial differential equations (PDEs). While many of the existing neural network surrogates operate on the high-dimensional discretized field, we propose to learn the dynamics of the system in the latent space with much coarser discretization. A non-linear autoencoder is trained first to project the full-order representation of the system onto the mesh-reduced space, then another temporal model is trained to predict the future state in this mesh-reduced space. This reduction process eases the training of the temporal model as it greatly reduces the computational cost induced by high-resolution discretization. We study the capability of the proposed framework on 2D/3D fluid flow and showcase that it has competitive performance compared to the model that operates on full-order space. | Latent Neural PDE Solver for Time-dependent Systems | [
"Zijie Li",
"Saurabh Patil",
"Dule Shu",
"Amir Barati Farimani"
] | Workshop/AI4Science | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=i7jjLwOOQm | @inproceedings{
behrouz2023learning,
title={Learning Temporal Higher-order Patterns to Detect Anomalous Brain Activity},
author={Ali Behrouz and Farnoosh Hashemi},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=i7jjLwOOQm}
} | Due to recent advances in machine learning on graphs, representing the connections of the human brain as a network has become one of the most pervasive analytical paradigms. However, most existing graph machine learning-based methods suffer from a subset of five critical limitations: They are (1) designed for simple pair-wise interactions, while recent studies on the human brain show the existence of higher-order dependencies of brain regions, (2) designed to perform on pre-constructed networks from time-series data, which limits their generalizability, (3) designed for classifying brain networks, limiting their ability to reveal underlying patterns that might cause the symptoms of a disease or disorder, (4) designed for learning of static patterns, missing the dynamics of human brain activity, and (5) designed in a supervised setting, with their performance relying on the existence of labeled data. To address these limitations, we present HADiB, an end-to-end anomaly detection model that automatically learns the structure of the hypergraph representation of the brain from neuroimage data. HADiB uses a tetra-stage message-passing mechanism along with an attention mechanism that learns the importance of higher-order dependencies of brain regions. We further present a new adaptive hypergraph pooling to obtain brain-level representation, enabling HADiB to detect the neuroimage of people living with a specific disease or disorder. Our experiments on Parkinson’s Disease, Attention Deficit Hyperactivity Disorder, and Autism Spectrum Disorder show the efficiency and effectiveness of our approaches in detecting anomalous brain activity. | Learning Temporal Higher-order Patterns to Detect Anomalous Brain Activity | [
"Ali Behrouz",
"Farnoosh Hashemi"
] | Workshop/AI4Science | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=i3PecpoiPG | @inproceedings{
marinescu2023expression,
title={Expression Sampler as a Dynamic Benchmark for Symbolic Regression},
author={Ioana Marinescu and Younes Strittmatter and Chad Williams and Sebastian Musslick},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=i3PecpoiPG}
} | Equation discovery, the problem of identifying mathematical expressions from data, has witnessed the emergence of symbolic regression (SR) techniques aided by benchmarking systems like SRbench. However, these systems are limited by their reliance on static expressions and datasets, which, in turn, provides limited insight into the circumstances under which SR algorithms perform well versus fail. To address this issue, we introduce an open-source method for generating comprehensive SR datasets via random sampling of mathematical expressions. This method enables dynamic expression sampling while controlling for various expression characteristics pertaining to expression complexity. The method also allows for using prior information about expression distributions, for example, to simulate expression distributions for a specific scientific domain. Using this dynamic benchmark, we demonstrate that the overall performance of established SR algorithms decreases with expression complexity and provide insight into which equation features are best recovered. Our results suggest that most SR algorithms overestimate the number of expression tree nodes and trigonometric functions and underestimate the number of input variables present in the ground truth. | Expression Sampler as a Dynamic Benchmark for Symbolic Regression | [
"Ioana Marinescu",
"Younes Strittmatter",
"Chad C Williams",
"Sebastian Musslick"
] | Workshop/AI4Science | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=hSmn7BQZ2v | @inproceedings{
zhao2023what,
title={What a Scientific Language Model Knows and Doesn't Know about Chemistry},
author={Lawrence Zhao and Carl Edwards and Heng Ji},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=hSmn7BQZ2v}
} | Large Language Models (LLMs) show promise to change how we can interact with and control the design of other modalities, such as drugs, materials, and proteins, and enable scientific reasoning and planning. However, LLMs have several weaknesses: they tend to memorize instead of understand, and the implicit knowledge does not always propagate well between semantically similar inputs. In this work, we seek to distinguish what these scientific LLMs have memorized versus what they actually understand. To do so, we propose a new comprehensive benchmark dataset to evaluate LLM performance on molecular property prediction. We consider Galactica 1.3B, a state-of-the-art scientific LLM, and find that different prompting strategies exhibit vastly different error rates. We find that in-context
learning generally improves performance over zero-shot prompting, and the effect is twice as great for computed properties as for experimental ones. Furthermore, we show the model is brittle and relies on memorized information, which may limit the application of LLMs for controlling molecular discovery. Based on these findings, we suggest the development of novel methods to enhance information propagation within LLMs—if we desire LLMs to help us control molecular design and the scientific process, then they must learn a sufficient understanding of how molecules work in the real world. | What a Scientific Language Model Knows and Doesn't Know about Chemistry | [
"Lawrence Zhao",
"Carl Edwards",
"Heng Ji"
] | Workshop/AI4Science | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=h9HuWcDJ6C | @inproceedings{
pengmei2023beyond,
title={Beyond {MD}17: The xx{MD} Dataset as a Chemically Meaningful Benchmark for Neural Force Fields Development},
author={Zihan Pengmei and Junyu Liu and Yinan Shu},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=h9HuWcDJ6C}
} | Neural force fields (NFFs) have gained prominence in computational chemistry as surrogate models, superseding quantum-chemistry calculations in ab initio molecular dynamics. The prevalent benchmark for NFFs has been the MD17 dataset and its subsequent extension. These datasets predominantly comprise geometries from the equilibrium region of the ground electronic state potential energy surface, sampling from direct adiabatic dynamics. However, many chemical reactions entail significant molecular deformations, notably bond breaking. We demonstrate the constrained distribution of internal coordinates and energies in the MD17 datasets, underscoring their inadequacy for representing systems undergoing chemical reactions. Addressing this sampling limitation, we introduce the xxMD (Extended Excited-state Molecular Dynamics) dataset, derived from non-adiabatic dynamics. This dataset encompasses energies and forces ascertained from both multireference wave function theory and density functional theory. Furthermore, its nuclear configuration spaces authentically depict chemical reactions, making xxMD a more chemically relevant dataset. Our re-assessment of equivariant models on the xxMD datasets reveals notably higher mean absolute errors than those reported for MD17 and its variants. This observation underscores the challenges faced in crafting a generalizable NFF model with extrapolation capability. | Beyond MD17: The xxMD Dataset as a Chemically Meaningful Benchmark for Neural Force Fields Development | [
"Zihan Pengmei",
"Junyu Liu",
"Yinan Shu"
] | Workshop/AI4Science | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=gVuDf0YZ27 | @inproceedings{
li2023muben,
title={{MUB}en: Benchmarking the Uncertainty of Molecular Representation Models},
author={Yinghao Li and Lingkai Kong and Yuanqi Du and Yue Yu and Yuchen Zhuang and Wenhao Mu and Chao Zhang},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=gVuDf0YZ27}
} | Large molecular representation models pre-trained on massive unlabeled data have shown great success in predicting molecular properties. However, these models may tend to overfit the fine-tuning data, resulting in over-confident predictions on test data that fall outside of the training distribution. To address this issue, uncertainty quantification (UQ) methods can be used to improve the models' calibration of predictions. Although many UQ approaches exist, not all of them lead to improved performance. While some studies have included UQ to improve molecular pre-trained models, the process of selecting suitable backbone and UQ methods for reliable molecular uncertainty estimation remains underexplored. To address this gap, we present MUBen, which evaluates different UQ methods for state-of-the-art backbone molecular representation models to investigate their capabilities. By fine-tuning various backbones using different molecular descriptors as inputs with UQ methods from different categories, we critically assess the influence of architectural decisions and training strategies. Our study offers insights for selecting UQ methods for backbone models, which can facilitate research on uncertainty-critical applications in fields such as materials science and drug discovery. | MUBen: Benchmarking the Uncertainty of Molecular Representation Models | [
"Yinghao Li",
"Lingkai Kong",
"Yuanqi Du",
"Yue Yu",
"Yuchen Zhuang",
"Wenhao Mu",
"Chao Zhang"
] | Workshop/AI4Science | 2306.10060 | [
"https://github.com/yinghao-li/muben"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=g0fOI1bE1C | @inproceedings{
pengmei2023transformers,
title={Transformers are efficient hierarchical chemical graph learners},
author={Zihan Pengmei and Zimu Li and Chih-chan Tien and Risi Kondor and Aaron Dinner},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=g0fOI1bE1C}
} | Transformers, adapted from natural language processing, are emerging as a leading approach for graph representation learning.
Current graph transformers generally treat each node or edge as an individual token; this can become computationally expensive for graphs of even moderate size because the computational complexity of self-attention scales quadratically with the token count.
In this paper, we introduce SubFormer, a graph transformer that operates on subgraphs that aggregate information by a message-passing mechanism. This approach reduces the number of tokens and enhances the learning of long-range interactions. We demonstrate SubFormer on benchmarks for predicting molecular properties from chemical structures and show that it is competitive with state-of-the-art graph transformers at a fraction of the computational cost, with training times on the order of minutes on a consumer-grade graphics card. We interpret the attention weights in terms of chemical structures. We show that SubFormer exhibits limited over-smoothing and avoids over-squashing, which is prevalent in traditional graph neural networks. | Transformers are efficient hierarchical chemical graph learners | [
"Zihan Pengmei",
"Zimu Li",
"Chih-chan Tien",
"Risi Kondor",
"Aaron Dinner"
] | Workshop/AI4Science | 2310.01704 | [
"https://github.com/zpengmei/SubFormer-Spec"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=fhCSDMkrFr | @inproceedings{
robinson2023contrasting,
title={Contrasting Sequence with Structure: Pre-training Graph Representations with {PLM}s},
author={Louis Robinson and Timothy Atkinson and Liviu Copoiu and Patrick Bordes and Thomas PIERROT and Thomas Barrett},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=fhCSDMkrFr}
} | Understanding protein function is vital for drug discovery, disease diagnosis, and protein engineering. While Protein Language Models (PLMs) pre-trained on vast protein sequence datasets have achieved remarkable success, equivalent Protein Structure Models (PSMs) remain underrepresented. We attribute this to the relative lack of high-confidence structural data and suitable pre-training objectives. In this context, we introduce BioCLIP, a contrastive learning framework that pre-trains PSMs by leveraging PLMs, generating meaningful per-residue and per-chain structural representations. When evaluated on tasks such as protein-protein interaction, Gene Ontology annotation, and Enzyme Commission number prediction, BioCLIP-trained PSMs consistently outperform models trained from scratch and further enhance performance when merged with sequence embeddings. Notably, BioCLIP approaches, or exceeds, specialized methods across all benchmarks using its singular pre-trained design. Our work addresses the challenges of obtaining quality structural data and designing self-supervised objectives, setting the stage for more comprehensive models of protein function. Source code is publicly available. | Contrasting Sequence with Structure: Pre-training Graph Representations with PLMs | [
"Louis Robinson",
"Timothy Atkinson",
"Liviu Copoiu",
"Patrick Bordes",
"Thomas PIERROT",
"Thomas D Barrett"
] | Workshop/AI4Science | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=fEoemPDicz | @inproceedings{
xie2023textdecision,
title={Text2Decision: Decoding Latent Variables in Risky Decision Making from Think Aloud Text},
author={Hanbo Xie and Huadong Xiong and Robert C. Wilson},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=fEoemPDicz}
} | Understanding human thoughts can be difficult, as scientists usually rely on observing behaviors. The think-aloud protocol, where people talk about their thoughts while making decisions, provides a more direct way to study thoughts. However, past research on this topic has mostly been qualitative. Recent advancements in artificial intelligence and natural language processing provide the potential for more quantitative analysis of language data. This study introduces Text2Decision, a model trained on task questions from a large-scale task collection, used to decode decision tendencies in risky decision-making from think-aloud texts. We test our model on both human and GPT-4-simulated think-aloud text data about risky decision-making, which are out-of-distribution with respect to the training data. Our findings demonstrate the model's performance in capturing GPT-4-manipulated decision personas and in unveiling heuristic decision tendencies from humans. Text2Decision demonstrates its capability by training on basic task outlines and theoretical frameworks and generalizing to unseen empirical think-aloud text data. This not only allows decoding individual differences from these texts but also extends to analyzing large-scale domain datasets. This study sheds light on AI integration in cognitive research for the AI4Science paradigm. | Text2Decision: Decoding Latent Variables in Risky Decision Making from Think Aloud Text | [
"Hanbo Xie",
"Huadong Xiong",
"Robert C. Wilson"
] | Workshop/AI4Science | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=exP6UntwqJ | @inproceedings{
etcheverry2023sbmltoodejax,
title={{SBML}to{ODE}jax: Efficient Simulation and Optimization of Biological Network Models in {JAX}},
author={Mayalen Etcheverry and Michael Levin and Cl{\'e}ment Moulin-Frier and Pierre-Yves Oudeyer},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=exP6UntwqJ}
} | Advances in bioengineering and biomedicine demand a deep understanding of the dynamic behavior of biological systems, ranging from protein pathways to complex cellular processes. Biological networks like gene regulatory networks and protein pathways are key drivers of embryogenesis and physiological processes. Comprehending their diverse behaviors is essential for tackling diseases, including cancer, as well as for engineering novel biological constructs. Despite the availability of extensive mathematical models represented in Systems Biology Markup Language (SBML), researchers face significant challenges in exploring the full spectrum of behaviors and optimizing interventions to efficiently shape those behaviors. Existing tools designed for simulation of biological network models are tailored neither to facilitate interventions on network dynamics nor to facilitate automated discovery. Leveraging recent developments in machine learning (ML), this paper introduces SBMLtoODEjax, a lightweight library designed to seamlessly integrate SBML models with ML-supported pipelines, powered by JAX. SBMLtoODEjax facilitates the reuse and customization of SBML-based models, harnessing JAX's capabilities for efficient parallel simulations and optimization, with the aim of accelerating research in biological network analysis. | SBMLtoODEjax: Efficient Simulation and Optimization of Biological Network Models in JAX | [
"Mayalen Etcheverry",
"Michael Levin",
"Clément Moulin-Frier",
"Pierre-Yves Oudeyer"
] | Workshop/AI4Science | 2307.08452 | [
"https://github.com/flowersteam/sbmltoodejax"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=eYsiIYAXXj | @inproceedings{
ma2023reinforcement,
title={Reinforcement Learning-Enabled Environmentally Friendly and Multi-functional Chrome-looking Plating},
author={Taigao Ma and Anwesha Saha and L. Jay Guo and Haozhu Wang},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=eYsiIYAXXj}
} | Although decorative chrome plating (DCP) is ubiquitous in metal finishings and coatings, the industrial process of chromium deposition is fraught with adverse health effects for the workers involved and causes environmental pollution. In this work, we seek to find an environmentally friendly replacement for DCP by mimicking the chrome color used for decoration. To discover a suitable replacement efficiently, we employ a reinforcement learning (RL) algorithm to perform an automatic inverse design in optical multilayer thin film structures. The RL algorithm successfully identifies two different structures with environmentally friendly materials while still showing a chrome color. One structure is further designed to have high transmission in the radio frequency regime, a property that general metals cannot have, which can broaden decorative chrome applications to include microwave-operating devices. We also experimentally fabricate these structures and validate their performance. | Reinforcement Learning-Enabled Environmentally Friendly and Multi-functional Chrome-looking Plating | [
"Taigao Ma",
"Anwesha Saha",
"L. Jay Guo",
"Haozhu Wang"
] | Workshop/AI4Science | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
||
null | https://openreview.net/forum?id=dz3O7M1QzA | @inproceedings{
pandey2023charm,
title={{CHARM}: Creating Halos with Auto-Regressive Multi-stage networks},
author={Shivam Pandey and Chirag Modi and Benjamin Wandelt and Guilhem Lavaux},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=dz3O7M1QzA}
} | To maximize the amount of information extracted from cosmological datasets, simulations that accurately represent these observations are necessary. However, traditional simulations that evolve particles under gravity by estimating particle-particle interactions (N-body simulations) are computationally expensive and prohibitive to scale to the large volumes and resolutions necessary for the upcoming datasets. Moreover, modeling the distribution of galaxies typically involves identifying collapsed and bound dark matter structures called halos. This is also a time-consuming process for large N-body simulations, further exacerbating the computational cost. In this study, we introduce CHARM, a novel method for creating mock halo catalogs by matching the spatial and mass statistics of halos directly from the large-scale distribution of the dark matter density field. We develop multi-stage neural spline flow based networks to learn this mapping directly with computationally cheaper, approximate dark matter simulations instead of relying on the full N-body simulations. We validate that the mock halo catalogs have the same statistical properties as those obtained from traditional methods. Our method effectively provides a speed-up of more than a factor of 1000 in creating reliable mock halo catalogs compared to conventional approaches. This study represents a major first step towards being able to analyze the non-Gaussian and non-linear information from current-generation surveys using simulation-based inference approaches on the massive scales of upcoming surveys. | CHARM: Creating Halos with Auto-Regressive Multi-stage networks | [
"Shivam Pandey",
"Chirag Modi",
"Benjamin Dan Wandelt",
"Guilhem Lavaux"
] | Workshop/AI4Science | 2409.09124 | [
"https://github.com/shivampcosmo/charm"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=cOup1wKAfq | @inproceedings{
novitasari2023unleashing,
title={Unleashing the Autoconversion Rates Forecasting: Evidential Regression from Satellite Data},
author={Maria Novitasari and Johannes Quaas and Miguel Rodrigues},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=cOup1wKAfq}
} | High-resolution simulations such as the ICOsahedral Non-hydrostatic Large-Eddy Model (ICON-LEM) can be used to understand the interactions between aerosols, clouds, and precipitation processes that currently represent the largest source of uncertainty involved in determining the radiative forcing of climate change. Nevertheless, due to the exceptionally high computing cost required, this simulation-based approach can only be employed for a short period of time within a limited area. Despite the fact that machine learning can solve this problem, the related model uncertainties may make it less reliable. To address this, we developed a neural network (NN) model powered by evidential learning to assess the data and model uncertainties applied to satellite observation data. Our study focuses on estimating the rate at which small droplets (cloud droplets) collide and coalesce to become larger droplets (raindrops), known as the autoconversion rate, since this is one of the key processes in the precipitation formation of liquid clouds, hence crucial to better understanding cloud responses to anthropogenic aerosols. The results of estimating the autoconversion rates demonstrate that the model performs reasonably well, with the inclusion of both aleatoric and epistemic uncertainty estimation, which improves the credibility of the model and provides useful insights for future improvement. | Unleashing the Autoconversion Rates Forecasting: Evidential Regression from Satellite Data | [
"Maria Carolina Novitasari",
"Johannes Quaas",
"Miguel R. D. Rodrigues"
] | Workshop/AI4Science | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=byailXBliQ | @inproceedings{
shenoy2023role,
title={Role of Structural and Conformational Diversity for Machine Learning Potentials},
author={Nikhil Shenoy and Prudencio Tossou and Emmanuel Noutahi and Hadrien Mary and Dominique Beaini and Jiarui Ding},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=byailXBliQ}
} | In the field of Machine Learning Interatomic Potentials (MLIPs), understanding the intricate relationship between data biases, specifically conformational and structural diversity, and model generalization is critical in improving the quality of Quantum Mechanics (QM) data generation efforts. We investigate these dynamics through two distinct experiments: a fixed-budget one, where the dataset size remains constant, and a fixed-molecular-set one, which focuses on fixed structural diversity while varying conformational diversity. Our results reveal nuanced patterns in generalization metrics. Notably, for optimal structural and conformational generalization, we need a careful balance between structural and conformational diversity that existing QM datasets do not achieve. Our results also highlight the limitations of MLIP models in generalizing beyond their training distribution, emphasizing the importance of defining the applicability domain during model deployment. These findings provide valuable insights and guidelines for QM data generation efforts. | Role of Structural and Conformational Diversity for Machine Learning Potentials | [
"Nikhil Shenoy",
"Prudencio Tossou",
"Emmanuel Noutahi",
"Hadrien Mary",
"Dominique Beaini",
"Jiarui Ding"
] | Workshop/AI4Science | 2311.00862 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=bx2vpeOUSO | @inproceedings{
truong2023immunology,
title={Immunology Meets Artificial Intelligence: Expanding Our Scientific Toolbox},
author={Van Truong and Matthew Lee and Dokyoon Kim and John Wherry and Marylyn Ritchie},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=bx2vpeOUSO}
} | Artificial intelligence (AI) is now a part of our daily lives. In this swiftly evolving landscape, AI has become an indispensable tool in the scientific discovery process, augmenting tasks from ideation and hypothesis generation to data cleaning, code development and debugging, text editing, and data analysis. This paper advocates for educational resources for AI in immunology, emphasizing its unique position to leverage AI's potential for scientific discovery. Immunology's intricate tapestry spans multiple biological scales, from molecular interactions to complex systems, presenting an ideal canvas for AI-driven solutions. The field is rich in data, thanks to advanced molecular and single-cell technologies, making it ripe for AI-driven insights. To support the intersection of AI and immunology, we've established a dedicated website as an AI resource hub, offering curated modules and resources. By fostering a "learn by playing" ethos, promoting interactive and engaging workshops, and inviting community contributions, we aim to empower immunologists to harness AI's transformative capabilities and navigate this exciting frontier collectively. | Immunology Meets Artificial Intelligence: Expanding Our Scientific Toolbox | [
"Van Truong",
"Matthew Lee",
"Dokyoon Kim",
"John Wherry",
"Marylyn Ritchie"
] | Workshop/AI4Science | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=bfD3Fzy7Mb | @inproceedings{
richter2023spatialssl,
title={Spatial{SSL}: Whole-Brain Spatial Transcriptomics in the Mouse Brain with Self-Supervised Learning},
author={Till Richter and Anna Schaar and Francesca Drummer and Cheng-Wei Liao and Leopold Endres and Fabian Theis},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=bfD3Fzy7Mb}
} | Self-supervised learning (SSL) is a rich framework for obtaining meaningful data representations across large datasets. While SSL shows impressive results in computer vision and natural language processing, its diverse applications in the single-cell field remain largely unexplored. We study SSL for the application of cell classification in cellular neighborhoods of spatially-resolved single-cell RNA-sequencing data. To this end, we developed an SSL framework on spatial molecular profiling data, integrating a cell's molecular expression and spatial location within a tissue slice. We demonstrate our methods on a large-scale whole mouse brain atlas, recording the gene expression measurements of 550 genes in 4,334,174 individual cells across 59 discrete tissue slices from the entire mouse brain. Our empirical study suggests that SSL improves downstream performance, especially in the presence of class imbalances. Notably, we observe a more substantial performance improvement on the sub-graph level than on the full-graph level. | SpatialSSL: Whole-Brain Spatial Transcriptomics in the Mouse Brain with Self-Supervised Learning | [
"Till Richter",
"Anna Schaar",
"Francesca Drummer",
"Cheng-Wei Liao",
"Leopold Endres",
"Fabian J Theis"
] | Workshop/AI4Science | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=bCssNn4ZPe | @inproceedings{
khodak2023learning,
title={Learning to Relax: Setting Solver Parameters Across a Sequence of Linear System Instances},
author={Mikhail Khodak and Edmond Chow and Maria Florina Balcan and Ameet Talwalkar},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=bCssNn4ZPe}
} | Solving a linear system $\mathbf{Ax}=\mathbf{b}$ is a fundamental scientific computing primitive for which numerous solvers and preconditioners have been developed. These come with parameters whose optimal values depend on the system being solved and are often impossible or too expensive to identify; thus in practice sub-optimal heuristics are used. We consider the common setting in which many related linear systems need to be solved, e.g. during a single numerical simulation. In this scenario, can we sequentially choose parameters that attain a near-optimal overall number of iterations, without extra matrix computations? We answer in the affirmative for Successive Over-Relaxation (SOR), a standard solver whose parameter $\omega$ has a strong impact on its runtime. For this method, we prove that a bandit online learning algorithm---using only the number of iterations as feedback---can select parameters for a sequence of instances such that the overall cost approaches that of the best fixed $\omega$ as the sequence length increases. Furthermore, when given additional structural information, we show that a _contextual_ bandit method asymptotically achieves the performance of the _instance-optimal_ policy, which selects the best $\omega$ for each instance. Our work provides the first learning-theoretic treatment of high-precision linear system solvers and the first end-to-end guarantees for data-driven scientific computing, demonstrating theoretically the potential to speed up numerical methods using well-understood learning algorithms. | Learning to Relax: Setting Solver Parameters Across a Sequence of Linear System Instances | [
"Mikhail Khodak",
"Edmond Chow",
"Maria Florina Balcan",
"Ameet Talwalkar"
] | Workshop/AI4Science | 2310.02246 | [
""
] | https://huggingface.co/papers/2310.02246 | 0 | 0 | 0 | 4 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=Zt35qMfDTp | @inproceedings{
lanusse2023astroclip,
title={Astro{CLIP}: Cross-Modal Pre-Training for Astronomical Foundation Models},
author={Francois Lanusse and Liam Parker and Siavash Golkar and Alberto Bietti and Miles Cranmer and Michael Eickenberg and Geraud Krawezik and Michael McCabe and Ruben Ohana and Mariel Pettee and Bruno R{\'e}galdo-Saint Blancard and Tiberiu Tesileanu and Kyunghyun Cho and Shirley Ho},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=Zt35qMfDTp}
} | We present AstroCLIP, a strategy to facilitate the construction of astronomical foundation models that bridge the gap between diverse astronomical observational modalities. We demonstrate that a cross-modal contrastive learning approach between images and spectra of galaxies yields highly informative embeddings of both modalities. In particular, we apply our method on multi-band images and spectrograms from the Dark Energy Spectroscopic Instrument (DESI), and show that: (1) these embeddings are well-aligned between modalities and can be used for accurate cross-modal searches, and (2) these embeddings encode valuable physical information about the galaxies - in particular redshift and stellar mass - that can be used to achieve competitive zero- and few-shot predictions without further finetuning. Additionally, in the process of developing our approach, we also construct a novel, transformer-based model and pretraining approach for galaxy spectra. | AstroCLIP: Cross-Modal Pre-Training for Astronomical Foundation Models | [
"Francois Lanusse",
"Liam Holden Parker",
"Siavash Golkar",
"Alberto Bietti",
"Miles Cranmer",
"Michael Eickenberg",
"Geraud Krawezik",
"Michael McCabe",
"Ruben Ohana",
"Mariel Pettee",
"Bruno Régaldo-Saint Blancard",
"Tiberiu Tesileanu",
"Kyunghyun Cho",
"Shirley Ho"
] | Workshop/AI4Science | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=Zh5heSS5dc | @inproceedings{
didi2023modelling,
title={Modelling biology in novel ways - an {AI}-first course in Structural Bioinformatics},
author={Kieran Didi and Charles Harris and Pietro Lio and Rainer Beck},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=Zh5heSS5dc}
} | In recent years, there has been tremendous progress in applying data-driven methodologies to study biological questions. The rapidly evolving field of machine learning has produced a plethora of methods that can be applied to structural biology, such as protein structure prediction. However, the intricacies one faces when analyzing complex biological data are sometimes underappreciated in applications of machine learning methods.
On the other hand, biologists often face a language and method barrier when trying to understand and correctly apply machine learning tools. As a result, they might be using such methods without proper expertise, potentially resulting in incorrect predictions and questionable conclusions about the resulting data.
To help remedy these issues, we have developed a holistic 11-unit course in AI-driven Structural Bioinformatics with the aim of (i) encouraging machine learning researchers to learn more about the biological complexity of the data they are analyzing and (ii) allowing biologists to better understand state-of-the-art machine learning algorithms for correct application to biological systems.
The course includes video lectures, animated visualisations as well as in-depth exercises and further resources for each of the topics discussed. We hope that our course stimulates collaboration across research communities and lowers the entry barrier for newcomers to understand and investigate structural biology with data-driven tools. Our course is available at \url{https://structural-bioinformatics.netlify.app}. | Modelling biology in novel ways - an AI-first course in Structural Bioinformatics | [
"Kieran Didi",
"Charles Harris",
"Pietro Lio",
"Rainer Beck"
] | Workshop/AI4Science | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=ZUkrNwMz5J | @inproceedings{
beeler2023chemgymrl,
title={ChemGym{RL}: An Interactive Framework for Reinforcement Learning for Digital Chemistry},
author={Chris Beeler and Sriram Ganapathi Subramanian and Colin Bellinger and Mark Crowley and Isaac Tamblyn},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=ZUkrNwMz5J}
} | This paper provides a simulated laboratory for making use of Reinforcement Learning (RL) for chemical discovery. Since RL is fairly data-intensive, training agents `on-the-fly' by taking actions in the real world is infeasible and possibly dangerous. Moreover, chemical processing and discovery involve challenges which are not commonly found in RL benchmarks and therefore offer a rich space to work in. We introduce a set of highly customizable and open-source RL environments, **ChemGymRL**, implementing the standard Gymnasium API. ChemGymRL supports a series of interconnected virtual chemical *benches* where RL agents can operate and train. The paper introduces and details each of these benches using well-known chemical reactions as illustrative examples, and trains a set of standard RL algorithms in each of these benches. Finally, we discuss and compare the performance of several standard RL methods and provide a list of directions for future work as a vision for the further development and usage of ChemGymRL. | ChemGymRL: An Interactive Framework for Reinforcement Learning for Digital Chemistry | [
"Chris Beeler",
"Sriram Ganapathi Subramanian",
"Kyle Sprague",
"Colin Bellinger",
"Mark Crowley",
"Isaac Tamblyn"
] | Workshop/AI4Science | 2305.14177 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=YbFPAaY4hh | @inproceedings{
nguyen2023expt,
title={Ex{PT}: Scaling Foundation Models for Experimental Design via Synthetic Pretraining},
author={Tung Nguyen and Sudhanshu Agrawal and Aditya Grover},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=YbFPAaY4hh}
} | Experimental design for optimizing black-box functions is a fundamental problem in many science and engineering fields. In this problem, sample efficiency is crucial due to the time, money, and safety costs of real-world design evaluations. Existing approaches either rely on active data collection or access to large, labeled datasets of past experiments, making them impractical in many real-world scenarios. In this work, we address the more challenging yet realistic setting of few-shot experimental design, where only a few labeled data points of input designs and their corresponding values are available. We introduce Experiment Pretrained Transformers (ExPT), a foundation model for few-shot experimental design that combines unsupervised learning and in-context pretraining. In ExPT, we only assume knowledge of a finite collection of unlabelled data points from the input domain and pretrain a transformer neural network to optimize diverse synthetic functions defined over this domain. Unsupervised pretraining allows ExPT to adapt to any design task at test time in an in-context fashion by conditioning on a few labeled data points from the target task and generating the candidate optima. We evaluate ExPT on few-shot experimental design in challenging domains and demonstrate its superior generality and performance compared to existing methods. The source code is available at https://github.com/tung-nd/ExPT.git. | ExPT: Synthetic Pretraining for Few-Shot Experimental Design | [
"Tung Nguyen",
"Sudhanshu Agrawal",
"Aditya Grover"
] | Workshop/AI4Science | 2310.19961 | [
"https://github.com/tung-nd/expt"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=Yb6tsekXu1 | @inproceedings{
demir2023seinvariant,
title={{SE}(3)-Invariant Multiparameter Persistent Homology for Chiral-Sensitive Molecular Property Prediction},
author={Andac Demir and Francis Prael III and Bulent Kiziltan},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=Yb6tsekXu1}
} | In this study, we present a novel computational method for generating molecular fingerprints using multiparameter persistent homology (MPPH). This technique holds considerable significance for key areas such as drug discovery and materials science, where precise molecular property prediction is vital. By integrating SE(3)-invariance with Vietoris-Rips persistent homology, we effectively capture the three-dimensional representations of molecular chirality. Chirality, an intrinsic feature of stereochemistry, is dictated by the spatial orientation of atoms within a molecule, defining its unique 3D configuration. This non-superimposable mirror image property directly influences the molecular interactions, thereby serving as an essential factor in molecular property prediction. We explore the underlying topologies and patterns in molecular structures by applying Vietoris-Rips persistent homology across varying scales and parameters such as atomic weight, partial charge, bond type, and chirality. Our method's efficacy can be further improved by incorporating additional parameters such as aromaticity, orbital hybridization, bond polarity, conjugated systems, as well as bond and torsion angles. Additionally, we leverage Stochastic Gradient Langevin Boosting (SGLB) in a Bayesian ensemble of Gradient Boosting Decision Trees (GBDT) to obtain aleatoric and epistemic uncertainty estimates for gradient boosting models. Using these uncertainty estimates, we prioritize high-uncertainty samples for active learning and model fine-tuning, benefiting scenarios where data labeling is costly or time-consuming. Our approach offers unique insights into molecular structure, distinguishing it from traditional single-parameter or single-scale analyses. When compared to conventional graph neural networks (GNNs), which usually suffer from oversmoothing and oversquashing, MPPH provides a more comprehensive and interpretable characterization of molecular data topology. We substantiate our approach with theoretical stability guarantees and demonstrate its superior performance over existing state-of-the-art methods in predicting molecular properties through extensive evaluations on the MoleculeNet benchmark datasets. | SE(3)-Invariant Multiparameter Persistent Homology for Chiral-Sensitive Molecular Property Prediction | [
"Andac Demir",
"Francis Joseph Prael III",
"Bulent Kiziltan"
] | Workshop/AI4Science | 2312.07633 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=Y7et3Ow02l | @inproceedings{
cheng2023selfsupervised,
title={Self-supervised Learning to Discover Physical Objects and Predict Their Interactions from Raw Videos},
author={Sheng Cheng and Yezhou Yang and Yang Jiao and Yi Ren},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=Y7et3Ow02l}
} | The ability to discover objects from raw videos and to predict their future dynamics is crucial for achieving general intelligence. While existing methods accomplish these two tasks separately, i.e., learning object segmentation with fixed dynamics or learning dynamics with known system states, we explore the feasibility of accomplishing the two jointly in a self-supervised setting for physical environments. Critically, we show on real video datasets that learning object dynamics improves the accuracy of discovering dynamical objects. | Self-supervised Learning to Discover Physical Objects and Predict Their Interactions from Raw Videos | [
"Sheng Cheng",
"Yezhou Yang",
"Yang Jiao",
"Yi Ren"
] | Workshop/AI4Science | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=XabBwZcjJr | @inproceedings{
sarra2023deep,
title={Deep Bayesian Experimental Design for Quantum Many-Body Systems},
author={Leopoldo Sarra and Florian Marquardt},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=XabBwZcjJr}
} | Bayesian experimental design is a technique that allows to efficiently select measurements to characterize a physical system by maximizing the expected information gain. Recent developments in deep neural networks and normalizing flows allow for a more efficient approximation of the posterior and thus the extension of this technique to complex high-dimensional situations. In this paper, we show how this approach holds promise for adaptive measurement strategies to characterize present-day quantum technology platforms. In particular, we focus on arrays of coupled cavities and qubit arrays. Both represent model systems of high relevance for modern applications, like quantum simulations and computing, and both have been realized in platforms where measurement and control can be exploited to characterize and counteract unavoidable disorder. Thus, they represent ideal targets for applications of Bayesian experimental design. | Deep Bayesian Experimental Design for Quantum Many-Body Systems | [
"Leopoldo Sarra",
"Florian Marquardt"
] | Workshop/AI4Science | 2306.14510 | [
"https://github.com/lsarra/active-learning"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=XPDudqlrEW | @inproceedings{
abdine2023prottext,
title={Prot2Text: Multimodal Protein's Function Generation with {GNN}s and Transformers},
author={Hadi Abdine and Michail Chatzianastasis and Costas Bouyioukos and Michalis Vazirgiannis},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=XPDudqlrEW}
} | In recent years, significant progress has been made in the field of protein function prediction with the development of various machine-learning approaches.
However, most existing methods formulate the task as a multi-class classification problem, i.e., assigning predefined labels to proteins.
In this work, we propose a novel approach, Prot2Text, which predicts a protein's function in a free text style, moving beyond the conventional binary or categorical classifications.
By combining Graph Neural Networks (GNNs) and Large Language Models (LLMs) in an encoder-decoder framework, our model effectively integrates diverse data types, including protein sequence, structure, and textual annotation and description.
This multimodal approach allows for a holistic representation of proteins' functions, enabling the generation of detailed and accurate functional descriptions.
To evaluate our model, we extract a multimodal protein dataset from SwissProt and empirically demonstrate the effectiveness of Prot2Text.
These results highlight the transformative impact of multimodal models, specifically the fusion of GNNs and LLMs, empowering researchers with powerful tools for more accurate function prediction of existing as well as first-to-see proteins. | Prot2Text: Multimodal Protein's Function Generation with GNNs and Transformers | [
"Hadi Abdine",
"Michail Chatzianastasis",
"Costas Bouyioukos",
"Michalis Vazirgiannis"
] | Workshop/AI4Science | 2307.14367 | [
"https://github.com/hadi-abdine/Prot2Text"
] | https://huggingface.co/papers/2307.14367 | 2 | 3 | 0 | 4 | [
"habdine/Prot2Text-Base-v1-1",
"habdine/Prot2Text-Base-v1-0",
"habdine/Prot2Text-Small-v1-1",
"habdine/Prot2Text-Medium-v1-1",
"habdine/Prot2Text-Large-v1-1",
"habdine/Esm2Text-Base-v1-1",
"habdine/Prot2Text-Medium-v1-0",
"habdine/Prot2Text-Large-v1-0",
"habdine/Prot2Text-Small-v1-0",
"habdine/Esm2Text-Base-v1-0"
] | [
"habdine/Prot2Text-Data"
] | [
"habdine/Esm2Text",
"habdine/Prot2Text"
] | [
"habdine/Prot2Text-Base-v1-1",
"habdine/Prot2Text-Base-v1-0",
"habdine/Prot2Text-Small-v1-1",
"habdine/Prot2Text-Medium-v1-1",
"habdine/Prot2Text-Large-v1-1",
"habdine/Esm2Text-Base-v1-1",
"habdine/Prot2Text-Medium-v1-0",
"habdine/Prot2Text-Large-v1-0",
"habdine/Prot2Text-Small-v1-0",
"habdine/Esm2Text-Base-v1-0"
] | [
"habdine/Prot2Text-Data"
] | [
"habdine/Esm2Text",
"habdine/Prot2Text"
] | 1 | poster |
null | https://openreview.net/forum?id=XL8oeY9zwK | @inproceedings{
aristimunha2023evaluating,
title={Evaluating the structure of cognitive tasks with transfer learning},
author={Bruno Aristimunha and Raphael Yokoingawa de Camargo and Walter Lopez Pinaya and Sylvain Chevallier and Alexandre Gramfort and C{\'e}dric Rommel},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=XL8oeY9zwK}
} | Electroencephalography (EEG) decoding is a challenging task due to the limited availability of labelled data. While transfer learning is a promising technique to address this challenge, it assumes that transferable data domains and tasks are known, which is not the case in this setting. This study investigates the transferability of deep learning representations between different EEG decoding tasks. We conduct extensive experiments using state-of-the-art decoding models on two recently released EEG datasets, ERP Core and M3CV, containing over 140 subjects and 11 distinct cognitive tasks. We measure the transferability of learned representations by pre-training deep neural networks on one task and assessing their ability to decode subsequent tasks. Our experiments demonstrate that, even with linear probing transfer, significant improvements in decoding performance can be obtained, with gains of up to 28% compared with the purely supervised approach. Additionally, we discover evidence that certain decoding paradigms elicit specific and narrow brain activities, while others benefit from pre-training on a broad range of representations. By revealing which tasks transfer well and demonstrating the benefits of transfer learning for EEG decoding, our findings have practical implications for mitigating data scarcity in this setting. The transfer maps generated also provide insights into the hierarchical relations between cognitive tasks, hence enhancing our understanding of how these tasks are connected from a neuroscientific standpoint. | Evaluating the structure of cognitive tasks with transfer learning | [
"Bruno Aristimunha",
"Raphael Yokoingawa de Camargo",
"Walter Hugo Lopez Pinaya",
"Sylvain Chevallier",
"Alexandre Gramfort",
"Cédric Rommel"
] | Workshop/AI4Science | 2308.02408 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=XIxcglPy9c | @inproceedings{
dobers2023latent,
title={Latent Space Simulator for Unveiling Molecular Free Energy Landscapes and Predicting Transition Dynamics},
author={Simon Dobers and Hannes Stark and Xiang Fu and Dominique Beaini and Stephan G{\"u}nnemann},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=XIxcglPy9c}
} | Free Energy Surfaces (FES) and metastable transition rates are key elements in understanding the behavior of molecules within a system. However, the typical approaches require computing force fields across billions of time steps in a molecular dynamics (MD) simulation, which is often considered intractable when dealing with large systems or databases. In this work, we propose LaMoDy, a latent-space MD simulator, to effectively tackle this intractability, with around 20-fold speed improvements compared to classical MD. The model leverages a chirality-aware SE(3)-invariant encoder-decoder architecture to generate a latent space, coupled with a recurrent neural network to run the time-wise dynamics. We show that LaMoDy effectively recovers realistic trajectories and FES more accurately and more quickly than existing methods while capturing their major dynamical and conformational properties. Furthermore, the proposed approach can generalize to molecules outside the training distribution. | Latent Space Simulator for Unveiling Molecular Free Energy Landscapes and Predicting Transition Dynamics | [
"Simon Dobers",
"Hannes Stark",
"Xiang Fu",
"Dominique Beaini",
"Stephan Günnemann"
] | Workshop/AI4Science | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=XHFfvzlQ1n | @inproceedings{
hewson2023bayesian,
title={Bayesian Machine Scientist for Model Discovery in Psychology},
author={Joshua Hewson and Younes Strittmatter and Ioana Marinescu and Chad Williams and Sebastian Musslick},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=XHFfvzlQ1n}
} | The rapid growth in complex datasets within the field of psychology poses challenges for integrating observations into quantitative models of human information processing. Other fields of research, such as physics, have proposed equation discovery techniques as a way of automating data-driven discovery of interpretable models. One such approach is the Bayesian Machine Scientist (BMS), which employs Bayesian inference to derive mathematical equations linking input variables to an output variable. While BMS has shown promise, its application has been limited to a small subset of scientific domains. This study examines the utility of BMS for model discovery in psychology. In Experiment 1, we compare BMS in recovering four models of human information processing against two common psychological benchmark models---linear/logit regression and a black-box neural network---across a spectrum of noise levels. BMS outperformed the benchmark models across the majority of noise levels and demonstrated at least equivalent performance at higher levels of noise. These findings demonstrate BMS’s potential for discovering psychological models of human information processing. In Experiment 2, we investigated the impact of informed priors on BMS recovery, comparing domain-specific function priors against a benchmark uniform prior. Specifically, we investigated four priors from research domains spanning a range of specificity to psychology. We observed that informed priors robustly enhanced BMS performance for only one of the four models of human information processing. In summary, our findings demonstrate the effectiveness of BMS in recovering computational models of human information processing across a range of noise levels; however, whether integrating expert knowledge into the BMS framework improves performance remains a subject of further inquiry. | Bayesian Machine Scientist for Model Discovery in Psychology | [
"Joshua Tomas Sealth Hewson",
"Younes Strittmatter",
"Ioana Marinescu",
"Chad C Williams",
"Sebastian Musslick"
] | Workshop/AI4Science | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=WhZrevX2ce | @inproceedings{
soto2023representing,
title={Representing Core-collapse Supernova Light Curves Analytically with Symbolic Regression},
author={Kaylee de Soto and V. Ashley Villar},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=WhZrevX2ce}
} | Radiative transfer simulations of cosmic transients–the rapidly evolving terminal events of stars–are computationally expensive, making Bayesian inference infeasible for even a single event. Yet, astronomical surveys have discovered tens of thousands of these events. In this work, we use symbolic regression to derive an analytic expression for the luminosity of the most common core-collapse supernova (the explosive death of a massive star) as a function of time and physical parameters – an analytical expression for these events has eluded the literature for a century. This expression is trained from a set of simulated bolometric light curves (measured luminosity as a function of time) generated from six input physical parameters. We find that a single analytic expression can reproduce $\sim$70\% of light curves in our dataset with less than $\sim$7.5\% fractional error; we additionally present a small set of analytical expressions to reproduce the full set of light curves. By deriving an analytic relation between physical parameters and light curve luminosities, we create an interpretable parametric model and emulate the more expensive simulator. This work demonstrates promising preliminary results for future efforts to build interpretable emulators within time-domain astrophysics. | Representing Core-collapse Supernova Light Curves Analytically with Symbolic Regression | [
"Kaylee de Soto",
"V. Ashley Villar"
] | Workshop/AI4Science | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=W5U18rgtpg | @inproceedings{
maziarz2023reevaluating,
title={Re-evaluating Retrosynthesis Algorithms with Syntheseus},
author={Krzysztof Maziarz and Austin Tripp and Guoqing Liu and Megan Stanley and Shufang Xie and Piotr Gai{\'n}ski and Philipp Seidl and Marwin Segler},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=W5U18rgtpg}
} | The planning of how to synthesize molecules, also known as retrosynthesis, has been a growing focus of the machine learning and chemistry communities in recent years. Despite the appearance of steady progress, we argue that imperfect benchmarks and inconsistent comparisons mask systematic shortcomings of existing techniques. To remedy this, we present a benchmarking library called syntheseus which promotes best practice by default, enabling consistent meaningful evaluation of single-step and multi-step retrosynthesis algorithms. We use syntheseus to re-evaluate a number of previous retrosynthesis algorithms, and find that the ranking of state-of-the-art models changes when evaluated carefully. We end with guidance for future works in this area. | Re-evaluating Retrosynthesis Algorithms with Syntheseus | [
"Krzysztof Maziarz",
"Austin Tripp",
"Guoqing Liu",
"Megan Stanley",
"Shufang Xie",
"Piotr Gaiński",
"Philipp Seidl",
"Marwin Segler"
] | Workshop/AI4Science | 2310.19796 | [
"https://github.com/microsoft/syntheseus"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=VM7CdbDkQt | @inproceedings{
pandey2023sensitivity,
title={Sensitivity Analysis of Simulation-Based Inference for Galaxy Clustering},
author={Shivam Pandey and Chirag Modi and Benjamin Wandelt and Matthew Ho and ChangHoon Hahn and Bruno R{\'e}galdo-Saint Blancard},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=VM7CdbDkQt}
} | Simulation-based inference (SBI) is a promising approach to leverage high fidelity cosmological simulations and extract information from the non-Gaussian, non-linear scales that cannot be modeled analytically. However, scaling SBI to the next generation of cosmological surveys faces the computational challenge of requiring a large number of accurate simulations over a wide range of cosmologies, while simultaneously encompassing large cosmological volumes at high resolution. This challenge can potentially be mitigated by balancing the accuracy and computational cost for different component models of the simulations while ensuring robust inference. To guide our steps in this, we perform a sensitivity analysis of SBI for galaxy clustering on various main components of the cosmological simulations: the gravity model, the halo-finder, and the galaxy-halo distribution models. We infer two main cosmological parameters using galaxy power spectrum multipoles (two-point statistics) and the bispectrum monopole (three-point statistics), assuming a galaxy number density expected from the current generation of galaxy surveys. We find that SBI is insensitive to changing the gravity model between accurate but slow $N$-body simulations and approximate but fast particle mesh simulations. However, changing the methodology of finding the collapsed dark matter structures called halos, which galaxies populate, can lead to biased cosmological inferences. For models of how galaxies populate these halos, training SBI on a more complex model leads to consistent inference for less complex models, but SBI trained on simpler models fails when applied to analyze data from a more complex model. | Sensitivity Analysis of Simulation-Based Inference for Galaxy Clustering | [
"Shivam Pandey",
"Chirag Modi",
"Benjamin Dan Wandelt",
"Matthew Ho",
"ChangHoon Hahn",
"Bruno Régaldo-Saint Blancard"
] | Workshop/AI4Science | 2309.15071 | [
"https://github.com/modichirag/contrastive_cosmology"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=UyzfFpoX4K | @inproceedings{
jing2023learning,
title={Learning Scalar Fields for Molecular Docking with Fast Fourier Transforms},
author={Bowen Jing and Tommi Jaakkola and Bonnie Berger},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=UyzfFpoX4K}
} | Molecular docking is critical to structure-based virtual screening, yet the throughput of such workflows is limited by the expensive optimization of scoring functions involved in most docking algorithms. We explore how machine learning can accelerate this process by learning a scoring function with a functional form that allows for more rapid optimization. Specifically, we define the scoring function to be the cross-correlation of multi-channel ligand and protein scalar fields parameterized by equivariant graph neural networks, enabling rapid optimization over rigid-body degrees of freedom with fast Fourier transforms. Moreover, the runtime of our approach can be amortized at several levels of abstraction, and is particularly favorable for virtual screening settings with a common binding pocket. We benchmark our scoring functions on two simplified docking-related tasks: decoy pose scoring and rigid conformer docking. Our method attains similar but faster performance on crystal structures compared to the Vina and Gnina scoring functions, and is more robust on computationally predicted structures. | Learning Scalar Fields for Molecular Docking with Fast Fourier Transforms | [
"Bowen Jing",
"Tommi Jaakkola",
"Bonnie Berger"
] | Workshop/AI4Science | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=Trha4S47t6 | @inproceedings{
chiu2023hypothesis,
title={Hypothesis Tests for Distributional Group Symmetry with Applications to Particle Physics},
author={Kenny Chiu and Benjamin Bloem-Reddy},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=Trha4S47t6}
} | Symmetry plays a central role in the sciences, machine learning, and statistics. When data are known to obey a symmetry, various methods that exploit symmetry have been developed. However, statistical tests for the presence of group invariance focus on a handful of specialized situations, and tests for equivariance are largely non-existent. This work formulates non-parametric hypothesis tests, based on a single independent and identically distributed sample, for distributional symmetry under a specified group. We provide a general formulation of tests for symmetry within two broad settings. Generalizing existing theory for group-based randomization tests, the first setting tests for the invariance of a marginal or joint distribution under the action of a compact group. The second setting tests for the invariance or equivariance of a conditional distribution under the action of a locally compact group. We show that the test for conditional symmetry can be formulated as a test for conditional independence. We implement our tests using kernel methods and apply them to testing for symmetry in problems from high-energy particle physics. | Hypothesis Tests for Distributional Group Symmetry with Applications to Particle Physics | [
"Kenny Chiu",
"Benjamin Bloem-Reddy"
] | Workshop/AI4Science | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=TScjG5zoB0 | @inproceedings{
schilter2023unveiling,
title={Unveiling the Secrets of $^1$H-{NMR} Spectroscopy: A Novel Approach Utilizing Attention Mechanisms},
author={Oliver Schilter and Marvin Alberts and Federico Zipoli and Alain Vaucher and Philippe Schwaller and Teodoro Laino},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=TScjG5zoB0}
} | The significance of Nuclear Magnetic Resonance (NMR) spectroscopy in organic synthesis cannot be overstated, as it plays a pivotal role in deducing chemical structures from experimental data. While machine learning has predominantly been employed for predictive purposes in the analysis of spectral data, our study introduces a novel application of a transformer-based model's attention weights to unravel the underlying "language" that correlates spectral peaks with their corresponding atoms in the chemical structures.
This attention mapping technique proves beneficial for comprehending spectra, enabling accurate assignment of spectra to the respective molecules. Our approach consistently achieves correct assignment of $^1$H-NMR experimental spectra to the respective molecules in a reaction, with an accuracy exceeding 95\%.
Furthermore, it consistently associates peaks with the correct atoms in the molecule, achieving a remarkable peak-to-atom match rate of 71\% for exact matches and 89\% for close shift matches ($\pm$0.59 ppm).
This framework exemplifies the capability of harnessing the attention mechanism within transformer models to unveil the intricacies of spectroscopic data. Importantly, this approach can readily be extended to other types of spectra, showcasing its versatility and potential for broader applications in the field. | Unveiling the Secrets of ^1H-NMR Spectroscopy: A Novel Approach Utilizing Attention Mechanisms | [
"Oliver Schilter",
"Marvin Alberts",
"Federico Zipoli",
"Alain C. Vaucher",
"Philippe Schwaller",
"Teodoro Laino"
] | Workshop/AI4Science | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=SskZeP3Mzz | @inproceedings{
xue2023vertical,
title={Vertical {AI}-driven Scientific Discovery},
author={Yexiang Xue},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=SskZeP3Mzz}
} | Automating scientific discovery has been a grand goal of Artificial Intelligence (AI) and will bring tremendous societal impact if it succeeds. Despite exciting progress, most endeavors in learning scientific equations from experimental data focus on horizontal discovery paths, i.e., they directly search for the best equation in the full hypothesis space. Horizontal paths are challenging because of the associated exponentially large search space. Our work explores an alternative vertical path, which builds scientific equations in an incremental way, starting from one that models data in control variable experiments in which most variables are held constant. It then extends expressions learned in previous generations by adding new independent variables, using new control variable experiments in which these variables are allowed to vary. This vertical path was motivated by human scientific discovery processes. Experimentally, we demonstrate that such vertical discovery paths expedite symbolic regression. They also improve the learning of physics models describing nano-structure evolution in computational materials science. | Vertical AI-driven Scientific Discovery | [
"Yexiang Xue"
] | Workshop/AI4Science | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=SV6mT8iSFP | @inproceedings{
pao-huang2023scalable,
title={Scalable Multimer Structure Prediction using Diffusion Models},
author={Peter Pao-Huang and Bowen Jing and Bonnie Berger},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=SV6mT8iSFP}
} | Accurate protein complex structure modeling is a necessary step in understanding the behavior of biological pathways and cellular systems. While some works have attempted to address this challenge, there is still a need for scaling existing methods to larger protein complexes. To address this need, we propose a novel diffusion generative model (DGM) that predicts large multimeric protein structures by learning to rigidly dock its chains together. Additionally, we construct a new dataset specifically for large protein complexes used to train and evaluate our DGM. We substantially improve prediction runtime and completion rates while maintaining competitive accuracy with current methods. | Scalable Multimer Structure Prediction using Diffusion Models | [
"Peter Pao-Huang",
"Bowen Jing",
"Bonnie Berger"
] | Workshop/AI4Science | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=SIP1mkJDbO | @inproceedings{
jing2023alphafold,
title={AlphaFold Meets Flow Matching for Generating Protein Ensembles},
author={Bowen Jing and Bonnie Berger and Tommi Jaakkola},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=SIP1mkJDbO}
} | Recent breakthroughs in protein structure prediction have pointed to structural ensembles as the next frontier in the computational understanding of protein structure. At the same time, iterative refinement techniques such as diffusion have driven significant advancements in generative modeling. We explore the synergy of these developments by combining AlphaFold and ESMFold with flow matching, a powerful modern generative modeling framework, in order to sample the conformational landscape of proteins. When trained on the PDB and evaluated on proteins with multiple recent structures, our method produces ensembles with similar precision and greater diversity compared to MSA subsampling. When further fine-tuned on coarse-grained molecular dynamics trajectories, our model generalizes to unseen proteins and accurately predicts conformational flexibility, captures the joint distribution of atomic positions, and models higher-order physicochemical properties such as intermittent contacts and solvent exposure. These results open exciting avenues in the computational prediction of conformational flexibility. | AlphaFold Meets Flow Matching for Generating Protein Ensembles | [
"Bowen Jing",
"Bonnie Berger",
"Tommi Jaakkola"
] | Workshop/AI4Science | 2402.04845 | [
"https://github.com/bjing2016/alphaflow"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=S6k4h5dkEg | @inproceedings{
ranasinghe2023combopath,
title={ComboPath: A model for predicting drug combination effects},
author={Duminda Ranasinghe and Changchang Liu and Dan Spitz and Hok Hei Tam and Nathan Sanders},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=S6k4h5dkEg}
} | Drug combinations have been shown to be an effective strategy for cancer therapy, but identifying beneficial combinations through experiments is labor-intensive and expensive.
Machine learning (ML) systems that can propose novel and effective drug combinations have the potential to dramatically improve the efficiency of combinatoric drug design.
However, the biophysical parameters of drug combinations are degenerate, making it challenging to identify the ground truth of drug interactions even given high-quality experimental data.
Existing ML models are highly underspecified to meet this challenge, leaving them vulnerable to producing parameters that are not biophysically realistic and harming generalization.
We have developed a new ML model, ``ComboPath,'' to predict the cellular dose-response surface of a two-drug combination based on each drug's interactions with their known protein targets.
ComboPath incorporates a biophysically motivated intermediate parameterization with prior information used to improve model specification. This is the first ML model to nominate beneficial drug combinations while simultaneously reconstructing the dose-response surface, providing insight into both the potential of a drug combination and its optimal dosing for therapeutic development.
We show that our models were able to accurately reconstruct 2D dose-response surfaces across held-out combination samples from the largest available combinatoric screening dataset while substantially improving model specification for key biophysical parameters. | ComboPath: A model for predicting drug combination effects | [
"Duminda S Ranasinghe",
"Nathan Sanders",
"Hok Hei Tam",
"Changchang Liu",
"Dan Spitz"
] | Workshop/AI4Science | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=R8FQMsECIS | @inproceedings{
wigh2023orderly,
title={{ORD}erly: Datasets and benchmarks for chemical reaction data},
author={Daniel Wigh and Joe Arrowsmith and Alexander Pomberger and Kobi Felton and Alexei Lapkin},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=R8FQMsECIS}
} | Machine learning has the potential to provide tremendous value to the life sciences through models that aid in the discovery of new molecules and reduce the time for new products to come to market. Chemical reactions play a significant role in these fields, but there is a lack of high-quality open-source chemical reaction datasets for training ML models. Herein, we present ORDerly, an open-source Python package for customizable and reproducible preparation of reaction data stored in accordance with the increasingly popular Open Reaction Database (ORD) schema. We use ORDerly to clean US patent data stored in ORD and generate datasets for forward prediction and retrosynthesis, as well as the first benchmark for reaction condition prediction. We train neural networks on datasets generated with ORDerly for condition prediction and show that datasets missing key cleaning steps can lead to silently overinflated performance metrics. Additionally, we train transformers for forward and retrosynthesis prediction and demonstrate how non-patent data can be used to evaluate model generalisation. By providing a customizable open-source solution for cleaning and preparing large chemical reaction data, ORDerly is poised to push forward the boundaries of machine learning applications in chemistry. | ORDerly: Datasets and benchmarks for chemical reaction data | [
"Daniel Wigh",
"Joe Arrowsmith",
"Alexander Pomberger",
"Kobi Felton",
"Alexei Lapkin"
] | Workshop/AI4Science | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=R0Jbsytvhw | @inproceedings{
chard{\`e}s2023stochastic,
title={Stochastic force inference via density estimation},
author={Victor Chard{\`e}s and Suryanarayana Maddu and Michael J. Shelley},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=R0Jbsytvhw}
} | Inferring dynamical models from low-resolution temporal data continues to be a significant challenge in biophysics, especially within transcriptomics, where separating molecular programs from noise remains an important open problem. We explore a common scenario in which we have access to an adequate number of cross-sectional samples at a few time points, and assume that our samples are generated from a latent diffusion process. We propose an approach that relies on the probability flow associated with an underlying diffusion process to infer an autonomous, nonlinear force field interpolating between the distributions. Given a prior on the noise model, we employ score-matching to differentiate the force field from the intrinsic noise. Using relevant biophysical examples, we demonstrate that our approach can extract non-conservative forces from non-stationary data, that it learns equilibrium dynamics when applied to steady-state data, and that it can do so with both additive and multiplicative noise models. | Stochastic force inference via density estimation | [
"Victor Chardès",
"Suryanarayana Maddu",
"Michael J. Shelley"
] | Workshop/AI4Science | 2310.02366 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=PfpbWuC0Yk | @inproceedings{
zhang2023mitigating,
title={Mitigating Bias in Scientific Data: a Materials Science Case Study},
author={Hengrui Zhang and Wei Chen and James Rondinelli and Wei Chen},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=PfpbWuC0Yk}
} | Growing scientific data and data-driven informatics drastically promote scientific discovery. While there are significant advancements in data-driven models, the quality of data resources is less studied despite its huge impact on model performance. As an example, we focus on data bias arising from uneven coverage of materials families in existing knowledge. Observing different diversities among crystal systems in common materials databases, we propose an information entropy-based metric for measuring this bias. To mitigate the bias, we develop an entropy-targeted active learning (ET-AL) framework, which guides the acquisition of new data to improve the diversity of underrepresented crystal systems. We demonstrate the capability of ET-AL for bias mitigation and the resulting improvement in downstream machine learning models. This approach is broadly applicable to data-driven materials discovery, including autonomous data acquisition and dataset trimming to reduce bias, as well as data-driven informatics in other scientific domains. | Mitigating Bias in Scientific Data: A Materials Science Case Study | [
"Hengrui Zhang",
"Wei Chen",
"James Rondinelli",
"Wei Chen"
] | Workshop/AI4Science | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=PVOhNEqO72 | @inproceedings{
fang2023mapping,
title={Mapping the intermolecular interaction universe through self-supervised learning on molecular crystals},
author={Ada Fang and ZAIXI ZHANG and Marinka Zitnik},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=PVOhNEqO72}
} | Molecular interactions fundamentally influence all aspects of chemistry and biology. Prevailing machine learning approaches emphasize the modeling of molecules in isolation or at best provide limited modeling of molecular interactions, typically restricted to protein-ligand and protein-protein interactions. Here, we present how to use molecular crystals to define the MolInteractDB dataset that contains valuable biochemical knowledge, which can be captured by large self-supervised pre-trained models. MolInteractDB incorporates 344,858 molecular crystal structure entries from the Cambridge Structural Database. We formulate entries in the MolInteractDB dataset as radial patches of flexible size and at varying positions in the crystal to represent intermolecular interactions across crystal structures. We characterize a variety of interactions highlighted across 6 million patches. Leveraging MolInteractDB, we develop InteractNN, a self-supervised SE(3)-equivariant 3D message passing network. We show that InteractNN captures the latent knowledge of chemical elements as well as intermolecular interaction types at a scale not directly accessible to human scientists. To demonstrate its potential, we fine-tuned InteractNN to predict the binding affinity between proteins and ligands, producing results comparable with state-of-the-art models. | Mapping the intermolecular interaction universe through self-supervised learning on molecular crystals | [
"Ada Fang",
"ZAIXI ZHANG",
"Marinka Zitnik"
] | Workshop/AI4Science | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=PNM9mojudC | @inproceedings{
song2023atat,
title={{ATAT}: Automated Tissue Alignment and Traversal},
author={Steven Song and Emaan Mohsin and Andrey Kuznetsov and Christopher Weber and Robert Grossman and Aly Khan},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=PNM9mojudC}
} | The spatial geometry of tissue biopsies reveals complex landscapes of cellular interactions. With the advent of spatial transcriptomics (ST), the ability to measure RNA across these intricate terrains has significantly advanced. However, without a pathologist’s insight to delineate regions of interest, modeling gene expression transitions across specific regions becomes a daunting task. A case in point is grading the severity of inflammatory bowel disease (IBD) across the intestinal wall while identifying the organization of immune cell types across the tissue layers; such characterization will be essential in the push for precision medicine. Yet the challenge of harnessing ST data to decipher spatially dependent transcriptional programs in a scalable and automated manner remains a well-acknowledged barrier to wider implementation. Our study aims to: (1) Utilize hematoxylin and eosin (H\&E) stained images for automated segmentation of histological regions and (2) Simulate the gene expression transition across these histological layers within a single algorithmic framework. To these ends, we present ATAT: Automated Tissue Alignment and Traversal. With our approach, we automate the integration of H\&E stained images with spatial transcriptomics and simplify the investigation of important biomedical questions, such as characterization of inflammatory conditions across intestinal walls. | ATAT: Automated Tissue Alignment and Traversal | [
"Steven Song",
"Emaan Mohsin",
"Andrey Kuznetsov",
"Christopher Weber",
"Robert L. Grossman",
"Aly A Khan"
] | Workshop/AI4Science | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=OQXCc21rgM | @inproceedings{
dutta2023van,
title={Van der Pol-informed Neural Networks for Multi-step-ahead Forecasting of Extreme Climatic Events},
author={Anurag Dutta and Madhurima Panja and Uttam Kumar and Chittaranjan Hens and Tanujit Chakraborty},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=OQXCc21rgM}
} | Deep learning has produced excellent results in several applied domains, including computer vision, natural language processing, and speech recognition. Physics-informed neural networks (PINNs) are a new family of deep learning models that combine prior knowledge of physics, in the form of high-level abstractions of natural phenomena, with data-driven neural networks. PINNs have emerged as a flourishing area of scientific computing to deal with the challenges of shortage of training data, enhancing physical plausibility, and specifically aiming to solve complex differential equations. However, building PINNs for modeling and forecasting the dynamics of extreme climatic events of geophysical systems remains an open scientific problem. This study proposes Van der Pol-informed Neural Networks (VPINN), a physics-informed differential learning approach, for modeling extreme nonlinear dynamical systems such as climatic events, exploiting the physical differentials as the physics-derived loss function. Our proposal is compared to state-of-the-art time series forecasting models, showing superior performance. | Van der Pol-informed Neural Networks for Multi-step-ahead Forecasting of Extreme Climatic Events | [
"Anurag Dutta",
"Madhurima Panja",
"Uttam Kumar",
"Chittaranjan Hens",
"Tanujit Chakraborty"
] | Workshop/AI4Science | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=NrAPGwleHA | @inproceedings{
liu2023deep,
title={Deep Learning with Physics Priors as Generalized Regularizers},
author={Frank Liu and Agniva Chowdhury},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=NrAPGwleHA}
} | In various scientific and engineering applications, there is typically an approximate model of the underlying complex system, even though it contains both aleatoric and epistemic uncertainties. In this paper, we present a principled method to incorporate these approximate models as physics priors in modeling, to prevent overfitting and enhance the generalization capabilities of the trained models. Utilizing the structural risk minimization (SRM) inductive principle pioneered by Vapnik, this approach structures the physics priors into generalized regularizers. The experimental results demonstrate that our method achieves up to two orders of magnitude of improvement in testing accuracy. | Deep Learning with Physics Priors as Generalized Regularizers | [
"Frank Liu",
"Agniva Chowdhury"
] | Workshop/AI4Science | 2312.08678 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=Nn43zREWvX | @inproceedings{
meidani2023snip,
title={{SNIP}: Bridging Mathematical Symbolic and Numeric Realms with Unified Pre-training},
author={Kazem Meidani and Parshin Shojaee and Chandan Reddy and Amir Barati Farimani},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=Nn43zREWvX}
} | In scientific inquiry, symbolic mathematical equations play a fundamental role in modeling complex natural phenomena. Leveraging the power of deep learning, we introduce SNIP, a Multi-Modal Symbolic-Numeric Pre-training framework. By employing joint contrastive learning between symbolic and numeric domains, SNIP enhances their mutual alignment in pre-trained embeddings. Latent space analysis reveals that symbolic supervision significantly enriches the embeddings of numeric data, and vice versa. Evaluations across diverse tasks, including symbolic-to-numeric and numeric-to-symbolic property prediction, demonstrate SNIP's superior performance over fully supervised baselines. This advantage is particularly pronounced in few-shot learning scenarios, making SNIP a valuable asset in situations with limited available data. | SNIP: Bridging Mathematical Symbolic and Numeric Realms with Unified Pre-training | [
"Kazem Meidani",
"Parshin Shojaee",
"Chandan K. Reddy",
"Amir Barati Farimani"
] | Workshop/AI4Science | 2310.02227 | [
"https://github.com/deep-symbolic-mathematics/Multimodal-Math-Pretraining"
] | https://huggingface.co/papers/2310.02227 | 2 | 0 | 0 | 4 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=MtlsaZObXf | @inproceedings{
lee2023interpretable,
title={Interpretable Neural {PDE} Solvers using Symbolic Frameworks},
author={Yolanne Lee},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=MtlsaZObXf}
} | Partial differential equations (PDEs) are ubiquitous in the world around us, modelling phenomena from heat and sound to quantum systems. Recent advances in deep learning have resulted in the development of powerful neural solvers; however, while these methods have demonstrated state-of-the-art performance in both accuracy and computational efficiency, a significant challenge remains in their interpretability. Most existing methodologies prioritize predictive accuracy over clarity in the underlying mechanisms driving the model's decisions. Interpretability is crucial for trustworthiness and broader applicability, especially in scientific and engineering domains where neural PDE solvers might see the most impact. In this context, a notable gap in current research is the integration of symbolic frameworks (such as symbolic regression) into these solvers. Symbolic frameworks have the potential to distill complex neural operations into human-readable mathematical expressions, bridging the divide between black-box predictions and solutions. | Interpretable Neural PDE Solvers using Symbolic Frameworks | [
"Yolanne Lee"
] | Workshop/AI4Science | 2310.20463 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=MhHWPUazY4 | @inproceedings{
xu2023infusing,
title={Infusing Spatial Knowledge into Deep Learning for Earth Science: A Hydrological Application},
author={Zelin Xu and Tingsong Xiao and Wenchong He and Yu Wang and Zhe Jiang},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=MhHWPUazY4}
} | The integration of Artificial Intelligence (AI) into Earth science, including areas such as geology, ecology, and hydrology, brings potential for significant advancements. Despite this potential, applying deep learning techniques to spatial data in this field is often hindered by the lack of domain knowledge. This paper studies the integration of spatial domain knowledge and deep learning for Earth science. The problem is challenging due to the sparse and noisy input labels, spatial uncertainty, and high computational costs associated with a large number of sample locations. Existing works on neuro-symbolic models focus on integrating symbolic logic into neural networks (e.g., loss function, model architecture, and training label augmentation), but these methods do not fully address the specific spatial data challenges. To bridge this gap, we propose a Spatial Knowledge-Infused Hierarchical Learning (SKI-HL) framework, which iteratively infers labels within a multi-resolution hierarchy, and trains the deep learning model with uncertainty-aware multi-instance learning. The evaluation of real-world hydrological datasets demonstrates the enhanced performance of the SKI-HL framework over several baseline methods. The code is available at \url{https://github.com/ZelinXu2000/SKI-HL}. | Infusing Spatial Knowledge into Deep Learning for Earth Science: A Hydrological Application | [
"Zelin Xu",
"Tingsong Xiao",
"Wenchong He",
"Yu Wang",
"Zhe Jiang"
] | Workshop/AI4Science | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=MK7gIPSter | @inproceedings{
zhang2023protein,
title={Protein Language Model-Powered 3-Dimensional Ligand Binding Site Prediction from Protein Sequence},
author={Shuo Zhang and Lei Xie},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=MK7gIPSter}
} | Prediction of ligand binding sites of proteins is a fundamental and important task for understanding the function of proteins and screening potential drugs. Most existing methods require experimentally determined protein holo-structures as input. However, such structures can be unavailable for novel or less-studied proteins. To tackle this limitation, we propose LaMPSite, which only takes protein sequences and ligand molecular graphs as input for ligand binding site predictions. The protein sequences are used to retrieve residue-level embeddings and contact maps from the pre-trained ESM-2 protein language model. The ligand molecular graphs are fed into a graph neural network to compute atom-level embeddings. Then we compute and update the protein-ligand interaction embedding based on the protein residue-level embeddings and ligand atom-level embeddings, and the geometric constraints in the inferred protein contact map and ligand distance map. A final pooling over the protein-ligand interaction embedding indicates which residues belong to the binding sites. Without any 3D coordinate information of proteins, our proposed model achieves competitive performance compared to baseline methods that require 3D protein structures when predicting binding sites. Given that less than 50% of proteins currently have reliable structure information, LaMPSite will provide new opportunities for drug discovery. | Protein Language Model-Powered 3D Ligand Binding Site Prediction from Protein Sequence | [
"Shuo Zhang",
"Lei Xie"
] | Workshop/AI4Science | 2312.03016 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=M12lmQKuxa | @inproceedings{
mccabe2023multiple,
title={Multiple Physics Pretraining for Physical Surrogate Models},
author={Michael McCabe and Bruno R{\'e}galdo-Saint Blancard and Liam Parker and Ruben Ohana and Miles Cranmer and Alberto Bietti and Michael Eickenberg and Siavash Golkar and Geraud Krawezik and Francois Lanusse and Mariel Pettee and Tiberiu Tesileanu and Kyunghyun Cho and Shirley Ho},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=M12lmQKuxa}
} | We introduce multiple physics pretraining (MPP), an autoregressive task-agnostic pretraining approach for physical surrogate modeling. MPP involves training large surrogate models to predict the dynamics of multiple heterogeneous physical systems simultaneously by learning features that are broadly useful across diverse physical tasks. In order to learn effectively in this setting, we introduce a shared embedding and normalization strategy that projects the fields of multiple systems into a single shared embedding space. We validate the efficacy of our approach on both pretraining and downstream tasks. In pretraining, we show that a single MPP-pretrained model is able to match or outperform task-specific baselines on all training sub-tasks without the need for finetuning. For downstream tasks, we explore how the benefits of MPP scale with available finetuning data and demonstrate pretraining gains even across large physics gaps. We open-source our code and model weights trained at multiple scales for reproducibility and community experimentation. | Multiple Physics Pretraining for Physical Surrogate Models | [
"Michael McCabe",
"Bruno Régaldo-Saint Blancard",
"Liam Holden Parker",
"Ruben Ohana",
"Miles Cranmer",
"Alberto Bietti",
"Michael Eickenberg",
"Siavash Golkar",
"Geraud Krawezik",
"Francois Lanusse",
"Mariel Pettee",
"Tiberiu Tesileanu",
"Kyunghyun Cho",
"Shirley Ho"
] | Workshop/AI4Science | 2310.02994 | [
"https://github.com/PolymathicAI/multiple_physics_pretraining"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
|
null | https://openreview.net/forum?id=KWwwQ97HfS | @inproceedings{
plainer2023transition,
title={Transition Path Sampling with Boltzmann Generator-based {MCMC} Moves},
author={Michael Plainer and Hannes Stark and Charlotte Bunne and Stephan G{\"u}nnemann},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=KWwwQ97HfS}
} | Sampling all possible transition paths between two 3D states of a molecular system has various applications ranging from catalyst design to drug discovery. Current approaches to sample transition paths use Markov chain Monte Carlo and rely on time-intensive molecular dynamics simulations to find new paths. Our approach operates in the latent space of a normalizing flow that maps from the molecule's Boltzmann distribution to a Gaussian, where we propose new paths without requiring molecular simulations. Using alanine dipeptide, we explore Metropolis-Hastings acceptance criteria in the latent space for exact sampling and investigate different latent proposal mechanisms. | Transition Path Sampling with Boltzmann Generator-based MCMC Moves | [
"Michael Plainer",
"Hannes Stark",
"Charlotte Bunne",
"Stephan Günnemann"
] | Workshop/AI4Science | 2312.05340 | [
"https://github.com/plainerman/latent-tps"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=KMtM5ZHxct | @inproceedings{
xiong2023scclip,
title={sc{CLIP}: Multi-modal Single-cell Contrastive Learning Integration Pre-training},
author={Lei Xiong and Tianlong Chen and Manolis Kellis},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=KMtM5ZHxct}
} | Recent advances in multi-modal single-cell sequencing technologies enable the simultaneous profiling of chromatin accessibility and transcriptome in individual cells. Integration analysis of multi-modal single-cell data offers a more comprehensive understanding of the regulatory mechanisms linking chromatin status and gene expression, driving cellular processes and diseases. In order to acquire features that align peaks and genes within the same embedding space and facilitate seamless zero-shot transfer to new data, we introduce scCLIP (single-cell Contrastive Learning Integration Pretraining), a generalized multi-modal transformer model with contrastive learning. We show that this model outperforms other competing methods and, beyond this, that scCLIP learns transferable features across modalities and generalizes to unseen datasets, which poses great potential to bridge the vast number of unpaired unimodal datasets, both existing and generated in the future. Specifically, we propose the first large-scale transformer model
designed for single-cell ATAC-seq data by patching peaks across the genome and representing each patch as a token. This innovative approach enables us to effectively address the scalability challenges posed by scATAC-seq, even when dealing with datasets of up to one million dimensions. Code is provided at: https://github.com/jsxlei/scCLIP. | scCLIP: Multi-modal Single-cell Contrastive Learning Integration Pre-training | [
"Lei Xiong",
"Tianlong Chen",
"Manolis Kellis"
] | Workshop/AI4Science | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=KHDMZtoF4i | @inproceedings{
golkar2023xval,
title={xVal: A Continuous Number Encoding for Large Language Models},
author={Siavash Golkar and Mariel Pettee and Michael Eickenberg and Alberto Bietti and Miles Cranmer and Geraud Krawezik and Francois Lanusse and Michael McCabe and Ruben Ohana and Liam Parker and Bruno R{\'e}galdo-Saint Blancard and Tiberiu Tesileanu and Kyunghyun Cho and Shirley Ho},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=KHDMZtoF4i}
} | Large Language Models (LLMs) have not yet been broadly adapted for the analysis of scientific datasets due in part to the unique difficulties of tokenizing numbers. We propose xVal, a numerical encoding scheme that represents any real number using just a single token. xVal represents a given real number by scaling a dedicated embedding vector by the number value. Combined with a modified number-inference approach, this strategy renders the model end-to-end continuous when considered as a map from the numbers of the input string to those of the output string. This leads to an inductive bias that is generally more suitable for applications in scientific domains. We empirically evaluate our proposal on a number of synthetic and real-world datasets. Compared with existing number encoding schemes, we find that xVal is more token-efficient and demonstrates improved generalization. | xVal: A Continuous Number Encoding for Large Language Models | [
"Siavash Golkar",
"Mariel Pettee",
"Michael Eickenberg",
"Alberto Bietti",
"Miles Cranmer",
"Geraud Krawezik",
"Francois Lanusse",
"Michael McCabe",
"Ruben Ohana",
"Liam Holden Parker",
"Bruno Régaldo-Saint Blancard",
"Tiberiu Tesileanu",
"Kyunghyun Cho",
"Shirley Ho"
] | Workshop/AI4Science | 2310.02989 | [
"https://github.com/PolymathicAI/xVal"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=J8HGMimNYe | @inproceedings{
rodr{\'\i}guez2023xlumina,
title={{XL}uminA: An Auto-differentiating Discovery Framework for Super-Resolution Microscopy},
author={Carla Rodr{\'\i}guez and S{\"o}ren Arlt and Leonhard M{\"o}ckl and Mario Krenn},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=J8HGMimNYe}
} | In this work we introduce XLuminA, an original computational framework designed for the discovery of novel optical hardware in super-resolution microscopy. Our framework offers auto-differentiation capabilities, allowing for the fast and efficient simulation and automated design of entirely new optical setups from scratch. We showcase its potential by rediscovering three foundational experiments, each one covering different areas in optics: an optical telescope, STED microscopy, and the focusing of a radially polarized light beam beyond the diffraction limit. Intriguingly, for this last experiment, the machine found an alternative solution following the same physical principle exploited for breaking the diffraction limit. With XLuminA, we can go beyond simple optimization and calibration of known experimental setups, opening the door to potentially uncovering new microscopy concepts within the vast landscape of experimental possibilities. | XLuminA: An Auto-differentiating Discovery Framework for Super-Resolution Microscopy | [
"Carla Rodríguez",
"Sören Arlt",
"Leonhard Möckl",
"Mario Krenn"
] | Workshop/AI4Science | 2310.08408 | [
"https://github.com/artificial-scientist-lab/xlumina"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
|
null | https://openreview.net/forum?id=IxyMZngWen | @inproceedings{
li2023latent,
title={Latent Diffusion Model for {DNA} Sequence Generation},
author={Zehui Li and Yuhao Ni and Tim Huygelen and Akashaditya Das and Guoxuan Xia and Guy-Bart Stan and Yiren Zhao},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=IxyMZngWen}
} | The harnessing of machine learning, especially deep generative models, has opened up promising avenues in the field of synthetic DNA sequence generation. Whilst Generative Adversarial Networks (GANs) have gained traction for this application, they often face issues such as limited sample diversity and mode collapse. On the other hand, Diffusion Models are a promising new class of generative models that are not burdened with these problems, enabling them to reach the state-of-the-art in domains such as image generation. In light of this, we propose a novel *latent diffusion* model, DiscDiff, tailored for discrete DNA sequence generation. By simply embedding discrete DNA sequences into a continuous latent space using an autoencoder, we are able to leverage the powerful generative abilities of continuous diffusion models for the generation of discrete data. Additionally, we introduce Fréchet Reconstruction Distance (FReD) as a new metric to measure the sample quality of DNA sequence generations. Our DiscDiff model demonstrates an ability to generate synthetic DNA sequences that align closely with real DNA in terms of Motif Distribution, Latent Embedding Distribution (FReD), and Chromatin Profiles. Additionally, we contribute a comprehensive cross-species dataset of 150K unique promoter-gene sequences from 15 species, enriching resources for future generative modelling in genomics. We have made our code and data publicly available at https://github.com/Zehui127/Latent-DNA-Diffusion. | Latent Diffusion Model for DNA Sequence Generation | [
"Zehui Li",
"Yuhao Ni",
"Tim August B. Huygelen",
"Akashaditya Das",
"Guoxuan Xia",
"Guy-Bart Stan",
"Yiren Zhao"
] | Workshop/AI4Science | 2310.06150 | [
"https://github.com/zehui127/latent-dna-diffusion"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
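The embedding step the DiscDiff abstract relies on, mapping discrete DNA into a continuous latent space where a standard continuous diffusion model can be trained, can be sketched with a toy convolutional autoencoder. The architecture, channel counts, and kernel sizes below are illustrative assumptions; the diffusion model that would be trained on `z` is omitted.

```python
import torch
import torch.nn as nn

BASES = "ACGT"  # one-hot channels for DNA

def one_hot(seq: str) -> torch.Tensor:
    idx = torch.tensor([BASES.index(b) for b in seq])
    return nn.functional.one_hot(idx, len(BASES)).float().T  # (4, L)

class DNAAutoencoder(nn.Module):
    """Toy autoencoder: discrete DNA -> continuous latent -> base logits.
    A continuous diffusion model (omitted) would be trained on the latent."""
    def __init__(self, latent_ch: int = 8):
        super().__init__()
        self.enc = nn.Conv1d(len(BASES), latent_ch, kernel_size=5, padding=2)
        self.dec = nn.Conv1d(latent_ch, len(BASES), kernel_size=5, padding=2)

    def encode(self, x):  # (B, 4, L) -> (B, latent_ch, L)
        return torch.tanh(self.enc(x))

    def decode(self, z):  # latent -> per-position base logits
        return self.dec(z)

ae = DNAAutoencoder()
x = one_hot("ACGTTGCA").unsqueeze(0)
z = ae.encode(x)                    # continuous latent for diffusion
recon = ae.decode(z).argmax(dim=1)  # back to discrete bases
print(z.shape, "".join(BASES[int(i)] for i in recon[0]))
```

Untrained, the reconstruction is arbitrary; training the autoencoder to reconstruct its input is what makes the latent a faithful continuous proxy for the sequence.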
null | https://openreview.net/forum?id=Il8mRcEiRq | @inproceedings{
pagel2023exploring,
title={Exploring the applications of Neural Cellular Automata in molecular sciences},
author={Sebastian Pagel and Leroy Cronin},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=Il8mRcEiRq}
} | In recent years, Cellular Automata have been merged with developments in deep learning to replace the traditional update rules with a neural network. These Neural Cellular Automata (NCAs) have been applied to 2D and 3D object generation, morphogenesis, as well as the orchestration of goal-directed behavioural responses. While there have been numerous examples of applying NCAs to emoji-like objects and common gameplay objects (like houses or trees in Minecraft), their adaptation to molecule representations has yet to be explored. In this work, we present an adaptation of 3D NCAs to voxelized representations of small- and bio-molecules. We present three exemplary applications of NCAs to design small-molecule interactors, reconstruct missing parts of protein backbones, and model physical transformations. | Exploring the applications of Neural Cellular Automata in molecular sciences | [
"Sebastian Pagel",
"Leroy Cronin"
] | Workshop/AI4Science | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=IaUDEYN48p | @inproceedings{
rozwood2023koopmanassisted,
title={Koopman-Assisted Reinforcement Learning},
author={Preston Rozwood and Edward Mehrez and Ludger Paehler and Wen Sun and Steven Brunton},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=IaUDEYN48p}
} | The Bellman equation and its continuous form, the Hamilton-Jacobi-Bellman (HJB) equation, are ubiquitous in reinforcement learning and control theory contexts due, in part, to their guaranteed convergence towards a system’s optimal value function. However, their application faces severe limitations. This paper explores the connection between the data-driven Koopman operator and Bellman Markov Decision Processes, resulting in the development of two new reinforcement learning algorithms to alleviate these limitations. In particular, we focus on Koopman operator methods that reformulate a nonlinear system by lifting it into a new coordinate system where the dynamics become linear, and where HJB-based methods are more tractable. These transformations enable the estimation, prediction, and control of strongly nonlinear dynamics. Viewing the Bellman equation as a controlled dynamical system, the Koopman operator is able to describe the expectation of the time evolution of the value function in the given systems via linear dynamics in the lifted coordinates. By parameterizing the Koopman operator with control actions and making an assumption about the feature space of the time evolution of the value function, we are able to construct a new “Koopman tensor” that facilitates the estimation of the optimal value function. Finally, a transformation of Bellman’s framework in terms of the Koopman tensor enables us to reformulate two max-entropy reinforcement learning algorithms: soft-value iteration and soft actor-critic (SAC). This framework is very flexible and can be used for deterministic or stochastic systems as well as for discrete or continuous-time dynamics. We show that these algorithms attain state-of-the-art performance with respect to traditional neural network-based SAC and linear quadratic regulator baselines while retaining interpretability on three controlled dynamical systems: the Lorenz system, the fluid flow past a cylinder, and a double-well potential with non-isotropic stochastic forcing. | Koopman-Assisted Reinforcement Learning | [
"Preston Rozwood",
"Edward Mehrez",
"Ludger Paehler",
"Wen Sun",
"Steven Brunton"
] | Workshop/AI4Science | 2403.02290 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
|
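The lifting idea at the core of the abstract above can be demonstrated with classical EDMD (extended dynamic mode decomposition): pick a dictionary of observables, fit a linear operator on the lifted data by least squares, then predict with matrix powers. The sketch below uses a textbook toy system that has an exact finite-dimensional Koopman embedding; it illustrates the lifting, not the paper's Koopman tensor or RL algorithms.

```python
import numpy as np

def lift(x):
    """Dictionary of observables (illustrative): [1, x1, x2, x1^2].
    For this toy system the lifted dynamics are exactly linear."""
    x1, x2 = x[:, 0], x[:, 1]
    return np.stack([np.ones_like(x1), x1, x2, x1 ** 2], axis=1)

def step(x):
    # Classic nonlinear test system with a known finite Koopman embedding
    return np.array([0.9 * x[0], 0.5 * x[1] + 0.3 * x[0] ** 2])

traj = [np.array([1.0, -0.5])]
for _ in range(200):
    traj.append(step(traj[-1]))
X = np.array(traj)

# EDMD: least-squares fit of K such that lift(x_{t+1}) ≈ lift(x_t) @ K
Psi, Psi_next = lift(X[:-1]), lift(X[1:])
K, *_ = np.linalg.lstsq(Psi, Psi_next, rcond=None)

# The lifted dynamics are linear, so multi-step prediction is a matrix power
pred = (lift(X[:1]) @ np.linalg.matrix_power(K, 50))[0, 1:3]
print("Koopman prediction at t=50:", pred, "| truth:", X[50])
```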
null | https://openreview.net/forum?id=IYSljK0uEw | @inproceedings{
honig2023automated,
title={Automated distillation of genomic equations governing single cell gene expression},
author={Edouardo Honig and Frederique Ruf-Zamojski and Stuart Sealfon and Ying Nian Wu and Zijun Frank Zhang},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=IYSljK0uEw}
} | Gene expression is an essential cellular process that is controlled by a complex and orchestrated regulatory network of transcription factors and epigenetic modifications.
The advancement in single-cell RNA sequencing enables the investigation of gene expression control at an unprecedented fine resolution and large scale.
Yet, understanding the sequence determinants underlying distinct primary cell types remains elusive and challenging.
While deep neural networks have shown strong performance in predicting gene expression, the lack of meaningful explanations of predictions, especially for a systematic understanding of the molecular mechanisms, motivates the search for more transparent models.
We present an automated model that predicts gene expression from genetic sequences while providing both strong performance and direct interpretations of predictions.
Our model combines a pre-trained genetic sequence class model and neural architecture search with symbolic regression to distill explainable genomic equations.
We applied our method to in-house human pituitary (a specialized gland in the brain that controls the endocrine system) single-cell gene expression data. The prediction accuracy of the distilled genomic equations (Pearson r=0.713) is comparable to that of other explainable models, without artificially introducing strong inductive biases that may not hold for the complex and potentially non-linear cellular system.
The genomic equations shed light on how sequence classes interact and regulate the cell type-specific, finely-controlled transcriptomic program in the human endocrine system.
To our knowledge, this is the first attempt at distilling genomic equations from neural networks using symbolic regression. | Automated distillation of genomic equations governing single cell gene expression | [
"Edouardo Honig",
"Frederique Ruf-Zamojski",
"Stuart Sealfon",
"Ying Nian Wu",
"Zijun Frank Zhang"
] | Workshop/AI4Science | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=IQURgB6PZx | @inproceedings{
igashov2023retrobridge,
title={RetroBridge: Modeling Retrosynthesis with Markov Bridges},
author={Ilia Igashov and Arne Schneuing and Marwin Segler and Michael Bronstein and Bruno Correia},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=IQURgB6PZx}
} | Retrosynthesis planning is a fundamental challenge in chemistry which aims at designing multi-step reaction pathways from commercially available starting materials to a target molecule. Each step in multi-step retrosynthesis planning requires accurate prediction of possible precursor molecules given the target molecule and confidence estimates to guide heuristic search algorithms. We model single-step retrosynthesis as a distribution learning problem in a discrete state space. First, we introduce the Markov Bridge Model, a generative framework aimed to approximate the dependency between two intractable discrete distributions accessible via a finite sample of coupled data points. Our framework is based on the concept of a Markov bridge, a Markov process pinned at its endpoints. Unlike diffusion-based methods, our Markov Bridge Model does not need a tractable noise distribution as a sampling proxy and directly operates on the input product molecules as samples from the intractable prior distribution. We then address the retrosynthesis planning problem with our novel framework and introduce RetroBridge, a template-free retrosynthesis modeling approach that achieves state-of-the-art results on standard evaluation benchmarks. | RetroBridge: Modeling Retrosynthesis with Markov Bridges | [
"Ilia Igashov",
"Arne Schneuing",
"Marwin Segler",
"Michael M. Bronstein",
"Bruno Correia"
] | Workshop/AI4Science | 2308.16212 | [
"https://github.com/igashov/retrobridge"
] | https://huggingface.co/papers/2308.16212 | 0 | 0 | 0 | 5 | [] | [] | [] | [] | [] | [] | 1 | poster |
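A Markov bridge, the pinned process underlying the abstract above, can be sampled exactly for a small chain with a known kernel via Doob's h-transform. The sketch below does this on a toy 5-state random walk; RetroBridge instead learns the bridge between product and reactant distributions, which this generic construction does not attempt.

```python
import numpy as np

rng = np.random.default_rng(0)

# Reference Markov chain on 5 states (an illustrative random-walk kernel)
P = np.array([[0.6, 0.4, 0.0, 0.0, 0.0],
              [0.2, 0.6, 0.2, 0.0, 0.0],
              [0.0, 0.2, 0.6, 0.2, 0.0],
              [0.0, 0.0, 0.2, 0.6, 0.2],
              [0.0, 0.0, 0.0, 0.4, 0.6]])

def sample_bridge(P, x0, xT, T):
    """Sample a Markov bridge pinned at x0 and xT via Doob's h-transform:
    P(x_{t+1}=j | x_t=i, x_T) ∝ P[i, j] * P^(T-t-1)[j, xT]."""
    powers = [np.linalg.matrix_power(P, k) for k in range(T + 1)]
    path = [x0]
    for t in range(T - 1):
        i = path[-1]
        w = P[i] * powers[T - t - 1][:, xT]  # reweight by reachability of xT
        path.append(int(rng.choice(len(P), p=w / w.sum())))
    path.append(xT)
    return path

print(sample_bridge(P, x0=0, xT=4, T=12))  # always starts at 0, ends at 4
```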
null | https://openreview.net/forum?id=HrUsm9Rorj | @inproceedings{
silva2023adaptive,
title={Adaptive learning acceleration for nonlinear {PDE} solvers},
author={Vinicius Silva and Pablo Salinas and Claire Heaney and Matthew Jackson and Christopher Pain},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=HrUsm9Rorj}
} | We propose a novel type of nonlinear solver acceleration for systems of nonlinear partial differential equations (PDEs) that is based on online/adaptive learning. It is applied in the context of multiphase porous media flow. The presented method is built on four pillars: compaction of the training space using dimensionless numbers, offline training in a representative simplistic (two-dimensional) numerical model, control of the numerical relaxation (or other tuning parameter) of a classical nonlinear solver, and online learning to improve the machine learning model at run time (online training). The approach is capable of reducing the number of nonlinear iterations by dynamically adjusting a single global parameter (the relaxation factor) and by learning on the job the characteristics of each numerical model. Its implementation is simple and general. In this work, we have also identified the key dimensionless parameters required, compared the performance of different machine learning models, showed the reduction in the number of nonlinear iterations obtained by using the proposed approach in complex realistic (three-dimensional) models, and for the first time properly coupled a machine learning model into an open-source multiphase flow simulator, achieving up to an 85\% reduction in computational time. | Adaptive learning acceleration for nonlinear PDE solvers | [
"Vinicius Luiz Santos Silva",
"Pablo Salinas",
"Claire E Heaney",
"Matthew Jackson",
"Christopher Charles Pain"
] | Workshop/AI4Science | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
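The control loop the abstract above describes, a classical nonlinear solver whose single global relaxation factor is tuned on the job, can be caricatured in a few lines. The "learner" below is a deliberately simple multiplicative rule standing in for the paper's machine learning model, and the test system, tolerances, and bounds are illustrative assumptions.

```python
import numpy as np

def F(x):  # toy nonlinear system F(x) = 0, with solution (1, 2)
    return np.array([x[0] ** 2 + x[1] - 3.0, x[0] + x[1] ** 2 - 5.0])

def J(x):  # its Jacobian
    return np.array([[2 * x[0], 1.0], [1.0, 2 * x[1]]])

def solve(x, adapt, omega=0.3, tol=1e-10, max_it=100):
    """Relaxed Newton iterations. With adapt=True, a simple online rule
    (a stand-in for a learned controller) grows or shrinks the single
    global relaxation factor based on the observed residual reduction."""
    r_prev = np.linalg.norm(F(x))
    for it in range(max_it):
        x = x - omega * np.linalg.solve(J(x), F(x))
        r = np.linalg.norm(F(x))
        if adapt:  # learn on the job: reward residual decrease
            omega = min(1.0, omega * 1.5) if r < r_prev else max(0.1, omega * 0.5)
        if r < tol:
            return it + 1
        r_prev = r
    return max_it

x0 = np.array([2.5, 2.5])
print("fixed relaxation   :", solve(x0.copy(), adapt=False), "iterations")
print("adaptive relaxation:", solve(x0.copy(), adapt=True), "iterations")
```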
null | https://openreview.net/forum?id=HSvg7qFFd2 | @inproceedings{
behrouz2023unsupervised,
title={Unsupervised Representation Learning of Brain Activity via Bridging Voxel Activity and Functional Connectivity},
author={Ali Behrouz and Parsa Delavari and Farnoosh Hashemi},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=HSvg7qFFd2}
} | Effective brain representation learning is a key step toward understanding cognitive processes and unlocking the detection of, and potential therapeutic interventions for, neurological diseases/disorders. Existing studies have focused on either (1) voxel-level activity, where only a single beta weight for each voxel (i.e., an aggregation of voxel activity over a time window) is considered, missing its temporal dynamics, or (2) functional connectivity of the brain at the level of regions of interest, missing voxel-level activities. In this paper, we bridge this gap and design BrainMixer, an unsupervised learning framework that effectively utilizes both functional connectivity and the associated time series of voxels to learn voxel-level representations in an unsupervised manner. BrainMixer employs two simple yet effective MLP-based encoders to simultaneously learn the dynamics of voxel-level signals and their functional correlations. To encode voxel activity, BrainMixer fuses information across both time and voxel dimensions via a dynamic self-attention mechanism. To learn the structure of the functional connectivity graph, BrainMixer introduces temporal graph patching and encodes each patch by combining its nodes' features via a new adaptive temporal pooling. Our experiments show that BrainMixer attains outstanding performance and outperforms 14 baselines in different downstream tasks and experimental setups. | Unsupervised Representation Learning of Brain Activity via Bridging Voxel Activity and Functional Connectivity | [
"Ali Behrouz",
"Parsa Delavari",
"Farnoosh Hashemi"
] | Workshop/AI4Science | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=H98ZQrAUKD | @inproceedings{
liu2023representation,
title={Representation Learning for Spatial Multimodal Data Integration with Optimal Transport},
author={Xinhao Liu and Benjamin Raphael},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=H98ZQrAUKD}
} | Spatial sequencing technologies have advanced rapidly in the past few years, and recently multiple modalities of cells -- including mRNA expression, chromatin state, and other molecular modalities -- can be measured with corresponding spatial location in tissue slices. To facilitate scientific discoveries from spatial multi-omics sequencing experiments, methods for integrating multimodal spatial data are critically needed. Here we define the problem of spatial multimodal integration as integrating multiple modalities from related tissue slices into a Common Coordinate Framework (CCF) and learning biological meaningful representations for each spatial location in the CCF. We introduce a novel machine learning framework combining optimal transport and variational autoencoders to solve the spatial multimodal integration problem. Our method outperforms existing single-cell multi-omics integration methods that ignore spatial information. Our method allows researchers to analyze tissues comprehensively by integrating knowledge from spatial slices of multiple modalities. | Representation Learning for Spatial Multimodal Data Integration with Optimal Transport | [
"Xinhao Liu",
"Benjamin Raphael"
] | Workshop/AI4Science | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=F7hIk2Bw6X | @inproceedings{
novitasari2023alas,
title={{ALAS}: Active Learning for Autoconversion Rates Prediction from Satellite Data},
author={Maria Novitasari and Johannes Quaas and Miguel Rodrigues},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=F7hIk2Bw6X}
} | High-resolution simulations, such as the ICOsahedral Non-hydrostatic Large-Eddy Model (ICON-LEM), provide valuable insights into the complex interactions among aerosols, clouds, and precipitation, which are the major contributors to climate change uncertainty. However, due to their exorbitant computational costs, they can only be employed for a limited period and geographical area. To address this, we propose a more cost-effective method powered by an emerging machine learning approach -- leveraging high-resolution climate simulation as the oracle and abundant unlabeled data drawn from satellite data -- to better understand the intricate dynamics of the climate system. Our approach involves active learning techniques to predict autoconversion rates, a crucial step in precipitation formation, while significantly reducing the need for a large number of labeled instances. In this study, we present novel methods: custom query strategy fusions for labeling instances, WiFi and MeFi, along with active feature selection based on SHAP, designed to tackle real-world challenges due to their simplicity and practicality in application, specifically focusing on the prediction of autoconversion rates. | ALAS: Active Learning for Autoconversion Rates Prediction from Satellite Data | [
"Maria Carolina Novitasari",
"Johannes Quaas",
"Miguel R. D. Rodrigues"
] | Workshop/AI4Science | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=EYEuvuu0Ky | @inproceedings{
janakarajan2023large,
title={Large Language Models in Molecular Discovery},
author={Nikita Janakarajan and Tim Erdmann and Sarathkrishna Swaminathan and Teodoro Laino and Jannis Born},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=EYEuvuu0Ky}
} | The success of language models, especially transformers in natural language processing, has trickled into scientific domains, giving rise to the concept of "scientific language models" that operate on small molecules, proteins or polymers. In chemistry, language models contribute to accelerating the molecule discovery cycle, as evidenced by promising recent findings in early-stage drug discovery. In this perspective, we review the role of language models in molecular discovery, underlining their strengths and examining their weaknesses in de novo drug design, property prediction and reaction chemistry. We highlight valuable open-source software assets to lower the entry barrier to the field of scientific language modeling. Furthermore, as a solution to some of the weaknesses we identify, we outline a vision for future molecular design that integrates a chat-bot interface with available computational chemistry tools. Our contribution serves as a valuable resource for researchers, chemists, and AI enthusiasts interested in understanding how language models can and will be used to accelerate chemical discovery. | Language Models in Molecular Discovery | [
"Nikita Janakarajan",
"Tim Erdmann",
"Sarathkrishna Swaminathan",
"Teodoro Laino",
"Jannis Born"
] | Workshop/AI4Science | 2309.16235 | [
""
] | https://huggingface.co/papers/2309.16235 | 3 | 10 | 0 | 5 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=EHblk6xzHe | @inproceedings{
halfon2023virtual,
title={Virtual Receptors for Efficient Molecular Diffusion},
author={Matan Halfon and Eyal Rozenberg and Ehud Rivlin and Daniel Freedman},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=EHblk6xzHe}
} | Machine learning approaches to Structure-Based Drug Design (SBDD) have proven quite fertile over the last few years. In particular, diffusion-based approaches to SBDD have shown great promise. We present a technique which expands on this diffusion approach in two crucial ways. First, we address the size disparity between the drug molecule and the target/receptor, which makes learning more challenging and inference slower. We do so through the notion of a Virtual Receptor, which is a compressed version of the receptor; it is learned so as to preserve key aspects of the structural information of the original receptor, while respecting the relevant group equivariance. Second, we incorporate a protein language embedding used originally in the context of protein folding. We experimentally demonstrate the contributions of both the virtual receptors and the protein embeddings: in practice, they lead to both better performance, as well as significantly faster computations. | Virtual Receptors for Efficient Molecular Diffusion | [
"Matan Halfon",
"Eyal Rozenberg",
"Ehud Rivlin",
"Daniel Freedman"
] | Workshop/AI4Science | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=E1khscdUdH | @inproceedings{
zhang2023insight,
title={Insight Miner: A Large-scale Multimodal Model for Insight Mining from Time Series},
author={Yunkai Zhang and Yawen Zhang and Ming Zheng and Kezhen Chen and Chongyang Gao and Ruian Ge and Siyuan Teng and Amine Jelloul and Jinmeng Rao and Xiaoyuan Guo and Chiang-Wei Fang and Zeyu Zheng and Jie Yang},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=E1khscdUdH}
} | Time-series data is essential in various science and industry domains, like environmental analysis, agriculture, transportation, and finance. Researchers need to use their domain knowledge to conduct insight mining from time-series data to study scientific topics. However, this process is time-consuming and highly depends on expert knowledge. This paper proposes a large-scale multimodal model (LMM), Insight Miner, to generate decent and comprehensive time-series descriptions with domain-specific knowledge. To introduce rich time-series insights to Insight Miner, we propose a time-series analysis dataset, TS-Insights, composed of time series and textual insight pairs. In the TS-Insights dataset, we include 100k time series windows sampled from 20 forecasting datasets spanning a wide variety of domains and granularities. Through a meticulous combination of heuristics and statistical tools, we preprocess each raw time series window and use GPT-4 to generate a coherent trend description based on the extracted features. After training with the TS-Insights dataset via instruction tuning, the Insight Miner model performs better in generating time series descriptions and insights compared with state-of-the-art multimodal models, such as LLaVA and GPT-4. Our findings suggest a promising direction of leveraging LMMs for time series analysis and potentially offering avenues for efficient insight mining in scientific domains. The TS-Insights dataset is available here: https://drive.google.com/drive/folders/1qGXigxE5GvmF1oLuGXaqLMkRgwoQfZ7V?usp=sharing. | Insight Miner: A Time Series Analysis Dataset for Cross-Domain Alignment with Natural Language | [
"Yunkai Zhang",
"Yawen Zhang",
"Ming Zheng",
"Kezhen Chen",
"Chongyang Gao",
"Ruian Ge",
"Siyuan Teng",
"Amine Jelloul",
"Jinmeng Rao",
"Xiaoyuan Guo",
"Chiang-Wei Fang",
"Zeyu Zheng",
"Jie Yang"
] | Workshop/AI4Science | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=DqJThcBJ6P | @inproceedings{
zaman2023stride,
title={{STRIDE}: Structure-guided Generation for Inverse Design of Molecules},
author={Shehtab Zaman and Denis Akhiyarov and Mauricio Araya-Polo and Kenneth Chiu},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=DqJThcBJ6P}
} | Machine learning, and especially deep learning, has had an increasing impact on molecule and materials design. In particular, growing access to an abundance of high-quality small-molecule data has enabled generative modeling for drug design, which has led to promising results for drug discovery. However, for many important classes of materials such as catalysts, antioxidants, and metal-organic frameworks, such large datasets are not available. Such families of molecules, with limited samples and strong structural similarities, are especially prevalent in industrial applications. As is well known, retraining and even fine-tuning are challenging on such small datasets. Novel, practically applicable molecules are most often derivatives of well-known molecules,
suggesting approaches to addressing data scarcity. To address this problem,
we introduce $\textbf{STRIDE}$, a generative molecule workflow that generates novel molecules with an unconditional generative model guided by known molecules without any retraining. We generate molecules outside of the training data from a highly specialized set of antioxidant molecules.
Our generated molecules have, on average, 21.7\% lower synthetic accessibility scores, and guiding also reduces the ionization potential of the generated molecules by 5.9\%. | STRIDE: Structure-guided Generation for Inverse Design of Molecules | [
"Shehtab Zaman",
"Denis Akhiyarov",
"Mauricio Araya-Polo",
"Kenneth Chiu"
] | Workshop/AI4Science | 2311.06297 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
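Guiding an unconditional generative model toward known molecules without retraining, as in the STRIDE abstract above, has the same shape as guided Langevin sampling: add the gradient of a similarity term to the model's score at every step. The sketch below does this in a toy 2-D descriptor space with an analytic Gaussian standing in for the trained model; it illustrates the guidance pattern, not STRIDE's actual workflow.

```python
import numpy as np

rng = np.random.default_rng(1)

def model_score(x):
    """Stand-in for an unconditional generative model's score function
    (here: a unit Gaussian over a toy 2-D molecular descriptor space)."""
    return -x

def guided_sample(guide, lam=2.0, steps=500, eps=1e-2):
    """Langevin sampling with an added pull toward a known molecule's
    descriptor `guide`: guidance without retraining the base model."""
    x = rng.normal(size=2)
    for _ in range(steps):
        pull = -lam * (x - guide)  # gradient of -lam/2 * ||x - guide||^2
        x += eps * (model_score(x) + pull) + np.sqrt(2 * eps) * rng.normal(size=2)
    return x

guide = np.array([1.5, -0.8])  # descriptor of a known seed molecule
samples = np.array([guided_sample(guide) for _ in range(200)])
print("sample mean:", samples.mean(axis=0))  # shifted toward lam/(1+lam) * guide
```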
null | https://openreview.net/forum?id=DBiWSzlaGz | @inproceedings{
lee2023stoichiometry,
title={Stoichiometry Representation Learning with Polymorphic Crystal Structures},
author={Namkyeong Lee and Heewoong Noh and Gyoung S. Na and Tianfan Fu and Jimeng Sun and Chanyoung Park},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=DBiWSzlaGz}
} | Despite the recent success of machine learning (ML) in materials science, its success heavily relies on structural descriptions of crystals, which are themselves computationally demanding and occasionally unattainable. Stoichiometry descriptors offer an alternative approach, revealing the ratio between the elements that form a certain compound without any structural information. However, it is not trivial to learn representations of stoichiometry due to a phenomenon in materials science called polymorphism, i.e., a single stoichiometry can exist in multiple structural forms due to the flexibility of atomic arrangements, inducing uncertainties in representation. To this end, we propose PolySRL, which learns probabilistic representations of stoichiometry by utilizing readily available structural information, whose uncertainty reveals the polymorphic structures of the stoichiometry. Extensive experiments on sixteen datasets demonstrate the superiority of PolySRL, and analysis of the uncertainties sheds light on the applicability of PolySRL in real-world material discovery. The source code for PolySRL is available at https://github.com/Namkyeong/PolySRL_AI4Science. | Stoichiometry Representation Learning with Polymorphic Crystal Structures | [
"Namkyeong Lee",
"Heewoong Noh",
"Gyoung S. Na",
"Tianfan Fu",
"Jimeng Sun",
"Chanyoung Park"
] | Workshop/AI4Science | 2312.13289 | [
"https://github.com/namkyeong/polysrl_ai4science"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=D1OcdXlFcL | @inproceedings{
sathujoda2023excitonpolariton,
title={Exciton-Polariton Condensates: A Fourier Neural Operator Approach},
author={Surya Sathujoda and Yuan Wang and Kanishk Gandhi},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=D1OcdXlFcL}
} | Advancements in semiconductor fabrication over the past decade have catalyzed extensive research into all-optical devices driven by exciton-polariton condensates. Preliminary validations of such devices, including transistors, have shown encouraging results even under ambient conditions. A significant challenge still remains for large-scale application, however: the lack of a robust solver that can be used to simulate complex nonlinear systems which require an extended period of time to stabilize. Addressing this need, we propose the application of a machine-learning-based Fourier Neural Operator approach to find the solution to the Gross-Pitaevskii equations coupled with extra exciton rate equations. This work marks the first direct application of Neural Operators to an exciton-polariton condensate system. Our findings show that the proposed method can predict final-state solutions to a high degree of accuracy almost 1000 times faster than CUDA-based GPU solvers. Moreover, this paves the way for potential all-optical chip design workflows by integrating experimental data. | Exciton-Polariton Condensates: A Fourier Neural Operator Approach | [
"Surya Teja Sathujoda",
"Yuan Wang",
"Kanishk Gandhi"
] | Workshop/AI4Science | 2309.15593 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=CllNd4XWVF | @inproceedings{
shang2023ai,
title={{AI}, Robot Neuroscientist: Reimagining Hypothesis Generation},
author={Jiaqi Shang and Will Xiao},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=CllNd4XWVF}
} | Neuroscience has long relied on human-conceived hypotheses, yet the brain's complexity fundamentally challenges this epistemology. Modern technologies and the large-scale data collection they enable throw this challenge into sharp relief. We champion the potential of AI for neuroscience exploration. We highlight both implicit, 'uninterpretable' models as aids in hypothesis formulation and symbolic regression for explicit hypothesis generation. For researchers from non-neuroscience backgrounds, we discuss domain-specific considerations in integrating AI into neuroscience research. By spotlighting the underexplored avenues for AI to accelerate neuroscience, we aim to draw both communities toward these exciting research opportunities. | AI, Robot Neuroscientist: Reimagining Hypothesis Generation | [
"Jiaqi Shang",
"Will Xiao"
] | Workshop/AI4Science | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=CZGHAeeBk3 | @inproceedings{
ma2023baking,
title={Baking Symmetry into {GF}lowNets},
author={George Ma and Emmanuel Bengio and Yoshua Bengio and Dinghuai Zhang},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=CZGHAeeBk3}
} | GFlowNets have exhibited promising performance in generating diverse candidates with high rewards. These networks generate objects incrementally and aim to learn a policy that assigns sampling probability to objects in proportion to their rewards. However, the current training pipelines of GFlowNets do not consider the presence of isomorphic actions, which are actions resulting in symmetric or isomorphic states. Ignoring these symmetries increases the number of samples required for training GFlowNets and can result in inefficient and potentially incorrect flow functions. As a consequence, the reward and diversity of the generated objects decrease. In this study, our objective is to integrate symmetries into GFlowNets by identifying equivalent actions during the generation process. Experimental results using synthetic data demonstrate the promising performance of our proposed approaches. | Baking Symmetry into GFlowNets | [
"George Ma",
"Emmanuel Bengio",
"Yoshua Bengio",
"Dinghuai Zhang"
] | Workshop/AI4Science | 2406.05426 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
|
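Identifying equivalent actions, as proposed in the abstract above, amounts to grouping actions whose successor states are symmetric. One practical (if approximate) way is to hash each successor with the Weisfeiler-Lehman graph hash and merge actions whose hashes collide; WL hashing can in rare cases conflate non-isomorphic graphs, so it is a heuristic. The toy molecule builder below is an assumption for illustration, not the paper's method.

```python
import networkx as nx

def grow(g: nx.Graph, node: int) -> nx.Graph:
    """One builder-style action: attach a new atom to `node` (toy example)."""
    h = g.copy()
    new = max(h.nodes) + 1
    h.add_node(new, label="C")
    h.add_edge(node, new)
    return h

# A 3-node path graph: attaching to node 0 or node 2 gives isomorphic states
g = nx.path_graph(3)
nx.set_node_attributes(g, "C", "label")

buckets = {}
for action in g.nodes:  # enumerate candidate actions
    state = grow(g, action)
    key = nx.weisfeiler_lehman_graph_hash(state, node_attr="label")
    buckets.setdefault(key, []).append(action)

# Actions in the same bucket yield symmetric states and should share flow
print(list(buckets.values()))  # e.g. [[0, 2], [1]]
```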
null | https://openreview.net/forum?id=BybTciafUJ | @inproceedings{
kacprzyk2023shape,
title={Shape Arithmetic Expressions},
author={Krzysztof Kacprzyk and Mihaela van der Schaar},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=BybTciafUJ}
} | Symbolic regression has excelled in uncovering equations from physics, chemistry, biology, and related disciplines. However, its effectiveness becomes less certain when applied to experimental data lacking inherent closed-form expressions. Empirically derived relationships, such as entire stress-strain curves, may defy concise closed-form representation, compelling us to explore more adaptive modeling approaches that balance flexibility with interpretability. In our pursuit, we turn to Generalized Additive Models (GAMs), a widely used class of models known for their versatility across various domains. Although GAMs can capture non-linear relationships between variables and targets, they cannot capture intricate feature interactions. In this work, we investigate both of these challenges and propose a novel class of models, Shape Arithmetic Expressions (SHAREs), that fuses GAM's flexible shape functions with the complex feature interactions found in mathematical expressions. SHAREs also provide a unifying framework for both of these approaches. We also design a set of rules for constructing SHAREs that guarantee transparency of the found expressions beyond the standard constraints based on the model's size. | Shape Arithmetic Expressions | [
"Krzysztof Kacprzyk",
"Mihaela van der Schaar"
] | Workshop/AI4Science | [
"https://github.com/krzysztof-kacprzyk/shares"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=BaIor3Ur51 | @inproceedings{
dongre2023evaluating,
title={Evaluating Uncertainty Quantification approaches for Neural {PDE}s in scientific application},
author={Vardhan Dongre and Gurpreet Singh Hora},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=BaIor3Ur51}
} | The accessibility of spatially distributed data, enabled by affordable sensors, field, and numerical experiments, has facilitated the development of data-driven solutions for scientific problems, including climate change, weather prediction, and urban planning. Neural Partial Differential Equations (Neural PDEs), which combine deep learning (DL) techniques with domain expertise (e.g., governing equations) for parameterization, have proven to be effective in capturing valuable correlations within spatiotemporal datasets. However, sparse and noisy measurements coupled with modeling approximations introduce aleatoric and epistemic uncertainties. Therefore, quantifying uncertainties propagated from model inputs to outputs remains a challenge and an essential goal for establishing the trustworthiness of Neural PDEs. This work evaluates various Uncertainty Quantification (UQ) approaches for both Forward and Inverse Problems in scientific applications. Specifically, we investigate the effectiveness of Bayesian methods, such as Hamiltonian Monte Carlo (HMC) and Monte-Carlo Dropout (MCD), and a more conventional approach, Deep Ensembles (DE). To illustrate their performance, we take two canonical PDEs: Burgers' equation and the Navier-Stokes equation. Our results indicate that Neural PDEs can effectively reconstruct flow systems and predict the associated unknown parameters. However, it is noteworthy that the results derived from Bayesian methods, based on our observations, tend to display a higher degree of certainty in their predictions as compared to those obtained using the DE. This elevated certainty in predictions suggests that Bayesian techniques might underestimate the true underlying uncertainty, thereby appearing more confident in their predictions than the DE approach. | Evaluating Uncertainty Quantification approaches for Neural PDEs in scientific applications | [
"Vardhan Dongre",
"Gurpreet Singh Hora"
] | Workshop/AI4Science | 2311.04457 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
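Of the UQ approaches compared in the abstract above, Monte-Carlo Dropout is the simplest to sketch: keep dropout stochastic at prediction time and read the epistemic uncertainty off the spread of repeated forward passes. The tiny untrained network below only demonstrates the mechanics; in practice this would be a trained Neural PDE surrogate.

```python
import torch
import torch.nn as nn

class DropoutMLP(nn.Module):
    """Small surrogate model with dropout layers that stay active at
    inference time to draw approximate posterior samples (MC Dropout)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Dropout(0.1),
                                 nn.Linear(64, 64), nn.ReLU(), nn.Dropout(0.1),
                                 nn.Linear(64, 1))

    def forward(self, x):
        return self.net(x)

model = DropoutMLP()
model.train()  # keep dropout stochastic when predicting
x = torch.linspace(-1.0, 1.0, 5).unsqueeze(-1)

with torch.no_grad():
    draws = torch.stack([model(x) for _ in range(100)])  # (100, 5, 1)

print("predictive mean:", draws.mean(0).squeeze())
print("epistemic std  :", draws.std(0).squeeze())  # spread across dropout masks
```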
null | https://openreview.net/forum?id=B8EpSHEp9j | @inproceedings{
wang2023relaxed,
title={Relaxed Octahedral Group Convolution for Learning Symmetry Breaking in 3D Physical Systems},
author={Rui Wang and Robin Walters and Tess Smidt},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=B8EpSHEp9j}
} | Deep equivariant models use symmetries to improve sample efficiency and generalization. However, the assumption of perfect symmetry in many of these models can sometimes be restrictive, especially when the data does not perfectly align with such symmetries. Thus, we introduce relaxed octahedral group convolution for modeling 3D physical systems in this paper. This flexible convolution technique provably allows the model to both maintain the highest level of equivariance that is consistent with data and discover the subtle symmetry-breaking factors in the physical systems. Empirical results validate that our approach can not only provide insights into the symmetry-breaking factors in phase transitions but also achieves superior performance in fluid super-resolution tasks. | Relaxed Octahedral Group Convolution for Learning Symmetry Breaking in 3D Physical Systems | [
"Rui Wang",
"Robin Walters",
"Tess Smidt"
] | Workshop/AI4Science | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
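The 24 rotations the abstract above relaxes over are exactly the 3x3 signed permutation matrices with determinant +1, and (up to orientation conventions) they act on cubic voxel kernels by permuting and flipping axes. The sketch below builds the group and forms a kernel as a weighted sum of rotated copies: uniform weights correspond to strict equivariance, while learned unequal weights would let a model break symmetry by just the amount the data demands. The weighting scheme and conventions are illustrative assumptions.

```python
import itertools
import numpy as np

def octahedral_rotations():
    """The 24 rotations of the octahedral group, realized as signed
    permutation matrices with determinant +1."""
    mats = []
    for perm in itertools.permutations(range(3)):
        for signs in itertools.product([1, -1], repeat=3):
            m = np.zeros((3, 3))
            for row, (col, s) in enumerate(zip(perm, signs)):
                m[row, col] = s
            if np.isclose(np.linalg.det(m), 1.0):
                mats.append(m)
    return mats  # 24 elements

def rotate_kernel(k, m):
    """Apply a signed-permutation rotation to a cubic kernel by permuting
    and flipping its axes (one consistent convention for the group action)."""
    perm = [int(np.argmax(np.abs(m[r]))) for r in range(3)]
    out = np.transpose(k, perm)
    for ax in range(3):
        if m[ax, perm[ax]] < 0:
            out = np.flip(out, axis=ax)
    return out

rots = octahedral_rotations()
kernel = np.random.randn(3, 3, 3)
w = np.full(len(rots), 1.0 / len(rots))  # would be learnable in a real model
relaxed = sum(wi * rotate_kernel(kernel, m) for wi, m in zip(w, rots))
print(len(rots), relaxed.shape)  # 24 (3, 3, 3)
```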
null | https://openreview.net/forum?id=Ap4rSgB3O1 | @inproceedings{
koker2023higher,
title={Higher Order Equivariant Graph Neural Networks for Charge Density Prediction},
author={Teddy Koker and Keegan Quigley and Lin Li},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=Ap4rSgB3O1}
} | The calculation of electron density distribution in materials and molecules is central to the study of their quantum and macro-scale properties, yet accurate and efficient calculation remains a long-standing challenge in the field of material science. This work introduces ChargE3Net, an E(3)-equivariant graph neural network for predicting electron density in atomic systems. Unlike existing methods, ChargE3Net achieves equivariance through the use of higher-order tensor representations, and directly predicts the charge density at a set of desired locations. We demonstrate the effectiveness of ChargE3Net on large and diverse sets of molecules and materials, where it achieves state-of-the-art performance over existing methods, and scales to larger systems than what is feasible to compute with density functional theory. Through additional experimentation, we demonstrate the effect of introducing higher-order equivariant representations, and why they yield performance improvements in the charge density prediction setting. | Higher Order Equivariant Graph Neural Networks for Charge Density Prediction | [
"Teddy Koker",
"Keegan Quigley",
"Lin Li"
] | Workshop/AI4Science | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=AS5AZ5L8zk | @inproceedings{
li2023chatpathway,
title={ChatPathway: Conversational Large Language Models for Biology Pathway Detection},
author={Yanjing Li and Hannan Xu and Haiteng Zhao and Hongyu Guo and Shengchao Liu},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=AS5AZ5L8zk}
} | Biological pathways, like protein-protein interactions and metabolic networks, are vital for understanding diseases and drug development. Some databases such as KEGG are designed to store and map these pathways. However, many bioinformatics methods face limitations due to database constraints, and certain deep learning models struggle with the complexities of biochemical reactions involving large molecules and diverse enzymes. Importantly, the thorough exploration of biological pathways demands a deep understanding of scientific literature and past research. Despite this, recent advancements in Large Language Models (LLMs), especially ChatGPT, show promise. We first restructured data from KEGG and augmented it with molecule structural and functional information sourced from UniProt and PubChem. Our study evaluated LLMs, particularly GPT-3.5-turbo and Galactica, in predicting biochemical reactions and pathways using our constructed data. We also assessed its ability to predict novel pathways, not covered in its training dataset, using findings from recently published studies. While GPT demonstrated strengths in pathway mapping, Galactica encountered challenges. This research emphasizes the potential of merging LLMs with biology, suggesting a harmonious blend of human expertise and AI in decoding biological systems. | ChatPathway: Conversational Large Language Models for Biology Pathway Detection | [
"Yanjing Li",
"Hannan Xu",
"Haiteng Zhao",
"Hongyu Guo",
"Shengchao Liu"
] | Workshop/AI4Science | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=AIfqWNHKjo | @inproceedings{
lalande2023a,
title={A Transformer Model for Symbolic Regression towards Scientific Discovery},
author={Florian Lalande and Yoshitomo Matsubara and Naoya Chiba and Tatsunori Taniai and Ryo Igarashi and Yoshitaka Ushiku},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=AIfqWNHKjo}
} | Symbolic Regression (SR) searches for mathematical expressions which best describe numerical datasets. This makes it possible to circumvent interpretation issues inherent to artificial neural networks, but SR algorithms are often computationally expensive. This work proposes a new Transformer model aiming at Symbolic Regression particularly focused on its application for Scientific Discovery. We propose three encoder architectures with increasing flexibility but at the cost of column-permutation equivariance violation. Training results indicate that the most flexible architecture is required to prevent overfitting. Once trained, we apply our best model to the SRSD datasets (Symbolic Regression for Scientific Discovery datasets), which yields state-of-the-art results using the normalized tree-based edit distance, at no extra computational cost. | A Transformer Model for Symbolic Regression towards Scientific Discovery | [
"Florian Lalande",
"Yoshitomo Matsubara",
"Naoya Chiba",
"Tatsunori Taniai",
"Ryo Igarashi",
"Yoshitaka Ushiku"
] | Workshop/AI4Science | 2312.04070 | [
"https://github.com/omron-sinicx/transformer4sr"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
|
null | https://openreview.net/forum?id=A28gzFJd28 | @inproceedings{
thakkar2023using,
title={Using Foundation Models to Promote Digitization and Reproducibility in Scientific Experimentation},
author={Amol Thakkar and Andrea Giovannini and Antonio Foncubierta and Carlo Baldassari and Dimitrios Christofidellis and Federico Zipoli and Gianmarco Gabrieli and Jannis Born and Mara Graziani and Marvin Alberts and Matteo Manica and Michael Stiefel and Oliver Schilter and Teodoro Laino and Patrick Ruch},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=A28gzFJd28}
} | Accelerating scientific discovery through AI relies on the availability of high-quality data from scientific experimentation. Yet, scientific experimentation suffers from poor reproducibility and data capture challenges, mostly stemming from the difficulty in transcribing all details of an experiment and the different ways in which individuals document their lab work. With the emergence of foundation models capable of processing multiple data modalities including vision and language, there is a unique opportunity to redefine data and metadata capture and the corresponding scientific documentation process.
In this contribution, we discuss the challenges associated with lab digitization today and how multi-modal learning with transformer-based architectures can contribute to a new research infrastructure for scientific discovery in order to fully describe experimental methods and outcomes while facilitating data sharing and collaboration. We present a case study on a hybrid digital infrastructure and transformer-based vision-language models to transcribe high-dimensional raw data streams from non-invasive recording devices that represent the interaction of researchers with lab environments during scientific experimentation. The infrastructure is demonstrated in test cases related to semiconductor research and wet chemistry, where we show how vision-language foundation models fine-tuned on a limited set of experiments can be used to generate reports that exhibit high similarity with the recorded procedures. Our findings illustrate the feasibility of using foundation models to automate data capture and digitize all aspects of scientific experimentation, and suggest that the challenge of scarce training data for specific laboratory procedures can be alleviated by leveraging self-supervised pretraining on more abundant data from other domains. | Using Foundation Models to Promote Digitization and Reproducibility in Scientific Experimentation | [
"Amol Thakkar",
"Andrea Giovannini",
"Antonio Foncubierta",
"Carlo Baldassari",
"Dimitrios Christofidellis",
"Federico Zipoli",
"Gianmarco Gabrieli",
"Jannis Born",
"Mara Graziani",
"Marvin Alberts",
"Matteo Manica",
"Michael Stiefel",
"Oliver Schilter",
"Teodoro Laino",
"Patrick W. Ruch"
] | Workshop/AI4Science | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=9Z4XZOhwiz | @inproceedings{
koudounas2023bad,
title={Bad Exoplanet! Explaining Degraded Performance when Reconstructing Exoplanets Atmospheric Parameters},
author={Alkis Koudounas and Flavio Giobergia and Elena Baralis},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=9Z4XZOhwiz}
} | Deep learning techniques have been widely adopted to automate the reconstruction of atmospheric parameters in exoplanets, at a fraction of the computational cost required by traditional approaches. However, many of the reconstruction models used are intrinsically non-interpretable. With this work, we aim to produce descriptions for the characteristics of exoplanets that make their atmospheric composition reconstruction problematic.
We present a model-agnostic approach to detect biased data subgroups described via atmospheric parameters such as planet distance and surface gravity. We show that adopting an ensemble approach remarkably improves the quality of the outcomes overall, as well as at the subgroup level, on synthetic data simulated for the upcoming Ariel space mission. Experimental results further demonstrate the effectiveness of adopting explanation techniques in identifying and describing significant performance gaps between weak learners and their ensemble. Our work provides a more nuanced description of the results provided by deep learning techniques, to enable more meaningful assessments of what can be reasonably achieved with them. | Bad Exoplanet! Explaining Degraded Performance when Reconstructing Exoplanets Atmospheric Parameters | [
"Alkis Koudounas",
"Flavio Giobergia",
"Elena Baralis"
] | Workshop/AI4Science | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=82brfaM02h | @inproceedings{
hsieh2023xbrainlab,
title={{XB}rainLab: An Open-Source Software for Explainable Artificial Intelligence-Based {EEG} Analysis},
author={Chia-Ying Hsieh and Jing-Lun Chou and Yu-Hsin Chang and Chun-Shu Wei},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=82brfaM02h}
} | Recent advancements in explainable artificial intelligence have significantly accelerated scientific discoveries across various fields. In the realm of neuroscience research, the application of deep interpretation techniques has yielded valuable insights into brain functioning and mechanisms. We introduce XBrainLab, an accessible EEG analysis tool featuring a user-friendly graphical user interface (GUI) seamlessly compatible with code scripting. XBrainLab offers a comprehensive, end-to-end deep learning EEG analysis pipeline, capable of converting raw EEG signals into comprehensible visualizations of neural patterns. Through practical demonstrations using diverse EEG datasets, we highlight XBrainLab's versatility in interpreting neural representations in alignment with established neuroscience knowledge. This evolving open-source platform bridges cutting-edge computational techniques with the forefront of neuroscientific research. The code repository can be accessed at https://github.com/CECNL/XBrainLab. | XBrainLab: An Open-Source Software for Explainable Artificial Intelligence-Based EEG Analysis | [
"Chia-Ying Hsieh",
"Jing-Lun Chou",
"Yu-Hsin Chang",
"Chun-Shu Wei"
] | Workshop/AI4Science | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=7DULB2GfkG | @inproceedings{
na2023learning,
title={Learning Inter-Graph Interactions Between Heterogeneous Substructures of Chemical Systems},
author={Gyoung S. Na},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=7DULB2GfkG}
} | Complex chemical systems containing heterogeneous substructures are common in real-world applications. Various physical phenomena of the complex chemical systems are derived from the interactions between the heterogeneous substructures. However, existing graph representation learning methods for inter-graph interactions assumed graph-level interactions between homogeneous structures, such as organic molecules and inorganic crystalline materials. We propose a data descriptor of the complex chemical systems and a graph neural network for learning inter-graph interactions between organic and inorganic compounds. We applied the proposed method to predict the physical properties of hybrid solar cell materials containing heterogeneous substructures, which have received significant attention for sustainable energy resources. By learning heterogeneous inter-graph interactions, the proposed method achieved state-of-the-art accuracy in predicting band gaps of 1,682 hybrid solar cell materials. | Learning Inter-Graph Interactions Between Heterogeneous Substructures of Chemical Systems | [
"Gyoung S. Na"
] | Workshop/AI4Science | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=5ueXRkKMMg | @inproceedings{
wu2023compositional,
title={Compositional Generative Inverse Design},
author={Tailin Wu and Takashi Maruyama and Long Wei and Tao Zhang and Yilun Du and Gianluca Iaccarino and Jure Leskovec},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=5ueXRkKMMg}
} | Inverse design, where we seek to design input variables in order to optimize an underlying objective function, is an important problem that arises across fields such as mechanical engineering to aerospace engineering. Inverse design is typically formulated as an optimization problem, with recent works leveraging optimization across learned dynamics models. However, as models are optimized they tend to fall into adversarial modes, preventing effective sampling. We illustrate that by instead optimizing over the learned energy function captured by the diffusion model, we can avoid such adversarial examples and significantly improve design performance. We further illustrate how such a design system is compositional, enabling us to combine multiple different diffusion models representing subcomponents of our desired system to design systems with every specified component. In an N-body interaction task and a challenging 2D multi-airfoil design task, we demonstrate that our method allows us to design initial states and boundary shapes that are more complex than those in the training data. Our method outperforms state-of-the-art neural inverse design method for the N-body dataset and discovers formation flying to minimize drag in the multi-airfoil design task. | Compositional Generative Inverse Design | [
"Tailin Wu",
"Takashi Maruyama",
"Long Wei",
"Tao Zhang",
"Yilun Du",
"Gianluca Iaccarino",
"Jure Leskovec"
] | Workshop/AI4Science | 2401.13171 | [
"https://github.com/ai4science-westlakeu/cindm"
] | https://huggingface.co/papers/2401.13171 | 1 | 0 | 0 | 7 | [] | [] | [] | [] | [] | [] | 1 | poster |
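The compositional step in the abstract above, combining diffusion models that represent subcomponents of a design, reduces to summing their scores (equivalently, adding their energies) during sampling. The sketch below composes two analytic 1-D Gaussian "models" by score addition and checks the samples against the known product density; the analytic stand-ins replace the paper's learned models and design tasks.

```python
import numpy as np

rng = np.random.default_rng(2)

def score_a(x):  # component model A: prefers x near +2 (unit variance)
    return -(x - 2.0)

def score_b(x):  # component model B: prefers x near -1 (variance 1/4)
    return -4.0 * (x + 1.0)

def compose_sample(steps=2000, eps=1e-3):
    """Langevin sampling from the *product* of the two densities by summing
    their scores: the trick behind composing diffusion models."""
    x = rng.normal()
    for _ in range(steps):
        x += eps * (score_a(x) + score_b(x)) + np.sqrt(2 * eps) * rng.normal()
    return x

xs = np.array([compose_sample() for _ in range(300)])
# Product of N(2, 1) and N(-1, 1/4) is N(-0.4, 1/5): both preferences traded off
print("empirical mean:", xs.mean(), "(analytic: -0.4)")
```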
null | https://openreview.net/forum?id=5iryL0a6x4 | @inproceedings{
maser2023moleclues,
title={Mole{CLUE}s: Molecular Conformers Maximally In-Distribution for Predictive Models},
author={Michael Maser and Natasa Tagasovska and Jae Hyeon Lee and Andrew Watkins},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=5iryL0a6x4}
} | Structure-based molecular ML (SBML) models can be highly sensitive to input geometries and give predictions with large variance.
We present an approach to mitigate the challenge of selecting conformations for such models by generating conformers that explicitly minimize predictive uncertainty. To achieve this, we compute estimates of aleatoric and epistemic uncertainties that are differentiable w.r.t. latent posteriors. We then iteratively sample new latents in the direction of lower uncertainty by gradient descent. As we train our predictive models jointly with a conformer decoder, the new latent embeddings can be mapped to their corresponding inputs, which we call MoleCLUEs, or (molecular) counterfactual latent uncertainty explanations (Antorán et al., 2020). We assess our algorithm for the task of predicting drug properties from 3D structure with maximum confidence. We additionally analyze the structure trajectories obtained from conformer optimizations, which provide insight into the sources of uncertainty in SBML. | MoleCLUEs: Molecular Conformers Maximally In-Distribution for Predictive Models | [
"Michael Maser",
"Natasa Tagasovska",
"Jae Hyeon Lee",
"Andrew Martin Watkins"
] | Workshop/AI4Science | 2306.11681 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
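The optimization at the center of the abstract above, descending a differentiable uncertainty estimate with respect to a latent code, is easy to sketch: an ensemble's prediction variance is differentiable in the input latent, so autograd can walk the latent toward regions of agreement. The ensemble size, dimensions, and objective below are illustrative assumptions, and the decoder that would map the optimized latent back to a conformer is omitted.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# A small ensemble of property predictors over a latent conformer code z;
# disagreement across heads is a differentiable epistemic-uncertainty proxy.
ensemble = nn.ModuleList([nn.Sequential(nn.Linear(8, 32), nn.Tanh(),
                                        nn.Linear(32, 1)) for _ in range(5)])
for p in ensemble.parameters():
    p.requires_grad_(False)  # only the latent is optimized

def uncertainty(z):
    preds = torch.stack([m(z) for m in ensemble])
    return preds.var(dim=0).mean()  # ensemble variance, differentiable in z

z = torch.randn(1, 8, requires_grad=True)  # initial latent embedding
opt = torch.optim.Adam([z], lr=0.05)

print("initial uncertainty:", uncertainty(z).item())
for _ in range(200):  # walk the latent toward higher model confidence
    opt.zero_grad()
    uncertainty(z).backward()
    opt.step()
print("final uncertainty  :", uncertainty(z).item())
```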
null | https://openreview.net/forum?id=4fyg1VX80A | @inproceedings{
yang2023scbignn,
title={scBi{GNN}: Bilevel Graph Representation Learning for Cell Type Classification from Single-cell {RNA} Sequencing Data},
author={Rui Yang and Wenrui Dai and Chenglin Li and Junni Zou and Dapeng Wu and Hongkai Xiong},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=4fyg1VX80A}
} | Single-cell RNA sequencing (scRNA-seq) technology provides high-throughput gene expression data to study the cellular heterogeneity and dynamics of complex organisms. Graph neural networks (GNNs) have been widely used for automatic cell type classification, which is a fundamental problem to solve in scRNA-seq analysis. However, existing methods do not sufficiently exploit both gene-gene and cell-cell relationships, and thus the true potential of GNNs is not realized. In this work, we propose a bilevel graph representation learning method, named scBiGNN, to simultaneously mine the relationships at both gene and cell levels for more accurate single-cell classification. Specifically, scBiGNN comprises two GNN modules to identify cell types. A gene-level GNN is established to adaptively learn gene-gene interactions and cell representations via the self-attention mechanism, and a cell-level GNN builds on the cell-cell graph that is constructed from the cell representations generated by the gene-level GNN. To tackle the scalability issue for processing a large number of cells, scBiGNN adopts an Expectation Maximization (EM) framework in which the two modules are alternately trained via the E-step and M-step to learn from each other. Through this interaction, the gene- and cell-level structural information is integrated to gradually enhance the classification performance of both GNN modules. Experiments on benchmark datasets demonstrate that our scBiGNN outperforms a variety of existing methods for cell type classification from scRNA-seq data. | scBiGNN: Bilevel Graph Representation Learning for Cell Type Classification from Single-cell RNA Sequencing Data | [
"Rui Yang",
"Wenrui Dai",
"Chenglin Li",
"Junni Zou",
"Dapeng Wu",
"Hongkai Xiong"
] | Workshop/AI4Science | 2312.10310 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=4ePJjCP14u | @inproceedings{
azam2023rethinking,
title={Rethinking Bayesian Optimization with Gaussian Processes: Insights from Hyperspectral Trait Search},
author={Ruhana Azam and Sanmi Koyejo and Samuel B Fernandes and Mohammed Kebir and Andrew Leakey and Alexander Lipka},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=4ePJjCP14u}
} | The application of Bayesian Optimization using Gaussian Processes (BO-GP) for global optimization problems is ubiquitous across scientific disciplines because, beyond good performance, it supports exact inference, interpretability, and straightforward uncertainty quantification. In this paper, we revisit the biological application of BO-GP in searching trait spaces for genomic prediction, which uses genome-wide marker information to predict breeding values for agronomically important traits. Genomic predictions help breeders select desirable plants earlier in the field season without waiting to observe traits later. While these search spaces are known to be sharp and aperiodic, BO-GP is considered a feasible approach. However, our work finds that a simple random search surprisingly achieves comparable performance to BO-GP at significantly lower computational cost. Through a careful investigation, we can explain this observation as a fundamental limitation of BO-GP for sharp and aperiodic functions -- where the incompatible structure results in samples similar to random search but with higher computational cost. Our results highlight a blind spot in the current use of BO-GP for scientific applications, such as trait prediction, with sharp and aperiodic search spaces. | Rethinking Bayesian Optimization with Gaussian Processes: Insights from Hyperspectral Trait Search | [
"Ruhana Azam",
"Samuel B Fernandes",
"Andrew D.B. Leakey",
"Alexander Lipka",
"Mohammed Kebir",
"Sanmi Koyejo"
] | Workshop/AI4Science | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=4c4ArnOcBe | @inproceedings{
gao2023prediff,
title={PreDiff: Precipitation Nowcasting with Latent Diffusion Models},
author={Zhihan Gao and Xingjian Shi and Boran Han and Hao Wang and Xiaoyong Jin and Danielle Maddix and Yi Zhu and Mu Li and Bernie Wang},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=4c4ArnOcBe}
} | Earth system forecasting has traditionally relied on complex physical models that are computationally expensive and require significant domain expertise.
In the past decade, the unprecedented increase in spatiotemporal Earth observation data has enabled data-driven forecasting models using deep learning techniques.
These models have shown promise for diverse Earth system forecasting tasks but either struggle with handling uncertainty or neglect domain-specific prior knowledge, resulting in averaging possible futures to blurred forecasts or generating physically implausible predictions.
To address these limitations, we propose a *two-stage pipeline* for probabilistic spatiotemporal forecasting: 1) We develop *PreDiff*, a conditional latent diffusion model capable of probabilistic forecasts. 2) We incorporate an explicit knowledge alignment mechanism to align forecasts with domain-specific physical constraints.
This is achieved by estimating the deviation from imposed constraints at each denoising step and adjusting the transition distribution accordingly.
We conduct empirical studies on two datasets: *N*-body MNIST, a synthetic dataset with chaotic behavior, and SEVIR, a real-world precipitation nowcasting dataset.
Specifically, we impose the law of conservation of energy in *N*-body MNIST and anticipated precipitation intensity in SEVIR.
Experiments demonstrate the effectiveness of PreDiff in handling uncertainty, incorporating domain-specific prior knowledge, and generating forecasts that exhibit high operational utility. | PreDiff: Precipitation Nowcasting with Latent Diffusion Models | [
"Zhihan Gao",
"Xingjian Shi",
"Boran Han",
"Hao Wang",
"Xiaoyong Jin",
"Danielle C. Maddix",
"Yi Zhu",
"Mu Li",
"Bernie Wang"
] | Workshop/AI4Science | 2307.10422 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=4S3gmowKOr | @inproceedings{
cavazos2023explaining,
title={Explaining Drug Repositioning: A Case-Based Reasoning Graph Neural Network Approach},
author={Adriana Gonzalez Cavazos},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=4S3gmowKOr}
} | Drug repositioning, the identification of novel uses of existing therapies, has become an attractive strategy to accelerate drug development. Knowledge graphs (KGs) have emerged as a powerful representation of interconnected data within the biomedical domain. While link prediction on biomedical KGs can ascertain new connections between drugs and diseases, most approaches only state whether two nodes are related; they fail to explain why. In this project, we introduce an implementation of semi-parametric Case-Based Reasoning over subgraphs (CBR-SUBG), designed to derive a drug query’s underlying mechanisms by gathering graph patterns of similar nodes. We show that our adaptation outperforms existing KG link prediction models on a drug repositioning task.
Furthermore, our findings demonstrate that the CBR-SUBG strategy can provide interpretable biological paths as evidence supporting putative repositioning candidates, leading to more informed decisions. | Explaining Drug Repositioning: A Case-Based Reasoning Graph Neural Network Approach | [
"Adriana Carolina Gonzalez Cavazos",
"Roger Tu",
"Meghamala Sinha",
"Andrew Su"
] | Workshop/AI4Science | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=3awrGYl7YD | @inproceedings{
thais2023ai,
title={{AI} Ethics Education for Scientists},
author={Savannah Thais},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=3awrGYl7YD}
} | Machine learning (ML) and artificial intelligence (AI) are becoming core components of scientific research across fields. While formal and informal domain-specific learning opportunities are increasingly available to students and early-career scientists interested in AI/ML, AI ethics is often an overlooked part of these trainings. This is concerning, as knowledge of the ethical considerations around AI/ML is an essential component of training effective and responsible scientists. This work presents an introductory AI Ethics curriculum tailored for scientists and describes implementations of the curriculum in various training scenarios. | AI Ethics Education for Scientists | [
"Savannah Jennifer Thais"
] | Workshop/AI4Science | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=3WF88uMjGz | @inproceedings{
stark2023harmonic,
title={Harmonic Prior Self-conditioned Flow Matching for Multi-Ligand Docking and Binding Site Design},
author={Hannes Stark and Bowen Jing and Regina Barzilay and Tommi Jaakkola},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=3WF88uMjGz}
} | A significant amount of protein function, including enzymatic catalysis, requires binding small molecules. As such, designing binding pockets for small molecules has several impactful applications ranging from drug synthesis to energy storage. Towards this goal, we first develop HarmonicFlow, an improved generative process over 3D protein-ligand binding structures based on our self-conditioned flow matching objective. FlowSite extends this flow model to jointly generate a protein pocket's discrete residue types and the molecule's binding 3D structure. We show that HarmonicFlow improves upon the state-of-the-art generative processes for docking in simplicity, generality, and performance. Enabled by this structure model, FlowSite designs binding sites substantially better than baseline approaches and provides the first general solution for binding site design. | Harmonic Prior Self-conditioned Flow Matching for Multi-Ligand Docking and Binding Site Design | [
"Hannes Stark",
"Bowen Jing",
"Regina Barzilay",
"Tommi Jaakkola"
] | Workshop/AI4Science | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
||
null | https://openreview.net/forum?id=3Cg94Z1RZj | @inproceedings{
ruaud2023modelling,
title={Modelling Microbial Communities with Graph Neural Networks},
author={Albane Ruaud and Cansu Sancaktar and Marco Bagatella and Christoph Ratzke and Georg Martius},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=3Cg94Z1RZj}
} | Understanding the interactions and interplay of microorganisms is a great challenge with many applications in medical and environmental settings.
In this work, we model bacterial communities directly from their genomes using graph neural networks (GNNs). GNNs leverage the inductive bias induced by the set nature of bacteria, enforcing permutation invariance and granting combinatorial generalization. We propose to learn the dynamics implicitly by directly predicting community relative abundance profiles at steady state, thus escaping the need for growth curves.
On two real-world datasets, we show for the first time generalization to unseen bacteria and different community structures.
To investigate the prediction results more deeply, we create a simulation for flexible data generation and analyze the effects of bacterial interaction strength, community size, and the amount of training data. | Modelling Microbial Communities with Graph Neural Networks | [
"Albane Ruaud",
"Cansu Sancaktar",
"Marco Bagatella",
"Christoph Ratzke",
"Georg Martius"
] | Workshop/AI4Science | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=2vt5z5x9fS | @inproceedings{
yang2023scalable,
title={Scalable Diffusion for Materials Generation},
author={Sherry Yang and KwangHwan Cho and Amil Merchant and Pieter Abbeel and Dale Schuurmans and Igor Mordatch and Ekin Cubuk},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=2vt5z5x9fS}
} | Generative models trained on internet-scale data are capable of generating novel and realistic texts, images, and videos. A natural next question is whether these models can advance science, for example by generating novel stable materials. Traditionally, models with explicit structures (e.g., graphs) have been used in modeling structural relationships in scientific data (e.g., atoms and bonds in crystals), but generating structures can be difficult to scale to large and complex systems. Another challenge in generating materials is the mismatch between standard generative modeling metrics and downstream applications. For instance, common metrics such as the reconstruction error do not correlate well with the downstream goal of discovering novel stable materials. In this work, we tackle the scalability challenge by developing a unified crystal representation that can represent any crystal structure (UniMat), followed by training a diffusion probabilistic model on these UniMat representations. Our empirical results suggest that despite the lack of explicit structure modeling, UniMat can generate high fidelity crystal structures from larger and more complex chemical systems, outperforming previous graph-based approaches under various generative modeling metrics. To better connect the generation quality of materials to downstream applications, such as discovering novel stable materials, we propose additional metrics for evaluating generative models of materials, including per-composition formation energy and stability with respect to convex hulls through decomposition energy from Density Functional Theory (DFT). Lastly, we show that conditional generation with UniMat can scale to previously established crystal datasets with up to millions of crystal structures, outperforming random structure search (the current leading method for structure discovery) in discovering new stable materials. | Scalable Diffusion for Materials Generation | [
"Sherry Yang",
"KwangHwan Cho",
"Amil Merchant",
"Pieter Abbeel",
"Dale Schuurmans",
"Igor Mordatch",
"Ekin Dogus Cubuk"
] | Workshop/AI4Science | 2311.09235 | [
""
] | https://huggingface.co/papers/2311.09235 | 0 | 0 | 0 | 7 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=2vernsvOi2 | @inproceedings{
wang2023towards,
title={Towards out-of-distribution generalizable predictions of chemical kinetics properties},
author={Zihao Wang and Yongqiang Chen and Yang Duan and Weijiang Li and Bo Han and James Cheng and Hanghang Tong},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=2vernsvOi2}
} | Machine Learning (ML) techniques have found applications in estimating chemical kinetic properties. With the drug molecules accumulated through "AI4drug discovery", the next imperative lies in AI-driven design of high-throughput chemical synthesis processes, which requires estimating the properties of unseen reactions involving unexplored molecules. To this end, the existing ML approaches for kinetics property prediction are required to be Out-Of-Distribution (OOD) generalizable. In this paper, we categorize OOD kinetic property prediction into three levels (structure, condition, and mechanism), revealing unique aspects of such problems. Under this framework, we create comprehensive datasets to benchmark (1) the state-of-the-art ML approaches for reaction prediction in the OOD setting and (2) the state-of-the-art graph OOD methods in kinetics property prediction problems. Our results demonstrate the challenges and opportunities in OOD kinetics property prediction. Our datasets and benchmarks can further support research in this direction. | Towards out-of-distribution generalizable predictions of chemical kinetic properties | [
"Zihao Wang",
"Yongqiang Chen",
"Yang Duan",
"Weijiang Li",
"Bo Han",
"James Cheng",
"Hanghang Tong"
] | Workshop/AI4Science | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=2mq6uezuGj | @inproceedings{
kim2023singlecell,
title={Single-cell Masked Autoencoder: An Accurate and Interpretable Automated Immunophenotyper},
author={Jaesik Kim and Matei Ionita and Matthew Lee and Michelle McKeague and Ajinkya Pattekar and Mark Painter and Joost Wagenaar and Van Truong and Dylan Norton and Divij Mathew and Yonghyun Nam and Sokratis Apostolidis and Patryk Orzechowski and Sang-Hyuk Jung and Jakob Woerner and Yidi Huang and Nuala J. Meyer and Allison R. Greenplate and Dokyoon Kim and John Wherry},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=2mq6uezuGj}
} | High-throughput single-cell cytometry data are crucial for understanding the immune system’s role in diseases and treatment response. However, the prevailing methods used for analyzing cytometry data, specifically manual gating and clustering methods, have certain limitations with scalability, robustness, and accuracy. In this study, we propose a single-cell masked autoencoder (scMAE), which offers an automated solution for immunophenotyping tasks such as cell type prediction. Our model aims to preserve the cell type definitions designed by the user, making interpretation and cross-study comparisons more accessible. The scMAE model follows a pre-train and fine-tune paradigm. During pre-training, scMAE utilizes Masked Single-cell Modelling (MScM) to learn relationships between protein markers in immune cells without the need for prior labeling information. Subsequently, the scMAE is fine-tuned on multiple specialized tasks, using a smaller designated portion of labeled data. Through evaluation experiments, we demonstrated that the pre-trained scMAE overcomes limitations of manual gating and clustering methods, providing accurate and interpretable cellular immunophenotyping. The introduction of scMAE represents a significant advancement in immunology research, enabling prediction and interpretation at the cellular level in immune disease. | Single-cell Masked Autoencoder: An Accurate and Interpretable Automated Immunophenotyper | [
"Jaesik Kim",
"Matei Ionita",
"Matthew Eric Lee",
"Michelle McKeague",
"Ajinkya Pattekar",
"Mark Painter",
"Joost Wagenaar",
"Van Quynh-Thi Truong",
"Dylan Norton",
"Divij Mathew",
"Yonghyun Nam",
"Sokratis Apostolidis",
"Patryk Orzechowski",
"Sang-Hyuk Jung",
"Jakob Woerner",
"Yidi Huang",
"Nuala J. Meyer",
"Allison R. Greenplate",
"Dokyoon Kim",
"John Wherry"
] | Workshop/AI4Science | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=2QMmoG1UCQ | @inproceedings{
solovev2023ai,
title={{AI} Framework for Generative Design of Computational Experiments with Structures in Physical Environment},
author={Gleb Solovev and Anna Kalyuzhnaya and Alexander Hvatov and Nikita Starodubcev and Oleg Petrov and Nikolay Nikitin},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=2QMmoG1UCQ}
} | We discuss the applicability of an open-source generative design framework for the automated design of computational experiments with structures in physical environments across various scientific fields. It may be used for scientific experiments where the searched structure can be represented as a set of 2D undirected graphs with any topology (grids, polygons, trees), and the physical environment can be described with any numerical model (classic or data-driven). The proposed framework provides the tools to efficiently explore a space of experiment configurations with generative AI models and evolutionary algorithms. The results are shown in examples from different fields: design of microfluidic devices, coastal engineering, research on heat transfer, and acoustics.
Because the framework focuses on working with structures as graphs, it is possible to pre-train a generative NN that is used to create an initial population of optimized structures. The framework finds application in diverse areas such as coastal engineering, acoustics, engineering design, heat transfer, hydrodynamics, and medicine. | AI Framework for Generative Design of Computational Experiments with Structures in Physical Environment | [
"Gleb Vitalevich Solovev",
"Anna Kalyuzhnaya",
"Alexander Hvatov",
"Nikita Starodubcev",
"Oleg Petrov",
"Nikolay Nikitin"
] | Workshop/AI4Science | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |