Column schema (name: dtype, observed range/values):
bibtex_url: null
proceedings: stringlengths (42–42)
bibtext: stringlengths (197–848)
abstract: stringlengths (303–3.45k)
title: stringlengths (10–159)
authors: sequencelengths (1–34)
id: stringclasses (44 values)
arxiv_id: stringlengths (0–10)
GitHub: sequencelengths (1–1)
paper_page: stringclasses (899 values)
n_linked_authors: int64 (-1 to 13)
upvotes: int64 (-1 to 109)
num_comments: int64 (-1 to 13)
n_authors: int64 (-1 to 92)
Models: sequencelengths (0–100)
Datasets: sequencelengths (0–19)
Spaces: sequencelengths (0–100)
old_Models: sequencelengths (0–100)
old_Datasets: sequencelengths (0–19)
old_Spaces: sequencelengths (0–100)
paper_page_exists_pre_conf: int64 (0–1)
type: stringclasses (2 values)
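The schema above matches a Hugging Face datasets-style table. The snippet below is a minimal sketch of how such a table could be loaded and filtered; the repository id used is a hypothetical placeholder, not a path confirmed by this card.

```python
# Minimal sketch (assumption: the table is hosted on the Hugging Face Hub;
# "user/neurips-2023-workshop-metadata" is a hypothetical placeholder repo id).
from datasets import load_dataset

ds = load_dataset("user/neurips-2023-workshop-metadata", split="train")

# Keep rows from the GenPlan workshop that link an arXiv id.
genplan = ds.filter(lambda row: row["id"] == "Workshop/GenPlan" and row["arxiv_id"] != "")

for row in genplan:
    print(f'{row["type"]:6s} {row["arxiv_id"]:>10s}  {row["title"]}')
```

In the records below, rows without a linked paper page carry -1 in the count columns (n_linked_authors, upvotes, num_comments, n_authors).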
null
https://openreview.net/forum?id=FxxxBwcUlL
@inproceedings{ johnson2023value, title={Value Iteration with Value of Information Networks}, author={Samantha Johnson and Michael Buice and Koosha Khalvati}, booktitle={NeurIPS 2023 Workshop on Generalization in Planning}, year={2023}, url={https://openreview.net/forum?id=FxxxBwcUlL} }
Despite great success in recent years, deep reinforcement learning architectures still face a tremendous challenge in dealing with uncertainty and perceptual ambiguity. Similarly, networks that learn to build the world model from the input and perform model-based decision making in novel environments (e.g., value iteration networks) are mostly limited to fully observable tasks. In this paper, we propose a new planning module architecture, the VI$^2$N (Value Iteration with Value of Information Network), that learns to act in novel environments with a high amount of perceptual ambiguity. This architecture over-emphasizes reducing the uncertainty before exploiting the reward. Our network outperforms other deep architectures in challenging partially observable environments. Moreover, it generates interpretable cognitive maps highlighting both rewarding and informative locations. The similarity of the principles and computations of our network to observed cognitive processes and neural activity in the hippocampus draws a strong connection between VI$^2$N and principles of computation in biological networks.
Value Iteration with Value of Information Networks
[ "Samantha Johnson", "Michael Buice", "Koosha Khalvati" ]
Workshop/GenPlan
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
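For context on the value-iteration recurrence that value iteration networks (and, per the abstract above, VI$^2$N) build on, here is a minimal tabular sketch on a toy fully observable chain MDP. This is not the proposed VI$^2$N architecture or its value-of-information component; all quantities below are illustrative assumptions.

```python
import numpy as np

# Toy 1-D chain MDP: states 0..4, actions {left, right}, reward in the last state.
n_states, n_actions, gamma = 5, 2, 0.95
P = np.zeros((n_actions, n_states, n_states))   # P[a, s, s'] transition probabilities
for s in range(n_states):
    P[0, s, max(s - 1, 0)] = 1.0                 # action 0: move left
    P[1, s, min(s + 1, n_states - 1)] = 1.0      # action 1: move right
R = np.zeros((n_actions, n_states))
R[:, n_states - 1] = 1.0                         # reward for acting in the last state

V = np.zeros(n_states)
for _ in range(100):                             # value-iteration sweeps
    Q = R + gamma * np.einsum("ast,t->as", P, V)
    V = Q.max(axis=0)
print(np.round(V, 3))
```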
null
https://openreview.net/forum?id=FPqgo0jshE
@inproceedings{ black2023zeroshot, title={Zero-Shot Robotic Manipulation with Pre-Trained Image-Editing Diffusion Models}, author={Kevin Black and Mitsuhiko Nakamoto and Pranav Atreya and Homer Walke and Chelsea Finn and Aviral Kumar and Sergey Levine}, booktitle={NeurIPS 2023 Workshop on Generalization in Planning}, year={2023}, url={https://openreview.net/forum?id=FPqgo0jshE} }
If generalist robots are to operate in truly unstructured environments, they need to be able to recognize and reason about novel objects and scenarios. Such objects and scenarios might not be present in the robot's own training data. We propose SuSIE, a method that leverages an image editing diffusion model to act as a high-level planner by proposing intermediate subgoals that a low-level controller attains. Specifically, we fine-tune InstructPix2Pix on robot data such that it outputs a hypothetical future observation given the robot's current observation and a language command. We then use the same robot data to train a low-level goal-conditioned policy to reach a given image observation. We find that when these components are combined, the resulting system exhibits robust generalization capabilities. The high-level planner utilizes its Internet-scale pre-training and visual understanding to guide the low-level goal-conditioned policy, achieving significantly better generalization than conventional language-conditioned policies. We demonstrate that this approach solves real robot control tasks involving novel objects, distractors, and even environments, both in the real world and in simulation. The project website can be found at https://subgoal-image-editing.github.io
Zero-Shot Robotic Manipulation with Pre-Trained Image-Editing Diffusion Models
[ "Kevin Black", "Mitsuhiko Nakamoto", "Pranav Atreya", "Homer Walke", "Chelsea Finn", "Aviral Kumar", "Sergey Levine" ]
Workshop/GenPlan
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=9lkkqGagDF
@inproceedings{ wang2023coplanner, title={{COP}lanner: Plan to Roll Out Conservatively but to Explore Optimistically for Model-Based {RL}}, author={Xiyao Wang and Ruijie Zheng and Yanchao Sun and Ruonan Jia and Wichayaporn Wongkamjan and Huazhe Xu and Furong Huang}, booktitle={NeurIPS 2023 Workshop on Generalization in Planning}, year={2023}, url={https://openreview.net/forum?id=9lkkqGagDF} }
Dyna-style model-based reinforcement learning contains two phases: model rollouts to generate samples for policy learning, and real environment exploration using the current policy for dynamics model learning. However, due to the complex real-world environment, it is inevitable to learn an imperfect dynamics model with model prediction error, which can further mislead policy learning and result in sub-optimal solutions. In this paper, we propose $\texttt{COPlanner}$, a planning-driven framework for model-based methods to address the inaccurately learned dynamics model problem with conservative model rollouts and optimistic environment exploration. $\texttt{COPlanner}$ leverages an uncertainty-aware policy-guided model predictive control (UP-MPC) component to plan for multi-step uncertainty estimation. This estimated uncertainty then serves as a penalty during model rollouts and as a bonus during real environment exploration, respectively, to choose actions. Consequently, $\texttt{COPlanner}$ can avoid model-uncertain regions through conservative model rollouts, thereby alleviating the influence of model error. Simultaneously, it actively explores high-reward model-uncertain regions to reduce model error through optimistic real environment exploration. $\texttt{COPlanner}$ is a plug-and-play framework that can be applied to any Dyna-style model-based method. Experimental results on a series of proprioceptive and visual continuous control tasks demonstrate that both the sample efficiency and asymptotic performance of strong model-based methods are significantly improved when combined with $\texttt{COPlanner}$.
COPlanner: Plan to Roll Out Conservatively but to Explore Optimistically for Model-Based RL
[ "Xiyao Wang", "Ruijie Zheng", "Yanchao Sun", "Ruonan Jia", "Wichayaporn Wongkamjan", "Huazhe Xu", "Furong Huang" ]
Workshop/GenPlan
2310.07220
[ "" ]
https://huggingface.co/papers/2310.07220
3
1
0
7
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=8pbqiLYe0I
@inproceedings{ bonet2023general, title={General and Reusable Indexical Policies and Sketches}, author={Blai Bonet and Dominik Drexler and Hector Geffner}, booktitle={NeurIPS 2023 Workshop on Generalization in Planning}, year={2023}, url={https://openreview.net/forum?id=8pbqiLYe0I} }
Recently, a simple but powerful language for expressing and learning general policies and problem decompositions (sketches) has been introduced, which is based on collections of rules defined on a set of Boolean and numerical features. In this work, we consider extensions of this basic language aimed at making policies and sketches more flexible and reusable. For this, three basic extensions are considered: 1) internal memory states, as in finite state controllers, 2) indexical features, whose values are a function of the state and a number of internal registers that can be loaded with objects, and 3) modules that wrap up policies and sketches and allow them to call each other by passing parameters. In addition, unlike previously defined policies that select actions indirectly by the selection of state transitions, the new language allows for the selection of actions directly. The expressive power of the resulting language for recombining policies and sketches is illustrated through examples. The problem of learning policies and sketches in the new language is left for future work.
General and Reusable Indexical Policies and Sketches
[ "Blai Bonet", "Dominik Drexler", "Hector Geffner" ]
Workshop/GenPlan
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=8mDBzYDOYC
@inproceedings{ sigal2023improving, title={Improving Generalization in Reinforcement Learning Training Regimes for Social Robot Navigation}, author={Adam Sigal and Hsiu-Chin Lin and AJung Moon}, booktitle={NeurIPS 2023 Workshop on Generalization in Planning}, year={2023}, url={https://openreview.net/forum?id=8mDBzYDOYC} }
In order for autonomous mobile robots to navigate in human spaces, they must abide by our social norms. Reinforcement learning (RL) has emerged as an effective method to train robot sequential decision-making policies that are able to respect these norms. However, a large portion of existing work in the field conducts both RL training and testing in simplistic environments. This limits the generalization potential of these models to unseen environments and undermines the meaningfulness of their reported results. We propose a method to improve the generalization performance of RL social navigation methods using curriculum learning. By employing multiple environment types and by modeling pedestrians using multiple dynamics models, we are able to progressively diversify and escalate difficulty during training. Our results show that curriculum learning can be used to achieve better generalization performance than previous training methods. We also show that many existing state-of-the-art RL social navigation works do not evaluate their methods outside of their training environments, and thus their reported results do not reflect their policies' failure to adequately generalize to out-of-distribution scenarios. In response, we validate our training approach on larger and more crowded testing environments than those used in training, allowing for more meaningful measurements of model performance.
Improving Generalization in Reinforcement Learning Training Regimes for Social Robot Navigation
[ "Adam Sigal", "Hsiu-Chin Lin", "AJung Moon" ]
Workshop/GenPlan
2308.14947
[ "https://github.com/raise-lab/soc-nav-training" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=6oPZcdzFjW
@inproceedings{ jeen2023conservative, title={Conservative World Models}, author={Scott Jeen and Tom Bewley and Jonathan Cullen}, booktitle={NeurIPS 2023 Workshop on Generalization in Planning}, year={2023}, url={https://openreview.net/forum?id=6oPZcdzFjW} }
Zero-shot reinforcement learning (RL) promises to provide agents that can perform _any_ task in an environment after an offline pre-training phase. _Forward-backward_ (FB) representations represent remarkable progress towards this ideal, achieving 85% of the performance of task-specific agents in this setting. However, such performance is contingent on access to large and diverse datasets for pre-training, which cannot be expected for most real problems. Here, we explore how FB performance degrades when trained on small datasets that lack diversity, and mitigate this degradation with _conservatism_, a well-established feature of performant offline RL algorithms. We evaluate our family of methods across various datasets, domains and tasks, reaching 150% of vanilla FB performance in aggregate. Somewhat surprisingly, conservative FB algorithms also outperform the task-specific baseline, despite lacking access to reward labels and being required to maintain policies for all tasks. Conservative FB algorithms perform no worse than FB on full datasets, and so present little downside over their predecessor. Our code is available open-source via: https://enjeeneer.io/projects/conservative-world-models/.
Conservative World Models
[ "Scott Jeen", "Tom Bewley", "Jonathan Cullen" ]
Workshop/GenPlan
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=5kO572pUvr
@inproceedings{ gadot2023targeted, title={Targeted Uncertainty Reduction in Robust {MDP}s}, author={Uri Gadot and Kaixin Wang and Esther Derman and Navdeep Kumar and Kfir Levy and Shie Mannor}, booktitle={NeurIPS 2023 Workshop on Generalization in Planning}, year={2023}, url={https://openreview.net/forum?id=5kO572pUvr} }
Robust Markov decision processes (MDPs) provide a practical framework for generalizing trained agents to new environments. There, the objective is to maximize performance under the worst model in a given uncertainty set. By construction, this raises a performance-robustness dilemma: too large an uncertainty set yields guarantees against larger disturbances, whilst too small an uncertainty set may result in over-sensitivity to model misspecification. In this work, we introduce an online method that addresses the conservativeness of robust MDPs by strategically contracting the uncertainty set. First, we explicitly formulate the gradient of the robust return with respect to the uncertainty radius. This gradient derivation enables us to prioritize efforts in reducing uncertainty and leads us to interesting findings on the relation between the robust return and the uncertainty set. Second, we present a sampling-based algorithm aimed at enhancing our uncertainty estimation with respect to the robust return. Third, we illustrate the effectiveness of our algorithm within a tabular environment.
Targeted Uncertainty Reduction in Robust MDPs
[ "Uri Gadot", "Kaixin Wang", "Esther Derman", "Navdeep Kumar", "Kfir Levy", "Shie Mannor" ]
Workshop/GenPlan
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
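As background for the robust-MDP objective described above, the sketch below implements generic robust value iteration with an (s,a)-rectangular L1 uncertainty set around a nominal transition model. It is not the authors' gradient-based radius adaptation or their sampling algorithm; the radius and the nominal model are illustrative assumptions.

```python
import numpy as np

def worst_case_value(p_nom, v, radius):
    """Inner problem: min_p p.v subject to ||p - p_nom||_1 <= radius, p in the simplex.
    The minimizer moves up to radius/2 of probability mass from the highest-value
    next states onto the lowest-value next state."""
    p = p_nom.astype(float).copy()
    budget = radius / 2.0
    worst = int(np.argmin(v))
    for s in np.argsort(v)[::-1]:          # drain mass from high-value states first
        if s == worst or budget <= 0:
            continue
        move = min(p[s], budget)
        p[s] -= move
        p[worst] += move
        budget -= move
    return float(p @ v)

def robust_value_iteration(P_nom, R, gamma=0.95, radius=0.2, sweeps=200):
    """P_nom[a, s, s'] is the nominal model, R[a, s] the reward. Returns the
    robust value function under the worst transition model in each L1 ball."""
    n_actions, n_states, _ = P_nom.shape
    V = np.zeros(n_states)
    for _ in range(sweeps):
        Q = np.array([[R[a, s] + gamma * worst_case_value(P_nom[a, s], V, radius)
                       for s in range(n_states)] for a in range(n_actions)])
        V = Q.max(axis=0)
    return V
```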
null
https://openreview.net/forum?id=4xGCYC4dFp
@inproceedings{ hwang2023quantized, title={Quantized Local Independence Discovery for Fine-Grained Causal Dynamics Learning in Reinforcement Learning}, author={Inwoo Hwang and Yunhyeok Kwak and Suhyung Choi and Byoung-Tak Zhang and Sanghack Lee}, booktitle={NeurIPS 2023 Workshop on Generalization in Planning}, year={2023}, url={https://openreview.net/forum?id=4xGCYC4dFp} }
Incorporating causal relationships between the variables into dynamics learning has emerged as a promising approach to enhance robustness and generalization in reinforcement learning (RL). Recent studies have focused on examining conditional independences and leveraging only relevant state and action variables for prediction. However, such approaches tend to overlook local independence relationships that hold only under certain circumstances, referred to as events. In this work, we present a theoretically grounded and practical approach to dynamics learning which discovers such meaningful events and infers fine-grained causal relationships. The key idea is to learn a discrete latent variable that represents the pair of an event and the causal relationships specific to that event via vector quantization. As a result, our method provides a fine-grained understanding of the dynamics by capturing event-specific causal relationships, leading to improved robustness and generalization in RL. Experimental results demonstrate that our method is more robust to unseen states and generalizes well to downstream tasks compared to prior approaches. In addition, we find that our method successfully identifies meaningful events and recovers event-specific causal relationships.
Quantized Local Independence Discovery for Fine-Grained Causal Dynamics Learning in Reinforcement Learning
[ "Inwoo Hwang", "Yunhyeok Kwak", "Suhyung Choi", "Byoung-Tak Zhang", "Sanghack Lee" ]
Workshop/GenPlan
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=2N9HoSAJ48
@inproceedings{ patra2023relating, title={Relating Goal and Environmental Complexity for Improved Task Transfer: Initial Results}, author={Sunandita Patra and Paul Rademacher and Kristen Jacobson and Kyle Hassold and Onur Kulaksizoglu and Laura Hiatt and Mark Roberts and Dana Nau}, booktitle={NeurIPS 2023 Workshop on Generalization in Planning}, year={2023}, url={https://openreview.net/forum?id=2N9HoSAJ48} }
The complexity of an environment and the difficulty of an actor's goals both impact transfer learning in Reinforcement Learning (RL). Yet, few works have examined using the environment and goals in tandem to generate a learning curriculum that improves transfer. To explore this relationship, we introduce a task graph that quantifies the environment complexity using environment descriptors and the goal difficulty using goal descriptors; edges in the task graph indicate a change in the environment or the goal. We use the task graph in two sets of studies. First, we evaluate the task graph in two synthetic environments where we control environment and goal complexity. Second, we introduce an algorithm that generates a Task-Graph Curriculum to train policies using the task graph. In a delivery environment with up to ten skills, we demonstrate that a planner can execute these trained policies to achieve long-horizon goals in increasingly complex environments. Our results demonstrate that (1) the task graph promotes skill transfer in the synthetic environments and (2) the Task-Graph Curriculum trains nearly perfect policies and does so significantly faster than learning a policy from scratch.
Relating Goal and Environmental Complexity for Improved Task Transfer: Initial Results
[ "Sunandita Patra", "Paul Rademacher", "Kristen Jacobson", "Kyle Hassold", "Onur Kulaksizoglu", "Laura Hiatt", "Mark Roberts", "Dana Nau" ]
Workshop/GenPlan
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=225zToqXbL
@inproceedings{ lee2023uncertaintyaware, title={Uncertainty-Aware Action Repeating Options}, author={Joongkyu Lee and Seung Joon Park and Yunhao Tang and Min-hwan Oh}, booktitle={NeurIPS 2023 Workshop on Generalization in Planning}, year={2023}, url={https://openreview.net/forum?id=225zToqXbL} }
In reinforcement learning, employing temporal abstraction within the action space is a prevalent strategy for simplifying policy learning through temporally-extended actions. Recently, algorithms that repeat a primitive action for a certain number of steps, a simple method to implement temporal abstraction in practice, have demonstrated better performance than traditional algorithms. However, a significant drawback of earlier studies on action repetition is the potential for repeated sub-optimal actions to considerably degrade performance. To tackle this problem, we introduce a new algorithm that employs ensemble methods to estimate uncertainty when extending an action. Our framework offers flexibility, allowing policies to either prioritize exploration or adopt an uncertainty-averse stance based on their specific needs. We provide empirical results on various environments, highlighting the superior performance of our proposed method compared to other action-repeating algorithms. These results indicate that our uncertainty-aware strategy effectively counters the downsides of action repetition, enhancing policy learning efficiency.
Uncertainty-Aware Action Repeating Options
[ "Joongkyu Lee", "Seung Joon Park", "Yunhao Tang", "Min-hwan Oh" ]
Workshop/GenPlan
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=1S538MAe9x
@inproceedings{ jayawardana2023robust, title={Robust Driving Across Scenarios via Multi-residual Task Learning}, author={Vindula Jayawardana and Sirui Li and Cathy Wu and Yashar Farid and Kentaro Oguchi}, booktitle={NeurIPS 2023 Workshop on Generalization in Planning}, year={2023}, url={https://openreview.net/forum?id=1S538MAe9x} }
Conventional control, such as model-based control, is commonly utilized in autonomous driving due to its efficiency and reliability. However, real-world autonomous driving contends with a multitude of diverse traffic scenarios that are challenging for these planning algorithms. Model-free Deep Reinforcement Learning (DRL) presents a promising avenue in this direction, but learning DRL control policies that generalize to multiple traffic scenarios is still a challenge. To address this, we introduce Multi-residual Task Learning (MRTL), a generic learning framework based on multi-task learning that, for a set of task scenarios, decomposes the control into nominal components that are effectively solved by conventional control methods and residual terms which are solved using learning. We employ MRTL for fleet-level emission reduction in mixed traffic using autonomous vehicles as a means of system control. By analyzing the performance of MRTL across nearly 600 signalized intersections and 1200 traffic scenarios, we demonstrate that it emerges as a promising approach to synergize the strengths of DRL and conventional methods in generalizable control.
Robust Driving Across Scenarios via Multi-residual Task Learning
[ "Vindula Jayawardana", "Sirui Li", "Cathy Wu", "Yashar Farid", "Kentaro Oguchi" ]
Workshop/GenPlan
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=0uD5kEH8ES
@inproceedings{ jiralerspong2023forecaster, title={Forecaster: Towards Temporally Abstract Tree-Search Planning from Pixels}, author={Thomas Jiralerspong and Flemming Kondrup and Doina Precup and Khimya Khetarpal}, booktitle={NeurIPS 2023 Workshop on Generalization in Planning}, year={2023}, url={https://openreview.net/forum?id=0uD5kEH8ES} }
The ability to plan at many different levels of abstraction enables agents to envision the long-term repercussions of their decisions and thus enables sample-efficient learning. This becomes particularly beneficial in complex environments with high-dimensional state spaces such as pixels, where the goal is distant and the reward sparse. We introduce Forecaster, a deep hierarchical reinforcement learning approach which plans over high-level goals leveraging a temporally abstract world model. Forecaster learns an abstract model of its environment by modelling transition dynamics at an abstract level and training a world model on such transitions. It then uses this world model to choose optimal high-level goals through a tree-search planning procedure. It additionally trains a low-level policy that learns to reach those goals. Our method captures not only building world models with longer horizons, but also planning with such models in downstream tasks. We empirically demonstrate Forecaster's potential in both single-task learning and generalization to new tasks in the AntMaze domain.
Forecaster: Towards Temporally Abstract Tree-Search Planning from Pixels
[ "Thomas Jiralerspong", "Flemming Kondrup", "Doina Precup", "Khimya Khetarpal" ]
Workshop/GenPlan
2310.09997
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=0Pbh8u7oXi
@inproceedings{ mediratta2023a, title={A Study of Generalization in Offline Reinforcement Learning}, author={Ishita Mediratta and Qingfei You and Minqi Jiang and Roberta Raileanu}, booktitle={NeurIPS 2023 Workshop on Generalization in Planning}, year={2023}, url={https://openreview.net/forum?id=0Pbh8u7oXi} }
Despite the recent progress in offline reinforcement learning (RL) algorithms, agents are usually trained and tested on the same environment. In this paper, we perform an in-depth study of the generalization abilities of offline RL algorithms, showing that they struggle to generalize to new environments. We also introduce the first benchmark for evaluating generalization in offline learning, collecting datasets with varying sizes and skill levels from Procgen (2D video games) and WebShop (e-commerce websites). The datasets contain trajectories for a limited number of game levels or natural language instructions, and at test time the agent has to generalize to new levels or instructions. Our experiments reveal that existing offline learning algorithms perform significantly worse than online RL on both train and test environments. Behavioral cloning is a strong baseline, typically outperforming offline RL and sequence modeling approaches when trained on data from multiple environments and tested on new ones. Finally, we find that increasing the diversity of the data, rather than its size, improves generalization for all algorithms. Our study demonstrates the limited generalization of current offline learning algorithms, highlighting the need for more research in this area.
A Study of Generalization in Offline Reinforcement Learning
[ "Ishita Mediratta", "Qingfei You", "Minqi Jiang", "Roberta Raileanu" ]
Workshop/GenPlan
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=yavtWi6ew9
@inproceedings{ shumaylov2023provably, title={Provably Convergent Data-Driven Convex-Nonconvex Regularization}, author={Zakhar Shumaylov and Jeremy Budd and Subhadip Mukherjee and Carola-Bibiane Sch{\"o}nlieb}, booktitle={NeurIPS 2023 Workshop on Deep Learning and Inverse Problems}, year={2023}, url={https://openreview.net/forum?id=yavtWi6ew9} }
An emerging new paradigm for solving inverse problems is via the use of deep learning to learn a regularizer from data. This leads to high-quality results, but often at the cost of provable guarantees. In this work, we show how well-posedness and convergent regularisation arises within the convex-nonconvex (CNC) framework for inverse problems. We introduce a novel input weakly convex neural network (IWCNN) construction to adapt the method of learned adversarial regularization to the CNC framework. Empirically we show that our method overcomes numerical issues of previous adversarial methods.
Provably Convergent Data-Driven Convex-Nonconvex Regularization
[ "Zakhar Shumaylov", "Jeremy Budd", "Subhadip Mukherjee", "Carola-Bibiane Schönlieb" ]
Workshop/Deep_Inverse
2310.05812
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=wHnKIsCz4j
@inproceedings{ chan2023sud, title={{SUD}\${\textasciicircum}2\$: Supervision by Denoising Diffusion Models for Image Reconstruction}, author={Matthew Chan and Sean Young and Christopher Metzler}, booktitle={NeurIPS 2023 Workshop on Deep Learning and Inverse Problems}, year={2023}, url={https://openreview.net/forum?id=wHnKIsCz4j} }
Many imaging inverse problems---such as image-dependent in-painting and dehazing---are challenging because their forward models are unknown or depend on unknown latent parameters. While one can solve such problems by training a neural network with vast quantities of paired training data, such paired training data is often unavailable. In this paper, we propose a generalized framework for training image reconstruction networks when paired training data is scarce. In particular, we demonstrate the ability of image denoising algorithms and, by extension, denoising diffusion models to supervise network training in the absence of paired training data. (The unabridged version of this manuscript is available at https://arxiv.org/abs/2303.09642.)
SUD^2: Supervision by Denoising Diffusion Models for Image Reconstruction
[ "Matthew Chan", "Sean Young", "Christopher Metzler" ]
Workshop/Deep_Inverse
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=qvLWGWsMyq
@inproceedings{ mulang'2023feature, title={Feature Importance Random Search for Hyperparameter Optimization of Data-Consistent Model Inversion}, author={Isaiah Onando Mulang' and Stephen Obonyo and Timothy Rumbell and Viatcheslav Gurev and Catherine Wanjiru}, booktitle={NeurIPS 2023 Workshop on Deep Learning and Inverse Problems}, year={2023}, url={https://openreview.net/forum?id=qvLWGWsMyq} }
We consider hyperparameter optimization (HPO) of approaches that employ outputs of mechanistic models as priors in hybrid modeling for data-consistent inversion. An implicit density estimator (DE) models a non-parametric distribution of model input parameters, and the push-forward of those generated samples produces a model output distribution that should match a target distribution of observed data. A rejection sampler then filters out “undesirable” samples through a discriminator function. In a samples-generate-reject pipeline with the objective of fitting the push-forward to the observed experimental outputs, several DEs can be employed within the generator and discriminator components. However, the extensive evaluation of these end-to-end inversion frameworks is still lacking. Specifically, this data-consistent model inversion pipeline poses an extra challenge concerning the optimization of its constituent models. Traditional HPO methods are often limited to single-model scenarios and might not directly map to frameworks that optimize several models to achieve a single loss. To overcome the time overhead due to summative optimization of each component, and the expanded combinatorial search space, we introduce a method that performs an initial random search to bootstrap an HPO procedure that applies weighted feature importance to gradually update the hyperparameter set, periodically probing the pipeline to track the loss. Our experiments show a reduced number of time-intensive pipeline runs together with faster convergence.
Feature Importance Random Search for Hyperparameter Optimization of Data-Consistent Model Inversion
[ "Stephen Obonyo", "Isaiah Onando Mulang'", "Timothy Rumbell", "Catherine Wanjiru", "Viatcheslav Gurev" ]
Workshop/Deep_Inverse
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=qfWlk4RWOZ
@inproceedings{ neumayer2023boosting, title={Boosting Weakly Convex Ridge Regularizers with Spatial Adaptivity}, author={Sebastian Neumayer and Mehrsa Pourya and Alexis Goujon and Michael Unser}, booktitle={NeurIPS 2023 Workshop on Deep Learning and Inverse Problems}, year={2023}, url={https://openreview.net/forum?id=qfWlk4RWOZ} }
We propose to enhance 1-weakly convex ridge regularizers for image reconstruction by incorporating spatial adaptivity. To this end, we resort to a neural network that generates a weighting mask from an initial reconstruction, which is obtained with the baseline regularizer. Empirically, the learned mask can capture long-range dependencies and leads to a smaller penalization of inherent image structures. Our experiments show that spatial adaptivity improves the performance of image denoising and MRI reconstruction.
Boosting Weakly Convex Ridge Regularizers with Spatial Adaptivity
[ "Sebastian Neumayer", "Mehrsa Pourya", "Alexis Goujon", "Michael Unser" ]
Workshop/Deep_Inverse
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=mGwg10bgHk
@inproceedings{ daras2023solving, title={Solving Inverse Problems with Ambient Diffusion}, author={Giannis Daras and Alex Dimakis}, booktitle={NeurIPS 2023 Workshop on Deep Learning and Inverse Problems}, year={2023}, url={https://openreview.net/forum?id=mGwg10bgHk} }
We provide the first framework to solve inverse problems with diffusion models learned from linearly corrupted data. Our method leverages a generative model trained on one type of corruption (e.g. highly inpainted images) to perform posterior sampling conditioned on measurements from a different forward process (e.g. blurred images). This fully unlocks the potential of ambient diffusion models that are essential in scientific applications where access to fully observed samples is impossible or undesirable. Our experimental evaluation shows that diffusion models trained on corrupted data can even outperform models trained on clean data for image restoration in both speed and performance.
Solving Inverse Problems with Ambient Diffusion
[ "Giannis Daras", "Alex Dimakis" ]
Workshop/Deep_Inverse
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=l4ki2nsrwS
@inproceedings{ feng2023efficient, title={Efficient Bayesian Computational Imaging with a Surrogate Score-Based Prior}, author={Berthy Feng and Katherine Bouman}, booktitle={NeurIPS 2023 Workshop on Deep Learning and Inverse Problems}, year={2023}, url={https://openreview.net/forum?id=l4ki2nsrwS} }
We propose a surrogate function for efficient use of score-based priors for Bayesian inverse imaging. Recent work turned score-based diffusion models into probabilistic priors for solving ill-posed imaging problems by appealing to an ODE-based log-probability function. However, evaluating this function is computationally inefficient and inhibits posterior estimation of high-dimensional images. Our proposed surrogate prior is based on the evidence lower-bound of a score-based diffusion model. We demonstrate the surrogate prior on variational inference for efficient approximate posterior sampling of large images. Compared to the exact prior in previous work, our surrogate prior accelerates optimization of the variational image distribution by at least two orders of magnitude. We also find that our principled approach achieves higher-fidelity images than non-Bayesian baselines that involve hyperparameter-tuning at inference. Our work establishes a practical path forward for using score-based diffusion models as general-purpose priors for imaging.
Efficient Bayesian Computational Imaging with a Surrogate Score-Based Prior
[ "Berthy Feng", "Katherine Bouman" ]
Workshop/Deep_Inverse
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=jjCGqnhTr7
@inproceedings{ chien2023spacetime, title={Space-Time Implicit Neural Representations for Atomic Electron Tomography on Dynamic Samples}, author={Tiffany Chien and Colin Ophus and Laura Waller}, booktitle={NeurIPS 2023 Workshop on Deep Learning and Inverse Problems}, year={2023}, url={https://openreview.net/forum?id=jjCGqnhTr7} }
Solving for the 3D atomic structure of unknown materials is a key problem in materials science. Atomic electron tomography (AET) is a technique capable of reconstructing the 3D position and chemical species of all atoms in a nanoscale sample from a series of 2D projections from different angles. One challenge in AET is carbon contamination that accumulates on the sample while collecting the tomographic projections, creating an unwanted temporal dynamic that degrades reconstruction quality when existing tomography algorithms expect a static sample. In this work, we use an unsupervised implicit neural representation (INR) as a space-time model to computationally remove the contamination and recover a clean 3D reconstruction, and show promising preliminary results on simulated data.
Space-Time Implicit Neural Representations for Atomic Electron Tomography on Dynamic Samples
[ "Tiffany Chien", "Colin Ophus", "Laura Waller" ]
Workshop/Deep_Inverse
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=j6YOCS0cyM
@inproceedings{ hu2023poissongaussian, title={Poisson-Gaussian Holographic Phase Retrieval with Score-based Image Prior}, author={Jason Hu and Zongyu Li and Xiaojian Xu and Liyue Shen and Jeffrey A Fessler}, booktitle={NeurIPS 2023 Workshop on Deep Learning and Inverse Problems}, year={2023}, url={https://openreview.net/forum?id=j6YOCS0cyM} }
Phase retrieval (PR) is a crucial problem in many imaging applications. This study focuses on resolving the holographic phase retrieval problem in situations where the measurements are affected by a combination of Poisson and Gaussian noise, which commonly occurs in optical imaging systems. To address this problem, we propose a new algorithm called "AWFS" that uses the accelerated Wirtinger flow (AWF) with a score function as a generative prior. We calculate the gradient of the log-likelihood function for PR and provide an implementable estimate for it. Additionally, we introduce a generative prior in our regularization framework by using score matching to capture information about the gradient of image prior distributions. The results of our simulation experiments on three different datasets show the following. 1) By using the Poisson-Gaussian (PG) likelihood model, the proposed algorithm improves reconstruction compared to algorithms based solely on a Gaussian or Poisson likelihood. 2) The proposed score-based image prior method leads to improved reconstruction quality over the method based on a denoising diffusion probabilistic model (DDPM), as well as over plug-and-play alternating direction method of multipliers (PnP-ADMM) and regularization by denoising (RED).
Poisson-Gaussian Holographic Phase Retrieval with Score-based Image Prior
[ "Jason Hu", "Zongyu Li", "Xiaojian Xu", "Liyue Shen", "Jeffrey A Fessler" ]
Workshop/Deep_Inverse
2305.07712
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=ijSTOcngKs
@inproceedings{ belhasin2023volumeoriented, title={Volume-Oriented Uncertainty for Inverse Problems}, author={Omer Belhasin and Yaniv Romano and Daniel Freedman and Ehud Rivlin and Michael Elad}, booktitle={NeurIPS 2023 Workshop on Deep Learning and Inverse Problems}, year={2023}, url={https://openreview.net/forum?id=ijSTOcngKs} }
Uncertainty quantification for imaging-related inverse problems is drawing much attention lately. Existing approaches to this task define uncertainty regions per pixel while ignoring spatial correlations. In this paper we propose PUQ (Principal Uncertainty Quantification) -- a novel definition of uncertainty that takes into account spatial relationships within the image, thus providing a reduced uncertainty volume. Leveraging diffusion models, we derive uncertainty intervals around principal components of the empirical posterior distribution, accompanied by probabilistic guarantees. The proposed approach can operate globally on the entire image, or locally on patches, resulting in informative and interpretable uncertainty regions. We verify our approach on several inverse problems, showing significantly tighter uncertainty regions compared to baseline methods.
Volume-Oriented Uncertainty for Inverse Problems
[ "Omer Belhasin", "Yaniv Romano", "Daniel Freedman", "Ehud Rivlin", "Michael Elad" ]
Workshop/Deep_Inverse
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=ijJJA6uQbL
@inproceedings{ legin2023scorebased, title={Score-Based Likelihood Characterization for Inverse Problems in the Presence of Non-Gaussian Noise}, author={Ronan Legin and Alexandre Adam and Yashar Hezaveh and Laurence Perreault-Levasseur}, booktitle={NeurIPS 2023 Workshop on Deep Learning and Inverse Problems}, year={2023}, url={https://openreview.net/forum?id=ijJJA6uQbL} }
Likelihood analysis is typically limited to normally distributed noise due to the difficulty of determining the probability density function of complex, high-dimensional, non-Gaussian, and anisotropic noise. This work presents Score-based LIkelihood Characterization (SLIC), a framework that resolves this issue by building a data-driven noise model using a set of noise realizations from observations. We show that the approach produces unbiased and precise likelihoods even in the presence of highly non-Gaussian correlated and spatially varying noise. We use diffusion generative models to estimate the gradient of the probability density of noise with respect to data elements. In combination with the Jacobian of the physical model of the signal, we use Langevin sampling to produce independent samples from the unbiased likelihood. We demonstrate the effectiveness of the method using real data from the Hubble Space Telescope and James Webb Space Telescope.
Score-Based Likelihood Characterization for Inverse Problems in the Presence of Non-Gaussian Noise
[ "Ronan Legin", "Alexandre Adam", "Yashar Hezaveh", "Laurence Perreault-Levasseur" ]
Workshop/Deep_Inverse
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=igQtADDflr
@inproceedings{ patel2023improved, title={Improved Black-box Variational Inference for High-dimensional Bayesian Inversion involving Black-box Simulators}, author={Dhruv V Patel and Jonghyun Harry Lee and Matthew Farthing and Tyler Hesser and Peter Kitanidis and Eric Darve}, booktitle={NeurIPS 2023 Workshop on Deep Learning and Inverse Problems}, year={2023}, url={https://openreview.net/forum?id=igQtADDflr} }
Black-box forward model simulators are widely used in scientific and engineering domains for their exceptional capability to mimic complex physical systems. However, applying current state-of-the-art gradient-based Bayesian inference techniques like Hamiltonian Monte Carlo or Variational Inference with them becomes infeasible due to the opaque nature of these simulators. We address this challenge by introducing a modular approach that combines black-box variational inference (BBVI) with deep generative priors, making it possible to efficiently and accurately perform high-dimensional Bayesian inversion in these settings. Our method introduces a novel gradient correction term and a sampling strategy for BBVI, which collectively diminish gradient errors by several orders of magnitude across different dimensions, even with minimal batch sizes. Furthermore, integrating our method with Generative Adversarial Network (GAN)-based priors significantly enhances the solution of high-dimensional inverse problems. We validate our algorithm's effectiveness on a range of physics-based inverse problems using both simulated and experimental data. In comparison to Markov Chain Monte Carlo (MCMC) methods, our approach consistently delivers superior accuracy and substantial improvements in both statistical and computational efficiency, often by an order of magnitude.
Improved Black-box Variational Inference for High-dimensional Bayesian Inversion involving Black-box Simulators
[ "Dhruv V Patel", "Jonghyun Harry Lee", "Matthew Farthing", "Tyler Hesser", "Peter Kitanidis", "Eric Darve" ]
Workshop/Deep_Inverse
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=idCTwAgeDU
@inproceedings{ fabian2023adapt, title={Adapt and Diffuse: Sample-adaptive Reconstruction via Latent Diffusion Models}, author={Zalan Fabian and Berk Tinaz and Mahdi Soltanolkotabi}, booktitle={NeurIPS 2023 Workshop on Deep Learning and Inverse Problems}, year={2023}, url={https://openreview.net/forum?id=idCTwAgeDU} }
Inverse problems arise in a multitude of applications, where the goal is to recover a clean signal from noisy and possibly (non)linear observations. The difficulty of a reconstruction problem depends on multiple factors, such as the structure of the ground truth signal, the severity of the degradation, the implicit bias of the reconstruction model and the complex interactions between the above factors. This results in natural sample-by-sample variation in the difficulty of a reconstruction task, which is often overlooked by contemporary techniques, resulting in long inference times, subpar performance and wasteful resource allocation. We propose a novel method to estimate the degradation severity of noisy, degraded signals in the latent space of an autoencoder. We show that the estimated severity has strong correlation with the true corruption level and can give useful hints at the difficulty of reconstruction problems on a sample-by-sample basis. Furthermore, we propose a reconstruction method based on latent diffusion models that leverages the predicted degradation severities to fine-tune the reverse diffusion sampling trajectory and thus achieve sample-adaptive inference.
Adapt and Diffuse: Sample-adaptive Reconstruction via Latent Diffusion Models
[ "Zalan Fabian", "Berk Tinaz", "Mahdi Soltanolkotabi" ]
Workshop/Deep_Inverse
2309.06642
[ "https://github.com/z-fabian/flash-diffusion" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=hUOHV4SKNw
@inproceedings{ mancu2023selfsupervised, title={Self-supervised Low-rank plus Sparse Network for Radial {MRI} Reconstruction}, author={Andrei Mancu and Wenqi Huang and Gastao Lima da Cruz and Daniel Rueckert and Kerstin Hammernik}, booktitle={NeurIPS 2023 Workshop on Deep Learning and Inverse Problems}, year={2023}, url={https://openreview.net/forum?id=hUOHV4SKNw} }
In this work, we introduce a physics-guided self-supervised learning approach to reconstruct dynamic magnetic resonance images (MRI) from sparsely sampled radial data. The architecture incorporates a variable splitting scheme via a quadratic penalty approach, consisting of iterative data-consistency and denoising steps. To accommodate cardiac motion, the denoiser implements a learnable low-rank plus sparse component instead of a conventional convolutional neural network. We compare the proposed model to iterative regularized MRI reconstruction techniques and to other deep neural network approaches adapted to radial data, in both supervised and self-supervised settings. Our proposed method surpasses the performance of other techniques for single-heartbeat and four-heartbeat MRI reconstruction. Furthermore, our approach outperforms other deep neural network reconstruction approaches in both the supervised and self-supervised settings.
Self-supervised Low-rank plus Sparse Network for Radial MRI Reconstruction
[ "Andrei Mancu", "Wenqi Huang", "Gastao Lima da Cruz", "Daniel Rueckert", "Kerstin Hammernik" ]
Workshop/Deep_Inverse
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=hEyIHsyZ9F
@inproceedings{ corso2023particle, title={Particle Guidance: non-I.I.D. Diverse Sampling with Diffusion Models}, author={Gabriele Corso and Yilun Xu and Valentin De Bortoli and Regina Barzilay and Tommi Jaakkola}, booktitle={NeurIPS 2023 Workshop on Deep Learning and Inverse Problems}, year={2023}, url={https://openreview.net/forum?id=hEyIHsyZ9F} }
In light of the widespread success of generative models, a significant amount of research has gone into speeding up their sampling time. However, generative models are often sampled multiple times to obtain a diverse set of samples, incurring a cost that is orthogonal to sampling time. We tackle the question of how to improve diversity and sample efficiency by moving beyond the common assumption of independent samples. For this, we propose particle guidance, an extension of diffusion-based generative sampling where a joint-particle time-evolving potential enforces diversity. We analyze theoretically the joint distribution that particle guidance generates, its implications on the choice of potential, and the connections with methods in other disciplines. Empirically, we test the framework both in the setting of conditional image generation, where we are able to increase diversity without affecting quality, and molecular conformer generation, where we reduce the state-of-the-art median error by 13% on average.
Particle Guidance: non-I.I.D. Diverse Sampling with Diffusion Models
[ "Gabriele Corso", "Yilun Xu", "Valentin De Bortoli", "Regina Barzilay", "Tommi Jaakkola" ]
Workshop/Deep_Inverse
2310.13102
[ "https://github.com/gcorso/particle-guidance" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
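The joint-particle potential described above can be illustrated with a generic guided-sampling step in which a repulsive kernel gradient is added to a model score. This is only a schematic sketch under assumed shapes and hyperparameters, not the authors' released code (linked above) or their exact potential.

```python
import torch

def rbf_repulsion_grad(x, bandwidth=1.0):
    """Gradient of a joint log-potential that penalizes pairwise similarity
    between particles (RBF kernel), pushing particles apart. x: (n, d)."""
    diff = x.unsqueeze(1) - x.unsqueeze(0)            # (n, n, d), diff[i, j] = x_i - x_j
    sq = (diff ** 2).sum(-1)                          # (n, n) squared distances
    k = torch.exp(-sq / (2 * bandwidth ** 2))         # kernel matrix
    # grad wrt x_i of -sum_j k(x_i, x_j) = sum_j k[i, j] * (x_i - x_j) / bandwidth^2
    return (k.unsqueeze(-1) * diff).sum(1) / bandwidth ** 2

def guided_langevin_step(x, score_fn, step=1e-2, guidance=0.1):
    """One Langevin-style update combining a model score with a diversity term."""
    noise = torch.randn_like(x)
    grad = score_fn(x) + guidance * rbf_repulsion_grad(x)
    return x + 0.5 * step * grad + (step ** 0.5) * noise
```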
null
https://openreview.net/forum?id=ee6feOhQ4H
@inproceedings{ ma2023optogpt, title={Opto{GPT}: A Versatile Inverse Design Model for Optical Multilayer Thin Film Structures}, author={Taigao Ma and L. Jay Guo and Haozhu Wang}, booktitle={NeurIPS 2023 Workshop on Deep Learning and Inverse Problems}, year={2023}, url={https://openreview.net/forum?id=ee6feOhQ4H} }
Optical multilayer thin film structures are widely used in various photonic applications. Inverse design is an important but difficult step in enabling these applications: it seeks to find the best structure (material and thickness arrangement) given a target optical response. Recently, deep learning-based methods have been developed to solve the inverse design problem efficiently. However, existing methods usually fix the material arrangement and only design the thicknesses, which is not versatile across different material arrangements and may lead to sub-optimal performance. In this study, we resolve this issue by treating the structure as a sequence and using structure tokens to represent the material and thickness simultaneously. The inverse design problem can then be formulated as a common sequence generation task conditioned on the input optical responses. Based on this, we propose OptoGPT, a versatile inverse design model that can design material and thickness simultaneously, significantly expanding the design capability. In addition, probability resampling further provides a versatile way to satisfy fabrication and design requirements in practical applications.
OptoGPT: A Versatile Inverse Design Model for Optical Multilayer Thin Film Structures
[ "Taigao Ma", "L. Jay Guo", "Haozhu Wang" ]
Workshop/Deep_Inverse
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=bz24Rt6hZ7
@inproceedings{ zhang2023nbi, title={nbi: the Astronomer's Package for Neural Posterior Estimation}, author={Keming Zhang and Joshua Bloom and Nina Hernitschek}, booktitle={NeurIPS 2023 Workshop on Deep Learning and Inverse Problems}, year={2023}, url={https://openreview.net/forum?id=bz24Rt6hZ7} }
Despite the promise of Neural Posterior Estimation (NPE) methods in astronomy, the adoption of NPE into routine inference workflows has been slow. We identify three critical issues: the need for custom featurizer networks tailored to the observed data, inference inexactness, and the under-specification of physical forward models. To address the first two issues, we introduce a new framework and open-source software nbi (Neural Bayesian Inference), which supports both amortized and sequential NPE. First, nbi provides built-in "featurizer" networks with demonstrated efficacy on sequential data, such as light curves and spectra, thus obviating the need for this customization on the user end. Second, we introduce a modified algorithm, SNPE-IS, which facilitates asymptotically exact inference by using the surrogate posterior under NPE only as a proposal distribution for importance sampling. These features allow nbi to be applied off-the-shelf to astronomical inference problems involving light curves and spectra. We discuss how nbi may serve as an effective alternative to existing methods such as Nested Sampling. Our package is at https://github.com/kmzzhang/nbi.
nbi: the Astronomer's Package for Neural Posterior Estimation
[ "Keming Zhang", "Joshua Bloom", "Stéfan van der Walt", "Nina Hernitschek" ]
Workshop/Deep_Inverse
2312.03824
[ "https://github.com/kmzzhang/nbi" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
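The SNPE-IS idea summarized above, using the NPE surrogate posterior only as a proposal for importance sampling, reduces to self-normalized importance weighting. The sketch below is generic; the callables are placeholders and do not reflect the nbi package API.

```python
import numpy as np

def snis_estimate(theta, log_prior, log_likelihood, log_proposal, f):
    """Self-normalized importance sampling: use samples theta drawn from a
    proposal q (e.g., an NPE surrogate posterior) to estimate E_posterior[f].
    All arguments except theta are callables returning one value per sample."""
    log_w = log_prior(theta) + log_likelihood(theta) - log_proposal(theta)
    w = np.exp(log_w - log_w.max())    # subtract the max for numerical stability
    w /= w.sum()                       # self-normalize the weights
    return float(np.sum(w * f(theta))), w
```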
null
https://openreview.net/forum?id=ZL5wlFMg0Y
@inproceedings{ dasgupta2023conditional, title={Conditional score-based generative models for solving physics-based inverse problems}, author={Agnimitra Dasgupta and Javier Murgoitio-Esandi and Deep Ray and Assad Oberai}, booktitle={NeurIPS 2023 Workshop on Deep Learning and Inverse Problems}, year={2023}, url={https://openreview.net/forum?id=ZL5wlFMg0Y} }
We propose to sample from high-dimensional posterior distributions arising in physics-based inverse problems using conditional score-based generative models. The proposed approach trains a noise-conditional score network to approximate the score function of the posterior distribution. Then, the network is used to sample from the posterior distribution through annealed Langevin dynamics. The proposed method is applicable even when we can only simulate the forward problem. We apply it to two physics-based inverse problems and compare its performance with conditional generative adversarial networks. Results show that conditional score-based generative models can reliably perform Bayesian inference.
Conditional score-based generative models for solving physics-based inverse problems
[ "Agnimitra Dasgupta", "Javier Murgoitio-Esandi", "Deep Ray", "Assad Oberai" ]
Workshop/Deep_Inverse
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
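The annealed Langevin dynamics mentioned above follows the standard update x <- x + (eps/2) * score + sqrt(eps) * noise over a decreasing noise schedule. The sketch below assumes a placeholder conditional score network score_net(x, sigma, y) and illustrative step sizes; it is not the authors' trained model or exact scheduler.

```python
import torch

@torch.no_grad()
def annealed_langevin_sample(score_net, y, shape, sigmas, steps_per_level=100,
                             base_step=2e-5):
    """Generic annealed Langevin dynamics for conditional (posterior) sampling.
    score_net(x, sigma, y) is assumed to approximate grad_x log p(x | y) at noise
    level sigma; sigmas is a decreasing sequence of noise levels."""
    x = torch.randn(shape)
    for sigma in sigmas:
        step = base_step * (sigma / sigmas[-1]) ** 2   # larger steps at higher noise
        for _ in range(steps_per_level):
            noise = torch.randn_like(x)
            x = x + 0.5 * step * score_net(x, sigma, y) + step ** 0.5 * noise
    return x
```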
null
https://openreview.net/forum?id=VqFHhTYonI
@inproceedings{ ekmekci2023quantifying, title={Quantifying Generative Model Uncertainty in Posterior Sampling Methods for Computational Imaging}, author={Canberk Ekmekci and Mujdat Cetin}, booktitle={NeurIPS 2023 Workshop on Deep Learning and Inverse Problems}, year={2023}, url={https://openreview.net/forum?id=VqFHhTYonI} }
The idea of using generative models to perform posterior sampling for imaging inverse problems has elicited attention from the computational imaging community. The main limitation of the existing generative model-based posterior sampling methods is that they do not provide any information about how uncertain the generative model is. In this work, we propose a quick-to-adopt framework that can transform a given generative model-based posterior sampling method into a statistical model that can quantify the generative model uncertainty. The proposed framework is built upon the principles of Bayesian neural networks with latent variables and uses ensembling to capture the uncertainty on the parameters of a generative model. We evaluate the proposed framework on the computed tomography reconstruction problem and demonstrate its capability to quantify generative model uncertainty with an illustrative example. We also show that the proposed method can improve the quality of the reconstructions and the predictive uncertainty estimates of the generative model-based posterior sampling method used within the proposed framework.
Quantifying Generative Model Uncertainty in Posterior Sampling Methods for Computational Imaging
[ "Canberk Ekmekci", "Mujdat Cetin" ]
Workshop/Deep_Inverse
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=VWKe1ZIDsa
@inproceedings{ shastri2023phase, title={Phase Retrieval via Deep Expectation-Consistent Approximation}, author={Saurav K Shastri and Philip Schniter}, booktitle={NeurIPS 2023 Workshop on Deep Learning and Inverse Problems}, year={2023}, url={https://openreview.net/forum?id=VWKe1ZIDsa} }
The expectation consistent (EC) approximation framework is a state-of-the-art approach for solving (generalized) linear inverse problems with random forward operators and i.i.d. signal priors. In image inverse problems, however, both the forward operator and image pixels are structured, which plagues traditional EC implementations. In this work, we propose a novel incarnation of EC that exploits deep neural networks to handle structured operators and signals. For phase-retrieval, we propose a simplified variant called ''deepECpr'' that reduces to iterative denoising. In experiments recovering natural images from phaseless, shot-noise corrupted, coded-diffraction-pattern outputs, we observe accuracy surpassing the state-of-the-art prDeep (Metzler et al., 2018) and Diffusion Posterior Sampling (Chung et al., 2023) approaches with two-orders-of-magnitude complexity reduction.
Phase Retrieval via Deep Expectation-Consistent Approximation
[ "Saurav K Shastri", "Philip Schniter" ]
Workshop/Deep_Inverse
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=RV4bTQfLW4
@inproceedings{ berk2023modeladapted, title={Model-adapted Fourier sampling for generative compressed sensing}, author={Aaron Berk and Simone Brugiapaglia and Yaniv Plan and Matthew Scott and Xia Sheng and Ozgur Yilmaz}, booktitle={NeurIPS 2023 Workshop on Deep Learning and Inverse Problems}, year={2023}, url={https://openreview.net/forum?id=RV4bTQfLW4} }
We study generative compressed sensing when the measurement matrix is randomly subsampled from a unitary matrix (with the DFT as an important special case). It was recently shown that $O(kdn\lVert \boldsymbol{\alpha}\rVert_{\infty}^{2})$ uniformly random Fourier measurements are sufficient to recover signals in the range of a neural network $G:\mathbb{R}^k \to \mathbb{R}^n$ of depth $d$, where each component of the so-called local coherence vector $\boldsymbol{\alpha}$ quantifies the alignment of a corresponding Fourier vector with the range of $G$. We construct a model-adapted sampling strategy with an improved sample complexity of $\mathcal{O}(kd\lVert \boldsymbol{\alpha}\rVert_{2}^{2})$ measurements. This is enabled by: (1) new theoretical recovery guarantees that we develop for nonuniformly random sampling distributions and then (2) optimizing the sampling distribution to minimize the number of measurements needed for these guarantees. This development offers a sample complexity applicable to natural signal classes, which are often almost maximally coherent with low Fourier frequencies. Finally, we consider a surrogate sampling scheme, and validate its performance in recovery experiments using the CelebA dataset.
Model-adapted Fourier sampling for generative compressed sensing
[ "Aaron Berk", "Simone Brugiapaglia", "Yaniv Plan", "Matthew Scott", "Xia Sheng", "Ozgur Yilmaz" ]
Workshop/Deep_Inverse
2310.04984
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=RP12BvHTnt
@inproceedings{ senouf2023inferring, title={Inferring Cardiovascular Biomarkers with Hybrid Model Learning}, author={Ortal Senouf and Jens Behrmann and Joern-Henrik Jacobsen and Pascal Frossard and Emmanuel Abbe and Antoine Wehenkel}, booktitle={NeurIPS 2023 Workshop on Deep Learning and Inverse Problems}, year={2023}, url={https://openreview.net/forum?id=RP12BvHTnt} }
Wearable devices offer continuous monitoring of biomarkers, presenting an opportunity to diagnose cardiovascular diseases earlier, potentially reducing their fatality rate. While machine learning holds promise for predicting cardiovascular biomarkers from sensor data, its use often depends on the availability of labeled datasets, which are limited due to technical and ethical constraints. On the other hand, biophysical simulations present a solution to data scarcity but face challenges in model transfer from simulation to reality due to inherent model simplifications and misspecifications. Building on advancements in hybrid learning, we introduce a method that combines a pulse-wave propagation model, rooted in biophysical simulations, with a correction model trained with unlabeled real-world data. This generative model transforms cardiovascular parameters into real-world sensor measurements and, when trained as an auto-encoder, also provides the inverse transformation, mapping measurements to cardiovascular biomarkers. Notably, when assessed using real pulse-wave data, our hybrid method appears to outperform models based solely on simulations in inferring cardiovascular biomarkers, opening new avenues for inferring physiological biomarkers in data-limited scenarios.
Inferring Cardiovascular Biomarkers with Hybrid Model Learning
[ "Ortal Senouf", "Jens Behrmann", "Joern-Henrik Jacobsen", "Pascal Frossard", "Emmanuel Abbe", "Antoine Wehenkel" ]
Workshop/Deep_Inverse
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=QaIEU0wWj8
@inproceedings{ bakker2023switching, title={Switching policies for solving inverse problems}, author={Tim Bakker and Fabio Valerio Massoli and Thomas Hehn and Tribhuvanesh Orekondy and Arash Behboodi}, booktitle={NeurIPS 2023 Workshop on Deep Learning and Inverse Problems}, year={2023}, url={https://openreview.net/forum?id=QaIEU0wWj8} }
In recent years, inverse problems for black-box simulators have enjoyed increased attention from the machine learning community due to their prevalence in science and engineering domains. Such simulators describe a forward process $f: (\psi, x) \rightarrow y$. Here the intent is to optimise simulator parameters $\psi$ to minimise some observation loss on $y$, under some input distribution on $x$. Optimisation of such objectives is often challenging, since it is not trivial to estimate simulator gradients accurately. In settings where multiple related inverse problems need to be solved simultaneously, from-scratch/ab-initio optimisation of each may be infeasible if the forward model is expensive to evaluate. In this paper, we propose a novel method for solving such families of inverse problems with reinforcement learning. We train a policy to guide the optimisation by selecting between gradients estimated numerically from the simulator and gradients estimated from a pre-trained surrogate model. After training the surrogate and the policy, downstream inverse problem optimisations require 10\%-70\% fewer simulator evaluations. Moreover, the policy successfully optimises functions where using simulator gradient estimates alone fails.
Switching policies for solving inverse problems
[ "Tim Bakker", "Fabio Valerio Massoli", "Thomas Hehn", "Tribhuvanesh Orekondy", "Arash Behboodi" ]
Workshop/Deep_Inverse
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=P2r4AxyeE2
@inproceedings{ ravula2023optimizing, title={Optimizing Sampling Patterns for Compressed Sensing {MRI} with Diffusion Generative Models}, author={Sriram Ravula and Brett Levac and Ajil Jalal and Jon Tamir and Alex Dimakis}, booktitle={NeurIPS 2023 Workshop on Deep Learning and Inverse Problems}, year={2023}, url={https://openreview.net/forum?id=P2r4AxyeE2} }
Diffusion-based generative models have been used as powerful priors for magnetic resonance imaging (MRI) reconstruction. We present a learning method to optimize sub-sampling patterns for compressed sensing multi-coil MRI that leverages pre-trained diffusion generative models. Crucially, during training we use a single-step reconstruction based on the posterior mean estimate given by the diffusion model and the MRI measurement process. Experiments across varying acceleration factors and pattern types show that sampling operators learned with our method lead to competitive, and in the case of 2D patterns, improved reconstructions compared to baseline patterns.
Optimizing Sampling Patterns for Compressed Sensing MRI with Diffusion Generative Models
[ "Sriram Ravula", "Brett Levac", "Ajil Jalal", "Jon Tamir", "Alex Dimakis" ]
Workshop/Deep_Inverse
2306.03284
[ "https://github.com/sriram-ravula/mri_sampling_diffusion" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=NRGZmGbteB
@inproceedings{ ozturkler2023regularization, title={Regularization by Denoising Diffusion Process for {MRI} Reconstruction}, author={Batu Ozturkler and Morteza Mardani and Arash Vahdat and Jan Kautz and John M. Pauly}, booktitle={NeurIPS 2023 Workshop on Deep Learning and Inverse Problems}, year={2023}, url={https://openreview.net/forum?id=NRGZmGbteB} }
Diffusion models have recently delivered state-of-the-art performance for MRI reconstruction with improved robustness. However, these models fail when there is a large distribution shift, and their long inference times impede their clinical utility. Recently, regularization by denoising diffusion process (RED-diff) was introduced for solving general inverse problems. RED-diff uses a variational sampler based on a measurement consistency loss and a score matching regularization. In this paper, we extend RED-diff to MRI reconstruction. RED-diff formulates MRI reconstruction as stochastic optimization, and outperforms diffusion baselines in PSNR/SSIM with $3 \times$ faster inference while using the same amount of memory. The code is publicly available at https://github.com/NVlabs/SMRD.
Regularization by Denoising Diffusion Process for MRI Reconstruction
[ "Batu Ozturkler", "Morteza Mardani", "Arash Vahdat", "Jan Kautz", "John M. Pauly" ]
Workshop/Deep_Inverse
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=NGcq0Uacie
@inproceedings{ zhuang2023blind, title={Blind Image Deblurring with Unknown Kernel Size and Substantial Noise}, author={Zhong Zhuang and Taihui Li and Hengkang Wang and Ju Sun}, booktitle={NeurIPS 2023 Workshop on Deep Learning and Inverse Problems}, year={2023}, url={https://openreview.net/forum?id=NGcq0Uacie} }
Blind image deblurring (BID) has been extensively studied in computer vision and adjacent fields. Modern methods for BID can be grouped into two categories: single-instance methods that deal with individual instances using statistical inference and numerical optimization, and data-driven methods that train deep-learning models to deblur future instances directly. Data-driven methods can be free from the difficulty in deriving accurate blur models, but are fundamentally limited by the diversity and quality of the training data—collecting sufficiently expressive and realistic training data is a standing challenge. In this paper, we focus on single-instance methods that remain competitive and indispensable, and address the challenging setting of unknown kernel size and substantial noise, which causes state-of-the-art (SOTA) methods to fail. We propose a practical BID method that is stable against both, the first of its kind. Also, we show that our method, a non-data-driven method, can perform on par with SOTA data-driven methods on similar data the latter are trained on, and can perform consistently better on novel data.
Blind Image Deblurring with Unknown Kernel Size and Substantial Noise
[ "Zhong Zhuang", "Taihui Li", "Hengkang Wang", "Ju Sun" ]
Workshop/Deep_Inverse
2208.09483
[ "https://github.com/subeeshvasu/Awesome-Deblurring" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=LIL1M2fdGb
@inproceedings{ fang2023whats, title={What{\textquoteright}s in a Prior? Learned Proximal Networks for Inverse Problems}, author={Zhenghan Fang and Sam Buchanan and Jeremias Sulam}, booktitle={NeurIPS 2023 Workshop on Deep Learning and Inverse Problems}, year={2023}, url={https://openreview.net/forum?id=LIL1M2fdGb} }
Proximal operators are ubiquitous in inverse problems, commonly appearing as part of algorithmic strategies to regularize problems that are otherwise ill-posed. Modern deep learning models have been brought to bear for these tasks too, as in the framework of plug-and-play or deep unrolling, where they loosely resemble proximal operators. Yet, something essential is lost in employing these purely data-driven approaches: there is no guarantee that a general deep network represents the proximal operator of any function, nor is there any characterization of the function for which the network might provide some approximate proximal. This not only makes guaranteeing convergence of iterative schemes challenging but, more fundamentally, complicates the analysis of what has been learned by these networks about their training data. Herein we provide a framework to develop learned proximal networks (LPN), prove that they provide exact proximal operators for a data-driven nonconvex regularizer, and show how a new training strategy, dubbed proximal matching, provably promotes the recovery of the log-prior of the true data distribution. Such LPN provide general, unsupervised, expressive proximal operators that can be used for general inverse problems with convergence guarantees. We illustrate our results in a series of cases of increasing complexity, demonstrating that these models not only result in state-of-the-art performance, but provide a window into the resulting priors learned from data.
What’s in a Prior? Learned Proximal Networks for Inverse Problems
[ "Zhenghan Fang", "Sam Buchanan", "Jeremias Sulam" ]
Workshop/Deep_Inverse
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=G8wMnihF6E
@inproceedings{ chen2023multilook, title={Multilook compressive sensing in the presence of speckle noise}, author={Xi Chen and Zhewen Hou and Christopher Metzler and Arian Maleki and Shirin Jalali}, booktitle={NeurIPS 2023 Workshop on Deep Learning and Inverse Problems}, year={2023}, url={https://openreview.net/forum?id=G8wMnihF6E} }
Multiplicative speckle noise is an inherent part of coherent imaging systems, such as synthetic aperture radar and digital holography. Speckle noise is mitigated by obtaining multiple measurement vectors with independent speckle noise, a technique commonly referred to as "multi-look", followed by appropriate averaging. However, in many applications, even with multi-look, the achievable performance is not satisfactory. Moreover, in this approach, every look (or every set of measurements) is required to be over-determined, which imposes additional constraints on spatial resolution. In this work, we develop a maximum likelihood based approach for recovering images from a set of compressive measurements contaminated by speckle noise. We propose an iterative multi-look compressive sensing recovery algorithm, DIP-$M^3$, that i) requires no training data, ii) is computationally efficient, and iii) generates high-quality reconstruction images from multi-look, where each look is underdetermined and corrupted by speckle noise.
Multilook compressive sensing in the presence of speckle noise
[ "Xi Chen", "Zhewen Hou", "Christopher Metzler", "Arian Maleki", "Shirin Jalali" ]
Workshop/Deep_Inverse
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=FjOxEqJpia
@inproceedings{ peng2023how, title={How Good Are Deep Generative Models for Solving Inverse Problems?}, author={Shichong Peng and Alireza Moazeni and Ke Li}, booktitle={NeurIPS 2023 Workshop on Deep Learning and Inverse Problems}, year={2023}, url={https://openreview.net/forum?id=FjOxEqJpia} }
Deep generative models, such as diffusion models, GANs, and IMLE, have shown impressive capability in tackling inverse problems. However, the validity of model-generated solutions w.r.t. the forward process and the reliability of associated uncertainty estimates remain understudied. This study evaluates recent diffusion-based, GAN-based, and IMLE-based methods on three inverse problems, i.e., 16x super-resolution, colourization, and image decompression. We assess the validity of these models' outputs as solutions to the inverse problems and conduct a thorough analysis of the reliability of the models' estimates of uncertainty over the solution. Overall, we find that the IMLE-based CHIMLE method outperforms other methods in terms of producing valid solutions and reliable uncertainty estimates.
How Good Are Deep Generative Models for Solving Inverse Problems?
[ "Shichong Peng", "Alireza Moazeni", "Ke Li" ]
Workshop/Deep_Inverse
2312.12691
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=Cei9ee2zfJ
@inproceedings{ oscanoa2023variational, title={Variational Diffusion Models for {MRI} Blind Inverse Problems}, author={Julio Oscanoa and Cagan Alkan and Daniel Abraham and Aizada Nurdinova and Daniel Ennis and Shreyas Vasanawala and Morteza Mardani and John M. Pauly}, booktitle={NeurIPS 2023 Workshop on Deep Learning and Inverse Problems}, year={2023}, url={https://openreview.net/forum?id=Cei9ee2zfJ} }
Diffusion models have demonstrated state-of-the-art results in solving inverse problems in various domains including medical imaging. However, existing works generally consider the cases where the forward operator is fully known. Therefore, blind inverse problems with unknown forward operator parameters require modifications on existing methods. In this work, we present an extension of the recently developed regularization by denoising diffusion process (RED-diff) algorithm to blind inverse problems. Similarly to RED-diff, our method can reconstruct images without model re-training or fine-tuning for arbitrary acquisition settings. Tested in fieldmap-corrected MR image reconstruction, our blind RED-diff framework can successfully approximate the unknown forward model parameters and produce fieldmap-corrected reconstructions accurately.
Variational Diffusion Models for Blind MRI Inverse Problems
[ "Cagan Alkan", "Julio Oscanoa", "Daniel Abraham", "Mengze Gao", "Aizada Nurdinova", "Kawin Setsompop", "John M. Pauly", "Morteza Mardani", "Shreyas Vasanawala" ]
Workshop/Deep_Inverse
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=B3VR4dIy9e
@inproceedings{ kelkar2023ambientflow, title={AmbientFlow: Invertible generative models from incomplete, noisy imaging measurements}, author={Varun A. Kelkar and Rucha Deshpande and Arindam Banerjee and Mark Anastasio}, booktitle={NeurIPS 2023 Workshop on Deep Learning and Inverse Problems}, year={2023}, url={https://openreview.net/forum?id=B3VR4dIy9e} }
Generative models, including normalizing flows, are gaining popularity in imaging science for tasks such as image reconstruction, posterior sampling, and data sharing. However, training them requires a high-quality dataset of objects, which can be challenging to obtain in fields such as tomographic imaging. This work proposes AmbientFlow, a framework for training flow-based generative models directly from noisy and incomplete data using variational Bayesian methods. The effectiveness of AmbientFlow in learning invertible generative models of objects from noisy, incomplete stylized imaging measurements is demonstrated via numerical studies.
AmbientFlow: Invertible generative models from incomplete, noisy imaging measurements
[ "Varun A. Kelkar", "Rucha Deshpande", "Arindam Banerjee", "Mark Anastasio" ]
Workshop/Deep_Inverse
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=AopY31LYgO
@inproceedings{ bendel2023maskagnostic, title={Mask-Agnostic Posterior Sampling {MRI} via Conditional {GAN}s with Guided Reconstruction}, author={Matthew Bendel and Rizwan Ahmad and Philip Schniter}, booktitle={NeurIPS 2023 Workshop on Deep Learning and Inverse Problems}, year={2023}, url={https://openreview.net/forum?id=AopY31LYgO} }
For accelerated magnetic resonance imaging (MRI), conditional generative adversarial networks (cGANs), when trained end-to-end with a fixed subsampling mask, have been shown to compete with contemporary diffusion-based techniques while generating samples thousands of times faster. To handle unseen sampling masks at inference, we propose ``guided reconstruction'' (GR), wherein the cGAN code vectors are projected onto the measurement subspace. Using fastMRI brain data, we demonstrate that GR allows a cGAN to successfully handle changes in sampling mask, as well as changes in acceleration rate, yielding faster and more accurate recoveries than the Langevin approach from (Jalal et al., 2021) and the DDRM diffusion approach from (Kawar et al., 2022). Our code will be made available at https://github.com/matt-bendel/rcGAN-agnostic.
Mask-Agnostic Posterior Sampling MRI via Conditional GANs with Guided Reconstruction
[ "Matthew Bendel", "Rizwan Ahmad", "Philip Schniter" ]
Workshop/Deep_Inverse
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=AUiZyqYiGb
@inproceedings{ adamson2023using, title={Using Deep Feature Distances for Evaluating {MR} Image Reconstruction Quality}, author={Philip M Adamson and Arjun D Desai and Jeffrey Dominic and Christian Bluethgen and Jeff P. Wood and Ali B Syed and Robert D. Boutin and Kathryn J. Stevens and Shreyas Vasanawala and John M. Pauly and Akshay S Chaudhari and Beliz Gunel}, booktitle={NeurIPS 2023 Workshop on Deep Learning and Inverse Problems}, year={2023}, url={https://openreview.net/forum?id=AUiZyqYiGb} }
Evaluation of MR reconstruction methods is challenged by the need for image quality (IQ) metrics which correlate strongly with radiologist-perceived IQ. We explore Deep Feature Distances (DFDs) as MR reconstruction IQ metrics, whereby distances between ground truth and reconstructed MR images are computed in a lower-dimensional feature space encoded by a CNN. In addition to comparing DFDs to two commonly used pixel-based MR IQ metrics, PSNR and SSIM, via correlations to radiologist reader scores of MR image reconstructions, we explore the impact of domain shifts between the DFD encoder training data and the evaluated MR images. In particular, we assess two state-of-the-art but "out-of-domain" DFDs with encoders trained on natural images, an in-domain DFD trained on MR images alone, and propose two domain-adjacent DFDs trained on large medical imaging datasets (not limited to MR data). IQ metric performance is assessed via correlations to 5 expert radiologist reader scores of MR image reconstructions. We make three striking observations: 1) all DFDs outperform traditional IQ metrics, 2) DFD performance approaches that of radiologist inter-reader variability, and 3) surprisingly, out-of-domain DFDs perform comparably as an MR reconstruction IQ metric to in-domain and domain-adjacent DFDs. These results make it evident that DFDs should be used alongside traditional IQ metrics in evaluating MR reconstruction IQ, and suggest that general vision encoders are able to assess visual IQ across image domains.
Using Deep Feature Distances for Evaluating MR Image Reconstruction Quality
[ "Philip M Adamson", "Arjun D Desai", "Jeffrey Dominic", "Christian Bluethgen", "Jeff P. Wood", "Ali B Syed", "Robert D. Boutin", "Kathryn J. Stevens", "Shreyas Vasanawala", "John M. Pauly", "Akshay S Chaudhari", "Beliz Gunel" ]
Workshop/Deep_Inverse
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=9ki8qsGSoc
@inproceedings{ dove2023physicsguided, title={Physics-guided Training of Neural Electromagnetic Wave Simulators with Time-reversal Consistency}, author={Charles Dove and Jatearoon Boondicharern and Laura Waller}, booktitle={NeurIPS 2023 Workshop on Deep Learning and Inverse Problems}, year={2023}, url={https://openreview.net/forum?id=9ki8qsGSoc} }
Conventional electromagnetic wave simulators often have long simulation times, so they are not suitable for computational imaging and photonic inverse problems (e.g. end-to-end design, iterative reconstruction) that require evaluating the forward model many times. Electromagnetic wave simulators based on neural networks promise speed improvements of several orders of magnitude, but standard supervised training approaches have difficulty fitting the true physics. Physics-informed approaches help, but existing residual-based methods use only local information and must be used in conjunction with a standard supervised loss. In this work, we introduce Time Reversal Consistency (TReC), a new physics-based training method based on the time reversibility of Maxwell's equations. TReC uses a time-reversed, differentiable finite-difference simulator to compare neural network predictions with a known initial condition. TReC provides both global physics guidance and supervision in a single function. When trained only on randomized scatterers, we find that networks trained with TReC generalize well to a range of arbitrary structured media. We validate the method on the inverse design of a set of angle-to-angle couplers, addressing almost two orders of magnitude more parameters than previous methods, and find that the design quality corresponds closely with designs based on a conventional simulator while requiring 5\% of the design time.
Physics-guided Training of Neural Electromagnetic Wave Simulators with Time-reversal Consistency
[ "Charles Dove", "Jatearoon Boondicharern", "Laura Waller" ]
Workshop/Deep_Inverse
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=8hz6X2GGnD
@inproceedings{ mahbub2023multimodal, title={Multimodal Neural Surface Reconstruction: Recovering the Geometry and Appearance of 3D Scenes from Events and Grayscale Images}, author={Sazan Mahbub and Brandon Feng and Christopher Metzler}, booktitle={NeurIPS 2023 Workshop on Deep Learning and Inverse Problems}, year={2023}, url={https://openreview.net/forum?id=8hz6X2GGnD} }
Event cameras offer high frame rates, minimal motion blur, and excellent dynamic range. As a result they excel at reconstructing the geometry of 3D scenes. However, their measurements do not contain absolute intensity information, which can make accurately reconstructing the appearance of 3D scenes from events challenging. In this work, we develop a multimodal neural 3D scene reconstruction framework that simultaneously reconstructs scene geometry from events and scene appearance from grayscale images. Our framework---which is based on neural surface representations, as opposed to the neural radiance fields used in previous works---is able to reconstruct both the structure and appearance of 3D scenes more accurately than existing unimodal reconstruction methods.
Multimodal Neural Surface Reconstruction: Recovering the Geometry and Appearance of 3D Scenes from Events and Grayscale Images
[ "Sazan Mahbub", "Brandon Feng", "Christopher Metzler" ]
Workshop/Deep_Inverse
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=7q3i7nLdeF
@inproceedings{ zhuang2023phase, title={Phase Retrieval Using Double Deep Image Priors}, author={Zhong Zhuang and David Yang and David Barmherzig and Ju Sun}, booktitle={NeurIPS 2023 Workshop on Deep Learning and Inverse Problems}, year={2023}, url={https://openreview.net/forum?id=7q3i7nLdeF} }
Phase retrieval (PR) concerns the recovery of complex phases from complex magnitudes. We identify the connection between the difficulty level and the number and variety of symmetries in PR problems. We focus on the most difficult far-field PR (FFPR), and propose a novel method using double deep image priors. In realistic evaluation, our method outperforms all competing methods by large margins. As a single-instance method, our method requires no training data and minimal hyperparameter tuning, and hence enjoys good practicality.
Phase Retrieval Using Double Deep Image Priors
[ "Zhong Zhuang", "David Yang", "David Barmherzig", "Ju Sun" ]
Workshop/Deep_Inverse
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=5NoVk8nBWc
@inproceedings{ rumbell2023sequential, title={Sequential data-consistent model inversion}, author={Timothy Rumbell and Catherine Wanjiru and Isaiah Onando Mulang' and Stephen Obonyo and James Kozloski and Viatcheslav Gurev}, booktitle={NeurIPS 2023 Workshop on Deep Learning and Inverse Problems}, year={2023}, url={https://openreview.net/forum?id=5NoVk8nBWc} }
Data-consistent model inversion problems aim to infer distributions of model parameters from distributions of experimental observations. Previous approaches to solving these problems include rejection algorithms, which are impractical for many real-world problems, and generative adversarial networks, which require a differentiable simulation. Here, we introduce a sequential sample refinement algorithm that overcomes these drawbacks. A set of parameters is iteratively refined using density ratio estimates in the model input and output domains, and parameters are resampled by training a generative implicit density estimator. We implement this novel approach using a combination of standard models from artificial intelligence and machine learning, including density estimators, binary classifiers, and diffusion models. To demonstrate the method, we show two examples from computational biology, with different levels of complexity.
Sequential data-consistent model inversion
[ "Timothy Rumbell", "Catherine Wanjiru", "Isaiah Onando Mulang'", "Stephen Obonyo", "James Kozloski", "Viatcheslav Gurev" ]
Workshop/Deep_Inverse
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=4YA8NFJK1J
@inproceedings{ ahuja2023transformers, title={Transformers Can Learn To Solve Linear-Inverse Problems In-Context}, author={Kabir Ahuja and Madhur Panwar and Navin Goyal}, booktitle={NeurIPS 2023 Workshop on Deep Learning and Inverse Problems}, year={2023}, url={https://openreview.net/forum?id=4YA8NFJK1J} }
In-context learning is one of the surprising and useful features of large language models. How it works is an active area of research. Recently, stylized meta-learning-like setups have been devised that train these models on a sequence of input-output pairs $(x, f(x))$ from a function class using the language modeling loss and observe generalization to unseen functions from the same class. One of the main discoveries in this line of research has been that for several problems such as linear regression, trained transformers (TFs) learn algorithms for learning functions in context. We extend this setup to different types of linear-inverse problems and show that TFs are able to in-context learn these problems as well. Additionally, we show that TFs are able to recover the solutions from fewer measurements than the number of unknowns, leveraging the structure of these problems, in accordance with the recovery bounds. Finally, we also discuss the multi-task setup, where the TF is pre-trained on multiple types of linear-inverse problems at once, and show that at inference time, given the measurements, they are able to identify the correct problem structure and solve the inverse problem efficiently.
Transformers Can Learn To Solve Linear-Inverse Problems In-Context
[ "Kabir Ahuja", "Madhur Panwar", "Navin Goyal" ]
Workshop/Deep_Inverse
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=2rl2uyLOC6
@inproceedings{ xia2023modeling, title={Modeling {GAN} Latent Dynamics using Neural {ODE}s}, author={Weihao Xia and Yujiu Yang and Jing-Hao Xue}, booktitle={NeurIPS 2023 Workshop on Deep Learning and Inverse Problems}, year={2023}, url={https://openreview.net/forum?id=2rl2uyLOC6} }
In this paper, we propose DynODE, a method to model the video dynamics by learning the trajectory of independently inverted latent codes from GANs. The entire sequence is seen as discrete-time observations of a continuous trajectory of the initial latent code. The latent codes representing different frames are therefore reformulated as state transitions of the initial frame, which can be modeled by neural ordinary differential equations. Our DynODE learns the holistic geometry of the video dynamic space from given sparse observations and specifies continuous latent states, allowing us to engage in various video applications such as frame interpolation and video editing. Extensive experiments demonstrate that our method achieves state-of-the-art performance but with much less computation. Code is available at https://github.com/weihaox/dynode_released.
Modeling GAN Latent Dynamics using Neural ODEs
[ "Weihao Xia", "Yujiu Yang", "Jing-Hao Xue" ]
Workshop/Deep_Inverse
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=28zXoRIcZd
@inproceedings{ kim2023refined, title={Refined Tensorial Radiance Field: Harnessing Coordinate-Based Networks for Novel View Synthesis from Sparse Inputs}, author={Mingyu Kim and Kim Jun-Seong and Se-Young Yun and Jin-Hwa Kim}, booktitle={NeurIPS 2023 Workshop on Deep Learning and Inverse Problems}, year={2023}, url={https://openreview.net/forum?id=28zXoRIcZd} }
The multi-plane encoding approach has been highlighted for its ability to serve as static and dynamic neural radiance fields without sacrificing generality. This approach constructs related features through projection onto learnable planes and interpolating adjacent vertices. This mechanism allows the model to learn fine-grained details rapidly and achieves outstanding performance. However, it has limitations in representing the global context of the scene, such as object shapes and dynamic motion over time, when available training poses are sparse. In this work, we propose refined tensorial radiance fields that harness coordinate-based networks known for strong bias toward low-frequency signals. The coordinate-based network is responsible for capturing global context, while the multi-plane network focuses on capturing fine-grained details. We demonstrate that using residual connections effectively preserves their inherent properties. Additionally, the proposed curriculum training scheme accelerates the disentanglement of these two features. We empirically show that the proposed method outperforms others for tasks with static and dynamic NeRFs using sparse inputs. In particular, we prove that excessively increasing denoising regularization for multi-plane encoding effectively eliminates artifacts; however, it can lead to artificial details that appear authentic but are not present in the data. On the other hand, we note that the proposed method does not suffer from this issue.
Refined Tensorial Radiance Field: Harnessing Coordinate-Based Networks for Novel View Synthesis from Sparse Inputs
[ "Mingyu Kim", "Kim Jun-Seong", "Se-Young Yun", "Jin-Hwa Kim" ]
Workshop/Deep_Inverse
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=zkPxoyPeva
@inproceedings{ li2023gradient, title={Gradient Estimation For Exactly-\$k\$ Constraints}, author={Ruoyan Li and Dipti Ranjan Sahu and Guy Van den Broeck and Zhe Zeng}, booktitle={NeurIPS 2023 AI for Science Workshop}, year={2023}, url={https://openreview.net/forum?id=zkPxoyPeva} }
The exactly-$k$ constraint is ubiquitous in machine learning and scientific applications, such as ensuring that the sum of electric charges in a neutral atom is zero. However, enforcing such constraints in machine learning models while allowing differentiable learning is challenging. In this work, we aim to provide a ''cookbook'' for seamlessly incorporating exactly-$k$ constraints into machine learning models by extending a recent gradient estimator from Bernoulli variables to Gaussian and Poisson variables, utilizing constraint probabilities. We show the effectiveness of our proposed gradient estimators in synthetic experiments, and further demonstrate the practical utility of our approach by training neural networks to predict partial charges for metal-organic frameworks, aiding virtual screening in chemistry. Our proposed method not only enhances the capability of learning models but also expands their applicability to a wider range of scientific domains where satisfaction of constraints is crucial.
Gradient Estimation For Exactly-k Constraints
[ "Ruoyan Li", "Dipti Ranjan Sahu", "Guy Van den Broeck", "Zhe Zeng" ]
Workshop/AI4Science
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=zXhQnoaRQW
@inproceedings{ flashner2023learning, title={Learning Expert-Interpretable Programs for Myocardial Infarction Localization}, author={Joshua Flashner and Jennifer Sun and David Ouyang and Yisong Yue}, booktitle={NeurIPS 2023 AI for Science Workshop}, year={2023}, url={https://openreview.net/forum?id=zXhQnoaRQW} }
We study how to learn accurate and interpretable models for assisted clinical diagnostics. We focus on myocardial infarction (heart attack) localization from electrocardiogram (ECG) signals, which is known to have a complex mapping that is challenging even for expert cardiologists to understand. Our approach leverages recent advances in learning neurosymbolic models, and yields inherently expert interpretable programs as compositions of ECG features and learned temporal filters. We evaluate our method on a set of 21,844 ECG recordings, to localize myocardial infarction at different levels of granularity. Results demonstrate that our model performs comparably to conventional black-box baselines, but with a much simpler and more interpretable structure.
Learning Expert-Interpretable Programs for Myocardial Infarction Localization
[ "Joshua Alan Flashner", "Jennifer J. Sun", "David Ouyang", "Yisong Yue" ]
Workshop/AI4Science
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=yhaIv0MN5m
@inproceedings{ jindal2023predicting, title={Predicting the Initial Conditions of the Universe using a Deterministic Neural Network}, author={Vaibhav Jindal and Albert Liang and Aarti Singh and Shirley Ho and Drew Jamieson}, booktitle={NeurIPS 2023 AI for Science Workshop}, year={2023}, url={https://openreview.net/forum?id=yhaIv0MN5m} }
Finding the initial conditions that led to the current state of the universe is challenging because it involves searching over an intractable input space of initial conditions, along with modeling their evolution via tools such as N-body simulations which are computationally expensive. Recently, deep learning has emerged as a surrogate for N-body simulations by directly learning the mapping between the linear input of an N-body simulation and the final nonlinear output from the simulation, significantly accelerating the forward modeling. However, this still does not reduce the search space for initial conditions. In this work, we pioneer the use of a deterministic convolutional neural network for learning the reverse mapping and show that it accurately recovers the initial linear displacement field over a wide range of scales ($<1$-$2$% error up to nearly $k \simeq 0.8$ - $0.9 \text{ Mpc}^{-1}h$), despite the one-to-many mapping of the inverse problem (due to the divergent backward trajectories at smaller scales). Specifically, we train a V-Net architecture, which outputs the linear displacement of an N-body simulation, given the nonlinear displacement at redshift $z=0$ and the cosmological parameters. The results of our method suggest that a simple deterministic neural network is sufficient for accurately approximating the initial linear states, potentially obviating the need for the more complex and computationally demanding backward modeling methods that were recently proposed.
Predicting the Initial Conditions of the Universe using a Deterministic Neural Network
[ "Vaibhav Jindal", "Albert Liang", "Aarti Singh", "Shirley Ho", "Drew Jamieson" ]
Workshop/AI4Science
2303.13056
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=yLydB04RxR
@inproceedings{ gil2023holistic, title={Holistic chemical evaluation reveals pitfalls in reaction prediction models}, author={Victor Sabanza Gil and Andres M Bran and Malte Franke and Jeremy S. Luterbacher and Philippe Schwaller}, booktitle={NeurIPS 2023 AI for Science Workshop}, year={2023}, url={https://openreview.net/forum?id=yLydB04RxR} }
The prediction of chemical reactions has gained significant interest within the machine learning community in recent years, owing to its complexity and crucial applications in chemistry. However, model evaluation for this task has been mostly limited to simple metrics like top-k accuracy, which obfuscates fine details of a model's limitations. Inspired by progress in other fields, we propose a new assessment scheme that builds on top of current approaches, steering towards a more holistic evaluation. We introduce the following key components for this goal: ChORISO, a curated dataset along with multiple tailored splits to recreate chemically relevant scenarios, and a collection of metrics that provide a holistic view of a model's advantages and limitations. Application of this method to state-of-the-art models reveals important differences on sensitive fronts, especially stereoselectivity and chemical out-of-distribution generalization. Our work paves the way towards robust prediction models that can ultimately accelerate chemical discovery.
Holistic chemical evaluation reveals pitfalls in reaction prediction models
[ "Victor Sabanza Gil", "Andres M Bran", "Malte Franke", "Rémi Schlama", "Jeremy S. Luterbacher", "Philippe Schwaller" ]
Workshop/AI4Science
2312.09004
[ "https://github.com/schwallergroup/choriso" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=yGVChrbJ4E
@inproceedings{ mehta2023towards, title={Towards {LLM}s as Operational Copilots for Fusion Reactors}, author={Viraj Mehta and Joseph Abbate and Allen Wang and Andrew Rothstein and Ian Char and Jeff Schneider and Egemen Kolemen and Cristina Rea and Darren Garnier}, booktitle={NeurIPS 2023 AI for Science Workshop}, year={2023}, url={https://openreview.net/forum?id=yGVChrbJ4E} }
The tokamak is one of the most promising approaches for achieving nuclear fusion as an energy source. As such, many tokamaks have been built with rich experimental histories and datasets. While the quantitative data generated by tokamaks is invaluable, tokamaks also generate another, often underutilized data stream: text logs written by experimental operators. In this work, we leverage these extensive text logs by employing Retrieval-Augmented Generation (RAG) with state-of-the-art large language models (LLMs) to create a prototype "copilot". Instances of this copilot were created using text logs from the fusion experiments DIII-D and Alcator C-Mod and deployed for researchers to use. In this paper, we report on the datasets and methodology used to create this "copilot", along with its performance on three use cases: 1) semantic search of experiments, 2) assisting with device-specific operations, and 3) answering general tokamak questions. Although we found via a survey of researchers that for general tokamak operations questions RAG doesn't offer a clear advantage over the base GPT-4 model, in the first two use cases, we observe clear advantages that RAG offers over base LLMs and simple keyword search.
Towards LLMs as Operational Copilots for Fusion Reactors
[ "Viraj Mehta", "Joseph Abbate", "Allen Wang", "Andrew Rothstein", "Ian Char", "Jeff Schneider", "Egemen Kolemen", "Cristina Rea", "Darren Garnier" ]
Workshop/AI4Science
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=xyfd9svlDH
@inproceedings{ majha2023on, title={On Modelability and Generalizability: Are Machine Learning Models for Drug Synergy Exploiting Artefacts and Biases in Available Data?}, author={Arushi GK Majha and Andreas Bender and Ian Stott}, booktitle={NeurIPS 2023 AI for Science Workshop}, year={2023}, url={https://openreview.net/forum?id=xyfd9svlDH} }
Synergy models are useful tools for exploring drug combinatorial search space and identifying promising sub-spaces for in vitro/vivo experiments. Here, we report that distributional biases in the training-validation-test sets used for predictive modeling of drug synergy can explain much of the variability observed in model performances (up to $0.22$ $\Delta$AUPRC). We built 145 classification models spanning 4,577 unique drugs and 75,276 pair-wise drug combinations extracted from DrugComb, and examined spurious correlations in both the input feature and output label spaces. We posit that some synergy datasets are easier to model than others due to factors such as synergy spread, class separation, chemical structural diversity, physicochemical diversity, combinatorial tests per drug, and combinatorial label entropy. We simulate distribution shifts for these dataset attributes and report that the drug-wise homogeneity of combinatorial labels most influences modelability ($0.16\pm0.06$ $\Delta$AUPRC). Our findings imply that seemingly high-performing drug synergy models may not generalize well to broader medicinal space. We caution that the synergy modeling community's efforts may be better expended in examining data-specific artefacts and biases rigorously prior to model building.
On Modelability and Generalizability: Are Machine Learning Models for Drug Synergy Exploiting Artefacts and Biases in Available Data?
[ "Arushi GK Majha", "Andreas Bender", "Ian Stott" ]
Workshop/AI4Science
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=xww53DuKJO
@inproceedings{ chang2023lensiam, title={LenSiam: Self-Supervised Learning on Strong Gravitational Lens Images}, author={Po-Wen Chang and Kuan-Wei Huang and Joshua Fagin and James Hung-Hsu Chan and Joshua Yao-Yu Lin}, booktitle={NeurIPS 2023 AI for Science Workshop}, year={2023}, url={https://openreview.net/forum?id=xww53DuKJO} }
Self-supervised learning has been known for learning good representations from data without the need for annotated labels. We explore the simple siamese (SimSiam) architecture for representation learning on strong gravitational lens images. Commonly used image augmentations tend to change lens properties; for example, zoom-in would affect the Einstein radius. To create image pairs representing the same underlying lens model, we introduce a lens augmentation method to preserve lens properties by fixing the lens model while varying the source galaxies. Our research demonstrates this lens augmentation works well with SimSiam for learning the lens image representation without labels, so we name it LenSiam. We also show that a pre-trained LenSiam model can benefit downstream tasks. We plan to open-source our code and datasets.
LenSiam: Self-Supervised Learning on Strong Gravitational Lens Images
[ "Po-Wen Chang", "Kuan-Wei Huang", "Joshua Fagin", "James Hung-Hsu Chan", "Joshua Yao-Yu Lin" ]
Workshop/AI4Science
2311.10100
[ "https://github.com/kuanweih/lensiam" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=xvCxvmrSJg
@inproceedings{ hirashima2023surrogate, title={Surrogate Modeling for Computationally Expensive Simulations of Supernovae in High-Resolution Galaxy Simulations}, author={KEIYA HIRASHIMA and Kana Moriwaki and Michiko S. Fujii and Yutaka Hirai and Takayuki R. Saitoh and Junichiro Makino and Shirley Ho}, booktitle={NeurIPS 2023 AI for Science Workshop}, year={2023}, url={https://openreview.net/forum?id=xvCxvmrSJg} }
Some stars are known to explode at the end of their lives, called supernovae (SNe). SNe release a substantial amount of matter and energy to the interstellar medium, resulting in significant feedback to star formation and gas dynamics in a galaxy. While such feedback has a crucial role in galaxy formation and evolution, in simulations of galaxy formation, it has only been implemented using simple {\it sub-grid models} instead of numerically solving the evolution of gas elements around SNe in detail due to a lack of resolution. We develop a method combining machine learning and Gibbs sampling to predict how a supernova (SN) affects the surrounding gas. The fidelity of our model in the thermal energy and momentum distribution outperforms the low-resolution SN simulations. Our method can replace the SN sub-grid models and help properly simulate un-resolved SN feedback in galaxy formation simulations. We find that employing our new approach reduces the necessary computational cost to $\sim$ 1 percent compared to directly resolving SN feedback.
Surrogate Modeling for Computationally Expensive Simulations of Supernovae in High-Resolution Galaxy Simulations
[ "Keiya Hirashima", "Kana Moriwaki", "Michiko S. Fujii", "Yutaka Hirai", "Takayuki R. Saitoh", "Junichiro Makino", "Shirley Ho" ]
Workshop/AI4Science
2311.08460
[ "" ]
https://huggingface.co/papers/2311.08460
0
0
0
7
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=xiNRyrBAjt
@inproceedings{ lehmann2023seismic, title={Seismic hazard analysis with a Factorized Fourier Neural Operator (F-{FNO}) surrogate model enhanced by transfer learning}, author={Fanny Lehmann and Filippo Gatti and Micha{\"e}l Bertin and Didier Clouteau}, booktitle={NeurIPS 2023 AI for Science Workshop}, year={2023}, url={https://openreview.net/forum?id=xiNRyrBAjt} }
Seismic hazard analyses in the area of a nuclear installation must account for a large number of uncertainties, including limited geological knowledge. It is known that some geological features can create site-effects that considerably amplify ground motion. Combining the accuracy of physics-based simulations with the expressivity of deep neural networks can help quantify the influence of geological heterogeneities on surface ground motion. This work demonstrates the use of a Factorized Fourier Neural Operator (F-FNO) that learns the relationship between 3D heterogeneous geologies and time-dependent surface wavefields. The F-FNO was pretrained on the generic HEMEW-3D database with 30 000 samples. Then, a smaller database was built specifically for the region of the Le Teil earthquake (South-Eastern France) and the F-FNO was further trained with only 250 specific samples. Transfer learning reduced the prediction error by 22%. As quantified by the Goodness-Of-Fit (GOF) criteria, 90% of predictions had excellent phase GOF (62% for the envelope GOF). Although the intensity measures of surface ground motion were, on average, slightly underestimated by the FNO, considering a set of heterogeneous geologies always led to ground motion intensities larger than those obtained from a single homogeneous geology. These results suggest that neural operators are an efficient tool to quantify the range of ground motions a nuclear installation could face in the presence of geological uncertainties.
Seismic hazard analysis with a Factorized Fourier Neural Operator (F-FNO) surrogate model enhanced by transfer learning
[ "Fanny Lehmann", "Filippo Gatti", "Michaël Bertin", "Didier Clouteau" ]
Workshop/AI4Science
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=xZqEdxTqd2
@inproceedings{ wang2023beno, title={{BENO}: Boundary-embedded Neural Operators for Elliptic {PDE}s}, author={Haixin Wang and Jiaxin LI and Anubhav Dwivedi and Kentaro Hara and Tailin Wu}, booktitle={NeurIPS 2023 AI for Science Workshop}, year={2023}, url={https://openreview.net/forum?id=xZqEdxTqd2} }
Elliptic partial differential equations (PDEs) are a major class of time-independent PDEs that play a key role in many scientific and engineering domains such as fluid dynamics, plasma physics, and solid mechanics. Recently, neural operators have emerged as a promising technique to solve elliptic PDEs more efficiently by directly mapping the input to solutions. However, existing networks typically neglect complex geometries and inhomogeneous boundary values present in the real world. Here we introduce Boundary-Embedded Neural Operators (BENO), a novel neural operator architecture that embeds the complex geometries and inhomogeneous boundary values into the solving of elliptic PDEs. Inspired by classical Green's function, BENO consists of two Graph Neural Networks (GNNs) for interior source term and boundary values, respectively. Furthermore, a Transformer encoder maps the global boundary geometry into a latent vector which influences each message passing layer of the GNNs. We test our model and strong baselines extensively in elliptic PDEs with complex boundary conditions. We show that all existing baseline methods fail to learn the solution operator. In contrast, our model, endowed with boundary-embedded architecture, outperforms state-of-the-art neural operators and strong baselines by an average of 60.96%.
BENO: Boundary-embedded Neural Operators for Elliptic PDEs
[ "Haixin Wang", "Jiaxin LI", "Anubhav Dwivedi", "Kentaro Hara", "Tailin Wu" ]
Workshop/AI4Science
2401.09323
[ "https://github.com/ai4science-westlakeu/beno" ]
https://huggingface.co/papers/2401.09323
0
0
0
5
[]
[]
[]
[]
[]
[]
1
oral
null
https://openreview.net/forum?id=xYbECDx0JF
@inproceedings{ liao2023machine, title={Machine Learning for Practical Quantum Error Mitigation}, author={Haoran Liao and Derek S. Wang and Iskandar Sitdikov and Ciro Salcedo and Alireza Seif and Zlatko K. Minev}, booktitle={NeurIPS 2023 AI for Science Workshop}, year={2023}, url={https://openreview.net/forum?id=xYbECDx0JF} }
Quantum computers are actively competing to surpass classical supercomputers, but quantum errors remain their chief obstacle. The key to overcoming these on near-term devices has emerged through the field of quantum error mitigation, enabling improved accuracy at the cost of additional runtime. In practice, however, the success of mitigation is limited by a generally exponential overhead. Can classical machine learning address this challenge on today's quantum computers? Here, through both simulations and experiments on state-of-the-art quantum computers using up to 100 qubits, we demonstrate that machine learning for quantum error mitigation (ML-QEM) can drastically reduce overheads, maintain or even surpass the accuracy of conventional methods, and yield near noise-free results for quantum algorithms. We benchmark a variety of machine learning models---linear regression, random forests, multi-layer perceptrons, and graph neural networks---on diverse classes of quantum circuits, over increasingly complex device-noise profiles, under interpolation and extrapolation, and for small and large quantum circuits. These tests employ the popular digital zero-noise extrapolation method as an added reference. We further show how to scale ML-QEM to classically intractable quantum circuits by mimicking the results of traditional mitigation methods, while significantly reducing overhead. Our results highlight the potential of classical machine learning for practical quantum computation.
Machine Learning for Practical Quantum Error Mitigation
[ "Haoran Liao", "Derek S. Wang", "Iskandar Sitdikov", "Ciro Salcedo", "Alireza Seif", "Zlatko K. Minev" ]
Workshop/AI4Science
2309.17368
[ "https://github.com/qiskit-community/blackwater" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=xW5JQo6TXO
@inproceedings{ xiong2023distilling, title={Distilling human decision-making dynamics: a comparative analysis of low-dimensional architectures}, author={Huadong Xiong and Li Ji-An and Marcelo Mattar and Robert C. Wilson}, booktitle={NeurIPS 2023 AI for Science Workshop}, year={2023}, url={https://openreview.net/forum?id=xW5JQo6TXO} }
Recent advances in examining biological decision-making behaviors have increasingly favored recurrent neural networks (RNNs) over traditional cognitive models grounded in normative principles such as reinforcement learning. This shift owes to RNNs' superior predictive performance on behavioral data, achieved with minimal manual engineering. To glean insights into biological decision-making through these networks, this approach focuses on identifying a compact set of latent dynamical variables by restricting the size of the recurrent layer's bottleneck. Yet, little is known about the distinctions between these low-dimensional RNN architectures and their practical effectiveness in capturing behavioral patterns of biological decision-making. Our study bridges this knowledge gap by 1) offering an architectural comparison of these low-dimensional RNNs with standardized terminology; 2) evaluating their predictive accuracy for human decision-making in an explore-exploit task; and 3) delivering RNN-derived insights that traditional cognitive models overlook. Remarkably, our findings highlight the superiority of low-rank RNNs over alternatives like gated recurrent units in this task setting. More crucially, these RNNs reveal diverse strategies that individuals employ across different decision-making phases, advancing our understanding of intricate human decision-making processes. Our approach offers a powerful framework for discerning individual cognitive nuances.
Distilling human decision-making dynamics: a comparative analysis of low-dimensional architectures
[ "Hua-Dong Xiong", "Li Ji-An", "Marcelo G Mattar", "Robert C. Wilson" ]
Workshop/AI4Science
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=xTXmEKbprI
@inproceedings{ hwang2023genomic, title={Genomic language model predicts protein co-regulation and function}, author={Yunha Hwang and Andre Cornman and Elizabeth Kellogg and Sergey Ovchinnikov and Peter Girguis}, booktitle={NeurIPS 2023 AI for Science Workshop}, year={2023}, url={https://openreview.net/forum?id=xTXmEKbprI} }
Deciphering the relationship between a gene and its genomic context is fundamental to understanding and engineering biological systems. Machine learning has shown promise in learning latent relationships underlying the sequence-structure-function paradigm from massive protein sequence datasets. However, to date, limited attempts have been made to extend this continuum to include higher-order genomic context information. Here, we trained a genomic language model (gLM) on millions of metagenomic scaffolds to learn the latent functional and regulatory relationships between genes. gLM learns contextualized protein embeddings that capture the genomic context as well as the protein sequence itself, and appears to encode biologically meaningful and functionally relevant information (e.g. enzymatic function). Our analysis of the attention patterns demonstrates that gLM is learning co-regulated functional modules (i.e. operons). Our findings illustrate that gLM’s unsupervised deep learning of the metagenomic corpus is an effective and promising approach to encode functional semantics and regulatory syntax of genes in their genomic contexts and uncover complex relationships between genes in a genomic region.
Genomic language model predicts protein co-regulation and function
[ "Yunha Hwang", "Andre Cornman", "Elizabeth Kellogg", "Sergey Ovchinnikov", "Peter Girguis" ]
Workshop/AI4Science
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=xRqAvRNUmq
@inproceedings{ kim2023learning, title={Learning to Scale Logits for Temperature-Conditional {GF}lowNets}, author={Minsu Kim and Joohwan Ko and Dinghuai Zhang and Ling Pan and Taeyoung Yun and Woo Chang Kim and Jinkyoo Park and Yoshua Bengio}, booktitle={NeurIPS 2023 AI for Science Workshop}, year={2023}, url={https://openreview.net/forum?id=xRqAvRNUmq} }
GFlowNets are probabilistic models that learn a stochastic policy that sequentially generates compositional structures, such as molecular graphs. They are trained with the objective of sampling such objects with probability proportional to the object's reward. Among GFlowNets, the temperature-conditional GFlowNets represent a family of policies indexed by temperature, and each is associated with the correspondingly tempered reward function. The major benefit of temperature-conditional GFlowNets is the controllability of GFlowNets' exploration and exploitation through adjusting temperature. We propose a \textit{Learning to Scale Logits for temperature-conditional GFlowNets} (LSL-GFN), a novel architectural design that greatly accelerates the training of temperature-conditional GFlowNets. It is based on the idea that previously proposed temperature-conditioning approaches introduced numerical challenges in the training of the deep network because different temperatures may give rise to very different gradient profiles and ideal scales of the policy's logits. We find that the challenge is greatly reduced if a learned function of the temperature is used to scale the policy's logits directly. We empirically show that our strategy dramatically improves the performances of GFlowNets, outperforming other baselines, including reinforcement learning and sampling methods, in terms of discovering diverse modes in multiple biochemical tasks.
Learning to Scale Logits for Temperature-Conditional GFlowNets
[ "Minsu Kim", "Joohwan Ko", "Dinghuai Zhang", "Ling Pan", "Taeyoung Yun", "Woo Chang Kim", "Jinkyoo Park", "Yoshua Bengio" ]
Workshop/AI4Science
2310.02823
[ "https://github.com/dbsxodud-11/logit-gfn" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=xJFITn0hRx
@inproceedings{ wang2023autopinn, title={Auto-{PINN}: Understanding and Optimizing Physics-Informed Neural Architecture}, author={Yicheng Wang and Xiaotian Han and Chia-Yuan Chang and Daochen Zha and Ulisses Braga-Neto and Xia Hu}, booktitle={NeurIPS 2023 AI for Science Workshop}, year={2023}, url={https://openreview.net/forum?id=xJFITn0hRx} }
Physics-Informed Neural Networks (PINNs) are revolutionizing science and engineering practices by harnessing the power of deep learning for scientific computation. The neural architecture's hyperparameters significantly impact the efficiency and accuracy of the PINN solver. However, optimizing these hyperparameters remains an open and challenging problem because of the large search space and the difficulty in identifying a suitable search objective for PDEs. In this paper, we propose Auto-PINN, the first systematic, automated hyperparameter optimization approach for PINNs, which employs Neural Architecture Search (NAS) techniques for PINN design. Auto-PINN avoids manually or exhaustively searching the hyperparameter space associated with PINNs. A comprehensive set of pre-experiments, using standard PDE benchmarks, enables us to probe the structure-performance relationship in PINNs. We discover that the different hyperparameters can be decoupled and that the training loss function of PINNs serves as an effective search objective. Comparison experiments with baseline methods demonstrate that Auto-PINN produces neural architectures with superior stability and accuracy over alternative baselines.
Auto-PINN: Understanding and Optimizing Physics-Informed Neural Architecture
[ "Yicheng Wang", "Xiaotian Han", "Chia-Yuan Chang", "Daochen Zha", "Ulisses Braga-Neto", "Xia Hu" ]
Workshop/AI4Science
2205.13748
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=wdGIL6lx3l
@inproceedings{ bran2023augmenting, title={Augmenting large language models with chemistry tools}, author={Andres M Bran and Sam Cox and Oliver Schilter and Carlo Baldassari and Andrew White and Philippe Schwaller}, booktitle={NeurIPS 2023 AI for Science Workshop}, year={2023}, url={https://openreview.net/forum?id=wdGIL6lx3l} }
Over the last decades, excellent computational chemistry tools have been developed. Integrating them into a single platform with enhanced accessibility could help them reach their full potential by overcoming steep learning curves. Recently, large-language models (LLMs) have shown strong performance in tasks across domains, but struggle with chemistry-related problems. Moreover, these models lack access to external knowledge sources, limiting their usefulness in scientific applications. In this study, we introduce ChemCrow, an LLM chemistry agent designed to accomplish tasks across organic synthesis, drug discovery, and materials design. By integrating 18 expert-designed tools, ChemCrow augments the LLM performance in chemistry, and new capabilities emerge. Our agent autonomously planned and executed the syntheses of an insect repellent, three organocatalysts, and guided the discovery of a novel chromophore. Our evaluation, including both LLM and expert assessments, demonstrates ChemCrow’s effectiveness in automating a diverse set of chemical tasks. Surprisingly, we find that GPT-4 as an evaluator cannot distinguish between clearly wrong GPT-4 completions and ChemCrow’s performance. Our work not only aids expert chemists and lowers barriers for non-experts, but also fosters scientific advancement by bridging the gap between experimental and computational chemistry.
Augmenting large language models with chemistry tools
[ "Andres M Bran", "Sam Cox", "Oliver Schilter", "Carlo Baldassari", "Andrew White", "Philippe Schwaller" ]
Workshop/AI4Science
[ "https://github.com/ur-whitelab/chemcrow-runs" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=wUCulO2kKX
@inproceedings{ zhao2023scalable, title={Scalable Particle Generation for Granular Shape Study}, author={Yifeng Zhao and Jinxin Liu and Xiangbo Gao and Pei Zhang and Sergio Torres and Stan Li}, booktitle={NeurIPS 2023 AI for Science Workshop}, year={2023}, url={https://openreview.net/forum?id=wUCulO2kKX} }
The shape of granular matter (particles) is crucial for understanding their properties and assembly behavior. Existing studies often rely on intuitive or machine-derived shape descriptors (e.g. sphericity and Corey shape factors) and are usually carried out on single, individual particles with specific shape features, lacking statistical evaluation on a large number of particles. Meanwhile, it is also questionable whether the pre-selected shape descriptors would sufficiently capture the rich morphological information provided by the particle. In this paper, we first propose a two-step particle generation pipeline to evaluate the quality of the previous shape descriptors. To overcome the scarcity issue of particle samples, we explicitly use a Metaball-Imaging algorithm to transform pixel data into a lower-dimensional space and propose a conditional generative method to design 3D realistic style particles. Meanwhile, we also design a new shape estimator to provide shape constraints to guide the conditional generation process. Building on this, we then propose "attribute twins" --- particles that share identical shape features but differ in actual morphologies. Attribute twins provide essential particle samples to investigate whether existing shape descriptors are sufficient to represent the effects of particle shape. In a series of simulations focusing on the drag force experienced by settling particles in a fluid, we use these distilled attribute twins under different constraints of single or multiple shape descriptors. Our results shed light on the limitations of current shape descriptors in representing the influence of particle shape in this physical process and highlight the need for improved shape descriptors in the future.
Scalable Particle Generation for Granular Shape Study
[ "Yifeng Zhao", "Jinxin Liu", "Xiangbo Gao", "Sergio Torres", "Stan Z. Li" ]
Workshop/AI4Science
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=wEhqPfs22C
@inproceedings{ chen2023adsgt, title={Ads{GT}: Graph Transformer for Predicting Global Minimum Adsorption Energy}, author={Junwu Chen and Xu Huang and Cheng Hua and Yulian He and Philippe Schwaller}, booktitle={NeurIPS 2023 AI for Science Workshop}, year={2023}, url={https://openreview.net/forum?id=wEhqPfs22C} }
The fast assessment of the binding strength between adsorbates and catalyst surfaces is crucial for catalyst design, where global minimum adsorption energy (GMAE) is one of the most representative descriptors. However, catalyst surfaces typically have multiple adsorption sites and numerous possible adsorption configurations, which makes it prohibitively expensive to calculate the GMAE using Density Functional Theory (DFT). Additionally, most machine learning methods can only predict local minimum adsorption energies and rely on information about adsorption configurations. To overcome these challenges, we designed a graph transformer (AdsGT) that can predict the GMAE based on surface graphs and adsorbate feature vectors without any binding structure information. To evaluate the performance of AdsGT, three new datasets on GMAE were constructed from OC20-Dense, Catalysis Hub, and FG-dataset. For a wide range of combinations of catalyst surfaces and adsorbates, AdsGT achieves test mean absolute errors of 0.10 and 0.14 eV on the two GMAE datasets respectively, demonstrating its good reliability and generalizability.
AdsGT: Graph Transformer for Predicting Global Minimum Adsorption Energy
[ "Junwu Chen", "Xu Huang", "Cheng Hua", "Yulian He", "Philippe Schwaller" ]
Workshop/AI4Science
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=w90SAEcUP7
@inproceedings{ zhao2023generation, title={Generation of 3D Realistic Soil Particles with Metaball Descriptor}, author={Yifeng Zhao and Jinxin Liu and xiangbo gao and Pei Zhang and Stan Li and Sergio Andres Galindo Torres}, booktitle={NeurIPS 2023 AI for Science Workshop}, year={2023}, url={https://openreview.net/forum?id=w90SAEcUP7} }
The accurate representation of soil particle morphology is crucial for understanding its granular characteristics and assembly responses. However, incorporating realistic and diverse particle morphologies into modeling presents challenges, often requiring time-consuming and expensive X-ray Computed Tomography (XRCT). This has resulted in a prevalent issue in modeling: morphological particle generation. On this topic, we introduce the Metaball Variational Autoencoder. This method leverages deep neural networks to generate new 3D particles in the form of Metaballs while preserving essential morphological features from the parental particles. Furthermore, the method allows for shape control through an arithmetic pattern, enabling the generation of particles with specific shapes. We validate the generation fidelity by comparing the morphologies and shape-feature distributions of the generated particles with the parental data. Additionally, we provide examples to demonstrate the controllability of the generated shapes. By integrating these methods into the Metaball-based simulation framework proposed by the authors previously, we enable the incorporation of real particle shapes into simulations. This could facilitate the simulation of a large number of soil particles with varying shapes and behaviors, providing valuable insights into the properties and behavior of actual soil particles.
Generation of 3D Realistic Soil Particles with Metaball Descriptor
[ "Yifeng Zhao", "Jinxin Liu", "xiangbo gao", "Pei Zhang", "Stan Z. Li", "Sergio Andres Galindo Torres" ]
Workshop/AI4Science
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=vhEUYsNrGa
@inproceedings{ yakaboski2023ai, title={{AI} for Open Science: A Multi-Agent Perspective for Ethically Translating Data to Knowledge}, author={Chase Yakaboski and Gregory Hyde and Clement Nyanhongo and Eugene Santos}, booktitle={NeurIPS 2023 AI for Science Workshop}, year={2023}, url={https://openreview.net/forum?id=vhEUYsNrGa} }
AI for Science (AI4Science), particularly in the form of self-driving labs, has the potential to sideline human involvement and hinder scientific discovery within the broader community. While prior research has focused on ensuring the responsible deployment of AI applications, enhancing security, and ensuring interpretability, we also propose that promoting openness in AI4Science discoveries should be carefully considered. In this paper, we introduce the concept of AI for Open Science (AI4OS) as a multi-agent extension of AI4Science with the core principle of maximizing open knowledge translation throughout the scientific enterprise rather than a single organizational unit. We use the established principles of Knowledge Discovery and Data Mining (KDD) to formalize a language around AI4OS. We then discuss three principal stages of knowledge translation embedded in AI4Science systems and detail specific points where openness can be applied to yield an AI4OS alternative. Lastly, we formulate a theoretical metric to assess AI4OS with a supporting ethical argument highlighting its importance. Our goal is that by drawing attention to AI4OS we can ensure the natural consequence of AI4Science (e.g., self-driving labs) is a benefit not only for its developers but for society as a whole.
AI for Open Science: A Multi-Agent Perspective for Ethically Translating Data to Knowledge
[ "Chase Yakaboski", "Gregory Hyde", "Clement Nyanhongo", "Eugene Santos" ]
Workshop/AI4Science
2310.18852
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=vYixJUwAD4
@inproceedings{ schaeffer2023testing, title={Testing Assumptions Underlying a Unified Theory for the Origin of Grid Cells}, author={Rylan Schaeffer and Mikail Khona and Adrian Bertagnoli and Sanmi Koyejo and Ila Fiete}, booktitle={NeurIPS 2023 AI for Science Workshop}, year={2023}, url={https://openreview.net/forum?id=vYixJUwAD4} }
Representing and reasoning about physical space is fundamental to animal survival, and the mammalian lineage expresses a wealth of specialized neural representations that encode space. Grid cells, whose discovery earned a Nobel prize, are a striking example: a grid cell is a neuron that fires if and only if the animal is spatially located at the vertices of a regular triangular lattice that tiles all explored two-dimensional environments. Significant theoretical work has gone into understanding why mammals have learned these particular representations, and recent work has proposed a ``unified theory for the computational and mechanistic origin of grid cells," claiming to answer why the mammalian lineage has learned grid cells. However, the Unified Theory makes a series of highly specific assumptions about the target readouts of grid cells - putatively place cells. In this work, we explicitly identify what these mathematical assumptions are, then test two of the critical assumptions using biological place cell data. At both the population and single-cell levels, we find evidence suggesting that neither of the assumptions are likely true in biological neural representations. These results call the Unified Theory into question, suggesting that biological grid cells likely have a different origin than those obtained in trained artificial neural networks.
Testing Assumptions Underlying a Unified Theory for the Origin of Grid Cells
[ "Rylan Schaeffer", "Mikail Khona", "Adrian Bertagnoli", "Sanmi Koyejo", "Ila R Fiete" ]
Workshop/AI4Science
2311.16295
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=v6aJLdXsZx
@inproceedings{ shin2023arxiveri, title={arXiVeri: Automatic table verification with {GPT}}, author={Gyungin Shin and Weidi Xie and Samuel Albanie}, booktitle={NeurIPS 2023 AI for Science Workshop}, year={2023}, url={https://openreview.net/forum?id=v6aJLdXsZx} }
Without accurate transcription of numerical data in scientific documents, a scientist cannot draw accurate conclusions. Unfortunately, the process of copying numerical data from one paper to another is prone to human error. In this paper, we propose to meet this challenge through the novel task of automatic table verification (AutoTV), in which the objective is to verify the accuracy of numerical data in tables by cross-referencing cited sources. To support this task, we propose a new benchmark, arXiVeri, which comprises tabular data drawn from open-access academic papers on arXiv. We introduce metrics to evaluate the performance of a table verifier in two key areas: (i) table matching, which aims to identify the source table in a cited document that corresponds to a target table, and (ii) cell matching, which aims to locate shared cells between a target and source table and identify their row and column indices accurately. By leveraging the flexible capabilities of modern large language models (LLMs), we propose simple baselines for table verification. Our findings highlight the complexity of this task, even for state-of-the-art LLMs like OpenAI’s GPT-4. The code and benchmark are made publicly available.
arXiVeri: Automatic table verification with GPT
[ "Gyungin Shin", "Weidi Xie", "Samuel Albanie" ]
Workshop/AI4Science
2306.07968
[ "https://github.com/caml-lab/research/tree/main/arxiveri" ]
https://huggingface.co/papers/2306.07968
3
6
0
3
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=uWx1utwxxw
@inproceedings{ rader2023lineax, title={Lineax: unified linear solves and linear least-squares in {JAX} and Equinox}, author={Jason Rader and Terry Lyons}, booktitle={NeurIPS 2023 AI for Science Workshop}, year={2023}, url={https://openreview.net/forum?id=uWx1utwxxw} }
We introduce Lineax, a library bringing linear solves and linear least-squares to the JAX+Equinox scientific computing ecosystem. Lineax uses general linear operators, and unifies linear solves and least-squares into a single, autodifferentiable API. Solvers and operators are user-extensible, without requiring the user to implement any custom derivative rules to get differentiability. Lineax is available at https://github.com/$\textbf{anonymised}$/lineax.
Lineax: unified linear solves and linear least-squares in JAX and Equinox
[ "Jason Michael Rader", "Terry Lyons", "Patrick Kidger" ]
Workshop/AI4Science
2311.17283
[ "https://github.com/google/lineax" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=uQnSomfo0L
@inproceedings{ midgley2023se, title={{SE}(3) Equivariant Augmented Coupling Flows}, author={Laurence Midgley and Vincent Stimper and Javier Antoran and Emile Mathieu and Bernhard Sch{\"o}lkopf and Jos{\'e} Miguel Hern{\'a}ndez-Lobato}, booktitle={NeurIPS 2023 AI for Science Workshop}, year={2023}, url={https://openreview.net/forum?id=uQnSomfo0L} }
Coupling normalizing flows allow for fast sampling and density evaluation, making them the tool of choice for probabilistic modeling of physical systems. However, the standard coupling architecture precludes endowing flows that operate on the Cartesian coordinates of atoms with the SE(3) and permutation invariances of physical systems. This work proposes a coupling flow that preserves SE(3) and permutation equivariance by performing coordinate splits along additional augmented dimensions. At each layer, the flow maps atoms' positions into learned SE(3) invariant bases, where we apply standard flow transformations, such as monotonic rational-quadratic splines, before returning to the original basis. Crucially, our flow preserves fast sampling and density evaluation, and may be used to produce unbiased estimates of expectations with respect to the target distribution via importance sampling. When trained on the DW4, LJ13, and QM9-positional datasets, our flow is competitive with equivariant continuous normalizing flows, while allowing sampling more than an order of magnitude faster. Moreover, to the best of our knowledge, we are the first to learn the full Boltzmann distribution of alanine dipeptide by only modeling the Cartesian positions of its atoms. Lastly, we demonstrate that our flow can be trained to approximately sample from the Boltzmann distribution of the DW4 and LJ13 particle systems using only their energy functions.
SE(3) Equivariant Augmented Coupling Flows
[ "Laurence Illing Midgley", "Vincent Stimper", "Javier Antoran", "Emile Mathieu", "Bernhard Schölkopf", "José Miguel Hernández-Lobato" ]
Workshop/AI4Science
2308.10364
[ "https://github.com/lollcat/se3-augmented-coupling-flows" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=uBQrcgtYwK
@inproceedings{ tecot2023randomized, title={Randomized Benchmarking of Local Zeroth-Order Optimizers for Variational Quantum Systems}, author={Lucas Tecot and Cho-Jui Hsieh}, booktitle={NeurIPS 2023 AI for Science Workshop}, year={2023}, url={https://openreview.net/forum?id=uBQrcgtYwK} }
In the field of quantum information, classical optimizers play an important role. From experimentalists optimizing their physical devices to theorists exploring variational quantum algorithms, many aspects of quantum information require the use of a classical optimizer. For this reason, there are many papers that benchmark the effectiveness of different optimizers for specific quantum learning tasks and choices of parameterized algorithms. However, for researchers exploring new algorithms or physical devices, the insights from these studies don't necessarily translate. To address this concern, we compare the performance of classical optimizers across a series of partially-randomized tasks to more broadly sample the space of quantum learning problems. We focus on local zeroth-order optimizers due to their generally favorable performance and query-efficiency on quantum systems. We discuss insights from these experiments that can help motivate future works to improve these optimizers for use on quantum systems.
Randomized Benchmarking of Local Zeroth-Order Optimizers for Variational Quantum Systems
[ "Lucas Tecot", "Cho-Jui Hsieh" ]
Workshop/AI4Science
2310.09468
[ "https://github.com/ltecot/rand_bench_opt_quantum" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=tuXhnv6pgo
@inproceedings{ chen2023using, title={Using the Transformer Model for Physical Simulation: An application on Transient Thermal Analysis for 3D Printing Process Simulation}, author={Qian Chen and Luyang Kong and Florian Dugast and Albert To}, booktitle={NeurIPS 2023 AI for Science Workshop}, year={2023}, url={https://openreview.net/forum?id=tuXhnv6pgo} }
Transient thermal analysis is widely used in many science and engineering areas such as electronic packaging, engine design and manufacturing. High dimensional simulations are very expensive to run. Here we propose a machine learning model consisting of a pre-trained convolutional neural network (CNN), a transformer encoder and a multilayer perceptron (MLP) to predict the temperature field of 3D printed parts. The CAD part used in 3D printing is first sliced into layers and represented as images. We use the pre-trained ResNet 34 to extract low level geometry features, taking the output feature map of its Conv_4 layer as the geometry embedding vector. The transformer encoder is used to capture the long-range dependencies between layer-wise geometry features. The MLP then takes the transformer's output and predicts the temperatures at given locations and time steps. Our results show the model can accurately predict the thermal history in the 3D printing process on different geometries. Our model is also very efficient, running 1~2 orders of magnitude faster than the simulation on which it is trained, without requiring the complicated pre-processing steps in transient thermal analysis including CAD file fixing, material property setup, mesh generation and refinement, and defining the boundary conditions and dynamic loading in every time step.
Using the Transformer Model for Physical Simulation: An application on Transient Thermal Analysis for 3D Printing Process Simulation
[ "Qian Chen", "Luyang Kong", "Florian Dugast", "Albert To" ]
Workshop/AI4Science
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=ttQuA69lML
@inproceedings{ bal2023pgraphdta, title={{PG}raph{DTA}: Improving Drug Target Interaction Prediction using Protein Language Models and Contact Maps}, author={Rakesh Bal and Yijia Xiao and Wei Wang}, booktitle={NeurIPS 2023 AI for Science Workshop}, year={2023}, url={https://openreview.net/forum?id=ttQuA69lML} }
Developing and discovering new drugs is a complex and resource-intensive endeavor that often involves substantial costs, time investment, and safety concerns. A key aspect of drug discovery involves identifying novel drug-target (DT) interactions. Existing computational methods for predicting DT interactions have primarily focused on binary classification tasks, aiming to determine whether a DT pair interacts or not. However, protein-ligand interactions exhibit a continuum of binding strengths, known as binding affinity, presenting a persistent challenge for accurate prediction. In this study, we investigate various techniques employed in Drug Target Interaction (DTI) prediction and propose novel enhancements to improve their performance. Our approaches include the integration of Protein Language Models (PLMs) and the incorporation of Contact Map information as an inductive bias within current models. Through extensive experimentation, we demonstrate that our proposed approaches outperform the baseline models considered in this study, presenting a compelling case for further development in this direction. We anticipate that the insights gained from this work will significantly narrow the search space for potential drugs targeting specific proteins, thereby accelerating drug discovery. Code and data for PGraphDTA are available at https://github.com/Yijia-Xiao/PGraphDTA/.
PGraphDTA: Improving Drug Target Interaction Prediction using Protein Language Models and Contact Maps
[ "Rakesh Bal", "Yijia Xiao", "Wei Wang" ]
Workshop/AI4Science
2310.04017
[ "https://github.com/yijia-xiao/pgraphdta" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=szafjMzvWH
@inproceedings{ cipcigan2023discovery, title={Discovery of Novel Reticular Materials for Carbon Dioxide Capture using {GF}lowNets}, author={Flaviu Cipcigan and Jonathan Booth and Rodrigo Neumann Barros Ferreira and Carine Dos Santos and Mathias Steiner}, booktitle={NeurIPS 2023 AI for Science Workshop}, year={2023}, url={https://openreview.net/forum?id=szafjMzvWH} }
Artificial intelligence holds promise to improve materials discovery. GFlowNets are an emerging deep learning algorithm with many applications in AI-assisted discovery. Using GFlowNets, we generate porous reticular materials, such as metal organic frameworks and covalent organic frameworks, for applications in carbon dioxide capture. We introduce a new Python package (matgfn) to train and sample GFlowNets. We use matgfn to generate the matgfn-rm dataset of novel and diverse reticular materials with gravimetric surface area above 5000 $m^2/g$. We calculate single- and two-component gas adsorption isotherms for the top-100 candidates in matgfn-rm. These candidates are novel compared to the state-of-the-art ARC-MOF dataset and rank in the 90th percentile in terms of working capacity compared to the CoRE2019 dataset. We discover 15 hypothetical materials outperforming all materials in CoRE2019.
Discovery of Novel Reticular Materials for Carbon Dioxide Capture using GFlowNets
[ "Flaviu Cipcigan", "Jonathan Booth", "Rodrigo Neumann Barros Ferreira", "Carine Ribeiro Dos Santos", "Mathias B Steiner" ]
Workshop/AI4Science
2310.07671
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=sdmFiPijqC
@inproceedings{ mototake2023extracting, title={Extracting Nonlinear Symmetries From Trained Neural Networks on Dynamics Data}, author={Yoh-ichi Mototake}, booktitle={NeurIPS 2023 AI for Science Workshop}, year={2023}, url={https://openreview.net/forum?id=sdmFiPijqC} }
To support scientists who are developing reduced models of complex physics systems, we propose a method for extracting interpretable physics information from a deep neural network (DNN) trained on time series data of a physics system. Specifically, we propose a framework for estimating the hidden nonlinear symmetries of a system from a DNN trained on time series data that can be regarded as a finite-degree-of-freedom classical Hamiltonian dynamical system. Our proposed framework can estimate the nonlinear symmetries corresponding to the Laplace-Runge-Lenz vector, a conserved quantity that keeps the major-axis direction of a planet's elliptical orbit constant, and visualize its Lie manifold.
Extracting Nonlinear Symmetries From Trained Neural Networks on Dynamics Data
[ "Yoh-ichi Mototake" ]
Workshop/AI4Science
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=sXRpvW3cRR
@inproceedings{ palma2023modelling, title={Modelling single-cell {RNA}-seq trajectories on a flat statistical manifold}, author={Alessandro Palma and Sergei Rybakov and Leon Hetzel and Fabian Theis}, booktitle={NeurIPS 2023 AI for Science Workshop}, year={2023}, url={https://openreview.net/forum?id=sXRpvW3cRR} }
Optimal transport has demonstrated remarkable potential in the field of single-cell biology, addressing relevant tasks such as trajectory modelling and perturbation effect prediction. However, the standard formulation of optimal transport assumes Euclidean geometry in the representation space, which may not hold in traditional single-cell embedding methods based on Variational Autoencoders. In this study, we introduce a novel approach for matching the latent dynamics learnt by Euclidean optimal transport with geodesic trajectories in the decoded space. We achieve this by implementing a "flattening" regularisation derived from the pullback metric of a Negative Binomial statistical manifold. The method ensures alignment between the latent space of a discrete Variational Autoencoder modelling single-cell data and Euclidean space, thereby improving compatibility with optimal transport. Our results in four biological settings demonstrate that these constraints enhance the reconstruction of cellular trajectories and velocity fields. We believe that our versatile approach holds promise for advancing single-cell representation learning and temporal modelling.
Modelling single-cell RNA-seq trajectories on a flat statistical manifold
[ "Alessandro Palma", "Sergei Rybakov", "Leon Hetzel", "Fabian J Theis" ]
Workshop/AI4Science
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=s0UNtuuqU5
@inproceedings{ chen2023geomformer, title={Geo{MF}ormer: A General Architecture for Geometric Molecular Representation Learning}, author={Tianlang Chen and Shengjie Luo and Di He and Shuxin Zheng and Tie-Yan Liu and Liwei Wang}, booktitle={NeurIPS 2023 AI for Science Workshop}, year={2023}, url={https://openreview.net/forum?id=s0UNtuuqU5} }
Molecular modeling, a central topic in quantum mechanics, aims to accurately calculate the properties and simulate the behaviors of molecular systems. The molecular model is governed by physical laws, which impose geometric constraints such as invariance and equivariance to coordinate rotation and translation. While numerous deep learning approaches have been developed to learn molecular representations under these constraints, most of them are built upon heuristic and costly modules. We argue that there is a strong need for a general and flexible framework for learning both invariant and equivariant features. In this work, we introduce a novel Transformer-based molecular model called GeoMFormer to achieve this goal. Using the standard Transformer modules, two separate streams are developed to maintain and learn invariant and equivariant representations. Carefully designed *cross-attention* modules bridge the two streams, allowing information fusion and enhancing geometric modeling in each stream. As a general and flexible architecture, we show that many previous architectures can be viewed as special instantiations of GeoMFormer. Extensive experiments are conducted to demonstrate the power of GeoMFormer. All empirical results show that GeoMFormer achieves strong performance on both invariant and equivariant tasks of different types and scales. Code and models will be made publicly available.
GeoMFormer: A General Architecture for Geometric Molecular Representation Learning
[ "Tianlang Chen", "Shengjie Luo", "Di He", "Shuxin Zheng", "Tie-Yan Liu", "Liwei Wang" ]
Workshop/AI4Science
2406.16853
[ "https://github.com/c-tl/geomformer" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=ros8ISUn7Y
@inproceedings{ poli2023scalable, title={Scalable Deep Potentials as Implicit Hierarchical Semi-Separable Operators}, author={Michael Poli and Stefano Massaroli and Christopher Re and Stefano Ermon}, booktitle={NeurIPS 2023 AI for Science Workshop}, year={2023}, url={https://openreview.net/forum?id=ros8ISUn7Y} }
Direct application of Transformer architectures in scientific domains poses computational challenges, due to quadratic scaling in the number of inputs. In this work, we propose an alternative method based on hierarchical semi-separable matrices (HSS), a class of rank-structured operators with linear-time evaluation algorithms. Through connections between linearized attention and HSS, we devise an implicit hierarchical parametrization strategy that interpolates between linear and quadratic attention, achieving both subquadratic scaling and high accuracy. We demonstrate the effectiveness of the proposed approach on the approximation of potentials from computational physics.
Scalable Deep Potentials as Implicit Hierarchical Semi-Separable Operators
[ "Michael Poli", "Stefano Massaroli", "Christopher Re", "Stefano Ermon" ]
Workshop/AI4Science
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=rdgB5BqWCw
@inproceedings{ musielewicz2023predictive, title={Predictive Uncertainty Quantification for Graph Neural Network Driven Relaxed Energy Calculations}, author={Joseph Musielewicz and Janice Lan and Matt Uyttendaele}, booktitle={NeurIPS 2023 AI for Science Workshop}, year={2023}, url={https://openreview.net/forum?id=rdgB5BqWCw} }
Graph neural networks (GNNs) have been shown to be astonishingly capable models for molecular property prediction, particularly as surrogates for expensive density functional theory calculations of relaxed energy for novel material discovery. However, one limitation of GNNs in this context is the lack of useful uncertainty prediction methods, as this is critical to the material discovery pipeline. In this work, we show that uncertainty quantification for relaxed energy calculations is more complex than uncertainty quantification for other kinds of molecular property prediction, due to the effect that structure optimizations have on the error distribution. We propose that distribution-free techniques are more useful tools for assessing calibration, recalibrating, and developing uncertainty prediction methods for GNNs performing relaxed energy calculations. We also develop a relaxed energy task for evaluating uncertainty methods for equivariant GNNs, based on distribution-free recalibration and using the Open Catalyst Project dataset. We benchmark a set of popular uncertainty prediction methods on this task, and show that latent distance methods, with our novel improvements, are the most well-calibrated and economical approach for relaxed energy calculations. Further, we challenge the community to develop improved uncertainty prediction methods for GNN-driven relaxed energy calculations, and benchmark them on this task.
Predictive Uncertainty Quantification for Graph Neural Network Driven Relaxed Energy Calculations
[ "Joseph Musielewicz", "Janice Lan", "Matt Uyttendaele" ]
Workshop/AI4Science
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=r3fzEWnaY4
@inproceedings{ li2023gfnsr, title={{GFN}-{SR}: Symbolic Regression with Generative Flow Networks}, author={Sida Li and Ioana Marinescu and Sebastian Musslick}, booktitle={NeurIPS 2023 AI for Science Workshop}, year={2023}, url={https://openreview.net/forum?id=r3fzEWnaY4} }
Symbolic regression (SR) is an area of interpretable machine learning that aims to identify mathematical expressions, often composed of simple functions, that best fit in a given set of covariates $X$ and response $y$. In recent years, deep symbolic regression (DSR) has emerged as a popular method in the field by leveraging deep reinforcement learning to solve the complicated combinatorial search problem. In this work, we propose an alternative framework (GFN-SR) to approach SR with deep learning. We model the construction of an expression tree as traversing through a directed acyclic graph (DAG) so that GFlowNet can learn a stochastic policy to generate such trees sequentially. Enhanced with an adaptive reward baseline, our method is capable of generating a diverse set of best-fitting expressions. Notably, we observe that GFN-SR outperforms other SR algorithms in noisy data regimes, owing to its ability to learn a distribution of rewards over a space of candidate solutions.
GFN-SR: Symbolic Regression with Generative Flow Networks
[ "Sida Li", "Ioana Marinescu", "Sebastian Musslick" ]
Workshop/AI4Science
2312.00396
[ "https://github.com/listar2000/gfn-sr" ]
https://huggingface.co/papers/2312.00396
1
0
0
3
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=r2UZipqx8F
@inproceedings{ bukharin2023machine, title={Machine Learning Force Fields with Data Cost Aware Training}, author={Alexander Bukharin and Tianyi Liu and Shengjie Wang and Simiao Zuo and Weihao Gao and Wen Yan and Tuo Zhao}, booktitle={NeurIPS 2023 AI for Science Workshop}, year={2023}, url={https://openreview.net/forum?id=r2UZipqx8F} }
Machine learning force fields (MLFF) have been proposed to accelerate molecular dynamics (MD) simulation, which finds widespread applications in chemistry and biomedical research. Even for the most data-efficient MLFFs, reaching chemical accuracy can require hundreds of frames of force and energy labels generated by expensive quantum mechanical algorithms, which may scale as $O(n^3)$ to $O(n^7)$, with $n$ proportional to the number of basis functions. To address this issue, we propose a multi-stage computational framework -- ASTEROID, which lowers the data cost of MLFFs by leveraging a combination of cheap inaccurate data and expensive accurate data. The motivation behind ASTEROID is that inaccurate data, though incurring large bias, can help capture the sophisticated structures of the underlying force field. Therefore, we first train an MLFF model on a large amount of inaccurate training data, employing a bias-aware loss function to prevent the model from overfitting to the potential bias of this data. We then fine-tune the obtained model using a small amount of accurate training data, which preserves the knowledge learned from the inaccurate training data while significantly improving the model's accuracy. Moreover, we propose a variant of ASTEROID based on score matching for the setting where the inaccurate training data are unlabeled. Extensive experiments on MD datasets and downstream tasks validate the efficacy of ASTEROID.
Machine Learning Force Fields with Data Cost Aware Training
[ "Alexander Bukharin", "Tianyi Liu", "Shengjie Wang", "Simiao Zuo", "Weihao Gao", "Wen Yan", "Tuo Zhao" ]
Workshop/AI4Science
2306.03109
[ "https://github.com/abukharin3/asteroid" ]
https://huggingface.co/papers/2306.03109
1
0
0
7
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=qFIs4hYZaZ
@inproceedings{ fu2023learning, title={Learning Interatomic Potentials at Multiple Scales}, author={Xiang Fu and Albert Musaelian and Anders Johansson and Tommi Jaakkola and Boris Kozinsky}, booktitle={NeurIPS 2023 AI for Science Workshop}, year={2023}, url={https://openreview.net/forum?id=qFIs4hYZaZ} }
The need to use a short time step is a key limit on the speed of molecular dynamics (MD) simulations. Simulations governed by classical potentials are often accelerated by using a multiple-time-step (MTS) integrator that evaluates certain potential energy terms that vary more slowly than others less frequently. This approach is enabled by the simple but limiting analytic forms of classical potentials. Machine learning interatomic potentials (MLIPs), in particular recent equivariant neural networks, are much more broadly applicable than classical potentials and can faithfully reproduce the expensive but accurate reference electronic structure calculations used to train them. They still, however, require the use of a single short time step, as they lack the inherent term-by-term scale separation of classical potentials. This work introduces a method to learn a scale separation in complex interatomic interactions by co-training two MLIPs. Initially, a small and efficient model is trained to reproduce short-time-scale interactions. Subsequently, a large and expressive model is trained jointly to capture the remaining interactions not captured by the small model. When running MD, the MTS integrator then evaluates the smaller model for every time step and the larger model less frequently, accelerating simulation. Compared to a conventionally trained MLIP, our approach can achieve a significant speedup (~3x in our experiments) without a loss of accuracy on the potential energy or simulation-derived quantities.
Learning Interatomic Potentials at Multiple Scales
[ "Xiang Fu", "Albert Musaelian", "Anders Johansson", "Tommi Jaakkola", "Boris Kozinsky" ]
Workshop/AI4Science
2310.13756
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=pwYCCq4xAf
@inproceedings{ petersen2023dynamicsdiffusion, title={DynamicsDiffusion: Generating and Rare Event Sampling of Molecular Dynamic Trajectories Using Diffusion Models}, author={Magnus Petersen and Gemma Roig and Roberto Covino}, booktitle={NeurIPS 2023 AI for Science Workshop}, year={2023}, url={https://openreview.net/forum?id=pwYCCq4xAf} }
Molecular dynamics simulations are fundamental tools for quantitative molecular sciences. However, these simulations are computationally demanding and often struggle to sample rare events crucial for understanding spontaneous organization and reconfiguration in complex systems. To improve general speed and the ability to sample rare events in a directed fashion, we propose a method called $\textit{DynamicsDiffusion}$ based on denoising diffusion probabilistic models (DDPM) to generate molecular dynamics trajectories from noise. The generative model can then serve as a surrogate to sample rare events. We leverage the properties of DDPMs, such as conditional generation, the ability to generate variations of trajectories, and the ability to generate trajectories that satisfy certain conditions, such as crossing from one state to another, via the 'inpainting' property of DDPMs, which only becomes applicable when generating whole trajectories rather than individual conformations. To our knowledge, this is the first use of deep generative modeling to generate molecular dynamics trajectories. We hope this work will motivate a new generation of generative modeling for the study of molecular dynamics.
DynamicsDiffusion: Generating and Rare Event Sampling of Molecular Dynamic Trajectories Using Diffusion Models
[ "Magnus Petersen", "Gemma Roig", "Roberto Covino" ]
Workshop/AI4Science
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=peoYe7ErPt
@inproceedings{ neeser2023fsscore, title={{FS}score: A Machine Learning-based Synthetic Feasibility Score Leveraging Human Expertise}, author={Rebecca Neeser and Bruno Correia and Philippe Schwaller}, booktitle={NeurIPS 2023 AI for Science Workshop}, year={2023}, url={https://openreview.net/forum?id=peoYe7ErPt} }
Determining whether a molecule can be synthesized is crucial for many aspects of chemistry and drug discovery, allowing prioritization of experimental work and ranking molecules in de novo design tasks. Existing scoring approaches to assess synthetic feasibility struggle to extrapolate to out-of-distribution chemical spaces or fail to discriminate based on minor differences such as chirality that might be obvious to trained chemists. This work aims to address these limitations by introducing the Focused Synthesizability score (FSscore), which learns to rank structures based on binary preferences using a graph attention network. First, a baseline model trained on an extensive set of reactant-product pairs is established and subsequently fine-tuned with expert human feedback on a chemical space of interest. Fine-tuning on focused datasets improves performance on these chemical scopes over the pre-trained model, which exhibits moderate performance and generalizability. This enables distinguishing hard- from easy-to-synthesize molecules and improving the synthetic accessibility of generative model outputs. On very complex scopes with limited labels, achieving satisfactory gains remains challenging. The FSscore showcases how human expert feedback can be utilized to optimize the assessment of synthetic feasibility for a variety of applications.
FSscore: A Machine Learning-based Synthetic Feasibility Score Leveraging Human Expertise
[ "Rebecca Manuela Neeser", "Bruno Correia", "Philippe Schwaller" ]
Workshop/AI4Science
2312.12737
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=pO9lPxF8YA
@inproceedings{ wang2023bilevel, title={Bi-level Graphs for Cellular Pattern Discovery}, author={Zhenzhen Wang and Aleksander S. Popel and Jeremias Sulam}, booktitle={NeurIPS 2023 AI for Science Workshop}, year={2023}, url={https://openreview.net/forum?id=pO9lPxF8YA} }
The tumor microenvironment is widely recognized for its central role in driving cancer progression and influencing prognostic outcomes. Despite extensive research efforts dedicated to characterizing this complex and heterogeneous environment, considerable challenges persist. In this study, we introduce a novel data-driven approach for identifying tumor microenvironment patterns that, we show, are closely tied to patient prognoses. Our methodology relies on the construction of a bi-level graph model: (i) a cellular graph, which models the intricate tumor microenvironments, and (ii) a population graph that captures inter-patient similarities, given their respective cellular graphs, by means of a soft Weisfeiler-Lehman kernel. This systematic integration of information across different scales enables us to identify patient subgroups exhibiting unique prognoses while unveiling certain tumor microenvironment patterns that characterize them. We demonstrate our approach in a cohort of breast cancer patients, and identify crucial tumor microenvironment patterns associated with patient prognosis. Our study provides valuable insights into the prognostic implications of the breast tumor microenvironment, and this methodology holds the potential to analyze other cancers.
Bi-level Graphs for Cellular Pattern Discovery
[ "Zhenzhen Wang", "Aleksander S. Popel", "Jeremias Sulam" ]
Workshop/AI4Science
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=oRP132De46
@inproceedings{ tripp2023retrofallback, title={Retro-fallback: retrosynthetic planning in an uncertain world}, author={Austin Tripp and Krzysztof Maziarz and Sarah Lewis and Marwin Segler and Jos{\'e} Miguel Hern{\'a}ndez-Lobato}, booktitle={NeurIPS 2023 AI for Science Workshop}, year={2023}, url={https://openreview.net/forum?id=oRP132De46} }
Retrosynthesis is the task of proposing a series of chemical reactions to create a desired molecule from simpler, buyable molecules. While previous works have proposed algorithms to find optimal solutions for a range of metrics (e.g. shortest, lowest-cost), these works generally overlook the fact that we have imperfect knowledge of the space of possible reactions, meaning plans created by the algorithm may not work in a laboratory. In this paper we propose a novel formulation of retrosynthesis in terms of stochastic processes to account for this uncertainty. We then propose a novel greedy algorithm called retro-fallback which maximizes the probability that at least one synthesis plan can be executed in the lab. Using in-silico benchmarks we demonstrate that retro-fallback generally produces better sets of synthesis plans than the popular MCTS and retro* algorithms. We encourage the reader to view the full version of this paper at https://arxiv.org/abs/2310.09270.
Retro-fallback: retrosynthetic planning in an uncertain world
[ "Austin Tripp", "Krzysztof Maziarz", "Sarah Lewis", "Marwin Segler", "José Miguel Hernández-Lobato" ]
Workshop/AI4Science
2310.09270
[ "https://github.com/austint/retro-fallback-iclr24" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=oIMhS5n0Qz
@inproceedings{ soares2023a, title={A Framework for Toxic {PFAS} Replacement based on {GF}lowNet and Chemical Foundation Model}, author={Eduardo Soares and Flaviu Cipcigan and Dmitry Zubarev and Emilio Vital Brazil}, booktitle={NeurIPS 2023 AI for Science Workshop}, year={2023}, url={https://openreview.net/forum?id=oIMhS5n0Qz} }
Per- and polyfluoroalkyl substances (PFAS) are a broad class of molecules used in almost every sector of industry and consumer goods. PFAS exhibit highly desirable properties, such as high durability, water repellence, or high acidity, that are difficult to match. As a side effect, PFAS persist in the environment and have detrimental effects on human health. Epidemiological research has linked PFAS exposure to chronic health conditions, including dyslipidemia, cardiometabolic disorders, liver damage, and hypercholesterolemia. Recently, public health agencies significantly strengthened regulations on the use of PFAS. Therefore, alternatives are needed to maintain the pace of technological developments in multiple areas that traditionally relied on PFAS. To support the discovery of alternatives, we introduce MatGFN-PFAS, an AI system that generates PFAS replacements. We build MatGFN-PFAS using Generative Flow Networks (GFlowNets) for generation and a Chemical Language Model (MolFormer) for property prediction. We evaluate MatGFN-PFAS by exploring potential replacements of PFAS superacids, defined as molecules with negative pKa, that are critical for the semiconductor industry. It might be challenging to eliminate PFAS superacids entirely as a class due to the strong constraints on their functional performance. The proposed approach aims to account for this possibility and enables the generation of safer PFAS superacids as well. We evaluate two design strategies: 1) using Tversky similarity to design molecules similar to a target PFAS and 2) directly generating molecules with negative pKa and low toxicity. In this paper, we studied 6 PFAS molecules that have the structure defined as $R-CF_{2}OCF_{2}-R'$. For the given query PFAS SMILES $CC1CC(CC(F)(F)C(F)(F)OC(F)(F)C(F)(F)S(=O)(=O)O)OC1=O$, the MatGFN-PFAS system was able to generate a candidate with very low toxicity, $LD50 = 7304.23$, strong acidity, $pKa = -1.92$, and a high similarity score, $89.32 \%$, to the studied PFAS molecule. Results demonstrated that the proposed MatGFN-PFAS was able to consistently generate replacement molecules satisfying all of the aforementioned constraints. The resulting datasets for this ongoing study are available at https://ibm.box.com/v/MatGFN-PFAS-generated-datasets.
A Framework for Toxic PFAS Replacement based on GFlowNet and Chemical Foundation Model
[ "Eduardo Soares", "Flaviu Cipcigan", "Dmitry Zubarev", "Emilio Vital Brazil" ]
Workshop/AI4Science
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=mn3wfSnWYG
@inproceedings{ rossi2023fast, title={Fast and Scalable Inference of Dynamical Systems via Integral Matching}, author={Baptiste Rossi and Dimitris Bertsimas}, booktitle={NeurIPS 2023 AI for Science Workshop}, year={2023}, url={https://openreview.net/forum?id=mn3wfSnWYG} }
We present a novel approach to identifying parameters of nonlinear Ordinary Differential Equations (ODEs). This method, which is based on collocation methods, enables the direct identification of parameters from time series data by matching the integral of the dynamics with an interpolation of the trajectory. This method is distinct from the existing literature in that it does not require ODE solvers or an estimate of the time derivative. Furthermore, batching strategies, such as time subintervals and components of the state, are proposed to improve scalability, thus providing a fast and highly parallel method to evaluate gradients, and faster convergence than adjoint methods. The effectiveness of the method is demonstrated on chaotic systems, with speed-ups of three orders of magnitude compared to adjoint methods, and its robustness to observational noise and data availability is assessed.
Fast and Scalable Inference of Dynamical Systems via Integral Matching
[ "Baptiste T Rossi", "Dimitris Bertsimas" ]
Workshop/AI4Science
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=maH0RApBap
@inproceedings{ fu2023mofdiff, title={{MOFD}iff: Coarse-grained Diffusion for Metal-Organic Framework Design}, author={Xiang Fu and Tian Xie and Andrew Rosen and Tommi Jaakkola and Jake Smith}, booktitle={NeurIPS 2023 AI for Science Workshop}, year={2023}, url={https://openreview.net/forum?id=maH0RApBap} }
Metal-organic frameworks (MOFs) are of immense interest in applications such as gas storage and carbon capture due to their exceptional porosity and tunable chemistry. Their modular nature has enabled the use of template-based methods to generate hypothetical MOFs by combining molecular building blocks in accordance with known network topologies. However, the ability of these methods to identify top-performing MOFs is often hindered by the limited diversity of the resulting chemical space. In this work, we propose MOFDiff: a coarse-grained (CG) diffusion model that generates CG MOF structures through a denoising diffusion process over the coordinates and identities of the building blocks. The all-atom MOF structure is then determined through a novel assembly algorithm. As the diffusion model generates 3D MOF structures by predicting scores in E(3), we employ equivariant graph neural networks that respect the permutational and roto-translational symmetries. We comprehensively evaluate our model's capability to generate valid and novel MOF structures and its effectiveness in designing outstanding MOF materials for carbon capture applications with molecular simulations.
MOFDiff: Coarse-grained Diffusion for Metal-Organic Framework Design
[ "Xiang Fu", "Tian Xie", "Andrew Scott Rosen", "Tommi Jaakkola", "Jake Allen Smith" ]
Workshop/AI4Science
2310.10732
[ "" ]
https://huggingface.co/papers/2310.10732
1
0
0
5
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=mUQrw0rIZN
@inproceedings{ collins2023rapid, title={Rapid Prediction of Two-dimensional Airflow in an Operating Room using Scientific Machine Learning}, author={Gary Collins and Alexander New and Ryan A. Darragh and Brian E. Damit and Christopher D. Stiles}, booktitle={NeurIPS 2023 AI for Science Workshop}, year={2023}, url={https://openreview.net/forum?id=mUQrw0rIZN} }
We consider the problem of using scientific machine learning (SciML) to rapidly predict solutions to systems of nonlinear partial differential equations (PDEs) defined over complex geometries. In particular, we focus on modeling how airflow in operating rooms (ORs) is affected as the position of an object within the OR varies. We develop data-driven and physics-informed operator-learning models based on the deep operator network (DeepONet) architecture. The DeepONet models are able to accurately and rapidly predict airflow solutions to novel parameter configurations, and they surpass the accuracy of a random forest (RF) baseline. Interestingly, we find that physics-informed regularization (PIR) does not enhance model accuracy, partially because of misspecification of the physical prior compared to the data’s governing equations. Existing SciML models struggle in predicting flow when complex geometries determine localized behavior.
Rapid Prediction of Two-Dimensional Airflow in an Operating Room using Scientific Machine Learning
[ "Gary Lynn Collins", "Alexander New", "Ryan A. Darragh", "Brian E. Damit", "Christopher D. Stiles" ]
Workshop/AI4Science
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=mJQHG9dGKe
@inproceedings{ dahlinger2023latent, title={Latent Task-Specific Graph Network Simulators}, author={Philipp Dahlinger and Niklas Freymuth and Tai Hoang and Michael Volpp and Gerhard Neumann}, booktitle={NeurIPS 2023 AI for Science Workshop}, year={2023}, url={https://openreview.net/forum?id=mJQHG9dGKe} }
Simulating dynamic physical interactions is a critical challenge across multiple scientific domains, with applications ranging from robotics to material science. For mesh-based simulations, Graph Network Simulators (GNSs) pose an efficient alternative to traditional physics-based simulators. Their inherent differentiability and speed make them particularly well-suited for inverse design problems. Yet, adapting to new tasks from limited available data is an important aspect for real-world applications that current methods struggle with. We frame mesh-based simulation as a meta-learning problem and use a recent Bayesian meta-learning method to improve GNSs' adaptability to new scenarios by leveraging context data and handling uncertainties. Our approach, the latent task-specific graph network simulator, uses non-amortized task posterior approximations to sample latent descriptions of unknown system properties. Additionally, we leverage movement primitives for efficient full trajectory prediction, effectively addressing the issue of accumulating errors encountered by previous auto-regressive methods. We validate the effectiveness of our approach through various experiments, performing on par with or better than established baseline methods. Movement primitives further allow us to accommodate various types of context data, as demonstrated through the utilization of point clouds during inference. By combining GNSs with meta-learning, we bring them closer to real-world applicability, particularly in scenarios with smaller datasets.
Latent Task-Specific Graph Network Simulators
[ "Philipp Dahlinger", "Niklas Freymuth", "Tai Hoang", "Michael Volpp", "Gerhard Neumann" ]
Workshop/AI4Science
2311.05256
[ "https://github.com/philippdahlinger/ltsgns_ai4science" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=mISayy7DPI
@inproceedings{ imel2023citationsimilarity, title={Citation-Similarity Relationships in Astrophysics Literature}, author={Nathaniel Imel and Zachary Hafen}, booktitle={NeurIPS 2023 AI for Science Workshop}, year={2023}, url={https://openreview.net/forum?id=mISayy7DPI} }
We report a novel observation about which scientific publications are cited more frequently: those that are more textually similar to pre-existing publications. Using bag-of-words document embeddings, we analyze quantitative trends for a large sample of publication abstracts in the field of astrophysics ($N \sim 300,000$). When new publications are ranked by how many similar publications already exist in their neighborhood, the median number of citations per year that the upper 50$^{\rm th}$ percentile receives is $\sim 1.6$ times the median of the lower 50$^{\rm th}$ percentile. When new publications are ranked by an alternative metric of dissimilarity to neighbors, the median citations per year that the upper 50$^{\rm th}$ percentile receives is $\sim 0.74$ times the median of the lower 50$^{\rm th}$ percentile. We discuss a number of hypotheses that could explain these citation-similarity relationships relevant to the science of science.
Citation-Similarity Relationships in Astrophysics Literature
[ "Nathaniel Imel", "Zachary Hafen" ]
Workshop/AI4Science
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=kZQ0iFqhEQ
@inproceedings{ na2023electronderived, title={Electron-Derived Molecular Representation Learning for Real-World Molecular Physics}, author={Gyoung S. Na and Chanyoung Park}, booktitle={NeurIPS 2023 AI for Science Workshop}, year={2023}, url={https://openreview.net/forum?id=kZQ0iFqhEQ} }
Various representation learning methods for molecular structures have been devised to accelerate data-driven drug and materials discovery. However, the representation capabilities of existing methods are essentially limited to atom-level information, which is not sufficient to describe real-world molecular physics. Although electron-level information can provide fundamental knowledge about chemical compounds beyond the atom-level information, obtaining electron-level information for real-world molecules is computationally impractical and sometimes infeasible. We propose a new method for learning electron-derived molecular representations without additional computation costs by transferring pre-calculated electron-level information about small molecules to large molecules of interest. The proposed method achieved state-of-the-art prediction accuracy on extensive benchmark datasets containing experimentally observed molecular physics.
Electron-Derived Molecular Representation Learning for Real-World Molecular Physics
[ "Gyoung S. Na", "Chanyoung Park" ]
Workshop/AI4Science
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=kSqEwvApQy
@inproceedings{ xu2023coupling, title={Coupling Semi-supervised Learning with Reinforcement Learning for Better Decision Making --- An application to Cryo-{EM} Data Collection}, author={Ziping Xu and Quanfu Fan and Yilai Li and Emma Lee and John Cohn and Ambuj Tewari and Seychelle Vos and Michael Cianfrocco}, booktitle={NeurIPS 2023 AI for Science Workshop}, year={2023}, url={https://openreview.net/forum?id=kSqEwvApQy} }
We consider a semi-supervised Reinforcement Learning (RL) approach that takes inputs from a perception model. The performance of such an approach can be significantly limited by the quality of the perception model in the low-labeled-data regime. We propose a novel iterative framework that simultaneously couples and improves the training of both the RL agent and the perception model. The perception model is trained on pseudo labels generated from the trajectories of a trained RL agent, on the premise that the decision model can correct errors made by the perception model. We apply the framework to cryo-electron microscopy (cryo-EM) data collection, whose goal is to find as many high-quality cryo-EM micrographs as possible by navigating across different magnification levels. Our proposed method significantly outperforms various baseline methods in terms of both RL rewards and the accuracy of the perception model. We further provide theoretical insights into the benefits of coupling the decision model and the perception model by showing that RL-generated pseudo labels are biased towards localization, which aligns with the underlying data-generating mechanism. Our iterative framework, which couples both sides of semi-supervised RL, can be applied to a wide range of sequential decision-making tasks when labeled data is limited.
Coupling Semi-supervised Learning with Reinforcement Learning for Better Decision Making — An application to Cryo-EM Data Collection
[ "Ziping Xu", "Quanfu Fan", "Yilai Li", "Emma Rose Lee", "John Maxwell Cohn", "Ambuj Tewari", "Seychelle M Vos", "Michael Cianfrocco" ]
Workshop/AI4Science
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster