Dataset schema (column dtype and observed range):

| column | dtype | range |
| --- | --- | --- |
| bibtex_url | null | - |
| proceedings | string | length 42-42 |
| bibtext | string | length 197-848 |
| abstract | string | length 303-3.45k |
| title | string | length 10-159 |
| authors | sequence | length 1-34 |
| id | string (classes) | 44 values |
| arxiv_id | string | length 0-10 |
| GitHub | sequence | length 1-1 |
| paper_page | string (classes) | 899 values |
| n_linked_authors | int64 | -1 to 13 |
| upvotes | int64 | -1 to 109 |
| num_comments | int64 | -1 to 13 |
| n_authors | int64 | -1 to 92 |
| Models | sequence | length 0-100 |
| Datasets | sequence | length 0-19 |
| Spaces | sequence | length 0-100 |
| old_Models | sequence | length 0-100 |
| old_Datasets | sequence | length 0-19 |
| old_Spaces | sequence | length 0-100 |
| paper_page_exists_pre_conf | int64 | 0 to 1 |
| type | string (classes) | 2 values |
null
https://openreview.net/forum?id=dHmAhYu89E
@inproceedings{ mccarter2023inverse, title={Inverse distance weighting attention}, author={Calvin McCarter}, booktitle={Associative Memory {\&} Hopfield Networks in 2023}, year={2023}, url={https://openreview.net/forum?id=dHmAhYu89E} }
We report the effects of replacing the scaled dot-product (within softmax) attention with the negative-log of Euclidean distance. This form of attention simplifies to inverse distance weighting interpolation. Used in simple one hidden layer networks and trained with vanilla cross-entropy loss on classification problems, it tends to produce a key matrix containing prototypes and a value matrix with corresponding logits. We also show that the resulting interpretable networks can be augmented with manually-constructed prototypes to perform low-impact handling of special cases.
Inverse distance weighting attention
[ "Calvin McCarter" ]
Workshop/AMHN
2310.18805
[ "https://github.com/calvinmccarter/idw-attention" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
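The reduction this abstract describes is easy to check numerically. Below is a minimal numpy sketch (our own illustration, not the authors' code): taking the softmax logits to be the negative log of the squared Euclidean distance makes the attention weights collapse exactly to inverse distance weighting.

```python
import numpy as np

def idw_attention(q, K, V, eps=1e-12):
    """Attention with softmax logits set to the negative log of the squared
    Euclidean distance between the query and each key (sketch; names ours)."""
    d2 = np.sum((K - q) ** 2, axis=1) + eps  # squared distance to each key
    w = np.exp(-np.log(d2))                  # softmax numerator for -log d^2
    w /= w.sum()                             # softmax(-log d^2)
    return w @ V

def idw_weights(q, K, eps=1e-12):
    """The same weights written directly as inverse distance weighting."""
    inv = 1.0 / (np.sum((K - q) ** 2, axis=1) + eps)
    return inv / inv.sum()

# softmax(-log d_i^2) = (1/d_i^2) / sum_j (1/d_j^2), so both routes agree:
rng = np.random.default_rng(0)
q, K, V = rng.normal(size=8), rng.normal(size=(5, 8)), rng.normal(size=(5, 3))
assert np.allclose(idw_attention(q, K, V), idw_weights(q, K) @ V)
```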
null
https://openreview.net/forum?id=byxEgvdtwO
@inproceedings{ sch{\"a}fl2023modern, title={Modern Hopfield Networks as Memory for Iterative Learning on Tabular Data}, author={Bernhard Sch{\"a}fl and Lukas Gruber and Angela Bitto-Nemling and Sepp Hochreiter}, booktitle={Associative Memory {\&} Hopfield Networks in 2023}, year={2023}, url={https://openreview.net/forum?id=byxEgvdtwO} }
While Deep Learning excels in structured data as encountered in vision and natural language processing, it has failed to meet expectations on tabular data. For tabular data, Support Vector Machines (SVMs), Random Forests, and Gradient Boosting are the best performing techniques. We suggest "Hopular", a novel Deep Learning architecture for medium- and small-sized datasets, where each layer is equipped with continuous modern Hopfield networks. Hopular's novelty is that every layer can directly access the original input as well as the whole training set via stored data in the Hopfield networks. Therefore, Hopular can step-wise update its current model and the resulting prediction at every layer, like standard iterative learning algorithms. In experiments on small-sized tabular datasets with fewer than 1,000 samples, Hopular surpasses Gradient Boosting, Random Forests, SVMs, and in particular several Deep Learning methods. In experiments on medium-sized tabular data with about 10,000 samples, Hopular outperforms XGBoost, CatBoost, LightGBM and a state-of-the-art Deep Learning method designed for tabular data. Thus, Hopular is a strong alternative to these methods on tabular data.
Modern Hopfield Networks as Memory for Iterative Learning on Tabular Data
[ "Bernhard Schäfl", "Lukas Gruber", "Angela Bitto-Nemling", "Sepp Hochreiter" ]
Workshop/AMHN
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=bv2szxARh2
@inproceedings{ negri2023random, title={Random Feature Hopfield Networks generalize retrieval to previously unseen examples}, author={Matteo Negri and Clarissa Lauditi and Gabriele Perugini and Carlo Lucibello and Enrico Maria Malatesta}, booktitle={Associative Memory {\&} Hopfield Networks in 2023}, year={2023}, url={https://openreview.net/forum?id=bv2szxARh2} }
It has recently been shown that, when a Hopfield Network stores examples generated as superpositions of random features, new attractors appear in the model corresponding to such features. In this work we extend that result to superpositions of a finite number of features and show numerically that the network remains capable of learning the features. Furthermore, we reveal that the network also develops attractors corresponding to previously unseen examples generated with the same set of features. We support this result with a simple signal-to-noise argument and conjecture a phase diagram.
Random Feature Hopfield Networks generalize retrieval to previously unseen examples
[ "Matteo Negri", "Clarissa Lauditi", "Gabriele Perugini", "Carlo Lucibello", "Enrico Maria Malatesta" ]
Workshop/AMHN
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=bNBMnQXRJU
@inproceedings{ davydov2023retrieving, title={Retrieving \$k\$-Nearest Memories with Modern Hopfield Networks}, author={Alexander Davydov and Sean Jaffe and Ambuj Singh and Francesco Bullo}, booktitle={Associative Memory {\&} Hopfield Networks in 2023}, year={2023}, url={https://openreview.net/forum?id=bNBMnQXRJU} }
Modern continuous Hopfield networks (MCHNs) are a variant of Hopfield networks that have greater storage capacity and have been shown to have connections to the attention mechanism in transformers. In this paper, we propose a variant of MCHNs, which we call k-Hopfield layers, which are the first Hopfield-type networks that retrieve the k nearest memories to a given input. k-Hopfield layers are differentiable and may serve as (i) a soft approach to k-nearest neighbors, (ii) an augmented form of memory in deep learning architectures, and (iii) an alternative to multihead attention in transformers. We empirically demonstrate that increasing k aids in correctly reconstructing a corrupted input. We show that using a k-Hopfield layer as a replacement for multihead attention yields comparable performance in small vision transformers while requiring fewer parameters.
Retrieving k-Nearest Memories with Modern Hopfield Networks
[ "Alexander Davydov", "Sean Jaffe", "Ambuj Singh", "Francesco Bullo" ]
Workshop/AMHN
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
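The paper's exact construction is not spelled out in the abstract, but the idea of a soft k-nearest-memory readout can be sketched as follows (a hypothetical illustration, assuming a similarity-softmax restricted to the top k memories; the function and parameter names are ours):

```python
import numpy as np

def k_hopfield_retrieve(q, M, k=3, beta=8.0):
    """Hypothetical k-nearest-memory readout in the spirit of a k-Hopfield
    layer: a softmax over similarities, restricted to the k best-matching
    stored memories (rows of M). The actual construction may differ, and a
    fully differentiable version would need a smooth relaxation of the
    hard top-k step."""
    sims = M @ q
    topk = np.argsort(sims)[-k:]                     # k closest memories
    w = np.exp(beta * (sims[topk] - sims[topk].max()))
    w /= w.sum()
    return w @ M[topk]                               # convex combination
```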
null
https://openreview.net/forum?id=XnAZwqF0iv
@inproceedings{ dohmatob2023a, title={A Different Route to Exponential Storage Capacity}, author={Elvis Dohmatob}, booktitle={Associative Memory {\&} Hopfield Networks in 2023}, year={2023}, url={https://openreview.net/forum?id=XnAZwqF0iv} }
Recent developments have sought to overcome the inherent limitations of traditional associative memory models, like Hopfield networks, where storage capacity scales linearly with input dimension. In this paper, we present a new extension of Hopfield networks that grants precise control over inter-neuron interactions while allowing control of the level of connectivity within the network. This versatile framework encompasses a variety of designs, including classical Hopfield networks, models with polynomial activation functions, and simplicial Hopfield networks as particular cases. Remarkably, a specific instance of our construction, resulting in a new self-attention mechanism, is characterized by quasi-exponential storage capacity and a sparse network structure, aligning with biological plausibility.
A Different Route to Exponential Storage Capacity
[ "Elvis Dohmatob" ]
Workshop/AMHN
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=XTOD2M980W
@inproceedings{ karuvally2023variable, title={Variable Memory: Beyond the Fixed Memory Assumption in Memory Modeling}, author={Arjun Karuvally and Hava T Siegelmann}, booktitle={Associative Memory {\&} Hopfield Networks in 2023}, year={2023}, url={https://openreview.net/forum?id=XTOD2M980W} }
Memory models play a pivotal role in elucidating the mechanisms through which biological and artificial neural networks store and retrieve information. Traditionally, these models assume that memories are pre-determined, fixed before inference, and stored within synaptic interactions. Yet, neural networks can also dynamically store memories available only during inference within their activity. This capacity to bind and manipulate information as variables enhances the generalization capabilities of neural networks. Our research introduces and explores the concept of "variable memories." This approach extends the conventional sequence memory models, enabling information binding directly in network activity. By adopting this novel memory perspective, we unveil the underlying computational processes in the learned weights of RNNs on simple algorithmic tasks -- a fundamental question in the mechanistic understanding of neural networks. Our results underscore the imperative to evolve memory models beyond the fixed memory assumption towards more dynamic and flexible memory systems to further our understanding of neural information processing.
Variable Memory: Beyond the Fixed Memory Assumption in Memory Modeling
[ "Arjun Karuvally", "Hava T Siegelmann" ]
Workshop/AMHN
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=WjHYgEfXiV
@inproceedings{ belhadi2023biologicallyinspired, title={Biologically-inspired adaptive learning in the Hopfield-network based self-optimization model}, author={Aisha Belhadi}, booktitle={Associative Memory {\&} Hopfield Networks in 2023}, year={2023}, url={https://openreview.net/forum?id=WjHYgEfXiV} }
A significant portion of the recent growth of artificial intelligence can be attributed to the development of deep learning systems, going hand in hand with the accumulation of Big Data. It therefore makes sense that, most often, these systems are based on supervised or reinforcement learning using massive datasets, with reward- or error-based rules for training. Though these techniques have achieved impressive levels of accuracy and functionality, rivaling human cognition in some areas, they seem to work very differently from living systems, which can learn, make associations, and adapt with very sparse data, efficient use of energy, and comparatively few training iterations. In the world of machine learning, Hopfield networks, with an architecture that allows for unsupervised learning, associative memory, scaling, and modularity, offer an alternative way of looking at artificial intelligence, one that has the potential to hew closer to biological forms of learning. This work distills some mechanisms of adaptation in biological systems, including metaplasticity, homeostasis, and inhibition, and proposes ways in which these features can be incorporated into Hopfield networks through adjustments to the learning rate, modularity, and activation rule. The overall aim is to develop deep learning tools that can recapitulate the advantages of biological systems, and to have a computational method that can plausibly model a wide range of living and adaptive systems of varying levels of complexity.
Biologically-inspired adaptive learning in the Hopfield-network based self-optimization model
[ "Aisha Belhadi" ]
Workshop/AMHN
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=WWTOAKAczk
@inproceedings{ mansingh2023how, title={How Robust Are Energy-Based Models Trained With Equilibrium Propagation?}, author={Siddharth Mansingh and Michal Kucer and Garrett T. Kenyon and Juston Moore and Michael Teti}, booktitle={Associative Memory {\&} Hopfield Networks in 2023}, year={2023}, url={https://openreview.net/forum?id=WWTOAKAczk} }
Deep neural networks (DNNs) are easily fooled by adversarial perturbations that are imperceptible to humans. Adversarial training, a process where adversarial examples are added to the training set, is the current state-of-the-art defense against adversarial attacks, but it lowers the model's accuracy on clean inputs, is computationally expensive, and offers less robustness to natural noise. In contrast, energy-based models (EBMs), which were designed for efficient implementation in neuromorphic hardware and physical systems, incorporate feedback connections from each layer to the previous layer, yielding a recurrent, deep-attractor architecture which we hypothesize should make them naturally robust. Our work is the first to explore the robustness of EBMs to both natural corruptions and adversarial attacks, which we do using the CIFAR-10 and CIFAR-100 datasets. We demonstrate that EBMs are more robust than transformers and display comparable robustness to adversarially-trained DNNs on white-box, black-box, and natural perturbations without sacrificing clean accuracy, and without the need for adversarial training or additional training techniques.
How Robust Are Energy-Based Models Trained With Equilibrium Propagation?
[ "Siddharth Mansingh", "Michal Kucer", "Garrett T. Kenyon", "Juston Moore", "Michael Teti" ]
Workshop/AMHN
2401.11543
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
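For readers unfamiliar with the training procedure named in the title, here is a minimal numpy sketch of vanilla equilibrium propagation on a symmetric Hopfield-style energy (Scellier-Bengio form, assuming hard-sigmoid units; the paper's EBMs are of this family but larger, so treat this as an illustration, not their implementation):

```python
import numpy as np

def rho(u):   # hard-sigmoid activation, standard in the EP literature
    return np.clip(u, 0.0, 1.0)

def drho(u):
    return ((u > 0) & (u < 1)).astype(float)

def relax(s, W, b, x, n_in, n_out, y=None, beta=0.0, steps=80, lr=0.2):
    """Settle the state s to a fixed point of the Hopfield-style energy
    E = 0.5||s||^2 - 0.5 rho(s)^T W rho(s) - b^T rho(s), with the first
    n_in units clamped to x; beta > 0 adds the nudged-phase loss term."""
    for _ in range(steps):
        g = s - drho(s) * (W @ rho(s) + b)          # dE/ds (W symmetric)
        if beta > 0.0:
            g[-n_out:] += beta * (s[-n_out:] - y)   # nudge outputs toward y
        s = s - lr * g
        s[:n_in] = x                                # inputs stay clamped
    return s

def ep_update(W, s_free, s_nudged, beta, lr_w=0.05):
    """Contrastive EP weight update:
    dW ~ (rho rho^T at the nudged fixed point minus at the free one) / beta."""
    rf, rn = rho(s_free), rho(s_nudged)
    dW = (np.outer(rn, rn) - np.outer(rf, rf)) / beta
    np.fill_diagonal(dW, 0.0)                       # no self-connections
    return W + lr_w * 0.5 * (dW + dW.T)             # keep W symmetric
```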
null
https://openreview.net/forum?id=Vmndp6HnfR
@inproceedings{ goemaere2023accelerating, title={Accelerating Hierarchical Associative Memory: A Deep Equilibrium Approach}, author={C{\'e}dric Goemaere and Johannes Deleu and Thomas Demeester}, booktitle={Associative Memory {\&} Hopfield Networks in 2023}, year={2023}, url={https://openreview.net/forum?id=Vmndp6HnfR} }
Hierarchical Associative Memory models have recently been proposed as a versatile extension of continuous Hopfield networks. In order to facilitate future research on such models, especially at scale, we focus on increasing their simulation efficiency on digital hardware. In particular, we propose two strategies to speed up memory retrieval in these models, which corresponds to their use at inference, but is equally important during training. First, we show how they can be cast as Deep Equilibrium Models, which allows using faster and more stable solvers. Second, inspired by earlier work, we show that alternating optimization of the even and odd layers accelerates memory retrieval by a factor close to two. Combined, these two techniques allow for a much faster energy minimization, as shown in our proof-of-concept experimental results. The code is available at https://github.com/cgoemaere/hamdeq.
Accelerating Hierarchical Associative Memory: A Deep Equilibrium Approach
[ "Cédric Goemaere", "Johannes Deleu", "Thomas Demeester" ]
Workshop/AMHN
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=VOSrMFgWdL
@inproceedings{ bhandarkar2023sequential, title={Sequential Learning and Retrieval in a Sparse Distributed Memory: The K-winner Modern Hopfield Network}, author={Shaunak Bhandarkar and James Lloyd McClelland}, booktitle={Associative Memory {\&} Hopfield Networks in 2023}, year={2023}, url={https://openreview.net/forum?id=VOSrMFgWdL} }
Many autoassociative memory models rely on a localist framework, using a neuron or slot for each memory. However, neuroscience research suggests that memories depend on sparse, distributed representations over neurons with sparse connectivity. Accordingly, we extend a canonical localist memory model---the modern Hopfield network (MHN)---to a distributed variant called the K-winner modern Hopfield network, equating the number of synaptic parameters (weights) in the localist and K-winner variants. We study both models' abilities to reconstruct once-presented patterns organized into long presentation sequences, updating the parameters of the best-matching memory neuron (or k best neurons) as each new pattern is presented. We find that K-winner MHNs exhibit superior retention of older memories.
Sequential Learning and Retrieval in a Sparse Distributed Memory: The K-winner Modern Hopfield Network
[ "Shaunak Bhandarkar", "James Lloyd McClelland" ]
Workshop/AMHN
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
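A minimal sketch of the storage/recall scheme the abstract describes, under our own simplifying assumptions (similarity-based winner selection and a mean readout; the paper's update may differ in detail):

```python
import numpy as np

def kwinner_store(W, pattern, k=5, lr=0.3):
    """Hypothetical sequential storage step for a K-winner MHN: the k
    hidden units whose weight vectors best match the incoming pattern move
    toward it; k = 1 recovers the localist best-matching-unit update the
    abstract compares against."""
    winners = np.argsort(W @ pattern)[-k:]
    W[winners] += lr * (pattern - W[winners])
    return W

def kwinner_recall(W, cue, k=5):
    """Reconstruct from a cue as the mean of the k winners' weight vectors
    (a sparse, distributed readout rather than a single slot)."""
    winners = np.argsort(W @ cue)[-k:]
    return W[winners].mean(axis=0)
```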
null
https://openreview.net/forum?id=TNw5KrKppB
@inproceedings{ hoover2023energy, title={Energy Transformer}, author={Benjamin Hoover and Yuchen Liang and Bao Pham and Rameswar Panda and Hendrik Strobelt and Duen Horng Chau and Mohammed J Zaki and Dmitry Krotov}, booktitle={Associative Memory {\&} Hopfield Networks in 2023}, year={2023}, url={https://openreview.net/forum?id=TNw5KrKppB} }
Our work combines aspects of three promising paradigms in machine learning, namely, the attention mechanism, energy-based models, and associative memory. Attention is the powerhouse driving modern deep learning successes, but it lacks clear theoretical foundations. Energy-based models allow a principled approach to discriminative and generative tasks, but the design of the energy functional is not straightforward. At the same time, Dense Associative Memory models or Modern Hopfield Networks have a well-established theoretical foundation, and allow an intuitive design of the energy function. We propose a novel architecture, called the Energy Transformer (or ET for short), that uses a sequence of attention layers that are purposely designed to minimize a specifically engineered energy function, which is responsible for representing the relationships between the tokens. In this work, we introduce the theoretical foundations of ET, explore its empirical capabilities using the image completion task, and obtain strong quantitative results on the graph anomaly detection and graph classification tasks.
Energy Transformer
[ "Benjamin Hoover", "Yuchen Liang", "Bao Pham", "Rameswar Panda", "Hendrik Strobelt", "Duen Horng Chau", "Mohammed J Zaki", "Dmitry Krotov" ]
Workshop/AMHN
2302.07253
[ "https://github.com/zhuergou/energy-transformer-for-graph-anomaly-detection" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=SiTNMzCwQ4
@inproceedings{ herron2023modulating, title={Modulating interactions to control dynamics of neural networks}, author={Lukas Herron and Pablo Sartori and BingKan Xue}, booktitle={Associative Memory {\&} Hopfield Networks in 2023}, year={2023}, url={https://openreview.net/forum?id=SiTNMzCwQ4} }
Sequential retrieval of stored patterns is a fundamental task that can be performed by neural networks. Previous models of sequential retrieval belong to a general class in which the components of the network are controlled by a slow feedback ("input modulation"). In contrast, we introduce a new class of models in which the feedback modifies the interactions among the components ("interaction modulation"). In particular, we study a model in which the symmetric interactions are modulated. We show that this model is not only capable of retrieving dynamic sequences, but it does so more robustly than a canonical model of input modulation. Our model allows retrieval of patterns with different activity levels, is robust to feedback noise, and has a large dynamic capacity. Our results suggest that interaction modulation may be a new paradigm for controlling network dynamics.
Modulating interactions to control dynamics of neural networks
[ "Lukas Herron", "Pablo Sartori", "BingKan Xue" ]
Workshop/AMHN
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=RZmsvaEATv
@inproceedings{ hwang2023generalizable, title={Generalizable Relational Inference with Cognitive Maps in a Hippocampal Model and in Primates}, author={Jaedong Hwang and Sujaya Neupane and Mehrdad Jazayeri and Ila R Fiete}, booktitle={Associative Memory {\&} Hopfield Networks in 2023}, year={2023}, url={https://openreview.net/forum?id=RZmsvaEATv} }
We investigate the role of cognitive maps and hippocampal-entorhinal architecture in a mental navigation (MNAV) task by conducting experiments in humans, monkeys, and neural network models. Humans can generalize their mental navigation performance to untrained start-target landmark pairs in a given landmark sequence and also rapidly adapt to new sequences. The model uses a continuous-time recurrent neural network (CTRNN) for action decisions and a hippocampal-entorhinal model network, MESH (Memory network with Scaffold and Heteroassociation), for encoding and learning maps. The model is first trained on a navigation-to-sample (NTS) task and tested on the MNAV task, where no sensory feedback is available, across five different environments (i.e., landmark sequences). The CTRNN with MESH solves the MNAV task by reconstructing the next image via path integration and vastly outperforms the model with the CTRNN alone. In both the NTS and MNAV tasks, the MESH-CTRNN model shows better generalization to untrained pairs within each environment and faster adaptation to new environments. Like humans, monkeys also exhibit generalization to untrained landmark pairs in the MNAV task. We compared the neural dynamics in monkeys' entorhinal cortex to the dynamics of the CTRNN and found behaviorally relevant periodic signals in both. The study demonstrates the importance of hippocampal cognitive maps in enabling data-efficient and generalizable learning in the brain.
Generalizable Relational Inference with Cognitive Maps in a Hippocampal Model and in Primates
[ "Jaedong Hwang", "Sujaya Neupane", "Mehrdad Jazayeri", "Ila R Fiete" ]
Workshop/AMHN
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=O5Se9wGYbh
@inproceedings{ joshi2023modern, title={Modern Hopfield Network with Local Learning Rules for Class Generalization}, author={Shruti A Joshi and Giri Prashanth and Maksim Bazhenov}, booktitle={Associative Memory {\&} Hopfield Networks in 2023}, year={2023}, url={https://openreview.net/forum?id=O5Se9wGYbh} }
The Modern Hopfield Network (MHN) model, recently introduced as an extension of Hopfield networks, allows the memory capacity to scale non-linearly with the size of the network. In previous works, MHNs have been used to store inputs in their connections and reconstruct them from partial inputs. In this work, we examine whether MHNs can be used for classical classification tasks that require generalization to unseen data from the same class. We developed a Modern Hopfield Network based classifier, with the number of hidden neurons equal to the number of classes in the input data and local learning, that performs at the same accuracy as an MLP on several vision tasks (classification on MNIST, Fashion-MNIST and CIFAR-10). Our approach allows us to perform classification, pattern completion, and noise-robustness analysis, and to examine the representation of individual classes, all within the same network. We identify that temperature determines both accuracy and noise robustness. Overall, in this preliminary report, we propose a simple framework for class generalization using MHNs and demonstrate the feasibility of using MHNs for machine learning tasks that require generalization.
Modern Hopfield Network with Local Learning Rules for Class Generalization
[ "Shruti A Joshi", "Giri Prashanth", "Maksim Bazhenov" ]
Workshop/AMHN
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=MuANyzcyrS
@inproceedings{ wang2023rapid, title={Rapid Learning without Catastrophic Forgetting in the Morris Water Maze}, author={Raymond Wang and Jaedong Hwang and Akhilan Boopathy and Ila R Fiete}, booktitle={Associative Memory {\&} Hopfield Networks in 2023}, year={2023}, url={https://openreview.net/forum?id=MuANyzcyrS} }
Machine learning models typically struggle to swiftly adapt to novel tasks while maintaining proficiency on previously trained tasks. This contrasts starkly with animals, which demonstrate these capabilities easily. The differences between ML models and animals must stem from particular neural architectures and representations for memory and memory-policy interactions. We propose a new task that requires rapid and continual learning, the sequential Morris Water Maze (sWM). Drawing inspiration from biology, we show that 1) a content-addressable heteroassociative memory based on the entorhinal-hippocampal circuit with grid cells that retain knowledge across diverse environments, and 2) a spatially invariant convolutional network architecture for rapid adaptation across unfamiliar environments together perform rapid learning, good generalization, and continual learning without forgetting. Our model simultaneously outperforms ANN baselines from both the continual and few-shot learning contexts. It retains knowledge of past environments while rapidly acquiring the skills to navigate new ones, thereby addressing the seemingly opposing challenges of quick knowledge transfer and sustaining proficiency in previously learned tasks.
Rapid Learning without Catastrophic Forgetting in the Morris Water Maze
[ "Raymond Wang", "Jaedong Hwang", "Akhilan Boopathy", "Ila R Fiete" ]
Workshop/AMHN
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=M7yGTXajq5
@inproceedings{ marin-llobet2023hopfieldenhanced, title={Hopfield-Enhanced Deep Neural Networks for Artifact-Resilient Brain State Decoding}, author={Arnau Marin-Llobet and Arnau Manasanch and Maria V. Sanchez Vives}, booktitle={Associative Memory {\&} Hopfield Networks in 2023}, year={2023}, url={https://openreview.net/forum?id=M7yGTXajq5} }
The study of brain states, ranging from highly synchronous to asynchronous neuronal patterns like the sleep-wake cycle, is fundamental for assessing the brain's spatiotemporal dynamics and their close connection to behavior. However, the development of new techniques to accurately identify them remains a challenge, as recordings are often compromised by the presence of noise, artifacts, and suboptimal recording quality. In this study, we propose a two-stage computational framework combining Hopfield Networks for artifact data preprocessing with Convolutional Neural Networks (CNNs) for classification of brain states in rat neural recordings under different levels of anesthesia. To evaluate the robustness of our framework, we deliberately introduced noise artifacts into the neural recordings. We evaluated our hybrid Hopfield-CNN pipeline by benchmarking it against two comparative models: a standalone CNN handling the same noisy inputs, and another CNN trained and tested on artifact-free data. Performance across various levels of data compression and noise intensities showed that our framework can effectively mitigate artifacts, allowing the model to reach parity with the clean-data CNN at lower noise levels. Although this study mainly benefits small-scale experiments, the findings highlight the necessity for advanced deep learning and Hopfield Network models to improve scalability and robustness in diverse real-world settings.
Hopfield-Enhanced Deep Neural Networks for Artifact-Resilient Brain State Decoding
[ "Arnau Marin-Llobet", "Arnau Manasanch", "Maria V. Sanchez Vives" ]
Workshop/AMHN
2311.03421
[ "https://github.com/arnaumarin/hdnn-artifactbrainstate" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=KwZ43TkKUL
@inproceedings{ stomfai2023multidimensional, title={Multidimensional Hopfield Networks for clustering}, author={Gergely Stomfai and {\L}ukasz Sienkiewicz and Barbara Rychalska}, booktitle={Associative Memory {\&} Hopfield Networks in 2023}, year={2023}, url={https://openreview.net/forum?id=KwZ43TkKUL} }
We present the Multidimensional Hopfield Network (DHN), a natural generalisation of the Hopfield Network. In our theoretical investigations we focus on DHNs with a certain activation function and provide energy functions for them. We conclude that these DHNs converge in finite time and are equivalent to greedy methods that aim to find graph clusterings of locally minimal cuts. We also show that the general framework of DHNs encapsulates several previously known algorithms used for generating graph embeddings and clusterings. Namely, the Cleora graph embedding algorithm, the Louvain method, and Newman's method can all be cast as DHNs with appropriate activation functions and update rules. Motivated by these findings, we provide a generalisation of Newman's method to the multidimensional case.
Multidimensional Hopfield Networks for clustering
[ "Gergely Stomfai", "Łukasz Sienkiewicz", "Barbara Rychalska" ]
Workshop/AMHN
2310.07239
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
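The greedy-min-cut reading of the update can be made concrete. Below is a sketch (ours; the paper's DHN framework makes the equivalences precise, this only illustrates the idea) of one asynchronous sweep in which each node moves to the cluster holding most of its edge weight, greedily reducing the cut:

```python
import numpy as np

def greedy_cut_sweep(A, labels, n_clusters, rng=None):
    """One asynchronous sweep of a multidimensional Hopfield-style update
    on a weighted adjacency matrix A: each node adopts the cluster that
    holds most of its edge weight, which greedily reduces the total weight
    of cut edges. With n_clusters = 2 and +/-1 states this reduces to the
    classical Hopfield update."""
    rng = rng or np.random.default_rng()
    for i in rng.permutation(A.shape[0]):
        gains = [A[i] @ (labels == c) for c in range(n_clusters)]
        labels[i] = int(np.argmax(gains))
    return labels
```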
null
https://openreview.net/forum?id=KEnMXCcB5C
@inproceedings{ wang2023statisticsguided, title={Statistics-guided Associative Memories}, author={Hongzhi Wang and Satyananda Kashyap and Niharika Shimona D'Souza and Tanveer Syeda-mahmood}, booktitle={Associative Memory {\&} Hopfield Networks in 2023}, year={2023}, url={https://openreview.net/forum?id=KEnMXCcB5C} }
Content-associative memories such as Hopfield networks have been studied as a good mathematical model of the auto-associative features in the CA3 region of the hippocampal memory system. Modern Hopfield networks (MHN) are generalizations of the classical Hopfield networks with revised energy functions and update rules to expand storage to exponential capacity. However, they are not yet practical due to spurious metastable states leading to recovery errors during memory recall. In this work, we present a fresh perspective on associative memories using joint co-occurrence statistics, and show that accurate recovery of patterns is possible from a partially-specified query using the maximum likelihood principle. In our formulation, memory retrieval is addressed via estimating the joint conditional probability of the retrieved information given the observed associative information. Unlike previous models that have considered independence of features, we do recovery under the maximal dependency assumption to obtain an upper bound on the joint probability of occurrence of features. We show that this new approximation substantially improves associative memory retrieval accuracy on popular benchmark datasets.
Statistics-guided Associative Memories
[ "Hongzhi Wang", "Satyananda Kashyap", "Niharika Shimona D'Souza", "Tanveer Syeda-mahmood" ]
Workshop/AMHN
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=KCB7lcoo9f
@inproceedings{ serricchio2023daydreaming, title={Daydreaming Hopfield Networks and their surprising effectiveness on correlated data}, author={Ludovica Serricchio and Claudio Chilin and Dario Bocchi and Raffaele Marino and Matteo Negri and Chiara Cammarota and Federico Ricci-Tersenghi}, booktitle={Associative Memory {\&} Hopfield Networks in 2023}, year={2023}, url={https://openreview.net/forum?id=KCB7lcoo9f} }
In order to improve the storage capacity of the Hopfield model, we develop a version of the dreaming algorithm, called daydreaming, that is not destructive and that converges asymptotically to a stationary coupling matrix. When trained on random uncorrelated examples, the model shows optimal performance in terms of the size of the basins of attraction of stored examples and the quality of reconstruction. We also train the daydreaming algorithm on correlated data obtained via the random-features model and argue that it exploits the correlations to further increase the storage capacity and the size of the basins of attraction.
Daydreaming Hopfield Networks and their surprising effectiveness on correlated data
[ "Ludovica Serricchio", "Claudio Chilin", "Dario Bocchi", "Raffaele Marino", "Matteo Negri", "Chiara Cammarota", "Federico Ricci-Tersenghi" ]
Workshop/AMHN
2405.08777
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=JhTn1Lt04U
@inproceedings{ hofmann2023hopfield, title={Hopfield Boosting for Out-of-Distribution Detection}, author={Claus Hofmann and Simon Lucas Schmid and Bernhard Lehner and Daniel Klotz and Sepp Hochreiter}, booktitle={Associative Memory {\&} Hopfield Networks in 2023}, year={2023}, url={https://openreview.net/forum?id=JhTn1Lt04U} }
Out-of-distribution (OOD) detection is crucial for real-world machine learning. Outlier exposure methods, which use auxiliary outlier data, can significantly enhance OOD detection. We present Hopfield Boosting, a boosting technique employing modern Hopfield energy (MHE) to refine the boundary between in-distribution (ID) and OOD data. Our method focuses on challenging outlier examples near the decision boundary, achieving a 40% improvement in FPR95 on CIFAR-10, setting a new OOD detection state-of-the-art with outlier exposure.
Hopfield Boosting for Out-of-Distribution Detection
[ "Claus Hofmann", "Simon Lucas Schmid", "Bernhard Lehner", "Daniel Klotz", "Sepp Hochreiter" ]
Workshop/AMHN
[ "https://github.com/ml-jku/hopfield-boosting" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
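The modern Hopfield energy (MHE) the abstract builds on yields a simple OOD score. A sketch of that score alone (not the full Hopfield Boosting method, whose boosting and outlier-exposure machinery is in the paper):

```python
import numpy as np
from scipy.special import logsumexp

def mhe_ood_score(q, X_id, beta=4.0):
    """Modern-Hopfield-energy style OOD score: the energy of a query q
    against stored in-distribution features X_id (rows). Queries far from
    every stored pattern get high energy and are flagged as OOD."""
    return -logsumexp(beta * (X_id @ q)) / beta
```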
null
https://openreview.net/forum?id=Fhx7nVoCQW
@inproceedings{ bai2023saliencyguided, title={Saliency-Guided Hidden Associative Replay for Continual Learning}, author={Guangji Bai and Qilong Zhao and Xiaoyang Jiang and Liang Zhao}, booktitle={Associative Memory {\&} Hopfield Networks in 2023}, year={2023}, url={https://openreview.net/forum?id=Fhx7nVoCQW} }
Continual Learning (CL) is a burgeoning domain in next-generation AI, focusing on training neural networks over a sequence of tasks akin to human learning. Amongst various strategies, replay-based methods have emerged as preeminent, echoing biological memory mechanisms. However, these methods are memory-intensive, often preserving entire data samples—an approach inconsistent with humans' selective memory retention of salient experiences. While some recent works have explored the storage of only significant portions of data in episodic memory, the inherent nature of partial data necessitates innovative retrieval mechanisms. Addressing these nuances, this paper presents the **S**aliency-Guided **H**idden **A**ssociative **R**eplay for **C**ontinual Learning (**SHARC**). This novel framework synergizes associative memory with replay-based strategies. SHARC primarily archives salient data segments via sparse memory encoding. Importantly, by harnessing associative memory paradigms, it introduces a content-focused memory retrieval mechanism, promising swift and near-perfect recall, bringing CL a step closer to authentic human memory processes. Extensive experimental results demonstrate the effectiveness of our proposed method for various continual learning tasks. Anonymous code can be found at: https://anonymous.4open.science/r/SHARC-6319.
Saliency-Guided Hidden Associative Replay for Continual Learning
[ "Guangji Bai", "Qilong Zhao", "Xiaoyang Jiang", "Liang Zhao" ]
Workshop/AMHN
2310.04334
[ "https://github.com/baithebest/sharc" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=EJmgk8vXMQ
@inproceedings{ xie2023skip, title={Skip Connections Increase the Capacity of Associative Memories in Variable Binding Mechanisms}, author={Yi Xie and Yichen Li and Akshay Rangamani}, booktitle={Associative Memory {\&} Hopfield Networks in 2023}, year={2023}, url={https://openreview.net/forum?id=EJmgk8vXMQ} }
The flexibility of intelligent behavior is fundamentally attributed to the ability to separate and assign structural information from content in sensory inputs. Variable binding is the atomic computation that underlies this ability. In this work, we investigate the implementation of variable binding via pointers of assemblies of neurons, which are sets of excitatory neurons that fire together. The Assembly Calculus is a framework that describes a set of operations to create and modify assemblies of neurons. We focus on the $\texttt{project}$ (which creates assemblies) and $\texttt{reciprocal-project}$ (which performs variable binding) operations and study the capacity of networks in terms of the number of assemblies that can be reliably created and retrieved. We find that assembly calculus networks implemented through Hebbian plasticity resemble associative memories in their structure and behavior. However, for networks with $N$ neurons per brain area, the capacity of variable binding networks ($0.01N$) is an order of magnitude lower than the capacity of assembly creation networks ($0.22N$). To alleviate this drop in capacity, we propose a $\textit{skip connection}$ between the input and variable assembly, which boosts the capacity to a similar order of magnitude ($0.1N$) as the $\texttt{Project}$ operation, while maintaining its biological plausibility.
Skip Connections Increase the Capacity of Associative Memories in Variable Binding Mechanisms
[ "Yi Xie", "Yichen Li", "Akshay Rangamani" ]
Workshop/AMHN
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=E6RCLm6mqr
@inproceedings{ abudy2023minimum, title={Minimum Description Length Hopfield Networks}, author={Matan Abudy and Nur Lan and Emmanuel Chemla and Roni Katzir}, booktitle={Associative Memory {\&} Hopfield Networks in 2023}, year={2023}, url={https://openreview.net/forum?id=E6RCLm6mqr} }
Associative memory architectures are designed for memorization but also offer, through their retrieval method, a form of generalization to unseen inputs: stored memories can be seen as prototypes from this point of view. Focusing on Modern Hopfield Networks (MHN), we show that a large memorization capacity undermines the generalization opportunity. We offer a solution to better optimize this tradeoff. It relies on Minimum Description Length (MDL) to determine during training which memories to store, as well as how many of them.
Minimum Description Length Hopfield Networks
[ "Matan Abudy", "Nur Lan", "Emmanuel Chemla", "Roni Katzir" ]
Workshop/AMHN
2311.06518
[ "https://github.com/matanabudy/mdl-hn" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
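To make the MDL criterion tangible, here is a toy two-part code-length score one could use to decide which, and how many, binary memories to store (illustrative only, under our own coding assumptions; the paper's scheme may differ):

```python
import numpy as np

def two_part_description_length(memories, data):
    """Toy two-part MDL score for +/-1 patterns. Model cost: one bit per
    stored unit. Data cost: bits to name the retrieved memory plus one bit
    per unit the retrieval got wrong."""
    model_cost = memories.size                    # bits for the memories
    index_bits = np.log2(max(len(memories), 2))   # bits to name a memory
    data_cost = 0.0
    for x in data:
        nearest = memories[np.argmax(memories @ x)]
        data_cost += index_bits + np.sum(nearest != x)
    return model_cost + data_cost
```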
null
https://openreview.net/forum?id=B1BL9go65H
@inproceedings{ hoover2023memory, title={Memory in Plain Sight: A Survey of the Uncanny Resemblances between Diffusion Models and Associative Memories}, author={Benjamin Hoover and Hendrik Strobelt and Dmitry Krotov and Judy Hoffman and Zsolt Kira and Duen Horng Chau}, booktitle={Associative Memory {\&} Hopfield Networks in 2023}, year={2023}, url={https://openreview.net/forum?id=B1BL9go65H} }
Diffusion Models (DMs) have recently set state-of-the-art on many generation benchmarks. However, there are myriad ways to describe them mathematically, which makes it difficult to develop a simple understanding of how they work. In this submission, we provide a concise overview of DMs from the perspective of dynamical systems and Ordinary Differential Equations (ODEs) which exposes a mathematical connection to the highly related yet often overlooked class of energy-based models, called Associative Memories (AMs). Energy-based AMs are a theoretical framework that behave much like denoising DMs, but they enable us to directly compute a Lyapunov energy function on which we can perform gradient descent to denoise data. We finally identify the similarities and differences between AMs and DMs, discussing new research directions revealed by the extent of their similarities.
Memory in Plain Sight: A Survey of the Uncanny Resemblances between Diffusion Models and Associative Memories
[ "Benjamin Hoover", "Hendrik Strobelt", "Dmitry Krotov", "Judy Hoffman", "Zsolt Kira", "Duen Horng Chau" ]
Workshop/AMHN
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=AXiMq2k4cb
@inproceedings{ koulischer2023exploring, title={Exploring the Temperature-Dependent Phase Transition in Modern Hopfield Networks}, author={Felix Koulischer and C{\'e}dric Goemaere and Tom Van Der Meersch and Johannes Deleu and Thomas Demeester}, booktitle={Associative Memory {\&} Hopfield Networks in 2023}, year={2023}, url={https://openreview.net/forum?id=AXiMq2k4cb} }
The recent discovery of a connection between Transformers and Modern Hopfield Networks (MHNs) has reignited the study of neural networks from a physical energy-based perspective. This paper focuses on the pivotal effect of the inverse temperature hyperparameter $\beta$ on the distribution of energy minima of the MHN. To achieve this, the distribution of energy minima is tracked in a simplified MHN in which equidistant normalised patterns are stored. This network demonstrates a phase transition at a critical temperature $\beta_{\text{c}}$, from a single global attractor towards highly pattern specific minima as $\beta$ is increased. Importantly, the dynamics are not solely governed by the hyperparameter $\beta$ but are instead determined by an effective inverse temperature $\beta_{\text{eff}}$ which also depends on the distribution and size of the stored patterns. Recognizing the role of hyperparameters in the MHN could, in the future, aid researchers in the domain of Transformers to optimise their initial choices, potentially reducing the necessity for time- and energy-expensive hyperparameter fine-tuning.
Exploring the Temperature-Dependent Phase Transition in Modern Hopfield Networks
[ "Felix Koulischer", "Cédric Goemaere", "Tom Van Der Meersch", "Johannes Deleu", "Thomas Demeester" ]
Workshop/AMHN
2311.18434
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
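The transition the abstract tracks can be reproduced with the standard retrieval iteration. A minimal numpy sketch (the update rule is the standard modern-Hopfield one; the experimental setup around it is ours):

```python
import numpy as np

def mhn_retrieve(q, X, beta, steps=50):
    """Iterate the modern-Hopfield retrieval update
    q <- X^T softmax(beta * X q) over stored (row) patterns X. For small
    beta every query settles near the mean of the stored patterns (one
    global attractor); past the critical beta the update sharpens onto the
    single closest pattern (pattern-specific minima)."""
    for _ in range(steps):
        a = beta * (X @ q)
        p = np.exp(a - a.max())
        q = X.T @ (p / p.sum())
    return q
```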
null
https://openreview.net/forum?id=463RlISt9t
@inproceedings{ rasul2023probabilistic, title={Probabilistic Forecasting via Modern Hopfield Networks}, author={Kashif Rasul and Pablo Vicente and Anderson Schneider and Alexander M{\"a}rz}, booktitle={Associative Memory {\&} Hopfield Networks in 2023}, year={2023}, url={https://openreview.net/forum?id=463RlISt9t} }
Hopfield networks, originally introduced as associative memory models, have shown promise in pattern recognition, optimization problems, and tabular datasets. However, their application to time series data has been limited. We introduce a temporal version that leverages the associative memory properties of the Hopfield architecture while accounting for temporal dependencies present in time series data. Our results suggest that the proposed model demonstrates competitive performance compared to state-of-the-art probabilistic forecasting models.
Probabilistic Forecasting via Modern Hopfield Networks
[ "Kashif Rasul", "Pablo Vicente", "Anderson Schneider", "Alexander März" ]
Workshop/AMHN
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=2DS1BDhRz3
@inproceedings{ shan2023errorcorrecting, title={Error-correcting columnar networks: high-capacity memory under sparse connectivity}, author={Haozhe Shan and Ludovica Bachschmid-Romano and Haim Sompolinsky}, booktitle={Associative Memory {\&} Hopfield Networks in 2023}, year={2023}, url={https://openreview.net/forum?id=2DS1BDhRz3} }
Neurons with recurrent connectivity can store memory patterns as attractor states in their dynamics, forming a plausible basis for associative memory in the brain. Classic theoretical results on fully connected recurrent neural networks (RNNs) with binary neurons and Hebbian learning rules state that they can store at most $O\left(N\right)$ memories, where $N$ is the number of neurons. However, under the physiological constraint that neurons are sparsely connected, this capacity is dramatically reduced to $O(K)$, where $K$ is the average degree of connectivity (estimated to be $O(10^{3}\sim10^{4})$ in the mammalian neocortex). This reduced capacity is orders of magnitude smaller than experimental estimates of human memory capacity. In this work, we propose the error-correcting columnar network (ECCN) as a plausible model of how the brain realizes high-capacity memory storage despite sparse connectivity. In the ECCN, neurons are organized into ``columns'': in each memory, neurons from the same column encode the same feature(s), similar to columns in primary sensory areas. A column-synchronizing mechanism utilizes the redundancy of columnar codes to perform error correction. We analytically computed the memory capacity of the ECCN via a dynamical mean-field theory. The results show that for a fixed column size $M$, the capacity grows linearly with network size $N$ until it saturates at $\propto MK$. For optimal choice of $M$ for each $N$, the capacity is $\propto \sqrt{NK}$.
Error-correcting columnar networks: high-capacity memory under sparse connectivity
[ "Haozhe Shan", "Ludovica Bachschmid-Romano", "Haim Sompolinsky" ]
Workshop/AMHN
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=yvWlYTAkl3
@inproceedings{ martin2023modelfree, title={Model-Free Preference Elicitation}, author={Carlos Martin and Craig Boutilier and Ofer Meshi}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=yvWlYTAkl3} }
Elicitation of user preferences is an effective way to improve the quality of recommendations, especially when there is little or no user history. In this setting, a recommendation system interacts with the user by asking questions and recording the responses. Various criteria have been proposed for optimizing the sequence of queries to improve understanding of user preferences, and thereby the quality of downstream recommendations. A compelling approach is \emph{expected value of information (EVOI)}, a Bayesian approach which computes the expected gain in user utility for possible queries. Previous work on EVOI has focused on probabilistic models of user preferences and responses to compute posterior utilities. By contrast, in this work, we explore model-free variants of EVOI which rely on function approximation to obviate the need for strong modeling assumptions. Specifically, we propose to learn a user response model and user utility model from existing data, which is often available in real-world systems, and to use these models in EVOI in place of the probabilistic models. We show promising empirical results on a preference elicitation task.
Model-Free Preference Elicitation
[ "Carlos Martin", "Craig Boutilier", "Ofer Meshi" ]
Workshop/ReALML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
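The EVOI criterion the abstract starts from has a compact Bayesian form. A sketch in a toy discrete-type setting (the paper's model-free variant replaces these probability tables with a learned response model and utility model; the table layout here is our assumption):

```python
import numpy as np

def evoi(query, prior, response_probs, utilities):
    """Expected value of information of one query.
      prior[t]              P(user type t)
      response_probs[q,t,r] P(response r | query q, type t)
      utilities[t,a]        utility of recommending item a to type t"""
    best_now = np.max(prior @ utilities)              # act without asking
    value = 0.0
    for r in range(response_probs.shape[2]):
        joint = prior * response_probs[query, :, r]   # P(type, response)
        p_r = joint.sum()
        if p_r > 0:
            posterior = joint / p_r                   # Bayes update on answer
            value += p_r * np.max(posterior @ utilities)
    return value - best_now                           # expected utility gain
```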
null
https://openreview.net/forum?id=yuJEkWSkTN
@inproceedings{ zhang2023active, title={Active Learning for Iterative Offline Reinforcement Learning}, author={Lan Zhang and Luigi Franco Tedesco and Pankaj Rajak and Youcef Zemmouri and Hakan Brunzell}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=yuJEkWSkTN} }
Offline Reinforcement Learning (RL) has emerged as a promising approach to address real-world challenges where online interactions with the environment are limited, risky, or costly. Although recent advancements produce high-quality policies from offline data, there is currently no systematic methodology to continue to improve them without resorting to online fine-tuning. This paper proposes to repurpose Offline RL to produce a sequence of improving policies, namely, Iterative Offline Reinforcement Learning (IORL). To produce such a sequence, IORL has to cope with imbalanced offline datasets and perform controlled environment exploration. Specifically, we introduce "Return-based Sampling" as a means to selectively prioritize experience from high-return trajectories, and active-learning-driven "Dataset Uncertainty Sampling" to probe state-actions inversely proportional to their density in the dataset. We demonstrate that our proposed approach produces policies that achieve monotonically increasing average returns, from 65.4 to 140.2, in the Atari environment.
Active Learning for Iterative Offline Reinforcement Learning
[ "Lan Zhang", "Luigi Franco Tedesco", "Pankaj Rajak", "Youcef Zemmouri", "Hakan Brunzell" ]
Workshop/ReALML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=yPlkx5u4cg
@inproceedings{ go2023transferable, title={Transferable Candidate Proposal with Bounded Uncertainty}, author={Kyeongryeol Go and Kye-Hyeon Kim}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=yPlkx5u4cg} }
From an empirical perspective, the subset chosen through active learning cannot guarantee an advantage over random sampling when transferred to another model. While this underscores the significance of verifying transferability, experimental designs in previous works often neglected that the informativeness of a data subset can change over model configurations. To tackle this issue, we introduce a new experimental design, coined Candidate Proposal, to find transferable data candidates from which active learning algorithms choose the informative subset. Correspondingly, a data selection algorithm is proposed, namely Transferable candidate proposal with Bounded Uncertainty (TBU), which constrains the pool of transferable data candidates by filtering out the presumably redundant data points based on uncertainty estimation. We verified the validity of TBU on image classification benchmarks, including CIFAR-10/100 and SVHN. When transferred to different model configurations, TBU consistently improves performance of existing active learning algorithms. Our code is available at https://github.com/gokyeongryeol/TBU.
Transferable Candidate Proposal with Bounded Uncertainty
[ "Kyeongryeol Go", "Kye-Hyeon Kim" ]
Workshop/ReALML
2312.04604
[ "https://github.com/gokyeongryeol/tbu" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=y7FZ6KXEvl
@inproceedings{ park2023sequentially, title={Sequentially Adaptive Experimentation for Learning Optimal Options subject to Unobserved Contexts}, author={Hongju Park and Mohamad Kazem Shirani Faradonbeh}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=y7FZ6KXEvl} }
Contextual bandits constitute a classical framework for interactive learning of best decisions subject to context information. In this setting, the goal is to sequentially learn the arms of highest reward subject to the contextual information, while the unknown reward parameters of each arm need to be learned by experimenting with it. Accordingly, a fundamental problem is that of balancing such experimentation (i.e., pulling different arms to learn the parameters) versus sticking with the best arm learned so far, in order to maximize rewards. To study this problem, the existing literature mostly considers perfectly observed contexts. However, the setting of partially observed contexts remains unexplored to date, despite being theoretically more general and practically more versatile. We study bandit policies for learning to select optimal arms based on observations that are noisy linear functions of the unobserved context vectors. Our theoretical analysis shows that adaptive experiments based on samples from the posterior distribution efficiently learn optimal arms. Specifically, we establish regret bounds that grow logarithmically with time. Extensive simulations on real-world data are presented as well to illustrate this efficacy.
Sequentially Adaptive Experimentation for Learning Optimal Options subject to Unobserved Contexts
[ "Hongju Park", "Mohamad Kazem Shirani Faradonbeh" ]
Workshop/ReALML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
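The posterior-sampling policy the abstract analyzes follows the usual Thompson sampling template. A sketch for linear-Gaussian reward models (our simplification: we condition on the noisy observation directly, whereas the paper additionally accounts for the unobserved context behind it):

```python
import numpy as np

def thompson_select(obs, precisions, moments, sigma2=1.0, rng=None):
    """Thompson-sampling arm choice: sample each arm's parameter from its
    Gaussian posterior (precision matrix A, moment vector b per arm) and
    play the arm with the largest sampled reward for the observation."""
    rng = rng or np.random.default_rng()
    best_arm, best_val = 0, -np.inf
    for arm, (A, b) in enumerate(zip(precisions, moments)):
        cov = sigma2 * np.linalg.inv(A)               # posterior covariance
        theta = rng.multivariate_normal(np.linalg.inv(A) @ b, cov)
        val = obs @ theta                             # sampled reward
        if val > best_val:
            best_arm, best_val = arm, val
    return best_arm
```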
null
https://openreview.net/forum?id=xrz7hNsLNd
@inproceedings{ banerjee2023decentralized, title={Decentralized and Asynchronous Multi-Agent Active Search and Tracking when Targets Outnumber Agents}, author={Arundhati Banerjee and Jeff Schneider}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=xrz7hNsLNd} }
Multi-agent multi-target tracking has a wide range of applications, including wildlife patrolling, security surveillance or environment monitoring. Such algorithms often assume that agents are pre-assigned to monitor disjoint partitions of the environment, reducing the burden of exploration. This limits applicability when there are fewer agents than targets, since agents are unable to continuously follow the targets in their fields of view. Multi-agent tracking algorithms additionally assume a central controller and synchronous inter-agent communication. Instead, we focus on the setting of decentralized multi-agent, multi-target, simultaneous active search-*and*-tracking with asynchronous inter-agent communication. Our proposed algorithm DecSTER uses a sequential Monte Carlo implementation of the probability hypothesis density filter for posterior inference, combined with Thompson sampling for decentralized multi-agent decision making. We compare different action selection policies, focusing on scenarios where targets outnumber agents. In simulation, DecSTER outperforms baselines in terms of the Optimal Sub-Pattern Assignment (OSPA) metric for different numbers of targets and varying team sizes.
Decentralized and Asynchronous Multi-Agent Active Search and Tracking when Targets Outnumber Agents
[ "Arundhati Banerjee", "Jeff Schneider" ]
Workshop/ReALML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=xfj5jjpOaL
@inproceedings{ cook2023semiparametric, title={Semiparametric Efficient Inference in Adaptive Experiments}, author={Thomas Cook and Alan Mishler and Aaditya Ramdas}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=xfj5jjpOaL} }
We consider the problem of efficient inference of the Average Treatment Effect in a sequential experiment where the policy governing the assignment of subjects to treatment or control can change over time. We first provide a central limit theorem for the Adaptive Augmented Inverse-Probability Weighted estimator, which is semiparametric efficient, under weaker assumptions than those previously made in the literature. This central limit theorem enables efficient inference at fixed sample sizes. We then consider a sequential inference setting, deriving both asymptotic and nonasymptotic confidence sequences that are considerably tighter than previous methods. These anytime-valid methods enable inference under data-dependent stopping times (sample sizes). Additionally, we use propensity score truncation techniques from the recent off-policy estimation literature to reduce the finite sample variance of our estimator without affecting the asymptotic variance. Empirical results demonstrate that our methods yield narrower confidence sequences than those previously developed in the literature while maintaining time-uniform error control.
Semiparametric Efficient Inference in Adaptive Experiments
[ "Thomas Cook", "Alan Mishler", "Aaditya Ramdas" ]
Workshop/ReALML
2311.18274
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
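The estimator at the center of this abstract is standard enough to write down. A sketch of the AIPW scores and a CLT-based standard error (the paper's contribution is the adaptive-design analysis and the confidence sequences around this building block, not the formula itself):

```python
import numpy as np

def aipw_ate(y, a, e, mu1, mu0):
    """Augmented inverse-probability-weighted (AIPW) scores for the ATE.
      y        observed outcomes
      a        binary treatment indicators
      e        assignment propensity P(A=1 | history) at each round
      mu1,mu0  outcome-model predictions under treatment / control"""
    scores = (mu1 - mu0
              + a * (y - mu1) / e
              - (1 - a) * (y - mu0) / (1 - e))
    ate = scores.mean()
    se = scores.std(ddof=1) / np.sqrt(len(y))   # plug into a CLT-based CI
    return ate, se
```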
null
https://openreview.net/forum?id=wxpxPL3RkP
@inproceedings{ schachtsiek2023class, title={Class Balanced Dynamic Acquisition for Domain Adaptive Semantic Segmentation using Active Learning}, author={Marc Schachtsiek and Simone Rossi and Thomas Hannagan}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=wxpxPL3RkP} }
Domain adaptive active learning is leading the charge in label-efficient training of neural networks. For semantic segmentation, state-of-the-art models jointly use two criteria of uncertainty and diversity to select training labels, combined with a pixel-wise acquisition strategy. However, we show that such methods currently suffer from a class imbalance issue which degrades their performance for larger active learning budgets. We then introduce Class Balanced Dynamic Acquisition (CBDA), a novel active learning method that mitigates this issue, especially in high-budget regimes. The more balanced labels increase minority class performance, which in turn allows the model to outperform the previous baseline by 0.6, 1.7, and 2.4 mIoU for budgets of 5%, 10%, and 20%, respectively. Additionally, the focus on minority classes leads to improvements of the minimum class performance of 0.5, 2.9, and 4.6 IoU respectively. The top-performing model even exceeds the fully supervised baseline, showing that a more balanced label than the entire ground truth can be beneficial.
Class Balanced Dynamic Acquisition for Domain Adaptive Semantic Segmentation using Active Learning
[ "Marc Schachtsiek", "Simone Rossi", "Thomas Hannagan" ]
Workshop/ReALML
2311.14146
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
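The class-balancing idea can be sketched independently of the segmentation pipeline. Below, plain pixel uncertainty is reweighted so classes already well represented in the labeled budget are down-weighted (an illustration of the balancing principle only; CBDA's dynamic, budget-aware formulation is in the paper, and all names here are ours):

```python
import numpy as np

def class_balanced_scores(uncertainty, pred_class, labeled_counts):
    """Class-balanced pixel acquisition scores.
      uncertainty    per-pixel acquisition scores
      pred_class     per-pixel predicted class indices
      labeled_counts per-class counts of already-acquired labels"""
    w = 1.0 / np.maximum(labeled_counts, 1)   # rare classes weigh more
    w *= len(w) / w.sum()                     # normalize to mean 1
    return uncertainty * w[pred_class]
```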
null
https://openreview.net/forum?id=wtDzsitgO8
@inproceedings{ poiani2023pure, title={Pure Exploration under Mediators{\textquoteright} Feedback}, author={Riccardo Poiani and Alberto Maria Metelli and Marcello Restelli}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=wtDzsitgO8} }
Stochastic multi-armed bandits are a sequential-decision-making framework, where, at each interaction step, the learner selects an arm and observes a stochastic reward. Within the context of best-arm identification (BAI) problems, the goal of the agent lies in finding the optimal arm, i.e., the one with the highest expected reward, as accurately and efficiently as possible. Nevertheless, the sequential interaction protocol of classical BAI problems, where the agent has complete control over the arm being pulled at each round, does not effectively model several decision-making problems of interest (e.g., off-policy learning, human feedback). For this reason, in this work, we propose a novel strict generalization of the classical BAI problem that we refer to as best-arm identification under mediators’ feedback (BAI-MF). More specifically, we consider the scenario in which the learner has access to a set of mediators, each of which selects the arms on the agent’s behalf according to a stochastic and possibly unknown policy. The mediator, then, communicates back to the agent the pulled arm together with the observed reward. In this setting, the agent’s goal lies in sequentially choosing which mediator to query to identify with high probability the optimal arm while minimizing the identification time, i.e., the sample complexity. To this end, we first derive and analyze a statistical lower bound on the sample complexity specific to our general mediator feedback scenario. Then, we propose a sequential decision-making strategy for discovering the best arm; as our theory verifies, this algorithm matches the lower bound both almost surely and in expectation.
Pure Exploration under Mediators’ Feedback
[ "Riccardo Poiani", "Alberto Maria Metelli", "Marcello Restelli" ]
Workshop/ReALML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
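As a toy illustration of the best-arm identification under mediators' feedback setting from the record above, the sketch below tracks empirical arm means and repeatedly queries the mediator whose (here known) policy is most likely to pull the arm it currently wants sampled. This greedy rule is a stand-in assumption, not the paper's lower-bound-matching strategy.

```python
import numpy as np

rng = np.random.default_rng(1)
true_means = np.array([0.2, 0.5, 0.45, 0.3])
# Each mediator pulls arms according to its own fixed (here: known) policy.
mediator_policies = np.array([[0.70, 0.10, 0.10, 0.10],
                              [0.10, 0.40, 0.40, 0.10],
                              [0.25, 0.25, 0.25, 0.25]])

counts = np.zeros(4)
sums = np.zeros(4)

def target_arm():
    """Heuristic target: the less-sampled of the two empirically best arms."""
    means = sums / np.maximum(counts, 1)
    top2 = np.argsort(-means)[:2]
    return top2[np.argmin(counts[top2])]

for t in range(2000):
    if t < 20:                       # forced exploration rounds
        m = t % 3
    else:
        a = target_arm()
        # Query the mediator most likely to pull the arm we want sampled.
        m = int(np.argmax(mediator_policies[:, a]))
    arm = rng.choice(4, p=mediator_policies[m])   # mediator picks the arm
    reward = rng.normal(true_means[arm], 0.1)     # observe (arm, reward)
    counts[arm] += 1
    sums[arm] += reward

print("estimated best arm:", np.argmax(sums / np.maximum(counts, 1)))
```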
null
https://openreview.net/forum?id=wXCqXdKaO8
@inproceedings{ ramesh2023distributionally, title={{DISTRIBUTIONALLY} {ROBUST} {MODEL}-{BASED} {REINFORCEMENT} {LEARNING} {WITH} {LARGE} {STATE} {SPACES}}, author={Shyam Sundhar Ramesh and Pier Giuseppe Sessa and Yifan Hu and Andreas Krause and Ilija Bogunovic}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=wXCqXdKaO8} }
Three major challenges in reinforcement learning are the complex dynamical systems with large state spaces, the costly data acquisition processes, and the deviation of real-world dynamics from the training environment at deployment. To overcome these issues, we study distributionally robust Markov decision processes with continuous state spaces under the widely used Kullback–Leibler, chi-square, and total variation uncertainty sets. We propose a model-based approach that utilizes Gaussian Processes and the maximum variance reduction algorithm to efficiently learn multi-output nominal transition dynamics, leveraging access to a generative model (i.e., simulator). We further demonstrate the statistical sample complexity of the proposed method for different uncertainty sets. These complexity bounds are independent of the number of states and extend beyond linear dynamics, ensuring the effectiveness of our approach in identifying near-optimal distributionally-robust policies. The proposed method can be further combined with other model-free distributionally robust reinforcement learning methods to obtain a near-optimal robust policy. Experimental results demonstrate the robustness of our algorithm to distributional shifts and its superior performance in terms of the number of samples needed.
DISTRIBUTIONALLY ROBUST MODEL-BASED REINFORCEMENT LEARNING WITH LARGE STATE SPACES
[ "Shyam Sundhar Ramesh", "Pier Giuseppe Sessa", "Yifan Hu", "Andreas Krause", "Ilija Bogunovic" ]
Workshop/ReALML
2309.02236
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=vbPRv4KwfG
@inproceedings{ ament2023sustainable, title={Sustainable Concrete via Bayesian Optimization}, author={Sebastian Ament and Andrew Christopher Witte and Nishant Garg and Julius Kusuma}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=vbPRv4KwfG} }
Eight percent of global carbon dioxide emissions can be attributed to the production of cement, the main component of concrete, which is also the dominant source of CO2 emissions in the construction of data centers. The discovery of lower-carbon concrete formulae is therefore of high significance for sustainability. However, experimenting with new concrete formulae is time-consuming and labor-intensive, as one usually has to wait to record the concrete’s 28-day compressive strength, a quantity whose measurement by definition cannot be accelerated. This provides an opportunity for experimental design methodology like Bayesian Optimization (BO) to accelerate the search for strong and sustainable concrete formulae. Herein, we 1) propose modeling steps that make concrete strength amenable to be predicted accurately by a Gaussian process model with relatively few measurements, 2) formulate the search for sustainable concrete as a multi-objective optimization problem, and 3) leverage the proposed model to carry out multi-objective BO with real-world strength measurements of the algorithmically proposed mixes. Our experimental results show improved trade-offs between the mixtures’ global warming potential (GWP) and their associated compressive strengths, compared to mixes based on current industry practices. Our methods are open-sourced at github.com/facebookresearch/SustainableConcrete.
Sustainable Concrete via Bayesian Optimization
[ "Sebastian Ament", "Andrew Christopher Witte", "Nishant Garg", "Julius Kusuma" ]
Workshop/ReALML
2310.18288
[ "https://github.com/facebookresearch/sustainableconcrete" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
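A minimal sketch of the multi-objective selection loop underlying the concrete study above: extract the Pareto front over (strength, negative GWP) and propose the next mix by random scalarization. The synthetic data and the inverse-distance surrogate standing in for a Gaussian process are assumptions of this sketch, not the authors' open-sourced code.

```python
import numpy as np

def pareto_mask(Y):
    """Non-dominated mask, all objectives maximized. Y: (n, m)."""
    mask = np.ones(len(Y), dtype=bool)
    for i in range(len(Y)):
        dominates_i = np.all(Y >= Y[i], axis=1) & np.any(Y > Y[i], axis=1)
        if dominates_i.any():
            mask[i] = False
    return mask

rng = np.random.default_rng(0)
X = rng.random((40, 5))                    # evaluated mix proportions (toy)
strength = X @ np.array([3.0, 1.0, 0.5, 0.2, 0.1]) + rng.normal(0, 0.1, 40)
gwp = X @ np.array([2.5, 0.3, 0.1, 0.1, 0.05]) + rng.normal(0, 0.1, 40)
Y = np.column_stack([strength, -gwp])      # maximize strength, minimize GWP

front = pareto_mask(Y)
print(f"{front.sum()} non-dominated mixes out of {len(Y)}")

# Random-scalarization proposal over a candidate pool, using a crude
# inverse-distance surrogate in place of a Gaussian process.
cands = rng.random((500, 5))
w = rng.dirichlet(np.ones(2))              # random trade-off between objectives
d = np.linalg.norm(cands[:, None, :] - X[None, :, :], axis=-1) + 1e-6
pred = (1 / d) @ Y / (1 / d).sum(axis=1, keepdims=True)
best = cands[np.argmax(pred @ w)]
print("next suggested mix:", np.round(best, 3))
```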
null
https://openreview.net/forum?id=vBwfTUDTtz
@inproceedings{ matsuura2023active, title={Active Model Selection: A Variance Minimization Approach}, author={Mitsuru Matsuura and Satoshi Hara}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=vBwfTUDTtz} }
The cost of labeling is a significant challenge in practical machine learning. This issue arises not only during the learning phase but also at the model evaluation phase, as there is a need for a substantial amount of labeled test data in addition to the training data. In this study, we address the challenge of active model selection with the goal of minimizing labeling costs for choosing the best-performing model from a set of model candidates. Based on an appropriate test loss estimator, we propose an adaptive labeling strategy that can estimate the difference of test losses with small variance, thereby enabling the estimation of the best model at lower labeling cost. Experimental results on real-world datasets confirm that our method efficiently selects the best model.
Active Model Selection: A Variance Minimization Approach
[ "Mitsuru Matsuura", "Satoshi Hara" ]
Workshop/ReALML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
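The variance-minimization idea in the record above can be illustrated with a Horvitz–Thompson estimate of the loss gap between two models, sampling labels preferentially where the models disagree. The disagreement-proportional proposal below is an illustrative choice, not necessarily the paper's optimal-variance design.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5000
p1 = rng.random(N)                                  # model 1's P(y=1) on the pool
p2 = np.clip(p1 + rng.normal(0, 0.15, N), 0, 1)     # model 2's predictions
y = (rng.random(N) < 0.5).astype(float)             # hidden labels (the oracle)

def loss(p, y):
    """Per-point squared loss."""
    return (p - y) ** 2

# Variance-reducing proposal: sample where the models' losses can differ most.
score = np.abs(p1 - p2) + 1e-3
q = score / score.sum()

budget = 300
idx = rng.choice(N, size=budget, replace=True, p=q)
# Horvitz-Thompson estimate of the mean loss difference (unbiased since q > 0).
diff = (loss(p1[idx], y[idx]) - loss(p2[idx], y[idx])) / (N * q[idx])
est = diff.sum() / budget
true = (loss(p1, y) - loss(p2, y)).mean()
print(f"estimated loss gap {est:+.4f} vs. true gap {true:+.4f}")
print("pick model", 1 if est < 0 else 2)
```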
null
https://openreview.net/forum?id=utVPf9dRgy
@inproceedings{ shao2023preferenceguided, title={Preference-Guided Bayesian Optimization for Control Policy Learning: Application to Personalized Plasma Medicine}, author={Ketong Shao and Diego Romeres and Ankush Chakrabarty and Ali Mesbah}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=utVPf9dRgy} }
This paper investigates the adaptation of control policies for personalized dose delivery in plasma medicine using preference-learning-based Bayesian optimization. Preference learning empowers users to incorporate their preferences or domain expertise during the exploration of optimal control policies, which often results in fast attainment of personalized treatment outcomes. We establish that, compared to multi-objective Bayesian optimization (BO), preference-guided BO offers statistically faster convergence and computes solutions that better reflect user preferences. Moreover, it enables users to actively provide feedback during the policy search procedure, which helps to focus the search in sub-regions of the search space likely to contain preferred local optima. Our findings highlight the suitability of preference-learning-based BO for adapting control policies in plasma treatments, where both user preferences and swift convergence are of paramount importance.
Preference-Guided Bayesian Optimization for Control Policy Learning: Application to Personalized Plasma Medicine
[ "Ketong Shao", "Diego Romeres", "Ankush Chakrabarty", "Ali Mesbah" ]
Workshop/ReALML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
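Preference learning of the kind used in the record above is often built on a Bradley–Terry choice model. The sketch below fits a linear utility from pairwise comparisons by gradient ascent on the Bradley–Terry log-likelihood; the linear utility and the synthetic duels are simplifying assumptions (the paper works with richer preference models inside a BO loop).

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3
w_true = np.array([1.0, -2.0, 0.5])          # hidden user utility weights

# Pairwise comparisons: the user prefers the candidate with higher (noisy)
# utility; a duel (i, j) records that i was preferred over j.
X = rng.normal(size=(60, d))                 # candidate policy parameterizations
pairs = [(i, j) for i in range(60) for j in range(i + 1, 60)]
rng.shuffle(pairs)
duels = []
for i, j in pairs[:200]:
    ui, uj = X[i] @ w_true, X[j] @ w_true
    p_i = 1 / (1 + np.exp(-(ui - uj)))       # Bradley-Terry choice probability
    duels.append((i, j) if rng.random() < p_i else (j, i))

# Fit utility weights by gradient ascent on the Bradley-Terry log-likelihood.
w = np.zeros(d)
for _ in range(500):
    g = np.zeros(d)
    for win, lose in duels:
        z = X[win] - X[lose]
        g += z * (1 - 1 / (1 + np.exp(-(w @ z))))   # d/dw of log sigmoid(w.z)
    w += 0.01 * g / len(duels)

print("recovered direction:", np.round(w / np.linalg.norm(w), 2))
print("true direction:     ", np.round(w_true / np.linalg.norm(w_true), 2))
```

The fitted direction only roughly recovers the hidden weights here; in practice the learned utility would drive the acquisition of the next duel.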
null
https://openreview.net/forum?id=u6NK3Jm4ka
@inproceedings{ wyrwal2023residual, title={Residual Deep Gaussian Processes on Manifolds for Geometry-aware Bayesian Optimization on Hyperspheres}, author={Kacper Wyrwal and Viacheslav Borovitskiy}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=u6NK3Jm4ka} }
Gaussian processes (GPs) are a widely-used model class for approximating unknown functions, especially useful in tasks such as Bayesian optimisation, where accurate uncertainty estimates are key. Deep Gaussian processes (DGPs) are a multi-layered generalisation of GPs, which promises improved performance at modelling complex functions. Some of the problems where GPs and DGPs may be utilised involve data on manifolds like hyperspheres. Recent work has recognised this, generalising scalar-valued and vector-valued Matérn GPs to a broad class of Riemannian manifolds. Despite that, an appropriate analogue of DGP for Riemannian manifolds is missing. We introduce a new model, residual manifold DGP, and a suitable doubly stochastic variational inference technique that helps train and deploy it on hyperspheres. Through examination on stylised examples, we highlight the usefulness of residual deep manifold GPs on regression tasks and in Bayesian optimisation.
Residual Deep Gaussian Processes on Manifolds for Geometry-aware Bayesian Optimization on Hyperspheres
[ "Kacper Wyrwal", "Viacheslav Borovitskiy" ]
Workshop/ReALML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=u2eV6JA0nY
@inproceedings{ shrestha2023exploratory, title={Exploratory Training: When Annotators Learn About Data}, author={Rajesh Shrestha and Omeed Habibelahian and Arash Termehchy and Paolo Papotti}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=u2eV6JA0nY} }
ML systems often present examples and solicit labels from users to learn a target model, i.e., active learning. However, due to the complexity of the underlying data, users may not initially have a perfect understanding of the effective model and may not know the accurate labels. For example, a user who is training a model for detecting noisy or abnormal values may not perfectly know the properties of typical and clean values in the data. Users may improve their knowledge about the data and target model as they observe examples during training. As users gradually learn about the data and model, they may revise their labeling strategies. Current systems assume that users always provide correct labeling with potentially a fixed and small chance of annotation mistakes. Nonetheless, if the trainer revises its belief during training, such mistakes become significant and non-stationary. Hence, current systems consume incorrect labels and may learn inaccurate models. In this paper, we build theoretical underpinnings and design algorithms to develop systems that collaborate with users to learn the target model accurately and efficiently. At the core of our proposal, a game-theoretic framework models the joint learning of user and system to reach a desirable eventual stable state, where both user and system share the same belief about the target model. We extensively evaluate our system using user studies over various real-world datasets and show that our algorithms lead to accurate results with a smaller number of interactions compared to existing methods.
Exploratory Training: When Annotators Learn About Data
[ "Rajesh Shrestha", "Omeed Habibelahian", "Arash Termehchy", "Paolo Papotti" ]
Workshop/ReALML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=t3PzfH98Mq
@inproceedings{ wang-henderson2023graph, title={Graph Neural Bayesian Optimization for Virtual Screening}, author={Miles Wang-Henderson and Bartu Soyuer and Parnian Kassraie and Andreas Krause and Ilija Bogunovic}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=t3PzfH98Mq} }
Virtual screening is an essential component of early-stage drug and materials discovery. This is challenged by the increasingly intractable size of virtual libraries and the high cost of evaluating properties. We propose GNN-SS, a Graph Neural Network (GNN) powered Bayesian Optimization (BO) algorithm. GNN-SS utilizes random sub-sampling to reduce the computational complexity of the BO problem, and diversifies queries for training the model. We further introduce data-independent projections to efficiently model second-order random feature interactions, and improve uncertainty estimates. GNN-SS is computationally light, sample-efficient, and rapidly narrows the search space by leveraging the generalization ability of GNNs. Our algorithm achieves state-of-the-art performance among screening methods for the Practical Molecular Optimization benchmark.
Graph Neural Bayesian Optimization for Virtual Screening
[ "Miles Wang-Henderson", "Bartu Soyuer", "Parnian Kassraie", "Andreas Krause", "Ilija Bogunovic" ]
Workshop/ReALML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=sTkValZrOS
@inproceedings{ audiffren2023zooming, title={Zooming Optimistic Optimization Method to solve the Threshold Estimation Problem}, author={Julien Audiffren}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=sTkValZrOS} }
This paper introduces a new global optimization algorithm that solves the threshold estimation problem. In this active learning problem, which underlies many empirical neuroscience and psychophysics experiments, the objective is to estimate the input values that would produce the desired output value from an unknown, noisy, non-decreasing response function. Compared to previous approaches, ZOOM (Zooming Optimistic Optimization Method) offers the best of both worlds: ZOOM is model-agnostic and benefits from stronger theoretical guarantees and a faster convergence rate, but also quickly jumps between arms, offering strong performance even for small sampling budgets.
Zooming Optimistic Optimization Method to solve the Threshold Estimation Problem
[ "Julien Audiffren" ]
Workshop/ReALML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=qwrnFONObA
@inproceedings{ mcinerney2023hessianfree, title={Hessian-Free Laplace in Bayesian Deep Learning}, author={James McInerney and Nathan Kallus}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=qwrnFONObA} }
The Laplace approximation (LA) of the Bayesian posterior is a Gaussian distribution centered at the maximum a posteriori estimate. Its appeal in Bayesian deep learning stems from the ability to quantify uncertainty post-hoc (i.e., after standard network parameter optimization), the ease of sampling from the approximate posterior, and the analytic form of model evidence. Uncertainty in turn can direct experimentation. However, an important computational bottleneck of LA is the necessary step of calculating and inverting the Hessian matrix of the log posterior. The Hessian may be approximated in a variety of ways, with quality varying with a number of factors including the network, dataset, and inference task. In this paper, we propose an alternative algorithm that sidesteps Hessian calculation and inversion. The Hessian-free Laplace (HFL) approximation uses curvature of both the log posterior and network prediction to estimate its variance. Two point estimates are required: the standard maximum a posteriori parameters and the optimal parameter under a loss regularized by the network prediction. We show that under standard assumptions of LA in Bayesian deep learning, HFL targets the same variance as LA, and this is empirically explored in small-scale simulated experiments comparing against the exact Hessian.
Hessian-Free Laplace in Bayesian Deep Learning
[ "James McInerney", "Nathan Kallus" ]
Workshop/ReALML
2403.10671
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
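The two-point-estimate mechanic behind Hessian-free Laplace can be sketched directly: train a MAP network, then fine-tune a copy with the loss nudged by minus a small epsilon times the prediction at a query point, and use the scaled prediction shift as a variance proxy. This is a rough finite-difference illustration under stated assumptions, not the paper's exact estimator or constants.

```python
import copy
import torch

torch.manual_seed(0)
X = torch.linspace(-2, 2, 40).unsqueeze(-1)
y = torch.sin(2 * X) + 0.1 * torch.randn_like(X)

def make_net():
    return torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                               torch.nn.Linear(32, 1))

def train(net, extra_term=None, steps=2000):
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((net(X) - y) ** 2).mean()
        loss = loss + 1e-4 * sum((p ** 2).sum() for p in net.parameters())
        if extra_term is not None:
            loss = loss + extra_term(net)   # the epsilon-scaled nudge
        loss.backward()
        opt.step()
    return net

x_query = torch.tensor([[1.5]])
eps = 1e-2

map_net = train(make_net())
# Second point estimate: warm-start from MAP, nudge the loss downward by
# eps * prediction at the query so the optimum shifts along H^{-1} g.
pert_net = train(copy.deepcopy(map_net),
                 extra_term=lambda n: -eps * n(x_query).squeeze(),
                 steps=1000)
var_hfl = (pert_net(x_query) - map_net(x_query)).item() / eps
print(f"HFL-style variance proxy at x=1.5: {var_hfl:.4f}")  # can be noisy
```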
null
https://openreview.net/forum?id=obBbfvg5d0
@inproceedings{ khajehnejad2023on, title={On Complex Network Dynamics of an In-Vitro Neuronal System during Rest and Gameplay}, author={Moein Khajehnejad and Forough Habibollahi and Alon Loeffler and Brett Joseph Kagan and Adeel Razi}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=obBbfvg5d0} }
In this study, we characterize complex network dynamics in live in vitro neuronal systems during two distinct activity states: a spontaneous rest state and engagement in a real-time (closed-loop) game environment using the DishBrain system. First, we embed the spiking activity of the recording channels in a lower-dimensional space using various representation learning methods and then extract a subset of representative channels. Next, by analyzing these low-dimensional representations, we explore the patterns of macroscopic neuronal network dynamics during learning. Remarkably, our findings indicate that just using the low-dimensional embedding of representative channels is sufficient to differentiate the neuronal culture during Rest and Gameplay. Notably, our investigation shows dynamic changes in the connectivity patterns within the same region and across multiple regions on the multi-electrode array only during Gameplay. These findings underscore the plasticity of neuronal networks in response to external stimuli and highlight the potential for modulating connectivity in a controlled environment. The ability to distinguish between neuronal states using reduced-dimensional representations points to the presence of underlying patterns that could be pivotal for real-time monitoring and manipulation of neuronal cultures. Additionally, this provides insight into how biologically based information-processing systems rapidly adapt and learn, and may lead to new, improved algorithms.
On Complex Network Dynamics of an In-Vitro Neuronal System during Rest and Gameplay
[ "Moein Khajehnejad", "Forough Habibollahi", "Alon Loeffler", "Brett Joseph Kagan", "Adeel Razi" ]
Workshop/ReALML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=n9zR0sMY4c
@inproceedings{ vishwakarma2023humanintheloop, title={Human-in-the-Loop Out-of-Distribution Detection with False Positive Rate Control}, author={Harit Vishwakarma and Heguang Lin and Ramya Vinayak}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=n9zR0sMY4c} }
Robustness to Out-of-Distribution (OOD) samples is essential for successful deployment of machine learning models in the open world. Since it is not possible to have a priori access to a variety of OOD data before deployment, several recent works have focused on designing scoring functions to quantify OOD uncertainty. These methods often find a threshold that achieves 95% true positive rate (TPR) on the In-Distribution (ID) data used for training and use this threshold for detecting OOD samples. However, this can lead to a very high false positive rate (FPR): as seen in a comprehensive evaluation on the Open-OOD benchmark, the FPR can range between 60% and 96% on several ID and OOD dataset combinations. In contrast, practical systems deal with a variety of OOD samples on the fly, and critical applications, e.g., medical diagnosis, demand guaranteed control of the FPR. To meet these challenges, we propose a mathematically grounded framework for human-in-the-loop OOD detection, wherein expert feedback is used to update the threshold. This allows the system to adapt to variations in the OOD data while adhering to the quality constraints. We propose an algorithm that uses anytime-valid confidence intervals based on the Law of Iterated Logarithm (LIL). Our theoretical results show that the system meets FPR constraints while minimizing the human feedback for points that are in-distribution. Another key feature of the system is that it can work with any existing post-hoc OOD uncertainty-quantification method. We evaluate our system empirically on a mixture of benchmark OOD datasets in an image classification task, with CIFAR-10 and CIFAR-100 as in-distribution datasets, and show that our method can maintain FPR at most 5% while maximizing TPR.
Human-in-the-Loop Out-of-Distribution Detection with False Positive Rate Control
[ "Harit Vishwakarma", "Heguang Lin", "Ramya Vinayak" ]
Workshop/ReALML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
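A stylized version of the human-in-the-loop threshold update described in the record above: flagged points are audited by an expert, the empirical false positive rate gets an anytime (LIL-flavored) confidence radius, and the threshold is raised whenever the upper bound exceeds the budget. The stream here is purely in-distribution and the constants in the radius are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.05                               # false-positive-rate budget

def lil_radius(n, delta=0.01):
    """Anytime confidence radius in the spirit of the law of the iterated
    logarithm; the constants are illustrative, not the paper's."""
    n = max(n, 2)
    return np.sqrt(1.5 * np.log(np.log(n) / delta + 1) / n)

scores_id = rng.normal(0, 1, 20_000)       # OOD scores of in-distribution points
threshold = np.quantile(scores_id[:100], 0.95)   # naive initial threshold

n_seen, n_false = 0, 0
for t in range(5_000):                     # stream of in-distribution points
    s = scores_id[100 + t]
    n_seen += 1
    if s > threshold:                      # flagged as OOD -> human audits it
        n_false += 1                       # expert confirms it was actually ID
        fpr_ucb = n_false / n_seen + lil_radius(n_seen)
        if fpr_ucb > alpha:                # bound violated: be more conservative
            threshold += 0.05
print(f"final threshold {threshold:.2f}, false flags {n_false}/{n_seen}")
```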
null
https://openreview.net/forum?id=kPWO1v0slD
@inproceedings{ blau2023crossentropy, title={Cross-Entropy Estimators for Sequential Experiment Design with Reinforcement Learning}, author={Tom Blau and Iadine Chades and Amir Dezfouli and Daniel M Steinberg and Edwin V. Bonilla}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=kPWO1v0slD} }
Reinforcement learning can learn amortised design policies for designing sequences of experiments. However, current methods rely on contrastive estimators of expected information gain, which require an exponential number of contrastive samples to achieve an unbiased estimation. We propose the use of an alternative lower bound estimator, based on the cross-entropy of the joint model distribution and a flexible proposal distribution. This proposal distribution approximates the true posterior of the model parameters given the experimental history and the design policy. Our method requires no contrastive samples, can achieve more accurate estimates of high information gains, allows learning of superior design policies, and is compatible with implicit probabilistic models. We assess our algorithm's performance in various tasks, including continuous and discrete designs and explicit and implicit likelihoods.
Cross-Entropy Estimators for Sequential Experiment Design with Reinforcement Learning
[ "Tom Blau", "Iadine Chades", "Amir Dezfouli", "Daniel M Steinberg", "Edwin V. Bonilla" ]
Workshop/ReALML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=k2kBjcKVal
@inproceedings{ char2023correlated, title={Correlated Trajectory Uncertainty for Adaptive Sequential Decision Making}, author={Ian Char and Youngseog Chung and Rohan Shah and Willie Neiswanger and Jeff Schneider}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=k2kBjcKVal} }
One of the great challenges with decision-making tasks on real-world systems is the fact that data is sparse and acquiring additional data is expensive. In these cases, it is often crucial to make a model of the environment to assist in making decisions. At the same time, limited data means that learned models are erroneous, making it just as important to equip the model with good predictive uncertainties. In the context of learning sequential decision-making policies, these uncertainties can prove useful for informing which data to collect for the greatest improvement in policy performance \citep{mehta2021experimental, mehta2022exploration} or informing the policy about uncertain regions of state and action space to avoid during test time \citep{yu2020mopo}. Additionally, assuming that realistic samples of the environment can be drawn, an adaptable policy can be trained that attempts to make optimal decisions for any given possible instance of the environment \citep{ghosh2022offline, chen2021offline}. In this work, we examine the so-called ``probabilistic neural network'' (PNN) model that is ubiquitous in model-based reinforcement learning (MBRL) works. We argue that while PNN models may have good marginal uncertainties, they form a distribution of non-smooth transition functions. Not only are these samples unrealistic, which may hamper adaptability, but we also assert that this leads to poor uncertainty estimates when predicting multi-step trajectories. To address this issue, we propose a simple sampling method that can be implemented on top of pre-existing models. We evaluate our sampling technique on a number of environments, including a realistic nuclear fusion task, and find that, not only do smooth transition function samples produce more calibrated uncertainties, but they also lead to better downstream performance for an adaptive policy.
Correlated Trajectory Uncertainty for Adaptive Sequential Decision Making
[ "Ian Char", "Youngseog Chung", "Rohan Shah", "Willie Neiswanger", "Jeff Schneider" ]
Workshop/ReALML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
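The contrast between i.i.d. per-step sampling from a probabilistic network and trajectory-correlated sampling can be seen in a few lines. The dynamics model below is a hand-written stand-in for a learned PNN, and reusing one noise draw per rollout is only one simple way to obtain smooth transition-function samples; the paper's method may differ in its details.

```python
import numpy as np

rng = np.random.default_rng(0)

def pnn(s, a):
    """Stand-in for a learned probabilistic network: returns the mean and
    standard deviation of the next state (1-D toy dynamics)."""
    mu = s + 0.1 * a - 0.01 * s ** 3
    sigma = 0.05 + 0.02 * np.abs(s)
    return mu, sigma

def rollout(s0, actions, correlated):
    s, traj = s0, [s0]
    z_traj = rng.normal()                  # one noise draw for the whole rollout
    for a in actions:
        mu, sigma = pnn(s, a)
        z = z_traj if correlated else rng.normal()   # i.i.d. per-step baseline
        s = mu + sigma * z
        traj.append(s)
    return np.array(traj)

actions = np.ones(50) * 0.5
iid = np.stack([rollout(0.0, actions, correlated=False) for _ in range(500)])
cor = np.stack([rollout(0.0, actions, correlated=True) for _ in range(500)])
print(f"50-step std, i.i.d. noise:      {iid[:, -1].std():.3f}")
print(f"50-step std, correlated noise:  {cor[:, -1].std():.3f}")
```

Because the correlated rollouts compound a consistent noise realization, their long-horizon spread is typically wider and better reflects uncertainty over entire transition functions.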
null
https://openreview.net/forum?id=juq0ZUWOoY
@inproceedings{ li2023efficient, title={Efficient and scalable reinforcement learning via Hypermodel}, author={Yingru Li and Jiawei Xu and Zhi-Quan Luo}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=juq0ZUWOoY} }
Data-efficient reinforcement learning (RL) requires deep exploration. Thompson sampling is a principled method for deep exploration in reinforcement learning. However, Thompson sampling needs to track the degree of uncertainty by maintaining the posterior distribution of models, which is computationally feasible only in simple environments with restrictive assumptions. A key problem in modern RL is how to develop data- and computation-efficient algorithms that are scalable to large-scale complex environments. We develop a principled framework, called HyperFQI, to tackle both the computation and data efficiency issues. HyperFQI can be regarded as approximate Thompson sampling for reinforcement learning based on a hypermodel. The hypermodel in this context serves the role of uncertainty estimation for the action-value function. HyperFQI demonstrates its ability for efficient and scalable deep exploration in the DeepSea benchmark with large state space. HyperFQI also achieves super-human performance on the Atari benchmark with 2M interactions at low computation cost. We also give a rigorous performance analysis for the proposed method, justifying its computation and data efficiency. To the best of our knowledge, this is the first principled RL algorithm that is provably efficient and also practically scalable to complex environments such as the Arcade Learning Environment, which requires deep networks for pixel-based control.
Efficient and scalable reinforcement learning via Hypermodel
[ "Yingru Li", "Jiawei Xu", "Zhi-Quan Luo" ]
Workshop/ReALML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=iK8FzJvQMH
@inproceedings{ novitasari2023alas, title={{ALAS}: Active Learning for Autoconversion Rates Prediction from Satellite Data}, author={Maria Carolina Novitasari and Johannes Quaas and Miguel R. D. Rodrigues}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=iK8FzJvQMH} }
High-resolution simulations, such as the ICOsahedral Non-hydrostatic Large-Eddy Model (ICON-LEM), provide valuable insights into the complex interactions among aerosols, clouds, and precipitation, which are the major contributors to climate change uncertainty. However, due to their exorbitant computational costs, such simulations can only be employed for a limited period and geographical area. To address this, we propose a more cost-effective method powered by an emerging machine learning approach to better understand the intricate dynamics of the climate system. Our approach involves active learning techniques -- by leveraging high-resolution climate simulation as the oracle and an abundant amount of unlabeled data drawn from satellite observations -- to predict autoconversion rates, a crucial step in precipitation formation, while significantly reducing the need for a large number of labeled instances. In this study, we present novel methods: custom query strategy fusion for labeling instances, WiFi and MeFi, along with active feature selection based on SHAP, designed to tackle real-world challenges through their simplicity and practicality in application, specifically focusing on the prediction of autoconversion rates.
ALAS: Active Learning for Autoconversion Rates Prediction from Satellite Data
[ "Maria Carolina Novitasari", "Johannes Quaas", "Miguel R. D. Rodrigues" ]
Workshop/ReALML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=hzJq3WVGd9
@inproceedings{ kang2023nearequivalence, title={Near-equivalence between bounded regret and delay robustness in interactive decision making}, author={Enoch H. Kang and Panganamala Kumar}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=hzJq3WVGd9} }
Interactive decision making, encompassing bandits, contextual bandits, and reinforcement learning, has recently been of interest to theoretical studies of experimentation design and recommender system algorithm research. Recently, it has been shown that the well-known Graves-Lai constant being zero is a necessary and sufficient condition for achieving bounded (or constant) regret in interactive decision making. As this condition may be a strong requirement for many applications, the practical usefulness of pursuing bounded regret has been questioned. In this paper, we show that the condition of the Graves-Lai constant being zero is also necessary to achieve delay model robustness when reward delays are unknown (i.e., when feedback is anonymous). Here, model robustness is measured in terms of $\epsilon$-robustness, one of the most widely used and one of the least adversarial robustness concepts in the robust statistics literature. In particular, we show that $\epsilon$-robustness cannot be achieved for a consistent (i.e., uniformly sub-polynomial regret) algorithm, however small the nonzero $\epsilon$ value is, when the Graves-Lai constant is not zero. While this is a strongly negative result, we also provide a positive result for linear reward models (linear contextual bandits, reinforcement learning with linear MDPs) that the Graves-Lai constant being zero is also sufficient for achieving bounded regret without any knowledge of delay models, i.e., the best of both the efficiency world and the delay robustness world.
Near-equivalence between bounded regret and delay robustness in interactive decision making
[ "Enoch H. Kang", "Panganamala Kumar" ]
Workshop/ReALML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=hrFfR1WZgi
@inproceedings{ savage2023expertguided, title={Expert-guided Bayesian Optimisation for Human-in-the-loop Experimental Design of Known Systems}, author={Tom Savage and Antonio Del rio chanona}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=hrFfR1WZgi} }
Domain experts often possess valuable physical insights that are overlooked in fully automated decision-making processes such as Bayesian optimisation. In this article we apply high-throughput (batch) Bayesian optimisation alongside anthropological decision theory to enable domain experts to influence the selection of optimal experiments. Our methodology exploits the hypothesis that humans are better at making discrete choices than continuous ones and enables experts to influence critical early decisions. At each iteration we solve an augmented multi-objective optimisation problem across a number of alternate solutions, maximising both the sum of their utility function values and the determinant of their covariance matrix, equivalent to their total variability. By taking the solution at the knee point of the Pareto front, we return a set of alternate solutions at each iteration that have both high utility values and are reasonably distinct, from which the expert selects one for evaluation. We demonstrate that even in the case of an uninformed practitioner, our algorithm recovers the regret of standard Bayesian optimisation.
Expert-guided Bayesian Optimisation for Human-in-the-loop Experimental Design of Known Systems
[ "Tom Savage", "Antonio Del rio chanona" ]
Workshop/ReALML
2312.02852
[ "https://github.com/trsav/hitl-bo" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
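One way to realize the "high total utility plus high diversity" alternates described in the record above is a greedy trade-off between summed UCB values and the log-determinant of the candidates' kernel covariance. The toy posterior, the RBF length-scale, and the greedy approximation (in place of solving the full multi-objective problem and taking the knee point) are all assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(A, B, ls=0.2):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ls ** 2))

# Pretend these came from a fitted GP surrogate over past experiments.
cands = rng.random((2000, 2))
mu = np.sin(3 * cands[:, 0]) + cands[:, 1]          # posterior mean (toy)
sigma = 0.3 * np.ones(len(cands))                   # posterior std (toy)
ucb = mu + 2.0 * sigma

def greedy_alternates(k=4, lam=1.0):
    """Greedily trade off total utility against log-det diversity."""
    chosen = [int(np.argmax(ucb))]
    for _ in range(k - 1):
        best, best_val = None, -np.inf
        for i in range(len(cands)):
            if i in chosen:
                continue
            S = chosen + [i]
            K = rbf(cands[S], cands[S]) + 1e-6 * np.eye(len(S))
            val = ucb[S].sum() + lam * np.linalg.slogdet(K)[1]
            if val > best_val:
                best, best_val = i, val
        chosen.append(best)
    return chosen

print("present these alternates to the expert:")
for i in greedy_alternates():
    print(np.round(cands[i], 3), f"ucb = {ucb[i]:.3f}")
```

The expert then picks one alternate for evaluation, exploiting the hypothesis that discrete choices are easier for humans than continuous ones.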
null
https://openreview.net/forum?id=gkChsof0Rg
@inproceedings{ ochiai2023active, title={Active Testing of Binary Classification Model Using Level Set Estimation}, author={Takuma Ochiai and Keiichiro Seno and Kota Matsui and Satoshi Hara}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=gkChsof0Rg} }
In this study, we propose a method for estimating the test loss of a binary classification model with minimal labeling of the test data. The central idea of the proposed method is to reduce the problem of test loss estimation to the problem of level set estimation for the loss function. This reduction allows us to achieve sequential test loss estimation through iterative labeling using active learning methods for level set estimation. Through experiments on multiple datasets, we confirmed that the proposed method is effective for evaluating binary classification models and allows for test loss estimation with fewer labeled samples compared to existing methods.
Active Testing of Binary Classification Model Using Level Set Estimation
[ "Takuma Ochiai", "Keiichiro Seno", "Kota Matsui", "Satoshi Hara" ]
Workshop/ReALML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
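The reduction from test-loss estimation to level set estimation suggests an LSE-style acquisition: label the points whose loss is most ambiguous relative to the level threshold. Below is a caricature with independent per-point uncertainties standing in for a proper GP posterior; the paper's approach shares information across points, which this sketch deliberately omits.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 2000
margin = rng.normal(0, 1, N)              # model's signed confidence per point
true_loss = (margin < 0).astype(float)    # hidden 0/1 loss (oracle labels)

# Crude per-point belief over the loss: mean from a sigmoid of the margin,
# with a constant std that collapses once a point is labeled.
mu = 1 / (1 + np.exp(4 * margin))
sig = 0.3 * np.ones(N)
threshold, beta = 0.5, 1.96               # level set: is the loss above 0.5?

for _ in range(200):
    ambiguity = beta * sig - np.abs(mu - threshold)   # LSE-style acquisition
    i = int(np.argmax(ambiguity))
    mu[i], sig[i] = true_loss[i], 0.0      # "label" the point: uncertainty gone

print(f"estimated test loss {mu.mean():.3f} vs. true {true_loss.mean():.3f}")
```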
null
https://openreview.net/forum?id=ePglZTbdeI
@inproceedings{ yin2023nonparametric, title={Nonparametric Discrete Choice Experiments with Machine Learning Guided Adaptive Design}, author={Mingzhang Yin and Ruijiang Gao and Weiran Lin and Steven M. Shugan}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=ePglZTbdeI} }
Designing products to meet consumers' preferences is essential for a business's success. We propose Gradient-based Survey (GBS), a discrete choice experiment for multiattribute product design. The experiment elicits consumer preferences through a sequence of paired comparisons for partial profiles. GBS adaptively constructs paired comparison questions based on the respondents' previous choices. Unlike the traditional random utility maximization paradigm, GBS is robust to model misspecification by not requiring a parametric utility model. Cross-pollinating machine learning and experimental design, GBS is scalable to products with hundreds of attributes and can design personalized products for heterogeneous consumers. We demonstrate the advantage of GBS in accuracy and sample efficiency compared to existing parametric and nonparametric methods in simulations.
Nonparametric Discrete Choice Experiments with Machine Learning Guided Adaptive Design
[ "Mingzhang Yin", "Ruijiang Gao", "Weiran Lin", "Steven M. Shugan" ]
Workshop/ReALML
2310.12026
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=eLIk3m5C79
@inproceedings{ bakker2023active, title={Active Learning Policies for Solving Inverse Problems}, author={Tim Bakker and Thomas Hehn and Tribhuvanesh Orekondy and Arash Behboodi and Fabio Valerio Massoli}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=eLIk3m5C79} }
In recent years, solving inverse problems for black-box simulators has become a point of focus for the machine learning community due to their ubiquity in science and engineering scenarios. In such settings, the simulator describes a forward process $f: (\psi, x) \rightarrow y$ from simulator parameters $\psi$ and input data $x$ to observations $y$, and the goal of the inverse problem is to optimise $\psi$ to minimise some observation loss. Simulator gradients are often unavailable or prohibitively expensive to obtain, making optimisation of these simulators particularly challenging. Moreover, in many applications, the goal is to solve a family of related inverse problems. Thus, starting optimisation ab-initio/from-scratch may be infeasible if the forward model is expensive to evaluate. In this paper, we propose a novel method for solving classes of similar inverse problems. We learn an active learning policy that guides the training of a surrogate and use the gradients of this surrogate to optimise the simulator parameters with gradient descent. After training the policy, downstream inverse problem optimisations require up to 90\% fewer forward model evaluations than the baseline.
Active Learning Policies for Solving Inverse Problems
[ "Tim Bakker", "Thomas Hehn", "Tribhuvanesh Orekondy", "Arash Behboodi", "Fabio Valerio Massoli" ]
Workshop/ReALML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=eHTXFqa7pl
@inproceedings{ chen2023physicsenhanced, title={Physics-Enhanced Multi-fidelity Learning for Optical Surface Imprint}, author={Yongchao Chen}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=eHTXFqa7pl} }
Human fingerprints serve as a unique and powerful characteristic of each person, from which the police can recognize one's identity. Similarly, many natural bodies and intrinsic mechanical qualities can also be uniquely identified from surface characteristics. To measure the elasto-plastic properties of a material, a sharp indenter is pushed into the measured body under constant force and then retracted, leaving a unique residual imprint of minute size, from several micrometers down to nanometers. However, one great challenge is how to map the optical image of this residual imprint to the desired mechanical properties, i.e., the tensile force curve. In this paper, we propose a novel method that uses multi-fidelity neural networks (MFNN) to solve this inverse problem. We first actively train the NN model via pure simulation data, and then bridge the sim-to-real gap via transfer learning. The most innovative part is that we use the NN to uncover unknown physics and also implant known physics into the transfer learning framework, thus greatly improving model stability and decreasing the data requirement. This work serves as a strong example of applying machine learning to real experimental research, especially under the constraints of limited data and varying fidelity.
Physics-Enhanced Multi-fidelity Learning for Optical Surface Imprint
[ "Yongchao Chen" ]
Workshop/ReALML
2311.10278
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
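Multi-fidelity networks of the general kind referenced above are often built as a low-fidelity net composed with linear and nonlinear correction nets fit on scarce high-fidelity data. The PyTorch sketch below follows that generic recipe on synthetic 1-D data; the architecture, the two-stage training, and the data are assumptions of this sketch, not the paper's indentation model.

```python
import torch

torch.manual_seed(0)
# Synthetic stand-ins: many cheap simulation (low-fidelity) samples, few
# experimental (high-fidelity) samples with a systematic sim-to-real gap.
x_lo = torch.rand(200, 1) * 2 - 1
y_lo = torch.sin(3 * x_lo)
x_hi = torch.rand(12, 1) * 2 - 1
y_hi = 1.2 * torch.sin(3 * x_hi) + 0.3 * x_hi    # shifted/scaled "real" law

lo_net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                             torch.nn.Linear(32, 1))
corr_lin = torch.nn.Linear(2, 1)                  # linear correction term
corr_nl = torch.nn.Sequential(torch.nn.Linear(2, 16), torch.nn.Tanh(),
                              torch.nn.Linear(16, 1))

opt = torch.optim.Adam(lo_net.parameters(), lr=1e-2)
for _ in range(2000):                             # stage 1: fit simulation data
    opt.zero_grad()
    loss = ((lo_net(x_lo) - y_lo) ** 2).mean()
    loss.backward()
    opt.step()

params = list(corr_lin.parameters()) + list(corr_nl.parameters())
opt = torch.optim.Adam(params, lr=1e-2)
for _ in range(2000):                             # stage 2: learn the correction
    opt.zero_grad()
    z = torch.cat([x_hi, lo_net(x_hi).detach()], dim=1)
    pred = corr_lin(z) + corr_nl(z)
    loss = ((pred - y_hi) ** 2).mean()
    loss.backward()
    opt.step()

x_test = torch.linspace(-1, 1, 5).unsqueeze(-1)
z = torch.cat([x_test, lo_net(x_test)], dim=1)
print(torch.cat([x_test, corr_lin(z) + corr_nl(z)], dim=1).detach())
```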
null
https://openreview.net/forum?id=dfUF5EbJUj
@inproceedings{ qin2023generalized, title={Generalized Objectives in Adaptive Experiments: The Frontier between Regret and Speed}, author={Chao Qin and Daniel Russo}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=dfUF5EbJUj} }
This paper formulates a generalized model of multi-armed bandit experiments that accommodates both cumulative regret minimization and best-arm identification objectives. We identify the optimal instance-dependent scaling of the cumulative cost across experimentation and deployment, which is expressed in the familiar form uncovered by Lai and Robbins (1985). We show that the nature of asymptotically efficient algorithms is nearly independent of the cost functions, emphasizing a remarkable universality phenomenon. Balancing various cost considerations is reduced to an appropriate choice of exploitation rate. Additionally, we explore the Pareto frontier between the length of the experiment and the cumulative regret across experimentation and deployment. A notable and universal feature is that even a slight reduction in the exploitation rate (from one to a slightly lower value) results in a substantial decrease in the experiment's length, accompanied by only a minimal increase in the cumulative regret.
Generalized Objectives in Adaptive Experiments: The Frontier between Regret and Speed
[ "Chao Qin", "Daniel Russo" ]
Workshop/ReALML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=brPrxb9Zz3
@inproceedings{ nguyen2023expt, title={Ex{PT}: Scaling Foundation Models for Experimental Design via Synthetic Pretraining}, author={Tung Nguyen and Sudhanshu Agrawal and Aditya Grover}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=brPrxb9Zz3} }
Experimental design is a fundamental problem in many science and engineering fields. In this problem, sample efficiency is crucial due to the time, money, and safety costs of real-world design evaluations. Existing approaches either rely on active data collection or access to large, labeled datasets of past experiments, making them impractical in many real-world scenarios. In this work, we address the more challenging yet realistic setting of few-shot experimental design, where only a few labeled data points of input designs and their corresponding values are available. We approach this problem as a conditional generation task, where a model conditions on a few labeled examples and the desired output to generate an optimal input design. To this end, we present Pretrained Transformers for Experimental Design (ExPT), which uses a novel combination of synthetic pretraining with in-context learning to enable few-shot generalization. In ExPT, we only assume knowledge of a finite collection of unlabelled data points from the input domain and pretrain a transformer neural network to optimize diverse synthetic functions defined over this domain. Unsupervised pretraining allows ExPT to adapt to any design task at test time in an in-context fashion by conditioning on a few labeled data points from the target task and generating the candidate optima. We evaluate ExPT on few-shot experimental design in challenging domains and demonstrate its superior generality and performance compared to existing methods.
ExPT: Scaling Foundation Models for Experimental Design via Synthetic Pretraining
[ "Tung Nguyen", "Sudhanshu Agrawal", "Aditya Grover" ]
Workshop/ReALML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=YrAXARes9d
@inproceedings{ tian2023autodex, title={Aut{ODE}x: Automated Optimal Design of Experiments Platform with Data- and Time-Efficient Multi-Objective Optimization}, author={Yunsheng Tian and Pavle Vanja Konakovic and Beichen Li and Ane Zuniga and Michael Foshey and Timothy Erps and Wojciech Matusik and Mina Konakovic Lukovic}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=YrAXARes9d} }
We introduce AutODEx, an automated machine learning platform for optimal design of experiments to expedite solution discovery with optimal objective trade-offs. We implement state-of-the-art multi-objective Bayesian optimization (MOBO) algorithms in a unified and flexible framework for optimal design of experiments, along with efficient asynchronous batch strategies extended to MOBO to harness experiment parallelization. For users with little or no experience with coding or machine learning, we provide an intuitive graphical user interface (GUI) to help quickly visualize and guide the experiment design. For experienced researchers, our modular code structure serves as a testbed to quickly customize, develop, and evaluate their own MOBO algorithms. Extensive benchmark experiments against other MOBO packages demonstrate AutODEx's competitive and stable performance. Furthermore, we showcase AutODEx's real-world utility by autonomously guiding hardware experiments with minimal human involvement.
AutODEx: Automated Optimal Design of Experiments Platform with Data- and Time-Efficient Multi-Objective Optimization
[ "Yunsheng Tian", "Pavle Vanja Konakovic", "Beichen Li", "Ane Zuniga", "Michael Foshey", "Timothy Erps", "Wojciech Matusik", "Mina Konakovic Lukovic" ]
Workshop/ReALML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=YRxd1szajS
@inproceedings{ ren2023accelerated, title={Accelerated High-Entropy Alloys Discovery for Electrocatalysis via Robotic-Aided Active Learning}, author={Zhichu Ren and Zhen Zhang and Yunsheng Tian and Ju Li}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=YRxd1szajS} }
This work explores the accelerated discovery of High-Entropy Alloy electrocatalysts using a novel carbothermal shock fabrication method, underpinned by an active learning approach. A high-throughput robotic platform, integrating a BoTorch-based active learning module with an Opentrons liquid handling robot and a 7-axis robotic arm, expedites the iterative experimental cycles. The recent integration of large language models leverages ChatGPT’s API, facilitating voice-driven interactions between researchers and the automation setup, further enhancing the autonomous workflow under experimental materials science scenarios. Initial optimization efforts for a green hydrogen production catalyst yield promising results, showcasing the efficacy of the active learning framework in navigating the complex materials design space of HEAs. This study also emphasizes the crucial need for consistency and reproducibility in real-world experiments to fully harness the potential of active learning in materials science explorations.
Accelerated High-Entropy Alloys Discovery for Electrocatalysis via Robotic-Aided Active Learning
[ "Zhichu Ren", "Zhen Zhang", "Yunsheng Tian", "Ju Li" ]
Workshop/ReALML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=Xw8KTnFpLA
@inproceedings{ dovonon2023longrun, title={Long-run Behaviour of Multi-fidelity Bayesian Optimisation}, author={Gbetondji Jean-Sebastien Dovonon and Jakob Zeitler}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=Xw8KTnFpLA} }
Multi-fidelity Bayesian Optimisation (MFBO) has been shown to generally converge faster than single-fidelity Bayesian Optimisation (SFBO) (\cite{poloczek2017multi}). Inspired by recent benchmark papers, we are investigating the long-run behaviour of MFBO, based on observations in the literature that it might under-perform in certain scenarios (\cite{mikkola2023multi}, \cite{eggensperger2021hpobench}). An under-performance of MFBO in the long run could significantly undermine its application to many research tasks, especially when we are not able to identify when the under-performance begins, and other BO algorithms would have performed better. We create a simple benchmark study, showcase empirical results and discuss scenarios, concluding with inconclusive results.
Long-run Behaviour of Multi-fidelity Bayesian Optimisation
[ "Gbetondji Jean-Sebastien Dovonon", "Jakob Zeitler" ]
Workshop/ReALML
2312.12633
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=Xu8d36bb5c
@inproceedings{ vishwakarma2023understanding, title={Understanding Threshold-based Auto-labeling: The Good, the Bad, and the Terra Incognita}, author={Harit Vishwakarma and Heguang Lin and Frederic Sala and Ramya Vinayak}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=Xu8d36bb5c} }
Creating large-scale high-quality labeled datasets is a major bottleneck in supervised machine learning workflows. Threshold-based auto-labeling (TBAL), where validation data obtained from humans is used to find a confidence threshold above which the data is machine-labeled, reduces reliance on manual annotation. TBAL is emerging as a widely-used solution in practice. Given the long shelf-life and diverse usage of the resulting datasets, understanding when the data obtained by such auto-labeling systems can be relied on is crucial. This is the first work to analyze TBAL systems and derive sample complexity bounds on the amount of human-labeled validation data required for guaranteeing the quality of machine-labeled data. Our results provide two crucial insights. First, reasonable chunks of unlabeled data can be automatically and accurately labeled by seemingly bad models. Second, a hidden downside of TBAL systems is potentially prohibitive validation data usage. Together, these insights describe the promise and pitfalls of using such systems. We validate our theoretical guarantees with extensive experiments on synthetic and real datasets.
Understanding Threshold-based Auto-labeling: The Good, the Bad, and the Terra Incognita
[ "Harit Vishwakarma", "Heguang Lin", "Frederic Sala", "Ramya Vinayak" ]
Workshop/ReALML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
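The core TBAL loop analyzed in the record above is easy to state: use human-labeled validation data to find a confidence threshold whose above-threshold error is within tolerance, then machine-label everything above it. The sketch below does this naively; the paper's guarantees rest on finite-sample corrections that this version omits.

```python
import numpy as np

rng = np.random.default_rng(0)
n_val = 500
conf_val = rng.random(n_val)                          # model confidence on validation
correct = rng.random(n_val) < (0.5 + 0.5 * conf_val)  # accuracy grows with confidence

def pick_threshold(conf, correct, tol=0.05, min_count=30):
    """Smallest threshold whose above-threshold validation error <= tol."""
    for t in np.sort(conf):
        above = conf >= t
        if above.sum() < min_count:      # too little validation mass to certify
            break
        if 1 - correct[above].mean() <= tol:
            return t
    return np.inf                        # no certifiable threshold found

t_star = pick_threshold(conf_val, correct)
conf_pool = rng.random(100_000)          # confidences on the unlabeled pool
auto = conf_pool >= t_star
print(f"threshold {t_star:.3f}: auto-labeling {auto.mean():.1%} of the pool")
```

The `min_count` guard hints at the hidden cost the paper highlights: certifying a threshold consumes validation labels, and with too few of them no threshold can be trusted.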
null
https://openreview.net/forum?id=WWqJWiyQ2D
@inproceedings{ ikram2023probabilistic, title={Probabilistic Generative Modeling for Procedural Roundabout Generation for Developing Countries}, author={Zarif Ikram and Ling Pan and Dianbo Liu}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=WWqJWiyQ2D} }
Due to limited resources and fast economic growth, designing optimal transportation road networks with traffic simulation and validation in a cost-effective manner is vital for developing countries, where extensive manual testing is expensive and often infeasible. Current rule-based road design generators lack diversity, a key feature for design robustness. Generative Flow Networks (GFlowNets) learn stochastic policies to sample from an unnormalized reward distribution, thus generating high-quality solutions while preserving their diversity. In this work, we formulate the problem of linking incident roads to the circular junction of a roundabout as a Markov decision process, and we leverage GFlowNets as the Junction-Art road generator. We compare our method with related methods and our empirical results show that our method achieves better diversity while preserving a high validity score.
Probabilistic Generative Modeling for Procedural Roundabout Generation for Developing Countries
[ "Zarif Ikram", "Ling Pan", "Dianbo Liu" ]
Workshop/ReALML
2310.03687
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=WDLXX4NJSK
@inproceedings{ shen2023efficient, title={Efficient Variational Sequential Information Control}, author={Jianwei Shen and Jason Pacheco}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=WDLXX4NJSK} }
We develop a family of fast variational methods for sequential control in dynamical settings where an agent is incentivized to maximize information gain. We consider the case of optimal control in continuous nonlinear dynamical systems that prohibit exact evaluation of the mutual information (MI) reward. Our approach couples efficient message-passing inference with variational bounds on the MI objective under Gaussian projections. We also develop a Gaussian mixture approximation that enables exact MI evaluation under constraints on the component covariances. We validate our methodology in nonlinear systems with superior and faster control compared to standard particle-based methods. We show our approach improves the accuracy and efficiency of one-shot robotic learning with intrinsic MI rewards.
Efficient Variational Sequential Information Control
[ "Jianwei Shen", "Jason Pacheco" ]
Workshop/ReALML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=UWJpUpG8Cv
@inproceedings{ kassraie2023anytime, title={Anytime Model Selection in Linear Bandits}, author={Parnian Kassraie and Nicolas Emmenegger and Andreas Krause and Aldo Pacchiano}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=UWJpUpG8Cv} }
Model selection in the context of bandit optimization is a challenging problem, as it requires balancing exploration and exploitation not only for action selection, but also for model selection. One natural approach is to rely on online learning algorithms that treat different models as experts. Existing methods, however, scale poorly ($\mathrm{poly}(M)$) with the number of models $M$ in terms of their regret. We develop ALEXP, an anytime algorithm, which has an exponentially improved ($\log M$) dependence on $M$ for its regret. We neither require knowledge of the horizon $n$, nor rely on an initial purely exploratory stage. Our approach utilizes a novel time-uniform analysis of the Lasso, by defining a self-normalized martingale sequence based on the empirical process error, establishing a new connection between interactive learning and high-dimensional statistics.
Anytime Model Selection in Linear Bandits
[ "Parnian Kassraie", "Nicolas Emmenegger", "Andreas Krause", "Aldo Pacchiano" ]
Workshop/ReALML
2307.12897
[ "https://github.com/lasgroup/alexp" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=ThMSXaolvn
@inproceedings{ hern{\'a}ndez-garc{\'\i}a2023multifidelity, title={Multi-Fidelity Active Learning with {GF}lowNets}, author={Alex Hern{\'a}ndez-Garc{\'\i}a and Nikita Saxena and Moksh Jain and Cheng-Hao Liu and Yoshua Bengio}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=ThMSXaolvn} }
Many relevant scientific and engineering problems present challenges where current machine learning methods cannot yet efficiently leverage the available data and resources. For example, certain problems involve exploring very large, structured and high-dimensional spaces, where querying a high-fidelity, black-box objective function is very expensive. Progress in machine learning methods that can efficiently tackle such problems would help accelerate currently crucial areas such as drug and materials discovery. In this paper, we propose a multi-fidelity active learning algorithm with GFlowNets as a sampler, to efficiently discover diverse, high-scoring candidates where multiple approximations of the black-box function are available at lower fidelity and cost. Our evaluation on molecular discovery tasks shows that multi-fidelity active learning with GFlowNets can discover high-scoring candidates at a fraction of the budget of its single-fidelity counterpart while maintaining diversity, unlike RL-based alternatives. These results open new avenues for multi-fidelity active learning to accelerate scientific discovery and engineering design.
Multi-Fidelity Active Learning with GFlowNets
[ "Alex Hernández-García", "Nikita Saxena", "Moksh Jain", "Cheng-Hao Liu", "Yoshua Bengio" ]
Workshop/ReALML
2306.11715
[ "https://github.com/nikita-0209/mf-al-gfn" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=SuSPkCI0qP
@inproceedings{ fowler2023learning, title={Learning in Clinical Trial Settings}, author={Zoe Fowler and Kiran Premdat Kokilepersaud and Mohit Prabhushankar and Ghassan AlRegib}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=SuSPkCI0qP} }
This paper presents an approach to active learning that considers the non-independent and identically distributed (non-i.i.d.) structure of a clinical trial setting. There exist two types of clinical trials: retrospective and prospective. Retrospective clinical trials analyze data after treatment has been performed; prospective clinical trials collect data as treatment is ongoing. Traditional active learning approaches are often unrealistic in practice and assume the dataset is i.i.d. when selecting training samples; however, in the case of clinical trials, treatment results in a dependency between the data collected at the current and past visits. Thus, we propose prospective active learning to overcome the limitations present in traditional active learning methods, where we condition on the time at which data was collected. We compare our proposed method to the traditional active learning paradigm, which we refer to as retrospective in nature, on one clinical trial dataset and one non-clinical trial dataset. We show that in clinical trial settings, our proposed method outperforms retrospective active learning.
Learning in Clinical Trial Settings
[ "Zoe Fowler", "Kiran Premdat Kokilepersaud", "Mohit Prabhushankar", "Ghassan AlRegib" ]
Workshop/ReALML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=SmvTEe9iSG
@inproceedings{ sinaga2023preferential, title={Preferential Heteroscedastic Bayesian Optimization with Informative Noise Priors}, author={Marshal Arijona Sinaga and Julien Martinelli and Samuel Kaski}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=SmvTEe9iSG} }
Preferential Bayesian optimization (PBO) is a sample-efficient framework for optimizing a black-box function by utilizing human preferences between two candidate solutions as a proxy. Conventional PBO relies on homoscedastic noise to model human preference structure. However, such noise fails to accurately capture the varying levels of human aleatoric uncertainty among different pairs of candidates. For instance, a chemist with solid expertise in glucose-related molecules may easily compare two such compounds but struggle with alcohol-related molecules. Furthermore, PBO ignores this uncertainty when searching for a new candidate, consequently underestimating the risk associated with human uncertainty. To address this, we propose heteroscedastic noise models to learn human preference structure. Moreover, we integrate the preference structure with acquisition functions that account for aleatoric uncertainty. The noise models assign noise based on the distance of a specific input to a predefined set of reliable inputs known as \emph{anchors}. We empirically evaluate the proposed approach on a range of synthetic black-box functions, demonstrating a consistent improvement over homoscedastic PBO.
Preferential Heteroscedastic Bayesian Optimization with Informative Noise Priors
[ "Marshal Arijona Sinaga", "Julien Martinelli", "Samuel Kaski" ]
Workshop/ReALML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=ScOvmGz4xH
@inproceedings{ bal2023optimistic, title={Optimistic Games for Combinatorial Bayesian Optimization with Applications to Protein Design}, author={Melis Ilayda Bal and Pier Giuseppe Sessa and Mojmir Mutny and Andreas Krause}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=ScOvmGz4xH} }
Bayesian optimization (BO) is a powerful framework to optimize black-box, expensive-to-evaluate functions via sequential interactions. In several important problems (e.g., drug discovery, circuit design, neural architecture search, etc.), though, such functions are defined over $\textit{combinatorial and unstructured}$ spaces. This makes existing BO algorithms infeasible due to the intractable maximization of the acquisition function to find informative evaluation points. To address this issue, we propose $\textbf{GameOpt}$, a novel game-theoretical approach to combinatorial BO. $\textbf{GameOpt}$ establishes a cooperative game between the different optimization variables and computes informative points to be game $\textit{equilibria}$ of the acquisition function. These are stable configurations from which no variable has an incentive to deviate -- analogous to local optima in continuous domains. Crucially, this allows us to efficiently break down the complexity of the combinatorial domain into individual decision sets, making $\textbf{GameOpt}$ scalable to large combinatorial spaces. We demonstrate the application of $\textbf{GameOpt}$ to the challenging $\textit{protein design}$ problem and validate its performance on two real-world protein datasets. Each protein can take up to $20^{X}$ possible configurations, where $X$ is the length of the protein, making standard BO methods unusable. Instead, our approach iteratively selects informative protein configurations and very quickly discovers highly active protein variants compared to other baselines.
Optimistic Games for Combinatorial Bayesian Optimization with Applications to Protein Design
[ "Melis Ilayda Bal", "Pier Giuseppe Sessa", "Mojmir Mutny", "Andreas Krause" ]
Workshop/ReALML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=QIwA1zUd2t
@inproceedings{ nam2023npcnis, title={{NPC}-{NIS}: Navigating Semiconductor Process Corners with Neural Importance Sampling}, author={Hong Chul Nam and Chanwoo Park}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=QIwA1zUd2t} }
Traditional corner case analysis in semiconductor circuit design typically involves the use of predetermined semiconductor process parameters, including Fast, Typical, and Slow corners for PMOS and NMOS devices. This frequently yields overly conservative designs due to the utilization of fixed, and potentially non-representative, process parameter values for circuit simulations. Identifying the worst cases of circuit figures of merit (FoMs) within typical semiconductor process variation ranges presents a considerable challenge, especially given the complexities associated with accurately sampling rare semiconductor events. In response, we introduce NPC-NIS, a model specifically developed for estimating rare cases in semiconductor circuit analysis, leveraging a learnable importance sampling strategy. We model the distribution of process parameters that exhibit the worst FoMs within a realistic range. This adaptable framework dynamically identifies and addresses rare semiconductor cases within typical process variation ranges, enhancing our circuit design optimization capabilities under realistic conditions. Our empirical results validate the effectiveness of the Neural Importance Sampling (NIS) approach in identifying and mitigating rare semiconductor scenarios, thereby contributing to the development of more robust and reliable semiconductor circuit designs and connecting traditional semiconductor corner case analysis with real-world semiconductor applications.
NPC-NIS: Navigating Semiconductor Process Corners with Neural Importance Sampling
[ "Hong Chul Nam", "Chanwoo Park" ]
Workshop/ReALML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=P7PuMQEKbF
@inproceedings{ che2023planning, title={Planning Contextual Adaptive Experiments with Model Predictive Control}, author={Ethan Che and Jimmy Wang and Hongseok Namkoong}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=P7PuMQEKbF} }
Implementing adaptive experimentation methods in the real world often encounters a multitude of operational difficulties, including batched/delayed feedback, non-stationary environments, and constraints on treatment allocations. To improve the flexibility of adaptive experimentation, we propose a Bayesian, optimization-based framework founded on model-predictive control (MPC) for the linear contextual bandit setting. While we focus on simple regret minimization, the framework can flexibly incorporate multiple objectives along with constraints, batches, personalized and non-personalized policies, as well as predictions of future context arrivals. Most importantly, it maintains this flexibility while guaranteeing improvement over non-adaptive A/B testing across all time horizons, and empirically outperforms standard policies such as Thompson Sampling. Overall, this framework offers a way to guide adaptive designs across the varied demands of modern large-scale experiments.
Planning Contextual Adaptive Experiments with Model Predictive Control
[ "Ethan Che", "Jimmy Wang", "Hongseok Namkoong" ]
Workshop/ReALML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=OWz37WOETP
@inproceedings{ martinelli2023learning, title={Learning relevant contextual variables within Bayesian optimization}, author={Julien Martinelli and Ayush Bharti and Armi Tiihonen and Louis Filstroff and S. T. John and Sabina J. Sloman and Patrick Rinke and Samuel Kaski}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=OWz37WOETP} }
Contextual Bayesian Optimization (CBO) efficiently optimizes black-box, expensive-to-evaluate functions with respect to design variables, while simultaneously integrating relevant contextual information regarding the environment, such as experimental conditions. However, the relevance of contextual variables is not necessarily known beforehand. Moreover, contextual variables can sometimes be optimized themselves, a setting overlooked by current CBO algorithms. Optimizing contextual variables may be costly, which raises the question of determining a minimal relevant subset. We address this problem using a novel method, Sensitivity-Analysis-Driven Contextual BO (SADCBO). We learn the relevance of context variables by sensitivity analysis of the posterior surrogate model, whilst minimizing the cost of optimization by leveraging recent developments on early stopping for BO. We empirically evaluate our proposed SADCBO against alternatives on both synthetic and real-world experiments, and demonstrate a consistent improvement across examples.
Learning relevant contextual variables within Bayesian optimization
[ "Julien Martinelli", "Ayush Bharti", "Armi Tiihonen", "Louis Filstroff", "S. T. John", "Sabina J. Sloman", "Patrick Rinke", "Samuel Kaski" ]
Workshop/ReALML
2305.14120
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=JVKQ5ovWgN
@inproceedings{ lau2023pinnacle, title={{PINNACLE}: {PINN} Adaptive ColLocation and Experimental points selection}, author={Gregory Kang Ruey Lau and Apivich Hemachandra and See-Kiong Ng and Bryan Kian Hsiang Low}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=JVKQ5ovWgN} }
Physics-Informed Neural Networks (PINNs), which incorporate PDEs as soft constraints, train with a composite loss function that contains multiple training point types: different types of collocation points chosen during training to enforce each PDE and initial/boundary conditions, and experimental points which are usually costly to obtain via experiments or simulations. Training PINNs using this loss function is challenging as it typically requires selecting large numbers of points of different types, each with different training dynamics. Unlike past works that focused on the selection of either collocation or experimental points, this work introduces PINN Adaptive ColLocation and Experimental points selection (PINNACLE), the first algorithm that jointly optimizes the selection of all training point types, while automatically adjusting the proportion of collocation point types as training progresses. PINNACLE uses information on the interactions among training point types, which had not been considered before, based on an analysis of PINN training dynamics via the Neural Tangent Kernel (NTK). We theoretically show that the criterion used by PINNACLE is related to the PINN generalization error, and empirically demonstrate that PINNACLE is able to outperform existing point selection methods for forward, inverse, and transfer learning problems.
PINNACLE: PINN Adaptive ColLocation and Experimental points selection
[ "Gregory Kang Ruey Lau", "Apivich Hemachandra", "See-Kiong Ng", "Bryan Kian Hsiang Low" ]
Workshop/ReALML
2404.07662
[ "https://github.com/apivich-h/pinnacle" ]
https://huggingface.co/papers/2404.07662
0
0
0
4
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=JJrwnclCFZ
@inproceedings{ sorourifar2023accelerating, title={Accelerating Black-Box Molecular Property Optimization by Adaptively Learning Sparse Subspaces}, author={Farshud Sorourifar and Thomas Banker and Joel Paulson}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=JJrwnclCFZ} }
Molecular property optimization (MPO) problems are inherently challenging since they are formulated over discrete, unstructured spaces and the labeling process involves expensive simulations or experiments, which fundamentally limits the amount of available data. Bayesian optimization (BO), which is a powerful and popular framework for efficient optimization of noisy, black-box objective functions (e.g., measured property values), is thus a potentially attractive framework for MPO. To apply BO to MPO problems, one must select a structured molecular representation that enables construction of a probabilistic surrogate model. Many molecular representations have been developed; however, they are all high-dimensional, which introduces important challenges in the BO process, mainly because the curse of dimensionality makes it difficult to define and perform inference over a suitable class of surrogate models. This challenge has been recently addressed by learning a lower-dimensional encoding of a SMILES or graph representation of a molecule in an unsupervised manner and then performing BO in the encoded space. In this work, we show that such methods have a tendency to “get stuck,” which we hypothesize occurs since the mapping from the encoded space to property values is not necessarily well-modeled by a Gaussian process. We argue for an alternative approach that combines numerical molecular descriptors with a sparse axis-aligned Gaussian process model, which is capable of rapidly identifying sparse subspaces that are most relevant to modeling the unknown property function. We demonstrate that our proposed method substantially outperforms existing MPO methods on a variety of benchmark and real-world problems. Specifically, we show that our method can routinely find near-optimal molecules out of a set of more than 100k alternatives within 100 or fewer expensive queries.
Accelerating Black-Box Molecular Property Optimization by Adaptively Learning Sparse Subspaces
[ "Farshud Sorourifar", "Thomas Banker", "Joel Paulson" ]
Workshop/ReALML
2401.01398
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=GUKt7ENgSr
@inproceedings{ ding2023ever, title={Ever Evolving Evaluator ({EV}3): Towards Flexible and Reliable Meta-Optimization for Knowledge Distillation}, author={Li Ding and Masrour Zoghi and Guy Tennenholtz and Maryam Karimzadehgan}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=GUKt7ENgSr} }
We introduce EV3, a novel meta-optimization framework designed to efficiently train scalable machine learning models through an intuitive explore-assess-adapt protocol. In each iteration of EV3, we explore various model parameter updates, assess them using pertinent evaluation methods, and then adapt the model based on the optimal updates and previous progress history. EV3 offers substantial flexibility without imposing stringent constraints like differentiability on the key objectives relevant to the tasks of interest, allowing for exploratory updates with intentionally-biased gradients and through a diversity of losses and optimizers. Additionally, the assessment phase provides reliable safety controls to ensure robust generalization, and can dynamically prioritize tasks in scenarios with multiple objectives. With inspiration drawn from evolutionary algorithms, meta-learning, and neural architecture search, we investigate an application of EV3 to knowledge distillation. Our experimental results illustrate EV3's capability to safely explore the modeling landscape, while hinting at its potential applicability across numerous domains due to its inherent flexibility and adaptability. Finally, we provide a JAX implementation of EV3, along with source code for experiments, available at: https://github.com/google-research/google-research/tree/master/ev3.
Ever Evolving Evaluator (EV3): Towards Flexible and Reliable Meta-Optimization for Knowledge Distillation
[ "Li Ding", "Masrour Zoghi", "Guy Tennenholtz", "Maryam Karimzadehgan" ]
Workshop/ReALML
2310.18893
[ "https://github.com/google-research/google-research" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=GKq0Vco2TW
@inproceedings{ mishra2023provablyconvergent, title={Provably-Convergent Bayesian Source Seeking with Mobile Agents in Multimodal Fields}, author={Vivek Mishra and Raul Astudillo and Peter I. Frazier and Fumin Zhang}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=GKq0Vco2TW} }
We consider source-seeking tasks, where the goal is to locate a source using a mobile agent that gathers potentially noisy measurements from the emitted signal. Such tasks are prevalent, for example, when searching radioactive or chemical sources using mobile sensors that track wind-carried particles. In this work, we propose an iterative Bayesian algorithm for source seeking, especially well-suited for challenging environments characterized by multimodal signal intensity and noisy observations. At each step, this algorithm computes a Bayesian posterior distribution characterizing the source's location using prior physical knowledge of the observation process and the accumulated data. Subsequently, it decides where the agent should move and observe next by following a search strategy that implicitly considers paths to the source's most likely location under the posterior. We show that the trajectory of an agent executing the proposed algorithm converges to the source's location asymptotically with probability one. We validate the algorithm's convergence through simulated experiments of an agent seeking a chemical plume in a turbulent environment.
Provably-Convergent Bayesian Source Seeking with Mobile Agents in Multimodal Fields
[ "Vivek Mishra", "Raul Astudillo", "Peter I. Frazier", "Fumin Zhang" ]
Workshop/ReALML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=G6ujG6LaKV
@inproceedings{ n{\'e}meth2023computeefficient, title={Compute-Efficient Active Learning}, author={G{\'a}bor N{\'e}meth and Tamas Matuszka}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=G6ujG6LaKV} }
Active learning, a powerful paradigm in machine learning, aims at reducing labeling costs by selecting the most informative samples from an unlabeled dataset. However, the traditional active learning process often demands extensive computational resources, hindering scalability and efficiency. In this paper, we address this critical issue by presenting a novel method designed to alleviate the computational burden associated with active learning on massive datasets. To achieve this goal, we introduce a simple yet effective, method-agnostic framework that outlines how to strategically choose and annotate data points, optimizing the process for efficiency while maintaining model performance. Through case studies, we demonstrate the effectiveness of our proposed method in reducing computational costs while maintaining or, in some cases, even surpassing baseline model outcomes. Code is available at https://github.com/aimotive/Compute-Efficient-Active-Learning
Compute-Efficient Active Learning
[ "Gábor Németh", "Tamas Matuszka" ]
Workshop/ReALML
2401.07639
[ "https://github.com/aimotive/Compute-Efficient-Active-Learning" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=FpMRFG3z2Q
@inproceedings{ song2023circuitvae, title={Circuit{VAE}: Efficient and Scalable Latent Circuit Optimization}, author={Jialin Song and Aidan Swope and Robert Kirby and Rajarshi Roy and Saad Godil and Jonathan Raiman and Bryan Catanzaro}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=FpMRFG3z2Q} }
Automatically designing fast and space-efficient digital circuits is challenging because circuits are discrete, must exactly implement the desired logic, and are costly to simulate. We address these challenges with CircuitVAE, a search algorithm that embeds computation graphs in a continuous space and optimizes a learned surrogate of physical simulation by gradient descent. By carefully controlling overfitting of the simulation surrogate and ensuring diverse exploration, our algorithm is highly sample-efficient, yet gracefully scales to large problem instances and high sample budgets. We test CircuitVAE by designing binary adders across a large range of sizes, IO timing constraints, and sample budgets. Our method excels at designing large circuits, where other algorithms struggle: compared to reinforcement learning and genetic algorithms, CircuitVAE typically finds 64-bit adders which are smaller and faster using less than half the sample budget. We also find CircuitVAE can design state-of-the-art adders in a real-world chip, demonstrating that our method can outperform commercial tools in a realistic setting.
CircuitVAE: Efficient and Scalable Latent Circuit Optimization
[ "Jialin Song", "Aidan Swope", "Robert Kirby", "Rajarshi Roy", "Saad Godil", "Jonathan Raiman", "Bryan Catanzaro" ]
Workshop/ReALML
2406.09535
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=FDoS9cHQTp
@inproceedings{ novitasari2023unleashing, title={Unleashing the Autoconversion Rates Forecasting: Evidential Regression from Satellite Data}, author={Maria Carolina Novitasari and Johannes Quaas and Miguel R. D. Rodrigues}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=FDoS9cHQTp} }
High-resolution simulations such as the ICOsahedral Non-hydrostatic Large-Eddy Model (ICON-LEM) can be used to understand the interactions between aerosols, clouds, and precipitation processes that currently represent the largest source of uncertainty involved in determining the radiative forcing of climate change. Nevertheless, due to the exceptionally high computing cost required, this simulation-based approach can only be employed for a short period of time within a limited area. Although machine learning can solve this problem, the related model uncertainties may make it less reliable. To address this, we developed a neural network (NN) model powered by evidential learning to assess the data and model uncertainties, applied to satellite observation data. Our study focuses on estimating the rate at which small droplets (cloud droplets) collide and coalesce to become larger droplets (raindrops) -- autoconversion rates -- since this is one of the key processes in the precipitation formation of liquid clouds, hence crucial to better understanding cloud responses to anthropogenic aerosols. The results of estimating the autoconversion rates demonstrate that the model performs reasonably well, with the inclusion of both aleatoric and epistemic uncertainty estimation, which improves the credibility of the model and provides useful insights for future improvement.
Unleashing the Autoconversion Rates Forecasting: Evidential Regression from Satellite Data
[ "Maria Carolina Novitasari", "Johannes Quaas", "Miguel R. D. Rodrigues" ]
Workshop/ReALML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=F6jSo0PIKy
@inproceedings{ agarwal2023towards, title={Towards Scalable Identification of Brick Kilns from Satellite Imagery with Active Learning}, author={Aditi Agarwal and Suraj Jaiswal and Madhav Kanda and Dhruv Patel and Rishabh Mondal and Vannsh Jani and Zeel B Patel and Nipun Batra and Sarath Guttikunda}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=F6jSo0PIKy} }
Air pollution is a leading cause of death globally, especially in Southeast Asia. Brick production contributes significantly to air pollution. However, unlike other sources such as power plants, brick production is unregulated and thus hard to monitor. Traditional survey-based methods for kiln identification are time- and resource-intensive. Similarly, it is time-consuming for air quality experts to annotate satellite imagery manually. Recently, computer vision machine learning models have helped reduce labeling costs, but they need sufficiently large labeled imagery. In this paper, we propose scalable methods using active learning to accurately detect brick kilns with minimal manual labeling effort. Through this work, we have identified more than 700 new brick kilns across the Indo-Gangetic region: a highly populous and polluted region spanning 0.4 million square kilometers in India. In addition, we have deployed our model as a web application for automatically identifying brick kilns in an area specified by the user.
Towards Scalable Identification of Brick Kilns from Satellite Imagery with Active Learning
[ "Aditi Agarwal", "Suraj Jaiswal", "Madhav Kanda", "Dhruv Patel", "Rishabh Mondal", "Vannsh Jani", "Zeel B Patel", "Nipun Batra", "Sarath Guttikunda" ]
Workshop/ReALML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=Ej4YjxAvcp
@inproceedings{ gajjar2023improved, title={Improved Bounds for Agnostic Active Learning of Single Index Models}, author={Aarshvi Gajjar and Xingyu Xu and Christopher Musco and Chinmay Hegde}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=Ej4YjxAvcp} }
We study active learning for single index models of the form $F({\mathbf{x}}) = f(\langle {\mathbf{w}}, {\mathbf{x}}\rangle)$, where $f:\mathbb{R} \to \mathbb{R}$ and ${\mathbf{x},\mathbf{w}} \in \mathbb{R}^d$. Such functions are important in scientific computing, where they are used to construct surrogate models for partial differential equations (PDEs) and to approximate high-dimensional Quantities of Interest. In these applications, collecting function samples requires solving a partial differential equation, so sample-efficient active learning methods translate to reduced computational cost. Our work provides two main results. First, when $f$ is known and Lipschitz, we show that $\tilde{O}(d)$ samples collected via \emph{statistical leverage score sampling} are sufficient to find an optimal single index model for a given target function, even in the challenging and practically important agnostic (adversarial noise) setting. This result is optimal up to logarithmic factors and improves quadratically on a recent $\tilde{O}(d^{2})$ bound of \citet{gajjar2023active}. Second, we show that $\tilde{O}(d^{3/2})$ samples suffice in the more difficult non-parametric setting when $f$ is \emph{unknown}, which is also the best result known in this general setting.
Improved Bounds for Agnostic Active Learning of Single Index Models
[ "Aarshvi Gajjar", "Xingyu Xu", "Christopher Musco", "Chinmay Hegde" ]
Workshop/ReALML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=E8zSTm2bGu
@inproceedings{ folch2023practical, title={Practical Path-based Bayesian Optimization}, author={Jose Pablo Folch and James A C Odgers and Shiqiang Zhang and Robert Matthew Lee and Behrang Shafei and David Walz and Calvin Tsay and Mark van der Wilk and Ruth Misener}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=E8zSTm2bGu} }
There has been a surge in interest in data-driven experimental design with applications to chemical engineering and drug manufacturing. Bayesian optimization (BO) has proven to be adaptable to such cases, since we can model the reactions of interest as expensive black-box functions. Sometimes, the cost of these black-box functions can be separated into two parts: (a) the cost of the experiment itself, and (b) the cost of changing the input parameters. In this short paper, we extend the SnAKe algorithm to deal with both types of costs simultaneously. We further propose extensions to the case of a maximum allowable input change, as well as to the multi-objective setting.
Practical Path-based Bayesian Optimization
[ "Jose Pablo Folch", "James A C Odgers", "Shiqiang Zhang", "Robert Matthew Lee", "Behrang Shafei", "David Walz", "Calvin Tsay", "Mark van der Wilk", "Ruth Misener" ]
Workshop/ReALML
2312.00622
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=D0fdIDnsWZ
@inproceedings{ hellan2023datadriven, title={Data-driven Prior Learning for Bayesian Optimisation}, author={Sigrid Passano Hellan and Christopher G. Lucas and Nigel H. Goddard}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=D0fdIDnsWZ} }
Transfer learning for Bayesian optimisation has generally assumed a strong similarity between optimisation tasks, with at least a subset having similar optimal inputs. This assumption can reduce computational costs, but it is violated in a wide range of optimisation problems where transfer learning may nonetheless be useful. We replace this assumption with a weaker one only requiring the shape of the optimisation landscape to be similar, and analyse the recent method Prior Learning for Bayesian Optimisation — PLeBO — in this setting. By learning priors for the hyperparameters of the Gaussian process surrogate model we can better approximate the underlying function, especially for few function evaluations. We validate the learned priors and compare to a breadth of transfer learning approaches, using synthetic data and a recent air pollution optimisation problem as benchmarks. We show that PLeBO and prior transfer find good inputs in fewer evaluations.
Data-driven Prior Learning for Bayesian Optimisation
[ "Sigrid Passano Hellan", "Christopher G. Lucas", "Nigel H. Goddard" ]
Workshop/ReALML
2311.14653
[ "https://github.com/sighellan/plebo" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=CzdSCFOG1n
@inproceedings{ mishler2023active, title={Active Learning with Missing Not At Random Outcomes}, author={Alan Mishler and Mohsen Ghassemi and Alec Koppel and Sumitra Ganesh}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=CzdSCFOG1n} }
When outcomes in training data are missing not at random (MNAR), predictors that are trained on that data can be arbitrarily biased. In some cases, however, batches of missing outcomes can be recovered at some cost, giving rise to a pool-based active learning setting. Previous active learning approaches implicitly treat all labeled data as having come from the same distribution, whereas in the MNAR setting, the training data and the initial unlabeled pool have different distributions. We propose MNAR-Aware Active Learning (MAAL), an active learning procedure that takes this into account and takes advantage of information that the missingness indicator carries about the outcome. We additionally consider acquisition functions that are attuned to the MNAR setting. Experiments on a large set of classification benchmark datasets demonstrate the benefits of our proposed approach over standard active and passive learning approaches.
Active Learning with Missing Not At Random Outcomes
[ "Alan Mishler", "Mohsen Ghassemi", "Alec Koppel", "Sumitra Ganesh" ]
Workshop/ReALML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=BctsZxNsfO
@inproceedings{ bruns-smith2023robust, title={Robust Fitted-Q-Evaluation and Iteration under Sequentially Exogenous Unobserved Confounders}, author={David Bruns-Smith and Angela Zhou}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=BctsZxNsfO} }
Offline reinforcement learning is important in domains such as medicine, economics, and e-commerce where online experimentation is costly, dangerous or unethical, and where the true model is unknown. We study robust policy evaluation and policy optimization in the presence of sequentially-exogenous unobserved confounders under a sensitivity model. We propose and analyze orthogonalized robust fitted-Q-iteration that uses closed-form solutions of the robust Bellman operator to derive a loss minimization problem for the robust Q function, and adds a bias-correction to quantile estimation. Our algorithm enjoys the computational ease of fitted-Q-iteration and statistical improvements (reduced dependence on quantile estimation error) from orthogonalization. We provide sample complexity bounds, insights, and show effectiveness both in simulations and on real-world longitudinal healthcare data of treating sepsis. In particular, our model of sequential unobserved confounders yields an online Markov decision process, rather than partially observed Markov decision process: we illustrate how this can enable warm-starting optimistic reinforcement learning algorithms with valid robust bounds from observational data.
Robust Fitted-Q-Evaluation and Iteration under Sequentially Exogenous Unobserved Confounders
[ "David Bruns-Smith", "Angela Zhou" ]
Workshop/ReALML
2302.00662
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=A1RVn1m3J3
@inproceedings{ rankovi{\'c}2023bochemian, title={BoChemian: Large Language Model Embeddings for Bayesian Optimization of Chemical Reactions}, author={Bojana Rankovi{\'c} and Philippe Schwaller}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=A1RVn1m3J3} }
This paper explores the integration of Large Language Model (LLM) embeddings with Bayesian Optimization (BO) in the domain of chemical reaction optimization, with a showcase study on Buchwald-Hartwig reactions. By leveraging LLMs, we can transform textual chemical procedures into an informative feature space suitable for Bayesian optimization. Our findings show that even out-of-the-box open-source LLMs can map chemical reactions for optimization tasks, highlighting their latent specialized knowledge. The results motivate the consideration of further model specialization through adaptive fine-tuning within the BO framework for on-the-fly optimization. This work serves as a foundational step toward a unified computational framework that synergizes textual chemical descriptions with machine-driven optimization, aiming for more efficient and accessible chemical research. The code is available at: https://github.com/schwallergroup/bochemian.
BoChemian: Large Language Model Embeddings for Bayesian Optimization of Chemical Reactions
[ "Bojana Ranković", "Philippe Schwaller" ]
Workshop/ReALML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=7juV7SKVvM
@inproceedings{ das2023textitless, title={\textit{Less But Better}\\ Towards better \textit{AQ} Monitoring by learning \\ Inducing Points for Multi-Task Gaussian Processes}, author={Progyan Das and Mihir Agarwal}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=7juV7SKVvM} }
Air pollution is a pressing global issue affecting both human health and environmental sustainability. The high financial burden of conventional Air Quality (AQ) monitoring stations and their sparse spatial distribution necessitate advanced inferencing techniques for effective regulation and public health policies. We introduce a comprehensive framework employing Variational Multi-Output Gaussian Processes (VMOGP) with a Spectral Mixture (SM) kernel designed to model and predict multiple AQ indicators, particularly $PM_{2.5}$ and Carbon Monoxide ($CO$). Our method unifies the strengths of Multi-Output Gaussian Processes (MOGPs) and Variational Multi-Task Gaussian Processes (VMTGP) to capture intricate spatio-temporal correlations among air pollutants, thus delivering enhanced robustness and accuracy over Single-Output Gaussian Processes (SOGPs) and state-of-the-art neural attention-based methods. Importantly, by analyzing the variational distribution of auxiliary inducing points, we identify high-information geographical locales for optimized AQ monitoring frameworks. Through extensive empirical evaluations, we demonstrate superior performance in both accuracy and uncertainty quantification. Our methodology promises significant implications for urban planning, adaptive station placement, and public health policy formulation.
Less But Better: Towards better AQ Monitoring by learning Inducing Points for Multi-Task Gaussian Processes
[ "Progyan Das", "Mihir Agarwal" ]
Workshop/ReALML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=7iybUXjQgp
@inproceedings{ akengin2023actsort, title={ActSort: An active-learning accelerated cell sorting algorithm for large-scale calcium imaging datasets}, author={Hakki Orhun Akengin and Mehmet Anil Aslihak and Yiqi Jiang and Yang Li and Oscar Hernandez and Hakan Inan and Christopher Miranda and Marta Blanco Pozo and Fatih Dinc and Mark Schnitzer}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=7iybUXjQgp} }
Due to rapid progress in optical imaging technologies, contemporary neural calcium imaging studies can monitor the dynamics of 10,000 or more neurons at once in the brains of awake behaving mammals. After automated extraction of the neurons' putative locations, a typical experiment involves extensive human labor to cull false-positive cells from the data, a process called \emph{cell sorting.} Efforts to automate cell sorting via the use of trained models either employ pre-trained, suboptimal classifiers or require reduced but still substantial human labor to train dataset-specific classifiers. In this workshop paper, we introduce an active-learning accelerated cell-sorting paradigm, termed ActSort, which establishes an online feedback loop between the human annotator and the cell classifier. To test this paradigm, we designed a benchmark by curating large-scale calcium imaging datasets from 5 mice, with approximately 40,000 cell candidates in total. Each movie was annotated by 4 (out of 6 total) human annotators, yielding about 160,000 total annotations. With this approach, we tested two active learning strategies, discriminative active learning (DAL) and confidence-based active learning (CAL). To create a baseline representing the traditional strategy, we performed random and first-to-last annotations, in which cells are annotated in either a random order or the order they are received from the cell-extraction algorithm. Our analysis revealed that, even when using the active learning-derived results of $<5\%$ of the human-annotated cells, CAL surpassed human performance levels in both precision and recall. In comparison, the first-to-last strategy required $80\%$ of the cells to be annotated to achieve the same mark. By decreasing the human labor needed from hours to minutes while also enabling more accurate predictions than a typical human annotator, ActSort overcomes a bottleneck in neuroscience research and enables rapid pre-processing of large-scale brain-imaging datasets.
ActSort: An active-learning accelerated cell sorting algorithm for large-scale calcium imaging datasets
[ "Hakki Orhun Akengin", "Mehmet Anil Aslihak", "Yiqi Jiang", "Yang Li", "Oscar Hernandez", "Hakan Inan", "Christopher Miranda", "Marta Blanco Pozo", "Fatih Dinc", "Mark Schnitzer" ]
Workshop/ReALML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=7R23KXdqGV
@inproceedings{ yu2023actively, title={Actively learning a Bayesian matrix fusion model with deep side information}, author={Yangyang Yu and Jordan W. Suchow}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=7R23KXdqGV} }
High-dimensional deep neural network representations of images and concepts can be aligned to predict human annotations of diverse stimuli. However, such alignment requires the costly collection of behavioral responses, such that, in practice, the deep-feature spaces are only ever sparsely sampled. Here, we propose an active learning approach to adaptively sample experimental stimuli to efficiently learn a Bayesian matrix factorization model with deep side information. We observe a significant efficiency gain over a passive baseline. Furthermore, with a sequential batched sampling strategy, the algorithm is applicable not only to small datasets collected from traditional laboratory experiments but also to settings where large-scale crowdsourced data collection is needed to accurately align the high-dimensional deep feature representations derived from pre-trained networks. This provides cost-effective solutions for collecting and generating quality-assured predictions in large-scale behavioral and cognitive studies.
Actively learning a Bayesian matrix fusion model with deep side information
[ "Yangyang Yu", "Jordan W. Suchow" ]
Workshop/ReALML
2306.05331
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=6hkY6dYtBA
@inproceedings{ stretcu2023agile, title={Agile Modeling: From Concept to Classifier in Minutes}, author={Otilia Stretcu and Edward Vendrow and Kenji Hata and Krishnamurthy Viswanathan and Vittorio Ferrari and Sasan Tavakkol and Wenlei Zhou and Aditya Avinash and Enming Luo and Neil Gordon Alldrin and Mohammadhossein Bateni and Gabriel Berger and Andrew Bunner and Chun-Ta Lu and Javier A Rey and Giulia DeSalvo and Ranjay Krishna and Ariel Fuxman}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=6hkY6dYtBA} }
The application of computer vision methods to nuanced, subjective concepts is growing. While crowdsourcing has served the vision community well for most objective tasks (such as labeling a "zebra"), it now falters on tasks where there is substantial subjectivity in the concept (such as identifying "gourmet tuna"). However, empowering any user to develop a classifier for their concept is technically difficult: users are not machine learning experts, nor do they have the patience to label thousands of examples. In reaction, we introduce the problem of Agile Modeling: the process of turning any subjective visual concept into a computer vision model through real-time user-in-the-loop interactions. We instantiate an Agile Modeling prototype for image classification and show through a user study (N=14) that users can create classifiers with minimal effort in under 30 minutes. We compare this user-driven process with the traditional crowdsourcing paradigm and find that the crowd's notion often differs from the user's, especially as the concepts become more subjective. Finally, we scale our experiments with simulations of users training classifiers for ImageNet21k categories to further demonstrate the efficacy of the approach.
Agile Modeling: From Concept to Classifier in Minutes
[ "Otilia Stretcu", "Edward Vendrow", "Kenji Hata", "Krishnamurthy Viswanathan", "Vittorio Ferrari", "Sasan Tavakkol", "Wenlei Zhou", "Aditya Avinash", "Enming Luo", "Neil Gordon Alldrin", "Mohammadhossein Bateni", "Gabriel Berger", "Andrew Bunner", "Chun-Ta Lu", "Javier A Rey", "Giulia DeSalvo", "Ranjay Krishna", "Ariel Fuxman" ]
Workshop/ReALML
2302.12948
[ "" ]
https://huggingface.co/papers/2302.12948
3
1
0
18
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=5br5UllmBy
@inproceedings{ chaudhari2023learning, title={Learning Models and Evaluating Policies with Offline Off-Policy Data under Partial Observability}, author={Shreyas Chaudhari and Philip S. Thomas and Bruno Castro da Silva}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=5br5UllmBy} }
Models in reinforcement learning are often estimated from offline data, which in many real-world scenarios is subject to partial observability. In this work, we study the challenges that emerge from using models estimated from partially-observable offline data for policy evaluation. Notably, a complete definition of the models includes dependence on the data-collecting policy. To address this issue, we introduce a method for model estimation that incorporates importance weighting in the model learning process. The off-policy samples are reweighted to be reflective of their probabilities under a different policy, such that the resultant model is a consistent estimator of the off-policy model and provides consistent estimates of the expected off-policy return. This is a crucial step towards the reliable and responsible use of models learned under partial observability, particularly in scenarios where inaccurate policy evaluation can have catastrophic consequences. We empirically demonstrate the efficacy of our method and its resilience to common approximations such as weight clipping on a range of domains with diverse types of partial observability.
Learning Models and Evaluating Policies with Offline Off-Policy Data under Partial Observability
[ "Shreyas Chaudhari", "Philip S. Thomas", "Bruno Castro da Silva" ]
Workshop/ReALML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=4X33gHxHf1
@inproceedings{ zhang2023labelbench, title={LabelBench: A Comprehensive Framework for Benchmarking Adaptive Label-Efficient Learning}, author={Jifan Zhang and Yifang Chen and Gregory Canal and Arnav Mohanty Das and Gantavya Bhatt and Yinglun Zhu and Stephen Mussmann and Simon Shaolei Du and Jeff Bilmes and Kevin Jamieson and Robert D Nowak}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=4X33gHxHf1} }
Labeled data are critical to modern machine learning applications, but obtaining labels can be expensive. To mitigate this cost, machine learning methods, such as transfer learning, semi-supervised learning and active learning, aim to be \emph{label-efficient}: achieving high predictive performance from relatively few labeled examples. While obtaining the best label-efficiency in practice often requires combinations of these techniques, existing benchmark and evaluation frameworks do not capture a concerted combination of all such techniques. This paper addresses this deficiency by introducing LabelBench, a new computationally-efficient framework for joint evaluation of multiple label-efficient learning techniques. As an application of LabelBench, we introduce a novel benchmark of state-of-the-art active learning methods in combination with semi-supervised learning for fine-tuning pretrained vision transformers. Our benchmark demonstrates significantly better label-efficiencies than previously reported in active learning. LabelBench's modular codebase is open-sourced for the broader community to contribute label-efficient learning methods and benchmarks. The repository can be found at: https://github.com/EfficientTraining/LabelBench.
LabelBench: A Comprehensive Framework for Benchmarking Adaptive Label-Efficient Learning
[ "Jifan Zhang", "Yifang Chen", "Gregory Canal", "Arnav Mohanty Das", "Gantavya Bhatt", "Yinglun Zhu", "Stephen Mussmann", "Simon Shaolei Du", "Jeff Bilmes", "Kevin Jamieson", "Robert D Nowak" ]
Workshop/ReALML
2306.09910
[ "https://github.com/efficienttraining/labelbench" ]
https://huggingface.co/papers/2306.09910
0
0
0
8
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=3UuSQNVHS6
@inproceedings{ bankes2023reducr, title={{REDUCR}: Robust Data Downsampling Using Class Priority Reweighting}, author={William Bankes and George Hughes and Ilija Bogunovic and Zi Wang}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=3UuSQNVHS6} }
Modern machine learning models are becoming increasingly expensive to train for real-world image and text classification tasks, where massive web-scale data is collected in a streaming fashion. To reduce the training cost, online batch selection techniques have been developed to choose the most informative datapoints. However, these techniques can suffer from poor worst-class generalization performance due to class imbalance and distributional shifts. This work introduces REDUCR, a robust and efficient data downsampling method that uses class priority reweighting. REDUCR reduces the training data while preserving worst-class generalization performance. REDUCR assigns priority weights to datapoints in a class-aware manner using an online learning algorithm. We demonstrate the data efficiency and robust performance of REDUCR on vision and text classification tasks. On web-scraped datasets with imbalanced class distributions, REDUCR achieves significant test accuracy boosts for the worst-performing class (but also on average), surpassing state-of-the-art methods by around 14%.
REDUCR: Robust Data Downsampling Using Class Priority Reweighting
[ "William Bankes", "George Hughes", "Ilija Bogunovic", "Zi Wang" ]
Workshop/ReALML
2312.00486
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=2aOKjoPwT4
@inproceedings{ kokubun2023local, title={Local Acquisition Function for Active Level Set Estimation}, author={Yuta Kokubun and Kota Matsui and Kentaro Kutsukake and Wataru Kumagai and Takafumi Kanamori}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=2aOKjoPwT4} }
In this paper, we propose a new acquisition function based on local search for active super-level set estimation. Conventional acquisition functions for level set estimation problems are considered to struggle with problems where the threshold is high and many points in the upper-level set have function values close to the threshold. The proposed method addresses this issue by effectively switching between two acquisition functions: one rapidly finds the local level set and the other performs global exploration. The effectiveness of the proposed method is evaluated through experiments with synthetic and real-world datasets.
Local Acquisition Function for Active Level Set Estimation
[ "Yuta Kokubun", "Kota Matsui", "Kentaro Kutsukake", "Wataru Kumagai", "Takafumi Kanamori" ]
Workshop/ReALML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=0CPnNCOFiI
@inproceedings{ tifrea2023improving, title={Improving class and group imbalanced classification with uncertainty-based active learning}, author={Alexandru Tifrea and John Hill and Fanny Yang}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=0CPnNCOFiI} }
Recent experimental and theoretical analyses have revealed that uncertainty-based active learning algorithms (U-AL) are often not able to improve the average accuracy compared to even the simple baseline of passive learning (PL). However, we show in this work that U-AL is a competitive method in problems with severe data imbalance, when instead of the \emph{average} accuracy, the focus is the \emph{worst-subpopulation} accuracy. We show in extensive experiments that U-AL outperforms algorithms that explicitly aim to improve worst-subpopulation performance such as reweighting. We provide insights that explain the good performance of U-AL and show a theoretical result that is supported by our experimental observations.
Improving class and group imbalanced classification with uncertainty-based active learning
[ "Alexandru Tifrea", "John Hill", "Fanny Yang" ]
Workshop/ReALML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=zBQYr8O2gT
@inproceedings{ kapusniak2023learning, title={Learning Genomic Sequence Representations using Graph Neural Networks over De Bruijn Graphs}, author={Kacper Kapusniak and Manuel Burger and Gunnar Ratsch and Amir Joudaki}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=zBQYr8O2gT} }
The rapid expansion of genomic sequence data calls for new methods to achieve robust sequence representations. Existing techniques often neglect intricate structural details, emphasizing mainly contextual information. To address this, we developed k-mer embeddings that merge contextual and structural string information by enhancing De Bruijn graphs with structural similarity connections. Subsequently, we crafted a self-supervised method based on Contrastive Learning that employs a heterogeneous Graph Convolutional Network encoder and constructs positive pairs based on node similarities. Our embeddings consistently outperform prior techniques for Edit Distance Approximation and Closest String Retrieval tasks.
Learning Genomic Sequence Representations using Graph Neural Networks over De Bruijn Graphs
[ "Kacper Kapusniak", "Manuel Burger", "Gunnar Ratsch", "Amir Joudaki" ]
Workshop/GLFrontiers
2312.03865
[ "https://github.com/ratschlab/genomic-gnn" ]
https://huggingface.co/papers/2312.03865
0
0
0
4
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=wz6l6yLTv1
@inproceedings{ guerranti2023on, title={On the Adversarial Robustness of Graph Contrastive Learning Methods}, author={Filippo Guerranti and Zinuo Yi and Anna Starovoit and Rafiq Mazen Kamel and Simon Geisler and Stephan G{\"u}nnemann}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=wz6l6yLTv1} }
Contrastive learning (CL) has emerged as a powerful framework for learning representations of images and text in a self-supervised manner while enhancing model robustness against adversarial attacks. More recently, researchers have extended the principles of contrastive learning to graph-structured data, giving birth to the field of graph contrastive learning (GCL). However, whether GCL methods can deliver the same advantages in adversarial robustness as their counterparts in the image and text domains remains an open question. In this paper, we introduce a comprehensive robustness evaluation protocol tailored to assess the robustness of GCL models. We subject these models to adaptive adversarial attacks targeting the graph structure, specifically in the evasion scenario. We evaluate node and graph classification tasks using diverse real-world datasets and attack strategies. With our work, we aim to offer insights into the robustness of GCL methods and hope to open avenues for potential future research directions.
On the Adversarial Robustness of Graph Contrastive Learning Methods
[ "Filippo Guerranti", "Zinuo Yi", "Anna Starovoit", "Rafiq Mazen Kamel", "Simon Geisler", "Stephan Günnemann" ]
Workshop/GLFrontiers
2311.17853
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=wjVKHEPoU2
@inproceedings{ jang2023a, title={A Simple and Scalable Representation for Graph Generation}, author={Yunhui Jang and Seul Lee and Sungsoo Ahn}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=wjVKHEPoU2} }
Recently, there has been a surge of interest in employing neural networks for graph generation, a fundamental statistical learning problem with critical applications like molecule design and community analysis. However, most approaches encounter significant limitations when generating large-scale graphs. This is due to their requirement to output the full adjacency matrices whose size grows quadratically with the number of nodes. In response to this challenge, we introduce a new, simple, and scalable graph representation named gap encoded edge list (GEEL) that has a small representation size that aligns with the number of edges. In addition, GEEL significantly reduces the vocabulary size by incorporating the gap encoding and bandwidth restriction schemes. GEEL can be autoregressively generated with the incorporation of node positional encoding, and we further extend GEEL to deal with attributed graphs by designing a new grammar. Our findings reveal that the adoption of this compact representation not only enhances scalability but also bolsters performance by simplifying the graph generation process. We conduct a comprehensive evaluation across ten non-attributed and two molecular graph generation tasks, demonstrating the effectiveness of GEEL.
A Simple and Scalable Representation for Graph Generation
[ "Yunhui Jang", "Seul Lee", "Sungsoo Ahn" ]
Workshop/GLFrontiers
2312.02230
[ "https://github.com/yunhuijang/geel" ]
https://huggingface.co/papers/2312.02230
1
0
0
3
[]
[]
[]
[]
[]
[]
1
poster
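The GEEL abstract above describes representing a graph as a gap-encoded edge list so that token values stay small. Here is a minimal sketch of that idea under stated assumptions: the exact encoding (source gap, intra-edge offset) and the function name are guesses made for illustration, not the authors' scheme; the official implementation is at https://github.com/yunhuijang/geel.

```python
def to_gap_encoded_edge_list(edges):
    """Encode lexicographically sorted undirected edges (u, v), u < v, as
    (gap from the previous source node, offset of target from source)."""
    encoded, prev_u = [], 0
    for u, v in sorted(edges):
        encoded.append((u - prev_u, v - u))
        prev_u = u
    return encoded

# Example: a 4-cycle. Raw node ids can grow with graph size, but gaps stay
# small under a bandwidth restriction, which shrinks the token vocabulary
# an autoregressive generator must handle.
print(to_gap_encoded_edge_list([(0, 1), (1, 2), (2, 3), (0, 3)]))
# [(0, 1), (0, 3), (1, 1), (1, 1)]
```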
null
https://openreview.net/forum?id=wFJIkt2WAa
@inproceedings{ gao2023double, title={Double Equivariance for Inductive Link Prediction for Both New Nodes and New Relation Types}, author={Jianfei Gao and Yangze Zhou and Jincheng Zhou and Bruno Ribeiro}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=wFJIkt2WAa} }
The task of inductive link prediction in discrete attributed multigraphs (e.g., knowledge graphs, multilayer networks, heterogeneous networks, etc.) generally focuses on test predictions involving solely new nodes, but not both new nodes and new relation types. In this work, we formally define the task of predicting (completely) new nodes and new relation types at test time as a doubly inductive link prediction task and introduce a theoretical framework for its solution. We start by defining the concept of double permutation-equivariant representations, which are equivariant to permutations of both node identities and edge relation types. We then propose a general blueprint for designing neural architectures that impose a structural representation of relations, capable of inductively generalizing from training nodes and relations to arbitrarily new test nodes and relations without the need for adaptation, side information, or retraining. We also introduce the concept of distributionally double equivariant positional embeddings designed to perform the same task. Finally, we empirically demonstrate the capability of the two proposed models on a set of novel real-world benchmarks, showcasing relative performance gains of up to 41.40% on predicting new relation types compared to baselines.
Double Equivariance for Inductive Link Prediction for Both New Nodes and New Relation Types
[ "Jianfei Gao", "Yangze Zhou", "Jincheng Zhou", "Bruno Ribeiro" ]
Workshop/GLFrontiers
2302.01313
[ "https://github.com/purdueminds/isdea" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
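The abstract above defines double permutation equivariance: representations unchanged under joint relabeling of node identities and relation types. As a toy illustration of the property itself (not the paper's architectures), the sketch below uses a simplistic structural feature, chosen only because its invariance is easy to verify; all names are hypothetical.

```python
def structural_feature(triples, h, r, t):
    """A relabeling-invariant feature of a triple (h, r, t): the degrees of
    its endpoints and the global frequency of its relation type."""
    deg = lambda x: sum(x in (a, b) for a, _, b in triples)
    rel_freq = sum(rr == r for _, rr, _ in triples)
    return (deg(h), rel_freq, deg(t))

triples = [(0, 0, 1), (1, 0, 2), (2, 1, 0), (0, 1, 2)]
node_perm = {0: 2, 1: 0, 2: 1}   # relabel node identities
rel_perm = {0: 1, 1: 0}          # relabel relation types
permuted = [(node_perm[h], rel_perm[r], node_perm[t]) for h, r, t in triples]

# Double equivariance check: the feature of every triple is unchanged
# when both node ids and relation types are permuted consistently.
for h, r, t in triples:
    assert structural_feature(triples, h, r, t) == \
           structural_feature(permuted, node_perm[h], rel_perm[r], node_perm[t])
print("feature is invariant under joint node/relation relabeling")
```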
null
https://openreview.net/forum?id=tHQdZ74NLe
@inproceedings{ bhaila2023local, title={Local Differential Privacy in Graph Neural Networks: a Reconstruction Approach}, author={Karuna Bhaila and Wen Huang and Yongkai Wu and Xintao Wu}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=tHQdZ74NLe} }
Graph Neural Networks (GNNs) have achieved tremendous success in modeling complex graph data in a variety of applications. However, few studies have investigated privacy protection in GNNs. In this work, we propose a learning framework that provides node-level privacy while incurring low utility loss. We focus on a decentralized notion of Differential Privacy, namely Local Differential Privacy (LDP), and apply randomization mechanisms to perturb both feature and label data before they are collected by a central server for model training. Specifically, we investigate the application of randomization mechanisms in high-dimensional feature settings and propose an LDP protocol with strict privacy guarantees. Building on frequency estimation from the statistical analysis of randomized data, we develop reconstruction methods to approximate features and labels from the perturbed data. We also formulate this learning framework to utilize frequency estimates of graph clusters to supervise the training procedure at a sub-graph level. Extensive experiments on real-world and semi-synthetic datasets demonstrate the validity of our proposed model.
Local Differential Privacy in Graph Neural Networks: a Reconstruction Approach
[ "Karuna Bhaila", "Wen Huang", "Yongkai Wu", "Xintao Wu" ]
Workshop/GLFrontiers
2309.08569
[ "https://github.com/karuna-bhaila/rgnn" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
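The abstract above perturbs labels and features with randomization mechanisms and then reconstructs statistics via frequency estimation. Below is a minimal sketch of the generic textbook primitive this family of protocols builds on, k-ary randomized response with unbiased server-side frequency estimation; it is not the paper's full feature/label protocol (see https://github.com/karuna-bhaila/rgnn for that), and the function names are illustrative.

```python
import math
import random
from collections import Counter

def k_rr(value, k, eps):
    """Perturb a categorical value in {0, ..., k-1} with eps-LDP
    k-ary randomized response."""
    p_true = math.exp(eps) / (math.exp(eps) + k - 1)
    if random.random() < p_true:
        return value
    other = random.randrange(k - 1)          # pick one of the k-1 other values
    return other if other < value else other + 1

def estimate_frequencies(reports, k, eps):
    """Debias observed counts into unbiased estimates of true frequencies."""
    p = math.exp(eps) / (math.exp(eps) + k - 1)
    q = (1 - p) / (k - 1)
    counts, n = Counter(reports), len(reports)
    return [(counts.get(c, 0) / n - q) / (p - q) for c in range(k)]

# Each client perturbs its own label locally; the server only ever sees
# the noisy reports, yet can still recover the label distribution.
labels = [0] * 700 + [1] * 200 + [2] * 100
noisy = [k_rr(y, k=3, eps=1.0) for y in labels]
print(estimate_frequencies(noisy, k=3, eps=1.0))  # roughly [0.7, 0.2, 0.1]
```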