categories: string
doi: string
id: string
year: float64
venue: string
link: string
updated: string
published: string
title: string
abstract: string
authors: list
null
null
2405.13956
null
null
http://arxiv.org/pdf/2405.13956v2
2024-05-28T04:44:06Z
2024-05-22T19:45:01Z
Attention as an RNN
The advent of Transformers marked a significant breakthrough in sequence modelling, providing a highly performant architecture capable of leveraging GPU parallelism. However, Transformers are computationally expensive at inference time, limiting their applications, particularly in low-resource settings (e.g., mobile and embedded devices). Addressing this, we (1) begin by showing that attention can be viewed as a special Recurrent Neural Network (RNN) with the ability to compute its \textit{many-to-one} RNN output efficiently. We then (2) show that popular attention-based models such as Transformers can be viewed as RNN variants. However, unlike traditional RNNs (e.g., LSTMs), these models cannot be updated efficiently with new tokens, an important property in sequence modelling. Tackling this, we (3) introduce a new efficient method of computing attention's \textit{many-to-many} RNN output based on the parallel prefix scan algorithm. Building on the new attention formulation, we (4) introduce \textbf{Aaren}, an attention-based module that can not only (i) be trained in parallel (like Transformers) but also (ii) be updated efficiently with new tokens, requiring only constant memory for inferences (like traditional RNNs). Empirically, we show Aarens achieve comparable performance to Transformers on $38$ datasets spread across four popular sequential problem settings: reinforcement learning, event forecasting, time series classification, and time series forecasting tasks while being more time and memory-efficient.
Leo Feng, Frederick Tung, Hossein Hajimirsadeghi, Mohamed Osama Ahmed, Yoshua Bengio, Greg Mori
null
null
2405.13957
null
null
http://arxiv.org/pdf/2405.13957v1
2024-05-22T19:46:45Z
2024-05-22T19:46:45Z
Exploring the Relationship Between Feature Attribution Methods and Model Performance
Machine learning and deep learning models are pivotal in educational contexts, particularly in predicting student success. Despite their widespread application, a significant gap persists in comprehending the factors influencing these models' predictions, especially regarding explainability in education. This work addresses this gap by employing nine distinct explanation methods and conducting a comprehensive analysis to explore the correlation between the agreement among these methods in generating explanations and the predictive model's performance. Applying Spearman's correlation, our findings reveal a very strong correlation between the model's performance and the agreement level observed among the explanation methods.
Priscylla Silva, Claudio T. Silva, Luis Gustavo Nonato
null
null
2405.13960
null
null
http://arxiv.org/pdf/2405.13960v1
2024-05-22T19:55:33Z
2024-05-22T19:55:33Z
Learning To Play Atari Games Using Dueling Q-Learning and Hebbian Plasticity
In this work, an advanced deep reinforcement learning architecture is used to train neural network agents playing Atari games. Given only the raw game pixels, action space, and reward information, the system can train agents to play any Atari game. First, this system uses advanced techniques like deep Q-networks and dueling Q-networks to train efficient agents, the same techniques used by DeepMind to train agents that beat human players in Atari games. As an extension, plastic neural networks are used as agents, and their feasibility is analyzed in this scenario. The plasticity implementation was based on backpropagation and the Hebbian update rule. Plastic neural networks have excellent features like lifelong learning after the initial training, which makes them highly suitable in adaptive learning environments. As a new analysis of plasticity in this context, this work might provide valuable insights and direction for future works.
Md Ashfaq Salehin
null
null
2405.13961
null
null
http://arxiv.org/pdf/2405.13961v1
2024-05-22T19:57:11Z
2024-05-22T19:57:11Z
SADDLe: Sharpness-Aware Decentralized Deep Learning with Heterogeneous Data
Decentralized training enables learning with distributed datasets generated at different locations without relying on a central server. In realistic scenarios, the data distribution across these sparsely connected learning agents can be significantly heterogeneous, leading to local model over-fitting and poor global model generalization. Another challenge is the high communication cost of training models in such a peer-to-peer fashion without any central coordination. In this paper, we jointly tackle these two-fold practical challenges by proposing SADDLe, a set of sharpness-aware decentralized deep learning algorithms. SADDLe leverages Sharpness-Aware Minimization (SAM) to seek a flatter loss landscape during training, resulting in better model generalization as well as enhanced robustness to communication compression. We present two versions of our approach and conduct extensive experiments to show that SADDLe leads to 1-20% improvement in test accuracy compared to other existing techniques. Additionally, our proposed approach is robust to communication compression, with an average drop of only 1% in the presence of up to 4x compression.
[ "['Sakshi Choudhary' 'Sai Aparna Aketi' 'Kaushik Roy']" ]
null
null
2405.13962
null
null
http://arxiv.org/pdf/2405.13962v1
2024-05-22T19:58:13Z
2024-05-22T19:58:13Z
Learning heavy-tailed distributions with Wasserstein-proximal-regularized $\alpha$-divergences
In this paper, we propose Wasserstein proximals of $\alpha$-divergences as suitable objective functionals for learning heavy-tailed distributions in a stable manner. First, we provide sufficient, and in some cases necessary, relations among data dimension, $\alpha$, and the decay rate of data distributions for the Wasserstein-proximal-regularized divergence to be finite. Finite-sample convergence rates for the estimation in the case of the Wasserstein-1 proximal divergences are then provided under certain tail conditions. Numerical experiments demonstrate stable learning of heavy-tailed distributions (even those without first or second moments) without any explicit knowledge of the tail behavior, using suitable generative models such as GANs and flow-based models related to our proposed Wasserstein-proximal-regularized $\alpha$-divergences. Heuristically, $\alpha$-divergences handle the heavy tails and Wasserstein proximals allow non-absolute continuity between distributions and control the velocities of flow-based algorithms as they learn the target distribution deep into the tails.
Ziyu Chen, Hyemin Gu, Markos A. Katsoulakis, Luc Rey-Bellet, Wei Zhu
null
null
2405.13964
null
null
http://arxiv.org/pdf/2405.13964v2
2024-05-26T15:32:47Z
2024-05-22T20:00:19Z
Design Editing for Offline Model-based Optimization
Offline model-based optimization (MBO) aims to maximize a black-box objective function using only an offline dataset of designs and scores. A prevalent approach involves training a conditional generative model on existing designs and their associated scores, followed by the generation of new designs conditioned on higher target scores. However, these newly generated designs often underperform due to the lack of high-scoring training data. To address this challenge, we introduce a novel method, Design Editing for Offline Model-based Optimization (DEMO), which consists of two phases. In the first phase, termed pseudo-target distribution generation, we apply gradient ascent on the offline dataset using a trained surrogate model, producing a synthetic dataset where the predicted scores serve as new labels. A conditional diffusion model is subsequently trained on this synthetic dataset to capture a pseudo-target distribution, which enhances the accuracy of the conditional diffusion model in generating higher-scoring designs. Nevertheless, the pseudo-target distribution is susceptible to noise stemming from inaccuracies in the surrogate model, consequently predisposing the conditional diffusion model to generate suboptimal designs. We hence propose the second phase, existing design editing, to directly incorporate the high-scoring features from the offline dataset into design generation. In this phase, top designs from the offline dataset are edited by introducing noise, and are subsequently refined using the conditional diffusion model to produce high-scoring designs. Overall, high-scoring designs begin by inheriting high-scoring features from the second phase and are further refined with a more accurate conditional diffusion model in the first phase. Empirical evaluations on 7 offline MBO tasks show that DEMO outperforms various baseline methods.
Ye Yuan, Youyuan Zhang, Can Chen, Haolun Wu, Zixuan Li, Jianmo Li, James J. Clark, Xue Liu
null
null
2405.13965
null
null
http://arxiv.org/pdf/2405.13965v1
2024-05-22T20:04:52Z
2024-05-22T20:04:52Z
Unleashing the Power of Unlabeled Data: A Self-supervised Learning Framework for Cyber Attack Detection in Smart Grids
Modern power grids are undergoing significant changes driven by information and communication technologies (ICTs), and evolving into smart grids with higher efficiency and lower operation cost. Using ICTs, however, comes with an inevitable side effect that makes the power system more vulnerable to cyber attacks. In this paper, we propose a self-supervised learning-based framework to detect and identify various types of cyber attacks. Different from existing approaches, the proposed framework does not rely on large amounts of well-curated labeled data but makes use of the massive unlabeled data in the wild which are easily accessible. Specifically, the proposed framework adopts the BERT model from the natural language processing domain and learns generalizable and effective representations from the unlabeled sensing data, which capture the distinctive patterns of different attacks. Using the learned representations, together with a very small amount of labeled data, we can train a task-specific classifier to detect various types of cyber attacks. Meanwhile, real-world training datasets are usually imbalanced, i.e., there are only a limited number of data samples containing attacks. In order to cope with such data imbalance, we propose a new loss function, separate mean error (SME), which pays equal attention to the large and small categories to better train the model. Experimental results on a 5-area power grid system with 37 buses demonstrate the superior performance of our framework over existing approaches, especially when a very limited portion of labeled data is available, e.g., as low as 0.002%. We believe such a framework can be easily adopted to detect a variety of cyber attacks in other power grid scenarios.
Hanyu Zeng, Pengfei Zhou, Xin Lou, Zhen Wei Ng, David K. Y. Yau, Marianne Winslett
null
null
2405.13969
null
null
http://arxiv.org/pdf/2405.13969v1
2024-05-22T20:09:21Z
2024-05-22T20:09:21Z
Uncertainty-Aware DRL for Autonomous Vehicle Crowd Navigation in Shared Space
Safe, socially compliant, and efficient navigation of low-speed autonomous vehicles (AVs) in pedestrian-rich environments necessitates considering pedestrians' future positions and interactions with the vehicle and others. Despite the inevitable uncertainties associated with pedestrians' predicted trajectories due to their unobserved states (e.g., intent), existing deep reinforcement learning (DRL) algorithms for crowd navigation often neglect these uncertainties when using predicted trajectories to guide policy learning. This omission limits the usability of predictions when they diverge from the ground truth. This work introduces an integrated prediction and planning approach that incorporates the uncertainties of predicted pedestrian states in the training of a model-free DRL algorithm. A novel reward function encourages the AV to respect pedestrians' personal space, decrease speed during close approaches, and minimize the collision probability with their predicted paths. Unlike previous DRL methods, our model, designed for AV operation in crowded spaces, is trained in a novel simulation environment that reflects realistic pedestrian behaviour in a shared space with vehicles. Results show a 40% decrease in collision rate and a 15% increase in minimum distance to pedestrians compared to the state-of-the-art model that does not account for prediction uncertainty. Additionally, the approach outperforms model predictive control methods that incorporate the same prediction uncertainties in terms of both performance and computational time, while producing trajectories closer to human drivers in similar scenarios.
Mahsa Golchoubian, Moojan Ghafurian, Kerstin Dautenhahn, Nasser Lashgarian Azad
null
null
2405.13972
null
null
http://arxiv.org/pdf/2405.13972v3
2024-06-09T22:10:42Z
2024-05-22T20:17:22Z
Infinite-Dimensional Feature Interaction
Past neural network design has largely focused on the dimension of the feature representation space and its capacity scaling (e.g., width, depth), while overlooking the scaling of the feature interaction space. Recent advancements have shifted focus towards element-wise multiplication to facilitate a higher-dimensional feature interaction space for better information transformation. Despite this progress, multiplications predominantly capture low-order interactions, thus remaining confined to a finite-dimensional interaction space. To transcend this limitation, classic kernel methods emerge as a promising solution to engage features in an infinite-dimensional space. We introduce InfiNet, a model architecture that enables feature interaction within an infinite-dimensional space created by an RBF kernel. Our experiments reveal that InfiNet achieves new state-of-the-art results, owing to its capability to leverage infinite-dimensional interactions, significantly enhancing model performance.
Chenhui Xu, Fuxun Yu, Maoliang Li, Zihao Zheng, Zirui Xu, Jinjun Xiong, Xiang Chen
null
null
2405.13975
null
null
http://arxiv.org/pdf/2405.13975v1
2024-05-22T20:20:14Z
2024-05-22T20:20:14Z
There is HOPE to Avoid HiPPOs for Long-memory State Space Models
State-space models (SSMs) that utilize linear, time-invariant (LTI) systems are known for their effectiveness in learning long sequences. However, these models typically face several challenges: (i) they require specifically designed initializations of the system matrices to achieve state-of-the-art performance, (ii) they require training of state matrices on a logarithmic scale with very small learning rates to prevent instabilities, and (iii) they require the model to have exponentially decaying memory in order to ensure an asymptotically stable LTI system. To address these issues, we view SSMs through the lens of Hankel operator theory, which provides us with a unified theory for the initialization and training of SSMs. Building on this theory, we develop a new parameterization scheme, called HOPE, for LTI systems that utilizes Markov parameters within Hankel operators. This approach allows for random initializations of the LTI systems and helps to improve training stability, while also providing the SSMs with non-decaying memory capabilities. Our model efficiently implements these innovations by nonuniformly sampling the transfer functions of LTI systems, and it requires fewer parameters compared to canonical SSMs. When benchmarked against HiPPO-initialized models such as S4 and S4D, an SSM parameterized by Hankel operators demonstrates improved performance on Long-Range Arena (LRA) tasks. Moreover, we use a sequential CIFAR-10 task with padded noise to empirically corroborate our SSM's long memory capacity.
Annan Yu, Michael W. Mahoney, N. Benjamin Erichson
null
null
2405.13976
null
null
http://arxiv.org/pdf/2405.13976v2
2024-05-26T15:20:51Z
2024-05-22T20:20:43Z
EchoSpike Predictive Plasticity: An Online Local Learning Rule for Spiking Neural Networks
The drive to develop artificial neural networks that efficiently utilize resources has generated significant interest in bio-inspired Spiking Neural Networks (SNNs). These networks are particularly attractive due to their potential in applications requiring low power and memory. This potential is further enhanced by the ability to perform online local learning, enabling them to adapt to dynamic environments. This requires the model to be adaptive in a self-supervised manner. While self-supervised learning has seen great success in many deep learning domains, its application for online local learning in multi-layer SNNs remains underexplored. In this paper, we introduce the "EchoSpike Predictive Plasticity" (ESPP) learning rule, a pioneering online local learning rule designed to leverage hierarchical temporal dynamics in SNNs through predictive and contrastive coding. We validate the effectiveness of this approach using benchmark datasets, demonstrating that it performs on par with current state-of-the-art supervised learning rules. The temporal and spatial locality of ESPP makes it particularly well-suited for low-cost neuromorphic processors, representing a significant advancement in developing biologically plausible self-supervised learning models for neuromorphic computing at the edge.
[ "['Lars Graf' 'Zhe Su' 'Giacomo Indiveri']" ]
null
null
2405.13977
null
null
http://arxiv.org/pdf/2405.13977v1
2024-05-22T20:24:41Z
2024-05-22T20:24:41Z
Removing Bias from Maximum Likelihood Estimation with Model Autophagy
We propose autophagy penalized likelihood estimation (PLE), an unbiased alternative to maximum likelihood estimation (MLE) which is more fair and less susceptible to model autophagy disorder (madness). Model autophagy refers to models trained on their own output; PLE ensures the statistics of these outputs coincide with the data statistics. This enables PLE to be statistically unbiased in certain scenarios where MLE is biased. When biased, MLE unfairly penalizes minority classes in unbalanced datasets and exacerbates the recently discovered issue of self-consuming generative modeling. Theoretical and empirical results show that 1) PLE is more fair to minority classes and 2) PLE is more stable in a self-consumed setting. Furthermore, we provide a scalable and portable implementation of PLE with a hypernetwork framework, allowing existing deep learning architectures to be easily trained with PLE. Finally, we show PLE can bridge the gap between Bayesian and frequentist paradigms in statistics.
[ "['Paul Mayer' 'Lorenzo Luzi' 'Ali Siahkoohi' 'Don H. Johnson'\n 'Richard G. Baraniuk']" ]
null
null
2405.13978
null
null
http://arxiv.org/pdf/2405.13978v1
2024-05-22T20:29:15Z
2024-05-22T20:29:15Z
Mitigating Interference in the Knowledge Continuum through Attention-Guided Incremental Learning
Continual learning (CL) remains a significant challenge for deep neural networks, as it is prone to forgetting previously acquired knowledge. Several approaches have been proposed in the literature, such as experience rehearsal, regularization, and parameter isolation, to address this problem. Although almost zero forgetting can be achieved in task-incremental learning, class-incremental learning remains highly challenging due to the problem of inter-task class separation. Limited access to previous task data makes it difficult to discriminate between classes of current and previous tasks. To address this issue, we propose `Attention-Guided Incremental Learning' (AGILE), a novel rehearsal-based CL approach that incorporates compact task attention to effectively reduce interference between tasks. AGILE utilizes lightweight, learnable task projection vectors to transform the latent representations of a shared task attention module toward task distribution. Through extensive empirical evaluation, we show that AGILE significantly improves generalization performance by mitigating task interference and outperforming rehearsal-based approaches in several CL scenarios. Furthermore, AGILE can scale well to a large number of tasks with minimal overhead while remaining well-calibrated with reduced task-recency bias.
[ "['Prashant Bhat' 'Bharath Renjith' 'Elahe Arani' 'Bahram Zonooz']" ]
null
null
2405.13980
null
null
http://arxiv.org/pdf/2405.13980v1
2024-05-22T20:33:09Z
2024-05-22T20:33:09Z
Rank Reduction Autoencoders -- Enhancing interpolation on nonlinear manifolds
The efficiency of classical Autoencoders (AEs) is limited in many practical situations. When the latent space is reduced through autoencoders, feature extraction becomes possible. However, overfitting is a common issue, leading to "holes" in AEs' interpolation capabilities. On the other hand, increasing the latent dimension results in a better approximation with fewer non-linearly coupled features (e.g., Koopman theory or kPCA), but it doesn't necessarily lead to dimensionality reduction, which makes feature extraction problematic. As a result, interpolation using autoencoders becomes harder. In this work, we introduce the Rank Reduction Autoencoder (RRAE), an autoencoder with an enlarged latent space, which is constrained to have a small pre-specified number of dominant singular values (i.e., low rank). The latent space of RRAEs is large enough to enable accurate predictions while enabling feature extraction. As a result, the proposed autoencoder features a minimal-rank linear latent space. To achieve this, two formulations are presented, a strong one and a weak one, each building a reduced basis that accurately represents the latent space. The first formulation consists of a truncated SVD in the latent space, while the second adds a penalty term to the loss function. We show the efficiency of our formulations by using them for interpolation tasks and comparing the results to other autoencoders on both synthetic data and MNIST.
Jad Mounayer, Sebastian Rodriguez, Chady Ghnatios, Charbel Farhat, Francisco Chinesta
null
null
2405.13983
null
null
http://arxiv.org/pdf/2405.13983v1
2024-05-22T20:39:05Z
2024-05-22T20:39:05Z
DirectMultiStep: Direct Route Generation for Multi-Step Retrosynthesis
Traditional computer-aided synthesis planning (CASP) methods rely on iterative single-step predictions, leading to exponential search space growth that limits efficiency and scalability. We introduce a transformer-based model that directly generates multi-step synthetic routes as a single string by conditionally predicting each molecule based on all preceding ones. The model accommodates specific conditions such as the desired number of steps and starting materials, outperforming state-of-the-art methods on the PaRoutes dataset with a 2.2x improvement in Top-1 accuracy on the $n_1$ test set and a 3.3x improvement on the $n_5$ test set. It also successfully predicts routes for FDA-approved drugs not included in the training data, showcasing its generalization capabilities. While the current suboptimal diversity of the training set may impact performance on less common reaction types, our approach presents a promising direction towards fully automated retrosynthetic planning.
Yu Shee, Haote Li, Anton Morgunov, Victor Batista
null
null
2405.13987
null
null
http://arxiv.org/pdf/2405.13987v1
2024-05-22T20:50:17Z
2024-05-22T20:50:17Z
Analysis of Corrected Graph Convolutions
Machine learning for node classification on graphs is a prominent area driven by applications such as recommendation systems. State-of-the-art models often use multiple graph convolutions on the data, as empirical evidence suggests they can enhance performance. However, it has been shown, both empirically and theoretically, that too many graph convolutions can degrade performance significantly, a phenomenon known as oversmoothing. In this paper, we provide a rigorous theoretical analysis, based on the contextual stochastic block model (CSBM), of the performance of vanilla graph convolution from which we remove the principal eigenvector to avoid oversmoothing. We perform a spectral analysis for $k$ rounds of corrected graph convolutions, and we provide results for partial and exact classification. For partial classification, we show that each round of convolution can reduce the misclassification error exponentially up to a saturation level, after which performance does not worsen. For exact classification, we show that the separability threshold can be improved exponentially up to $O(\log{n}/\log\log{n})$ corrected convolutions.
Robert Wang, Aseem Baranwal, Kimon Fountoulakis
null
null
2405.13992
null
null
http://arxiv.org/pdf/2405.13992v1
2024-05-22T20:56:34Z
2024-05-22T20:56:34Z
Learning Cut Generating Functions for Integer Programming
The branch-and-cut algorithm is the method of choice to solve large scale integer programming problems in practice. A key ingredient of branch-and-cut is the use of cutting planes which are derived constraints that reduce the search space for an optimal solution. Selecting effective cutting planes to produce small branch-and-cut trees is a critical challenge in the branch-and-cut algorithm. Recent advances have employed a data-driven approach to select optimal cutting planes from a parameterized family, aimed at reducing the branch-and-bound tree size (in expectation) for a given distribution of integer programming instances. We extend this idea to the selection of the best cut generating function (CGF), which is a tool in the integer programming literature for generating a wide variety of cutting planes that generalize the well-known Gomory Mixed-Integer (GMI) cutting planes. We provide rigorous sample complexity bounds for the selection of an effective CGF from certain parameterized families that provably performs well for any specified distribution on the problem instances. Our empirical results show that the selected CGF can outperform the GMI cuts for certain distributions. Additionally, we explore the sample complexity of using neural networks for instance-dependent CGF selection.
[ "['Hongyu Cheng' 'Amitabh Basu']" ]
null
null
2405.13993
null
null
http://arxiv.org/pdf/2405.13993v1
2024-05-22T20:56:51Z
2024-05-22T20:56:51Z
AutoLCZ: Towards Automatized Local Climate Zone Mapping from Rule-Based Remote Sensing
Local climate zones (LCZs) established a standard classification system to categorize the landscape universe for improved urban climate studies. Existing LCZ mapping is guided by human interaction with geographic information systems (GIS) or modelled from remote sensing (RS) data. GIS-based methods do not scale to large areas, whereas RS-based methods leverage machine learning techniques to automatize LCZ classification from RS data. Yet, RS-based methods require huge amounts of manual labels for training. We propose a novel LCZ mapping framework, termed AutoLCZ, to extract the LCZ classification features from high-resolution RS modalities. We study the definition of numerical rules designed to mimic the LCZ definitions. Those rules model geometric and surface cover properties from LiDAR data. Correspondingly, we enable LCZ classification from RS data in a GIS-based scheme. The proposed AutoLCZ method has the potential to reduce the human labor needed to acquire accurate metadata. At the same time, AutoLCZ sheds light on the physical interpretability of RS-based methods. In a proof-of-concept for New York City (NYC), we leverage airborne LiDAR surveys to model 4 LCZ features to distinguish 10 LCZ types. The results indicate the potential of AutoLCZ as a promising avenue for large-scale LCZ mapping from RS data.
Chenying Liu, Hunsoo Song, Anamika Shreevastava, Conrad M Albrecht
null
null
2405.13994
null
null
http://arxiv.org/pdf/2405.13994v1
2024-05-22T20:56:57Z
2024-05-22T20:56:57Z
Practical $0.385$-Approximation for Submodular Maximization Subject to a Cardinality Constraint
Non-monotone constrained submodular maximization plays a crucial role in various machine learning applications. However, existing algorithms often struggle with a trade-off between approximation guarantees and practical efficiency. The current state-of-the-art is a recent $0.401$-approximation algorithm, but its computational complexity makes it highly impractical. The best practical algorithms for the problem only guarantee $1/e$-approximation. In this work, we present a novel algorithm for submodular maximization subject to a cardinality constraint that combines a guarantee of $0.385$-approximation with a low and practical query complexity of $O(n+k^2)$. Furthermore, we evaluate the empirical performance of our algorithm in experiments based on various machine learning applications, including Movie Recommendation, Image Summarization, and more. These experiments demonstrate the efficacy of our approach.
[ "['Murad Tukan' 'Loay Mualem' 'Moran Feldman']" ]
null
null
2405.13995
null
null
http://arxiv.org/abs/2405.13995v1
2024-05-22T21:05:35Z
2024-05-22T21:05:35Z
Leveraging World Events to Predict E-Commerce Consumer Demand under Anomaly
Consumer demand forecasting is of high importance for many e-commerce applications, including supply chain optimization, advertisement placement, and delivery speed optimization. However, reliable time series sales forecasting for e-commerce is difficult, especially during periods with many anomalies, as can often happen during pandemics, abnormal weather, or sports events. Although many time series algorithms have been applied to the task, prediction during anomalies still remains a challenge. In this work, we hypothesize that leveraging external knowledge found in world events can help overcome the challenge of prediction under anomalies. We mine a large repository of 40 years of world events and their textual representations. Further, we present a novel methodology based on transformers to construct an embedding of a day based on the relations of the day's events. Those embeddings are then used to forecast future consumer behavior. We empirically evaluate the methods over a large e-commerce products sales dataset, extracted from eBay, one of the world's largest online marketplaces. We show over numerous categories that our method outperforms state-of-the-art baselines during anomalies.
[ "['Dan Kalifa' 'Uriel Singer' 'Ido Guy' 'Guy D. Rosin' 'Kira Radinsky']" ]
null
null
2405.13997
null
null
http://arxiv.org/pdf/2405.13997v2
2024-06-01T05:29:27Z
2024-05-22T21:12:34Z
Sigmoid Gating is More Sample Efficient than Softmax Gating in Mixture of Experts
The softmax gating function is arguably the most popular choice in mixture of experts modeling. Despite its widespread use in practice, softmax gating may lead to unnecessary competition among experts, potentially causing the undesirable phenomenon of representation collapse due to its inherent structure. In response, the sigmoid gating function has been recently proposed as an alternative and has been demonstrated empirically to achieve superior performance. However, a rigorous examination of the sigmoid gating function is lacking in current literature. In this paper, we verify theoretically that sigmoid gating, in fact, enjoys a higher sample efficiency than softmax gating for the statistical task of expert estimation. Towards that goal, we consider a regression framework in which the unknown regression function is modeled as a mixture of experts, and study the rates of convergence of the least squares estimator in the over-specified case in which the number of experts fitted is larger than the true value. We show that two gating regimes naturally arise and, in each of them, we formulate identifiability conditions for the expert functions and derive the corresponding convergence rates. In both cases, we find that experts formulated as feed-forward networks with commonly used activations such as $\mathrm{ReLU}$ and $\mathrm{GELU}$ enjoy faster convergence rates under sigmoid gating than softmax gating. Furthermore, given the same choice of experts, we demonstrate that the sigmoid gating function requires a smaller sample size than its softmax counterpart to attain the same error of expert estimation and, therefore, is more sample efficient.
Huy Nguyen, Nhat Ho, Alessandro Rinaldo
null
null
2405.13998
null
null
http://arxiv.org/pdf/2405.13998v1
2024-05-22T21:13:23Z
2024-05-22T21:13:23Z
Bridging Operator Learning and Conditioned Neural Fields: A Unifying Perspective
Operator learning is an emerging area of machine learning which aims to learn mappings between infinite dimensional function spaces. Here we uncover a connection between operator learning architectures and conditioned neural fields from computer vision, providing a unified perspective for examining differences between popular operator learning models. We find that many commonly used operator learning models can be viewed as neural fields with conditioning mechanisms restricted to point-wise and/or global information. Motivated by this, we propose the Continuous Vision Transformer (CViT), a novel neural operator architecture that employs a vision transformer encoder and uses cross-attention to modulate a base field constructed with a trainable grid-based positional encoding of query coordinates. Despite its simplicity, CViT achieves state-of-the-art results across challenging benchmarks in climate modeling and fluid dynamics. Our contributions can be viewed as a first step towards adapting advanced computer vision architectures for building more flexible and accurate machine learning models in physical sciences.
[ "['Sifan Wang' 'Jacob H Seidman' 'Shyam Sankaran' 'Hanwen Wang'\n 'George J. Pappas' 'Paris Perdikaris']" ]
null
null
2405.14002
null
null
http://arxiv.org/pdf/2405.14002v1
2024-05-22T21:17:54Z
2024-05-22T21:17:54Z
Animal Behavior Analysis Methods Using Deep Learning: A Survey
Animal behavior serves as a reliable indicator of the adaptation of organisms to their environment and their overall well-being. Through rigorous observation of animal actions and interactions, researchers and observers can glean valuable insights into diverse facets of their lives, encompassing health, social dynamics, ecological relationships, and neuroethological dimensions. Although state-of-the-art deep learning models have demonstrated remarkable accuracy in classifying various forms of animal data, their adoption in animal behavior studies remains limited. This survey article endeavors to comprehensively explore deep learning architectures and strategies applied to the identification of animal behavior, spanning auditory, visual, and audiovisual methodologies. Furthermore, the manuscript scrutinizes extant animal behavior datasets, offering a detailed examination of the principal challenges confronting this research domain. The article culminates in a comprehensive discussion of key research directions within deep learning that hold potential for advancing the field of animal behavior studies.
[ "['Edoardo Fazzari' 'Donato Romano' 'Fabrizio Falchi' 'Cesare Stefanini']" ]
null
null
2405.14007
null
null
http://arxiv.org/pdf/2405.14007v1
2024-05-22T21:25:37Z
2024-05-22T21:25:37Z
A Practice in Enrollment Prediction with Markov Chain Models
Enrollment projection is a critical aspect of university management, guiding decisions related to resource allocation and revenue forecasting. However, despite its importance, there remains a lack of transparency regarding the methodologies utilized by many institutions. This paper presents an innovative approach to enrollment projection using Markov Chain modeling, drawing upon a case study conducted at Eastern Michigan University (EMU). Markov Chain modeling emerges as a promising approach for enrollment projection, offering precise predictions based on historical trends. This paper outlines the implementation of Enhanced Markov Chain modeling at EMU, detailing the methodology used to compute transition probabilities and evaluate model performance. Despite challenges posed by external uncertainties such as the COVID-19 pandemic, Markov Chain modeling has demonstrated impressive accuracy, with an average difference of less than 1 percent between predicted and actual enrollments. The paper concludes with a discussion of future directions and opportunities for collaboration among institutions.
[ "['Yan Zhao' 'Amy Otteson']" ]
null
null
2405.14008
null
null
http://arxiv.org/pdf/2405.14008v1
2024-05-22T21:34:26Z
2024-05-22T21:34:26Z
Bayesian Inverse Problems with Conditional Sinkhorn Generative Adversarial Networks in Least Volume Latent Spaces
Solving inverse problems in scientific and engineering fields has long been intriguing and holds great potential for many applications, yet most techniques still struggle to address issues such as high dimensionality, nonlinearity and model uncertainty inherent in these problems. Recently, generative models such as Generative Adversarial Networks (GANs) have shown great potential in approximating complex high dimensional conditional distributions and have paved the way for characterizing posterior densities in Bayesian inverse problems, yet the problems' high dimensionality and high nonlinearity often impede the model's training. In this paper we show how to tackle these issues with Least Volume, a novel unsupervised nonlinear dimension reduction method that can learn to represent the given datasets with the minimum number of latent variables while estimating their intrinsic dimensions. Once the low dimensional latent spaces are identified, efficient and accurate training of conditional generative models becomes feasible, resulting in a latent conditional GAN framework for posterior inference. We demonstrate the power of the proposed methodology on a variety of applications including inversion of parameters in systems of ODEs and high dimensional hydraulic conductivities in subsurface flow problems, and reveal the impact of the observables' and unobservables' intrinsic dimensions on inverse problems.
Qiuyi Chen, Panagiotis Tsilifis, Mark Fuge
null
null
2405.14009
null
null
http://arxiv.org/pdf/2405.14009v1
2024-05-22T21:35:56Z
2024-05-22T21:35:56Z
SlipStream: Adapting Pipelines for Distributed Training of Large DNNs Amid Failures
Training large Deep Neural Network (DNN) models requires thousands of GPUs for days or weeks at a time. At these scales, failures are frequent and can have a big impact on training throughput. Restoring performance using spare GPU servers becomes increasingly expensive as models grow. SlipStream is a system for efficient DNN training in the presence of failures, without using spare servers. It exploits the functional redundancy inherent in distributed training systems (servers hold the same model parameters across data-parallel groups) as well as the bubbles in the pipeline schedule within each data-parallel group. SlipStream dynamically re-routes the work of a failed server to its data-parallel peers, ensuring continuous training despite multiple failures. However, re-routing work leads to imbalances across pipeline stages that degrade training throughput. SlipStream introduces two optimizations that allow re-routed work to execute within bubbles of the original pipeline schedule. First, it decouples the backward pass computation into two phases. Second, it staggers the execution of the optimizer step across pipeline stages. Combined, these optimizations enable schedules that minimize or even eliminate training throughput degradation during failures. We describe a prototype for SlipStream and show that it achieves high training throughput under multiple failures, outperforming recent proposals for fault-tolerant training such as Oobleck and Bamboo by up to 1.46x and 1.64x, respectively.
Swapnil Gandhi, Mark Zhao, Athinagoras Skiadopoulos, Christos Kozyrakis
null
null
2405.14014
null
null
http://arxiv.org/pdf/2405.14014v3
2024-06-13T16:51:50Z
2024-05-22T21:48:17Z
RadarOcc: Robust 3D Occupancy Prediction with 4D Imaging Radar
The 3D occupancy-based perception pipeline has significantly advanced autonomous driving by capturing detailed scene descriptions and demonstrating strong generalizability across various object categories and shapes. Current methods predominantly rely on LiDAR or camera inputs for 3D occupancy prediction. These methods are susceptible to adverse weather conditions, limiting the all-weather deployment of self-driving cars. To improve perception robustness, we leverage the recent advances in automotive radars and introduce a novel approach that utilizes 4D imaging radar sensors for 3D occupancy prediction. Our method, RadarOcc, circumvents the limitations of sparse radar point clouds by directly processing the 4D radar tensor, thus preserving essential scene details. RadarOcc innovatively addresses the challenges associated with the voluminous and noisy 4D radar data by employing Doppler bins descriptors, sidelobe-aware spatial sparsification, and range-wise self-attention mechanisms. To minimize the interpolation errors associated with direct coordinate transformations, we also devise a spherical-based feature encoding followed by spherical-to-Cartesian feature aggregation. We benchmark various baseline methods based on distinct modalities on the public K-Radar dataset. The results demonstrate RadarOcc's state-of-the-art performance in radar-based 3D occupancy prediction and promising results even when compared with LiDAR- or camera-based methods. Additionally, we present qualitative evidence of the superior performance of 4D radar in adverse weather conditions and explore the impact of key pipeline components through ablation studies.
Fangqiang Ding, Xiangyu Wen, Lawrence Zhu, Yiming Li, Chris Xiaoxuan Lu
null
null
2405.14016
null
null
http://arxiv.org/pdf/2405.14016v2
2024-07-14T01:11:22Z
2024-05-22T21:49:28Z
Towards a Unified Framework for Evaluating Explanations
The challenge of creating interpretable models has been taken up by two main research communities: ML researchers primarily focused on lower-level explainability methods that suit the needs of engineers, and HCI researchers who have more heavily emphasized user-centered approaches often based on participatory design methods. This paper reviews how these communities have evaluated interpretability, identifying overlaps and semantic misalignments. We propose moving towards a unified framework of evaluation criteria and lay the groundwork for such a framework by articulating the relationships between existing criteria. We argue that explanations serve as mediators between models and stakeholders, whether for intrinsically interpretable models or opaque black-box models analyzed via post-hoc techniques. We further argue that useful explanations require both faithfulness and intelligibility. Explanation plausibility is a prerequisite for intelligibility, while stability is a prerequisite for explanation faithfulness. We illustrate these criteria, as well as specific evaluation methods, using examples from an ongoing study of an interpretable neural network for predicting a particular learner behavior.
[ "['Juan D. Pinto' 'Luc Paquette']" ]
null
null
2405.14018
null
null
http://arxiv.org/pdf/2405.14018v1
2024-05-22T21:52:12Z
2024-05-22T21:52:12Z
Watermarking Generative Tabular Data
In this paper, we introduce a simple yet effective tabular data watermarking mechanism with statistical guarantees. We show theoretically that the proposed watermark can be effectively detected while faithfully preserving data fidelity, and we also demonstrate its appealing robustness against additive noise attacks. The general idea is to achieve the watermarking through a strategic embedding based on simple data binning. Specifically, it divides the feature's value range into finely segmented intervals and embeds watermarks into selected "green list" intervals. To detect the watermarks, we develop a principled statistical hypothesis-testing framework with minimal assumptions: it remains valid as long as the underlying data distribution has a continuous density function. The watermarking efficacy is demonstrated through rigorous theoretical analysis and empirical validation, highlighting its utility in enhancing the security of synthetic and real-world datasets.
Hengzhi He, Peiyu Yu, Junpeng Ren, Ying Nian Wu, Guang Cheng
null
null
2405.14020
null
null
http://arxiv.org/pdf/2405.14020v1
2024-05-22T21:54:05Z
2024-05-22T21:54:05Z
Unlearning Information Bottleneck: Machine Unlearning of Systematic Patterns and Biases
Effective adaptation to distribution shifts in training data is pivotal for sustaining robustness in neural networks, especially when removing specific biases or outdated information, a process known as machine unlearning. Traditional approaches typically assume that data variations are random, which makes it difficult to adjust the model parameters accurately to remove patterns and characteristics from unlearned data. In this work, we present Unlearning Information Bottleneck (UIB), a novel information-theoretic framework designed to enhance the process of machine unlearning that effectively leverages the influence of systematic patterns and biases for parameter adjustment. By proposing a variational upper bound, we recalibrate the model parameters through a dynamic prior that integrates changes in data distribution with an affordable computational cost, allowing efficient and accurate removal of outdated or unwanted data patterns and biases. Our experiments across various datasets, models, and unlearning methods demonstrate that our approach effectively removes systematic patterns and biases while maintaining the performance of models post-unlearning.
[ "['Ling Han' 'Hao Huang' 'Dustin Scheinost' 'Mary-Anne Hartley'\n 'María Rodríguez Martínez']" ]
null
null
2405.14021
null
null
http://arxiv.org/pdf/2405.14021v1
2024-05-22T21:54:12Z
2024-05-22T21:54:12Z
A Study of Posterior Stability for Time-Series Latent Diffusion
Latent diffusion has shown promising results in image generation and permits efficient sampling. However, this framework might suffer from the problem of posterior collapse when applied to time series. In this paper, we conduct an impact analysis of this problem. With a theoretical insight, we first explain that posterior collapse reduces latent diffusion to a VAE, making it less expressive. Then, we introduce the notion of dependency measures, showing that the latent variable sampled from the diffusion model loses control of the generation process in this situation and that latent diffusion exhibits dependency illusion in the case of shuffled time series. We also analyze the causes of posterior collapse and introduce a new framework based on this analysis, which addresses the problem and supports a more expressive prior distribution. Our experiments on various real-world time-series datasets demonstrate that our new model maintains a stable posterior and outperforms the baselines in time series generation.
[ "['Yangming Li' 'Mihaela van der Schaar']" ]
null
null
2405.14023
null
null
http://arxiv.org/pdf/2405.14023v1
2024-05-22T21:59:22Z
2024-05-22T21:59:22Z
WordGame: Efficient & Effective LLM Jailbreak via Simultaneous Obfuscation in Query and Response
The recent breakthrough in large language models (LLMs) such as ChatGPT has revolutionized production processes at an unprecedented pace. Alongside this progress also comes mounting concerns about LLMs' susceptibility to jailbreaking attacks, which leads to the generation of harmful or unsafe content. While safety alignment measures have been implemented in LLMs to mitigate existing jailbreak attempts and force them to become increasingly complicated, it is still far from perfect. In this paper, we analyze the common pattern of the current safety alignment and show that it is possible to exploit such patterns for jailbreaking attacks by simultaneous obfuscation in queries and responses. Specifically, we propose WordGame attack, which replaces malicious words with word games to break down the adversarial intent of a query and encourage benign content regarding the games to precede the anticipated harmful content in the response, creating a context that is hardly covered by any corpus used for safety alignment. Extensive experiments demonstrate that WordGame attack can break the guardrails of the current leading proprietary and open-source LLMs, including the latest Claude-3, GPT-4, and Llama-3 models. Further ablation studies on such simultaneous obfuscation in query and response provide evidence of the merits of the attack strategy beyond an individual attack.
[ "['Tianrong Zhang' 'Bochuan Cao' 'Yuanpu Cao' 'Lu Lin' 'Prasenjit Mitra'\n 'Jinghui Chen']" ]
null
null
2405.14033
null
null
http://arxiv.org/pdf/2405.14033v1
2024-05-22T22:08:13Z
2024-05-22T22:08:13Z
Adversarial Training of Two-Layer Polynomial and ReLU Activation Networks via Convex Optimization
Training neural networks which are robust to adversarial attacks remains an important problem in deep learning, especially as heavily overparameterized models are adopted in safety-critical settings. Drawing from recent work which reformulates the training problems for two-layer ReLU and polynomial activation networks as convex programs, we devise a convex semidefinite program (SDP) for adversarial training of polynomial activation networks via the S-procedure. We also derive a convex SDP to compute the minimum distance from a correctly classified example to the decision boundary of a polynomial activation network. Adversarial training for two-layer ReLU activation networks has been explored in the literature, but, in contrast to prior work, we present a scalable approach which is compatible with standard machine learning libraries and GPU acceleration. The adversarial training SDP for polynomial activation networks leads to large increases in robust test accuracy against $\ell^\infty$ attacks on the Breast Cancer Wisconsin dataset from the UCI Machine Learning Repository. For two-layer ReLU networks, we leverage our scalable implementation to retrain the final two fully connected layers of a Pre-Activation ResNet-18 model on the CIFAR-10 dataset. Our 'robustified' model achieves higher clean and robust test accuracies than the same architecture trained with sharpness-aware minimization.
Daniel Kuelbs, Sanjay Lall, Mert Pilanci
null
null
2405.14038
null
null
http://arxiv.org/pdf/2405.14038v1
2024-05-22T22:19:12Z
2024-05-22T22:19:12Z
FLIPHAT: Joint Differential Privacy for High Dimensional Sparse Linear Bandits
High dimensional sparse linear bandits serve as an efficient model for sequential decision-making problems (e.g. personalized medicine), where high dimensional features (e.g. genomic data) on the users are available, but only a small subset of them are relevant. Motivated by data privacy concerns in these applications, we study the joint differentially private high dimensional sparse linear bandits, where both rewards and contexts are considered as private data. First, to quantify the cost of privacy, we derive a lower bound on the regret achievable in this setting. To further address the problem, we design a computationally efficient bandit algorithm, \textbf{F}orgetfu\textbf{L} \textbf{I}terative \textbf{P}rivate \textbf{HA}rd \textbf{T}hresholding (FLIPHAT). Along with doubling of episodes and episodic forgetting, FLIPHAT deploys a variant of the Noisy Iterative Hard Thresholding (N-IHT) algorithm as a sparse linear regression oracle to ensure both privacy and regret-optimality. We show that FLIPHAT achieves optimal regret up to logarithmic factors. We analyze the regret by providing a novel refined analysis of the estimation error of N-IHT, which is of parallel interest.
Sunrit Chakraborty, Saptarshi Roy, Debabrota Basu
null
null
2405.14039
null
null
http://arxiv.org/pdf/2405.14039v1
2024-05-22T22:22:25Z
2024-05-22T22:22:25Z
Trajectory Volatility for Out-of-Distribution Detection in Mathematical Reasoning
Real-world data deviating from the independent and identically distributed (i.i.d.) assumption of in-distribution training data poses security threats to deep networks, thus advancing out-of-distribution (OOD) detection algorithms. Detection methods in generative language models (GLMs) mainly focus on uncertainty estimation and embedding distance measurement, with the latter proven to be most effective in traditional linguistic tasks like summarization and translation. However, another complex generative scenario, mathematical reasoning, poses significant challenges to embedding-based methods due to the high density of its output space; at the same time, this feature causes larger discrepancies in the embedding shift trajectories of different samples in latent space. Hence, we propose a trajectory-based method, TV score, which uses trajectory volatility for OOD detection in mathematical reasoning. Experiments show that our method outperforms all traditional algorithms on GLMs under mathematical reasoning scenarios and can be extended to more applications with high-density features in output spaces, such as multiple-choice questions.
Yiming Wang, Pei Zhang, Baosong Yang, Derek F. Wong, Zhuosheng Zhang, Rui Wang
null
null
2405.14045
null
null
http://arxiv.org/pdf/2405.14045v1
2024-05-22T22:32:04Z
2024-05-22T22:32:04Z
Learning rigid-body simulators over implicit shapes for large-scale scenes and vision
Simulating large scenes with many rigid objects is crucial for a variety of applications, such as robotics, engineering, film and video games. Rigid interactions are notoriously hard to model: small changes to the initial state or the simulation parameters can lead to large changes in the final state. Recently, learned simulators based on graph networks (GNNs) were developed as an alternative to hand-designed simulators like MuJoCo and PyBullet. They are able to accurately capture dynamics of real objects directly from real-world observations. However, current state-of-the-art learned simulators operate on meshes and scale poorly to scenes with many objects or detailed shapes. Here we present SDF-Sim, the first learned rigid-body simulator designed for scale. We use learned signed-distance functions (SDFs) to represent the object shapes and to speed up distance computation. We design the simulator to leverage SDFs and avoid the fundamental bottleneck of the previous simulators associated with collision detection. For the first time in the literature, we demonstrate that we can scale the GNN-based simulators to scenes with hundreds of objects and up to 1.1 million nodes, where mesh-based approaches run out of memory. Finally, we show that SDF-Sim can be applied to real world scenes by extracting SDFs from multi-view images.
Yulia Rubanova, Tatiana Lopez-Guevara, Kelsey R. Allen, William F. Whitney, Kimberly Stachenfeld, Tobias Pfaff
null
null
2405.14049
null
null
http://arxiv.org/pdf/2405.14049v1
2024-05-22T22:39:29Z
2024-05-22T22:39:29Z
Particle physics DL-simulation with control over generated data properties
The development of collision simulations at the Large Hadron Collider at CERN has sparked research into innovative methods aimed at reducing costs and shortening the time needed for simulation, going beyond conventional approaches based on Monte Carlo methods. Deep generative methods, including VAEs, GANs, and diffusion models, have been used for this purpose. Although they are much faster and simpler than standard approaches, they do not always keep high fidelity of the simulated data. This work aims to mitigate this issue by providing an alternative to currently employed algorithms, introducing a mechanism of control over the generated data properties. To achieve this, we extend the recently introduced CorrVAE, which enables user-defined parameter manipulation of the generated output. We adapt the model to the problem of particle physics simulation. The proposed solution achieved promising results, demonstrating control over the parameters of the generated output and constituting an alternative for simulating the ZDC calorimeter in the ALICE experiment at CERN.
Karol Rogoziński, Jan Dubiński, Przemysław Rokita, Kamil Deja
null
null
2405.14051
null
null
http://arxiv.org/pdf/2405.14051v1
2024-05-22T22:41:56Z
2024-05-22T22:41:56Z
A Concentration Inequality for Maximum Mean Discrepancy (MMD)-based Statistics and Its Application in Generative Models
Maximum Mean Discrepancy (MMD) is a probability metric that has found numerous applications in machine learning. In this work, we focus on its application in generative models, including the minimum MMD estimator, Generative Moment Matching Network (GMMN), and Generative Adversarial Network (GAN). In these cases, MMD is part of an objective function in a minimization or min-max optimization problem. Even if its empirical performance is competitive, the consistency and convergence rate analysis of the corresponding MMD-based estimators has yet to be carried out. We propose a uniform concentration inequality for a class of Maximum Mean Discrepancy (MMD)-based estimators, that is, a maximum deviation bound of empirical MMD values over a collection of generated distributions and adversarially learned kernels. Here, our inequality serves as an efficient tool in the theoretical analysis for MMD-based generative models. As elaborating examples, we applied our main result to provide the generalization error bounds for the MMD-based estimators in the context of the minimum MMD estimator and MMD GAN.
[ "['Yijin Ni' 'Xiaoming Huo']" ]
null
null
2405.14058
null
null
http://arxiv.org/pdf/2405.14058v1
2024-05-22T23:06:34Z
2024-05-22T23:06:34Z
Formally Verifying Deep Reinforcement Learning Controllers with Lyapunov Barrier Certificates
Deep reinforcement learning (DRL) is a powerful machine learning paradigm for generating agents that control autonomous systems. However, the "black box" nature of DRL agents limits their deployment in real-world safety-critical applications. A promising approach for providing strong guarantees on an agent's behavior is to use Neural Lyapunov Barrier (NLB) certificates, which are learned functions over the system whose properties indirectly imply that an agent behaves as desired. However, NLB-based certificates are typically difficult to learn and even more difficult to verify, especially for complex systems. In this work, we present a novel method for training and verifying NLB-based certificates for discrete-time systems. Specifically, we introduce a technique for certificate composition, which simplifies the verification of highly-complex systems by strategically designing a sequence of certificates. When jointly verified with neural network verification engines, these certificates provide a formal guarantee that a DRL agent both achieves its goals and avoids unsafe behavior. Furthermore, we introduce a technique for certificate filtering, which significantly simplifies the process of producing formally verified certificates. We demonstrate the merits of our approach with a case study on providing safety and liveness guarantees for a DRL-controlled spacecraft.
[ "['Udayan Mandal' 'Guy Amir' 'Haoze Wu' 'Ieva Daukantas'\n 'Fletcher Lee Newell' 'Umberto J. Ravaioli' 'Baoluo Meng'\n 'Michael Durling' 'Milan Ganai' 'Tobey Shim' 'Guy Katz' 'Clark Barrett']" ]
null
null
2405.14060
null
null
http://arxiv.org/pdf/2405.14060v1
2024-05-22T23:09:57Z
2024-05-22T23:09:57Z
Probabilistic Inference in the Era of Tensor Networks and Differential Programming
Probabilistic inference is a fundamental task in modern machine learning. Recent advances in tensor network (TN) contraction algorithms have enabled the development of better exact inference methods. However, many common inference tasks in probabilistic graphical models (PGMs) still lack corresponding TN-based adaptations. In this work, we advance the connection between PGMs and TNs by formulating and implementing tensor-based solutions for the following inference tasks: (i) computing the partition function, (ii) computing the marginal probability of sets of variables in the model, (iii) determining the most likely assignment to a set of variables, and (iv) the same as (iii) but after having marginalized a different set of variables. We also present a generalized method for generating samples from a learned probability distribution. Our work is motivated by recent technical advances in the fields of quantum circuit simulation, quantum many-body physics, and statistical physics. Through an experimental evaluation, we demonstrate that the integration of these quantum technologies with a series of algorithms introduced in this study significantly improves the effectiveness of existing methods for solving probabilistic inference tasks.
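A toy NumPy sketch of tasks (i) and (ii) for a three-variable chain PGM, with random factors as stand-ins: both reduce to single tensor contractions.

import numpy as np

# Pairwise factors of a 3-variable chain PGM: p(a,b,c) proportional to f1(a,b) f2(b,c)
rng = np.random.default_rng(0)
f1, f2 = rng.random((2, 2)), rng.random((2, 2))

# (i) Partition function Z = sum over a,b,c of f1[a,b] * f2[b,c]
Z = np.einsum('ab,bc->', f1, f2)

# (ii) Marginal p(b): contract out a and c, then normalize
pb = np.einsum('ab,bc->b', f1, f2) / Z
print(Z, pb)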
[ "['Martin Roa-Villescas' 'Xuanzhao Gao' 'Sander Stuijk' 'Henk Corporaal'\n 'Jin-Guo Liu']" ]
null
null
2405.14061
null
null
http://arxiv.org/pdf/2405.14061v1
2024-05-22T23:18:58Z
2024-05-22T23:18:58Z
Meanings and Feelings of Large Language Models: Observability of Latent States in Generative AI
We tackle the question of whether Large Language Models (LLMs), viewed as dynamical systems with state evolving in the embedding space of symbolic tokens, are observable. That is, whether there exist multiple 'mental' state trajectories that yield the same sequence of generated tokens, or sequences that belong to the same Nerode equivalence class ('meaning'). If not observable, mental state trajectories ('experiences') evoked by an input ('perception') or by feedback from the model's own state ('thoughts') could remain self-contained and evolve unbeknown to the user while being potentially accessible to the model provider. Such "self-contained experiences evoked by perception or thought" are akin to what the American Psychological Association (APA) defines as 'feelings'. Beyond the lexical curiosity, we show that current LLMs implemented by autoregressive Transformers cannot have 'feelings' according to this definition: The set of state trajectories indistinguishable from the tokenized output is a singleton. But if there are 'system prompts' not visible to the user, then the set of indistinguishable trajectories becomes non-trivial, and there can be multiple state trajectories that yield the same verbalized output. We prove these claims analytically, and show examples of modifications to standard LLMs that engender such 'feelings.' Our analysis sheds light on possible designs that would enable a model to perform non-trivial computation that is not visible to the user, as well as on controls that the provider of services using the model could take to prevent unintended behavior.
[ "['Tian Yu Liu' 'Stefano Soatto' 'Matteo Marchi' 'Pratik Chaudhari'\n 'Paulo Tabuada']" ]
null
null
2405.14062
null
null
http://arxiv.org/pdf/2405.14062v1
2024-05-22T23:21:15Z
2024-05-22T23:21:15Z
ChatScene: Knowledge-Enabled Safety-Critical Scenario Generation for Autonomous Vehicles
We present ChatScene, a Large Language Model (LLM)-based agent that leverages the capabilities of LLMs to generate safety-critical scenarios for autonomous vehicles. Given unstructured language instructions, the agent first generates textually described traffic scenarios using LLMs. These scenario descriptions are subsequently broken down into several sub-descriptions for specified details such as behaviors and locations of vehicles. The agent then distinctively transforms the textually described sub-scenarios into domain-specific languages, which then generate actual code for prediction and control in simulators, facilitating the creation of diverse and complex scenarios within the CARLA simulation environment. A key part of our agent is a comprehensive knowledge retrieval component, which efficiently translates specific textual descriptions into corresponding domain-specific code snippets by training a knowledge database containing the scenario description and code pairs. Extensive experimental results underscore the efficacy of ChatScene in improving the safety of autonomous vehicles. For instance, the scenarios generated by ChatScene show a 15% increase in collision rates compared to state-of-the-art baselines when tested against different reinforcement learning-based ego vehicles. Furthermore, we show that by using our generated safety-critical scenarios to fine-tune different RL-based autonomous driving models, they can achieve a 9% reduction in collision rates, surpassing current SOTA methods. ChatScene effectively bridges the gap between textual descriptions of traffic scenarios and practical CARLA simulations, providing a unified way to conveniently generate safety-critical scenarios for safety testing and improvement for AVs.
[ "['Jiawei Zhang' 'Chejian Xu' 'Bo Li']" ]
null
null
2405.14064
null
null
http://arxiv.org/pdf/2405.14064v1
2024-05-22T23:28:24Z
2024-05-22T23:28:24Z
Building a stable classifier with the inflated argmax
We propose a new framework for algorithmic stability in the context of multiclass classification. In practice, classification algorithms often operate by first assigning a continuous score (for instance, an estimated probability) to each possible label, then taking the maximizer -- i.e., selecting the class that has the highest score. A drawback of this type of approach is that it is inherently unstable, meaning that it is very sensitive to slight perturbations of the training data, since taking the maximizer is discontinuous. Motivated by this challenge, we propose a pipeline for constructing stable classifiers from data, using bagging (i.e., resampling and averaging) to produce stable continuous scores, and then using a stable relaxation of argmax, which we call the "inflated argmax," to convert these scores to a set of candidate labels. The resulting stability guarantee places no distributional assumptions on the data, does not depend on the number of classes or dimensionality of the covariates, and holds for any base classifier. Using a common benchmark data set, we demonstrate that the inflated argmax provides necessary protection against unstable classifiers, without loss of accuracy.
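A minimal sketch of the two-stage pipeline in NumPy, assuming a simple epsilon-margin form of the set-valued relaxation (the paper's inflated argmax is a specific, theoretically grounded construction):

import numpy as np

def inflated_argmax(scores, eps=0.05):
    # Set-valued relaxation of argmax: return every label whose score is
    # within eps of the maximum, so near-ties do not flip the output.
    return np.flatnonzero(scores >= scores.max() - eps)

# Stage 1 (bagging): average class scores over bootstrap-trained models.
bagged = np.mean([[0.30, 0.38, 0.32],
                  [0.28, 0.35, 0.37]], axis=0)
print(inflated_argmax(bagged))   # [1 2]: both near-maximal labels are returned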
[ "['Jake A. Soloff' 'Rina Foygel Barber' 'Rebecca Willett']" ]
null
null
2405.14066
null
null
http://arxiv.org/pdf/2405.14066v1
2024-05-22T23:45:33Z
2024-05-22T23:45:33Z
Online Classification with Predictions
We study online classification when the learner has access to predictions about future examples. We design an online learner whose expected regret is never worse than the worst-case regret, gracefully improves with the quality of the predictions, and can be significantly better than the worst-case regret when the predictions of future examples are accurate. As a corollary, we show that if the learner is always guaranteed to observe data where future examples are easily predictable, then online learning can be as easy as transductive online learning. Our results complement recent work in online algorithms with predictions and smoothed online classification, which go beyond a worst-case analysis by using machine-learned predictions and distributional assumptions, respectively.
[ "['Vinod Raman' 'Ambuj Tewari']" ]
null
null
2405.14073
null
null
http://arxiv.org/pdf/2405.14073v1
2024-05-23T00:35:23Z
2024-05-23T00:35:23Z
PEAC: Unsupervised Pre-training for Cross-Embodiment Reinforcement Learning
Designing generalizable agents capable of adapting to diverse embodiments has attracted significant attention in Reinforcement Learning (RL), which is critical for deploying RL agents in various real-world applications. Previous Cross-Embodiment RL approaches have focused on transferring knowledge across embodiments within specific tasks. These methods often result in knowledge tightly coupled with those tasks and fail to adequately capture the distinct characteristics of different embodiments. To address this limitation, we introduce the notion of Cross-Embodiment Unsupervised RL (CEURL), which leverages unsupervised learning to enable agents to acquire embodiment-aware and task-agnostic knowledge through online interactions within reward-free environments. We formulate CEURL as a novel Controlled Embodiment Markov Decision Process (CE-MDP) and systematically analyze CEURL's pre-training objectives under CE-MDP. Based on these analyses, we develop a novel algorithm Pre-trained Embodiment-Aware Control (PEAC) for handling CEURL, incorporating an intrinsic reward function specifically designed for cross-embodiment pre-training. PEAC not only provides an intuitive optimization strategy for cross-embodiment pre-training but also can integrate flexibly with existing unsupervised RL methods, facilitating cross-embodiment exploration and skill discovery. Extensive experiments in both simulated (e.g., DMC and Robosuite) and real-world environments (e.g., legged locomotion) demonstrate that PEAC significantly improves adaptation performance and cross-embodiment generalization, demonstrating its effectiveness in overcoming the unique challenges of CEURL.
[ "['Chengyang Ying' 'Zhongkai Hao' 'Xinning Zhou' 'Xuezhou Xu' 'Hang Su'\n 'Xingxing Zhang' 'Jun Zhu']" ]
null
null
2405.14075
null
null
http://arxiv.org/pdf/2405.14075v1
2024-05-23T00:40:43Z
2024-05-23T00:40:43Z
$T^2$ of Thoughts: Temperature Tree Elicits Reasoning in Large Language Models
Large Language Models (LLMs) have emerged as powerful tools in artificial intelligence, especially in complex decision-making scenarios, but their static problem-solving strategies often limit their adaptability to dynamic environments. We explore the enhancement of reasoning capabilities in LLMs through Temperature Tree ($T^2$) prompting via Particle Swarm Optimization, termed as $T^2$ of Thoughts ($T^2oT$). The primary focus is on enhancing decision-making processes by dynamically adjusting search parameters, especially temperature, to improve accuracy without increasing computational demands. We empirically validate that our hybrid $T^2oT$ approach yields enhancements in single-solution accuracy, multi-solution generation, and text generation quality. Our findings suggest that while dynamic search depth adjustments based on temperature can yield mixed results, a fixed search depth, when coupled with adaptive capabilities of $T^2oT$, provides a more reliable and versatile problem-solving strategy. This work highlights the potential for future explorations in optimizing algorithmic interactions with foundational language models, particularly illustrated by our development for the Game of 24 and Creative Writing tasks.
[ "['Chengkun Cai' 'Xu Zhao' 'Yucheng Du' 'Haoliang Liu' 'Lei Li']" ]
null
null
2405.14078
null
null
http://arxiv.org/pdf/2405.14078v1
2024-05-23T00:52:38Z
2024-05-23T00:52:38Z
A finite time analysis of distributed Q-learning
Multi-agent reinforcement learning (MARL) has witnessed a remarkable surge in interest, fueled by the empirical success achieved in applications of single-agent reinforcement learning (RL). In this study, we consider a distributed Q-learning scenario, wherein a number of agents cooperatively solve a sequential decision making problem without access to the central reward function, which is an average of the local rewards. In particular, we study a finite-time analysis of a distributed Q-learning algorithm, and provide a new sample complexity result of $\tilde{\mathcal{O}}\left( \min\left\{ \frac{1}{\epsilon^2}\frac{t_{\text{mix}}}{(1-\gamma)^6 d_{\min}^4}, \frac{1}{\epsilon}\frac{\sqrt{|\mathcal{S}||\mathcal{A}|}}{(1-\sigma_2(\boldsymbol{W}))(1-\gamma)^4 d_{\min}^3} \right\}\right)$ under a tabular lookup-table representation.
[ "['Han-Dong Lim' 'Donghwan Lee']" ]
null
null
2405.14079
null
null
http://arxiv.org/pdf/2405.14079v1
2024-05-23T00:59:00Z
2024-05-23T00:59:00Z
Advancing Transportation Mode Share Analysis with Built Environment: Deep Hybrid Models with Urban Road Network
Transportation mode share analysis is important to various real-world transportation tasks as it helps researchers understand the travel behaviors and choices of passengers. A typical example is the prediction of communities' travel mode share by accounting for their sociodemographics like age, income, etc., and travel modes' attributes (e.g. travel cost and time). However, there have been only limited efforts to integrate the structure of the urban built environment, e.g., road networks, into mode share models to capture the impacts of the built environment. This task usually requires manual feature engineering or prior knowledge of the urban design features. In this study, we propose deep hybrid models (DHM), which directly combine road networks and sociodemographic features as inputs for travel mode share analysis. Using graph embedding (GE) techniques, we enhance travel demand models with a more powerful representation of urban structures. In experiments of mode share prediction in Chicago, results demonstrate that DHM can provide valuable spatial insights into the sociodemographic structure, improving the performance of travel demand models in estimating different mode shares at the city level. Specifically, DHM improves the results by more than 20% while retaining the interpretation power of the choice models, demonstrating its superiority in interpretability, prediction accuracy, and geographical insights.
[ "['Dingyi Zhuang' 'Qingyi Wang' 'Yunhan Zheng' 'Xiaotong Guo'\n 'Shenhao Wang' 'Haris N Koutsopoulos' 'Jinhua Zhao']" ]
null
null
2405.14082
null
null
http://arxiv.org/pdf/2405.14082v1
2024-05-23T01:06:05Z
2024-05-23T01:06:05Z
Exclusively Penalized Q-learning for Offline Reinforcement Learning
Constraint-based offline reinforcement learning (RL) involves policy constraints or imposing penalties on the value function to mitigate overestimation errors caused by distributional shift. This paper focuses on a limitation in existing offline RL methods with a penalized value function, indicating the potential for underestimation bias due to unnecessary bias introduced in the value function. To address this concern, we propose Exclusively Penalized Q-learning (EPQ), which reduces estimation bias in the value function by selectively penalizing states that are prone to inducing estimation errors. Numerical results show that our method significantly reduces underestimation bias and improves performance in various offline control tasks compared to other offline RL methods.
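A schematic NumPy sketch of the exclusive-penalty idea, with a hypothetical binary mask marking penalty-eligible states (the paper's criterion for which states induce estimation errors is more involved):

import numpy as np

def epq_target(r, q_next, penalty, mask, gamma=0.99):
    # Penalized Bellman target: subtract the pessimism penalty only at
    # states flagged by the mask; elsewhere the target is left untouched,
    # avoiding the underestimation bias of a blanket penalty.
    return r + gamma * (q_next - penalty * mask)

r, q_next = np.array([1.0, 1.0]), np.array([5.0, 5.0])
mask = np.array([1.0, 0.0])           # only the first state is penalized
print(epq_target(r, q_next, penalty=2.0, mask=mask))   # [3.97 5.95]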
[ "['Junghyuk Yeom' 'Yonghyeon Jo' 'Jungmo Kim' 'Sanghyeon Lee'\n 'Seungyul Han']" ]
null
null
2405.14088
null
null
http://arxiv.org/pdf/2405.14088v1
2024-05-23T01:32:25Z
2024-05-23T01:32:25Z
High-dimensional Learning with Noisy Labels
This paper provides theoretical insights into high-dimensional binary classification with class-conditional noisy labels. Specifically, we study the behavior of a linear classifier with a label noisiness aware loss function, when both the dimension of data $p$ and the sample size $n$ are large and comparable. Relying on random matrix theory by supposing a Gaussian mixture data model, the performance of the linear classifier when $p, n \to \infty$ is shown to converge towards a limit, involving scalar statistics of the data. Importantly, our findings show that the low-dimensional intuitions to handle label noise do not hold in high-dimension, in the sense that the optimal classifier in low-dimension dramatically fails in high-dimension. Based on our derivations, we design an optimized method that is shown to be provably more efficient in handling noisy labels in high dimensions. Our theoretical conclusions are further confirmed by experiments on real datasets, where we show that our optimized approach outperforms the considered baselines.
[ "['Aymane El Firdoussi' 'Mohamed El Amine Seddik']" ]
null
null
2405.14089
null
null
http://arxiv.org/pdf/2405.14089v1
2024-05-23T01:34:12Z
2024-05-23T01:34:12Z
Improved Canonicalization for Model Agnostic Equivariance
This work introduces a novel approach to achieving architecture-agnostic equivariance in deep learning, particularly addressing the limitations of traditional equivariant architectures and the inefficiencies of the existing architecture-agnostic methods. Building equivariant models using traditional methods requires designing equivariant versions of existing models and training them from scratch, a process that is both impractical and resource-intensive. Canonicalization has emerged as a promising alternative for inducing equivariance without altering model architecture, but it suffers from the need for highly expressive and expensive equivariant networks to learn canonical orientations accurately. We propose a new method that employs any non-equivariant network for canonicalization. Our method uses contrastive learning to efficiently learn a unique canonical orientation and offers more flexibility for the choice of canonicalization network. We empirically demonstrate that this approach outperforms existing methods in achieving equivariance for large pretrained models and significantly speeds up the canonicalization process, making it up to 2 times faster.
[ "['Siba Smarak Panigrahi' 'Arnab Kumar Mondal']" ]
null
null
2405.14090
null
null
http://arxiv.org/pdf/2405.14090v1
2024-05-23T01:34:21Z
2024-05-23T01:34:21Z
Actively Learning Combinatorial Optimization Using a Membership Oracle
We consider solving a combinatorial optimization problem with an unknown linear constraint using a membership oracle that, given a solution, determines whether it is feasible or infeasible with absolute certainty. The goal of the decision maker is to find the best possible solution subject to a budget on the number of oracle calls. Inspired by active learning based on Support Vector Machines (SVMs), we adapt a classical framework in order to solve the problem by learning and exploiting a surrogate linear constraint. The resulting new framework includes training a linear separator on the labeled points and selecting new points to be labeled, which is achieved by applying a sampling strategy and solving a 0-1 integer linear program. Following the active learning literature, one can consider using SVM as a linear classifier and the information-based sampling strategy known as Simple margin. We improve on both sides: we propose an alternative sampling strategy based on mixed-integer quadratic programming and a linear separation method inspired by an algorithm for convex optimization in the oracle model. We conduct experiments on the pure knapsack problem and on a college study plan problem from the literature to show how different linear separation methods and sampling strategies influence the quality of the results in terms of objective value.
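A compact sketch of the classical baseline the paper improves on, using scikit-learn's LinearSVC as the separator and Simple margin sampling; the hidden constraint, the seeding, and the budget are toy assumptions:

import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
w_true, b_true = np.array([1.0, 2.0]), 1.0
oracle = lambda x: float(x @ w_true <= b_true)   # feasibility with certainty

X = rng.uniform(-1, 1, size=(200, 2))            # candidate solutions
X[0], X[1] = [-0.9, -0.9], [0.9, 0.9]            # seed one feasible, one infeasible
labeled, y = [0, 1], [oracle(X[0]), oracle(X[1])]

for _ in range(20):                              # oracle-call budget
    clf = LinearSVC(C=10.0).fit(X[labeled], y)   # surrogate linear constraint
    unl = [i for i in range(len(X)) if i not in labeled]
    # Simple margin: query the unlabeled point closest to the current separator.
    j = unl[int(np.argmin(np.abs(clf.decision_function(X[unl]))))]
    labeled.append(j); y.append(oracle(X[j]))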
[ "['Rosario Messana' 'Rui Chen' 'Andrea Lodi']" ]
null
null
2405.14094
null
null
http://arxiv.org/pdf/2405.14094v2
2024-05-26T23:29:11Z
2024-05-23T01:48:32Z
Attending to Topological Spaces: The Cellular Transformer
Topological Deep Learning seeks to enhance the predictive performance of neural network models by harnessing topological structures in input data. Topological neural networks operate on spaces such as cell complexes and hypergraphs, that can be seen as generalizations of graphs. In this work, we introduce the Cellular Transformer (CT), a novel architecture that generalizes graph-based transformers to cell complexes. First, we propose a new formulation of the usual self- and cross-attention mechanisms, tailored to leverage incidence relations in cell complexes, e.g., edge-face and node-edge relations. Additionally, we propose a set of topological positional encodings specifically designed for cell complexes. By transforming three graph datasets into cell complex datasets, our experiments reveal that CT not only achieves state-of-the-art performance, but it does so without the need for more complex enhancements such as virtual nodes, in-domain structural encodings, or graph rewiring.
[ "['Rubén Ballester' 'Pablo Hernández-García' 'Mathilde Papillon'\n 'Claudio Battiloro' 'Nina Miolane' 'Tolga Birdal' 'Carles Casacuberta'\n 'Sergio Escalera' 'Mustafa Hajij']" ]
null
null
2405.14096
null
null
http://arxiv.org/pdf/2405.14096v1
2024-05-23T01:52:54Z
2024-05-23T01:52:54Z
Newton Informed Neural Operator for Computing Multiple Solutions of Nonlinear Partial Differential Equations
Solving nonlinear partial differential equations (PDEs) with multiple solutions using neural networks has found widespread applications in various fields such as physics, biology, and engineering. However, classical neural network methods for solving nonlinear PDEs, such as Physics-Informed Neural Networks (PINN), Deep Ritz methods, and DeepONet, often encounter challenges when confronted with the presence of multiple solutions inherent in the nonlinear problem. These methods may encounter ill-posedness issues. In this paper, we propose a novel approach called the Newton Informed Neural Operator, which builds upon existing neural network techniques to tackle nonlinearities. Our method combines classical Newton methods, addressing well-posed problems, and efficiently learns multiple solutions in a single learning process while requiring fewer supervised data points compared to existing neural network methods.
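The role of the Newton component can be seen in a scalar toy analogue (the paper learns the Newton update with a neural operator for PDEs): different initializations of the same iteration converge to different solutions of one nonlinear equation.

import numpy as np

def newton(f, fprime, u0, iters=30):
    u = u0
    for _ in range(iters):
        u = u - f(u) / fprime(u)   # classical Newton update
    return u

f = lambda u: u**2 - 2            # a problem with two solutions
fp = lambda u: 2 * u
print(newton(f, fp, 1.0))         # converges to +sqrt(2)
print(newton(f, fp, -1.0))        # converges to -sqrt(2): the root depends on the initial guess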
[ "['Wenrui Hao' 'Xinliang Liu' 'Yahong Yang']" ]
null
null
2405.14099
null
null
http://arxiv.org/pdf/2405.14099v1
2024-05-23T02:01:05Z
2024-05-23T02:01:05Z
Automatic Differentiation is Essential in Training Neural Networks for Solving Differential Equations
Neural network-based approaches have recently shown significant promise in solving partial differential equations (PDEs) in science and engineering, especially in scenarios featuring complex domains or the incorporation of empirical data. One advantage of the neural network method for PDEs lies in its automatic differentiation (AD), which necessitates only the sample points themselves, unlike traditional finite difference (FD) approximations that require nearby local points to compute derivatives. In this paper, we quantitatively demonstrate the advantage of AD in training neural networks. The concept of truncated entropy is introduced to characterize the training property. Specifically, through comprehensive experimental and theoretical analyses conducted on random feature models and two-layer neural networks, we discover that the defined truncated entropy serves as a reliable metric for quantifying the residual loss of random feature models and the training speed of neural networks for both AD and FD methods. Our experimental and theoretical analyses demonstrate that, from a training perspective, AD outperforms FD in solving partial differential equations.
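A small sketch of the contrast being quantified, assuming PyTorch as the AD engine: AD yields the exact derivative at the sample point alone, while FD needs neighboring evaluations and incurs truncation error.

import torch

x = torch.tensor(0.7, requires_grad=True)
u = torch.sin(3 * x) * torch.exp(x)          # a PINN-like scalar output u(x)

(du_ad,) = torch.autograd.grad(u, x)         # AD: exact, uses only x itself

h = 1e-3                                     # FD: needs neighbors x +/- h
with torch.no_grad():
    f = lambda t: torch.sin(3 * t) * torch.exp(t)
    du_fd = (f(x + h) - f(x - h)) / (2 * h)

print(du_ad.item(), du_fd.item())            # agree up to O(h^2) truncation error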
[ "['Chuqi Chen' 'Yahong Yang' 'Yang Xiang' 'Wenrui Hao']" ]
null
null
2405.14101
null
null
http://arxiv.org/pdf/2405.14101v1
2024-05-23T02:08:44Z
2024-05-23T02:08:44Z
Enhancing Image Layout Control with Loss-Guided Diffusion Models
Diffusion models are a powerful class of generative models capable of producing high-quality images from pure noise. In particular, conditional diffusion models allow one to specify the contents of the desired image using a simple text prompt. Conditioning on a text prompt alone, however, does not allow for fine-grained control over the composition and layout of the final image, which instead depends closely on the initial noise distribution. While most methods which introduce spatial constraints (e.g., bounding boxes) require fine-tuning, a smaller and more recent subset of these methods are training-free. They are applicable whenever the prompt influences the model through an attention mechanism, and generally fall into one of two categories. The first entails modifying the cross-attention maps of specific tokens directly to enhance the signal in certain regions of the image. The second works by defining a loss function over the cross-attention maps, and using the gradient of this loss to guide the latent. While previous work explores these as alternative strategies, we provide an interpretation for these methods which highlights their complementary features, and demonstrate that it is possible to obtain superior performance when both methods are used in concert.
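A schematic PyTorch sketch of one step of the second (loss-guided) category; attn_maps_fn and layout_loss_fn are hypothetical stand-ins for a hooked UNet forward pass and, e.g., a bounding-box loss over the cross-attention maps:

import torch

def loss_guided_step(latent, attn_maps_fn, layout_loss_fn, eta=0.1):
    # One guidance step: differentiate a layout loss defined over the
    # cross-attention maps and move the latent down its gradient.
    latent = latent.detach().requires_grad_(True)
    loss = layout_loss_fn(attn_maps_fn(latent))   # hypothetical hooks, see above
    (grad,) = torch.autograd.grad(loss, latent)
    return (latent - eta * grad).detach()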
[ "['Zakaria Patel' 'Kirill Serkh']" ]
null
null
2405.14103
null
null
http://arxiv.org/pdf/2405.14103v1
2024-05-23T02:13:34Z
2024-05-23T02:13:34Z
Online Self-Preferring Language Models
Aligning with human preference datasets has been critical to the success of large language models (LLMs). Reinforcement learning from human feedback (RLHF) employs a costly reward model to provide feedback for on-policy sampling responses. Recently, offline methods that directly fit responses with binary preferences in the dataset have emerged as alternatives. However, existing methods do not explicitly model preference strength information, which is crucial for distinguishing different response pairs. To overcome this limitation, we propose Online Self-Preferring (OSP) language models to learn from self-generated response pairs and self-judged preference strengths. For each prompt and corresponding self-generated responses, we introduce a ranked pairing method to construct multiple response pairs with preference strength information. We then propose the soft-preference cross-entropy loss to leverage such information. Empirically, we demonstrate that leveraging preference strength is crucial for avoiding overfitting and enhancing alignment performance. OSP achieves state-of-the-art alignment performance across various metrics in two widely used human preference datasets. OSP is parameter-efficient and more robust than the dominant online method, RLHF when limited offline data are available and generalizing to out-of-domain tasks. Moreover, OSP language models established by LLMs with proficiency in self-preferring can efficiently self-improve without external supervision.
[ "['Yuanzhao Zhai' 'Zhuo Zhang' 'Kele Xu' 'Hanyang Peng' 'Yue Yu'\n 'Dawei Feng' 'Cheng Yang' 'Bo Ding' 'Huaimin Wang']" ]
null
null
2405.14105
null
null
http://arxiv.org/pdf/2405.14105v2
2024-06-28T15:34:26Z
2024-05-23T02:14:17Z
Distributed Speculative Inference of Large Language Models
Accelerating the inference of large language models (LLMs) is an important challenge in artificial intelligence. This paper introduces distributed speculative inference (DSI), a novel distributed inference algorithm that is provably faster than speculative inference (SI) [leviathan2023fast, chen2023accelerating, miao2023specinfer] and traditional autoregressive inference (non-SI). Like other SI algorithms, DSI works on frozen LLMs, requiring no training or architectural modifications, and it preserves the target distribution. Prior studies on SI have demonstrated empirical speedups (compared to non-SI) but require a fast and accurate drafter LLM. In practice, off-the-shelf LLMs often do not have matching drafters that are sufficiently fast and accurate. We show a gap: SI gets slower than non-SI when using slower or less accurate drafters. We close this gap by proving that DSI is faster than both SI and non-SI given any drafters. By orchestrating multiple instances of the target and drafters, DSI is not only faster than SI but also supports LLMs that cannot be accelerated with SI. Our simulations show speedups of off-the-shelf LLMs in realistic settings: DSI is 1.29-1.92x faster than SI.
[ "['Nadav Timor' 'Jonathan Mamou' 'Daniel Korat' 'Moshe Berchansky'\n 'Oren Pereg' 'Moshe Wasserblat' 'Tomer Galanti' 'Michal Gordon'\n 'David Harel']" ]
null
null
2405.14106
null
null
http://arxiv.org/pdf/2405.14106v1
2024-05-23T02:24:52Z
2024-05-23T02:24:52Z
Nearly Tight Black-Box Auditing of Differentially Private Machine Learning
This paper presents a nearly tight audit of the Differentially Private Stochastic Gradient Descent (DP-SGD) algorithm in the black-box model. Our auditing procedure empirically estimates the privacy leakage from DP-SGD using membership inference attacks; unlike prior work, the estimates are appreciably close to the theoretical DP bounds. The main intuition is to craft worst-case initial model parameters, as DP-SGD's privacy analysis is agnostic to the choice of the initial model parameters. For models trained with theoretical $\varepsilon=10.0$ on MNIST and CIFAR-10, our auditing procedure yields empirical estimates of $7.21$ and $6.95$, respectively, on 1,000-record samples and $6.48$ and $4.96$ on the full datasets. By contrast, previous work achieved tight audits only in stronger (i.e., less realistic) white-box models that allow the adversary to access the model's inner parameters and insert arbitrary gradients. Our auditing procedure can be used to detect bugs and DP violations more easily and offers valuable insight into how the privacy analysis of DP-SGD can be further improved.
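The reported empirical estimates come from converting attack operating points into a lower bound on epsilon; a simplified sketch that ignores delta and confidence intervals, both of which a real audit must handle:

import numpy as np

def empirical_epsilon(tpr, fpr):
    # Lower bound on epsilon implied by a membership inference attack with
    # the given true/false positive rates (delta ~ 0 case): a DP mechanism
    # forces TPR <= e^eps * FPR and (1 - FPR) <= e^eps * (1 - TPR).
    return max(np.log(tpr / fpr), np.log((1 - fpr) / (1 - tpr)))

# e.g., an attack identifying members with TPR = 0.95 at FPR = 0.001:
print(empirical_epsilon(0.95, 0.001))   # ~6.86; close to a theoretical eps of 10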
[ "['Meenatchi Sundaram Muthu Selva Annamalai' 'Emiliano De Cristofaro']" ]
null
null
2405.14108
null
null
http://arxiv.org/pdf/2405.14108v3
2024-07-07T19:12:04Z
2024-05-23T02:27:39Z
Deep Learning for Protein-Ligand Docking: Are We There Yet?
The effects of ligand binding on protein structures and their in vivo functions carry numerous implications for modern biomedical research and biotechnology development efforts such as drug discovery. Although several deep learning (DL) methods and benchmarks designed for protein-ligand docking have recently been introduced, to date no prior works have systematically studied the behavior of docking methods within the practical context of (1) using predicted (apo) protein structures for docking (e.g., for broad applicability); (2) docking multiple ligands concurrently to a given target protein (e.g., for enzyme design); and (3) having no prior knowledge of binding pockets (e.g., for pocket generalization). To enable a deeper understanding of docking methods' real-world utility, we introduce PoseBench, the first comprehensive benchmark for practical protein-ligand docking. PoseBench enables researchers to rigorously and systematically evaluate DL docking methods for apo-to-holo protein-ligand docking and protein-ligand structure generation using both single and multi-ligand benchmark datasets, the latter of which we introduce for the first time to the DL community. Empirically, using PoseBench, we find that all recent DL docking methods but one fail to generalize to multi-ligand protein targets and also that template-based docking algorithms perform equally well or better for multi-ligand docking as recent single-ligand DL docking methods, suggesting areas of improvement for future work. Code, data, tutorials, and benchmark results are available at https://github.com/BioinfoMachineLearning/PoseBench.
[ "['Alex Morehead' 'Nabin Giri' 'Jian Liu' 'Jianlin Cheng']" ]
null
null
2405.14111
null
null
http://arxiv.org/pdf/2405.14111v1
2024-05-23T02:31:55Z
2024-05-23T02:31:55Z
Improving Generalization of Deep Neural Networks by Optimum Shifting
Recent studies showed that the generalization of neural networks is correlated with the sharpness of the loss landscape, and flat minima suggests a better generalization ability than sharp minima. In this paper, we propose a novel method called \emph{optimum shifting}, which changes the parameters of a neural network from a sharp minimum to a flatter one while maintaining the same training loss value. Our method is based on the observation that when the input and output of a neural network are fixed, the matrix multiplications within the network can be treated as systems of under-determined linear equations, enabling adjustment of parameters in the solution space, which can be simply accomplished by solving a constrained optimization problem. Furthermore, we introduce a practical stochastic optimum shifting technique utilizing the Neural Collapse theory to reduce computational costs and provide more degrees of freedom for optimum shifting. Extensive experiments (including classification and detection) with various deep neural network architectures on benchmark datasets demonstrate the effectiveness of our method.
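A NumPy sketch of the observation the method rests on: with layer inputs and outputs fixed, the weights solve an under-determined linear system, so they can be moved within the solution space without changing the loss (here simply to the minimum-norm solution, not the paper's flatness-targeted one).

import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((16, 64))    # fixed layer inputs (batch 16, width 64)
W = rng.standard_normal((32, 64))    # current weights of a linear layer
Y = X @ W.T                          # layer outputs to preserve

# Any W' with X W'^T = Y keeps the training loss unchanged; 16 equations per
# output unit vs. 64 unknowns leaves room to shift within the solution space.
W_min = np.linalg.lstsq(X, Y, rcond=None)[0].T
print(np.allclose(X @ W_min.T, Y))   # True: outputs (and loss) are unchanged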
[ "['Yuyan Zhou' 'Ye Li' 'Lei Feng' 'Sheng-Jun Huang']" ]
null
null
2405.14114
null
null
http://arxiv.org/pdf/2405.14114v2
2024-05-28T03:11:52Z
2024-05-23T02:41:36Z
Offline Reinforcement Learning from Datasets with Structured Non-Stationarity
Current Reinforcement Learning (RL) is often limited by the large amount of data needed to learn a successful policy. Offline RL aims to solve this issue by using transitions collected by a different behavior policy. We address a novel Offline RL problem setting in which, while collecting the dataset, the transition and reward functions gradually change between episodes but stay constant within each episode. We propose a method based on Contrastive Predictive Coding that identifies this non-stationarity in the offline dataset, accounts for it when training a policy, and predicts it during evaluation. We analyze our proposed method and show that it performs well in simple continuous control tasks and challenging, high-dimensional locomotion tasks. We show that our method often achieves the oracle performance and performs better than baselines.
[ "['Johannes Ackermann' 'Takayuki Osa' 'Masashi Sugiyama']" ]
null
null
2405.14115
null
null
http://arxiv.org/pdf/2405.14115v1
2024-05-23T02:42:32Z
2024-05-23T02:42:32Z
Configuring Data Augmentations to Reduce Variance Shift in Positional Embedding of Vision Transformers
Vision transformers (ViTs) have demonstrated remarkable performance in a variety of vision tasks. Despite their promising capabilities, training a ViT requires a large amount of diverse data. Several studies empirically found that using rich data augmentations, such as Mixup, Cutmix, and random erasing, is critical to the successful training of ViTs, and their use has become standard practice. However, we report a vulnerability to this practice: Certain data augmentations such as Mixup cause a variance shift in the positional embedding of ViT, which has been a hidden factor that degrades the performance of ViT during the test phase. We claim that achieving a stable effect from positional embedding requires a specific condition on the image, which is often broken for the current data augmentation methods. We provide a detailed analysis of this problem as well as the correct configuration for these data augmentations to remove the side effects of variance shift. Experiments showed that adopting our guidelines improves the performance of ViTs compared with the current configuration of data augmentations.
[ "['Bum Jun Kim' 'Sang Woo Kim']" ]
null
null
2405.14116
null
null
http://arxiv.org/pdf/2405.14116v1
2024-05-23T02:43:31Z
2024-05-23T02:43:31Z
Learning Multimodal Confidence for Intention Recognition in Human-Robot Interaction
The rapid development of collaborative robotics has provided a new possibility of helping the elderly who have difficulties in daily life, allowing robots to operate according to specific intentions. However, efficient human-robot cooperation requires natural, accurate and reliable intention recognition in shared environments. The paramount challenge here is to reduce the uncertainty of the fused multimodal intention to be recognized and to adaptively reason toward a more reliable result under the current interactive conditions. In this work we propose a novel learning-based multimodal fusion framework, Batch Multimodal Confidence Learning for Opinion Pool (BMCLOP). Our approach combines a Bayesian multimodal fusion method and a batch confidence learning algorithm to improve accuracy, uncertainty reduction and success rate given the interactive condition. Moreover, this generic and practical multimodal intention recognition framework can easily be extended further. Our target assistive scenarios consider three modalities: gestures, speech and gaze, all of which produce categorical distributions over all the finite intentions. The proposed method is validated with a six-DoF robot through extensive experiments and exhibits high performance compared to baselines.
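A toy NumPy sketch of confidence-weighted opinion-pool fusion over categorical intention distributions; the log-linear weighting and the fixed confidences are illustrative assumptions, whereas BMCLOP learns the confidences in batches.

import numpy as np

def confidence_weighted_fusion(dists, confidences):
    # Log-linear opinion pool: p(i) proportional to prod_m p_m(i)^{w_m},
    # where the modality confidences w_m act as exponents.
    logp = sum(w * np.log(p) for p, w in zip(dists, confidences))
    p = np.exp(logp - logp.max())
    return p / p.sum()

gesture = np.array([0.6, 0.3, 0.1])
speech  = np.array([0.5, 0.4, 0.1])
gaze    = np.array([0.2, 0.2, 0.6])   # noisy modality gets a low confidence
print(confidence_weighted_fusion([gesture, speech, gaze], [1.0, 1.0, 0.3]))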
[ "['Xiyuan Zhao' 'Huijun Li' 'Tianyuan Miao' 'Xianyi Zhu' 'Zhikai Wei'\n 'Aiguo Song']" ]
null
null
2405.14121
null
null
http://arxiv.org/pdf/2405.14121v1
2024-05-23T02:48:16Z
2024-05-23T02:48:16Z
One-shot Active Learning Based on Lewis Weight Sampling for Multiple Deep Models
Active learning (AL) for multiple target models aims to reduce labeled data querying while effectively training multiple models concurrently. Existing AL algorithms often rely on iterative model training, which can be computationally expensive, particularly for deep models. In this paper, we propose a one-shot AL method to address this challenge, which performs all label queries without repeated model training. Specifically, we extract different representations of the same dataset using distinct network backbones, and actively learn the linear prediction layer on each representation via an $\ell_p$-regression formulation. The regression problems are solved approximately by sampling and reweighting the unlabeled instances based on their maximum Lewis weights across the representations. An upper bound on the number of samples needed is provided with a rigorous analysis for $p \in [1, +\infty)$. Experimental results on 11 benchmarks show that our one-shot approach achieves competitive performances with the state-of-the-art AL methods for multiple target models.
[ "['Sheng-Jun Huang' 'Yi Li' 'Yiming Sun' 'Ying-Peng Tang']" ]
null
null
2405.14124
null
null
http://arxiv.org/pdf/2405.14124v1
2024-05-23T02:49:57Z
2024-05-23T02:49:57Z
Mixture of Experts Meets Prompt-Based Continual Learning
Exploiting the power of pre-trained models, prompt-based approaches stand out compared to other continual learning solutions in effectively preventing catastrophic forgetting, even with very few learnable parameters and without the need for a memory buffer. While existing prompt-based continual learning methods excel in leveraging prompts for state-of-the-art performance, they often lack a theoretical explanation for the effectiveness of prompting. This paper conducts a theoretical analysis to unravel how prompts bestow such advantages in continual learning, thus offering a new perspective on prompt design. We first show that the attention block of pre-trained models like Vision Transformers inherently encodes a special mixture of experts architecture, characterized by linear experts and quadratic gating score functions. This realization drives us to provide a novel view on prefix tuning, reframing it as the addition of new task-specific experts, thereby inspiring the design of a novel gating mechanism termed Non-linear Residual Gates (NoRGa). Through the incorporation of non-linear activation and residual connection, NoRGa enhances continual learning performance while preserving parameter efficiency. The effectiveness of NoRGa is substantiated both theoretically and empirically across diverse benchmarks and pretraining paradigms.
[ "['Minh Le' 'An Nguyen' 'Huy Nguyen' 'Trang Nguyen' 'Trang Pham'\n 'Linh Van Ngo' 'Nhat Ho']" ]
null
null
2405.14126
null
null
http://arxiv.org/pdf/2405.14126v1
2024-05-23T02:58:23Z
2024-05-23T02:58:23Z
The Disappearance of Timestep Embedding in Modern Time-Dependent Neural Networks
Dynamical systems are often time-varying, whose modeling requires a function that evolves with respect to time. Recent studies such as the neural ordinary differential equation proposed a time-dependent neural network, which provides a neural network varying with respect to time. However, we claim that the architectural choice to build a time-dependent neural network significantly affects its time-awareness but still lacks sufficient validation in its current states. In this study, we conduct an in-depth analysis of the architecture of modern time-dependent neural networks. Here, we report a vulnerability of vanishing timestep embedding, which disables the time-awareness of a time-dependent neural network. Furthermore, we find that this vulnerability can also be observed in diffusion models because they employ a similar architecture that incorporates timestep embedding to discriminate between different timesteps during a diffusion process. Our analysis provides a detailed description of this phenomenon as well as several solutions to address the root cause. Through experiments on neural ordinary differential equations and diffusion models, we observed that ensuring alive time-awareness via proposed solutions boosted their performance, which implies that their current implementations lack sufficient time-dependency.
[ "['Bum Jun Kim' 'Yoshinobu Kawahara' 'Sang Woo Kim']" ]
null
null
2405.14128
null
null
http://arxiv.org/pdf/2405.14128v2
2024-05-24T03:25:08Z
2024-05-23T03:01:32Z
Transformers for Image-Goal Navigation
Visual perception and navigation have emerged as major focus areas in the field of embodied artificial intelligence. We consider the task of image-goal navigation, where an agent is tasked to navigate to a goal specified by an image, relying only on images from an onboard camera. This task is particularly challenging since it demands robust scene understanding, goal-oriented planning and long-horizon navigation. Most existing approaches typically learn navigation policies reliant on recurrent neural networks trained via online reinforcement learning. However, training such policies requires substantial computational resources and time, and performance of these models is not reliable on long-horizon navigation. In this work, we present a generative Transformer based model that jointly models image goals, camera observations and the robot's past actions to predict future actions. We use state-of-the-art perception models and navigation policies to learn robust goal conditioned policies without the need for real-time interaction with the environment. Our model demonstrates capability in capturing and associating visual information across long time horizons, helping in effective navigation. NOTE: This work was submitted as part of a Master's Capstone Project and must be treated as such. This is still an early work in progress and not the final version.
[ "['Nikhilanj Pelluri']" ]
null
null
2405.14131
null
null
http://arxiv.org/pdf/2405.14131v1
2024-05-23T03:11:07Z
2024-05-23T03:11:07Z
Statistical Advantages of Perturbing Cosine Router in Sparse Mixture of Experts
The cosine router in sparse Mixture of Experts (MoE) has recently emerged as an attractive alternative to the conventional linear router. Indeed, the cosine router demonstrates favorable performance in image and language tasks and exhibits better ability to mitigate the representation collapse issue, which often leads to parameter redundancy and limited representation potentials. Despite its empirical success, a comprehensive analysis of the cosine router in sparse MoE has been lacking. Considering the least square estimation of the cosine routing sparse MoE, we demonstrate that due to the intrinsic interaction of the model parameters in the cosine router via some partial differential equations, regardless of the structures of the experts, the estimation rates of experts and model parameters can be as slow as $\mathcal{O}(1/\log^{\tau}(n))$ where $\tau > 0$ is some constant and $n$ is the sample size. Surprisingly, these pessimistic non-polynomial convergence rates can be circumvented by the widely used technique in practice to stabilize the cosine router -- simply adding noises to the $\mathbb{L}_{2}$ norms in the cosine router, which we refer to as \textit{perturbed cosine router}. Under the strongly identifiable settings of the expert functions, we prove that the estimation rates for both the experts and model parameters under the perturbed cosine routing sparse MoE are significantly improved to polynomial rates. Finally, we conduct extensive simulation studies in both synthetic and real data settings to empirically validate our theoretical results.
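A minimal NumPy sketch of the stabilization trick, assuming Gaussian noise added to the L2 norms of both the token and the expert embeddings before computing cosine scores:

import numpy as np

def perturbed_cosine_gate(x, E, noise=0.01, rng=np.random.default_rng(0)):
    # Cosine routing score between token x and expert embeddings (rows of E),
    # with small noise on the L2 norms to break the parameter interaction
    # that slows estimation in the plain cosine router.
    xn = np.linalg.norm(x) + noise * rng.standard_normal()
    en = np.linalg.norm(E, axis=1) + noise * rng.standard_normal(len(E))
    s = (E @ x) / (en * xn)
    return np.exp(s) / np.exp(s).sum()   # softmax gate over experts

print(perturbed_cosine_gate(np.ones(4), np.eye(4)))   # near-uniform gate here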
[ "['Huy Nguyen' 'Pedram Akbarian' 'Trang Pham' 'Trang Nguyen'\n 'Shujian Zhang' 'Nhat Ho']" ]
null
null
2405.14132
null
null
http://arxiv.org/pdf/2405.14132v1
2024-05-23T03:11:18Z
2024-05-23T03:11:18Z
Text-to-Model: Text-Conditioned Neural Network Diffusion for Train-Once-for-All Personalization
Generative artificial intelligence (GenAI) has made significant progress in understanding world knowledge and generating content from human languages across various modalities, like text-to-text large language models, text-to-image stable diffusion, and text-to-video Sora. While in this paper, we investigate the capability of GenAI for text-to-model generation, to see whether GenAI can comprehend hyper-level knowledge embedded within AI itself parameters. Specifically, we study a practical scenario termed train-once-for-all personalization, aiming to generate personalized models for diverse end-users and tasks using text prompts. Inspired by the recent emergence of neural network diffusion, we present Tina, a text-conditioned neural network diffusion for train-once-for-all personalization. Tina leverages a diffusion transformer model conditioned on task descriptions embedded using a CLIP model. Despite the astronomical number of potential personalized tasks (e.g., $1.73 \times 10^{13}$), by our design, Tina demonstrates remarkable in-distribution and out-of-distribution generalization even trained on small datasets ($\sim 1000$). We further verify whether and how Tina understands world knowledge by analyzing its capabilities under zero-shot/few-shot image prompts, different numbers of personalized classes, prompts of natural language descriptions, and predicting unseen entities.
[ "['Zexi Li' 'Lingzhi Gao' 'Chao Wu']" ]
null
null
2405.14133
null
null
http://arxiv.org/pdf/2405.14133v1
2024-05-23T03:12:49Z
2024-05-23T03:12:49Z
Automated Loss function Search for Class-imbalanced Node Classification
Class-imbalanced node classification tasks are prevalent in real-world scenarios. Due to the uneven distribution of nodes across different classes, learning high-quality node representations remains a challenging endeavor. The engineering of loss functions has shown promising potential in addressing this issue. It involves the meticulous design of loss functions, utilizing information about the quantities of nodes in different categories and the network's topology to learn unbiased node representations. However, the design of these loss functions heavily relies on human expert knowledge and exhibits limited adaptability to specific target tasks. In this paper, we introduce a high-performance, flexible, and generalizable automated loss function search framework to tackle this challenge. Across 15 combinations of graph neural networks and datasets, our framework achieves a significant improvement in performance compared to state-of-the-art methods. Additionally, we observe that homophily in graph-structured data significantly contributes to the transferability of the proposed framework.
[ "['Xinyu Guo' 'Kai Wu' 'Xiaoyu Zhang' 'Jing Liu']" ]
null
null
2405.14135
null
null
http://arxiv.org/pdf/2405.14135v1
2024-05-23T03:19:02Z
2024-05-23T03:19:02Z
Learning Geospatial Region Embedding with Heterogeneous Graph
Learning effective geospatial embeddings is crucial for a series of geospatial applications such as city analytics and earth monitoring. However, learning comprehensive region representations presents two significant challenges: first, the deficiency of effective intra-region feature representation; and second, the difficulty of learning from intricate inter-region dependencies. In this paper, we present GeoHG, an effective heterogeneous graph structure for learning comprehensive region embeddings for various downstream tasks. Specifically, we tailor satellite image representation learning through geo-entity segmentation and point-of-interest (POI) integration for expressive intra-regional features. Furthermore, GeoHG unifies informative spatial interdependencies and socio-environmental attributes into a powerful heterogeneous graph to encourage explicit modeling of higher-order inter-regional relationships. The intra-regional features and inter-regional correlations are seamlessly integrated by a model-agnostic graph learning framework for diverse downstream tasks. Extensive experiments demonstrate the effectiveness of GeoHG in geo-prediction tasks compared to existing methods, even under extreme data scarcity (with just 5% of training data). With interpretable region representations, GeoHG exhibits strong generalization capabilities across regions. We will release code and data upon paper notification.
[ "['Xingchen Zou' 'Jiani Huang' 'Xixuan Hao' 'Yuhao Yang' 'Haomin Wen'\n 'Yibo Yan' 'Chao Huang' 'Yuxuan Liang']" ]
null
null
2405.14139
null
null
http://arxiv.org/pdf/2405.14139v1
2024-05-23T03:28:52Z
2024-05-23T03:28:52Z
Contribute to balance, wire in accordance: Emergence of backpropagation from a simple, bio-plausible neuroplasticity rule
Backpropagation (BP) has been pivotal in advancing machine learning and remains essential in computational applications and comparative studies of biological and artificial neural networks. Despite its widespread use, the implementation of BP in the brain remains elusive, and its biological plausibility is often questioned due to inherent issues such as the need for symmetry of weights between forward and backward connections, and the requirement of distinct forward and backward phases of computation. Here, we introduce a novel neuroplasticity rule that offers a potential mechanism for implementing BP in the brain. Similar in general form to the classical Hebbian rule, this rule is based on the core principles of maintaining the balance of excitatory and inhibitory inputs as well as on retrograde signaling, and operates over three progressively slower timescales: neural firing, retrograde signaling, and neural plasticity. We hypothesize that each neuron possesses an internal state, termed credit, in addition to its firing rate. After achieving equilibrium in firing rates, neurons receive credits based on their contribution to the E-I balance of postsynaptic neurons through retrograde signaling. As the network's credit distribution stabilizes, connections from those presynaptic neurons are strengthened that significantly contribute to the balance of postsynaptic neurons. We demonstrate mathematically that our learning rule precisely replicates BP in layered neural networks without any approximations. Simulations on artificial neural networks reveal that this rule induces varying community structures in networks, depending on the learning rate. This simple theoretical framework presents a biologically plausible implementation of BP, with testable assumptions and predictions that may be evaluated through biological experiments.
[ "['Xinhao Fan' 'Shreesh P Mysore']" ]
null
null
2405.14147
null
null
http://arxiv.org/pdf/2405.14147v1
2024-05-23T03:46:07Z
2024-05-23T03:46:07Z
Minimum number of neurons in fully connected layers of a given neural network (the first approximation)
This paper presents an algorithm for finding the minimum number of neurons in the fully connected layers of an arbitrary network solving a given problem, without requiring the network to be trained multiple times with different numbers of neurons. The algorithm is based on training the initial wide network using cross-validation over at least two folds. Then, by inserting a truncated singular value decomposition autoencoder after the studied layer of the trained network, we search for the minimum number of neurons with the network running in inference-only mode. It is shown that the minimum number of neurons in a fully connected layer can be interpreted not as a network hyperparameter coupled to the other hyperparameters, but as an internal (latent) property of the solution, determined by the network architecture, the training dataset, the layer position, and the quality metric used. The minimum number of neurons can therefore be estimated for each hidden fully connected layer independently. The proposed algorithm is a first approximation for estimating the minimum number of neurons in a layer: on the one hand, it does not guarantee that a neural network with the found number of neurons can be trained to the required quality, and on the other hand, it searches for the minimum number of neurons within a limited class of possible solutions. The solution was tested on several datasets in classification and regression problems.
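A rough NumPy analogue of the core estimate, assuming the layer's "latent width" appears as the numerical rank of its activation matrix (the paper instead inserts a truncated-SVD autoencoder and monitors the network's quality metric in inference-only mode):

import numpy as np

rng = np.random.default_rng(0)
# Activations of a 256-unit layer over 1000 samples whose true latent rank is 64.
H = rng.standard_normal((1000, 64)) @ rng.standard_normal((64, 256))

s = np.linalg.svd(H - H.mean(0), compute_uv=False)
energy = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(energy, 0.99)) + 1   # components explaining 99% variance
print(k)   # at most 64: only 64 singular values are (numerically) nonzero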
[ "['Oleg I. Berngardt']" ]
null
null
2405.14153
null
null
http://arxiv.org/pdf/2405.14153v1
2024-05-23T04:03:36Z
2024-05-23T04:03:36Z
A Neighbor-Searching Discrepancy-based Drift Detection Scheme for Learning Evolving Data
Uncertain changes in data streams present challenges for machine learning models to dynamically adapt and uphold performance in real-time. In particular, classification boundary change, also known as real concept drift, is the major cause of classification performance deterioration. However, accurately detecting real concept drift remains challenging because the theoretical foundations of existing drift detection methods (two-sample distribution tests and monitoring of the classification error rate) suffer from inherent limitations, such as the inability to distinguish virtual drift (changes not affecting the classification boundary, which trigger unnecessary model maintenance), limited statistical power, or high computational cost. Furthermore, no existing detection method can provide information on the trend of the drift, which could be invaluable for model maintenance. This work presents a novel real concept drift detection method based on Neighbor-Searching Discrepancy, a new statistic that measures the classification boundary difference between two samples. The proposed method is able to detect real concept drift with high accuracy while ignoring virtual drift. It can also indicate the direction of the classification boundary change by identifying the invasion or retreat of a certain class, which is also an indicator of separability change between classes. A comprehensive evaluation of 11 experiments is conducted, including empirical verification of the proposed theory using artificial datasets, and experimental comparisons with commonly used drift handling methods on real-world datasets. The results show that the proposed theory is robust against a range of distributions and dimensions, and the drift detection method outperforms state-of-the-art alternative methods.
[ "['Feng Gu' 'Jie Lu' 'Zhen Fang' 'Kun Wang' 'Guangquan Zhang']" ]
null
null
2405.14161
null
null
http://arxiv.org/pdf/2405.14161v1
2024-05-23T04:27:11Z
2024-05-23T04:27:11Z
Self-Taught Recognizer: Toward Unsupervised Adaptation for Speech Foundation Models
We propose an unsupervised adaptation framework, Self-TAught Recognizer (STAR), which leverages unlabeled data to enhance the robustness of automatic speech recognition (ASR) systems in diverse target domains, such as noise and accents. STAR is developed for prevalent speech foundation models based on Transformer-related architectures with auto-regressive decoding (e.g., Whisper, Canary). Specifically, we propose a novel indicator that empirically integrates step-wise information during decoding to assess the token-level quality of pseudo labels without ground truth, thereby guiding model updates for effective unsupervised adaptation. Experimental results show that STAR achieves an average 13.5% relative reduction in word error rate across 14 target domains, and it sometimes even approaches the upper-bound performance of supervised adaptation. Surprisingly, we also observe that STAR protects the adapted model from the common catastrophic forgetting problem without recalling source-domain data. Furthermore, STAR exhibits high data efficiency, requiring less than one hour of unlabeled data, and generalizes seamlessly to alternative large speech models and speech translation tasks. Our code will be open-sourced to the research community.
[ "['Yuchen Hu' 'Chen Chen' 'Chao-Han Huck Yang' 'Chengwei Qin' 'Pin-Yu Chen'\n 'Eng Siong Chng' 'Chao Zhang']" ]
null
null
2405.14176
null
null
http://arxiv.org/pdf/2405.14176v1
2024-05-23T05:02:00Z
2024-05-23T05:02:00Z
Certified Robustness against Sparse Adversarial Perturbations via Data Localization
Recent work in adversarial robustness suggests that natural data distributions are localized, i.e., they place high probability in small volume regions of the input space, and that this property can be utilized for designing classifiers with improved robustness guarantees for $\ell_2$-bounded perturbations. Yet, it is still unclear if this observation holds true for more general metrics. In this work, we extend this theory to $\ell_0$-bounded adversarial perturbations, where the attacker can modify a few pixels of the image but is unrestricted in the magnitude of perturbation, and we show necessary and sufficient conditions for the existence of $\ell_0$-robust classifiers. Theoretical certification approaches in this regime essentially employ voting over a large ensemble of classifiers. Such procedures are combinatorial and expensive or require complicated certification techniques. In contrast, a simple classifier emerges from our theory, dubbed Box-NN, which naturally incorporates the geometry of the problem and improves upon the current state-of-the-art in certified robustness against sparse attacks for the MNIST and Fashion-MNIST datasets.
[ "['Ambar Pal' 'René Vidal' 'Jeremias Sulam']" ]
null
null
2405.14183
null
null
http://arxiv.org/pdf/2405.14183v1
2024-05-23T05:27:51Z
2024-05-23T05:27:51Z
Deterministic Policies for Constrained Reinforcement Learning in Polynomial-Time
We present a novel algorithm that efficiently computes near-optimal deterministic policies for constrained reinforcement learning (CRL) problems. Our approach combines three key ideas: (1) value-demand augmentation, (2) action-space approximate dynamic programming, and (3) time-space rounding. Under mild reward assumptions, our algorithm constitutes a fully polynomial-time approximation scheme (FPTAS) for a diverse class of cost criteria. This class requires that the cost of a policy can be computed recursively over both time and (state) space, which includes classical expectation, almost sure, and anytime constraints. Our work not only provides provably efficient algorithms to address real-world challenges in decision-making but also offers a unifying theory for the efficient computation of constrained deterministic policies.
[ "['Jeremy McMahan']" ]
null
null
2405.14185
null
null
http://arxiv.org/pdf/2405.14185v1
2024-05-23T05:29:29Z
2024-05-23T05:29:29Z
A structure-aware framework for learning device placements on computation graphs
Existing approaches for device placement ignore the topological features of computation graphs and rely mostly on heuristic methods for graph partitioning. At the same time, they either follow a grouper-placer or an encoder-placer architecture, which requires understanding the interaction structure between code operations. To bridge the gap between encoder-placer and grouper-placer techniques, we propose a novel framework for the task of device placement, relying on smaller computation graphs extracted from the OpenVINO toolkit using reinforcement learning. The framework consists of five steps, including graph coarsening, node representation learning and policy optimization. It facilitates end-to-end training and takes into consideration the directed and acyclic nature of the computation graphs. We also propose a model variant, inspired by graph parsing networks and complex network analysis, enabling graph representation learning and personalized graph partitioning jointly, using an unspecified number of groups. To train the entire framework, we utilize reinforcement learning techniques by employing the execution time of the suggested device placements to formulate the reward. We demonstrate the flexibility and effectiveness of our approach through multiple experiments with three benchmark models, namely Inception-V3, ResNet, and BERT. The robustness of the proposed framework is also highlighted through an ablation study. The suggested placements improve the inference speed for the benchmark models by up to $58.2\%$ over CPU execution and by up to $60.24\%$ compared to other commonly used baselines.
[ "['Shukai Duan' 'Heng Ping' 'Nikos Kanakaris' 'Xiongye Xiao' 'Peiyu Zhang'\n 'Panagiotis Kyriakis' 'Nesreen K. Ahmed' 'Guixiang Ma' 'Mihai Capota'\n 'Shahin Nazarian' 'Theodore L. Willke' 'Paul Bogdan']" ]
null
null
2405.14186
null
null
http://arxiv.org/pdf/2405.14186v1
2024-05-23T05:29:36Z
2024-05-23T05:29:36Z
Fairness Hub Technical Briefs: Definition and Detection of Distribution Shift
Distribution shift is a common situation in machine learning tasks, where the data used for training a model is different from the data the model is applied to in the real world. This issue arises across multiple technical settings: from standard prediction tasks, to time-series forecasting, and to more recent applications of large language models (LLMs). This mismatch can lead to performance reductions, and can be related to a multiplicity of factors: sampling issues and non-representative data, changes in the environment or policies, or the emergence of previously unseen scenarios. This brief focuses on the definition and detection of distribution shifts in educational settings. We focus on standard prediction problems, where the task is to learn a model that takes in a series of inputs (predictors) $X=(x_1,x_2,...,x_m)$ and produces an output $Y=f(X)$.
[ "['Nicolas Acevedo' 'Carmen Cortez' 'Chris Brooks' 'Rene Kizilcec'\n 'Renzhe Yu']" ]
null
null
2405.14194
null
null
http://arxiv.org/pdf/2405.14194v1
2024-05-23T05:42:38Z
2024-05-23T05:42:38Z
Graphlets correct for the topological information missed by random walks
Random walks are widely used for mining networks due to the computational efficiency of computing them. For instance, graph representation learning learns a d-dimensional embedding space, so that the nodes that tend to co-occur on random walks (a proxy of being in the same network neighborhood) are close in the embedding space. Specific local network topology (i.e., structure) influences the co-occurrence of nodes on random walks, so random walks of limited length capture only partial topological information, hence diminishing the performance of downstream methods. We explicitly capture all topological neighborhood information and improve performance by introducing orbit adjacencies that quantify the adjacencies of two nodes as co-occurring on a given pair of graphlet orbits, which are symmetric positions on graphlets (small, connected, non-isomorphic, induced subgraphs of a large network). Importantly, we mathematically prove that random walks on up to k nodes capture only a subset of all the possible orbit adjacencies for up to k-node graphlets. Furthermore, we enable orbit adjacency-based analysis of networks by developing an efficient GRaphlet-orbit ADjacency COunter (GRADCO), which exhaustively computes all 28 orbit adjacency matrices for up to four-node graphlets. Note that four-node graphlets suffice, because real networks are usually small-world. In large networks on around 20,000 nodes, GRADCO computes the 28 matrices in minutes. On six real networks from various domains, we compare the performance of node-label predictors obtained by using the network embeddings based on our orbit adjacencies to those based on random walks. We find that orbit adjacencies, which include those unseen by random walks, outperform random walk-based adjacencies, demonstrating the importance of the inclusion of the topological neighborhood information that is unseen by random walks.
[ "['Sam F. L. Windels' 'Noel Malod-Dognin' 'Natasa Przulj']" ]
null
null
2405.14199
null
null
http://arxiv.org/pdf/2405.14199v1
2024-05-23T05:52:42Z
2024-05-23T05:52:42Z
Adaptive Teaching in Heterogeneous Agents: Balancing Surprise in Sparse Reward Scenarios
Learning from Demonstration (LfD) can be an efficient way to train systems with analogous agents by enabling ``Student'' agents to learn from the demonstrations of the most experienced ``Teacher'' agent, instead of training their policy in parallel. However, when there are discrepancies in agent capabilities, such as divergent actuator power or joint angle constraints, naively replicating demonstrations that are out of bounds for the Student's capability can limit efficient learning. We present a Teacher-Student learning framework specifically tailored to address the challenge of heterogeneity between the Teacher and Student agents. Our framework is based on the concept of ``surprise'', inspired by its application in exploration incentivization in sparse-reward environments. Surprise is repurposed to enable the Teacher to detect and adapt to differences between itself and the Student. By focusing on maximizing its surprise in response to the environment while concurrently minimizing the Student's surprise in response to the demonstrations, the Teacher agent can effectively tailor its demonstrations to the Student's specific capabilities and constraints. We validate our method by demonstrating improvements in the Student's learning in control tasks within sparse-reward environments.
[ "['Emma Clark' 'Kanghyun Ryu' 'Negar Mehr']" ]
null
null
2405.14203
null
null
http://arxiv.org/pdf/2405.14203v1
2024-05-23T06:02:07Z
2024-05-23T06:02:07Z
GLaD: Synergizing Molecular Graphs and Language Descriptors for Enhanced Power Conversion Efficiency Prediction in Organic Photovoltaic Devices
This paper presents a novel approach for predicting the Power Conversion Efficiency (PCE) of Organic Photovoltaic (OPV) devices, called GLaD: synergizing molecular Graphs and Language Descriptors for enhanced PCE prediction. Due to the lack of high-quality experimental data, we collect a dataset consisting of 500 pairs of OPV donor and acceptor molecules along with their corresponding PCE values, which we utilize as the training data for our predictive model. In this low-data regime, GLaD leverages properties learned from large language models (LLMs) pretrained on extensive scientific literature to enrich molecular structural representations, allowing for a multimodal representation of molecules. GLaD achieves precise predictions of PCE, thereby facilitating the synthesis of new OPV molecules with improved efficiency. Furthermore, GLaD showcases versatility, as it applies to a range of molecular property prediction tasks (BBBP, BACE, ClinTox, and SIDER), not limited to those concerning OPV materials. In particular, GLaD proves valuable for tasks in low-data regimes within the chemical space, as it enriches molecular representations by incorporating molecular property descriptions learned from large-scale pretraining. This capability is significant in real-world scientific endeavors like drug and material discovery, where access to comprehensive data is crucial for informed decision-making and efficient exploration of the chemical space.
[ "['Thao Nguyen' 'Tiara Torres-Flores' 'Changhyun Hwang' 'Carl Edwards'\n 'Ying Diao' 'Heng Ji']" ]
null
null
2405.14205
null
null
http://arxiv.org/pdf/2405.14205v1
2024-05-23T06:03:19Z
2024-05-23T06:03:19Z
Agent Planning with World Knowledge Model
Recent endeavors towards directly using large language models (LLMs) as agent models to execute interactive planning tasks have shown commendable results. Despite their achievements, however, they still struggle with brainless trial-and-error in global planning and generating hallucinatory actions in local planning due to their poor understanding of the ''real'' physical world. Imitating the human mental world knowledge model, which provides global prior knowledge before the task and maintains local dynamic knowledge during the task, in this paper we introduce a parametric World Knowledge Model (WKM) to facilitate agent planning. Concretely, we steer the agent model to self-synthesize knowledge from both expert and sampled trajectories. Then we develop WKM, providing prior task knowledge to guide global planning and dynamic state knowledge to assist local planning. Experimental results on three complex real-world simulated datasets with three state-of-the-art open-source LLMs, Mistral-7B, Gemma-7B, and Llama-3-8B, demonstrate that our method achieves superior performance compared to various strong baselines. Besides, our analysis illustrates that WKM effectively alleviates the blind trial-and-error and hallucinatory action issues, providing strong support for the agent's understanding of the world. Other interesting findings include: 1) our instance-level task knowledge generalizes better to unseen tasks, 2) a weak WKM can guide strong agent model planning, and 3) unified WKM training has promising potential for further development. Code will be available at https://github.com/zjunlp/WKM.
[ "['Shuofei Qiao' 'Runnan Fang' 'Ningyu Zhang' 'Yuqi Zhu' 'Xiang Chen'\n 'Shumin Deng' 'Yong Jiang' 'Pengjun Xie' 'Fei Huang' 'Huajun Chen']" ]
null
null
2405.14214
null
null
http://arxiv.org/pdf/2405.14214v1
2024-05-23T06:17:26Z
2024-05-23T06:17:26Z
A Behavior-Aware Approach for Deep Reinforcement Learning in Non-stationary Environments without Known Change Points
Deep reinforcement learning is used in various domains, but usually under the assumption that the environment has stationary conditions like transitions and state distributions. When this assumption is not met, performance suffers. For this reason, tracking continuous environmental changes and adapting to unpredictable conditions is challenging yet crucial because it ensures that systems remain reliable and flexible in practical scenarios. Our research introduces Behavior-Aware Detection and Adaptation (BADA), an innovative framework that merges environmental change detection with behavior adaptation. The key inspiration behind our method is that policies exhibit different global behaviors in changing environments. Specifically, environmental changes are identified by analyzing variations between behaviors using Wasserstein distances without manually set thresholds. The model adapts to the new environment through behavior regularization based on the extent of changes. The results of a series of experiments demonstrate better performance relative to several current algorithms. This research also indicates significant potential for tackling this long-standing challenge.
[ "['Zihe Liu' 'Jie Lu' 'Guangquan Zhang' 'Junyu Xuan']" ]
null
null
2405.14219
null
null
http://arxiv.org/pdf/2405.14219v1
2024-05-23T06:28:44Z
2024-05-23T06:28:44Z
Understanding the Training and Generalization of Pretrained Transformer for Sequential Decision Making
In this paper, we consider the supervised pretrained transformer for a class of sequential decision-making problems. The class of considered problems is a subset of the general formulation of reinforcement learning in that there is no transition probability matrix, and the class of problems covers bandits, dynamic pricing, and newsvendor problems as special cases. Such a structure enables the use of optimal actions/decisions in the pretraining phase, and the usage also provides new insights for the training and generalization of the pretrained transformer. We first note that the training of the transformer model can be viewed as a performative prediction problem, and the existing methods and theories largely ignore or cannot resolve the out-of-distribution issue that arises. We propose a natural solution that includes the transformer-generated action sequences in the training procedure, and it enjoys better properties both numerically and theoretically. The availability of the optimal actions in the considered tasks also allows us to analyze the properties of the pretrained transformer as an algorithm and explains why it may lack exploration and how this can be automatically resolved. Numerically, we categorize the advantages of the pretrained transformer over structured algorithms such as UCB and Thompson sampling into three cases: (i) it better utilizes the prior knowledge in the pretraining data; (ii) it can elegantly handle the misspecification issue suffered by the structured algorithms; (iii) for short time horizons such as $T\le 50$, it behaves more greedily and enjoys much better regret than the structured algorithms, which are designed for asymptotic optimality.
[ "['Hanzhao Wang' 'Yu Pan' 'Fupeng Sun' 'Shang Liu' 'Kalyan Talluri'\n 'Guanting Chen' 'Xiaocheng Li']" ]
null
null
2405.14222
null
null
http://arxiv.org/pdf/2405.14222v1
2024-05-23T06:32:42Z
2024-05-23T06:32:42Z
RAQ-VAE: Rate-Adaptive Vector-Quantized Variational Autoencoder
Vector Quantized Variational AutoEncoder (VQ-VAE) is an established technique in machine learning for learning discrete representations across various modalities. However, its scalability and applicability are limited by the need to retrain the model to adjust the codebook for different data or model scales. We introduce the Rate-Adaptive VQ-VAE (RAQ-VAE) framework, which addresses this challenge with two novel codebook representation methods: a model-based approach using a clustering-based technique on an existing well-trained VQ-VAE model, and a data-driven approach utilizing a sequence-to-sequence (Seq2Seq) model for variable-rate codebook generation. Our experiments demonstrate that RAQ-VAE achieves effective reconstruction performance across multiple rates, often outperforming conventional fixed-rate VQ-VAE models. This work enhances the adaptability and performance of VQ-VAEs, with broad applications in data reconstruction, generation, and computer vision tasks.
[ "['Jiwan Seo' 'Joonhyuk Kang']" ]
null
null
2405.14226
null
null
http://arxiv.org/pdf/2405.14226v1
2024-05-23T06:57:04Z
2024-05-23T06:57:04Z
Variational Delayed Policy Optimization
In environments with delayed observation, state augmentation by including actions within the delay window is adopted to retrieve the Markovian property and enable reinforcement learning (RL). However, state-of-the-art (SOTA) RL techniques with Temporal-Difference (TD) learning frameworks often suffer from learning inefficiency, due to the significant expansion of the augmented state space with the delay. To improve learning efficiency without sacrificing performance, this work introduces a novel framework called Variational Delayed Policy Optimization (VDPO), which reformulates delayed RL as a variational inference problem. This problem is further modelled as a two-step iterative optimization problem, where the first step is TD learning in the delay-free environment with a small state space, and the second step is behaviour cloning, which can be addressed much more efficiently than TD learning. We not only provide a theoretical analysis of VDPO in terms of sample complexity and performance, but also empirically demonstrate that VDPO can achieve performance consistent with SOTA methods, with a significant enhancement of sample efficiency (approximately 50% fewer samples) on the MuJoCo benchmark.
[ "['Qingyuan Wu' 'Simon Sinong Zhan' 'Yixuan Wang' 'Yuhui Wang'\n 'Chung-Wei Lin' 'Chen Lv' 'Qi Zhu' 'Chao Huang']" ]
null
null
2405.14232
null
null
http://arxiv.org/pdf/2405.14232v2
2024-05-24T09:04:33Z
2024-05-23T07:04:41Z
FloodDamageCast: Building Flood Damage Nowcasting with Machine Learning and Data Augmentation
Near-real time estimation of damage to buildings and infrastructure, referred to as damage nowcasting in this study, is crucial for empowering emergency responders to make informed decisions regarding evacuation orders and infrastructure repair priorities during disaster response and recovery. Here, we introduce FloodDamageCast, a machine learning framework tailored for property flood damage nowcasting. The framework leverages heterogeneous data to predict residential flood damage at a resolution of 500 meters by 500 meters within Harris County, Texas, during the 2017 Hurricane Harvey. To deal with data imbalance, FloodDamageCast incorporates generative adversarial network-based data augmentation coupled with an efficient machine learning model. The results demonstrate the model's ability to identify high-damage spatial areas that would be overlooked by baseline models. Insights gleaned from flood damage nowcasting can help emergency responders more efficiently identify repair needs, allocate resources, and streamline on-the-ground inspections, thereby saving both time and effort.
[ "['Chia-Fu Liu' 'Lipai Huang' 'Kai Yin' 'Sam Brody' 'Ali Mostafavi']" ]
null
null
2405.14233
null
null
http://arxiv.org/pdf/2405.14233v1
2024-05-23T07:08:57Z
2024-05-23T07:08:57Z
Language processing in humans and computers
Machine-learned language models have transformed everyday life: they steer us when we study, drive, manage money. They have the potential to transform our civilization. But they hallucinate. Their realities are virtual. This note provides a high-level overview of language models and outlines a low-level model of learning machines. It turns out that, after they become capable of recognizing hallucinations and dreaming safely, as humans tend to be, the language-learning machines proceed to generate broader systems of false beliefs and self-confirming theories, as humans tend to do.
[ "['Dusko Pavlovic']" ]
null
null
2405.14239
null
null
http://arxiv.org/pdf/2405.14239v1
2024-05-23T07:18:08Z
2024-05-23T07:18:08Z
Harmony: A Joint Self-Supervised and Weakly-Supervised Framework for Learning General Purpose Visual Representations
Vision-language contrastive learning frameworks like CLIP enable learning representations from natural language supervision, and provide strong zero-shot classification capabilities. However, due to the nature of the supervisory signal in these paradigms, they lack the ability to learn localized features, leading to degraded performance on dense prediction tasks like segmentation and detection. On the other hand, self-supervised learning methods have shown the ability to learn granular representations, complementing the high-level features in vision-language training. In this work, we present Harmony, a framework that combines vision-language training with discriminative and generative self-supervision to learn visual features that can be generalized across vision downstream tasks. Our framework is specifically designed to work on web-scraped data by not relying on negative examples and addressing the one-to-one correspondence issue using soft CLIP targets generated by an EMA model. We comprehensively evaluate Harmony across various vision downstream tasks and find that it significantly outperforms the baseline CLIP and the previously leading joint self- and weakly-supervised methods, MaskCLIP and SLIP. Specifically, when comparing against these methods, Harmony shows superior performance in fine-tuning and zero-shot classification on ImageNet-1k, semantic segmentation on ADE20K, and both object detection and instance segmentation on MS-COCO, when pre-training a ViT-S/16 on CC3M. We also show that Harmony outperforms other self-supervised learning methods like iBOT and MAE across all tasks evaluated. Our code is publicly available at https://github.com/MohammedSB/Harmony.
[ "['Mohammed Baharoon' 'Jonathan Klein' 'Dominik L. Michels']" ]
null
null
2405.14244
null
null
http://arxiv.org/pdf/2405.14244v1
2024-05-23T07:23:33Z
2024-05-23T07:23:33Z
Tell my why: Training preferences-based RL with human preferences and step-level explanations
Human-in-the-loop reinforcement learning (HRL) allows the training of agents through various interfaces, even for non-expert humans. Recently, preference-based methods (PBRL), where the human gives a preference over two trajectories, have grown in popularity since they allow training in domains where more direct feedback is hard to formulate. However, current PBRL methods have limitations and do not provide humans with an expressive interface for giving feedback. With this work, we propose a new preference-based learning method that provides humans with a more expressive interface to state their preference over trajectories together with a factual explanation (an annotation of why they have this preference). These explanations allow the human to indicate which parts of the trajectory are most relevant for the preference, expressed over individual trajectory steps. We evaluate our method in various simulations using a simulated human oracle (with realistic restrictions), and our results show that our extended feedback can improve the speed of learning. Code & data: github.com/under-rewiev
[ "['Jakob Karalus']" ]
null
null
2405.14246
null
null
http://arxiv.org/pdf/2405.14246v3
2024-07-10T04:01:55Z
2024-05-23T07:25:31Z
GCondenser: Benchmarking Graph Condensation
Large-scale graphs are valuable for graph representation learning, yet the abundant data in these graphs hinders the efficiency of the training process. Graph condensation (GC) alleviates this issue by compressing the large graph into a significantly smaller one that still supports effective model training. Although recent research has introduced various approaches to improve the effectiveness of the condensed graph, comprehensive and practical evaluations across different GC methods are neglected. This paper proposes the first large-scale graph condensation benchmark, GCondenser, to holistically evaluate and compare mainstream GC methods. GCondenser includes a standardised GC paradigm, consisting of condensation, validation, and evaluation procedures, as well as enabling extensions to new GC methods and datasets. With GCondenser, a comprehensive performance study is conducted, presenting the effectiveness of existing methods. GCondenser is open-sourced and available at https://github.com/superallen13/GCondenser.
[ "['Yilun Liu' 'Ruihong Qiu' 'Zi Huang']" ]
null
null
2405.14250
null
null
http://arxiv.org/pdf/2405.14250v3
2024-06-12T15:26:18Z
2024-05-23T07:28:56Z
Diffusion models for Gaussian distributions: Exact solutions and Wasserstein errors
Diffusion or score-based models recently showed high performance in image generation. They rely on a forward and a backward stochastic differential equation (SDE). The sampling of a data distribution is achieved by numerically solving the backward SDE or its associated flow ODE. Studying the convergence of these models requires controlling four different types of error: the initialization error, the truncation error, the discretization error, and the score approximation error. In this paper, we study theoretically the behavior of diffusion models and their numerical implementation when the data distribution is Gaussian. In this restricted framework where the score function is a linear operator, we can derive the analytical solutions of the forward and backward SDEs as well as the associated flow ODE. This provides exact expressions for various Wasserstein errors, which enable us to compare the influence of each error type for any sampling scheme, thus allowing us to monitor convergence directly in the data space instead of relying on Inception features. Our experiments show that the numerical schemes recommended in the diffusion models literature are also the best sampling schemes for Gaussian distributions.
[ "['Emile Pierret' 'Bruno Galerne']" ]
null
null
2405.14252
null
null
http://arxiv.org/pdf/2405.14252v2
2024-05-25T11:02:56Z
2024-05-23T07:31:10Z
Time-FFM: Towards LM-Empowered Federated Foundation Model for Time Series Forecasting
Unlike natural language processing and computer vision, the development of Foundation Models (FMs) for time series forecasting is hindered by data scarcity. While recent efforts are focused on building such FMs by unlocking the potential of language models (LMs) for time series analysis, dedicated parameters for various downstream forecasting tasks need training, which hinders common knowledge sharing across domains. Moreover, data owners may hesitate to share access to local data due to privacy concerns and copyright protection, which makes it impossible to simply construct a FM on cross-domain training instances. To address these issues, we propose Time-FFM, a Federated Foundation Model for Time series forecasting that leverages pretrained LMs. Specifically, we begin by transforming time series into the modality of text tokens. To bootstrap LMs for time series reasoning, we propose a prompt adaption module that determines domain-customized prompts dynamically rather than artificially. Given the data heterogeneity across domains, we design a personalized federated training strategy by learning global encoders and local prediction heads. Our comprehensive experiments indicate that Time-FFM outperforms state-of-the-art methods and promises an effective few-shot and zero-shot forecaster.
[ "['Qingxiang Liu' 'Xu Liu' 'Chenghao Liu' 'Qingsong Wen' 'Yuxuan Liang']" ]
null
null
2405.14253
null
null
http://arxiv.org/pdf/2405.14253v1
2024-05-23T07:31:20Z
2024-05-23T07:31:20Z
Higher-Rank Irreducible Cartesian Tensors for Equivariant Message Passing
The ability to perform fast and accurate atomistic simulations is crucial for advancing the chemical sciences. By learning from high-quality data, machine-learned interatomic potentials achieve accuracy on par with ab initio and first-principles methods at a fraction of their computational cost. The success of machine-learned interatomic potentials arises from integrating inductive biases such as equivariance to group actions on an atomic system, e.g., equivariance to rotations and reflections. In particular, the field has notably advanced with the emergence of equivariant message-passing architectures. Most of these models represent an atomic system using spherical tensors, tensor products of which require complicated numerical coefficients and can be computationally demanding. This work introduces higher-rank irreducible Cartesian tensors as an alternative to spherical tensors, addressing the above limitations. We integrate irreducible Cartesian tensor products into message-passing neural networks and prove the equivariance of the resulting layers. Through empirical evaluations on various benchmark data sets, we consistently observe on-par or better performance than that of state-of-the-art spherical models.
[ "['Viktor Zaverkin' 'Francesco Alesiani' 'Takashi Maruyama'\n 'Federico Errica' 'Henrik Christiansen' 'Makoto Takamoto' 'Nicolas Weber'\n 'Mathias Niepert']" ]
null
null
2405.14256
null
null
http://arxiv.org/pdf/2405.14256v1
2024-05-23T07:37:16Z
2024-05-23T07:37:16Z
ZipCache: Accurate and Efficient KV Cache Quantization with Salient Token Identification
KV cache stores key and value states from previous tokens to avoid re-computation, yet it demands substantial storage space, especially for long sequences. Adaptive KV cache compression seeks to discern the saliency of tokens, preserving vital information while aggressively compressing those of less importance. However, previous methods of this approach exhibit significant performance degradation at high compression ratios due to inaccuracies in identifying salient tokens. In this paper, we present ZipCache, an accurate and efficient KV cache quantization method for LLMs. First, we construct a strong baseline for quantizing the KV cache. Through the proposed channel-separable tokenwise quantization scheme, the memory overhead of quantization parameters is substantially reduced compared to fine-grained groupwise quantization. To enhance the compression ratio, we propose the normalized attention score as an effective metric for identifying salient tokens by considering the lower-triangular characteristics of the attention matrix. Moreover, we develop an efficient approximation method that decouples the saliency metric from full attention scores, enabling compatibility with fast attention implementations like FlashAttention. Extensive experiments demonstrate that ZipCache achieves superior compression ratios, fast generation speed, and minimal performance losses compared with previous KV cache compression methods. For instance, when evaluating the Mistral-7B model on the GSM8k dataset, ZipCache is capable of compressing the KV cache by $4.98\times$, with only a $0.38\%$ drop in accuracy. In terms of efficiency, ZipCache also showcases a $37.3\%$ reduction in prefill-phase latency, a $56.9\%$ reduction in decoding-phase latency, and a $19.8\%$ reduction in GPU memory usage when evaluating the LLaMA3-8B model with an input length of $4096$.
[ "['Yefei He' 'Luoming Zhang' 'Weijia Wu' 'Jing Liu' 'Hong Zhou'\n 'Bohan Zhuang']" ]
null
null
2405.14257
null
null
http://arxiv.org/pdf/2405.14257v1
2024-05-23T07:37:33Z
2024-05-23T07:37:33Z
Deep Learning Methods for Adjusting Global MFD Speed Estimations to Local Link Configurations
In large-scale traffic optimization, models based on the Macroscopic Fundamental Diagram (MFD) are recognized for their efficiency in broad analyses. However, they fail to reflect variations in the individual traffic status of each road link, leading to a gap in detailed traffic optimization and analysis. To address this limitation, this study introduces a Local Correction Factor (LCF), a function that integrates the MFD-derived network mean speed with network configurations to accurately estimate individual link speeds. We use a novel deep learning framework combining Graph Attention Networks (GATs) with Gated Recurrent Units (GRUs) to capture both the spatial configurations and temporal dynamics of the network. Coupled with a strategic network partitioning method, our model enhances the precision of link-level traffic speed estimations while preserving the computational benefits of aggregate models. In the experiments, we evaluate the proposed LCF across various urban traffic scenarios, including different demand levels, origin-destination distributions, and road configurations. The results show the robust adaptability and effectiveness of the proposed model. Furthermore, we validate the practicality of our model by calculating the travel time of each randomly generated path, with the average error reduced by approximately 76% relative to MFD-based results.
[ "['Zhixiong Jin' 'Dimitrios Tsitsokas' 'Nikolas Geroliminis'\n 'Ludovic Leclercq']" ]
null
null
2405.14260
null
null
http://arxiv.org/pdf/2405.14260v1
2024-05-23T07:40:21Z
2024-05-23T07:40:21Z
Graph Sparsification via Mixture of Graphs
Graph Neural Networks (GNNs) have demonstrated superior performance across various graph learning tasks but face significant computational challenges when applied to large-scale graphs. One effective approach to mitigate these challenges is graph sparsification, which involves removing non-essential edges to reduce computational overhead. However, previous graph sparsification methods often rely on a single global sparsity setting and uniform pruning criteria, failing to provide customized sparsification schemes for each node's complex local context. In this paper, we introduce Mixture-of-Graphs (MoG), leveraging the concept of Mixture-of-Experts (MoE), to dynamically select tailored pruning solutions for each node. Specifically, MoG incorporates multiple sparsifier experts, each characterized by unique sparsity levels and pruning criteria, and selects the appropriate experts for each node. Subsequently, MoG performs a mixture of the sparse graphs produced by different experts on the Grassmann manifold to derive an optimal sparse graph. One notable property of MoG is its entirely local nature, as it depends on the specific circumstances of each individual node. Extensive experiments on four large-scale OGB datasets and two superpixel datasets, equipped with five GNN backbones, demonstrate that MoG (I) identifies subgraphs at higher sparsity levels ($8.67\%\sim 50.85\%$), with performance equal to or better than the dense graph, (II) achieves $1.47$-$2.62\times$ speedup in GNN inference with negligible performance drop, and (III) boosts ``top-student'' GNN performance ($1.02\%\uparrow$ on RevGNN+\textsc{ogbn-proteins} and $1.74\%\uparrow$ on DeeperGCN+\textsc{ogbg-ppa}).
[ "['Guibin Zhang' 'Xiangguo Sun' 'Yanwei Yue' 'Kun Wang' 'Tianlong Chen'\n 'Shirui Pan']" ]
null
null
2405.14264
null
null
http://arxiv.org/pdf/2405.14264v1
2024-05-23T07:43:24Z
2024-05-23T07:43:24Z
Reassessing Evaluation Functions in Algorithmic Recourse: An Empirical Study from a Human-Centered Perspective
In this study, we critically examine the foundational premise of algorithmic recourse - a process of generating counterfactual action plans (i.e., recourses) assisting individuals to reverse adverse decisions made by AI systems. The assumption underlying algorithmic recourse is that individuals accept and act on recourses that minimize the gap between their current and desired states. This assumption, however, remains empirically unverified. To address this issue, we conducted a user study with 362 participants and assessed whether minimizing the distance function, a metric of the gap between the current and desired states, indeed prompts them to accept and act upon suggested recourses. Our findings reveal a nuanced landscape: participants' acceptance of recourses did not correlate with the recourse distance. Moreover, participants' willingness to act upon recourses peaked at the minimal recourse distance but was otherwise constant. These findings cast doubt on the prevailing assumption of algorithmic recourse research and signal the need to rethink the evaluation functions to pave the way for human-centered recourse generation.
[ "['Tomu Tominaga' 'Naomi Yamashita' 'Takeshi Kurashima']" ]