categories: string
doi: string
id: string
year: float64
venue: string
link: string
updated: string
published: string
title: string
abstract: string
authors: sequence
null
null
2405.15942
null
null
http://arxiv.org/pdf/2405.15942v2
2024-06-05T14:16:19Z
2024-05-24T21:09:53Z
Can Implicit Bias Imply Adversarial Robustness?
The implicit bias of gradient-based training algorithms has been considered mostly beneficial as it leads to trained networks that often generalize well. However, Frei et al. (2023) show that such implicit bias can harm adversarial robustness. Specifically, they show that if the data consists of clusters with small inter-cluster correlation, a shallow (two-layer) ReLU network trained by gradient flow generalizes well, but it is not robust to adversarial attacks of small radius. Moreover, this phenomenon occurs despite the existence of a much more robust classifier that can be explicitly constructed from a shallow network. In this paper, we extend recent analyses of neuron alignment to show that a shallow network with a polynomial ReLU activation (pReLU) trained by gradient flow not only generalizes well but is also robust to adversarial attacks. Our results highlight the importance of the interplay between data structure and architecture design in the implicit bias and robustness of trained networks.
[ "['Hancheng Min' 'René Vidal']" ]
null
null
2405.15943
null
null
http://arxiv.org/pdf/2405.15943v1
2024-05-24T21:14:10Z
2024-05-24T21:14:10Z
Transformers represent belief state geometry in their residual stream
What computational structure are we building into large language models when we train them on next-token prediction? Here, we present evidence that this structure is given by the meta-dynamics of belief updating over hidden states of the data-generating process. Leveraging the theory of optimal prediction, we anticipate and then find that belief states are linearly represented in the residual stream of transformers, even in cases where the predicted belief state geometry has highly nontrivial fractal structure. We investigate cases where the belief state geometry is represented in the final residual stream or distributed across the residual streams of multiple layers, providing a framework to explain these observations. Furthermore we demonstrate that the inferred belief states contain information about the entire future, beyond the local next-token prediction that the transformers are explicitly trained on. Our work provides a framework connecting the structure of training data to the computational structure and representations that transformers use to carry out their behavior.
[ "['Adam S. Shai' 'Sarah E. Marzen' 'Lucas Teixeira'\n 'Alexander Gietelink Oldenziel' 'Paul M. Riechers']" ]
null
null
2405.15950
null
null
http://arxiv.org/pdf/2405.15950v1
2024-05-24T21:34:16Z
2024-05-24T21:34:16Z
A Systematic Bias of Machine Learning Regression Models and Its Correction: an Application to Imaging-based Brain Age Prediction
Machine learning models for continuous outcomes often yield systematically biased predictions, particularly for values that largely deviate from the mean. Specifically, predictions for large-valued outcomes tend to be negatively biased, while those for small-valued outcomes are positively biased. We refer to this linear central tendency warped bias as the "systematic bias of machine learning regression". In this paper, we first demonstrate that this issue persists across various machine learning models, and then delve into its theoretical underpinnings. We propose a general constrained optimization approach designed to correct this bias and develop a computationally efficient algorithm to implement our method. Our simulation results indicate that our correction method effectively eliminates the bias from the predicted outcomes. We apply the proposed approach to the prediction of brain age using neuroimaging data. In comparison to competing machine learning models, our method effectively addresses the longstanding issue of "systematic bias of machine learning regression" in neuroimaging-based brain age calculation, yielding unbiased predictions of brain age.
[ "['Hwiyoung Lee' 'Shuo Chen']" ]
null
null
2405.15956
null
null
http://arxiv.org/pdf/2405.15956v1
2024-05-24T21:47:58Z
2024-05-24T21:47:58Z
CFGs: Causality Constrained Counterfactual Explanations using goal-directed ASP
Machine learning models that automate decision-making are increasingly used in consequential areas such as loan approvals, pretrial bail approval, and hiring. Unfortunately, most of these models are black boxes, i.e., they are unable to reveal how they reach these prediction decisions. A need for transparency demands justification for such predictions. An affected individual might also desire explanations to understand why a decision was made. Ethical and legal considerations require informing the individual of changes in the input attribute(s) that could be made to produce a desirable outcome. Our work focuses on the latter problem of generating counterfactual explanations by considering the causal dependencies between features. In this paper, we present the framework CFGs, CounterFactual Generation with s(CASP), which utilizes the goal-directed Answer Set Programming (ASP) system s(CASP) to automatically generate counterfactual explanations from models generated by rule-based machine learning algorithms in particular. We benchmark CFGs with the FOLD-SE model. Reaching the counterfactual state from the initial state is planned and achieved using a series of interventions. To validate our proposal, we show how counterfactual explanations are computed and justified by imagining worlds where some or all factual assumptions are altered. More importantly, we show how CFGs navigates between these worlds, namely, going from our initial state, where we obtain an undesired outcome, to the imagined goal state, where we obtain the desired decision, taking into account the causal relationships among features.
[ "['Sopam Dasgupta' 'Joaquín Arias' 'Elmer Salazar' 'Gopal Gupta']" ]
null
null
2405.15971
null
null
http://arxiv.org/pdf/2405.15971v1
2024-05-24T22:50:50Z
2024-05-24T22:50:50Z
Robust width: A lightweight and certifiable adversarial defense
Deep neural networks are vulnerable to so-called adversarial examples: inputs which are intentionally constructed to cause the model to make incorrect predictions or classifications. Adversarial examples are often visually indistinguishable from natural data samples, making them hard to detect. As such, they pose significant threats to the reliability of deep learning systems. In this work, we study an adversarial defense based on the robust width property (RWP), which was recently introduced for compressed sensing. We show that a specific input purification scheme based on the RWP gives theoretical robustness guarantees for images that are approximately sparse. The defense is easy to implement and can be applied to any existing model without additional training or finetuning. We empirically validate the defense on ImageNet against $L^\infty$ perturbations at perturbation budgets ranging from $4/255$ to $32/255$. In the black-box setting, our method significantly outperforms the state-of-the-art, especially for large perturbations. In the white-box setting, depending on the choice of base classifier, we closely match the state of the art in robust ImageNet classification while avoiding the need for additional data, larger models or expensive adversarial training routines. Our code is available at https://github.com/peck94/robust-width-defense.
[ "['Jonathan Peck' 'Bart Goossens']" ]
null
null
2405.15973
null
null
http://arxiv.org/pdf/2405.15973v3
2024-06-07T20:15:24Z
2024-05-24T23:09:27Z
Enhancing Visual-Language Modality Alignment in Large Vision Language Models via Self-Improvement
Large vision-language models (LVLMs) have achieved impressive results in various visual question-answering and reasoning tasks through vision instruction tuning on specific datasets. However, there is still significant room for improvement in the alignment between visual and language modalities. Previous methods to enhance this alignment typically require external models or data, heavily depending on their capabilities and quality, which inevitably sets an upper bound on performance. In this paper, we propose SIMA, a framework that enhances visual and language modality alignment through self-improvement, eliminating the need for external models or data. SIMA leverages prompts from existing vision instruction tuning datasets to self-generate responses and employs an in-context self-critic mechanism to select response pairs for preference tuning. The key innovation is the introduction of three vision metrics during the in-context self-critic process, which can guide the LVLM in selecting responses that enhance image comprehension. Through experiments across 14 hallucination and comprehensive benchmarks, we demonstrate that SIMA not only improves model performance across all benchmarks but also achieves superior modality alignment, outperforming previous approaches.
[ "['Xiyao Wang' 'Jiuhai Chen' 'Zhaoyang Wang' 'Yuhang Zhou' 'Yiyang Zhou'\n 'Huaxiu Yao' 'Tianyi Zhou' 'Tom Goldstein' 'Parminder Bhatia'\n 'Furong Huang' 'Cao Xiao']" ]
null
null
2405.15975
null
null
http://arxiv.org/pdf/2405.15975v2
2024-06-03T18:40:20Z
2024-05-24T23:13:56Z
Inference of Utilities and Time Preference in Sequential Decision-Making
This paper introduces a novel stochastic control framework to enhance the capabilities of automated investment managers, or robo-advisors, by accurately inferring clients' investment preferences from past activities. Our approach leverages a continuous-time model that incorporates utility functions and a generic discounting scheme of a time-varying rate, tailored to each client's risk tolerance, valuation of daily consumption, and significant life goals. We address the resulting time inconsistency issue through state augmentation and the establishment of the dynamic programming principle and the verification theorem. Additionally, we provide sufficient conditions for the identifiability of client investment preferences. To complement our theoretical developments, we propose a learning algorithm based on maximum likelihood estimation within a discrete-time Markov Decision Process framework, augmented with entropy regularization. We prove that the log-likelihood function is locally concave, facilitating the fast convergence of our proposed algorithm. Practical effectiveness and efficiency are showcased through two numerical examples, including Merton's problem and an investment problem with unhedgeable risks. Our proposed framework not only advances financial technology by improving personalized investment advice but also contributes broadly to other fields such as healthcare, economics, and artificial intelligence, where understanding individual preferences is crucial.
[ "['Haoyang Cao' 'Zhengqi Wu' 'Renyuan Xu']" ]
null
null
2405.15979
null
null
http://arxiv.org/pdf/2405.15979v1
2024-05-24T23:39:45Z
2024-05-24T23:39:45Z
BadGD: A unified data-centric framework to identify gradient descent vulnerabilities
We present BadGD, a unified theoretical framework that exposes the vulnerabilities of gradient descent algorithms through strategic backdoor attacks. Backdoor attacks involve embedding malicious triggers into a training dataset to disrupt the model's learning process. Our framework introduces three novel constructs: Max RiskWarp Trigger, Max GradWarp Trigger, and Max GradDistWarp Trigger, each designed to exploit specific aspects of gradient descent by distorting empirical risk, deterministic gradients, and stochastic gradients respectively. We rigorously define clean and backdoored datasets and provide mathematical formulations for assessing the distortions caused by these malicious backdoor triggers. By measuring the impact of these triggers on the model training procedure, our framework bridges existing empirical findings with theoretical insights, demonstrating how a malicious party can exploit gradient descent hyperparameters to maximize attack effectiveness. In particular, we show that these exploitations can significantly alter the loss landscape and gradient calculations, leading to compromised model integrity and performance. This research underscores the severe threats posed by such data-centric attacks and highlights the urgent need for robust defenses in machine learning. BadGD sets a new standard for understanding and mitigating adversarial manipulations, ensuring the reliability and security of AI systems.
[ "['Chi-Hua Wang' 'Guang Cheng']" ]
null
null
2405.15983
null
null
http://arxiv.org/pdf/2405.15983v1
2024-05-24T23:46:24Z
2024-05-24T23:46:24Z
Hierarchical Clustering via Local Search
In this paper, we introduce a local search algorithm for hierarchical clustering. For the local step, we consider a tree re-arrangement operation, known as the {\em interchange}, which involves swapping two closely positioned sub-trees within a tree hierarchy. The interchange operation has been previously used in the context of phylogenetic trees. As the objective function for evaluating the resulting hierarchies, we utilize the revenue function proposed by Moseley and Wang (NIPS 2017). In our main result, we show that any locally optimal tree guarantees a revenue of at least $\frac{n-2}{3}\sum_{i < j}w(i,j)$ where $n$ is the number of objects and $w: [n] \times [n] \rightarrow \mathbb{R}^+$ is the associated similarity function. This finding echoes the previously established bound for the average link algorithm as analyzed by Moseley and Wang. We demonstrate that this alignment is not coincidental, as the average link trees enjoy the property of being locally optimal with respect to the interchange operation. Consequently, our study provides an alternative insight into the average link algorithm and reveals the existence of a broader range of hierarchies with relatively high revenue achievable through a straightforward local search algorithm. Furthermore, we present an implementation of the local search framework, where each local step requires $O(n)$ computation time. Our empirical results indicate that the proposed method, used as a post-processing step, can effectively generate a hierarchical clustering with substantial revenue.
[ "['Hossein Jowhari']" ]
null
null
2405.15986
null
null
http://arxiv.org/pdf/2405.15986v1
2024-05-24T23:59:41Z
2024-05-24T23:59:41Z
Accelerating Diffusion Models with Parallel Sampling: Inference at Sub-Linear Time Complexity
Diffusion models have become a leading method for generative modeling of both image and scientific data. As these models are costly to train and evaluate, reducing the inference cost for diffusion models remains a major goal. Inspired by the recent empirical success in accelerating diffusion models via the parallel sampling technique~\cite{shih2024parallel}, we propose to divide the sampling process into $\mathcal{O}(1)$ blocks with parallelizable Picard iterations within each block. Rigorous theoretical analysis reveals that our algorithm achieves $\widetilde{\mathcal{O}}(\mathrm{poly}\log d)$ overall time complexity, marking the first implementation with provable sub-linear complexity w.r.t. the data dimension $d$. Our analysis is based on a generalized version of Girsanov's theorem and is compatible with both the SDE and probability flow ODE implementations. Our results shed light on the potential of fast and efficient sampling of high-dimensional data on fast-evolving modern large-memory GPU clusters.
[ "['Haoxuan Chen' 'Yinuo Ren' 'Lexing Ying' 'Grant M. Rotskoff']" ]
null
null
2405.15988
null
null
http://arxiv.org/pdf/2405.15988v1
2024-05-25T00:02:15Z
2024-05-25T00:02:15Z
Transductive Confidence Machine and its application to Medical Data Sets
The Transductive Confidence Machine Nearest Neighbours (TCMNN) algorithm and a supporting, simple user interface were developed. Different settings of the TCMNN algorithm's parameters were tested on medical data sets, in addition to the use of different Minkowski metrics and polynomial kernels. The effect of increasing the number of nearest neighbours and marking results with significance was also investigated. The SVM implementation of the Transductive Confidence Machine was compared with the Nearest Neighbours implementation. The application of neural networks was investigated as a useful comparison to the transductive algorithms.
[ "['David Lindsay']" ]
null
null
2405.15991
null
null
http://arxiv.org/pdf/2405.15991v1
2024-05-25T00:14:55Z
2024-05-25T00:14:55Z
Rényi Neural Processes
Neural Processes (NPs) are variational frameworks that aim to represent stochastic processes with deep neural networks. Despite their obvious benefits in uncertainty estimation for complex distributions via data-driven priors, NPs enforce network parameter sharing between the conditional prior and posterior distributions, thereby risking introducing a misspecified prior. We hereby propose Rényi Neural Processes (RNP) to relax the influence of the misspecified prior and optimize a tighter bound of the marginal likelihood. More specifically, by replacing the standard KL divergence with the Rényi divergence between the posterior and the approximated prior, we ameliorate the impact of the misspecified prior via a parameter $\alpha$ so that the resulting posterior focuses more on tail samples and reduces density in overconfident regions. Our experiments show log-likelihood improvements on several existing NP families. We demonstrate the superior performance of our approach on various benchmarks including regression and image inpainting tasks. We also validate the effectiveness of RNPs on real-world tabular regression problems.
[ "['Xuesong Wang' 'He Zhao' 'Edwin V. Bonilla']" ]
null
null
2405.15992
null
null
http://arxiv.org/pdf/2405.15992v1
2024-05-25T00:16:21Z
2024-05-25T00:16:21Z
Data Complexity Estimates for Operator Learning
Operator learning has emerged as a new paradigm for the data-driven approximation of nonlinear operators. Despite its empirical success, the theoretical underpinnings governing the conditions for efficient operator learning remain incomplete. The present work develops theory to study the data complexity of operator learning, complementing existing research on the parametric complexity. We investigate the fundamental question: How many input/output samples are needed in operator learning to achieve a desired accuracy $\epsilon$? This question is addressed from the point of view of $n$-widths, and this work makes two key contributions. The first contribution is to derive lower bounds on $n$-widths for general classes of Lipschitz and Fréchet differentiable operators. These bounds rigorously demonstrate a ``curse of data-complexity'', revealing that learning on such general classes requires a sample size exponential in the inverse of the desired accuracy $\epsilon$. The second contribution of this work is to show that ``parametric efficiency'' implies ``data efficiency''; using the Fourier neural operator (FNO) as a case study, we show rigorously that on a narrower class of operators, efficiently approximated by FNO in terms of the number of tunable parameters, efficient operator learning is attainable in data complexity as well. Specifically, we show that if only an algebraically increasing number of tunable parameters is needed to reach a desired approximation accuracy, then an algebraically bounded number of data samples is also sufficient to achieve the same accuracy.
[ "['Nikola B. Kovachki' 'Samuel Lanthaler' 'Hrushikesh Mhaskar']" ]
null
null
2405.15994
null
null
http://arxiv.org/pdf/2405.15994v1
2024-05-25T00:35:39Z
2024-05-25T00:35:39Z
Verified Safe Reinforcement Learning for Neural Network Dynamic Models
Learning reliably safe autonomous control is one of the core problems in trustworthy autonomy. However, training a controller that can be formally verified to be safe remains a major challenge. We introduce a novel approach for learning verified safe control policies in nonlinear neural dynamical systems while maximizing overall performance. Our approach aims to achieve safety in the sense of finite-horizon reachability proofs, and is comprised of three key parts. The first is a novel curriculum learning scheme that iteratively increases the verified safe horizon. The second leverages the iterative nature of gradient-based learning to leverage incremental verification, reusing information from prior verification runs. Finally, we learn multiple verified initial-state-dependent controllers, an idea that is especially valuable for more complex domains where learning a single universal verified safe controller is extremely challenging. Our experiments on five safe control problems demonstrate that our trained controllers can achieve verified safety over horizons that are as much as an order of magnitude longer than state-of-the-art baselines, while maintaining high reward, as well as a perfect safety record over entire episodes.
[ "['Junlin Wu' 'Huan Zhang' 'Yevgeniy Vorobeychik']" ]
null
null
2405.16000
null
null
http://arxiv.org/abs/2405.16000v1
2024-05-25T01:31:58Z
2024-05-25T01:31:58Z
Carnatic Raga Identification System using Rigorous Time-Delay Neural Network
Large scale machine learning-based Raga identification continues to be a nontrivial issue in the computational aspects behind Carnatic music. Each raga consists of many unique and intrinsic melodic patterns that can be used to easily identify them from others. These ragas can also then be used to cluster songs within the same raga, as well as identify songs in other closely related ragas. In this case, the input sound is analyzed using a combination of steps including a Discrete Fourier transformation and Triangular Filtering to create custom bins of possible notes, extracting features from the presence of particular notes or lack thereof. Using a combination of Neural Networks including 1D Convolutional Neural Networks (conventionally known as Time-Delay Neural Networks) and Long Short-Term Memory (LSTM), which are a form of Recurrent Neural Networks, the backbone of the classification strategy to build the model can be created. In addition, to help with variations in shruti, a long-time attention-based mechanism will be implemented to determine the relative changes in frequency rather than the absolute differences. This will provide a much more meaningful data point when training audio clips in different shrutis. To evaluate the accuracy of the classifier, a dataset of 676 recordings is used. The songs are distributed across the list of ragas. The goal of this program is to be able to effectively and efficiently label a much wider range of audio clips in more shrutis, ragas, and with more background noise.
[ "['Sanjay Natesan' 'Homayoon Beigi']" ]
null
null
2405.16002
null
null
http://arxiv.org/pdf/2405.16002v1
2024-05-25T01:44:35Z
2024-05-25T01:44:35Z
Does SGD really happen in tiny subspaces?
Understanding the training dynamics of deep neural networks is challenging due to their high-dimensional nature and intricate loss landscapes. Recent studies have revealed that, along the training trajectory, the gradient approximately aligns with a low-rank top eigenspace of the training loss Hessian, referred to as the dominant subspace. Given this alignment, this paper explores whether neural networks can be trained within the dominant subspace, which, if feasible, could lead to more efficient training methods. Our primary observation is that when the SGD update is projected onto the dominant subspace, the training loss does not decrease further. This suggests that the observed alignment between the gradient and the dominant subspace is spurious. Surprisingly, projecting out the dominant subspace proves to be just as effective as the original update, despite removing the majority of the original update component. Similar observations are made for the large learning rate regime (also known as Edge of Stability) and Sharpness-Aware Minimization. We discuss the main causes and implications of this spurious alignment, shedding light on the intricate dynamics of neural network training.
[ "['Minhak Song' 'Kwangjun Ahn' 'Chulhee Yun']" ]
null
null
2405.16003
null
null
http://arxiv.org/pdf/2405.16003v1
2024-05-25T01:49:54Z
2024-05-25T01:49:54Z
Disentangling Heterogeneous Knowledge Concept Embedding for Cognitive Diagnosis on Untested Knowledge
Cognitive diagnosis is a fundamental and critical task in learning assessment, which aims to infer students' proficiency on knowledge concepts from their response logs. Current works assume each knowledge concept will certainly be tested and covered by multiple exercises. However, whether for online or offline courses, it is hardly feasible to completely cover all knowledge concepts in several exercises. Restricted tests lead to undiscovered knowledge deficits, especially untested knowledge concepts (UKCs). In this paper, we propose a novel \underline{Dis}entangling Heterogeneous \underline{K}nowledge \underline{C}ognitive \underline{D}iagnosis framework on untested knowledge (DisKCD). Specifically, we leverage course grades, exercise questions, and resources to learn the potential representations of students, exercises, and knowledge concepts. In particular, knowledge concepts are disentangled into tested and untested based on the limited actual exercises. We construct a heterogeneous relation graph network via students, exercises, tested knowledge concepts (TKCs), and UKCs. Then, through a hierarchical heterogeneous message-passing mechanism, the fine-grained relations are incorporated into the embeddings of the entities. Finally, the embeddings will be applied to multiple existing cognitive diagnosis models to infer students' proficiency on UKCs. Experimental results on real-world datasets show that the proposed model can effectively improve the performance of the task of diagnosing students' proficiency on UKCs. Our anonymous code is available at https://anonymous.4open.science/r/DisKCD.
[ "['Kui Xiao' 'Runtian Xing' 'Miao Zhang' 'Shunfeng Tan' 'Ziming Wang'\n 'Xiaolian Zhu']" ]
null
null
2405.16012
null
null
http://arxiv.org/pdf/2405.16012v1
2024-05-25T02:30:46Z
2024-05-25T02:30:46Z
Pessimistic Backward Policy for GFlowNets
This paper studies Generative Flow Networks (GFlowNets), which learn to sample objects proportionally to a given reward function through the trajectory of state transitions. In this work, we observe that GFlowNets tend to under-exploit the high-reward objects due to training on an insufficient number of trajectories, which may lead to a large gap between the estimated flow and the (known) reward value. In response to this challenge, we propose a pessimistic backward policy for GFlowNets (PBP-GFN), which maximizes the observed flow to align closely with the true reward for the object. We extensively evaluate PBP-GFN across eight benchmarks, including the hyper-grid environment, bag generation, structured set generation, molecular generation, and four RNA sequence generation tasks. In particular, PBP-GFN enhances the discovery of high-reward objects, maintains the diversity of the objects, and consistently outperforms existing methods.
[ "['Hyosoon Jang' 'Yunhui Jang' 'Minsu Kim' 'Jinkyoo Park' 'Sungsoo Ahn']" ]
null
null
2405.16013
null
null
http://arxiv.org/pdf/2405.16013v1
2024-05-25T02:33:17Z
2024-05-25T02:33:17Z
Convergence Behavior of an Adversarial Weak Supervision Method
Labeling data via rules-of-thumb and minimal label supervision is central to Weak Supervision, a paradigm subsuming subareas of machine learning such as crowdsourced learning and semi-supervised ensemble learning. By using this labeled data to train modern machine learning methods, the cost of acquiring large amounts of hand labeled data can be ameliorated. Approaches to combining the rules-of-thumb fall into two camps, reflecting different ideologies of statistical estimation. The most common approach, exemplified by the Dawid-Skene model, is based on probabilistic modeling. The other, developed in the work of Balsubramani-Freund and others, is adversarial and game-theoretic. We provide a variety of statistical results for the adversarial approach under log-loss: we characterize the form of the solution, relate it to logistic regression, demonstrate consistency, and give rates of convergence. On the other hand, we find that probabilistic approaches for the same model class can fail to be consistent. Experimental results are provided to corroborate the theoretical results.
[ "['Steven An' 'Sanjoy Dasgupta']" ]
null
null
2405.16027
null
null
http://arxiv.org/pdf/2405.16027v1
2024-05-25T03:00:06Z
2024-05-25T03:00:06Z
Feature Protection For Out-of-distribution Generalization
With the availability of large pre-trained models, a modern workflow for building real-world machine learning solutions is to fine-tune such models on a downstream task with a relatively small domain-specific dataset. In such applications, one major challenge is that the small fine-tuning dataset does not have sufficient coverage of the distribution encountered when the model is deployed. It is thus important to design fine-tuning methods that are robust to out-of-distribution (OOD) data that are under-represented by the training data. This paper compares common fine-tuning methods to investigate their OOD performance and demonstrates that standard methods result in a significant change to the pre-trained model, causing the fine-tuned features to overfit the fine-tuning dataset, which in turn deteriorates OOD performance. To overcome this issue, we show that protecting pre-trained features leads to a fine-tuned model that is more robust in OOD generalization. We validate the feature protection methods with extensive experiments of fine-tuning CLIP on ImageNet and DomainNet.
[ "['Lu Tan' 'Huei Zhou' 'Yinxiang Huang' 'Zeming Zheng' 'Yujiu Yang']" ]
null
null
2405.16029
null
null
http://arxiv.org/pdf/2405.16029v1
2024-05-25T03:05:19Z
2024-05-25T03:05:19Z
Online Resource Allocation for Edge Intelligence with Colocated Model Retraining and Inference
With edge intelligence, AI models are increasingly pushed to the edge to serve ubiquitous users. However, due to the drift of model, data, and task, AI model deployed at the edge suffers from degraded accuracy in the inference serving phase. Model retraining handles such drifts by periodically retraining the model with newly arrived data. When colocating model retraining and model inference serving for the same model on resource-limited edge servers, a fundamental challenge arises in balancing the resource allocation for model retraining and inference, aiming to maximize long-term inference accuracy. This problem is particularly difficult due to the underlying mathematical formulation being time-coupled, non-convex, and NP-hard. To address these challenges, we introduce a lightweight and explainable online approximation algorithm, named ORRIC, designed to optimize resource allocation for adaptively balancing the accuracy of model training and inference. The competitive ratio of ORRIC outperforms that of the traditional Inference-Only paradigm, especially when data drift persists for a sufficiently lengthy time. This highlights the advantages and applicable scenarios of colocating model retraining and inference. Notably, ORRIC can be translated into several heuristic algorithms for different resource environments. Experiments conducted in real scenarios validate the effectiveness of ORRIC.
[ "['Huaiguang Cai' 'Zhi Zhou' 'Qianyi Huang']" ]
null
null
2405.16030
null
null
http://arxiv.org/pdf/2405.16030v1
2024-05-25T03:07:56Z
2024-05-25T03:07:56Z
Constrained Ensemble Exploration for Unsupervised Skill Discovery
Unsupervised Reinforcement Learning (RL) provides a promising paradigm for learning useful behaviors via reward-free pre-training. Existing methods for unsupervised RL mainly conduct empowerment-driven skill discovery or entropy-based exploration. However, empowerment often leads to static skills, and pure exploration only maximizes the state coverage rather than learning useful behaviors. In this paper, we propose a novel unsupervised RL framework via an ensemble of skills, where each skill performs partition exploration based on the state prototypes. Thus, each skill can explore the clustered area locally, and the ensemble skills maximize the overall state coverage. We adopt state-distribution constraints for the skill occupancy and the desired cluster for learning distinguishable skills. Theoretical analysis is provided for the state entropy and the resulting skill distributions. Based on extensive experiments on several challenging tasks, we find our method learns well-explored ensemble skills and achieves superior performance in various downstream tasks compared to previous methods.
[ "['Chenjia Bai' 'Rushuai Yang' 'Qiaosheng Zhang' 'Kang Xu' 'Yi Chen'\n 'Ting Xiao' 'Xuelong Li']" ]
null
null
2405.16036
null
null
http://arxiv.org/pdf/2405.16036v1
2024-05-25T03:18:52Z
2024-05-25T03:18:52Z
Certifying Adapters: Enabling and Enhancing the Certification of Classifier Adversarial Robustness
Randomized smoothing has become a leading method for achieving certified robustness in deep classifiers against l_{p}-norm adversarial perturbations. Current approaches for achieving certified robustness, such as data augmentation with Gaussian noise and adversarial training, require expensive training procedures that tune large models for different Gaussian noise levels and thus cannot leverage high-performance pre-trained neural networks. In this work, we introduce a novel certifying adapters framework (CAF) that enables and enhances the certification of classifier adversarial robustness. Our approach makes few assumptions about the underlying training algorithm or feature extractor and is thus broadly applicable to different feature extractor architectures (e.g., convolutional neural networks or vision transformers) and smoothing algorithms. We show that CAF (a) enables certification in uncertified models pre-trained on clean datasets and (b) substantially improves the performance of certified classifiers via randomized smoothing and SmoothAdv at multiple radii in CIFAR-10 and ImageNet. We demonstrate that CAF achieves improved certified accuracies when compared to methods based on random or denoised smoothing, and that CAF is insensitive to certifying adapter hyperparameters. Finally, we show that an ensemble of adapters enables a single pre-trained feature extractor to defend against a range of noise perturbation scales.
[ "['Jieren Deng' 'Hanbin Hong' 'Aaron Palmer' 'Xin Zhou' 'Jinbo Bi'\n 'Kaleel Mahmood' 'Yuan Hong' 'Derek Aguiar']" ]
null
null
2405.16039
null
null
http://arxiv.org/pdf/2405.16039v1
2024-05-25T03:24:32Z
2024-05-25T03:24:32Z
MoEUT: Mixture-of-Experts Universal Transformers
Previous work on Universal Transformers (UTs) has demonstrated the importance of parameter sharing across layers. By allowing recurrence in depth, UTs have advantages over standard Transformers in learning compositional generalizations, but layer-sharing comes with a practical limitation of parameter-compute ratio: it drastically reduces the parameter count compared to the non-shared model with the same dimensionality. Naively scaling up the layer size to compensate for the loss of parameters makes its computational resource requirements prohibitive. In practice, no previous work has succeeded in proposing a shared-layer Transformer design that is competitive in parameter count-dominated tasks such as language modeling. Here we propose MoEUT (pronounced "moot"), an effective mixture-of-experts (MoE)-based shared-layer Transformer architecture, which combines several recent advances in MoEs for both feedforward and attention layers of standard Transformers together with novel layer-normalization and grouping schemes that are specific and crucial to UTs. The resulting UT model, for the first time, slightly outperforms standard Transformers on language modeling tasks such as BLiMP and PIQA, while using significantly less compute and memory.
[ "['Róbert Csordás' 'Kazuki Irie' 'Jürgen Schmidhuber' 'Christopher Potts'\n 'Christopher D. Manning']" ]
null
null
2405.16041
null
null
http://arxiv.org/pdf/2405.16041v2
2024-06-01T01:15:43Z
2024-05-25T03:27:04Z
Explainable Molecular Property Prediction: Aligning Chemical Concepts with Predictions via Language Models
Providing explainable molecule property predictions is critical for many scientific domains, such as drug discovery and material science. Though transformer-based language models have shown great potential in accurate molecular property prediction, they neither provide chemically meaningful explanations nor faithfully reveal the molecular structure-property relationships. In this work, we develop a new framework for explainable molecular property prediction based on language models, dubbed as Lamole, which can provide chemical concepts-aligned explanations. We first leverage a designated molecular representation -- the Group SELFIES -- as it can provide chemically meaningful semantics. Because attention mechanisms in Transformers can inherently capture relationships within the input, we further incorporate the attention weights and gradients together to generate explanations for capturing the functional group interactions. We then carefully craft a marginal loss to explicitly optimize the explanations to be able to align with the chemists' annotations. We bridge the manifold hypothesis with the elaborated marginal loss to prove that the loss can align the explanations with the tangent space of the data manifold, leading to concept-aligned explanations. Experimental results over six mutagenicity datasets and one hepatotoxicity dataset demonstrate Lamole can achieve comparable classification accuracy and boost the explanation accuracy by up to 14.8%, being the state-of-the-art in explainable molecular property prediction.
[ "['Zhenzhong Wang' 'Zehui Lin' 'Wanyu Lin' 'Ming Yang' 'Minggang Zeng'\n 'Kay Chen Tan']" ]
null
null
2405.16043
null
null
http://arxiv.org/pdf/2405.16043v1
2024-05-25T03:48:12Z
2024-05-25T03:48:12Z
Theoretical Analysis of Weak-to-Strong Generalization
Strong student models can learn from weaker teachers: when trained on the predictions of a weaker model, a strong pretrained student can learn to correct the weak model's errors and generalize to examples where the teacher is not confident, even when these examples are excluded from training. This enables learning from cheap, incomplete, and possibly incorrect label information, such as coarse logical rules or the generations of a language model. We show that existing weak supervision theory fails to account for both of these effects, which we call pseudolabel correction and coverage expansion, respectively. We give a new bound based on expansion properties of the data distribution and student hypothesis class that directly accounts for pseudolabel correction and coverage expansion. Our bounds capture the intuition that weak-to-strong generalization occurs when the strong model is unable to fit the mistakes of the weak teacher without incurring additional error. We show that these expansion properties can be checked from finite data and give empirical evidence that they hold in practice.
[ "['Hunter Lang' 'David Sontag' 'Aravindan Vijayaraghavan']" ]
null
null
2405.16051
null
null
http://arxiv.org/pdf/2405.16051v1
2024-05-25T04:25:00Z
2024-05-25T04:25:00Z
A Bi-Objective Approach to Last-Mile Delivery Routing Considering Driver Preferences
The Multi-Objective Vehicle Routing Problem (MOVRP) is a complex optimization problem in the transportation and logistics industry. This paper proposes a novel approach to the MOVRP that aims to create routes that consider drivers' and operators' decisions and preferences. We evaluate two approaches to address this objective: visually attractive route planning and data mining of historical driver behavior to plan similar routes. Using a real-world dataset provided by Amazon, we demonstrate that data mining of historical patterns is more effective than visual attractiveness metrics found in the literature. Furthermore, we propose a bi-objective problem to balance the similarity of routes to historical routes and minimize routing costs. We propose a two-stage GRASP algorithm with heuristic box splitting to solve this problem. The proposed algorithm aims to approximate the Pareto front and to present routes that cover a wide range of the objective function space. The results demonstrate that our approach can generate a small number of non-dominated solutions per instance, which can help decision-makers to identify trade-offs between routing costs and drivers' preferences. Our approach has the potential to enhance the last-mile delivery operations of logistics companies by balancing these conflicting objectives.
[ "['Juan Pablo Mesa' 'Alejandro Montoya' 'Raul Ramos-Pollán' 'Mauricio Toro']" ]
null
null
2405.16053
null
null
http://arxiv.org/pdf/2405.16053v1
2024-05-25T04:38:09Z
2024-05-25T04:38:09Z
Pausing Policy Learning in Non-stationary Reinforcement Learning
Real-time inference is a challenge of real-world reinforcement learning due to temporal differences in time-varying environments: the system collects data from the past, updates the decision model in the present, and deploys it in the future. We tackle a common belief that continually updating the decision is optimal to minimize the temporal gap. We propose forecasting an online reinforcement learning framework and show that strategically pausing decision updates yields better overall performance by effectively managing aleatoric uncertainty. Theoretically, we compute an optimal ratio between policy update and hold duration, and show that a non-zero policy hold duration provides a sharper upper bound on the dynamic regret. Our experimental evaluations on three different environments also reveal that a non-zero policy hold duration yields higher rewards compared to continuous decision updates.
[ "['Hyunin Lee' 'Ming Jin' 'Javad Lavaei' 'Somayeh Sojoudi']" ]
null
null
2405.16055
null
null
http://arxiv.org/pdf/2405.16055v1
2024-05-25T04:44:06Z
2024-05-25T04:44:06Z
Federated Learning for Non-factorizable Models using Deep Generative Prior Approximations
Federated learning (FL) allows for collaborative model training across decentralized clients while preserving privacy by avoiding data sharing. However, current FL methods assume conditional independence between client models, limiting the use of priors that capture dependence, such as Gaussian processes (GPs). We introduce the Structured Independence via deep Generative Model Approximation (SIGMA) prior which enables FL for non-factorizable models across clients, expanding the applicability of FL to fields such as spatial statistics, epidemiology, environmental science, and other domains where modeling dependencies is crucial. The SIGMA prior is a pre-trained deep generative model that approximates the desired prior and induces a specified conditional independence structure in the latent variables, creating an approximate model suitable for FL settings. We demonstrate the SIGMA prior's effectiveness on synthetic data and showcase its utility in a real-world example of FL for spatial data, using a conditional autoregressive prior to model spatial dependence across Australia. Our work enables new FL applications in domains where modeling dependent data is essential for accurate predictions and decision-making.
[ "['Conor Hassan' 'Joshua J Bon' 'Elizaveta Semenova' 'Antonietta Mira'\n 'Kerrie Mengersen']" ]
null
null
2405.16056
null
null
http://arxiv.org/pdf/2405.16056v3
2024-05-31T11:44:39Z
2024-05-25T04:51:41Z
FedSheafHN: Personalized Federated Learning on Graph-structured Data
Personalized subgraph Federated Learning (FL) is a task that customizes Graph Neural Networks (GNNs) to individual client needs, accommodating diverse data distributions. However, applying hypernetworks in FL, while aiming to facilitate model personalization, often encounters challenges due to inadequate representation of client-specific characteristics. To overcome these limitations, we propose a model called FedSheafHN, using enhanced collaboration graph embedding and efficient personalized model parameter generation. Specifically, our model embeds each client's local subgraph into a server-constructed collaboration graph. We utilize sheaf diffusion in the collaboration graph to learn client representations. Our model improves the integration and interpretation of complex client characteristics. Furthermore, our model ensures the generation of personalized models through advanced hypernetworks optimized for parallel operations across clients. Empirical evaluations demonstrate that FedSheafHN outperforms existing methods in most scenarios, in terms of client model performance on various graph-structured datasets. It also achieves fast model convergence and effective generalization to new clients.
[ "['Wenfei Liang' 'Yanan Zhao' 'Rui She' 'Yiming Li' 'Wee Peng Tay']" ]
null
null
2405.16057
null
null
http://arxiv.org/pdf/2405.16057v1
2024-05-25T04:55:27Z
2024-05-25T04:55:27Z
SPP: Sparsity-Preserved Parameter-Efficient Fine-Tuning for Large Language Models
Large Language Models (LLMs) have become pivotal in advancing the field of artificial intelligence, yet their immense sizes pose significant challenges for both fine-tuning and deployment. Current post-training pruning methods, while reducing the sizes of LLMs, often fail to maintain their original performance. To address these challenges, this paper introduces SPP, a Sparsity-Preserved Parameter-efficient fine-tuning method. Different from existing post-training pruning approaches that struggle with performance retention, SPP proposes to employ lightweight learnable column and row matrices to optimize sparse LLM weights, keeping the structure and sparsity of pruned pre-trained models intact. By element-wise multiplication and residual addition, SPP ensures the consistency of model sparsity pattern and ratio during both training and weight-merging processes. We demonstrate the effectiveness of SPP by applying it to the LLaMA and LLaMA-2 model families with recent post-training pruning methods. Our results show that SPP significantly enhances the performance of models with different sparsity patterns (i.e. unstructured and N:M sparsity), especially for those with high sparsity ratios (e.g. 75%), making it a promising solution for the efficient fine-tuning of sparse LLMs. Code will be made available at https://github.com/Lucky-Lance/SPP.
[ "['Xudong Lu' 'Aojun Zhou' 'Yuhui Xu' 'Renrui Zhang' 'Peng Gao'\n 'Hongsheng Li']" ]
null
null
2405.16069
null
null
http://arxiv.org/pdf/2405.16069v2
2024-05-31T14:25:58Z
2024-05-25T05:40:16Z
IncomeSCM: From tabular data set to time-series simulator and causal estimation benchmark
Evaluating observational estimators of causal effects demands information that is rarely available: unconfounded interventions and outcomes from the population of interest, created either by randomization or adjustment. As a result, it is customary to fall back on simulators when creating benchmark tasks. Simulators offer great control but are often too simplistic to make challenging tasks, either because they are hand-designed and lack the nuances of real-world data, or because they are fit to observational data without structural constraints. In this work, we propose a general, repeatable strategy for turning observational data into sequential structural causal models and challenging estimation tasks by following two simple principles: 1) fitting real-world data where possible, and 2) creating complexity by composing simple, hand-designed mechanisms. We implement these ideas in a highly configurable software package and apply it to the well-known Adult income data set to construct the IncomeSCM simulator. From this, we devise multiple estimation tasks and sample data sets to compare established estimators of causal effects. The tasks present a suitable challenge, with effect estimates varying greatly in quality between methods, despite similar performance in the modeling of factual outcomes, highlighting the need for dedicated causal estimators and model selection criteria.
[ "['Fredrik D. Johansson']" ]
null
null
2405.16075
null
null
http://arxiv.org/pdf/2405.16075v1
2024-05-25T05:52:04Z
2024-05-25T05:52:04Z
Continuous Temporal Domain Generalization
Temporal Domain Generalization (TDG) addresses the challenge of training predictive models under temporally varying data distributions. Traditional TDG approaches typically focus on domain data collected at fixed, discrete time intervals, which limits their capability to capture the inherent dynamics within continuous-evolving and irregularly-observed temporal domains. To overcome this, this work formalizes the concept of Continuous Temporal Domain Generalization (CTDG), where domain data are derived from continuous times and are collected at arbitrary times. CTDG tackles critical challenges including: 1) Characterizing the continuous dynamics of both data and models, 2) Learning complex high-dimensional nonlinear dynamics, and 3) Optimizing and controlling the generalization across continuous temporal domains. To address them, we propose a Koopman operator-driven continuous temporal domain generalization (Koodos) framework. We formulate the problem within a continuous dynamic system and leverage the Koopman theory to learn the underlying dynamics; the framework is further enhanced with a comprehensive optimization strategy equipped with analysis and control driven by prior knowledge of the dynamics patterns. Extensive experiments demonstrate the effectiveness and efficiency of our approach.
[ "['Zekun Cai' 'Guangji Bai' 'Renhe Jiang' 'Xuan Song' 'Liang Zhao']" ]
null
null
2405.16077
null
null
http://arxiv.org/pdf/2405.16077v2
2024-06-11T03:38:20Z
2024-05-25T05:57:46Z
Finite-Time Analysis for Conflict-Avoidant Multi-Task Reinforcement Learning
Multi-task reinforcement learning (MTRL) has shown great promise in many real-world applications. Existing MTRL algorithms often aim to learn a policy that optimizes individual objective functions simultaneously with a given prior preference (or weights) on different tasks. However, these methods often suffer from the issue of \textit{gradient conflict} such that the tasks with larger gradients dominate the update direction, resulting in a performance degeneration on other tasks. In this paper, we develop a novel dynamic weighting multi-task actor-critic algorithm (MTAC) under two options of sub-procedures named as CA and FC in task weight updates. MTAC-CA aims to find a conflict-avoidant (CA) update direction that maximizes the minimum value improvement among tasks, and MTAC-FC targets at a much faster convergence rate. We provide a comprehensive finite-time convergence analysis for both algorithms. We show that MTAC-CA can find a $\epsilon+\epsilon_{\text{app}}$-accurate Pareto stationary policy using $\mathcal{O}({\epsilon^{-5}})$ samples, while ensuring a small $\epsilon+\sqrt{\epsilon_{\text{app}}}$-level CA distance (defined as the distance to the CA direction), where $\epsilon_{\text{app}}$ is the function approximation error. The analysis also shows that MTAC-FC improves the sample complexity to $\mathcal{O}(\epsilon^{-3})$, but with a constant-level CA distance. Our experiments on MT10 demonstrate the improved performance of our algorithms over existing MTRL methods with fixed preference.
[ "['Yudan Wang' 'Peiyao Xiao' 'Hao Ban' 'Kaiyi Ji' 'Shaofeng Zou']" ]
null
null
2405.16083
null
null
http://arxiv.org/pdf/2405.16083v1
2024-05-25T06:26:02Z
2024-05-25T06:26:02Z
From Orthogonality to Dependency: Learning Disentangled Representation for Multi-Modal Time-Series Sensing Signals
Existing methods for multi-modal time series representation learning aim to disentangle the modality-shared and modality-specific latent variables. Although achieving notable performances on downstream tasks, they usually assume an orthogonal latent space. However, the modality-specific and modality-shared latent variables might be dependent in real-world scenarios. Therefore, we propose a general generation process, where the modality-shared and modality-specific latent variables are dependent, and further develop a \textbf{M}ulti-mod\textbf{A}l \textbf{TE}mporal Disentanglement (\textbf{MATE}) model. Specifically, our \textbf{MATE} model is built on a temporally variational inference architecture with the modality-shared and modality-specific prior networks for the disentanglement of latent variables. Furthermore, we establish identifiability results to show that the extracted representation is disentangled. More specifically, we first achieve the subspace identifiability for modality-shared and modality-specific latent variables by leveraging the pairing of multi-modal data. Then we establish the component-wise identifiability of modality-specific latent variables by employing sufficient changes of historical latent variables. Extensive experimental studies on multi-modal sensors, human activity recognition, and healthcare datasets show a general improvement in different downstream tasks, highlighting the effectiveness of our method in real-world scenarios.
[ "['Ruichu Cai' 'Zhifang Jiang' 'Zijian Li' 'Weilin Chen' 'Xuexin Chen'\n 'Zhifeng Hao' 'Yifan Shen' 'Guangyi Chen' 'Kun Zhang']" ]
null
null
2405.16088
null
null
http://arxiv.org/pdf/2405.16088v2
2024-06-03T10:26:10Z
2024-05-25T06:39:39Z
Estimating the normal-inverse-Wishart distribution
The normal-inverse-Wishart (NIW) distribution is commonly used as a prior distribution for the mean and covariance parameters of a multivariate normal distribution. The family of NIW distributions is also a minimal exponential family. In this short note we describe a convergent procedure for converting from mean parameters to natural parameters in the NIW family, or -- equivalently -- for performing maximum likelihood estimation of the natural parameters given observed sufficient statistics. This is needed, for example, when using a NIW base family in expectation propagation.
[ "['Jonathan So']" ]
null
null
2405.16104
null
null
http://arxiv.org/pdf/2405.16104v1
2024-05-25T07:31:24Z
2024-05-25T07:31:24Z
Global Well-posedness and Convergence Analysis of Score-based Generative Models via Sharp Lipschitz Estimates
We establish global well-posedness and convergence of the score-based generative models (SGM) under minimal general assumptions of initial data for score estimation. For the smooth case, we start from a Lipschitz bound of the score function with optimal time length. The optimality is validated by an example whose Lipschitz constant of scores is bounded at the initial time but blows up in finite time. This necessitates the separation of time scales in conventional bounds for non-log-concave distributions. In contrast, our follow-up analysis only relies on a local Lipschitz condition and is valid globally in time. This leads to the convergence of the numerical scheme without time separation. For the non-smooth case, we show that the optimal Lipschitz bound is O(1/t) in the point-wise sense for distributions supported on a compact, smooth and low-dimensional manifold with boundary.
[ "['Connor Mooney' 'Zhongjian Wang' 'Jack Xin' 'Yifeng Yu']" ]
null
null
2405.16113
null
null
http://arxiv.org/pdf/2405.16113v1
2024-05-25T07:52:36Z
2024-05-25T07:52:36Z
Enabling On-Device Learning via Experience Replay with Efficient Dataset Condensation
Upon deployment to edge devices, it is often desirable for a model to further learn from streaming data to improve accuracy. However, extracting representative features from such data is challenging because it is typically unlabeled, non-independent and identically distributed (non-i.i.d), and is seen only once. To mitigate this issue, a common strategy is to maintain a small data buffer on the edge device to hold the most representative data for further learning. As most data is either never stored or quickly discarded, identifying the most representative data to avoid significant information loss becomes critical. In this paper, we propose an on-device framework that addresses this issue by condensing incoming data into more informative samples. Specifically, to effectively handle unlabeled incoming data, we propose a pseudo-labeling technique designed for unlabeled on-device learning environments. Additionally, we develop a dataset condensation technique that only requires little computation resources. To counteract the effects of noisy labels during the condensation process, we further utilize a contrastive learning objective to improve the purity of class data within the buffer. Our empirical results indicate substantial improvements over existing methods, particularly when buffer capacity is severely restricted. For instance, with a buffer capacity of just one sample per class, our method achieves an accuracy that outperforms the best existing baseline by 58.4% on the CIFAR-10 dataset.
[ "['Gelei Xu' 'Ningzhi Tang' 'Jun Xia' 'Wei Jin' 'Yiyu Shi']" ]
null
null
2405.16114
null
null
http://arxiv.org/abs/2405.16114v1
2024-05-25T07:55:02Z
2024-05-25T07:55:02Z
Multi-scale Quaternion CNN and BiGRU with Cross Self-attention Feature Fusion for Fault Diagnosis of Bearing
In recent years, deep learning has led to significant advances in bearing fault diagnosis (FD). Most techniques aim to achieve greater accuracy. However, they are sensitive to noise and lack robustness, resulting in insufficient domain adaptation and anti-noise ability. A comparison of existing studies reveals that giving equal attention to all features fails to differentiate their significance. In this work, we propose a novel FD model by integrating a multi-scale quaternion convolutional neural network (MQCNN), a bidirectional gated recurrent unit (BiGRU), and cross self-attention feature fusion (CSAFF). We have developed innovative designs in two modules, namely MQCNN and CSAFF. Firstly, MQCNN applies quaternion convolution to a multi-scale architecture for the first time, aiming to extract the rich hidden features of the original signal at multiple scales. Then, the extracted multi-scale information is input into CSAFF for feature fusion, where CSAFF innovatively incorporates a cross self-attention mechanism to enhance discriminative interaction representation within features. Finally, BiGRU captures temporal dependencies while a softmax layer is employed for fault classification, achieving accurate FD. To assess the efficacy of our approach, we experiment on three public datasets (CWRU, MFPT, and Ottawa) and compare it with other excellent methods. The results confirm its state-of-the-art performance, with average accuracies of up to 99.99%, 100%, and 99.21% on the CWRU, MFPT, and Ottawa datasets, respectively. Moreover, we perform practical tests and ablation experiments to validate the efficacy and robustness of the proposed approach. Code is available at https://github.com/mubai011/MQCCAF.
[ "['Huanbai Liu' 'Fanlong Zhang' 'Yin Tan' 'Lian Huang' 'Yan Li'\n 'Guoheng Huang' 'Shenghong Luo' 'An Zeng']" ]
null
null
2405.16115
null
null
http://arxiv.org/pdf/2405.16115v1
2024-05-25T08:00:44Z
2024-05-25T08:00:44Z
SNOBERT: A Benchmark for clinical notes entity linking in the SNOMED CT clinical terminology
The extraction and analysis of insights from medical data, primarily stored in free-text formats by healthcare workers, presents significant challenges due to its unstructured nature. Medical coding, a crucial process in healthcare, remains minimally automated due to the complexity of medical ontologies and restricted access to medical texts for training Natural Language Processing models. In this paper, we propose a method, "SNOBERT," for linking text spans in clinical notes to specific concepts in SNOMED CT using BERT-based models. The method consists of two stages: candidate selection and candidate matching. The models were trained on one of the largest publicly available datasets of labeled clinical notes. SNOBERT outperforms other classical deep learning-based methods, as confirmed by the results of a challenge in which it was applied.
[ "['Mikhail Kulyabin' 'Gleb Sokolov' 'Aleksandr Galaida' 'Andreas Maier'\n 'Tomas Arias-Vergara']" ]
null
null
2405.16118
null
null
http://arxiv.org/pdf/2405.16118v1
2024-05-25T08:09:36Z
2024-05-25T08:09:36Z
Beyond Primal-Dual Methods in Bandits with Stochastic and Adversarial Constraints
We address a generalization of the bandit with knapsacks problem, where a learner aims to maximize rewards while satisfying an arbitrary set of long-term constraints. Our goal is to design best-of-both-worlds algorithms that perform optimally under both stochastic and adversarial constraints. Previous works address this problem via primal-dual methods, and require some stringent assumptions, namely Slater's condition, and in adversarial settings, they either assume knowledge of a lower bound on Slater's parameter, or impose strong requirements on the primal and dual regret minimizers such as requiring weak adaptivity. We propose an alternative and more natural approach based on optimistic estimations of the constraints. Surprisingly, we show that estimating the constraints with a UCB-like approach guarantees optimal performance. Our algorithm consists of two main components: (i) a regret minimizer working on \emph{moving strategy sets} and (ii) an estimate of the feasible set as an optimistic weighted empirical mean of previous samples. The key challenge in this approach is designing adaptive weights that meet the different requirements for stochastic and adversarial constraints. Our algorithm is significantly simpler than previous approaches, and has a cleaner analysis. Moreover, ours is the first best-of-both-worlds algorithm providing bounds logarithmic in the number of constraints. Additionally, in stochastic settings, it provides $\widetilde O(\sqrt{T})$ regret \emph{without} Slater's condition.
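As an illustration only, the sketch below shows what an optimistic (UCB-like) estimate of a constraint built from a weighted empirical mean of past samples might look like; the function name, the weighting scheme, and the exploration bonus are our own assumptions, not the algorithm from the paper.

```python
import numpy as np

def optimistic_constraint_estimate(costs, weights, confidence=1.0):
    """Optimistic (UCB-style) estimate of a long-term constraint cost.

    costs:   observed per-round constraint costs for one arm
    weights: positive weights for the weighted empirical mean
    Returns the weighted mean minus an exploration bonus, i.e. an
    optimistic (lower) estimate of the expected cost.
    """
    costs = np.asarray(costs, dtype=float)
    weights = np.asarray(weights, dtype=float)
    n_eff = weights.sum() ** 2 / (weights ** 2).sum()   # effective sample size
    mean = np.average(costs, weights=weights)
    bonus = confidence * np.sqrt(np.log(len(costs) + 1) / max(n_eff, 1.0))
    return mean - bonus  # optimism: assume the constraint is easier to satisfy

print(optimistic_constraint_estimate([0.2, 0.4, 0.1], [1.0, 0.8, 0.6]))
```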
[ "['Martino Bernasconi' 'Matteo Castiglioni' 'Andrea Celli' 'Federico Fusco']" ]
null
null
2405.16119
null
null
http://arxiv.org/pdf/2405.16119v1
2024-05-25T08:15:21Z
2024-05-25T08:15:21Z
Method and Software Tool for Generating Artificial Databases of Biomedical Images Based on Deep Neural Networks
A wide variety of biomedical image data, as well as methods for generating training images using basic deep neural networks, were analyzed. Additionally, existing platforms for creating images were reviewed with respect to their characteristics. The article develops a method for generating artificial biomedical images based on GANs, and a GAN architecture is designed for biomedical image synthesis. The data foundation and the module for generating training images were designed and implemented in a software system. Finally, the generated image database was compared with known databases.
[ "['Oleh Berezsky' 'Petro Liashchynskyi' 'Oleh Pitsun' 'Grygoriy Melnyk']" ]
null
null
2405.16122
null
null
http://arxiv.org/pdf/2405.16122v1
2024-05-25T08:23:05Z
2024-05-25T08:23:05Z
Prompt Optimization with EASE? Efficient Ordering-aware Automated Selection of Exemplars
Large language models (LLMs) have shown impressive capabilities in real-world applications. The capability of in-context learning (ICL) allows us to adapt an LLM to downstream tasks by including input-label exemplars in the prompt without model fine-tuning. However, the quality of these exemplars in the prompt greatly impacts performance, highlighting the need for an effective automated exemplar selection method. Recent studies have explored retrieval-based approaches to select exemplars tailored to individual test queries, which can be undesirable due to extra test-time computation and an increased risk of data exposure. Moreover, existing methods fail to adequately account for the impact of exemplar ordering on the performance. On the other hand, the impact of the instruction, another essential component in the prompt given to the LLM, is often overlooked in existing exemplar selection methods. To address these challenges, we propose a novel method named EASE, which leverages the hidden embedding from a pre-trained language model to represent ordered sets of exemplars and uses a neural bandit algorithm to optimize the sets of exemplars while accounting for exemplar ordering. Our EASE can efficiently find an ordered set of exemplars that performs well for all test queries from a given task, thereby eliminating test-time computation. Importantly, EASE can be readily extended to jointly optimize both the exemplars and the instruction. Through extensive empirical evaluations (including novel tasks), we demonstrate the superiority of EASE over existing methods, and reveal practical insights about the impact of exemplar selection on ICL, which may be of independent interest. Our code is available at https://github.com/ZhaoxuanWu/EASE-Prompt-Optimization.
[ "['Zhaoxuan Wu' 'Xiaoqiang Lin' 'Zhongxiang Dai' 'Wenyang Hu' 'Yao Shu'\n 'See-Kiong Ng' 'Patrick Jaillet' 'Bryan Kian Hsiang Low']" ]
null
null
2405.16124
null
null
http://arxiv.org/pdf/2405.16124v1
2024-05-25T08:29:46Z
2024-05-25T08:29:46Z
Unsupervised Meta-Learning via In-Context Learning
Unsupervised meta-learning aims to learn feature representations from unsupervised datasets that can transfer to downstream tasks with limited labeled data. In this paper, we propose a novel approach to unsupervised meta-learning that leverages the generalization abilities of in-context learning observed in transformer architectures. Our method reframes meta-learning as a sequence modeling problem, enabling the transformer encoder to learn task context from support images and utilize it to predict query images. At the core of our approach lies the creation of diverse tasks generated using a combination of data augmentations and a mixing strategy that challenges the model during training while fostering generalization to unseen tasks at test time. Experimental results on benchmark datasets, including miniImageNet, CIFAR-fs, CUB, and Aircraft, showcase the superiority of our approach over existing unsupervised meta-learning baselines, establishing it as the new state-of-the-art in the field. Remarkably, our method achieves competitive results with supervised and self-supervised approaches, underscoring the efficacy of the model in leveraging generalization over memorization.
[ "['Anna Vettoruzzo' 'Lorenzo Braccaioli' 'Joaquin Vanschoren'\n 'Marlena Nowaczyk']" ]
null
null
2405.16126
null
null
http://arxiv.org/pdf/2405.16126v1
2024-05-25T08:34:49Z
2024-05-25T08:34:49Z
Near-Optimal Distributed Minimax Optimization under the Second-Order Similarity
This paper considers distributed convex-concave minimax optimization under second-order similarity. We propose the stochastic variance-reduced optimistic gradient sliding (SVOGS) method, which takes advantage of the finite-sum structure in the objective by involving mini-batch client sampling and variance reduction. We prove SVOGS can achieve the $\varepsilon$-duality gap within communication rounds of ${\mathcal O}(\delta D^2/\varepsilon)$, communication complexity of ${\mathcal O}(n+\sqrt{n}\delta D^2/\varepsilon)$, and local gradient calls of $\tilde{\mathcal O}(n+(\sqrt{n}\delta+L)D^2/\varepsilon\log(1/\varepsilon))$, where $n$ is the number of nodes, $\delta$ is the degree of the second-order similarity, $L$ is the smoothness parameter and $D$ is the diameter of the constraint set. We can verify that all of the above complexities (nearly) match the corresponding lower bounds. For the specific $\mu$-strongly-convex-$\mu$-strongly-concave case, our algorithm has upper bounds on communication rounds, communication complexity, and local gradient calls of $\mathcal O(\delta/\mu\log(1/\varepsilon))$, ${\mathcal O}((n+\sqrt{n}\delta/\mu)\log(1/\varepsilon))$, and $\tilde{\mathcal O}((n+(\sqrt{n}\delta+L)/\mu)\log(1/\varepsilon))$ respectively, which are also nearly tight. Furthermore, we conduct numerical experiments to show the empirical advantages of the proposed method.
[ "['Qihao Zhou' 'Haishan Ye' 'Luo Luo']" ]
null
null
2405.16130
null
null
http://arxiv.org/pdf/2405.16130v1
2024-05-25T08:53:49Z
2024-05-25T08:53:49Z
Automating the Selection of Proxy Variables of Unmeasured Confounders
Recently, interest has grown in the use of proxy variables of unobserved confounding for inferring the causal effect in the presence of unmeasured confounders from observational data. One difficulty inhibiting the practical use is finding valid proxy variables of unobserved confounding to a target causal effect of interest. These proxy variables are typically justified by background knowledge. In this paper, we investigate the estimation of causal effects among multiple treatments and a single outcome, all of which are affected by unmeasured confounders, within a linear causal model, without prior knowledge of the validity of proxy variables. To be more specific, we first extend the existing proxy variable estimator, originally addressing a single unmeasured confounder, to accommodate scenarios where multiple unmeasured confounders exist between the treatments and the outcome. Subsequently, we present two different sets of precise identifiability conditions for selecting valid proxy variables of unmeasured confounders, based on the second-order statistics and higher-order statistics of the data, respectively. Moreover, we propose two data-driven methods for the selection of proxy variables and for the unbiased estimation of causal effects. Theoretical analysis demonstrates the correctness of our proposed algorithms. Experimental results on both synthetic and real-world data show the effectiveness of the proposed approach.
[ "['Feng Xie' 'Zhengming Chen' 'Shanshan Luo' 'Wang Miao' 'Ruichu Cai'\n 'Zhi Geng']" ]
null
null
2405.16136
null
null
http://arxiv.org/pdf/2405.16136v1
2024-05-25T09:10:12Z
2024-05-25T09:10:12Z
C3LLM: Conditional Multimodal Content Generation Using Large Language Models
We introduce C3LLM (Conditioned-on-Three-Modalities Large Language Models), a novel framework combining three tasks of video-to-audio, audio-to-text, and text-to-audio together. C3LLM adapts the Large Language Model (LLM) structure as a bridge for aligning different modalities, synthesizing the given conditional information, and making multimodal generation in a discrete manner. Our contributions are as follows. First, we adapt a hierarchical structure for audio generation tasks with pre-trained audio codebooks. Specifically, we train the LLM to generate audio semantic tokens from the given conditions, and further use a non-autoregressive transformer to generate different levels of acoustic tokens in layers to better enhance the fidelity of the generated audio. Second, based on the intuition that LLMs were originally designed for discrete tasks with the next-word prediction method, we use the discrete representation for audio generation and compress their semantic meanings into acoustic tokens, similar to adding "acoustic vocabulary" to LLM. Third, our method combines the previous tasks of audio understanding, video-to-audio generation, and text-to-audio generation together into one unified model, providing more versatility in an end-to-end fashion. Our C3LLM achieves improved results through various automated evaluation metrics, providing better semantic alignment compared to previous methods.
[ "['Zixuan Wang' 'Qinkai Duan' 'Yu-Wing Tai' 'Chi-Keung Tang']" ]
null
null
2405.16141
null
null
http://arxiv.org/pdf/2405.16141v3
2024-06-28T03:59:15Z
2024-05-25T09:21:43Z
AIGB: Generative Auto-bidding via Diffusion Modeling
Auto-bidding plays a crucial role in facilitating online advertising by automatically providing bids for advertisers. Reinforcement learning (RL) has gained popularity for auto-bidding. However, most current RL auto-bidding methods are modeled through the Markov Decision Process (MDP), which assumes Markovian state transitions. This assumption restricts the ability to perform in long-horizon scenarios and makes the model unstable when dealing with highly random online advertising environments. To tackle this issue, this paper introduces AI-Generated Bidding (AIGB), a novel paradigm for auto-bidding through generative modeling. In this paradigm, we propose DiffBid, a conditional diffusion modeling approach for bid generation. DiffBid directly models the correlation between the return and the entire trajectory, effectively avoiding error propagation across time steps in long horizons. Additionally, DiffBid offers a versatile approach for generating trajectories that maximize given targets while adhering to specific constraints. Extensive experiments conducted on a real-world dataset and an online A/B test on the Alibaba advertising platform demonstrate the effectiveness of DiffBid, achieving a 2.81% increase in GMV and a 3.36% increase in ROI.
[ "['Jiayan Guo' 'Yusen Huo' 'Zhilin Zhang' 'Tianyu Wang' 'Chuan Yu'\n 'Jian Xu' 'Yan Zhang' 'Bo Zheng']" ]
null
null
2405.16148
null
null
http://arxiv.org/pdf/2405.16148v1
2024-05-25T09:37:01Z
2024-05-25T09:37:01Z
Accelerating Transformers with Spectrum-Preserving Token Merging
Increasing the throughput of the Transformer architecture, a foundational component used in numerous state-of-the-art models for vision and language tasks (e.g., GPT, LLaVa), is an important problem in machine learning. One recent and effective strategy is to merge token representations within Transformer models, aiming to reduce computational and memory requirements while maintaining accuracy. Prior works have proposed algorithms based on Bipartite Soft Matching (BSM), which divides tokens into distinct sets and merges the top $k$ most similar tokens. However, these methods have significant drawbacks, such as sensitivity to token-splitting strategies and damage to informative tokens in later layers. This paper presents a novel paradigm called PiToMe, which prioritizes the preservation of informative tokens using an additional metric termed the energy score. This score identifies large clusters of similar tokens as high-energy, indicating potential candidates for merging, while smaller (unique and isolated) clusters are considered low-energy and preserved. Experimental findings demonstrate that PiToMe saves 40-60% of the FLOPs of the base models while exhibiting superior off-the-shelf performance on image classification (0.5% average performance drop of ViT-MAE-H compared to 2.6% for baselines), image-text retrieval (0.3% average performance drop of CLIP on Flickr30k compared to 4.5% for other methods), and analogously in visual question answering with LLaVa-7B. Furthermore, PiToMe is theoretically shown to preserve intrinsic spectral properties of the original token space under mild conditions.
[ "['Hoai-Chau Tran' 'Duy M. H. Nguyen' 'Duy M. Nguyen' 'Trung-Tin Nguyen'\n 'Ngan Le' 'Pengtao Xie' 'Daniel Sonntag' 'James Y. Zou' 'Binh T. Nguyen'\n 'Mathias Niepert']" ]
null
null
2405.16153
null
null
http://arxiv.org/pdf/2405.16153v3
2024-06-19T13:26:53Z
2024-05-25T09:43:38Z
DefSent+: Improving sentence embeddings of language models by projecting definition sentences into a quasi-isotropic or isotropic vector space of unlimited dictionary entries
This paper presents a significant improvement on the previous conference paper known as DefSent. The prior study seeks to improve sentence embeddings of language models by projecting definition sentences into the vector space of dictionary entries. We discover that this approach is not fully explored due to the methodological limitation of using word embeddings of language models to represent dictionary entries. This leads to two hindrances. First, dictionary entries are constrained by the single-word vocabulary, and thus cannot be fully exploited. Second, semantic representations of language models are known to be anisotropic, but pre-processing word embeddings for DefSent is not allowed because its weight is frozen during training and tied to the prediction layer. In this paper, we propose a novel method to progressively build entry embeddings not subject to the limitations. As a result, definition sentences can be projected into a quasi-isotropic or isotropic vector space of unlimited dictionary entries, so that sentence embeddings of noticeably better quality are attainable. We abbreviate our approach as DefSent+ (a plus version of DefSent), involving the following strengths: 1) the task performance on measuring sentence similarities is significantly improved compared to DefSent; 2) when DefSent+ is used to further train data-augmented models like SIMCSE, SNCSE, and SynCSE, state-of-the-art performance on measuring sentence similarities can be achieved among the approaches without using manually labeled datasets; 3) DefSent+ is also competitive in feature-based transfer for NLP downstream tasks.
[ "['Xiaodong Liu']" ]
null
null
2405.16156
null
null
http://arxiv.org/pdf/2405.16156v1
2024-05-25T09:47:59Z
2024-05-25T09:47:59Z
Mixture of In-Context Prompters for Tabular PFNs
Recent benchmarks found In-Context Learning (ICL) outperforms both deep learning and tree-based algorithms on small tabular datasets. However, on larger datasets, ICL for tabular learning cannot run without severely compromising performance, due to its quadratic space and time complexity w.r.t. dataset size. We propose MIXTUREPFN, which both extends nearest-neighbor sampling to the state-of-the-art ICL for tabular learning model and uses bootstrapping to finetune said model on the inference-time dataset. MIXTUREPFN is the Condorcet winner across 36 diverse tabular datasets against 19 strong deep learning and tree-based baselines, achieving the highest mean rank among Top-10 aforementioned algorithms with statistical significance.
[ "['Derek Xu' 'Olcay Cirit' 'Reza Asadi' 'Yizhou Sun' 'Wei Wang']" ]
null
null
2405.16158
null
null
http://arxiv.org/pdf/2405.16158v1
2024-05-25T09:53:25Z
2024-05-25T09:53:25Z
Bigger, Regularized, Optimistic: scaling for compute and sample-efficient continuous control
Sample efficiency in Reinforcement Learning (RL) has traditionally been driven by algorithmic enhancements. In this work, we demonstrate that scaling can also lead to substantial improvements. We conduct a thorough investigation into the interplay of scaling model capacity and domain-specific RL enhancements. These empirical findings inform the design choices underlying our proposed BRO (Bigger, Regularized, Optimistic) algorithm. The key innovation behind BRO is that strong regularization allows for effective scaling of the critic networks, which, paired with optimistic exploration, leads to superior performance. BRO achieves state-of-the-art results, significantly outperforming the leading model-based and model-free algorithms across 40 complex tasks from the DeepMind Control, MetaWorld, and MyoSuite benchmarks. BRO is the first model-free algorithm to achieve near-optimal policies in the notoriously challenging Dog and Humanoid tasks.
[ "['Michal Nauman' 'Mateusz Ostaszewski' 'Krzysztof Jankowski' 'Piotr Miłoś'\n 'Marek Cygan']" ]
null
null
2405.16159
null
null
http://arxiv.org/pdf/2405.16159v1
2024-05-25T09:58:33Z
2024-05-25T09:58:33Z
A Declarative Query Language for Scientific Machine Learning
The popularity of data science as a discipline and its importance in the emerging economy and industrial progress dictate that machine learning be democratized for the masses. This also means that the current practice of workforce training using machine learning tools, which requires low-level statistical and algorithmic details, is a barrier that needs to be addressed. Similar to data management languages such as SQL, machine learning needs to be practiced at a conceptual level to help make it a staple tool for general users. In particular, the technical sophistication demanded by existing machine learning frameworks is prohibitive for many scientists who are not computationally savvy or well versed in machine learning techniques. The learning curve to use the needed machine learning tools is also too high for them to take advantage of these powerful platforms to rapidly advance science. In this paper, we introduce a new declarative machine learning query language, called {\em MQL}, for naive users. We discuss its merit and possible ways of implementing it over a traditional relational database system. We discuss two materials science experiments implemented using MQL on a materials science workflow system called MatFlow.
[ "['Hasan M Jamil']" ]
null
null
2405.16164
null
null
http://arxiv.org/pdf/2405.16164v2
2024-05-28T12:36:23Z
2024-05-25T10:15:51Z
Acquiring Better Load Estimates by Combining Anomaly and Change-point Detection in Power Grid Time-series Measurements
In this paper we present novel methodology for automatic anomaly and switch event filtering to improve load estimation in power grid systems. By leveraging unsupervised methods with supervised optimization, our approach prioritizes interpretability while ensuring robust and generalizable performance on unseen data. Through experimentation, a combination of binary segmentation for change point detection and statistical process control for anomaly detection emerges as the most effective strategy, specifically when ensembled in a novel sequential manner. Results indicate that substantial potential is wasted when filtering is not applied. The automatic load estimation is also fairly accurate, with approximately 90% of estimates falling within a 10% error margin, and only a single significant failure in both the minimum and maximum load estimates across 60 measurements in the test set. Our methodology's interpretability makes it particularly suitable for critical infrastructure planning, thereby enhancing decision-making processes.
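A minimal sketch of the two ingredients named above, assuming the ruptures library for binary-segmentation change-point detection and a simple Shewhart-style control chart for anomaly flagging; the sequential ensembling and supervised tuning used by the authors are not reproduced here.

```python
import numpy as np
import ruptures as rpt  # pip install ruptures

def detect_events(signal, n_bkps=3, z_thresh=3.0):
    """Combine change-point detection and control-chart anomaly flags."""
    # 1) Binary segmentation for change points (e.g. switch events)
    change_points = rpt.Binseg(model="l2").fit(signal).predict(n_bkps=n_bkps)

    # 2) Shewhart-style statistical process control for point anomalies
    mean, std = signal.mean(), signal.std()
    anomalies = np.where(np.abs(signal - mean) > z_thresh * std)[0]
    return change_points, anomalies

# Toy signal with one level shift
signal = np.concatenate([np.random.normal(10, 1, 200),
                         np.random.normal(14, 1, 200)])
print(detect_events(signal, n_bkps=1))
```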
[ "['Roel Bouman' 'Linda Schmeitz' 'Luco Buise' 'Jacco Heres'\n 'Yuliya Shapovalova' 'Tom Heskes']" ]
null
null
2405.16168
null
null
http://arxiv.org/pdf/2405.16168v1
2024-05-25T10:25:48Z
2024-05-25T10:25:48Z
Multi-Player Approaches for Dueling Bandits
Various approaches have emerged for multi-armed bandits in distributed systems. The multiplayer dueling bandit problem, common in scenarios with only preference-based information like human feedback, introduces challenges related to controlling collaborative exploration of non-informative arm pairs, but has received little attention. To fill this gap, we demonstrate that the direct use of a Follow Your Leader black-box approach matches the lower bound for this setting when utilizing known dueling bandit algorithms as a foundation. Additionally, we analyze a message-passing fully distributed approach with a novel Condorcet-winner recommendation protocol, resulting in expedited exploration in many cases. Our experimental comparisons reveal that our multiplayer algorithms surpass single-player benchmark algorithms, underscoring their efficacy in addressing the nuanced challenges of the multiplayer dueling bandit setting.
[ "['Or Raveh' 'Junya Honda' 'Masashi Sugiyama']" ]
null
null
2405.16173
null
null
http://arxiv.org/pdf/2405.16173v1
2024-05-25T10:45:46Z
2024-05-25T10:45:46Z
Diffusion-based Reinforcement Learning via Q-weighted Variational Policy Optimization
Diffusion models have garnered widespread attention in Reinforcement Learning (RL) for their powerful expressiveness and multimodality. It has been verified that utilizing diffusion policies can significantly improve the performance of RL algorithms in continuous control tasks by overcoming the limitations of unimodal policies, such as Gaussian policies, and providing the agent with enhanced exploration capabilities. However, existing works mainly focus on the application of diffusion policies in offline RL, while their incorporation into online RL is less investigated. The training objective of the diffusion model, known as the variational lower bound, cannot be optimized directly in online RL due to the unavailability of 'good' actions. This leads to difficulties in conducting diffusion policy improvement. To overcome this, we propose a novel model-free diffusion-based online RL algorithm, Q-weighted Variational Policy Optimization (QVPO). Specifically, we introduce the Q-weighted variational loss, which can be proved to be a tight lower bound of the policy objective in online RL under certain conditions. To fulfill these conditions, the Q-weight transformation functions are introduced for general scenarios. Additionally, to further enhance the exploration capability of the diffusion policy, we design a special entropy regularization term. We also develop an efficient behavior policy to enhance sample efficiency by reducing the variance of the diffusion policy during online interactions. Consequently, the QVPO algorithm leverages the exploration capabilities and multimodality of diffusion policies, preventing the RL agent from converging to a sub-optimal policy. To verify the effectiveness of QVPO, we conduct comprehensive experiments on MuJoCo benchmarks. The final results demonstrate that QVPO achieves state-of-the-art performance on both cumulative reward and sample efficiency.
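To make the idea of Q-weighting concrete, here is a schematic denoising objective in which the critic's values reweight per-sample diffusion losses; this is an illustrative reading of the abstract (the temperature-softmax weighting is our assumption), not the paper's exact Q-weighted variational loss.

```python
import torch
import torch.nn.functional as F

def q_weighted_denoising_loss(eps_pred, eps_true, q_values, temperature=1.0):
    """Schematic Q-weighted denoising objective (illustrative only).

    eps_pred / eps_true: predicted vs. sampled diffusion noise, shape (B, act_dim)
    q_values:            critic values Q(s, a) for the denoised actions, shape (B,)
    Higher-Q actions get larger weight, so the diffusion policy is pulled
    toward high-value regions instead of imitating all sampled actions equally.
    """
    weights = torch.softmax(q_values / temperature, dim=0) * q_values.numel()
    per_sample = F.mse_loss(eps_pred, eps_true, reduction="none").mean(dim=-1)
    return (weights.detach() * per_sample).mean()
```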
[ "['Shutong Ding' 'Ke Hu' 'Zhenhao Zhang' 'Kan Ren' 'Weinan Zhang'\n 'Jingyi Yu' 'Jingya Wang' 'Ye Shi']" ]
null
null
2405.16183
null
null
http://arxiv.org/pdf/2405.16183v1
2024-05-25T11:18:27Z
2024-05-25T11:18:27Z
Graph Neural PDE Solvers with Conservation and Similarity-Equivariance
Utilizing machine learning to address partial differential equations (PDEs) presents significant challenges due to the diversity of spatial domains and their corresponding state configurations, which complicates the task of encompassing all potential scenarios through data-driven methodologies alone. Moreover, there are legitimate concerns regarding the generalization and reliability of such approaches, as they often overlook inherent physical constraints. In response to these challenges, this study introduces a novel machine-learning architecture that is highly generalizable and adheres to conservation laws and physical symmetries, thereby ensuring greater reliability. The foundation of this architecture is graph neural networks (GNNs), which are adept at accommodating a variety of shapes and forms. Additionally, we explore the parallels between GNNs and traditional numerical solvers, facilitating a seamless integration of conservative principles and symmetries into machine learning models. Our findings from experiments demonstrate that the model's inclusion of physical laws significantly enhances its generalizability, i.e., no significant accuracy degradation for unseen spatial domains while other models degrade. The code is available at https://github.com/yellowshippo/fluxgnn-icml2024.
[ "['Masanobu Horie' 'Naoto Mitsume']" ]
null
null
2405.16184
null
null
http://arxiv.org/pdf/2405.16184v1
2024-05-25T11:21:12Z
2024-05-25T11:21:12Z
Safe Deep Model-Based Reinforcement Learning with Lyapunov Functions
Model-based Reinforcement Learning (MBRL) has shown many desirable properties for intelligent control tasks. However, satisfying safety and stability constraints during training and rollout remains an open question. We propose a new model-based RL framework that enables efficient policy learning with unknown dynamics, building on the learning model predictive control (LMPC) framework with mathematically provable guarantees of stability. We introduce and explore a novel method for adding safety constraints to model-based RL during training and policy learning. The new stability-augmented framework consists of a neural-network-based learner that learns to construct a Lyapunov function, and a model-based RL agent that consistently completes the tasks while satisfying user-specified constraints given only sub-optimal demonstrations and sparse-cost feedback. We demonstrate the capability of the proposed framework through simulated experiments.
[ "['Harry Zhang']" ]
null
null
2405.16185
null
null
http://arxiv.org/pdf/2405.16185v1
2024-05-25T11:23:39Z
2024-05-25T11:23:39Z
Differentiable Cluster Graph Neural Network
Graph Neural Networks often struggle with long-range information propagation and in the presence of heterophilous neighborhoods. We address both challenges with a unified framework that incorporates a clustering inductive bias into the message passing mechanism, using additional cluster-nodes. Central to our approach is the formulation of an optimal transport based implicit clustering objective function. However, the algorithm for solving the implicit objective function needs to be differentiable to enable end-to-end learning of the GNN. To facilitate this, we adopt an entropy regularized objective function and propose an iterative optimization process, alternating between solving for the cluster assignments and updating the node/cluster-node embeddings. Notably, our derived closed-form optimization steps are themselves simple yet elegant message passing steps operating seamlessly on a bipartite graph of nodes and cluster-nodes. Our clustering-based approach can effectively capture both local and global information, demonstrated by extensive experiments on both heterophilous and homophilous datasets.
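The entropy-regularized optimal-transport step described above is commonly solved with Sinkhorn-style iterations; the sketch below shows such an iteration for soft node-to-cluster assignments with uniform marginals, as a generic illustration rather than the paper's derived closed-form message-passing updates.

```python
import math
import torch

def sinkhorn_assignments(cost, epsilon=0.1, n_iters=50):
    """Entropy-regularized OT cluster assignments via Sinkhorn iterations.

    cost: (n_nodes, n_clusters) costs between node and cluster-node embeddings.
    Returns a soft transport plan whose rows/columns follow uniform marginals.
    """
    n, k = cost.shape
    log_K = -cost / epsilon                                       # log kernel
    log_r = torch.full((n,), -math.log(n), device=cost.device)    # node marginal
    log_c = torch.full((k,), -math.log(k), device=cost.device)    # cluster marginal
    log_u = torch.zeros(n, device=cost.device)
    log_v = torch.zeros(k, device=cost.device)
    for _ in range(n_iters):
        log_u = log_r - torch.logsumexp(log_K + log_v[None, :], dim=1)
        log_v = log_c - torch.logsumexp(log_K + log_u[:, None], dim=0)
    return torch.exp(log_u[:, None] + log_K + log_v[None, :])

plan = sinkhorn_assignments(torch.rand(10, 3))
print(plan.sum(dim=1))  # each row sums to roughly 1/10
```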
[ "['Yanfei Dong' 'Mohammed Haroon Dupty' 'Lambert Deng' 'Zhuanghua Liu'\n 'Yong Liang Goh' 'Wee Sun Lee']" ]
null
null
2405.16194
null
null
http://arxiv.org/pdf/2405.16194v1
2024-05-25T11:53:23Z
2024-05-25T11:53:23Z
Diffusion-Reward Adversarial Imitation Learning
Imitation learning aims to learn a policy from observing expert demonstrations without access to reward signals from environments. Generative adversarial imitation learning (GAIL) formulates imitation learning as adversarial learning, employing a generator policy learning to imitate expert behaviors and discriminator learning to distinguish the expert demonstrations from agent trajectories. Despite its encouraging results, GAIL training is often brittle and unstable. Inspired by the recent dominance of diffusion models in generative modeling, this work proposes Diffusion-Reward Adversarial Imitation Learning (DRAIL), which integrates a diffusion model into GAIL, aiming to yield more precise and smoother rewards for policy learning. Specifically, we propose a diffusion discriminative classifier to construct an enhanced discriminator; then, we design diffusion rewards based on the classifier's output for policy learning. We conduct extensive experiments in navigation, manipulation, and locomotion, verifying DRAIL's effectiveness compared to prior imitation learning methods. Moreover, additional experimental results demonstrate the generalizability and data efficiency of DRAIL. Visualized learned reward functions of GAIL and DRAIL suggest that DRAIL can produce more precise and smoother rewards.
[ "['Chun-Mao Lai' 'Hsiang-Chun Wang' 'Ping-Chun Hsieh'\n 'Yu-Chiang Frank Wang' 'Min-Hung Chen' 'Shao-Hua Sun']" ]
null
null
2405.16195
null
null
http://arxiv.org/pdf/2405.16195v1
2024-05-25T11:57:43Z
2024-05-25T11:57:43Z
Adaptive $Q$-Network: On-the-fly Target Selection for Deep Reinforcement Learning
Deep Reinforcement Learning (RL) is well known for being highly sensitive to hyperparameters, requiring substantial effort from practitioners to optimize them for the problem at hand. In recent years, the field of automated Reinforcement Learning (AutoRL) has grown in popularity by trying to address this issue. However, these approaches typically hinge on additional samples to select well-performing hyperparameters, hindering sample-efficiency and practicality in RL. Furthermore, most AutoRL methods are heavily based on already existing AutoML methods, which were originally developed neglecting the additional challenges that RL's non-stationarity introduces. In this work, we propose a new approach for AutoRL, called Adaptive $Q$-Network (AdaQN), that is tailored to RL to take into account the non-stationarity of the optimization procedure without requiring additional samples. AdaQN learns several $Q$-functions, each one trained with different hyperparameters, which are updated online using the $Q$-function with the smallest approximation error as a shared target. Our selection scheme simultaneously handles different hyperparameters while coping with the non-stationarity induced by the RL optimization procedure and being orthogonal to any critic-based RL algorithm. We demonstrate that AdaQN is theoretically sound and empirically validate it in MuJoCo control problems, showing benefits in sample-efficiency, overall performance, training stability, and robustness to stochasticity.
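A rough sketch of the target-selection idea, assuming all Q-networks share an architecture and differ only in training hyperparameters; the class name and update details are ours, not the authors' implementation.

```python
import copy
import torch
import torch.nn as nn

class AdaQNEnsembleSketch(nn.Module):
    """Several Q-networks, each with its own optimizer/hyperparameters, sharing
    the target of whichever network currently has the smallest TD error."""

    def __init__(self, q_networks, optimizers):
        super().__init__()
        self.q_networks = nn.ModuleList(q_networks)   # one per hyperparameter setting
        self.optimizers = optimizers
        self.target = copy.deepcopy(q_networks[0])    # shared target network

    def update(self, obs, act, rew, next_obs, done, gamma=0.99):
        with torch.no_grad():
            next_q = self.target(next_obs).max(dim=1).values
            td_target = rew + gamma * (1.0 - done) * next_q
        td_errors = []
        for q_net, opt in zip(self.q_networks, self.optimizers):
            q = q_net(obs).gather(1, act.unsqueeze(1)).squeeze(1)
            loss = nn.functional.mse_loss(q, td_target)
            opt.zero_grad(); loss.backward(); opt.step()
            td_errors.append(loss.item())
        # The network with the smallest approximation error becomes the new target.
        best = int(torch.tensor(td_errors).argmin())
        self.target.load_state_dict(self.q_networks[best].state_dict())
```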
[ "['Théo Vincent' 'Fabian Wahren' 'Jan Peters' 'Boris Belousov'\n \"Carlo D'Eramo\"]" ]
null
null
2405.16196
null
null
http://arxiv.org/pdf/2405.16196v1
2024-05-25T11:57:50Z
2024-05-25T11:57:50Z
Maintaining and Managing Road Quality: Using MLP and DNN
Poor roads are a major issue for cars, drivers, and pedestrians: they are a major cause of vehicle damage and can occasionally be quite dangerous for both drivers and pedestrians. This makes road surface condition monitoring systems essential for traffic safety, reducing accident rates, and protecting vehicles from damage. The primary objective is to develop and evaluate machine learning models that can accurately classify road conditions into four categories: good, satisfactory, poor, and very poor, using a Kaggle dataset of road images. To address this, we implemented a variety of machine learning approaches. Firstly, a baseline model was created using a Multilayer Perceptron (MLP) implemented from scratch. Secondly, a more sophisticated Deep Neural Network (DNN) was constructed using Keras. Additionally, we developed a Logistic Regression model from scratch to compare performance. Finally, a wide model incorporating extensive feature engineering was built using the K-Nearest Neighbors (KNN) algorithm with sklearn. The study compared the different models for image-based road quality assessment. Among the deep learning models, the DNN built with Keras achieved the best accuracy, while the baseline MLP provided a solid foundation. The Logistic Regression model, although simpler, provided interpretability and insights into important features. The KNN model, with the help of feature engineering, achieved the best results. The research shows that machine learning can automate road condition monitoring, saving time and money on maintenance. The next step is to improve these models and test them in real cities, which will make our cities better managed and safer.
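For illustration, a minimal Keras DNN for the four-class road-condition task might look like the following; the input size, layer widths, and preprocessing are assumptions, not the configuration used in the paper.

```python
import tensorflow as tf

# Minimal Keras DNN for 4-class road-condition classification, assuming
# 64x64 RGB road images; all architecture details are illustrative.
model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=(64, 64, 3)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(4, activation="softmax"),  # good / satisfactory / poor / very poor
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```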
[ "['Makgotso Jacqueline Maotwana']" ]
null
null
2405.16203
null
null
http://arxiv.org/pdf/2405.16203v1
2024-05-25T12:27:21Z
2024-05-25T12:27:21Z
Evolutionary Large Language Model for Automated Feature Transformation
Feature transformation aims to reconstruct the feature space of raw features to enhance the performance of downstream models. However, the exponential growth in the combinations of features and operations poses a challenge, making it difficult for existing methods to efficiently explore a wide space. Additionally, their optimization is solely driven by the accuracy of downstream models in specific domains, neglecting the acquisition of general feature knowledge. To fill this research gap, we propose an evolutionary LLM framework for automated feature transformation. This framework consists of two parts: 1) constructing a multi-population database through an RL data collector while utilizing evolutionary algorithm strategies for database maintenance, and 2) leveraging the sequence-understanding ability of Large Language Models (LLMs), employing few-shot prompts to guide the LLM in generating superior samples based on distinctions among feature transformation sequences. Leveraging the multi-population database initially provides a wide search scope to discover excellent populations. Through culling and evolution, the high-quality populations are afforded greater opportunities, thereby furthering the pursuit of optimal individuals. Through the integration of LLMs with evolutionary algorithms, we achieve efficient exploration within a vast space, while harnessing feature knowledge to propel optimization, thus realizing a more adaptable search paradigm. Finally, we empirically demonstrate the effectiveness and generality of our proposed method.
[ "['Nanxu Gong' 'Chandan K. Reddy' 'Wangyang Ying' 'Yanjie Fu']" ]
null
null
2405.16206
null
null
http://arxiv.org/pdf/2405.16206v1
2024-05-25T12:35:31Z
2024-05-25T12:35:31Z
GlycanML: A Multi-Task and Multi-Structure Benchmark for Glycan Machine Learning
Glycans are basic biomolecules and perform essential functions within living organisms. The rapid increase of functional glycan data provides a good opportunity for machine learning solutions to glycan understanding. However, a standard machine learning benchmark for glycan function prediction is still lacking. In this work, we fill this gap by building a comprehensive benchmark for Glycan Machine Learning (GlycanML). The GlycanML benchmark consists of diverse types of tasks including glycan taxonomy prediction, glycan immunogenicity prediction, glycosylation type prediction, and protein-glycan interaction prediction. Glycans can be represented by both sequences and graphs in GlycanML, which enables us to extensively evaluate sequence-based models and graph neural networks (GNNs) on benchmark tasks. Furthermore, by concurrently performing eight glycan taxonomy prediction tasks, we introduce the GlycanML-MTL testbed for multi-task learning (MTL) algorithms. Experimental results show the superiority of modeling glycans with multi-relational GNNs, and suitable MTL methods can further boost model performance. We provide all datasets and source codes at https://github.com/GlycanML/GlycanML and maintain a leaderboard at https://GlycanML.github.io/project
[ "['Minghao Xu' 'Yunteng Geng' 'Yihang Zhang' 'Ling Yang' 'Jian Tang'\n 'Wentao Zhang']" ]
null
null
2405.16213
null
null
http://arxiv.org/pdf/2405.16213v1
2024-05-25T12:51:38Z
2024-05-25T12:51:38Z
Learning Visual-Semantic Subspace Representations for Propositional Reasoning
Learning representations that capture rich semantic relationships and accommodate propositional calculus poses a significant challenge. Existing approaches are either contrastive, lacking theoretical guarantees, or fall short in effectively representing the partial orders inherent to rich visual-semantic hierarchies. In this paper, we propose a novel approach for learning visual representations that not only conform to a specified semantic structure but also facilitate probabilistic propositional reasoning. Our approach is based on a new nuclear norm-based loss. We show that its minimum encodes the spectral geometry of the semantics in a subspace lattice, where logical propositions can be represented by projection operators.
[ "['Gabriel Moreira' 'Alexander Hauptmann' 'Manuel Marques'\n 'João Paulo Costeira']" ]
null
null
2405.16219
null
null
http://arxiv.org/pdf/2405.16219v1
2024-05-25T13:07:27Z
2024-05-25T13:07:27Z
Deep Causal Generative Models with Property Control
Generating data with properties of interest specified by external users, while respecting the true causal relationships among its intrinsic factors, is important yet has not been well addressed jointly. This is due to the long-lasting challenge of jointly identifying key latent variables, their causal relations, and their correlation with properties of interest, as well as how to leverage these discoveries toward causally controlled data generation. To address these challenges, we propose a novel deep generative framework called the Correlation-aware Causal Variational Auto-encoder (C2VAE). This framework simultaneously recovers the correlation and causal relationships between properties using disentangled latent vectors. Specifically, causality is captured by learning the causal graph on latent variables through a structural causal model, while correlation is learned via a novel correlation pooling algorithm. Extensive experiments demonstrate C2VAE's ability to accurately recover true causality and correlation, as well as its superiority in controllable data generation compared to baseline models.
[ "['Qilong Zhao' 'Shiyu Wang' 'Guangji Bai' 'Bo Pan' 'Zhaohui Qin'\n 'Liang Zhao']" ]
null
null
2405.16224
null
null
http://arxiv.org/abs/2405.16224v1
2024-05-25T13:29:31Z
2024-05-25T13:29:31Z
Negative as Positive: Enhancing Out-of-distribution Generalization for Graph Contrastive Learning
Graph contrastive learning (GCL), standing as the dominant paradigm in the realm of graph pre-training, has yielded considerable progress. Nonetheless, its capacity for out-of-distribution (OOD) generalization has been relatively underexplored. In this work, we point out that the traditional optimization of InfoNCE in GCL restricts the cross-domain pairs only to be negative samples, which inevitably enlarges the distribution gap between different domains. This violates the requirement of domain invariance under OOD scenario and consequently impairs the model's OOD generalization performance. To address this issue, we propose a novel strategy "Negative as Positive", where the most semantically similar cross-domain negative pairs are treated as positive during GCL. Our experimental results, spanning a wide array of datasets, confirm that this method substantially improves the OOD generalization performance of GCL.
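A schematic of the "Negative as Positive" idea in InfoNCE form, where the most similar cross-domain pairs contribute as positives; the selection rule and loss composition here are our simplified reading, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def nap_infonce(z_src, z_tgt, temperature=0.2, top_k=1):
    """Illustrative 'Negative as Positive' contrastive loss across two domains.

    z_src, z_tgt: L2-normalized embeddings from two graph domains, shape (B, d).
    For each source embedding, the top_k most similar cross-domain embeddings
    are treated as extra positives rather than negatives.
    """
    sim = z_src @ z_tgt.t() / temperature                 # (B, B) cross-domain similarities
    log_prob = F.log_softmax(sim, dim=1)
    top_idx = sim.topk(top_k, dim=1).indices              # most similar cross-domain pairs
    pos_log_prob = log_prob.gather(1, top_idx).mean(dim=1)
    return -pos_log_prob.mean()
```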
[ "['Zixu Wang' 'Bingbing Xu' 'Yige Yuan' 'Huawei Shen' 'Xueqi Cheng']" ]
null
null
2405.16225
null
null
http://arxiv.org/pdf/2405.16225v2
2024-06-06T07:44:36Z
2024-05-25T13:31:05Z
Local Causal Structure Learning in the Presence of Latent Variables
Discovering causal relationships from observational data, particularly in the presence of latent variables, poses a challenging problem. While current local structure learning methods have proven effective and efficient when the focus lies solely on the local relationships of a target variable, they operate under the assumption of causal sufficiency. This assumption implies that all the common causes of the measured variables are observed, leaving no room for latent variables. Such a premise can be easily violated in various real-world applications, resulting in inaccurate structures that may adversely impact downstream tasks. In light of this, our paper delves into the primary investigation of locally identifying potential parents and children of a target from observational data that may include latent variables. Specifically, we harness the causal information from m-separation and V-structures to derive theoretical consistency results, effectively bridging the gap between global and local structure learning. Together with the newly developed stop rules, we present a principled method for determining whether a variable is a direct cause or effect of a target. Further, we theoretically demonstrate the correctness of our approach under the standard causal Markov and faithfulness conditions, with infinite samples. Experimental results on both synthetic and real-world data validate the effectiveness and efficiency of our approach.
[ "['Feng Xie' 'Zheng Li' 'Peng Wu' 'Yan Zeng' 'Chunchen Liu' 'Zhi Geng']" ]
null
null
2405.16226
null
null
http://arxiv.org/pdf/2405.16226v1
2024-05-25T13:34:16Z
2024-05-25T13:34:16Z
Detecting Adversarial Data via Perturbation Forgery
As a defense strategy against adversarial attacks, adversarial detection aims to identify and filter out adversarial data from the data flow based on discrepancies in distribution and noise patterns between natural and adversarial data. Although previous detection methods achieve high performance in detecting gradient-based adversarial attacks, new attacks based on generative models with imbalanced and anisotropic noise patterns evade detection. Even worse, existing techniques either necessitate access to attack data before deploying a defense or incur a significant time cost for inference, rendering them impractical for defending against newly emerging attacks that are unseen by defenders. In this paper, we explore the proximity relationship between adversarial noise distributions and demonstrate the existence of an open covering for them. By learning to distinguish this open covering from the distribution of natural data, we can develop a detector with strong generalization capabilities against all types of adversarial attacks. Based on this insight, we heuristically propose Perturbation Forgery, which includes noise distribution perturbation, sparse mask generation, and pseudo-adversarial data production, to train an adversarial detector capable of detecting unseen gradient-based, generative-model-based, and physical adversarial attacks, while remaining agnostic to any specific models. Comprehensive experiments conducted on multiple general and facial datasets, with a wide spectrum of attacks, validate the strong generalization of our method.
[ "['Qian Wang' 'Chen Li' 'Yuchen Luo' 'Hefei Ling' 'Ping Li' 'Jiazhong Chen'\n 'Shijuan Huang' 'Ning Yu']" ]
null
null
2405.16233
null
null
http://arxiv.org/pdf/2405.16233v1
2024-05-25T13:49:23Z
2024-05-25T13:49:23Z
Client2Vec: Improving Federated Learning by Distribution Shifts Aware Client Indexing
Federated Learning (FL) is a privacy-preserving distributed machine learning paradigm. Nonetheless, the substantial distribution shifts among clients pose a considerable challenge to the performance of current FL algorithms. To mitigate this challenge, various methods have been proposed to enhance the FL training process. This paper endeavors to tackle the issue of data heterogeneity from another perspective -- by improving FL algorithms prior to the actual training stage. Specifically, we introduce the Client2Vec mechanism, which generates a unique client index for each client before the commencement of FL training. Subsequently, we leverage the generated client index to enhance the subsequent FL training process. To demonstrate the effectiveness of the proposed Client2Vec method, we conduct three case studies that assess the impact of the client index on the FL training process. These case studies encompass enhanced client sampling, model aggregation, and local training. Extensive experiments conducted on diverse datasets and model architectures show the efficacy of Client2Vec across all three case studies. Our code is available at \url{https://github.com/LINs-lab/client2vec}.
[ "['Yongxin Guo' 'Lin Wang' 'Xiaoying Tang' 'Tao Lin']" ]
null
null
2405.16236
null
null
http://arxiv.org/pdf/2405.16236v1
2024-05-25T13:54:05Z
2024-05-25T13:54:05Z
A statistical framework for weak-to-strong generalization
Modern large language model (LLM) alignment techniques rely on human feedback, but it is unclear whether the techniques fundamentally limit the capabilities of aligned LLMs. In particular, it is unclear whether it is possible to align (stronger) LLMs with superhuman capabilities with (weaker) human feedback without degrading their capabilities. This is an instance of the weak-to-strong generalization problem: using weaker (less capable) feedback to train a stronger (more capable) model. We prove that weak-to-strong generalization is possible by eliciting latent knowledge from pre-trained LLMs. In particular, we cast the weak-to-strong generalization problem as a transfer learning problem in which we wish to transfer a latent concept from a weak model to a strong pre-trained model. We prove that a naive fine-tuning approach suffers from fundamental limitations, but an alternative refinement-based approach suggested by the problem structure provably overcomes the limitations of fine-tuning. Finally, we demonstrate the practical applicability of the refinement approach with three LLM alignment tasks.
[ "['Seamus Somerstep' 'Felipe Maia Polo' 'Moulinath Banerjee'\n \"Ya'acov Ritov\" 'Mikhail Yurochkin' 'Yuekai Sun']" ]
null
null
2405.16240
null
null
http://arxiv.org/pdf/2405.16240v1
2024-05-25T13:58:38Z
2024-05-25T13:58:38Z
Analytic Federated Learning
In this paper, we introduce analytic federated learning (AFL), a new training paradigm that brings analytical (i.e., closed-form) solutions to the federated learning (FL) community. Our AFL draws inspiration from analytic learning -- a gradient-free technique that trains neural networks with analytical solutions in one epoch. In the local client training stage, the AFL facilitates a one-epoch training, eliminating the necessity for multi-epoch updates. In the aggregation stage, we derive an absolute aggregation (AA) law. This AA law allows a single-round aggregation, removing the need for multiple aggregation rounds. More importantly, the AFL exhibits a \textit{weight-invariant} property, meaning that regardless of how the full dataset is distributed among clients, the aggregated result remains identical. This could spawn various potentials, such as data heterogeneity invariance, client-number invariance, absolute convergence, and being hyperparameter-free (our AFL is the first hyperparameter-free method in FL history). We conduct experiments across various FL settings including extremely non-IID ones, and scenarios with a large number of clients (e.g., $\ge 1000$). In all these settings, our AFL consistently performs competitively while existing FL techniques encounter various obstacles. Code is available at \url{https://github.com/ZHUANGHP/Analytic-federated-learning}.
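The weight-invariance property has a simple analogue in closed-form ridge regression, where aggregating per-client sufficient statistics yields exactly the centralized solution no matter how the data are split; the toy example below illustrates this analogy and is not the paper's absolute aggregation law.

```python
import numpy as np

def analytic_fit(stat_xx, stat_xy, gamma=1e-3):
    """Closed-form ridge solution from aggregated sufficient statistics."""
    d = stat_xx.shape[0]
    return np.linalg.solve(stat_xx + gamma * np.eye(d), stat_xy)

rng = np.random.default_rng(0)
X, Y = rng.normal(size=(1000, 8)), rng.normal(size=(1000, 2))

# Centralized solution
w_central = analytic_fit(X.T @ X, X.T @ Y)

# "Federated" solution: split the data arbitrarily among clients,
# compute local statistics, and aggregate them once.
splits = np.array_split(np.arange(1000), 7)
stat_xx = sum(X[idx].T @ X[idx] for idx in splits)
stat_xy = sum(X[idx].T @ Y[idx] for idx in splits)
w_federated = analytic_fit(stat_xx, stat_xy)

print(np.allclose(w_central, w_federated))  # True: invariant to the data split
```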
[ "['Huiping Zhuang' 'Run He' 'Kai Tong' 'Di Fang' 'Han Sun' 'Haoran Li'\n 'Tianyi Chen' 'Ziqian Zeng']" ]
null
null
2405.16248
null
null
http://arxiv.org/pdf/2405.16248v1
2024-05-25T14:14:00Z
2024-05-25T14:14:00Z
Combining Radiomics and Machine Learning Approaches for Objective ASD Diagnosis: Verifying White Matter Associations with ASD
Autism Spectrum Disorder is a condition characterized by atypical brain development, leading to impairments in social skills, communication abilities, repetitive behaviors, and sensory processing. There have been many studies combining brain MRI images with machine learning algorithms to achieve objective diagnosis of autism, but the correlation between white matter and autism has not been fully utilized. To address this gap, we develop a computer-aided diagnostic model focusing on white matter regions in brain MRI by employing radiomics and machine learning methods. This study introduced a MultiUNet model for segmenting white matter, leveraging the UNet architecture and utilizing manually segmented MRI images as the training data. Subsequently, we extracted white matter features using the Pyradiomics toolkit and applied different machine learning models such as Support Vector Machine, Random Forest, Logistic Regression, and K-Nearest Neighbors to predict autism. The resulting predictions all exceeded 80% accuracy. Additionally, we employed a Convolutional Neural Network to analyze segmented white matter images, achieving a prediction accuracy of 86.84%. Notably, the Support Vector Machine demonstrated the highest prediction accuracy at 89.47%. These findings not only underscore the efficacy of the models but also establish a link between white matter abnormalities and autism. Our study contributes to a comprehensive evaluation of various diagnostic models for autism and introduces a computer-aided diagnostic algorithm for early and objective autism diagnosis based on MRI white matter regions.
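A minimal sketch of the classical-ML stage on precomputed radiomics features, using scikit-learn; the feature matrix and labels below are synthetic placeholders, and the pipeline is illustrative rather than the study's exact setup.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder radiomics feature table: rows = subjects, columns = white matter
# features (e.g., extracted with PyRadiomics); labels are ASD vs. control.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 100))       # hypothetical feature matrix
y = rng.integers(0, 2, size=120)      # hypothetical binary labels

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print("Mean CV accuracy:", scores.mean())
```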
[ "['Junlin Song' 'Yuzhuo Chen' 'Yuan Yao' 'Zetong Chen' 'Renhao Guo'\n 'Lida Yang' 'Xinyi Sui' 'Qihang Wang' 'Xijiao Li' 'Aihua Cao' 'Wei Li']" ]
null
null
2405.16255
null
null
http://arxiv.org/pdf/2405.16255v1
2024-05-25T14:36:33Z
2024-05-25T14:36:33Z
GeoAdaLer: Geometric Insights into Adaptive Stochastic Gradient Descent Algorithms
The Adam optimization method has achieved remarkable success in addressing contemporary challenges in stochastic optimization. This method falls within the realm of adaptive sub-gradient techniques, yet the underlying geometric principles guiding its performance have remained shrouded in mystery, and have long confounded researchers. In this paper, we introduce GeoAdaLer (Geometric Adaptive Learner), a novel adaptive learning method for stochastic gradient descent optimization, which draws from the geometric properties of the optimization landscape. Beyond emerging as a formidable contender, the proposed method extends the concept of adaptive learning by introducing a geometrically inclined approach that enhances the interpretability and effectiveness in complex optimization scenarios
[ "['Chinedu Eleh' 'Masuzyo Mwanza' 'Ekene Aguegboh' 'Hans-Werner van Wyk']" ]
null
null
2405.16258
null
null
http://arxiv.org/pdf/2405.16258v1
2024-05-25T14:48:04Z
2024-05-25T14:48:04Z
USD: Unsupervised Soft Contrastive Learning for Fault Detection in Multivariate Time Series
Unsupervised fault detection in multivariate time series is critical for maintaining the integrity and efficiency of complex systems, with current methodologies largely focusing on statistical and machine learning techniques. However, these approaches often rest on the assumption that data distributions conform to Gaussian models, overlooking the diversity of patterns that can manifest in both normal and abnormal states, thereby diminishing discriminative performance. Our innovation addresses this limitation by introducing a combination of data augmentation and soft contrastive learning, specifically designed to capture the multifaceted nature of state behaviors more accurately. The data augmentation process enriches the dataset with varied representations of normal states, while soft contrastive learning fine-tunes the model's sensitivity to the subtle differences between normal and abnormal patterns, enabling it to recognize a broader spectrum of anomalies. This dual strategy significantly boosts the model's ability to distinguish between normal and abnormal states, leading to a marked improvement in fault detection performance across multiple datasets and settings, thereby setting a new benchmark for unsupervised fault detection in complex systems. The code of our method is available at \url{https://github.com/zangzelin/code_USD.git}.
[ "['Hong Liu' 'Xiuxiu Qiu' 'Yiming Shi' 'Zelin Zang']" ]
null
null
2405.16259
null
null
http://arxiv.org/pdf/2405.16259v1
2024-05-25T14:50:23Z
2024-05-25T14:50:23Z
Front-propagation Algorithm: Explainable AI Technique for Extracting Linear Function Approximations from Neural Networks
This paper introduces the front-propagation algorithm, a novel eXplainable AI (XAI) technique designed to elucidate the decision-making logic of deep neural networks. Unlike other popular explainability algorithms such as Integrated Gradients or Shapley Values, the proposed algorithm is able to extract an accurate and consistent linear function explanation of the network in a single forward pass of the trained model. This distinguishes the time complexity of front-propagation, as it can run in real time and in parallel with deployed models. We packaged this algorithm in a software package called $\texttt{front-prop}$, and we demonstrate its efficacy in providing accurate linear functions with three different neural network architectures trained on publicly available benchmark datasets.
[ "['Javier Viaña']" ]
null
null
2405.16260
null
null
http://arxiv.org/pdf/2405.16260v1
2024-05-25T14:53:52Z
2024-05-25T14:53:52Z
Enhancing Consistency-Based Image Generation via Adversarially-Trained Classification and Energy-Based Discrimination
The recently introduced Consistency models pose an efficient alternative to diffusion algorithms, enabling rapid and good quality image synthesis. These methods overcome the slowness of diffusion models by directly mapping noise to data, while maintaining a (relatively) simpler training. Consistency models enable a fast one- or few-step generation, but they typically fall somewhat short in sample quality when compared to their diffusion origins. In this work we propose a novel and highly effective technique for post-processing Consistency-based generated images, enhancing their perceptual quality. Our approach utilizes a joint classifier-discriminator model, in which both portions are trained adversarially. While the classifier aims to grade an image based on its assignment to a designated class, the discriminator portion of the very same network leverages the softmax values to assess the proximity of the input image to the targeted data manifold, thereby serving as an Energy-based Model. By employing example-specific projected gradient iterations under the guidance of this joint model, we refine synthesized images and achieve improved FID scores on the ImageNet 64x64 dataset for both Consistency-Training and Consistency-Distillation techniques.
[ "['Shelly Golan' 'Roy Ganz' 'Michael Elad']" ]
null
null
2405.16262
null
null
http://arxiv.org/pdf/2405.16262v1
2024-05-25T14:56:30Z
2024-05-25T14:56:30Z
Layer-Aware Analysis of Catastrophic Overfitting: Revealing the Pseudo-Robust Shortcut Dependency
Catastrophic overfitting (CO) presents a significant challenge in single-step adversarial training (AT), manifesting as highly distorted deep neural networks (DNNs) that are vulnerable to multi-step adversarial attacks. However, the underlying factors that lead to the distortion of decision boundaries remain unclear. In this work, we delve into the specific changes within different DNN layers and discover that during CO, the earlier layers are more susceptible, experiencing earlier and greater distortion, while the later layers show relative insensitivity. Our analysis further reveals that this increased sensitivity in the earlier layers stems from the formation of pseudo-robust shortcuts, which alone can impeccably defend against single-step adversarial attacks but bypass genuine robust learning, resulting in distorted decision boundaries. Eliminating these shortcuts can partially restore robustness in DNNs from the CO state, thereby verifying that dependence on them triggers the occurrence of CO. This understanding motivates us to implement adaptive weight perturbations across different layers to hinder the generation of pseudo-robust shortcuts, consequently mitigating CO. Extensive experiments demonstrate that our proposed method, Layer-Aware Adversarial Weight Perturbation (LAP), can effectively prevent CO and further enhance robustness.
[ "['Runqi Lin' 'Chaojian Yu' 'Bo Han' 'Hang Su' 'Tongliang Liu']" ]
null
null
2405.16265
null
null
http://arxiv.org/pdf/2405.16265v4
2024-06-26T14:01:15Z
2024-05-25T15:07:33Z
MindStar: Enhancing Math Reasoning in Pre-trained LLMs at Inference Time
Although Large Language Models (LLMs) achieve remarkable performance across various tasks, they often struggle with complex reasoning tasks, such as answering mathematical questions. Recent efforts to address this issue have primarily focused on leveraging mathematical datasets through supervised fine-tuning or self-improvement techniques. However, these methods often depend on high-quality datasets that are difficult to prepare, or they require substantial computational resources for fine-tuning. Inspired by findings that LLMs know how to produce the right answer but struggle to select the correct reasoning path, we propose a purely inference-based searching method -- MindStar (M*). This method formulates reasoning tasks as searching problems and proposes two search ideas to identify the optimal reasoning paths. We evaluate the M* framework on both the GSM8K and MATH datasets, comparing its performance with existing open and closed-source LLMs. Our results demonstrate that M* significantly enhances the reasoning abilities of open-source models, such as Llama-2-13B and Mistral-7B, and achieves comparable performance to GPT-3.5 and Grok-1, but with substantially reduced model size and computational costs.
[ "['Jikun Kang' 'Xin Zhe Li' 'Xi Chen' 'Amirreza Kazemi' 'Qianyi Sun'\n 'Boxing Chen' 'Dong Li' 'Xu He' 'Quan He' 'Feng Wen' 'Jianye Hao'\n 'Jun Yao']" ]
null
null
2405.16266
null
null
http://arxiv.org/pdf/2405.16266v1
2024-05-25T15:08:36Z
2024-05-25T15:08:36Z
Deep Reinforcement Learning with Enhanced PPO for Safe Mobile Robot Navigation
Collision-free motion is essential for mobile robots. Most approaches to collision-free and efficient navigation with wheeled robots require parameter tuning by experts to obtain good navigation behavior. This study investigates the application of deep reinforcement learning to train a mobile robot for autonomous navigation in a complex environment. The robot utilizes LiDAR sensor data and a deep neural network to generate control signals guiding it toward a specified target while avoiding obstacles. We employ two reinforcement learning algorithms in the Gazebo simulation environment: Deep Deterministic Policy Gradient and Proximal Policy Optimization. The study introduces an enhanced neural network structure in the Proximal Policy Optimization algorithm to boost performance, accompanied by a well-designed reward function to improve algorithm efficacy. Experimental results conducted in both obstacle and obstacle-free environments underscore the effectiveness of the proposed approach. This research significantly contributes to the advancement of autonomous robotics in complex environments through the application of deep reinforcement learning.
[ "['Hamid Taheri' 'Seyed Rasoul Hosseini']" ]
null
null
2405.16267
null
null
http://arxiv.org/pdf/2405.16267v2
2024-06-26T07:27:56Z
2024-05-25T15:11:34Z
A GPU-Accelerated Bi-linear ADMM Algorithm for Distributed Sparse Machine Learning
This paper introduces the Bi-linear consensus Alternating Direction Method of Multipliers (Bi-cADMM), aimed at solving large-scale regularized Sparse Machine Learning (SML) problems defined over a network of computational nodes. Mathematically, these are stated as minimization problems with convex local loss functions over a global decision vector, subject to an explicit $\ell_0$ norm constraint to enforce the desired sparsity. The considered SML problem generalizes different sparse regression and classification models, such as sparse linear and logistic regression, sparse softmax regression, and sparse support vector machines. Bi-cADMM leverages a bi-linear consensus reformulation of the original non-convex SML problem and a hierarchical decomposition strategy that divides the problem into smaller sub-problems amenable to parallel computing. In Bi-cADMM, this decomposition strategy is based on a two-phase approach. Initially, it performs a sample decomposition of the data and distributes local datasets across computational nodes. Subsequently, a delayed feature decomposition of the data is conducted on Graphics Processing Units (GPUs) available to each node. This methodology allows Bi-cADMM to undertake computationally intensive data-centric computations on GPUs, while CPUs handle more cost-effective computations. The proposed algorithm is implemented within an open-source Python package called Parallel Sparse Fitting Toolbox (PsFiT), which is publicly available. Finally, computational experiments demonstrate the efficiency and scalability of our algorithm through numerical benchmarks across various SML problems featuring distributed datasets.
[ "['Alireza Olama' 'Andreas Lundell' 'Jan Kronqvist' 'Elham Ahmadi'\n 'Eduardo Camponogara']" ]
null
null
2405.16277
null
null
http://arxiv.org/pdf/2405.16277v3
2024-06-03T16:42:55Z
2024-05-25T15:28:22Z
Picturing Ambiguity: A Visual Twist on the Winograd Schema Challenge
Large Language Models (LLMs) have demonstrated remarkable success in tasks like the Winograd Schema Challenge (WSC), showcasing advanced textual common-sense reasoning. However, applying this reasoning to multimodal domains, where understanding text and images together is essential, remains a substantial challenge. To address this, we introduce WinoVis, a novel dataset specifically designed to probe text-to-image models on pronoun disambiguation within multimodal contexts. Utilizing GPT-4 for prompt generation and Diffusion Attentive Attribution Maps (DAAM) for heatmap analysis, we propose a novel evaluation framework that isolates the models' ability in pronoun disambiguation from other visual processing challenges. Evaluation of successive model versions reveals that, despite incremental advancements, Stable Diffusion 2.0 achieves a precision of 56.7% on WinoVis, only marginally surpassing random guessing. Further error analysis identifies important areas for future research aimed at advancing text-to-image models in their ability to interpret and interact with the complex visual world.
[ "['Brendan Park' 'Madeline Janecek' 'Naser Ezzati-Jivan' 'Yifeng Li'\n 'Ali Emami']" ]
null
null
2405.16282
null
null
http://arxiv.org/pdf/2405.16282v5
2024-06-15T22:18:06Z
2024-05-25T15:42:04Z
Confidence Under the Hood: An Investigation into the Confidence-Probability Alignment in Large Language Models
As the use of Large Language Models (LLMs) becomes more widespread, understanding their self-evaluation of confidence in generated responses becomes increasingly important as it is integral to the reliability of the output of these models. We introduce the concept of Confidence-Probability Alignment, which connects an LLM's internal confidence, quantified by token probabilities, to the confidence conveyed in the model's response when explicitly asked about its certainty. Using various datasets and prompting techniques that encourage model introspection, we probe the alignment between models' internal and expressed confidence. These techniques encompass using structured evaluation scales to rate confidence, including answer options when prompting, and eliciting the model's confidence level for outputs it does not recognize as its own. Notably, among the models analyzed, OpenAI's GPT-4 showed the strongest confidence-probability alignment, with an average Spearman's $\hat{\rho}$ of 0.42, across a wide range of tasks. Our work contributes to the ongoing efforts to facilitate risk assessment in the application of LLMs and to further our understanding of model trustworthiness.
[ "['Abhishek Kumar' 'Robert Morabito' 'Sanzhar Umbet' 'Jad Kabbara'\n 'Ali Emami']" ]
null
null
2405.16285
null
null
http://arxiv.org/pdf/2405.16285v1
2024-05-25T15:52:34Z
2024-05-25T15:52:34Z
ModelLock: Locking Your Model With a Spell
This paper presents a novel model protection paradigm ModelLock that locks (destroys) the performance of a model on normal clean data so as to make it unusable or unextractable without the right key. Specifically, we propose a diffusion-based framework dubbed ModelLock that explores text-guided image editing to transform the training data into unique styles or to add new objects in the background. A model finetuned on this edited dataset will be locked and can only be unlocked by the key prompt, i.e., the text prompt used to transform the data. We conduct extensive experiments on both image classification and segmentation tasks, and show that 1) ModelLock can effectively lock the finetuned models without significantly reducing the expected performance, and more importantly, 2) the locked model cannot be easily unlocked without knowing both the key prompt and the diffusion model. Our work opens up a new direction for intellectual property protection of private models.
[ "['Yifeng Gao' 'Yuhua Sun' 'Xingjun Ma' 'Zuxuan Wu' 'Yu-Gang Jiang']" ]
null
null
2405.16286
null
null
http://arxiv.org/pdf/2405.16286v1
2024-05-25T15:53:27Z
2024-05-25T15:53:27Z
Generation of synthetic data using breast cancer dataset and classification with resnet18
Since technology is advancing so quickly in the modern era of information, data is becoming an essential resource in many fields. Correct data collection, organization, and analysis make it a potent tool for successful decision-making and process improvement across a wide range of sectors. Synthetic data is needed for several reasons, including the constraints of real data, the expense of collecting labeled data, and privacy, security, ethical, and legal concerns in specific situations and domains; it is a particularly valuable tool in the health sector. Generative Adversarial Networks (GANs) are deep learning models developed with the intention of generating synthetic data. In this study, the Breast Histopathology dataset was used to generate malignant and negatively labeled synthetic patch images using MSG-GAN (Multi-Scale Gradients for Generative Adversarial Networks), a variant of GAN, to aid in cancer identification. The ResNet18 model was then used to classify both synthetic and real data via transfer learning. Following the investigation, we assessed whether the synthetic images behave like, and are comparable to, the original data.
[ "['Dilsat Berin Aytar' 'Semra Gunduc']" ]
null
null
2405.16287
null
null
http://arxiv.org/pdf/2405.16287v1
2024-05-25T15:56:15Z
2024-05-25T15:56:15Z
LoGAH: Predicting 774-Million-Parameter Transformers using Graph HyperNetworks with 1/100 Parameters
A good initialization of deep learning models is essential, since it can help them converge better and faster. However, pretraining large models is unaffordable for many researchers, which makes predicting good initial parameters all the more desirable. Graph HyperNetworks (GHNs), one approach to predicting model parameters, have recently shown strong performance in initializing large vision models. Unfortunately, predicting the parameters of very wide networks relies on copying small chunks of parameters multiple times and requires an extremely large number of parameters to support full prediction, which greatly hinders its adoption in practice. To address this limitation, we propose LoGAH (Low-rank GrAph Hypernetworks), a GHN with a low-rank parameter decoder that expands to significantly wider networks without requiring as excessive an increase in parameters as previous attempts. LoGAH allows us to predict the parameters of 774-million-parameter neural networks in a memory-efficient manner. We show that vision and language models (i.e., ViT and GPT-2) initialized with LoGAH achieve better performance than those initialized randomly or using existing hypernetworks. Furthermore, we show promising transfer learning results with respect to training LoGAH on small datasets and using the predicted parameters to initialize for larger tasks. We provide the code at https://github.com/Blackzxy/LoGAH .
[ "['Xinyu Zhou' 'Boris Knyazev' 'Alexia Jolicoeur-Martineau' 'Jie Fu']" ]
null
null
2405.16288
null
null
http://arxiv.org/pdf/2405.16288v1
2024-05-25T16:00:37Z
2024-05-25T16:00:37Z
Generating configurations of increasing lattice size with machine learning and the inverse renormalization group
We review recent developments of machine learning algorithms pertinent to the inverse renormalization group, which was originally established as a generative numerical method by Ron-Swendsen-Brandt via the implementation of compatible Monte Carlo simulations. Inverse renormalization group methods enable the iterative generation of configurations for increasing lattice size without the critical slowing down effect. We discuss the construction of inverse renormalization group transformations with the use of convolutional neural networks and present applications in models of statistical mechanics, lattice field theory, and disordered systems. We highlight the case of the three-dimensional Edwards-Anderson spin glass, where the inverse renormalization group can be employed to construct configurations for lattice volumes that have not yet been accessed by dedicated supercomputers.
[ "['Dimitrios Bachtis']" ]
null
null
2405.16295
null
null
http://arxiv.org/pdf/2405.16295v3
2024-05-29T20:40:32Z
2024-05-25T16:16:22Z
Comparative Analysis of Open-Source Language Models in Summarizing Medical Text Data
Unstructured text in medical notes and dialogues contains rich information. Recent advancements in Large Language Models (LLMs) have demonstrated superior performance in question answering and summarization tasks on unstructured text data, outperforming traditional text analysis approaches. However, there is a lack of scientific studies in the literature that methodically evaluate and report on the performance of different LLMs, specifically for domain-specific data such as medical chart notes. We propose an evaluation approach to analyze the performance of open-source LLMs such as Llama2 and Mistral for medical summarization tasks, using GPT-4 as an assessor. Our innovative approach to quantitative evaluation of LLMs can enable quality control, support the selection of effective LLMs for specific tasks, and advance knowledge discovery in digital health.
[ "['Yuhao Chen' 'Zhimu Wang' 'Bo Wen' 'Farhana Zulkernine']" ]
null
null
2405.16297
null
null
http://arxiv.org/pdf/2405.16297v1
2024-05-25T16:27:13Z
2024-05-25T16:27:13Z
LUCIE: A Lightweight Uncoupled ClImate Emulator with long-term stability and physical consistency for O(1000)-member ensembles
We present LUCIE, a $1000$-member ensemble data-driven atmospheric emulator that remains stable during autoregressive inference for thousands of years without a drifting climatology. LUCIE has been trained on $9.5$ years of coarse-resolution ERA5 data with $4$ prognostic variables on a single A100 GPU for $2.4$ h. Owing to the cheap computational cost of inference, $1000$ model ensembles are executed for $5$ years to compute an uncertainty-quantified climatology for the prognostic variables that closely matches the climatology obtained from ERA5. Unlike all the other state-of-the-art AI weather models, LUCIE is neither unstable nor does it produce hallucinations that result in unphysical drift of the emulated climate. Furthermore, LUCIE \textbf{does not impose} ``true'' sea-surface temperature (SST) from a coupled numerical model to enforce the annual cycle in temperature. We demonstrate the long-term climatology obtained from LUCIE as well as subseasonal-to-seasonal scale prediction skills on the prognostic variables. We also demonstrate a $20$-year emulation with LUCIE here: https://drive.google.com/file/d/1mRmhx9RRGiF3uGo_mRQK8RpwQatrCiMn/view
[ "['Haiwen Guan' 'Troy Arcomano' 'Ashesh Chattopadhyay' 'Romit Maulik']" ]
null
null
2405.16301
null
null
http://arxiv.org/pdf/2405.16301v1
2024-05-25T16:50:33Z
2024-05-25T16:50:33Z
Active Learning for Finely-Categorized Image-Text Retrieval by Selecting Hard Negative Unpaired Samples
Securing a sufficient amount of paired data is important for training an image-text retrieval (ITR) model, but collecting paired data is very expensive. To address this issue, in this paper, we propose an active learning algorithm for ITR that can collect paired data cost-efficiently. Previous studies assume that image-text pairs are given and ask the annotator for their category labels. However, in recent ITR studies, the importance of category labels has decreased, since a retrieval model can be trained with image-text pairs alone. For this reason, we set up an active learning scenario in which unpaired images (or texts) are given and the annotator provides corresponding texts (or images) to form paired data. The key idea of the proposed AL algorithm is to select unpaired images (or texts) that can serve as hard negative samples for existing texts (or images). To this end, we introduce a novel scoring function to choose hard negative samples. We validate the effectiveness of the proposed method on the Flickr30K and MS-COCO datasets.
[ "['Dae Ung Jo' 'Kyuewang Lee' 'JaeHo Chung' 'Jin Young Choi']" ]
null
null
2405.16304
null
null
http://arxiv.org/pdf/2405.16304v1
2024-05-25T17:12:54Z
2024-05-25T17:12:54Z
Federated Unsupervised Domain Generalization using Global and Local Alignment of Gradients
We address the problem of federated domain generalization in an unsupervised setting for the first time. We first theoretically establish a connection between domain shift and alignment of gradients in unsupervised federated learning and show that aligning the gradients at both client and server levels can facilitate the generalization of the model to new (target) domains. Building on this insight, we propose a novel method named FedGaLA, which performs gradient alignment at the client level to encourage clients to learn domain-invariant features, as well as global gradient alignment at the server to obtain a more generalized aggregated model. To empirically evaluate our method, we perform various experiments on four commonly used multi-domain datasets, PACS, OfficeHome, DomainNet, and TerraInc. The results demonstrate the effectiveness of our method which outperforms comparable baselines. Ablation and sensitivity studies demonstrate the impact of different components and parameters in our approach. The source code will be available online upon publication.
[ "['Farhad Pourpanah' 'Mahdiyar Molahasani' 'Milad Soltany'\n 'Michael Greenspan' 'Ali Etemad']" ]
null
null
2405.16305
null
null
http://arxiv.org/pdf/2405.16305v2
2024-05-28T20:42:38Z
2024-05-25T17:14:23Z
Efficiently Parameterized Neural Metriplectic Systems
Metriplectic systems are learned from data in a way that scales quadratically in both the size of the state and the rank of the metriplectic data. Besides being provably energy conserving and entropy stable, the proposed approach comes with approximation results demonstrating its ability to accurately learn metriplectic dynamics from data as well as an error estimate indicating its potential for generalization to unseen timescales when approximation error is low. Examples are provided which illustrate performance in the presence of both full state information as well as when entropic variables are unknown, confirming that the proposed approach exhibits superior accuracy and scalability without compromising on model expressivity.
[ "['Anthony Gruber' 'Kookjin Lee' 'Haksoo Lim' 'Noseong Park'\n 'Nathaniel Trask']" ]
null
null
2405.16312
null
null
http://arxiv.org/pdf/2405.16312v2
2024-07-14T14:40:20Z
2024-05-25T17:42:40Z
Time-SSM: Simplifying and Unifying State Space Models for Time Series Forecasting
State Space Models (SSMs) have emerged as a potent tool in sequence modeling tasks in recent years. These models approximate continuous systems using a set of basis functions and discretize them to handle input data, making them well-suited for modeling time series data collected at specific frequencies from continuous systems. Despite its potential, the application of SSMs in time series forecasting remains underexplored, with most existing models treating SSMs as a black box for capturing temporal or channel dependencies. To address this gap, this paper proposes a novel theoretical framework termed Dynamic Spectral Operator, offering more intuitive and general guidance on applying SSMs to time series data. Building upon our theory, we introduce Time-SSM, a novel SSM-based foundation model with only one-seventh of the parameters compared to Mamba. Various experiments validate both our theoretical framework and the superior performance of Time-SSM.
[ "['Jiaxi Hu' 'Disen Lan' 'Ziyu Zhou' 'Qingsong Wen' 'Yuxuan Liang']" ]
null
null
2405.16325
null
null
http://arxiv.org/pdf/2405.16325v2
2024-06-14T16:43:26Z
2024-05-25T18:43:05Z
SLoPe: Double-Pruned Sparse Plus Lazy Low-Rank Adapter Pretraining of LLMs
We propose SLoPe, a Double-Pruned Sparse Plus Lazy Low-rank Adapter Pretraining method for LLMs that improves the accuracy of sparse LLMs while accelerating their pretraining and inference and reducing their memory footprint. Sparse pretraining of LLMs reduces the accuracy of the model; to overcome this, prior work uses dense models during fine-tuning. SLoPe improves the accuracy of sparsely pretrained models by adding low-rank adapters in the final 1% of pretraining iterations without adding significant overheads to the model pretraining and inference. In addition, SLoPe uses a double-pruned backward pass formulation that prunes the transposed weight matrix using N:M sparsity structures to enable an accelerated sparse backward pass. SLoPe accelerates the training and inference of models with billions of parameters up to $1.14\times$ and $1.34\times$ respectively (OPT-33B and OPT-66B) while reducing their memory usage by up to $0.77\times$ and $0.51\times$ for training and inference respectively.
[ "['Mohammad Mozaffari' 'Amir Yazdanbakhsh' 'Zhao Zhang'\n 'Maryam Mehri Dehnavi']" ]
null
null
2405.16335
null
null
http://arxiv.org/pdf/2405.16335v1
2024-05-25T19:28:11Z
2024-05-25T19:28:11Z
RoboArm-NMP: a Learning Environment for Neural Motion Planning
We present RoboArm-NMP, a learning and evaluation environment that allows simple and thorough evaluations of Neural Motion Planning (NMP) algorithms, focused on robotic manipulators. Our Python-based environment provides baseline implementations for learning control policies (either supervised or reinforcement learning based), a simulator based on PyBullet, data of solved instances using a classical motion planning solver, various representation learning methods for encoding the obstacles, and a clean interface between the learning and planning frameworks. Using RoboArm-NMP, we compare several prominent NMP design points, and demonstrate that the best methods mostly succeed in generalizing to unseen goals in a scene with fixed obstacles, but have difficulty in generalizing to unseen obstacle configurations, suggesting focus points for future research.
[ "['Tom Jurgenson' 'Matan Sudry' 'Gal Avineri' 'Aviv Tamar']" ]
null
null
2405.16339
null
null
http://arxiv.org/pdf/2405.16339v1
2024-05-25T19:50:23Z
2024-05-25T19:50:23Z
BOLD: Boolean Logic Deep Learning
Deep learning is computationally intensive, with significant efforts focused on reducing arithmetic complexity, particularly regarding energy consumption dominated by data movement. While existing literature emphasizes inference, training is considerably more resource-intensive. This paper proposes a novel mathematical principle by introducing the notion of Boolean variation such that neurons made of Boolean weights and inputs can be trained -- for the first time -- efficiently in the Boolean domain using Boolean logic instead of gradient descent and real arithmetic. We explore its convergence, conduct extensive experimental benchmarking, and provide consistent complexity evaluation by considering chip architecture, memory hierarchy, dataflow, and arithmetic precision. Our approach achieves baseline full-precision accuracy in ImageNet classification and surpasses state-of-the-art results in semantic segmentation, with notable performance in image super-resolution and natural language understanding with transformer-based models. Moreover, it significantly reduces energy consumption during both training and inference.
[ "['Van Minh Nguyen' 'Cristian Ocampo' 'Aymen Askri' 'Louis Leconte'\n 'Ba-Hien Tran']" ]
null
null
2405.16350
null
null
http://arxiv.org/pdf/2405.16350v1
2024-05-25T20:56:54Z
2024-05-25T20:56:54Z
A Second-Order perspective on Compositionality and Incremental Learning
The fine-tuning of deep pre-trained models has recently revealed compositional properties. This enables the arbitrary composition of multiple specialized modules into a single, multi-task model. However, identifying the conditions that promote compositionality remains an open issue, with recent efforts concentrating mainly on linearized networks. We conduct a theoretical study that attempts to demystify compositionality in standard non-linear networks through the second-order Taylor approximation of the loss function. The proposed formulation highlights the importance of staying within the pre-training basin for achieving composable modules. Moreover, it provides the basis for two dual incremental training algorithms: one from the perspective of multiple models trained individually, and the other aimed at optimizing the composed model as a whole. We probe their application in incremental classification tasks and highlight some valuable skills. In fact, the pool of incrementally learned modules not only supports the creation of an effective multi-task model but also enables unlearning and specialization in specific tasks.
[ "['Angelo Porrello' 'Lorenzo Bonicelli' 'Pietro Buzzega' 'Monica Millunzi'\n 'Simone Calderara' 'Rita Cucchiara']" ]
null
null
2405.16351
null
null
http://arxiv.org/pdf/2405.16351v1
2024-05-25T21:03:39Z
2024-05-25T21:03:39Z
A Differential Equation Approach for Wasserstein GANs and Beyond
We propose a new theoretical lens to view Wasserstein generative adversarial networks (WGANs). In our framework, we define a discretization inspired by a distribution-dependent ordinary differential equation (ODE). We show that such a discretization is convergent and propose a viable class of adversarial training methods to implement this discretization, which we call W1 Forward Euler (W1-FE). In particular, the ODE framework allows us to implement persistent training, a novel training technique that cannot be applied to typical WGAN algorithms without the ODE interpretation. Remarkably, when we do not implement persistent training, we prove that our algorithms simplify to existing WGAN algorithms; when we increase the level of persistent training appropriately, our algorithms outperform existing WGAN algorithms in both low- and high-dimensional examples.
[ "['Zachariah Malik' 'Yu-Jui Huang']" ]
null
null
2405.16361
null
null
http://arxiv.org/pdf/2405.16361v1
2024-05-25T21:53:58Z
2024-05-25T21:53:58Z
LDPKiT: Recovering Utility in LDP Schemes by Training with Noise^2
The adoption of large cloud-based models for inference has been hampered by concerns about the privacy leakage of end-user data. One method to mitigate this leakage is to add local differentially private noise to queries before sending them to the cloud, but this degrades utility as a side effect. Our key insight is that knowledge available in the noisy labels returned from performing inference on noisy inputs can be aggregated and used to recover the correct labels. We implement this insight in LDPKiT, which stands for Local Differentially-Private and Utility-Preserving Inference via Knowledge Transfer. LDPKiT uses the noisy labels returned from querying a set of noised inputs to train a local model (noise^2), which is then used to perform inference on the original set of inputs. Our experiments on CIFAR-10, Fashion-MNIST, SVHN, and CARER NLP datasets demonstrate that LDPKiT can improve utility without compromising privacy. For instance, on CIFAR-10, compared to a standard $\epsilon$-LDP scheme with $\epsilon=15$, which provides a weak privacy guarantee, LDPKiT can achieve nearly the same accuracy (within 1% drop) with $\epsilon=7$, offering an enhanced privacy guarantee. Moreover, the benefits of using LDPKiT increase at higher, more privacy-protective noise levels. For Fashion-MNIST and CARER, LDPKiT's accuracy on the sensitive dataset with $\epsilon=7$ not only exceeds the average accuracy of the standard $\epsilon$-LDP scheme with $\epsilon=7$ by roughly 20% and 9% but also outperforms the standard $\epsilon$-LDP scheme with $\epsilon=15$, a scenario with less noise and minimal privacy protection. We also perform Zest distance measurements to demonstrate that the type of distillation performed by LDPKiT is different from a model extraction attack.
[ "['Kexin Li' 'Yang Xi' 'Aastha Mehta' 'David Lie']" ]
null
null
2405.16368
null
null
http://arxiv.org/pdf/2405.16368v1
2024-05-25T22:37:43Z
2024-05-25T22:37:43Z
Qsco: A Quantum Scoring Module for Open-set Supervised Anomaly Detection
Open set anomaly detection (OSAD) is a crucial task that aims to identify abnormal patterns or behaviors in data sets, especially when the anomalies observed during training do not represent all possible classes of anomalies. Recent advances in quantum computing for handling complex data structures and improving machine learning models herald a paradigm shift in anomaly detection methodologies. This study proposes a Quantum Scoring Module (Qsco), embedding quantum variational circuits into neural networks to enhance the model's processing capabilities in handling uncertainty and unlabeled data. Extensive experiments conducted across eight real-world anomaly detection datasets demonstrate our model's superior performance in detecting anomalies across varied settings and reveal that integrating quantum simulators does not result in prohibitive time complexities. Our study validates the feasibility of quantum-enhanced anomaly detection methods in practical applications.
[ "['Yifeng Peng' 'Xinyi Li' 'Zhiding Liang' 'Ying Wang']" ]