categories | doi | id | year | venue | link | updated | published | title | abstract | authors
string | string | string | float64 | string | string | string | string | string | string | list
---|---|---|---|---|---|---|---|---|---|---|
null | null | 2405.15508 | null | null | http://arxiv.org/pdf/2405.15508v1 | 2024-05-24T12:52:46Z | 2024-05-24T12:52:46Z | Human-in-the-loop Reinforcement Learning for Data Quality Monitoring in
Particle Physics Experiments | Data Quality Monitoring (DQM) is a crucial task in large particle physics experiments, since detector malfunctioning can compromise the data. DQM is currently performed by human shifters, which is costly and results in limited accuracy. In this work, we provide a proof-of-concept for applying human-in-the-loop Reinforcement Learning (RL) to automate the DQM process while adapting to operating conditions that change over time. We implement a prototype based on the Proximal Policy Optimization (PPO) algorithm and validate it on a simplified synthetic dataset. We demonstrate how a multi-agent system can be trained for continuous automated monitoring during data collection, with human intervention actively requested only when relevant. We show that random, unbiased noise in human classification can be reduced, leading to an improved accuracy over the baseline. Additionally, we propose data augmentation techniques to deal with scarce data and to accelerate the learning process. Finally, we discuss further steps needed to implement the approach in the real world, including protocols for periodic control of the algorithm's outputs. | [
"['Olivia Jullian Parra' 'Julián García Pardiñas'\n 'Lorenzo Del Pianta Pérez' 'Maximilian Janisch' 'Suzanne Klaver'\n 'Thomas Lehéricy' 'Nicola Serra']"
] |
null | null | 2405.15509 | null | null | http://arxiv.org/pdf/2405.15509v1 | 2024-05-24T12:53:07Z | 2024-05-24T12:53:07Z | Randomized algorithms and PAC bounds for inverse reinforcement learning
in continuous spaces | This work studies discrete-time discounted Markov decision processes with continuous state and action spaces and addresses the inverse problem of inferring a cost function from observed optimal behavior. We first consider the case in which we have access to the entire expert policy and characterize the set of solutions to the inverse problem by using occupation measures, linear duality, and complementary slackness conditions. To avoid trivial solutions and ill-posedness, we introduce a natural linear normalization constraint. This results in an infinite-dimensional linear feasibility problem, prompting a thorough analysis of its properties. Next, we use linear function approximators and adopt a randomized approach, namely the scenario approach and related probabilistic feasibility guarantees, to derive epsilon-optimal solutions for the inverse problem. We further discuss the sample complexity for a desired approximation accuracy. Finally, we deal with the more realistic case where we only have access to a finite set of expert demonstrations and a generative model and provide bounds on the error made when working with samples. | [
"['Angeliki Kamoutsi' 'Peter Schmitt-Förster' 'Tobias Sutter'\n 'Volkan Cevher' 'John Lygeros']"
] |
null | null | 2405.15512 | null | null | http://arxiv.org/abs/2405.15512v2 | 2024-07-03T10:23:01Z | 2024-05-24T12:56:18Z | ChatGPT Code Detection: Techniques for Uncovering the Source of Code | In recent times, large language models (LLMs) have made significant strides in generating computer code, blurring the lines between code created by humans and code produced by artificial intelligence (AI). As these technologies evolve rapidly, it is crucial to explore how they influence code generation, especially given the risk of misuse in areas like higher education. This paper explores this issue by using advanced classification techniques to differentiate between code written by humans and that generated by ChatGPT, a type of LLM. We employ a new approach that combines powerful embedding features (black-box) with supervised learning algorithms - including Deep Neural Networks, Random Forests, and Extreme Gradient Boosting - to achieve this differentiation with an impressive accuracy of 98%. For the successful combinations, we also examine their model calibration, showing that some of the models are extremely well calibrated. Additionally, we present white-box features and an interpretable Bayes classifier to elucidate critical differences between the code sources, enhancing the explainability and transparency of our approach. Both approaches work well but provide at most 85-88% accuracy. We also show that untrained humans solve the same task no better than random guessing. This study is crucial in understanding and mitigating the potential risks associated with using AI in code generation, particularly in the context of higher education, software development, and competitive programming. | [
"['Marc Oedingen' 'Raphael C. Engelhardt' 'Robin Denz' 'Maximilian Hammer'\n 'Wolfgang Konen']"
] |
null | null | 2405.15514 | null | null | http://arxiv.org/pdf/2405.15514v1 | 2024-05-24T12:57:40Z | 2024-05-24T12:57:40Z | On the Convexity and Reliability of the Bethe Free Energy Approximation | The Bethe free energy approximation provides an effective way for relaxing NP-hard problems of probabilistic inference. However, its accuracy depends on the model parameters and particularly degrades if a phase transition in the model occurs. In this work, we analyze when the Bethe approximation is reliable and how this can be verified. We argue and show by experiment that it is mostly accurate if it is convex on a submanifold of its domain, the 'Bethe box'. For verifying its convexity, we derive two sufficient conditions that are based on the definiteness properties of the Bethe Hessian matrix: the first uses the concept of diagonal dominance, and the second decomposes the Bethe Hessian matrix into a sum of sparse matrices and characterizes the definiteness properties of the individual matrices in that sum. These theoretical results provide a simple way to estimate the critical phase transition temperature of a model. As a practical contribution we propose $\texttt{BETHE-MIN}$, a projected quasi-Newton method to efficiently find a minimum of the Bethe free energy. | [
"['Harald Leisenberger' 'Christian Knoll' 'Franz Pernkopf']"
] |
null | null | 2405.15517 | null | null | http://arxiv.org/pdf/2405.15517v2 | 2024-06-18T13:20:08Z | 2024-05-24T13:01:35Z | Erase to Enhance: Data-Efficient Machine Unlearning in MRI
Reconstruction | Machine unlearning is a promising paradigm for removing unwanted data samples from a trained model, towards ensuring compliance with privacy regulations and limiting harmful biases. Although unlearning has been shown in, e.g., classification and recommendation systems, its potential in medical image-to-image translation, specifically in image reconstruction, has not been thoroughly investigated. This paper shows that machine unlearning is possible in MRI tasks and has the potential to benefit bias removal. We set up a protocol to study how much shared knowledge exists between datasets of different organs, allowing us to effectively quantify the effect of unlearning. Our study reveals that combining training data can lead to hallucinations and reduced image quality in the reconstructed data. We use unlearning to remove hallucinations as a proxy exemplar of undesired data removal. Indeed, we show that machine unlearning is possible without full retraining. Furthermore, our observations indicate that maintaining high performance is feasible even when using only a subset of retain data. We have made our code publicly accessible. | [
"['Yuyang Xue' 'Jingshuai Liu' 'Steven McDonagh' 'Sotirios A. Tsaftaris']"
] |
null | null | 2405.15523 | null | null | http://arxiv.org/pdf/2405.15523v1 | 2024-05-24T13:05:05Z | 2024-05-24T13:05:05Z | Mosaic Memory: Fuzzy Duplication in Copyright Traps for Large Language
Models | The immense datasets used to develop Large Language Models (LLMs) often include copyright-protected content, typically without the content creator's consent. Copyright traps have been proposed to be injected into the original content, improving content detectability in newly released LLMs. Traps, however, rely on the exact duplication of a unique text sequence, leaving them vulnerable to commonly deployed data deduplication techniques. We here propose the generation of fuzzy copyright traps, featuring slight modifications across duplication. When injected in the fine-tuning data of a 1.3B LLM, we show fuzzy trap sequences to be memorized nearly as well as exact duplicates. Specifically, the Membership Inference Attack (MIA) ROC AUC only drops from 0.90 to 0.87 when 4 tokens are replaced across the fuzzy duplicates. We also find that selecting replacement positions to minimize the exact overlap between fuzzy duplicates leads to similar memorization, while making fuzzy duplicates highly unlikely to be removed by any deduplication process. Lastly, we argue that the fact that LLMs memorize across fuzzy duplicates challenges the study of LLM memorization relying on naturally occurring duplicates. Indeed, we find that the commonly used training dataset, The Pile, contains significant amounts of fuzzy duplicates. This introduces a previously unexplored confounding factor in post-hoc studies of LLM memorization, and questions the effectiveness of (exact) data deduplication as a privacy protection technique. | [
"['Igor Shilov' 'Matthieu Meeus' 'Yves-Alexandre de Montjoye']"
] |
null | null | 2405.15524 | null | null | http://arxiv.org/pdf/2405.15524v1 | 2024-05-24T13:09:52Z | 2024-05-24T13:09:52Z | Polyp Segmentation Generalisability of Pretrained Backbones | It has recently been demonstrated that pretraining backbones in a self-supervised manner generally provides better fine-tuned polyp segmentation performance, and that models with ViT-B backbones typically perform better than models with ResNet50 backbones. In this paper, we extend this recent work to consider generalisability. I.e., we assess the performance of models on a different dataset to that used for fine-tuning, accounting for variation in network architecture and pretraining pipeline (algorithm and dataset). This reveals how well models with different pretrained backbones generalise to data of a somewhat different distribution to the training data, which will likely arise in deployment due to different cameras and demographics of patients, amongst other factors. We observe that the previous findings, regarding pretraining pipelines for polyp segmentation, hold true when considering generalisability. However, our results imply that models with ResNet50 backbones typically generalise better, despite being outperformed by models with ViT-B backbones in evaluation on the test set from the same dataset used for fine-tuning. | [
"['Edward Sanderson' 'Bogdan J. Matuszewski']"
] |
null | null | 2405.15539 | null | null | http://arxiv.org/pdf/2405.15539v1 | 2024-05-24T13:27:23Z | 2024-05-24T13:27:23Z | A generalized neural tangent kernel for surrogate gradient learning | State-of-the-art neural network training methods depend on the gradient of the network function. Therefore, they cannot be applied to networks whose activation functions do not have useful derivatives, such as binary and discrete-time spiking neural networks. To overcome this problem, the activation function's derivative is commonly substituted with a surrogate derivative, giving rise to surrogate gradient learning (SGL). This method works well in practice but lacks theoretical foundation. The neural tangent kernel (NTK) has proven successful in the analysis of gradient descent. Here, we provide a generalization of the NTK, which we call the surrogate gradient NTK, that enables the analysis of SGL. First, we study a naive extension of the NTK to activation functions with jumps, demonstrating that gradient descent for such activation functions is also ill-posed in the infinite-width limit. To address this problem, we generalize the NTK to gradient descent with surrogate derivatives, i.e., SGL. We carefully define this generalization and expand the existing key theorems on the NTK with mathematical rigor. Further, we illustrate our findings with numerical experiments. Finally, we numerically compare SGL in networks with sign activation function and finite width to kernel regression with the surrogate gradient NTK; the results confirm that the surrogate gradient NTK provides a good characterization of SGL. | [
"['Luke Eilers' 'Raoul-Martin Memmesheimer' 'Sven Goedeke']"
] |
null | null | 2405.15540 | null | null | http://arxiv.org/pdf/2405.15540v1 | 2024-05-24T13:28:48Z | 2024-05-24T13:28:48Z | Bundle Neural Networks for message diffusion on graphs | The dominant paradigm for learning on graph-structured data is message passing. Despite being a strong inductive bias, the local message passing mechanism suffers from pathological issues such as over-smoothing, over-squashing, and limited node-level expressivity. To address these limitations we propose Bundle Neural Networks (BuNN), a new type of GNN that operates via message diffusion over flat vector bundles - structures analogous to connections on Riemannian manifolds that augment the graph by assigning to each node a vector space and an orthogonal map. A BuNN layer evolves the features according to a diffusion-type partial differential equation. When discretized, BuNNs are a special case of Sheaf Neural Networks (SNNs), a recently proposed MPNN capable of mitigating over-smoothing. The continuous nature of message diffusion enables BuNNs to operate on larger scales of the graph and, therefore, to mitigate over-squashing. Finally, we prove that BuNN can approximate any feature transformation over nodes on any (potentially infinite) family of graphs given injective positional encodings, resulting in universal node-level expressivity. We support our theory via synthetic experiments and showcase the strong empirical performance of BuNNs over a range of real-world tasks, achieving state-of-the-art results on several standard benchmarks in transductive and inductive settings. | [
"['Jacob Bamberger' 'Federico Barbero' 'Xiaowen Dong' 'Michael Bronstein']"
] |
null | null | 2405.15542 | null | null | http://arxiv.org/pdf/2405.15542v1 | 2024-05-24T13:29:57Z | 2024-05-24T13:29:57Z | SATSense: Multi-Satellite Collaborative Framework for Spectrum Sensing | Low Earth Orbit satellite Internet has recently been deployed, providing worldwide service with non-terrestrial networks. With the large-scale deployment of both non-terrestrial and terrestrial networks, the limited spectrum resources will be insufficient. Consequently, dynamic spectrum sharing is crucial for their coexistence in the same spectrum, where accurate spectrum sensing is essential. However, spectrum sensing in space is more challenging than in terrestrial networks due to variable channel conditions, making single-satellite sensing unstable. Therefore, we first attempt to design a collaborative sensing scheme utilizing diverse data from multiple satellites. However, it is non-trivial to achieve this collaboration due to heterogeneous channel quality, considerable raw sampling data, and packet loss. To address the above challenges, we first establish connections between the satellites by modeling their sensing data as a graph and devising a graph neural network-based algorithm to achieve effective spectrum sensing. Meanwhile, we establish a joint sub-Nyquist sampling and autoencoder data compression framework to reduce the amount of transmitted sensing data. Finally, we propose a contrastive learning-based mechanism that compensates for missing packets. Extensive experiments demonstrate that our proposed strategy can achieve efficient spectrum sensing performance and outperform the conventional deep learning algorithm in spectrum sensing accuracy. | [
"['Haoxuan Yuan' 'Zhe Chen' 'Zheng Lin' 'Jinbo Peng' 'Zihan Fang'\n 'Yuhang Zhong' 'Zihang Song' 'Yue Gao']"
] |
null | null | 2405.15544 | null | null | http://arxiv.org/pdf/2405.15544v1 | 2024-05-24T13:31:19Z | 2024-05-24T13:31:19Z | Knowledge-enhanced Relation Graph and Task Sampling for Few-shot
Molecular Property Prediction | Recently, few-shot molecular property prediction (FSMPP) has garnered increasing attention. Despite impressive breakthroughs achieved by existing methods, they often overlook the inherent many-to-many relationships between molecules and properties, which limits their performance. For instance, similar substructures of molecules can inspire the exploration of new compounds. Additionally, the relationships between properties can be quantified, with highly related properties providing more information for exploring the target property than weakly related ones. To this end, this paper proposes a novel meta-learning FSMPP framework (KRGTS), which comprises the Knowledge-enhanced Relation Graph module and the Task Sampling module. The knowledge-enhanced relation graph module constructs the molecule-property multi-relation graph (MPMRG) to capture the many-to-many relationships between molecules and properties. The task sampling module includes a meta-training task sampler and an auxiliary task sampler, responsible for scheduling the meta-training process and sampling highly related auxiliary tasks, respectively, thereby achieving efficient meta-knowledge learning and reducing noise introduction. Empirically, extensive experiments on five datasets demonstrate the superiority of KRGTS over a variety of state-of-the-art methods. The code is available at https://github.com/Vencent-Won/KRGTS-public. | [
"['Zeyu Wang' 'Tianyi Jiang' 'Yao Lu' 'Xiaoze Bao' 'Shanqing Yu' 'Bin Wei'\n 'Qi Xuan']"
] |
null | null | 2405.15545 | null | null | http://arxiv.org/pdf/2405.15545v1 | 2024-05-24T13:33:30Z | 2024-05-24T13:33:30Z | Freya PAGE: First Optimal Time Complexity for Large-Scale Nonconvex
Finite-Sum Optimization with Heterogeneous Asynchronous Computations | In practical distributed systems, workers are typically not homogeneous, and due to differences in hardware configurations and network conditions, can have highly varying processing times. We consider smooth nonconvex finite-sum (empirical risk minimization) problems in this setup and introduce a new parallel method, Freya PAGE, designed to handle arbitrarily heterogeneous and asynchronous computations. By being robust to "stragglers" and adaptively ignoring slow computations, Freya PAGE offers significantly improved time complexity guarantees compared to all previous methods, including Asynchronous SGD, Rennala SGD, SPIDER, and PAGE, while requiring weaker assumptions. The algorithm relies on novel generic stochastic gradient collection strategies with theoretical guarantees that can be of interest on their own, and may be used in the design of future optimization methods. Furthermore, we establish a lower bound for smooth nonconvex finite-sum problems in the asynchronous setup, providing a fundamental time complexity limit. This lower bound is tight and demonstrates the optimality of Freya PAGE in the large-scale regime, i.e., when $\sqrt{m} \geq n$, where $n$ is # of workers, and $m$ is # of data samples. | [
"['Alexander Tyurin' 'Kaja Gruntkowska' 'Peter Richtárik']"
] |
null | null | 2405.15551 | null | null | http://arxiv.org/pdf/2405.15551v1 | 2024-05-24T13:37:48Z | 2024-05-24T13:37:48Z | Thinking Forward: Memory-Efficient Federated Finetuning of Language
Models | Finetuning large language models (LLMs) in federated learning (FL) settings has become important as it allows resource-constrained devices to finetune a model using private data. However, finetuning LLMs using backpropagation requires excessive memory (especially from intermediate activations) for resource-constrained devices. While Forward-mode Auto-Differentiation (AD) can reduce memory footprint from activations, we observe that directly applying it to LLM finetuning results in slow convergence and poor accuracy. This work introduces Spry, an FL algorithm that splits trainable weights of an LLM among participating clients, such that each client computes gradients using Forward-mode AD that are closer estimates of the true gradients. Spry achieves a low memory footprint, high accuracy, and fast convergence. We theoretically show that the global gradients in Spry are unbiased estimates of true global gradients for homogeneous data distributions across clients, while heterogeneity increases bias of the estimates. We also derive Spry's convergence rate, showing that the gradients decrease inversely proportional to the number of FL rounds, indicating the convergence up to the limits of heterogeneity. Empirically, Spry reduces the memory footprint during training by 1.4-7.1$\times$ in contrast to backpropagation, while reaching comparable accuracy, across a wide range of language tasks, models, and FL settings. Spry reduces the convergence time by 1.2-20.3$\times$ and achieves 5.2-13.5% higher accuracy against state-of-the-art zero-order methods. When finetuning Llama2-7B with LoRA, compared to the peak memory usage of 33.9GB of backpropagation, Spry only consumes 6.2GB of peak memory. For OPT13B, the reduction is from 76.5GB to 10.8GB. Spry makes feasible previously impossible FL deployments on commodity mobile and edge devices. Source code is available at https://github.com/Astuary/Spry. | [
"['Kunjal Panchal' 'Nisarg Parikh' 'Sunav Choudhary' 'Lijun Zhang'\n 'Yuriy Brun' 'Hui Guan']"
] |
null | null | 2405.15556 | null | null | http://arxiv.org/pdf/2405.15556v1 | 2024-05-24T13:44:25Z | 2024-05-24T13:44:25Z | Certifiably Robust RAG against Retrieval Corruption | Retrieval-augmented generation (RAG) has been shown vulnerable to retrieval corruption attacks: an attacker can inject malicious passages into retrieval results to induce inaccurate responses. In this paper, we propose RobustRAG as the first defense framework against retrieval corruption attacks. The key insight of RobustRAG is an isolate-then-aggregate strategy: we get LLM responses from each passage in isolation and then securely aggregate these isolated responses. To instantiate RobustRAG, we design keyword-based and decoding-based algorithms for securely aggregating unstructured text responses. Notably, RobustRAG can achieve certifiable robustness: we can formally prove and certify that, for certain queries, RobustRAG can always return accurate responses, even when the attacker has full knowledge of our defense and can arbitrarily inject a small number of malicious passages. We evaluate RobustRAG on open-domain QA and long-form text generation datasets and demonstrate its effectiveness and generalizability across various tasks and datasets. | [
"['Chong Xiang' 'Tong Wu' 'Zexuan Zhong' 'David Wagner' 'Danqi Chen'\n 'Prateek Mittal']"
] |
null | null | 2405.15557 | null | null | http://arxiv.org/pdf/2405.15557v1 | 2024-05-24T13:44:30Z | 2024-05-24T13:44:30Z | Learning from Linear Algebra: A Graph Neural Network Approach to
Preconditioner Design for Conjugate Gradient Solvers | Large linear systems are ubiquitous in modern computational science. The main recipe for solving them is iterative solvers with well-designed preconditioners. Deep learning models may be used to precondition residuals during iteration of such linear solvers as the conjugate gradient (CG) method. Neural network models require an enormous number of parameters to approximate well in this setup. Another approach is to take advantage of small graph neural networks (GNNs) to construct preconditioners of the predefined sparsity pattern. In our work, we recall well-established preconditioners from linear algebra and use them as a starting point for training the GNN. Numerical experiments demonstrate that our approach outperforms both classical methods and neural network-based preconditioning. We also provide a heuristic justification for the loss function used and validate our approach on complex datasets. | [
"['Vladislav Trifonov' 'Alexander Rudikov' 'Oleg Iliev' 'Ivan Oseledets'\n 'Ekaterina Muravleva']"
] |
null | null | 2405.15564 | null | null | http://arxiv.org/pdf/2405.15564v2 | 2024-05-27T01:42:32Z | 2024-05-24T13:52:41Z | Rethinking Independent Cross-Entropy Loss For Graph-Structured Data | Graph neural networks (GNNs) have exhibited prominent performance in learning graph-structured data. For the node classification task, under the i.i.d. assumption among node labels, traditional supervised learning simply sums up the cross-entropy losses of the independent training nodes and applies the average loss to optimize GNNs' weights. But unlike other data formats, the nodes are naturally connected. It is found that the independent distribution modeling of node labels restricts GNNs' capability to generalize over the entire graph and to defend against adversarial attacks. In this work, we propose a new framework, termed joint-cluster supervised learning, to model the joint distribution of each node with its corresponding cluster. We learn the joint distribution of node and cluster labels conditioned on their representations, and train GNNs with the obtained joint loss. In this way, the data-label reference signals extracted from the local cluster explicitly strengthen the discrimination ability on the target node. Extensive experiments demonstrate that our joint-cluster supervised learning can effectively bolster GNNs' node classification accuracy. Furthermore, benefiting from reference signals that may be free from malicious interference, our learning paradigm significantly protects node classification from being affected by adversarial attacks. | [
"['Rui Miao' 'Kaixiong Zhou' 'Yili Wang' 'Ninghao Liu' 'Ying Wang'\n 'Xin Wang']"
] |
null | null | 2405.15579 | null | null | http://arxiv.org/pdf/2405.15579v1 | 2024-05-24T14:06:08Z | 2024-05-24T14:06:08Z | Generating density nowcasts for U.S. GDP growth with deep learning:
Bayes by Backprop and Monte Carlo dropout | Recent results in the literature indicate that artificial neural networks (ANNs) can outperform the dynamic factor model (DFM) in terms of the accuracy of GDP nowcasts. Compared to the DFM, the performance advantage of these highly flexible, nonlinear estimators is particularly evident in periods of recessions and structural breaks. From the perspective of policy-makers, however, nowcasts are the most useful when they are conveyed with uncertainty attached to them. While the DFM and other classical time series approaches analytically derive the predictive (conditional) distribution for GDP growth, ANNs can only produce point nowcasts based on their default training procedure (backpropagation). To fill this gap, we adapt, for the first time in the literature, two different deep learning algorithms that enable ANNs to generate density nowcasts for U.S. GDP growth: Bayes by Backprop and Monte Carlo dropout. The accuracy of point nowcasts, defined as the mean of the empirical predictive distribution, is evaluated relative to a naive constant growth model for GDP and a benchmark DFM specification. Using a 1D CNN as the underlying ANN architecture, both algorithms outperform those benchmarks during the evaluation period (2012:Q1 -- 2022:Q4). Furthermore, both algorithms are able to dynamically adjust the location (mean), scale (variance), and shape (skew) of the empirical predictive distribution. The results indicate that both Bayes by Backprop and Monte Carlo dropout can effectively augment the scope and functionality of ANNs, rendering them a fully compatible and competitive alternative to classical time series approaches. | [
"['Kristóf Németh' 'Dániel Hadházi']"
] |
null | null | 2405.15583 | null | null | http://arxiv.org/pdf/2405.15583v1 | 2024-05-24T14:12:23Z | 2024-05-24T14:12:23Z | Transfer Learning with Informative Priors: Simple Baselines Better than
Previously Reported | We pursue transfer learning to improve classifier accuracy on a target task with few labeled examples available for training. Recent work suggests that using a source task to learn a prior distribution over neural net weights, not just an initialization, can boost target task performance. In this study, we carefully compare transfer learning with and without source task informed priors across 5 datasets. We find that standard transfer learning informed by an initialization only performs far better than reported in previous comparisons. The relative gains of methods using informative priors over standard transfer learning vary in magnitude across datasets. For the scenario of 5-300 examples per class, we find negative or negligible gains on 2 datasets, modest gains (between 1.5-3 points of accuracy) on 2 other datasets, and substantial gains (>8 points) on one dataset. Among methods using informative priors, we find that an isotropic covariance appears competitive with a learned low-rank covariance matrix while being substantially simpler to understand and tune. Further analysis suggests that the mechanistic justification for informed priors -- hypothesized improved alignment between train and test loss landscapes -- is not consistently supported due to high variability in empirical landscapes. We release code to allow independent reproduction of all experiments. | [
"['Ethan Harvey' 'Mikhail Petrov' 'Michael C. Hughes']"
] |
null | null | 2405.15586 | null | null | http://arxiv.org/pdf/2405.15586v1 | 2024-05-24T14:14:24Z | 2024-05-24T14:14:24Z | DAGER: Exact Gradient Inversion for Large Language Models | Federated learning works by aggregating locally computed gradients from multiple clients, thus enabling collaborative training without sharing private client data. However, prior work has shown that the data can actually be recovered by the server using so-called gradient inversion attacks. While these attacks perform well when applied on images, they are limited in the text domain and only permit approximate reconstruction of small batches and short input sequences. In this work, we propose DAGER, the first algorithm to recover whole batches of input text exactly. DAGER leverages the low-rank structure of self-attention layer gradients and the discrete nature of token embeddings to efficiently check if a given token sequence is part of the client data. We use this check to exactly recover full batches in the honest-but-curious setting without any prior on the data for both encoder- and decoder-based architectures using exhaustive heuristic search and a greedy approach, respectively. We provide an efficient GPU implementation of DAGER and show experimentally that it recovers full batches of size up to 128 on large language models (LLMs), beating prior attacks in speed (20x at same batch size), scalability (10x larger batches), and reconstruction quality (ROUGE-1/2 > 0.99). | [
"['Ivo Petrov' 'Dimitar I. Dimitrov' 'Maximilian Baader'\n 'Mark Niklas Müller' 'Martin Vechev']"
] |
null | null | 2405.15589 | null | null | http://arxiv.org/pdf/2405.15589v2 | 2024-06-21T19:59:31Z | 2024-05-24T14:20:09Z | Efficient Adversarial Training in LLMs with Continuous Attacks | Large language models (LLMs) are vulnerable to adversarial attacks that can bypass their safety guardrails. In many domains, adversarial training has proven to be one of the most promising methods to reliably improve robustness against such attacks. Yet, in the context of LLMs, current methods for adversarial training are hindered by the high computational costs required to perform discrete adversarial attacks at each training iteration. We address this problem by instead calculating adversarial attacks in the continuous embedding space of the LLM, which is orders of magnitude more efficient. We propose a fast adversarial training algorithm (C-AdvUL) composed of two losses: the first makes the model robust on continuous embedding attacks computed on an adversarial behaviour dataset; the second ensures the usefulness of the final model by fine-tuning on utility data. Moreover, we introduce C-AdvIPO, an adversarial variant of IPO that does not require utility data for adversarially robust alignment. Our empirical evaluation on four models from different families (Gemma, Phi3, Mistral, Zephyr) and at different scales (2B, 3.8B, 7B) shows that both algorithms substantially enhance LLM robustness against discrete attacks (GCG, AutoDAN, PAIR), while maintaining utility. Our results demonstrate that robustness to continuous perturbations can extrapolate to discrete threat models. Thereby, we present a path toward scalable adversarial training algorithms for robustly aligning LLMs. | [
"['Sophie Xhonneux' 'Alessandro Sordoni' 'Stephan Günnemann'\n 'Gauthier Gidel' 'Leo Schwinn']"
] |
null | null | 2405.15593 | null | null | http://arxiv.org/pdf/2405.15593v1 | 2024-05-24T14:25:23Z | 2024-05-24T14:25:23Z | MicroAdam: Accurate Adaptive Optimization with Low Space Overhead and
Provable Convergence | We propose a new variant of the Adam optimizer [Kingma and Ba, 2014] called MicroAdam that specifically minimizes memory overheads, while maintaining theoretical convergence guarantees. We achieve this by compressing the gradient information before it is fed into the optimizer state, thereby reducing its memory footprint significantly. We control the resulting compression error via a novel instance of the classical error feedback mechanism from distributed optimization [Seide et al., 2014, Alistarh et al., 2018, Karimireddy et al., 2019] in which the error correction information is itself compressed to allow for practical memory gains. We prove that the resulting approach maintains theoretical convergence guarantees competitive to those of AMSGrad, while providing good practical performance. Specifically, we show that MicroAdam can be implemented efficiently on GPUs: on both million-scale (BERT) and billion-scale (LLaMA) models, MicroAdam provides practical convergence competitive to that of the uncompressed Adam baseline, with lower memory usage and similar running time. Our code is available at https://github.com/IST-DASLab/MicroAdam. | [
"['Ionut-Vlad Modoranu' 'Mher Safaryan' 'Grigory Malinovsky' 'Eldar Kurtic'\n 'Thomas Robert' 'Peter Richtarik' 'Dan Alistarh']"
] |
null | null | 2405.15598 | null | null | http://arxiv.org/pdf/2405.15598v2 | 2024-06-23T17:11:28Z | 2024-05-24T14:30:00Z | MCDFN: Supply Chain Demand Forecasting via an Explainable Multi-Channel
Data Fusion Network Model | Accurate demand forecasting is crucial for optimizing supply chain management. Traditional methods often fail to capture complex patterns from seasonal variability and special events. Despite advancements in deep learning, interpretable forecasting models remain a challenge. To address this, we introduce the Multi-Channel Data Fusion Network (MCDFN), a hybrid architecture that integrates Convolutional Neural Networks (CNN), Long Short-Term Memory networks (LSTM), and Gated Recurrent Units (GRU) to enhance predictive performance by extracting spatial and temporal features from time series data. Our rigorous benchmarking demonstrates that MCDFN outperforms seven other deep-learning models, achieving superior metrics: MSE (23.5738%), RMSE (4.8553%), MAE (3.9991%), and MAPE (20.1575%). Additionally, MCDFN's predictions were statistically indistinguishable from actual values, confirmed by a paired t-test with a 5% p-value and a 10-fold cross-validated statistical paired t-test. We apply explainable AI techniques like ShapTime and Permutation Feature Importance to enhance interpretability. This research advances demand forecasting methodologies and offers practical guidelines for integrating MCDFN into supply chain systems, highlighting future research directions for scalability and user-friendly deployment. | [
"['Md Abrar Jahin' 'Asef Shahriar' 'Md Al Amin']"
] |
null | null | 2405.15599 | null | null | http://arxiv.org/pdf/2405.15599v1 | 2024-05-24T14:30:40Z | 2024-05-24T14:30:40Z | On the Computational Landscape of Replicable Learning | We study computational aspects of algorithmic replicability, a notion of stability introduced by Impagliazzo, Lei, Pitassi, and Sorrell [2022]. Motivated by a recent line of work that established strong statistical connections between replicability and other notions of learnability such as online learning, private learning, and SQ learning, we aim to understand better the computational connections between replicability and these learning paradigms. Our first result shows that there is a concept class that is efficiently replicably PAC learnable, but, under standard cryptographic assumptions, no efficient online learner exists for this class. Subsequently, we design an efficient replicable learner for PAC learning parities when the marginal distribution is far from uniform, making progress on a question posed by Impagliazzo et al. [2022]. To obtain this result, we design a replicable lifting framework inspired by Blanc, Lange, Malik, and Tan [2023] that transforms in a black-box manner efficient replicable PAC learners under the uniform marginal distribution over the Boolean hypercube to replicable PAC learners under any marginal distribution, with sample and time complexity that depends on a certain measure of the complexity of the distribution. Finally, we show that any pure DP learner can be transformed to a replicable one in time polynomial in the accuracy, confidence parameters and exponential in the representation dimension of the underlying hypothesis class. | [
"['Alkis Kalavasis' 'Amin Karbasi' 'Grigoris Velegkas' 'Felix Zhou']"
] |
null | null | 2405.15600 | null | null | http://arxiv.org/pdf/2405.15600v1 | 2024-05-20T03:14:15Z | 2024-05-20T03:14:15Z | Transfer Learning for Spatial Autoregressive Models | The spatial autoregressive (SAR) model has been widely applied in various empirical economic studies to characterize the spatial dependence among subjects. However, the precision of estimating the SAR model diminishes when the sample size of the target data is limited. In this paper, we propose a new transfer learning framework for the SAR model to borrow the information from similar source data to improve both estimation and prediction. When the informative source data sets are known, we introduce a two-stage algorithm, including a transferring stage and a debiasing stage, to estimate the unknown parameters and also establish the theoretical convergence rates for the resulting estimators. If we do not know which sources to transfer from, a transferable source detection algorithm is proposed to detect informative source data based on spatial residual bootstrap to retain the necessary spatial dependence. Its detection consistency is also derived. Simulation studies demonstrate that using informative source data, our transfer learning algorithm significantly enhances the performance of the classical two-stage least squares estimator. In the empirical application, we apply our method to the election prediction in swing states in the 2020 U.S. presidential election, utilizing polling data from the 2016 U.S. presidential election along with other demographic and geographical data. The empirical results show that our method outperforms traditional estimation methods. | [
"['Hao Zeng' 'Wei Zhong' 'Xingbai Xu']"
] |
null | null | 2405.15603 | null | null | http://arxiv.org/pdf/2405.15603v2 | 2024-05-27T14:23:46Z | 2024-05-24T14:36:02Z | Kronecker-Factored Approximate Curvature for Physics-Informed Neural
Networks | Physics-informed neural networks (PINNs) are infamous for being hard to train. Recently, second-order methods based on natural gradient and Gauss-Newton methods have shown promising performance, improving the accuracy achieved by first-order methods by several orders of magnitude. While promising, the proposed methods only scale to networks with a few thousand parameters due to the high computational cost to evaluate, store, and invert the curvature matrix. We propose Kronecker-factored approximate curvature (KFAC) for PINN losses that greatly reduces the computational cost and allows scaling to much larger networks. Our approach goes beyond the established KFAC for traditional deep learning problems as it captures contributions from a PDE's differential operator that are crucial for optimization. To establish KFAC for such losses, we use Taylor-mode automatic differentiation to describe the differential operator's computation graph as a forward network with shared weights. This allows us to apply KFAC thanks to a recently-developed general formulation for networks with weight sharing. Empirically, we find that our KFAC-based optimizers are competitive with expensive second-order methods on small problems, scale more favorably to higher-dimensional neural networks and PDEs, and consistently outperform first-order methods and LBFGS. | [
"['Felix Dangel' 'Johannes Müller' 'Marius Zeinhofer']"
] |
null | null | 2405.15605 | null | null | http://arxiv.org/pdf/2405.15605v2 | 2024-05-28T07:54:38Z | 2024-05-24T14:43:37Z | Fast-PGM: Fast Probabilistic Graphical Model Learning and Inference | Probabilistic graphical models (PGMs) serve as a powerful framework for modeling complex systems with uncertainty and extracting valuable insights from data. However, users face challenges when applying PGMs to their problems in terms of efficiency and usability. This paper presents Fast-PGM, an efficient and open-source library for PGM learning and inference. Fast-PGM supports comprehensive tasks on PGMs, including structure and parameter learning, as well as exact and approximate inference, and enhances the efficiency of these tasks through computational and memory optimizations and parallelization techniques. Concurrently, Fast-PGM provides developers with flexible building blocks, learners with detailed documentation, and non-experts with user-friendly interfaces, thereby improving the usability of PGMs for users across a spectrum of expertise levels. The source code of Fast-PGM is available at https://github.com/jjiantong/FastPGM. | [
"['Jiantong Jiang' 'Zeyi Wen' 'Peiyu Yang' 'Atif Mansoor' 'Ajmal Mian']"
] |
null | null | 2405.15613 | null | null | http://arxiv.org/pdf/2405.15613v2 | 2024-06-28T09:22:38Z | 2024-05-24T14:58:51Z | Automatic Data Curation for Self-Supervised Learning: A Clustering-Based
Approach | Self-supervised features are the cornerstone of modern machine learning systems. They are typically pre-trained on data collections whose construction and curation require extensive human effort. This manual process has some limitations similar to those encountered in supervised learning, e.g., the crowd-sourced selection of data is costly and time-consuming, preventing the dataset size from scaling. In this work, we consider the problem of automatic curation of high-quality datasets for self-supervised pre-training. We posit that such datasets should be large, diverse and balanced, and propose a clustering-based approach for building ones satisfying all these criteria. Our method involves successive and hierarchical applications of $k$-means on a large and diverse data repository to obtain clusters that distribute uniformly among data concepts, followed by a hierarchical, balanced sampling step from these clusters. Extensive experiments on three different data domains including web-based images, satellite images and text show that features trained on our automatically curated datasets outperform those trained on uncurated data while being on par or better than ones trained on manually curated data. Code is available at https://github.com/facebookresearch/ssl-data-curation. | [
"['Huy V. Vo' 'Vasil Khalidov' 'Timothée Darcet' 'Théo Moutakanni'\n 'Nikita Smetanin' 'Marc Szafraniec' 'Hugo Touvron' 'Camille Couprie'\n 'Maxime Oquab' 'Armand Joulin' 'Hervé Jégou' 'Patrick Labatut'\n 'Piotr Bojanowski']"
] |
null | null | 2405.15616 | null | null | http://arxiv.org/pdf/2405.15616v1 | 2024-05-24T15:03:56Z | 2024-05-24T15:03:56Z | Neuromorphic dreaming: A pathway to efficient learning in artificial
agents | Achieving energy efficiency in learning is a key challenge for artificial intelligence (AI) computing platforms. Biological systems demonstrate remarkable abilities to learn complex skills quickly and efficiently. Inspired by this, we present a hardware implementation of model-based reinforcement learning (MBRL) using spiking neural networks (SNNs) on mixed-signal analog/digital neuromorphic hardware. This approach leverages the energy efficiency of mixed-signal neuromorphic chips while achieving high sample efficiency through an alternation of online learning, referred to as the "awake" phase, and offline learning, known as the "dreaming" phase. The proposed model includes two symbiotic networks: an agent network that learns by combining real and simulated experiences, and a learned world model network that generates the simulated experiences. We validate the model by training the hardware implementation to play the Atari game Pong. We start from a baseline consisting of an agent network learning without a world model and dreaming, which successfully learns to play the game. By incorporating dreaming, the number of required real game experiences is reduced significantly compared to the baseline. The networks are implemented using a mixed-signal neuromorphic processor, with the readout layers trained using a computer in-the-loop, while the other layers remain fixed. These results pave the way toward energy-efficient neuromorphic learning systems capable of rapid learning in real world applications and use-cases. | [
"['Ingo Blakowski' 'Dmitrii Zendrikov' 'Cristiano Capone'\n 'Giacomo Indiveri']"
] |
null | null | 2405.15618 | null | null | http://arxiv.org/pdf/2405.15618v1 | 2024-05-24T15:04:36Z | 2024-05-24T15:04:36Z | MLPs Learn In-Context | In-context learning (ICL), the remarkable ability to solve a task from only input exemplars, has commonly been assumed to be a unique hallmark of Transformer models. In this study, we demonstrate that multi-layer perceptrons (MLPs) can also learn in-context. Moreover, we find that MLPs, and the closely related MLP-Mixer models, learn in-context competitively with Transformers given the same compute budget. We further show that MLPs outperform Transformers on a subset of ICL tasks designed to test relational reasoning. These results suggest that in-context learning is not exclusive to Transformers and highlight the potential of exploring this phenomenon beyond attention-based architectures. In addition, MLPs' surprising success on relational tasks challenges prior assumptions about simple connectionist models. Altogether, our results endorse the broad trend that "less inductive bias is better" and contribute to the growing interest in all-MLP alternatives to task-specific architectures. | [
"['William L. Tong' 'Cengiz Pehlevan']"
] |
null | null | 2405.15624 | null | null | http://arxiv.org/pdf/2405.15624v1 | 2024-05-24T15:13:53Z | 2024-05-24T15:13:53Z | Inverse-RLignment: Inverse Reinforcement Learning from Demonstrations
for LLM Alignment | Aligning Large Language Models (LLMs) is crucial for enhancing their safety and utility. However, existing methods, primarily based on preference datasets, face challenges such as noisy labels, high annotation costs, and privacy concerns. In this work, we introduce Alignment from Demonstrations (AfD), a novel approach leveraging high-quality demonstration data to overcome these challenges. We formalize AfD within a sequential decision-making framework, highlighting its unique challenge of missing reward signals. Drawing insights from forward and inverse reinforcement learning, we introduce divergence minimization objectives for AfD. Analytically, we elucidate the mass-covering and mode-seeking behaviors of various approaches, explaining when and why certain methods are superior. Practically, we propose a computationally efficient algorithm that extrapolates over a tailored reward model for AfD. We validate our key insights through experiments on the Harmless and Helpful tasks, demonstrating their strong empirical performance while maintaining simplicity. | [
"['Hao Sun' 'Mihaela van der Schaar']"
] |
null | null | 2405.15625 | null | null | http://arxiv.org/pdf/2405.15625v1 | 2024-05-24T15:14:23Z | 2024-05-24T15:14:23Z | Nonlinear denoising score matching for enhanced learning of structured
distributions | We present a novel method for training score-based generative models which uses nonlinear noising dynamics to improve learning of structured distributions. Generalizing to a nonlinear drift allows for additional structure to be incorporated into the dynamics, thus making the training better adapted to the data, e.g., in the case of multimodality or (approximate) symmetries. Such structure can be obtained from the data by an inexpensive preprocessing step. The nonlinear dynamics introduces new challenges into training which we address in two ways: 1) we develop a new nonlinear denoising score matching (NDSM) method, 2) we introduce neural control variates in order to reduce the variance of the NDSM training objective. We demonstrate the effectiveness of this method on several examples: a) a collection of low-dimensional examples, motivated by clustering in latent space, b) high-dimensional images, addressing issues with mode collapse, small training sets, and approximate symmetries, the latter being a challenge for methods based on equivariant neural networks, which require exact symmetries. | [
"['Jeremiah Birrell' 'Markos A. Katsoulakis' 'Luc Rey-Bellet'\n 'Benjamin Zhang' 'Wei Zhu']"
] |
null | null | 2405.15632 | null | null | http://arxiv.org/pdf/2405.15632v1 | 2024-05-24T15:17:51Z | 2024-05-24T15:17:51Z | Federated Behavioural Planes: Explaining the Evolution of Client
Behaviour in Federated Learning | Federated Learning (FL), a privacy-aware approach in distributed deep learning environments, enables many clients to collaboratively train a model without sharing sensitive data, thereby reducing privacy risks. However, enabling human trust and control over FL systems requires understanding the evolving behaviour of clients, whether beneficial or detrimental for the training, which still represents a key challenge in the current literature. To address this challenge, we introduce Federated Behavioural Planes (FBPs), a novel method to analyse, visualise, and explain the dynamics of FL systems, showing how clients behave under two different lenses: predictive performance (error behavioural space) and decision-making processes (counterfactual behavioural space). Our experiments demonstrate that FBPs provide informative trajectories describing the evolving states of clients and their contributions to the global model, thereby enabling the identification of clusters of clients with similar behaviours. Leveraging the patterns identified by FBPs, we propose a robust aggregation technique named Federated Behavioural Shields to detect malicious or noisy client models, thereby enhancing security and surpassing the efficacy of existing state-of-the-art FL defense mechanisms. | [
"['Dario Fenoglio' 'Gabriele Dominici' 'Pietro Barbiero' 'Alberto Tonda'\n 'Martin Gjoreski' 'Marc Langheinrich']"
] |
null | null | 2405.15636 | null | null | http://arxiv.org/pdf/2405.15636v2 | 2024-06-25T19:05:11Z | 2024-05-24T15:22:58Z | Visualize and Paint GAN Activations | We investigate how generated structures of GANs correlate with their activations in hidden layers, with the purpose of better understanding the inner workings of those models and being able to paint structures with unconditionally trained GANs. This gives us more control over the generated images, allowing us to generate them from a semantic segmentation map while not requiring such a segmentation in the training data. To this end we introduce the concept of tileable features, allowing us to identify activations that work well for painting. | [
"['Rudolf Herdt' 'Peter Maass']"
] |
null | null | 2405.15642 | null | null | http://arxiv.org/abs/2405.15642v1 | 2024-05-24T15:33:08Z | 2024-05-24T15:33:08Z | Effective Confidence Region Prediction Using Probability Forecasters | Confidence region prediction is a practically useful extension to the commonly studied pattern recognition problem. Instead of predicting a single label, the constraint is relaxed to allow prediction of a subset of labels given a desired confidence level 1-delta. Ideally, effective region predictions should be (1) well calibrated - predictive regions at confidence level 1-delta should err with relative frequency at most delta and (2) be as narrow (or certain) as possible. We present a simple technique to generate confidence region predictions from conditional probability estimates (probability forecasts). We use this 'conversion' technique to generate confidence region predictions from probability forecasts output by standard machine learning algorithms when tested on 15 multi-class datasets. Our results show that approximately 44% of experiments demonstrate well-calibrated confidence region predictions, with the K-Nearest Neighbour algorithm tending to perform consistently well across all data. Our results illustrate the practical benefits of effective confidence region prediction with respect to medical diagnostics, where guarantees of capturing the true disease label can be given. | [
"['David Lindsay' 'Sian Lindsay']"
] |
null | null | 2405.15643 | null | null | http://arxiv.org/pdf/2405.15643v1 | 2024-05-24T15:33:27Z | 2024-05-24T15:33:27Z | Reducing the cost of posterior sampling in linear inverse problems via
task-dependent score learning | Score-based diffusion models (SDMs) offer a flexible approach to sample from the posterior distribution in a variety of Bayesian inverse problems. In the literature, the prior score is utilized to sample from the posterior by different methods that require multiple evaluations of the forward mapping in order to generate a single posterior sample. These methods are often designed with the objective of enabling the direct use of the unconditional prior score and, therefore, task-independent training. In this paper, we focus on linear inverse problems, when evaluation of the forward mapping is computationally expensive and frequent posterior sampling is required for new measurement data, such as in medical imaging. We demonstrate that the evaluation of the forward mapping can be entirely bypassed during posterior sample generation. Instead, without introducing any error, the computational effort can be shifted to an offline task of training the score of a specific diffusion-like random process. In particular, the training is task-dependent requiring information about the forward mapping but not about the measurement data. It is shown that the conditional score corresponding to the posterior can be obtained from the auxiliary score by suitable affine transformations. We prove that this observation generalizes to the framework of infinite-dimensional diffusion models introduced recently and provide numerical analysis of the method. Moreover, we validate our findings with numerical experiments. | [
"['Fabian Schneider' 'Duc-Lam Duong' 'Matti Lassas' 'Maarten V. de Hoop'\n 'Tapio Helin']"
] |
null | null | 2405.15644 | null | null | http://arxiv.org/pdf/2405.15644v1 | 2024-05-24T15:34:09Z | 2024-05-24T15:34:09Z | Harnessing Increased Client Participation with Cohort-Parallel Federated
Learning | Federated Learning (FL) is a machine learning approach where nodes collaboratively train a global model. As more nodes participate in a round of FL, the effectiveness of individual model updates by nodes also diminishes. In this study, we increase the effectiveness of client updates by dividing the network into smaller partitions, or cohorts. We introduce Cohort-Parallel Federated Learning (CPFL): a novel learning approach where each cohort independently trains a global model using FL, until convergence, and the models produced by each cohort are then unified using one-shot Knowledge Distillation (KD) and a cross-domain, unlabeled dataset. The insight behind CPFL is that smaller, isolated networks converge quicker than in a one-network setting where all nodes participate. Through exhaustive experiments involving realistic traces and non-IID data distributions on the CIFAR-10 and FEMNIST image classification tasks, we investigate the balance between the number of cohorts, model accuracy, training time, and compute and communication resources. Compared to traditional FL, CPFL with four cohorts, non-IID data distribution, and CIFAR-10 yields a 1.9$\times$ reduction in train time and a 1.3$\times$ reduction in resource usage, with a minimal drop in test accuracy. | [
"['Akash Dhasade' 'Anne-Marie Kermarrec' 'Tuan-Anh Nguyen' 'Rafael Pires'\n 'Martijn de Vos']"
] |
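
The CPFL procedure described above (2405.15644) reduces to a short skeleton. Here `fl_train`, `distill`, and `unlabeled_data` are placeholder names we introduce for illustration; they stand for intra-cohort federated training, one-shot knowledge distillation, and the cross-domain unlabeled dataset, and are not the authors' code.

```python
import random

def cohort_parallel_fl(clients, num_cohorts, fl_train, distill, unlabeled_data):
    """Skeleton of Cohort-Parallel Federated Learning (CPFL).

    Each cohort independently trains a model with standard FL until
    convergence; the cohort models are then unified with one-shot
    knowledge distillation on unlabeled data.
    """
    random.shuffle(clients)
    cohorts = [clients[i::num_cohorts] for i in range(num_cohorts)]
    cohort_models = [fl_train(cohort) for cohort in cohorts]  # parallel in practice
    return distill(cohort_models, unlabeled_data)             # single unified model
```
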
null | null | 2405.15655 | null | null | http://arxiv.org/pdf/2405.15655v2 | 2024-05-27T02:33:22Z | 2024-05-24T15:49:00Z | HiddenSpeaker: Generate Imperceptible Unlearnable Audios for Speaker
Verification System | In recent years, the remarkable advancements in deep neural networks have brought tremendous convenience. However, the training process of a highly effective model necessitates a substantial quantity of samples, which brings serious potential threats, such as unauthorized exploitation and privacy leakage. In response, we propose a framework named HiddenSpeaker, embedding imperceptible perturbations within the training speech samples and rendering them unlearnable for deep-learning-based speaker verification systems that employ large-scale speakers for efficient training. HiddenSpeaker utilizes a simplified error-minimizing method named Single-Level Error-Minimizing (SLEM) to generate specific and effective perturbations. Additionally, a hybrid objective function is employed for human perceptual optimization, ensuring the perturbation is imperceptible to human listeners. We conduct extensive experiments on multiple state-of-the-art (SOTA) models in the speaker verification domain to evaluate HiddenSpeaker. Our results demonstrate that HiddenSpeaker not only deceives the model with unlearnable samples but also enhances the imperceptibility of the perturbations, showcasing strong transferability across different models. | [
"['Zhisheng Zhang' 'Pengyang Huang']"
] |
null | null | 2405.15662 | null | null | http://arxiv.org/pdf/2405.15662v1 | 2024-05-24T15:59:17Z | 2024-05-24T15:59:17Z | Class Machine Unlearning for Complex Data via Concepts Inference and
Data Poisoning | In the current AI era, users may request AI companies to delete their data from the training dataset due to privacy concerns. For a model owner, retraining a model consumes significant computational resources. Therefore, machine unlearning is a newly emerged technology that allows a model owner to delete requested training data or an entire class with little effect on model performance. However, for large-scale complex data, such as image or text data, unlearning a class from a model leads to inferior performance due to the difficulty of identifying the link between classes and the model. Inaccurate class deletion may lead to over- or under-unlearning. In this paper, to accurately define the unlearning class of complex data, we apply the definition of Concept, rather than an image feature or a token of text data, to represent the semantic information of the unlearning class. This new representation can cut the link between the model and the class, leading to a complete erasure of the class's impact. To analyze the impact of concepts on complex data, we adopt a Post-hoc Concept Bottleneck Model and Integrated Gradients to precisely identify concepts across different classes. Next, we take advantage of data poisoning with random and targeted labels to propose unlearning methods. We test our methods on both image classification models and large language models (LLMs). The results consistently show that the proposed methods can accurately erase targeted information from models and can largely maintain the performance of the models. | [
"['Wenhan Chang' 'Tianqing Zhu' 'Heng Xu' 'Wenjian Liu' 'Wanlei Zhou']"
] |
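
The data-poisoning step mentioned in the abstract above (2405.15662) can be sketched as follows. The concept-identification stage (Post-hoc CBM plus Integrated Gradients) is abstracted away, and the function is an illustrative assumption of ours rather than the authors' implementation.

```python
import random

def poison_for_unlearning(dataset, forget_class, num_classes, target_label=None):
    """Relabel samples of the class to be unlearned.

    Random variant: each sample of the forgotten class receives a random
    wrong label; targeted variant: all receive one fixed label. Fine-tuning
    on the poisoned set is then meant to erase the class from the model.
    """
    other_labels = [c for c in range(num_classes) if c != forget_class]
    poisoned = []
    for x, y in dataset:
        if y == forget_class:
            y = target_label if target_label is not None else random.choice(other_labels)
        poisoned.append((x, y))
    return poisoned
```
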
null | null | 2405.15673 | null | null | http://arxiv.org/pdf/2405.15673v1 | 2024-05-24T16:12:39Z | 2024-05-24T16:12:39Z | Consistency of Neural Causal Partial Identification | Recent progress in Neural Causal Models (NCMs) showcased how identification and partial identification of causal effects can be automatically carried out via training of neural generative models that respect the constraints encoded in a given causal graph [Xia et al. 2022, Balazadeh et al. 2022]. However, formal consistency of these methods has only been proven for the case of discrete variables or for linear causal models. In this work, we prove consistency of partial identification via NCMs in a general setting with both continuous and categorical variables. Further, our results highlight the impact of the design of the underlying neural network architecture in terms of depth and connectivity as well as the importance of applying Lipschitz regularization in the training phase. In particular, we provide a counterexample showing that without Lipschitz regularization the NCM may not be asymptotically consistent. Our results are enabled by new results on the approximability of structural causal models via neural generative models, together with an analysis of the sample complexity of the resulting architectures and how that translates into an error in the constrained optimization problem that defines the partial identification bounds. | [
"['Jiyuan Tan' 'Jose Blanchet' 'Vasilis Syrgkanis']"
] |
null | null | 2405.15676 | null | null | http://arxiv.org/pdf/2405.15676v1 | 2024-05-24T16:17:01Z | 2024-05-24T16:17:01Z | Taming Score-Based Diffusion Priors for Infinite-Dimensional Nonlinear
Inverse Problems | This work introduces a sampling method capable of solving Bayesian inverse problems in function space. It does not assume the log-concavity of the likelihood, meaning that it is compatible with nonlinear inverse problems. The method leverages the recently defined infinite-dimensional score-based diffusion models as a learning-based prior, while enabling provable posterior sampling through a Langevin-type MCMC algorithm defined on function spaces. A novel convergence analysis is conducted, inspired by the fixed-point methods established for traditional regularization-by-denoising algorithms and compatible with weighted annealing. The obtained convergence bound explicitly depends on the approximation error of the score; a well-approximated score is essential to obtain a well-approximated posterior. Stylized and PDE-based examples are provided, demonstrating the validity of our convergence analysis. We conclude by presenting a discussion of the method's challenges related to learning the score and computational complexity. | [
"['Lorenzo Baldassari' 'Ali Siahkoohi' 'Josselin Garnier' 'Knut Solna'\n 'Maarten V. de Hoop']"
] |
null | null | 2405.15682 | null | null | http://arxiv.org/pdf/2405.15682v2 | 2024-05-30T21:50:15Z | 2024-05-24T16:20:46Z | The Road Less Scheduled | Existing learning rate schedules that do not require specification of the optimization stopping step T are greatly outperformed by learning rate schedules that depend on T. We propose an approach that avoids the need for this stopping time by eschewing the use of schedules entirely, while exhibiting state-of-the-art performance compared to schedules across a wide family of problems ranging from convex problems to large-scale deep learning problems. Our Schedule-Free approach introduces no additional hyper-parameters over standard optimizers with momentum. Our method is a direct consequence of a new theory we develop that unifies scheduling and iterate averaging. An open source implementation of our method is available (https://github.com/facebookresearch/schedule_free). | [
"['Aaron Defazio' 'Xingyu' 'Yang' 'Harsh Mehta' 'Konstantin Mishchenko'\n 'Ahmed Khaled' 'Ashok Cutkosky']"
] |
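
The unification of scheduling and iterate averaging behind the Schedule-Free approach above (2405.15682) can be conveyed with a simplified sketch: gradients are evaluated at an interpolation of the fast iterate and its running average, so no stopping time T or decay schedule appears. This is a heavily simplified reading of the abstract, not the reference implementation linked in the paper.

```python
def schedule_free_sgd(grad_fn, w0, lr=0.01, beta=0.9, steps=1000):
    """Simplified sketch of schedule-free SGD.

    z is the base SGD iterate, x its uniform running average, and the
    gradient is taken at y, an interpolation of the two. No learning
    rate schedule or stopping step T is needed anywhere.
    """
    z = x = w0
    for t in range(1, steps + 1):
        y = (1 - beta) * z + beta * x   # gradient-evaluation point
        z = z - lr * grad_fn(y)         # base optimizer step
        x = x + (z - x) / t             # uniform (schedule-free) averaging
    return x                            # the averaged iterate is returned
```
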
null | null | 2405.15687 | null | null | http://arxiv.org/pdf/2405.15687v1 | 2024-05-24T16:26:56Z | 2024-05-24T16:26:56Z | Chain-of-Thought Prompting for Demographic Inference with Large
Multimodal Models | Conventional demographic inference methods have predominantly operated under the supervision of accurately labeled data, yet struggle to adapt to shifting social landscapes and diverse cultural contexts, leading to narrow specialization and limited accuracy in applications. Recently, the emergence of large multimodal models (LMMs) has shown transformative potential across various research tasks, such as visual comprehension and description. In this study, we explore the application of LMMs to demographic inference and introduce a benchmark for both quantitative and qualitative evaluation. Our findings indicate that LMMs possess advantages in zero-shot learning, interpretability, and handling uncurated 'in-the-wild' inputs, albeit with a propensity for off-target predictions. To enhance LMM performance and achieve comparability with supervised learning baselines, we propose a Chain-of-Thought augmented prompting approach, which effectively mitigates the off-target prediction issue. | [
"['Yongsheng Yu' 'Jiebo Luo']"
] |
null | null | 2405.15699 | null | null | http://arxiv.org/pdf/2405.15699v1 | 2024-05-24T16:43:26Z | 2024-05-24T16:43:26Z | Dimension-free deterministic equivalents for random feature regression | In this work we investigate the generalization performance of random feature ridge regression (RFRR). Our main contribution is a general deterministic equivalent for the test error of RFRR. Specifically, under a certain concentration property, we show that the test error is well approximated by a closed-form expression that only depends on the feature map eigenvalues. Notably, our approximation guarantee is non-asymptotic, multiplicative, and independent of the feature map dimension -- allowing for infinite-dimensional features. We expect this deterministic equivalent to hold broadly beyond our theoretical analysis, and we empirically validate its predictions on various real and synthetic datasets. As an application, we derive sharp excess error rates under standard power-law assumptions of the spectrum and target decay. In particular, we provide a tight result for the smallest number of features achieving optimal minimax error rate. | [
"['Leonardo Defilippis' 'Bruno Loureiro' 'Theodor Misiakiewicz']"
] |
null | null | 2405.15706 | null | null | http://arxiv.org/pdf/2405.15706v2 | 2024-05-28T14:17:51Z | 2024-05-24T16:52:09Z | The Impact of Geometric Complexity on Neural Collapse in Transfer
Learning | Many of the recent remarkable advances in computer vision and language models can be attributed to the success of transfer learning via the pre-training of large foundation models. However, a theoretical framework which explains this empirical success is incomplete and remains an active area of research. Flatness of the loss surface and neural collapse have recently emerged as useful pre-training metrics which shed light on the implicit biases underlying pre-training. In this paper, we explore the geometric complexity of a model's learned representations as a fundamental mechanism that relates these two concepts. We show through experiments and theory that mechanisms which affect the geometric complexity of the pre-trained network also influence the neural collapse. Furthermore, we show how this effect of the geometric complexity generalizes to the neural collapse of new classes as well, thus encouraging better performance on downstream tasks, particularly in the few-shot setting. | [
"['Michael Munn' 'Benoit Dherin' 'Javier Gonzalvo']"
] |
null | null | 2405.15709 | null | null | http://arxiv.org/pdf/2405.15709v1 | 2024-05-24T16:59:29Z | 2024-05-24T16:59:29Z | Information-theoretic Generalization Analysis for Expected Calibration
Error | While the expected calibration error (ECE), which employs binning, is widely adopted to evaluate the calibration performance of machine learning models, theoretical understanding of its estimation bias is limited. In this paper, we present the first comprehensive analysis of the estimation bias in the two common binning strategies, uniform mass and uniform width binning. Our analysis establishes upper bounds on the bias, achieving an improved convergence rate. Moreover, our bounds reveal, for the first time, the optimal number of bins to minimize the estimation bias. We further extend our bias analysis to generalization error analysis based on the information-theoretic approach, deriving upper bounds that enable the numerical evaluation of how small the ECE is for unknown data. Experiments using deep learning models show that our bounds are nonvacuous thanks to this information-theoretic generalization analysis approach. | [
"['Futoshi Futami' 'Masahiro Fujisawa']"
] |
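
For concreteness alongside 2405.15709 above, the binned ECE estimator under uniform-width binning can be computed as below; uniform-mass binning would instead place an equal number of samples in each bin. The estimator itself is standard, and its bias as a function of the number of bins is what the paper bounds.

```python
import numpy as np

def ece_uniform_width(confidences, correct, num_bins=15):
    """Expected calibration error with uniform-width binning.

    Averages |empirical accuracy - mean confidence| over bins, weighted
    by the fraction of samples falling in each bin.
    """
    edges = np.linspace(0.0, 1.0, num_bins + 1)
    n, ece = len(confidences), 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += (mask.sum() / n) * gap
    return ece
```
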
null | null | 2405.15712 | null | null | http://arxiv.org/pdf/2405.15712v1 | 2024-05-24T17:01:37Z | 2024-05-24T17:01:37Z | Infinite Limits of Multi-head Transformer Dynamics | In this work, we analyze various scaling limits of the training dynamics of transformer models in the feature learning regime. We identify the set of parameterizations that admit well-defined infinite width and depth limits, allowing the attention layers to update throughout training--a relevant notion of feature learning in these models. We then use tools from dynamical mean field theory (DMFT) to analyze various infinite limits (infinite key/query dimension, infinite heads, and infinite depth) which have different statistical descriptions depending on which infinite limit is taken and how attention layers are scaled. We provide numerical evidence of convergence to the limits and discuss how the parameterization qualitatively influences learned features. | [
"['Blake Bordelon' 'Hamza Tahir Chaudhry' 'Cengiz Pehlevan']"
] |
null | null | 2405.15719 | null | null | http://arxiv.org/pdf/2405.15719v1 | 2024-05-24T17:06:51Z | 2024-05-24T17:06:51Z | Hierarchical Uncertainty Exploration via Feedforward Posterior Trees | When solving ill-posed inverse problems, one often desires to explore the space of potential solutions rather than be presented with a single plausible reconstruction. Valuable insights into these feasible solutions and their associated probabilities are embedded in the posterior distribution. However, when confronted with data of high dimensionality (such as images), visualizing this distribution becomes a formidable challenge, necessitating the application of effective summarization techniques before user examination. In this work, we introduce a new approach for visualizing posteriors across multiple levels of granularity using tree-valued predictions. Our method predicts a tree-valued hierarchical summarization of the posterior distribution for any input measurement, in a single forward pass of a neural network. We showcase the efficacy of our approach across diverse datasets and image restoration challenges, highlighting its prowess in uncertainty quantification and visualization. Our findings reveal that our method performs comparably to a baseline that hierarchically clusters samples from a diffusion-based posterior sampler, yet achieves this with orders of magnitude greater speed. | [
"['Elias Nehme' 'Rotem Mulayoff' 'Tomer Michaeli']"
] |
null | null | 2405.15722 | null | null | http://arxiv.org/pdf/2405.15722v2 | 2024-06-07T21:00:05Z | 2024-05-24T17:10:08Z | Models That Prove Their Own Correctness | How can we trust the correctness of a learned model on a particular input of interest? Model accuracy is typically measured *on average* over a distribution of inputs, giving no guarantee for any fixed input. This paper proposes a theoretically-founded solution to this problem: to train *Self-Proving models* that prove the correctness of their output to a verification algorithm $V$ via an Interactive Proof. Self-Proving models satisfy that, with high probability over a random input, the model generates a correct output *and* successfully proves its correctness to $V\!$. The *soundness* property of $V$ guarantees that, for *every* input, no model can convince $V$ of the correctness of an incorrect output. Thus, a Self-Proving model proves correctness of most of its outputs, while *all* incorrect outputs (of any model) are detected by $V$. We devise a generic method for learning Self-Proving models, and we prove convergence bounds under certain assumptions. The theoretical framework and results are complemented by experiments on an arithmetic capability: computing the greatest common divisor (GCD) of two integers. Our learning method is used to train a Self-Proving transformer that computes the GCD *and* proves the correctness of its answer. | [
"['Noga Amit' 'Shafi Goldwasser' 'Orr Paradise' 'Guy Rothblum']"
] |
null | null | 2405.15723 | null | null | http://arxiv.org/pdf/2405.15723v1 | 2024-05-24T17:11:27Z | 2024-05-24T17:11:27Z | Bisimulation Learning | We introduce a data-driven approach to computing finite bisimulations for state transition systems with very large, possibly infinite state space. Our novel technique computes stutter-insensitive bisimulations of deterministic systems, which we characterize as the problem of learning a state classifier together with a ranking function for each class. Our procedure learns a candidate state classifier and candidate ranking functions from a finite dataset of sample states; then, it checks whether these generalise to the entire state space using satisfiability modulo theories (SMT) solving. Upon an affirmative answer, the procedure concludes that the classifier constitutes a valid stutter-insensitive bisimulation of the system. Upon a negative answer, the solver produces a counterexample state for which the classifier violates the claim, adds it to the dataset, and repeats learning and checking in a counterexample-guided inductive synthesis loop until a valid bisimulation is found. We demonstrate on a range of benchmarks from reactive verification and software model checking that our method yields faster verification results than alternative state-of-the-art tools in practice. Our method produces succinct abstractions that enable an effective verification of linear temporal logic without the next operator, and are interpretable for system diagnostics. | [
"['Alessandro Abate' 'Mirco Giacobbe' 'Yannik Schnitzer']"
] |
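
The counterexample-guided loop described in 2405.15723 above has a standard CEGIS shape, sketched here with `learn` and `smt_check` as placeholder components we assume: the first fits a candidate classifier and ranking functions to the finite sample set, the second either certifies them over the whole state space or returns a violating state.

```python
def bisimulation_learning(samples, learn, smt_check, max_iters=1000):
    """CEGIS loop for learning a stutter-insensitive bisimulation.

    Alternates learning from samples with an SMT validity check; each
    counterexample state is added to the dataset and learning repeats
    until the candidate generalizes to the entire state space.
    """
    for _ in range(max_iters):
        classifier, rankings = learn(samples)
        counterexample = smt_check(classifier, rankings)
        if counterexample is None:
            return classifier, rankings       # valid bisimulation found
        samples.append(counterexample)        # refine the dataset and retry
    raise RuntimeError("iteration budget exhausted without a valid bisimulation")
```
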
null | null | 2405.15727 | null | null | http://arxiv.org/pdf/2405.15727v1 | 2024-05-24T17:17:34Z | 2024-05-24T17:17:34Z | Anomalous Change Point Detection Using Probabilistic Predictive Coding | Change point detection (CPD) and anomaly detection (AD) are essential techniques in various fields to identify abrupt changes or abnormal data instances. However, existing methods are often constrained to univariate data, face scalability challenges with large datasets due to computational demands, and experience reduced performance with high-dimensional or intricate data, as well as hidden anomalies. Furthermore, they often lack interpretability and adaptability to domain-specific knowledge, which limits their versatility across different fields. In this work, we propose a deep learning-based CPD/AD method called Probabilistic Predictive Coding (PPC) that jointly learns to encode sequential data to low dimensional latent space representations and to predict the subsequent data representations as well as the corresponding prediction uncertainties. The model parameters are optimized with maximum likelihood estimation by comparing these predictions with the true encodings. At the time of application, the true and predicted encodings are used to determine the probability of conformity, an interpretable and meaningful anomaly score. Furthermore, our approach has linear time complexity, scalability issues are prevented, and the method can easily be adjusted to a wide range of data types and intricate applications. We demonstrate the effectiveness and adaptability of our proposed method across synthetic time series experiments, image data, and real-world magnetic resonance spectroscopic imaging data. | [
"['Roelof G. Hup' 'Julian P. Merkofer' 'Alex A. Bhogal'\n 'Ruud J. G. van Sloun' 'Reinder Haakma' 'Rik Vullings']"
] |
null | null | 2405.15729 | null | null | http://arxiv.org/pdf/2405.15729v2 | 2024-06-10T21:58:24Z | 2024-05-24T17:19:03Z | Optimizing Large Language Models for OpenAPI Code Completion | Recent advancements in Large Language Models (LLMs) and their utilization in code generation tasks have significantly reshaped the field of software development. Despite the remarkable efficacy of code completion solutions in mainstream programming languages, their performance lags when applied to less ubiquitous formats such as OpenAPI definitions. This study evaluates the OpenAPI completion performance of GitHub Copilot, a prevalent commercial code completion tool, and proposes a set of task-specific optimizations leveraging Meta's open-source model Code Llama. A semantics-aware OpenAPI completion benchmark proposed in this research is used to perform a series of experiments through which the impact of various prompt-engineering and fine-tuning techniques on the Code Llama model's performance is analyzed. The fine-tuned Code Llama model reaches a peak correctness improvement of 55.2% over GitHub Copilot despite utilizing 25 times fewer parameters than the commercial solution's underlying Codex model. Additionally, this research proposes an enhancement to a widely used code infilling training technique, addressing the issue of underperformance when the model is prompted with context sizes smaller than those used during training. The dataset, the benchmark, and the model fine-tuning code are made publicly available. | [
"['Bohdan Petryshyn' 'Mantas Lukoševičius']"
] |
null | null | 2405.15731 | null | null | http://arxiv.org/pdf/2405.15731v2 | 2024-06-03T18:18:33Z | 2024-05-24T17:19:57Z | Understanding the differences in Foundation Models: Attention, State
Space Models, and Recurrent Neural Networks | Softmax attention is the principal backbone of foundation models for various artificial intelligence applications, yet its quadratic complexity in sequence length can limit its inference throughput in long-context settings. To address this challenge, alternative architectures such as linear attention, State Space Models (SSMs), and Recurrent Neural Networks (RNNs) have been considered as more efficient alternatives. While connections between these approaches exist, such models are commonly developed in isolation and there is a lack of theoretical understanding of the shared principles underpinning these architectures and their subtle differences, greatly influencing performance and scalability. In this paper, we introduce the Dynamical Systems Framework (DSF), which allows a principled investigation of all these architectures in a common representation. Our framework facilitates rigorous comparisons, providing new insights on the distinctive characteristics of each model class. For instance, we compare linear attention and selective SSMs, detailing their differences and conditions under which both are equivalent. We also provide principled comparisons between softmax attention and other model classes, discussing the theoretical conditions under which softmax attention can be approximated. Additionally, we substantiate these new insights with empirical validations and mathematical arguments. This shows the DSF's potential to guide the systematic development of future more efficient and scalable foundation models. | [
"['Jerome Sieber' 'Carmen Amo Alonso' 'Alexandre Didier'\n 'Melanie N. Zeilinger' 'Antonio Orvieto']"
] |
null | null | 2405.15732 | null | null | http://arxiv.org/pdf/2405.15732v1 | 2024-05-24T17:20:18Z | 2024-05-24T17:20:18Z | Neural Persistence Dynamics | We consider the problem of learning the dynamics in the topology of time-evolving point clouds, the prevalent spatiotemporal model for systems exhibiting collective behavior, such as swarms of insects and birds or particles in physics. In such systems, patterns emerge from (local) interactions among self-propelled entities. While several well-understood governing equations for motion and interaction exist, they are difficult to fit to data due to the often large number of entities and missing correspondences between the observation times, which may also not be equidistant. To evade such confounding factors, we investigate collective behavior from a \textit{topological perspective}, but instead of summarizing entire observation sequences (as in prior work), we propose learning a latent dynamical model from topological features \textit{per time point}. The latter is then used to formulate a downstream regression task to predict the parametrization of some a priori specified governing equation. We implement this idea based on a latent ODE learned from vectorized (static) persistence diagrams and show that this modeling choice is justified by a combination of recent stability results for persistent homology. Various (ablation) experiments not only demonstrate the relevance of each individual model component, but provide compelling empirical evidence that our proposed model -- \textit{neural persistence dynamics} -- substantially outperforms the state-of-the-art across a diverse set of parameter regression tasks. | [
"['Sebastian Zeng' 'Florian Graf' 'Martin Uray' 'Stefan Huber'\n 'Roland Kwitt']"
] |
null | null | 2405.15739 | null | null | http://arxiv.org/pdf/2405.15739v2 | 2024-05-29T12:50:49Z | 2024-05-24T17:34:32Z | Large Language Models Reflect Human Citation Patterns with a Heightened
Citation Bias | Citation practices are crucial in shaping the structure of scientific knowledge, yet they are often influenced by contemporary norms and biases. The emergence of Large Language Models (LLMs) like GPT-4 introduces a new dynamic to these practices. Interestingly, the characteristics and potential biases of references recommended by LLMs that entirely rely on their parametric knowledge, and not on search or retrieval-augmented generation, remain unexplored. Here, we analyze these characteristics in an experiment using a dataset of 166 papers from AAAI, NeurIPS, ICML, and ICLR, published after GPT-4's knowledge cut-off date, encompassing 3,066 references in total. In our experiment, GPT-4 was tasked with suggesting scholarly references for the anonymized in-text citations within these papers. Our findings reveal a remarkable similarity between human and LLM citation patterns, but with a more pronounced high citation bias in GPT-4, which persists even after controlling for publication year, title length, number of authors, and venue. Additionally, we observe a large consistency between the characteristics of GPT-4's existing and non-existent generated references, indicating the model's internalization of citation patterns. By analyzing citation graphs, we show that the references recommended by GPT-4 are embedded in the relevant citation context, suggesting an even deeper conceptual internalization of the citation networks. While LLMs can aid in citation generation, they may also amplify existing biases and introduce new ones, potentially skewing scientific knowledge dissemination. Our results underscore the need for identifying the model's biases and for developing balanced methods to interact with LLMs in general. | [
"['Andres Algaba' 'Carmen Mazijn' 'Vincent Holst' 'Floriano Tori'\n 'Sylvia Wenmackers' 'Vincent Ginis']"
] |
null | null | 2405.15743 | null | null | http://arxiv.org/pdf/2405.15743v1 | 2024-05-24T17:39:26Z | 2024-05-24T17:39:26Z | Sparse maximal update parameterization: A holistic approach to sparse
training dynamics | Several challenges make it difficult for sparse neural networks to compete with dense models. First, setting a large fraction of weights to zero impairs forward and gradient signal propagation. Second, sparse studies often need to test multiple sparsity levels, while also introducing new hyperparameters (HPs), leading to prohibitive tuning costs. Indeed, the standard practice is to re-use the learning HPs originally crafted for dense models. Unfortunately, we show sparse and dense networks do not share the same optimal HPs. Without stable dynamics and effective training recipes, it is costly to test sparsity at scale, which is key to surpassing dense networks and making the business case for sparsity acceleration in hardware. A holistic approach is needed to tackle these challenges and we propose S$\mu$Par as one such approach. S$\mu$Par ensures activations, gradients, and weight updates all scale independently of sparsity level. Further, by reparameterizing the HPs, S$\mu$Par enables the same HP values to be optimal as we vary both sparsity level and model width. HPs can be tuned on small dense networks and transferred to large sparse models, greatly reducing tuning costs. On large-scale language modeling, S$\mu$Par training improves loss by up to 8.2% over the common approach of using the dense model standard parameterization. | [
"['Nolan Dey' 'Shane Bergsma' 'Joel Hestness']"
] |
null | null | 2405.15744 | null | null | http://arxiv.org/pdf/2405.15744v1 | 2024-05-24T17:41:30Z | 2024-05-24T17:41:30Z | CAFe: Cost and Age aware Federated Learning | In many federated learning (FL) models, a common strategy employed to ensure progress in the training process is to wait for at least $M$ clients out of the total $N$ clients to send back their local gradients based on a reporting deadline $T$, once the parameter server (PS) has broadcasted the global model. If not enough clients report back within the deadline, the particular round is considered to be a failed round and the training round is restarted from scratch. If enough clients have responded back, the round is deemed successful and the local gradients of all the clients that responded back are used to update the global model. In either case, the clients that failed to report back an update within the deadline would have wasted their computational resources. Having a tighter deadline (small $T$) and waiting for a larger number of participating clients (large $M$) leads to a large number of failed rounds and therefore greater communication cost and computation resource wastage. However, having a larger $T$ leads to longer round durations whereas smaller $M$ may lead to noisy gradients. Therefore, there is a need to optimize the parameters $M$ and $T$ such that communication cost and resource wastage are minimized while having an acceptable convergence rate. In this regard, we show that the average age of a client at the PS appears explicitly in the theoretical convergence bound, and therefore, can be used as a metric to quantify the convergence of the global model. We provide an analytical scheme to select the parameters $M$ and $T$ in this setting. | [
"['Sahan Liyanaarachchi' 'Kanchana Thilakarathna' 'Sennur Ulukus']"
] |
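
The round mechanics that CAFe (2405.15744 above) optimizes over $M$ and $T$ are easy to make concrete. In the simulation below, the exponential response-time distribution and all parameter values are purely illustrative assumptions.

```python
import random

def fl_round(num_clients, M, T, mean_response=1.0):
    """Simulate one FL round with reporting deadline T and threshold M.

    The round fails if fewer than M clients report back within T;
    response times are drawn from an exponential distribution here
    only for illustration.
    """
    times = [random.expovariate(1.0 / mean_response) for _ in range(num_clients)]
    reporters = [i for i, t in enumerate(times) if t <= T]
    return len(reporters) >= M, reporters

# The trade-off the paper analyzes: tight T with large M means many failed
# rounds (wasted computation), loose T means long rounds, and small M means
# noisier aggregated gradients.
success, reporters = fl_round(num_clients=100, M=60, T=1.2)
```
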
null | null | 2405.15750 | null | null | http://arxiv.org/pdf/2405.15750v1 | 2024-05-24T17:47:20Z | 2024-05-24T17:47:20Z | Filtered Corpus Training (FiCT) Shows that Language Models can
Generalize from Indirect Evidence | This paper introduces Filtered Corpus Training, a method that trains language models (LMs) on corpora with certain linguistic constructions filtered out from the training data, and uses it to measure the ability of LMs to perform linguistic generalization on the basis of indirect evidence. We apply the method to both LSTM and Transformer LMs (of roughly comparable size), developing filtered corpora that target a wide range of linguistic phenomena. Our results show that while transformers are better qua LMs (as measured by perplexity), both models perform equally and surprisingly well on linguistic generalization measures, suggesting that they are capable of generalizing from indirect evidence. | [
"['Abhinav Patil' 'Jaap Jumelet' 'Yu Ying Chiu' 'Andy Lapastora'\n 'Peter Shen' 'Lexie Wang' 'Clevis Willrich' 'Shane Steinert-Threlkeld']"
] |
null | null | 2405.15754 | null | null | http://arxiv.org/pdf/2405.15754v1 | 2024-05-24T17:50:17Z | 2024-05-24T17:50:17Z | Score-based generative models are provably robust: an uncertainty
quantification perspective | Through an uncertainty quantification (UQ) perspective, we show that score-based generative models (SGMs) are provably robust to the multiple sources of error in practical implementation. Our primary tool is the Wasserstein uncertainty propagation (WUP) theorem, a model-form UQ bound that describes how the $L^2$ error from learning the score function propagates to a Wasserstein-1 ($\mathbf{d}_1$) ball around the true data distribution under the evolution of the Fokker-Planck equation. We show how errors due to (a) finite sample approximation, (b) early stopping, (c) score-matching objective choice, (d) score function parametrization expressiveness, and (e) reference distribution choice, impact the quality of the generative model in terms of a $\mathbf{d}_1$ bound of computable quantities. The WUP theorem relies on Bernstein estimates for Hamilton-Jacobi-Bellman partial differential equations (PDE) and the regularizing properties of diffusion processes. Specifically, PDE regularity theory shows that stochasticity is the key mechanism ensuring SGM algorithms are provably robust. The WUP theorem applies to integral probability metrics beyond $\mathbf{d}_1$, such as the total variation distance and the maximum mean discrepancy. Sample complexity and generalization bounds in $\mathbf{d}_1$ follow directly from the WUP theorem. Our approach requires minimal assumptions, is agnostic to the manifold hypothesis and avoids absolute continuity assumptions for the target distribution. Additionally, our results clarify the trade-offs among multiple error sources in SGMs. | [
"['Nikiforos Mimikos-Stamatopoulos' 'Benjamin J. Zhang'\n 'Markos A. Katsoulakis']"
] |
null | null | 2405.15756 | null | null | http://arxiv.org/pdf/2405.15756v2 | 2024-06-24T22:14:42Z | 2024-05-24T17:51:39Z | Sparse Expansion and Neuronal Disentanglement | We show how to improve the inference efficiency of an LLM by expanding it into a mixture of sparse experts, where each expert is a copy of the original weights, one-shot pruned for a specific cluster of input values. We call this approach $\textit{Sparse Expansion}$. We show that, for models such as Llama 2 70B, as we increase the number of sparse experts, Sparse Expansion outperforms all other one-shot sparsification approaches for the same inference FLOP budget per token, and that this gap grows as sparsity increases, leading to inference speedups. But why? To answer this, we provide strong evidence that the mixture of sparse experts is effectively $\textit{disentangling}$ the input-output relationship of every individual neuron across clusters of inputs. Specifically, sparse experts approximate the dense neuron output distribution with fewer weights by decomposing the distribution into a collection of simpler ones, each with a separate sparse dot product covering it. Interestingly, we show that the Wasserstein distance between a neuron's output distribution and a Gaussian distribution is an indicator of its entanglement level and contribution to the accuracy of the model. Every layer of an LLM has a fraction of highly entangled Wasserstein neurons, and model performance suffers more when these are sparsified as opposed to others. The code for Sparse Expansion is available at: https://github.com/Shavit-Lab/Sparse-Expansion . | [
"['Shashata Sawmya' 'Linghao Kong' 'Ilia Markov' 'Dan Alistarh'\n 'Nir Shavit']"
] |
null | null | 2405.15765 | null | null | http://arxiv.org/pdf/2405.15765v1 | 2024-05-24T17:58:38Z | 2024-05-24T17:58:38Z | Scaling Laws for Discriminative Classification in Large Language Models | Modern large language models (LLMs) represent a paradigm shift in what can plausibly be expected of machine learning models. The fact that LLMs can effectively generate sensible answers to a diverse range of queries suggests that they would be useful in customer support applications. While powerful, LLMs have been observed to be prone to hallucination, which unfortunately makes their near-term use in customer support applications challenging. To address this issue we present a system that allows us to use an LLM to augment our customer support advocates by re-framing the language modeling task as a discriminative classification task. In this framing, we seek to present the top-K best template responses for a customer support advocate to use when responding to a customer. We present the results of both offline and online experiments where we observed offline gains and statistically significant online lifts for our experimental system. Along the way, we present observed scaling curves for validation loss and top-K accuracy, resulting from model parameter ablation studies. We close by discussing the space of trade-offs with respect to model size, latency, and accuracy, and by suggesting future applications to explore. | [
"['Dean Wyatte' 'Fatemeh Tahmasbi' 'Ming Li' 'Thomas Markovich']"
] |
null | null | 2405.15767 | null | null | http://arxiv.org/pdf/2405.15767v2 | 2024-06-14T13:20:06Z | 2024-05-24T17:59:06Z | Improved Particle Approximation Error for Mean Field Neural Networks | Mean-field Langevin dynamics (MFLD) minimizes an entropy-regularized nonlinear convex functional defined over the space of probability distributions. MFLD has gained attention due to its connection with noisy gradient descent for mean-field two-layer neural networks. Unlike standard Langevin dynamics, the nonlinearity of the objective functional induces particle interactions, necessitating multiple particles to approximate the dynamics in a finite-particle setting. Recent works (Chen et al., 2022; Suzuki et al., 2023b) have demonstrated the uniform-in-time propagation of chaos for MFLD, showing that the gap between the particle system and its mean-field limit uniformly shrinks over time as the number of particles increases. In this work, we improve the dependence on logarithmic Sobolev inequality (LSI) constants in their particle approximation errors, which can exponentially deteriorate with the regularization coefficient. Specifically, we establish an LSI-constant-free particle approximation error concerning the objective gap by leveraging the problem structure in risk minimization. As the application, we demonstrate improved convergence of MFLD, sampling guarantee for the mean-field stationary distribution, and uniform-in-time Wasserstein propagation of chaos in terms of particle complexity. | [
"['Atsushi Nitanda']"
] |
null | null | 2405.15768 | null | null | http://arxiv.org/pdf/2405.15768v1 | 2024-05-24T17:59:21Z | 2024-05-24T17:59:21Z | Canonical Variates in Wasserstein Metric Space | In this paper, we address the classification of instances each characterized not by a singular point, but by a distribution on a vector space. We employ the Wasserstein metric to measure distances between distributions, which are then used by distance-based classification algorithms such as k-nearest neighbors, k-means, and pseudo-mixture modeling. Central to our investigation is dimension reduction within the Wasserstein metric space to enhance classification accuracy. We introduce a novel approach grounded in the principle of maximizing Fisher's ratio, defined as the quotient of between-class variation to within-class variation. The directions in which this ratio is maximized are termed discriminant coordinates or canonical variates axes. In practice, we define both between-class and within-class variations as the average squared distances between pairs of instances, with the pairs either belonging to the same class or to different classes. This ratio optimization is achieved through an iterative algorithm, which alternates between optimal transport and maximization steps within the vector space. We conduct empirical studies to assess the algorithm's convergence and, through experimental validation, demonstrate that our dimension reduction technique substantially enhances classification performance. Moreover, our method outperforms well-established algorithms that operate on vector representations derived from distributional data. It also exhibits robustness against variations in the distributional representations of data clouds. | [
"['Jia Li' 'Lin Lin']"
] |
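
The between-class and within-class variations that define Fisher's ratio in 2405.15768 above can be evaluated from pairwise Wasserstein distances, for instance with the POT library. The sketch assumes each instance is an empirical point cloud in R^d with uniform weights and only evaluates the ratio; the paper additionally maximizes it over projection directions via alternating optimal transport and maximization steps.

```python
import numpy as np
import ot  # POT: Python Optimal Transport

def w2_squared(X, Y):
    """Squared 2-Wasserstein distance between two empirical point clouds."""
    cost = ot.dist(X, Y)  # pairwise squared Euclidean costs
    return ot.emd2(ot.unif(len(X)), ot.unif(len(Y)), cost)

def fisher_ratio(instances, labels):
    """Average between-class over average within-class squared distance."""
    within, between = [], []
    for i in range(len(instances)):
        for j in range(i + 1, len(instances)):
            d = w2_squared(instances[i], instances[j])
            (within if labels[i] == labels[j] else between).append(d)
    return np.mean(between) / np.mean(within)
```
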
null | null | 2405.15771 | null | null | http://arxiv.org/pdf/2405.15771v1 | 2024-03-13T17:47:39Z | 2024-03-13T17:47:39Z | Adaptive Splitting of Reusable Temporal Monitors for Rare Traffic
Violations | Autonomous Vehicles (AVs) are often tested in simulation to estimate the probability they will violate safety specifications. Two common issues arise when using existing techniques to produce this estimation: If violations occur rarely, simple Monte-Carlo sampling techniques can fail to produce efficient estimates; if simulation horizons are too long, importance sampling techniques (which learn proposal distributions from past simulations) can fail to converge. This paper addresses both issues by interleaving rare-event sampling techniques with online specification monitoring algorithms. We use adaptive multi-level splitting to decompose simulations into partial trajectories, then calculate the distance of those partial trajectories to failure by leveraging robustness metrics from Signal Temporal Logic (STL). By caching those partial robustness metric values, we can efficiently re-use computations across multiple sampling stages. Our experiments on an interstate lane-change scenario show our method is viable for testing simulated AV-pipelines, efficiently estimating failure probabilities for STL specifications based on real traffic rules. We produce better estimates than Monte-Carlo and importance sampling in fewer simulations. | [
"['Craig Innes' 'Subramanian Ramamoorthy']"
] |
null | null | 2405.15773 | null | null | http://arxiv.org/pdf/2405.15773v1 | 2024-03-16T07:34:33Z | 2024-03-16T07:34:33Z | Feature Aggregation with Latent Generative Replay for Federated
Continual Learning of Socially Appropriate Robot Behaviours | For widespread real-world applications, it is beneficial for robots to explore Federated Learning (FL) settings where several robots, deployed in parallel, can learn independently while also sharing their learning with each other. This work explores a simulated living room environment where robots need to learn the social appropriateness of their actions. We propose Federated Root (FedRoot), a novel weight aggregation strategy which disentangles feature learning across clients from individual task-based learning. Adapting popular FL strategies to use FedRoot instead, we present a novel FL benchmark for learning the social appropriateness of different robot actions in diverse social configurations. FedRoot-based methods offer competitive performance compared to others while offering sizeable (up to 86% for CPU usage and up to 72% for GPU usage) reduction in resource consumption. Furthermore, real-world interactions require social robots to dynamically adapt to changing environmental and task settings. To facilitate this, we propose Federated Latent Generative Replay (FedLGR), a novel Federated Continual Learning (FCL) strategy that uses FedRoot-based weight aggregation and embeds each client with a generator model for pseudo-rehearsal of learnt feature embeddings to mitigate forgetting in a resource-efficient manner. Our benchmark results demonstrate that FedRoot-based FCL methods outperform other methods while also offering sizeable (up to 84% for CPU usage and up to 92% for GPU usage) reduction in resource consumption, with FedLGR providing the best results across evaluations. | [
"['Nikhil Churamani' 'Saksham Checker' 'Hao-Tien Lewis Chiang'\n 'Hatice Gunes']"
] |
null | null | 2405.15778 | null | null | http://arxiv.org/pdf/2405.15778v1 | 2024-04-03T15:11:53Z | 2024-04-03T15:11:53Z | Investigation of Energy-efficient AI Model Architectures and Compression
Techniques for "Green" Fetal Brain Segmentation | Artificial intelligence have contributed to advancements across various industries. However, the rapid growth of artificial intelligence technologies also raises concerns about their environmental impact, due to associated carbon footprints to train computational models. Fetal brain segmentation in medical imaging is challenging due to the small size of the fetal brain and the limited image quality of fast 2D sequences. Deep neural networks are a promising method to overcome this challenge. In this context, the construction of larger models requires extensive data and computing power, leading to high energy consumption. Our study aims to explore model architectures and compression techniques that promote energy efficiency by optimizing the trade-off between accuracy and energy consumption through various strategies such as lightweight network design, architecture search, and optimized distributed training tools. We have identified several effective strategies including optimization of data loading, modern optimizers, distributed training strategy implementation, and reduced floating point operations precision usage with light model architectures while tuning parameters according to available computer resources. Our findings demonstrate that these methods lead to satisfactory model performance with low energy consumption during deep neural network training for medical image segmentation. | [
"['Szymon Mazurek' 'Monika Pytlarz' 'Sylwia Malec' 'Alessandro Crimi']"
] |
null | null | 2405.15780 | null | null | http://arxiv.org/pdf/2405.15780v1 | 2024-04-17T19:57:07Z | 2024-04-17T19:57:07Z | Sequence Length Scaling in Vision Transformers for Scientific Images on
Frontier | Vision Transformers (ViTs) are pivotal for foundational models in scientific imagery, including Earth science applications, due to their capability to process large sequence lengths. While transformers for text have inspired scaling sequence lengths in ViTs, adapting them introduces unique challenges. We develop distributed sequence parallelism for ViTs, enabling them to handle up to 1M tokens. Our approach, leveraging DeepSpeed-Ulysses and Long-Sequence-Segmentation with model sharding, is the first to apply sequence parallelism in ViT training, achieving a 94% batch scaling efficiency on 2,048 AMD-MI250X GPUs. Evaluating sequence parallelism in ViTs, particularly in models up to 10B parameters, highlighted substantial bottlenecks. We countered these with hybrid sequence, pipeline, tensor parallelism, and flash attention strategies to scale beyond single GPU memory limits. Our method significantly enhances climate modeling accuracy by 20% in temperature predictions, marking the first training of a transformer model on a full-attention matrix over 188K sequence length. | [
"['Aristeidis Tsaris' 'Chengming Zhang' 'Xiao Wang' 'Junqi Yin' 'Siyan Liu'\n 'Moetasim Ashfaq' 'Ming Fan' 'Jong Youl Choi' 'Mohamed Wahib' 'Dan Lu'\n 'Prasanna Balaprakash' 'Feiyi Wang']"
] |
null | null | 2405.15788 | null | null | http://arxiv.org/pdf/2405.15788v1 | 2024-05-03T01:53:17Z | 2024-05-03T01:53:17Z | Towards Fairness in Provably Communication-Efficient Federated
Recommender Systems | To reduce the communication overhead caused by parallel training of multiple clients, various federated learning (FL) techniques use random client sampling. Nonetheless, ensuring the efficacy of random sampling and determining the optimal number of clients to sample in federated recommender systems (FRSs) remains challenging due to the isolated nature of each user as a separate client. This challenge is exacerbated in models where public and private features can be separated, and FL allows communication of only public features (item gradients). In this study, we establish sample complexity bounds that dictate the ideal number of clients required for improved communication efficiency and retained accuracy in such models. In line with our theoretical findings, we empirically demonstrate that RS-FairFRS reduces communication cost (~47%). Second, we demonstrate the presence of class imbalance among clients that raises a substantial equity concern for FRSs. Unlike centralized machine learning, clients in FRS cannot share raw data, including sensitive attributes. For this, we introduce RS-FairFRS, the first fairness-under-unawareness FRS, built upon random-sampling-based FL. While random sampling improves communication efficiency, we propose a novel two-phase dual-fair update technique to achieve fairness without revealing protected attributes of active clients participating in training. Our results on real-world datasets and different sensitive features illustrate a significant reduction in demographic bias (approximately 40%), offering a promising path to achieving fairness and communication efficiency in FRSs without compromising the overall accuracy of FRS. | [
"['Kirandeep Kaur' 'Sujit Gujar' 'Shweta Jain']"
] |
null | null | 2405.15789 | null | null | http://arxiv.org/pdf/2405.15789v1 | 2024-05-03T19:21:47Z | 2024-05-03T19:21:47Z | Semantic Objective Functions: A distribution-aware method for adding
logical constraints in deep learning | Issues of safety, explainability, and efficiency are of increasing concern in learning systems deployed with hard and soft constraints. Symbolic Constrained Learning and Knowledge Distillation techniques have shown promising results in this area, by embedding and extracting knowledge, as well as providing logical constraints during neural network training. Although many frameworks exist to date, through an integration of logic and information geometry, we provide a construction and theoretical framework for these tasks that generalizes many approaches. We propose a loss-based method that embeds knowledge, i.e., enforces logical constraints, into a machine learning model that outputs probability distributions. This is done by constructing a distribution from the external knowledge/logic formula and constructing a loss function as a linear combination of the original loss function with the Fisher-Rao distance or Kullback-Leibler divergence to the constraint distribution. This construction includes logical constraints in the form of propositional formulas (Boolean variables), formulas of a first-order language with finite variables over a model with compact domain (categorical and continuous variables), and is, in general, likely applicable to any statistical model that was pretrained with semantic information. We evaluate our method on a variety of learning tasks, including classification tasks with logic constraints, transferring knowledge from logic formulas, and knowledge distillation from general distributions. | [
"['Miguel Angel Mendez-Lucero' 'Enrique Bojorquez Gallardo' 'Vaishak Belle']"
] |
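
The loss construction in 2405.15789 above reduces to a linear combination that is straightforward to sketch in PyTorch. Here `constraint_dist`, the distribution built from the external knowledge or logic formula, is assumed given; the paper also considers the Fisher-Rao distance in place of the KL divergence used below.

```python
import torch
import torch.nn.functional as F

def semantic_objective(logits, targets, constraint_dist, lam=0.5):
    """Original task loss plus a divergence to the constraint distribution.

    The penalty is KL(constraint || model), following PyTorch's kl_div
    convention of log-probabilities as input and probabilities as target.
    """
    task_loss = F.cross_entropy(logits, targets)
    log_probs = F.log_softmax(logits, dim=-1)
    penalty = F.kl_div(log_probs, constraint_dist, reduction="batchmean")
    return task_loss + lam * penalty
```
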
null | null | 2405.15793 | null | null | http://arxiv.org/pdf/2405.15793v2 | 2024-05-30T19:09:01Z | 2024-05-06T17:41:33Z | SWE-agent: Agent-Computer Interfaces Enable Automated Software
Engineering | Language model (LM) agents are increasingly being used to automate complicated tasks in digital environments. Just as humans benefit from powerful software applications, such as integrated development environments, for complex tasks like software engineering, we posit that LM agents represent a new category of end users with their own needs and abilities, and would benefit from specially-built interfaces to the software they use. We investigate how interface design affects the performance of language model agents. As a result of this exploration, we introduce SWE-agent: a system that facilitates LM agents to autonomously use computers to solve software engineering tasks. SWE-agent's custom agent-computer interface (ACI) significantly enhances an agent's ability to create and edit code files, navigate entire repositories, and execute tests and other programs. We evaluate SWE-agent on SWE-bench and HumanEvalFix, achieving state-of-the-art performance on both with a pass@1 rate of 12.5% and 87.7%, respectively, far exceeding the previous state-of-the-art achieved with non-interactive LMs. Finally, we provide insight on how the design of the ACI can impact agents' behavior and performance. | [
"['John Yang' 'Carlos E. Jimenez' 'Alexander Wettig' 'Kilian Lieret'\n 'Shunyu Yao' 'Karthik Narasimhan' 'Ofir Press']"
] |
null | null | 2405.15805 | null | null | http://arxiv.org/pdf/2405.15805v1 | 2024-05-19T23:35:06Z | 2024-05-19T23:35:06Z | DSAM: A Deep Learning Framework for Analyzing Temporal and Spatial
Dynamics in Brain Networks | Resting-state functional magnetic resonance imaging (rs-fMRI) is a noninvasive technique pivotal for understanding human neural mechanisms of intricate cognitive processes. Most rs-fMRI studies compute a single static functional connectivity matrix across brain regions of interest, or dynamic functional connectivity matrices with a sliding window approach. These approaches are at risk of oversimplifying brain dynamics and lack proper consideration of the goal at hand. While deep learning has gained substantial popularity for modeling complex relational data, its application to uncovering the spatiotemporal dynamics of the brain is still limited. We propose a novel interpretable deep learning framework that learns a goal-specific functional connectivity matrix directly from time series and employs a specialized graph neural network for the final classification. Our model, DSAM, leverages temporal causal convolutional networks to capture the temporal dynamics in both low- and high-level feature representations, a temporal attention unit to identify important time points, a self-attention unit to construct the goal-specific connectivity matrix, and a novel graph neural network variant to capture the spatial dynamics for downstream classification. To validate our approach, we conducted experiments on the Human Connectome Project dataset with 1075 samples to build and interpret the model for the classification of sex group, and the Adolescent Brain Cognitive Development Dataset with 8520 samples for independent testing. Comparing our proposed framework with other state-of-the-art models, the results suggest that this novel approach goes beyond the assumption of a fixed connectivity matrix and provides evidence of goal-specific brain connectivity patterns, which opens up the potential to gain deeper insights into how the human brain adapts its functional connectivity specific to the task at hand. | [
"['Bishal Thapaliya' 'Robyn Miller' 'Jiayu Chen' 'Yu-Ping Wang'\n 'Esra Akbas' 'Ram Sapkota' 'Bhaskar Ray' 'Pranav Suresh'\n 'Santosh Ghimire' 'Vince Calhoun' 'Jingyu Liu']"
] |
null | null | 2405.15815 | null | null | http://arxiv.org/abs/2405.15815v1 | 2024-05-22T15:38:10Z | 2024-05-22T15:38:10Z | A social path to human-like artificial intelligence | Traditionally, cognitive and computer scientists have viewed intelligence solipsistically, as a property of unitary agents devoid of social context. Given the success of contemporary learning algorithms, we argue that the bottleneck in artificial intelligence (AI) progress is shifting from data assimilation to novel data generation. We bring together evidence showing that natural intelligence emerges at multiple scales in networks of interacting agents via collective living, social relationships and major evolutionary transitions, which contribute to novel data generation through mechanisms such as population pressures, arms races, Machiavellian selection, social learning and cumulative culture. Many breakthroughs in AI exploit some of these processes, from multi-agent structures enabling algorithms to master complex games like Capture-The-Flag and StarCraft II, to strategic communication in Diplomacy and the shaping of AI data streams by other AIs. Moving beyond a solipsistic view of agency to integrate these mechanisms suggests a path to human-like compounding innovation through ongoing novel data generation. | [
"['Edgar A. Duéñez-Guzmán' 'Suzanne Sadedin' 'Jane X. Wang'\n 'Kevin R. McKee' 'Joel Z. Leibo']"
] |
null | null | 2405.15816 | null | null | http://arxiv.org/pdf/2405.15816v1 | 2024-05-22T20:49:01Z | 2024-05-22T20:49:01Z | Riemannian Bilevel Optimization | We develop new algorithms for Riemannian bilevel optimization. We focus in particular on batch and stochastic gradient-based methods, with the explicit goal of avoiding second-order information such as Riemannian hyper-gradients. We propose and analyze $\mathrm{RF^2SA}$, a method that leverages first-order gradient information to navigate the complex geometry of Riemannian manifolds efficiently. Notably, $\mathrm{RF^2SA}$ is a single-loop algorithm, and thus easier to implement and use. Under various setups, including stochastic optimization, we provide explicit convergence rates for reaching $\epsilon$-stationary points. We also address the challenge of optimizing over Riemannian manifolds with constraints by adjusting the multiplier in the Lagrangian, ensuring convergence to the desired solution without requiring access to second-order derivatives. | [
"['Sanchayan Dutta' 'Xiang Cheng' 'Suvrit Sra']"
] |
null | null | 2405.15821 | null | null | http://arxiv.org/pdf/2405.15821v1 | 2024-05-23T14:01:44Z | 2024-05-23T14:01:44Z | Reinforcing Language Agents via Policy Optimization with Action
Decomposition | Language models as intelligent agents push the boundaries of sequential decision-making agents but struggle with limited knowledge of environmental dynamics and exponentially large action spaces. Recent efforts like GLAM and TWOSOME manually constrain the action space to a restricted subset and employ reinforcement learning to align agents' knowledge with specific environments. However, they overlook fine-grained credit assignments for intra-action tokens, which is essential for efficient language agent optimization, and rely on humans' prior knowledge to restrict the action space. This paper proposes decomposing language agent optimization from the action level to the token level, offering finer supervision for each intra-action token and manageable optimization complexity in environments with unrestricted action spaces. Beginning with the simplification of flattening all actions, we theoretically explore the discrepancies between action-level optimization and this naive token-level optimization. We then derive the Bellman backup with Action Decomposition (BAD) to integrate credit assignments for both intra-action and inter-action tokens, effectively eliminating the discrepancies. Implementing BAD within the PPO algorithm, we introduce Policy Optimization with Action Decomposition (POAD). POAD benefits from a finer-grained credit assignment process and lower optimization complexity, leading to enhanced learning efficiency and generalization abilities in aligning language agents with interactive environments. We validate POAD across diverse testbeds, with results affirming the advantages of our approach and the correctness of our theoretical analysis. | [
"['Muning Wen' 'Ziyu Wan' 'Weinan Zhang' 'Jun Wang' 'Ying Wen']"
] |
null | null | 2405.15824 | null | null | http://arxiv.org/pdf/2405.15824v1 | 2024-05-23T18:26:55Z | 2024-05-23T18:26:55Z | Efficient Mitigation of Bus Bunching through Setter-Based Curriculum
Learning | Curriculum learning has been growing in the domain of reinforcement learning as a method of improving training efficiency for various tasks. It involves modifying the difficulty (lessons) of the environment as the agent learns, in order to encourage more optimal agent behavior and higher reward states. However, most current curriculum learning methods involve discrete curriculum transitions or steps predefined by the programmer, or apply automatic curriculum learning to only a small part of the training setup, such as the adversary alone. In this paper, we propose a novel approach to curriculum learning that uses a Setter Model to automatically generate an action space, adversary strength, initialization, and bunching strength. Transportation and traffic optimization is a well-known area of study, especially for reinforcement learning based solutions. We specifically look at the bus bunching problem for the context of this study. The main idea of the problem is to minimize the delays caused by inefficient bus timings for passengers arriving and departing from a system of buses. While the heavy exploration in the area makes innovation and improvement with regard to performance marginal, it simultaneously provides an effective baseline for developing new generalized techniques. Our group is particularly interested in examining curriculum learning and its effect on training efficiency and overall performance. We try a lesser-known approach to curriculum learning, in which the curriculum is not fixed or discretely thresholded. Our method for automated curriculum learning involves a curriculum that is dynamically chosen and learned by an adversary network made to increase the difficulty of the agent's training, and defined by multiple forms of input. Our results are shown in the following sections of this paper. | [
"['Avidan Shah' 'Danny Tran' 'Yuhan Tang']"
] |
null | null | 2405.15829 | null | null | http://arxiv.org/pdf/2405.15829v1 | 2024-05-24T02:21:10Z | 2024-05-24T02:21:10Z | Spatio-temporal Value Semantics-based Abstraction for Dense Deep
Reinforcement Learning | Intelligent Cyber-Physical Systems (ICPS) represent a specialized form of Cyber-Physical System (CPS) that incorporates intelligent components, notably Convolutional Neural Networks (CNNs) and Deep Reinforcement Learning (DRL), to undertake multifaceted tasks encompassing perception, decision-making, and control. The utilization of DRL for decision-making facilitates dynamic interaction with the environment, generating control actions aimed at maximizing cumulative rewards. Nevertheless, the inherent uncertainty of the operational environment and the intricate nature of ICPS necessitate exploration within complex and dynamic state spaces during the learning phase. DRL confronts challenges in terms of efficiency, generalization capabilities, and data scarcity during the decision-making process. In response to these challenges, we propose an innovative abstract modeling approach grounded in spatial-temporal value semantics, capturing the evolution in the distribution of semantic value across time and space. A semantics-based abstraction is introduced to construct an abstract Markov Decision Process (MDP) for the DRL learning process. Furthermore, optimization techniques for abstraction are delineated, aiming to refine the abstract model and mitigate semantic gaps between abstract and concrete states. The efficacy of the abstract modeling is assessed through the evaluation and analysis of the abstract MDP model using PRISM. A series of experiments are conducted, involving diverse scenarios such as lane-keeping, adaptive cruise control, and intersection crossroad assistance, to demonstrate the effectiveness of our abstraction approach. | [
"['Jihui Nie' 'Dehui Du' 'Jiangnan Zhao']"
] |
null | null | 2405.15831 | null | null | http://arxiv.org/pdf/2405.15831v1 | 2024-05-24T08:20:53Z | 2024-05-24T08:20:53Z | Transmission Interface Power Flow Adjustment: A Deep Reinforcement
Learning Approach based on Multi-task Attribution Map | Transmission interface power flow adjustment is a critical measure to ensure the secure and economical operation of power systems. However, conventional model-based adjustment schemes are limited by the increasing variations and uncertainties occurring in power systems, where the adjustment problems of different transmission interfaces are often treated as several independent tasks, ignoring their coupling relationship and even leading to conflicting decisions. In this paper, we introduce a novel data-driven deep reinforcement learning (DRL) approach to handle multiple power flow adjustment tasks jointly instead of learning each task from scratch. At the heart of the proposed method is a multi-task attribution map (MAM), which enables the DRL agent to explicitly attribute each transmission interface task to different power system nodes with task-adaptive attention weights. Based on this MAM, the agent can further provide effective strategies to solve the multi-task adjustment problem with a near-optimal operation cost. Simulation results on the IEEE 118-bus system, a realistic 300-bus system in China, and a very large European system with 9241 buses demonstrate that the proposed method significantly improves the performance compared with several baseline methods, and exhibits high interpretability with the learnable MAM. | [
"['Shunyu Liu' 'Wei Luo' 'Yanzhen Zhou' 'Kaixuan Chen' 'Quan Zhang'\n 'Huating Xu' 'Qinglai Guo' 'Mingli Song']"
] |
null | null | 2405.15834 | null | null | http://arxiv.org/pdf/2405.15834v1 | 2024-05-24T09:15:29Z | 2024-05-24T09:15:29Z | A Fisher-Rao gradient flow for entropic mean-field min-max games | Gradient flows play a substantial role in addressing many machine learning problems. We examine the continuous-time convergence of a \textit{Fisher-Rao} (Mean-Field Birth-Death) gradient flow in the context of solving convex-concave min-max games with entropy regularization. We propose appropriate Lyapunov functions to demonstrate convergence with explicit rates to the unique mixed Nash equilibrium. The generic form of such a flow is recalled below. | [
"['Razvan-Andrei Lascu' 'Mateusz B. Majka' 'Łukasz Szpruch']"
] |
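For context on the flow studied in the entry above, the following LaTeX display recalls the generic Fisher-Rao (birth-death) gradient flow of an energy functional $F$ over probability measures. This is the standard textbook form; the paper's coupled min-max dynamics, with entropy regularization and two competing measures, elaborate on it.

```latex
% Generic Fisher-Rao (birth-death) gradient flow of an energy F over
% probability measures: mass is created or destroyed according to how the
% first variation deviates from its mean, which keeps total mass equal to one.
\[
  \partial_t \mu_t
  = -\,\mu_t \left( \frac{\delta F}{\delta \mu}(\mu_t, \cdot)
    - \int \frac{\delta F}{\delta \mu}(\mu_t, x)\, \mathrm{d}\mu_t(x) \right)
\]
```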
null | null | 2405.15840 | null | null | http://arxiv.org/pdf/2405.15840v1 | 2024-05-24T16:03:47Z | 2024-05-24T16:03:47Z | Learning the Language of Protein Structure | Representation learning and \emph{de novo} generation of proteins are pivotal computational biology tasks. Whilst natural language processing (NLP) techniques have proven highly effective for protein sequence modelling, structure modelling presents a complex challenge, primarily due to its continuous and three-dimensional nature. Motivated by this discrepancy, we introduce an approach using a vector-quantized autoencoder that effectively tokenizes protein structures into discrete representations. This method transforms the continuous, complex space of protein structures into a manageable, discrete format with a codebook ranging from 4096 to 64000 tokens, achieving high-fidelity reconstructions with backbone root mean square deviations (RMSD) of approximately 1-5 Å. To demonstrate the efficacy of our learned representations, we show that a simple GPT model trained on our codebooks can generate novel, diverse, and designable protein structures. Our approach not only provides representations of protein structure, but also mitigates the challenges of disparate modal representations and sets a foundation for seamless, multi-modal integration, enhancing the capabilities of computational methods in protein design. | [
"['Benoit Gaujac' 'Jérémie Donà' 'Liviu Copoiu' 'Timothy Atkinson'\n 'Thomas Pierrot' 'Thomas D. Barrett']"
] |
null | null | 2405.15842 | null | null | http://arxiv.org/pdf/2405.15842v1 | 2024-05-24T16:20:04Z | 2024-05-24T16:20:04Z | Model Cascading for Code: Reducing Inference Costs with Model Cascading
for LLM Based Code Generation | The rapid development of large language models (LLMs) has led to significant advancements in code completion tasks. While larger models have higher accuracy, they also cost much more to run. Meanwhile, model cascading has been proven effective to conserve computational resources while enhancing accuracy in LLMs on natural language generation tasks. It generates output with the smallest model in a set, and only queries the larger models when it fails to meet predefined quality criteria. However, this strategy has not been used in code completion tasks, primarily because assessing the quality of code completions differs substantially from assessing natural language: the former relies heavily on functional correctness. To address this, we propose letting each model generate and execute a set of test cases for their solutions, and use the test results as the cascading threshold. We show that our model cascading strategy reduces computational costs while increasing accuracy compared to generating the output with a single model. We also introduce a heuristic to determine the optimal combination of the number of solutions, test cases, and test lines each model should generate, based on the budget. Compared to speculative decoding, our method works on black-box models, having the same level of cost-accuracy trade-off, yet providing many more choices based on the server's budget. Ours is the first work to optimize the cost-accuracy trade-off for LLM code generation with model cascading. A minimal cascading loop is sketched below. | [
"['Boyuan Chen' 'Mingzhi Zhu' 'Brendan Dolan-Gavitt' 'Muhammad Shafique'\n 'Siddharth Garg']"
] |
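As a concrete illustration of the cascading loop described in the entry above, here is a minimal Python sketch. The `generate` callable, the model list, the number of self-generated tests, and the pass-rate threshold are all illustrative assumptions rather than the paper's exact setup, and the `exec`-based test runner stands in for a proper sandbox.

```python
def run_tests(solution: str, tests: list[str]) -> float:
    """Return the fraction of generated test snippets that pass."""
    passed = 0
    for test in tests:
        try:
            exec(solution + "\n" + test, {})  # sandboxing omitted for brevity
            passed += 1
        except Exception:
            pass
    return passed / max(len(tests), 1)

def cascade(prompt, models, generate, threshold=0.8):
    """Query models from cheapest to most expensive, stopping as soon as a
    candidate solution passes enough of its own generated tests."""
    solution = ""
    for model in models:
        solution = generate(model, prompt, kind="solution")
        tests = [generate(model, prompt, kind="test") for _ in range(3)]
        if run_tests(solution, tests) >= threshold:
            return solution, model  # a cheaper model was good enough
    return solution, models[-1]     # all failed: keep the largest model's output
```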
null | null | 2405.15861 | null | null | http://arxiv.org/pdf/2405.15861v2 | 2024-06-24T04:52:25Z | 2024-05-24T18:07:05Z | Achieving Dimension-Free Communication in Federated Learning via
Zeroth-Order Optimization | Federated Learning (FL) offers a promising framework for collaborative and privacy-preserving machine learning across distributed data sources. However, the substantial communication costs associated with FL pose a significant challenge to its efficiency. Specifically, in each communication round, the communication costs scale linearly with the model's dimension, which presents a formidable obstacle, especially in large model scenarios. Despite various communication-efficient strategies, the intrinsic dimension-dependent communication cost remains a major bottleneck for current FL implementations. In this paper, we introduce a novel dimension-free communication strategy for FL, leveraging zeroth-order optimization techniques. We propose a new algorithm, FedDisco, which facilitates the transmission of only a constant number of scalar values between clients and the server in each communication round, thereby reducing the communication cost from $\mathscr{O}(d)$ to $\mathscr{O}(1)$, where $d$ is the dimension of the model parameters. Theoretically, for non-convex functions, we prove that our algorithm achieves state-of-the-art rates, showing a linear speedup in the number of clients and local steps under standard assumptions, and a dimension-free rate for low effective rank scenarios. Empirical evaluations through classic deep learning training and large language model fine-tuning substantiate significant reductions in communication overhead compared to traditional FL approaches. Our code is available at https://github.com/ZidongLiu/FedDisco. A toy sketch of scalar-only communication follows this entry. | [
"['Zhe Li' 'Bicheng Ying' 'Zidong Liu' 'Haibo Yang']"
] |
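To make the dimension-free claim above concrete, here is a toy Python sketch in which each client sends a single scalar per round. Regenerating the same random direction from a shared seed is one standard way to achieve this and is an assumption of this sketch, as are all names, the toy losses, and the step sizes.

```python
import numpy as np

def client_scalar(loss, w, seed, mu=1e-3):
    """Client: two-point zeroth-order estimate of the directional derivative
    along a direction regenerated from a shared seed. Only this one scalar
    needs to be transmitted, regardless of the model dimension."""
    u = np.random.default_rng(seed).standard_normal(w.shape)
    return (loss(w + mu * u) - loss(w - mu * u)) / (2 * mu)

def server_round(w, client_losses, seed, lr=0.1):
    """Server: average the received scalars, rebuild the direction from the
    same seed, and take a gradient-like step. Per-client communication is
    O(1) instead of O(d)."""
    g = np.mean([client_scalar(f, w, seed) for f in client_losses])
    u = np.random.default_rng(seed).standard_normal(w.shape)
    return w - lr * g * u

# Toy usage: three clients holding quadratic losses around different targets.
targets = [np.full(10, c) for c in (1.0, 2.0, 3.0)]
losses = [lambda w, t=t: 0.5 * np.sum((w - t) ** 2) for t in targets]
w = np.zeros(10)
for r in range(3000):
    w = server_round(w, losses, seed=r)
print(w.round(2))  # drifts toward the average target, roughly 2.0
```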
null | null | 2405.15868 | null | null | http://arxiv.org/pdf/2405.15868v1 | 2024-05-24T18:24:24Z | 2024-05-24T18:24:24Z | LLS: Local Learning Rule for Deep Neural Networks Inspired by Neural
Activity Synchronization | Training deep neural networks (DNNs) using traditional backpropagation (BP) presents challenges in terms of computational complexity and energy consumption, particularly for on-device learning where computational resources are limited. Various alternatives to BP, including random feedback alignment, forward-forward, and local classifiers, have been explored to address these challenges. These methods have their advantages, but they can encounter difficulties when dealing with intricate visual tasks or demand considerable computational resources. In this paper, we propose a novel Local Learning rule inspired by neural activity Synchronization phenomena (LLS) observed in the brain. LLS utilizes fixed periodic basis vectors to synchronize neuron activity within each layer, enabling efficient training without the need for additional trainable parameters. We demonstrate the effectiveness of LLS and its variations, LLS-M and LLS-MxM, on multiple image classification datasets, achieving accuracy comparable to BP with reduced computational complexity and minimal additional parameters. Furthermore, the performance of LLS on the Visual Wake Word (VWW) dataset highlights its suitability for on-device learning tasks, making it a promising candidate for edge hardware implementations. | [
"['Marco Paul E. Apolinario' 'Arani Roy' 'Kaushik Roy']"
] |
null | null | 2405.15871 | null | null | http://arxiv.org/pdf/2405.15871v1 | 2024-05-24T18:33:18Z | 2024-05-24T18:33:18Z | CausalConceptTS: Causal Attributions for Time Series Classification
using High Fidelity Diffusion Models | Despite the excellent performance of machine learning models, understanding their decisions remains a long-standing goal. While commonly used attribution methods in explainable AI attempt to address this issue, they typically rely on associational rather than causal relationships. In this study, within the context of time series classification, we introduce a novel framework to assess the causal effect of concepts, i.e., predefined segments within a time series, on specific classification outcomes. To achieve this, we leverage state-of-the-art diffusion-based generative models to estimate counterfactual outcomes. Our approach compares these causal attributions with closely related associational attributions, both theoretically and empirically. We demonstrate the insights gained by our approach for a diverse set of qualitatively different time series classification tasks. Although causal and associational attributions might often share some similarities, in all cases they differ in important details, underscoring the risks associated with drawing causal conclusions from associational data alone. We believe that the proposed approach is widely applicable also in other domains, particularly where predefined segmentations are available, to shed some light on the limits of associational attributions. | [
"['Juan Miguel Lopez Alcaraz' 'Nils Strodthoff']"
] |
null | null | 2405.15877 | null | null | http://arxiv.org/pdf/2405.15877v1 | 2024-05-24T18:40:20Z | 2024-05-24T18:40:20Z | Basis Selection: Low-Rank Decomposition of Pretrained Large Language
Models for Target Applications | Large language models (LLMs) significantly enhance the performance of various applications, but they are computationally intensive and energy-demanding. This makes it challenging to deploy them on devices with limited resources, such as personal computers and mobile/wearable devices, and results in substantial inference costs in resource-rich environments like cloud servers. To extend the use of LLMs, we introduce a low-rank decomposition approach to effectively compress these models, tailored to the requirements of specific applications. We observe that LLMs pretrained on general datasets contain many redundant components not needed for particular applications. Our method focuses on identifying and removing these redundant parts, retaining only the necessary elements for the target applications. Specifically, we represent the weight matrices of LLMs as a linear combination of base components. We then prune the irrelevant bases and enhance the model with new bases beneficial for specific applications. Deep compression results on the Llama 2-7B and -13B models, conducted on target applications including mathematical reasoning and code generation, show that our method significantly reduces model size while maintaining comparable accuracy to state-of-the-art low-rank compression techniques. A rank-truncation sketch follows this entry. | [
"['Yang Li' 'Changsheng Zhao' 'Hyungtak Lee' 'Ernie Chang' 'Yangyang Shi'\n 'Vikas Chandra']"
] |
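The following Python sketch illustrates the general shape of the idea above: pruning the rank-1 bases of a weight matrix. Scoring bases purely by singular value is a simplifying assumption; the paper instead uses task-specific importance and also adds new bases, steps elided here.

```python
import numpy as np

def prune_bases(W: np.ndarray, keep: int) -> np.ndarray:
    """Represent W as a sum of rank-1 bases (via SVD) and keep only the
    `keep` most important ones. A task-aware importance score computed on
    application data would replace the plain singular-value criterion."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    return (U[:, :keep] * S[:keep]) @ Vt[:keep, :]

W = np.random.randn(512, 512)
W_small = prune_bases(W, keep=64)  # store two thin factors instead of W
err = np.linalg.norm(W - W_small) / np.linalg.norm(W)
print(f"relative reconstruction error: {err:.3f}")
```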
null | null | 2405.15881 | null | null | http://arxiv.org/pdf/2405.15881v1 | 2024-05-24T18:50:27Z | 2024-05-24T18:50:27Z | Scaling Diffusion Mamba with Bidirectional SSMs for Efficient Image and
Video Generation | In recent developments, the Mamba architecture, known for its selective state space approach, has shown potential in the efficient modeling of long sequences. However, its application in image generation remains underexplored. Traditional diffusion transformers (DiT), which utilize self-attention blocks, are effective but their computational complexity scales quadratically with the input length, limiting their use for high-resolution images. To address this challenge, we introduce a novel diffusion architecture, Diffusion Mamba (DiM), which foregoes traditional attention mechanisms in favor of a scalable alternative. By harnessing the inherent efficiency of the Mamba architecture, DiM achieves rapid inference times and reduced computational load, maintaining linear complexity with respect to sequence length. Our architecture not only scales effectively but also outperforms existing diffusion transformers in both image and video generation tasks. The results affirm the scalability and efficiency of DiM, establishing a new benchmark for image and video generation techniques. This work advances the field of generative models and paves the way for further applications of scalable architectures. | [
"['Shentong Mo' 'Yapeng Tian']"
] |
null | null | 2405.15882 | null | null | http://arxiv.org/pdf/2405.15882v1 | 2024-05-24T18:53:28Z | 2024-05-24T18:53:28Z | Risk Factor Identification In Osteoporosis Using Unsupervised Machine
Learning Techniques | In this study, the reliability of identified risk factors associated with osteoporosis is investigated using a new clustering-based method on electronic medical records. This study proposes a new CLustering Iterations Framework (CLIF), an iterative clustering framework in which each of three components can be adapted: clustering, feature selection, and principal feature identification. The study proposes using the Wasserstein distance to identify principal features, borrowing concepts from optimal transport theory. The study also suggests using a combination of ANOVA and ablation tests to select influential features from a data set. Some risk factors presented in existing works are endorsed by our identified significant clusters, while the reliability of some other risk factors is weakened. A Wasserstein-based feature-ranking sketch follows this entry. | [
"['Mikayla Calitis']"
] |
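Here is a small Python sketch of ranking features by how strongly their within-cluster distributions differ, measured with the 1-Wasserstein distance as the entry above suggests. The cluster labels and the pairwise-sum aggregation are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def rank_principal_features(X: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Score each feature by the total 1-Wasserstein distance between its
    per-cluster distributions; return indices from most to least
    discriminative."""
    scores = []
    for j in range(X.shape[1]):
        per_cluster = [X[labels == c, j] for c in np.unique(labels)]
        scores.append(sum(
            wasserstein_distance(a, b)
            for i, a in enumerate(per_cluster)
            for b in per_cluster[i + 1:]
        ))
    return np.argsort(scores)[::-1]

# Toy usage: feature 0 separates the two clusters, feature 1 is pure noise.
rng = np.random.default_rng(0)
labels = np.repeat([0, 1], 100)
X = np.column_stack([labels + 0.1 * rng.standard_normal(200),
                     rng.standard_normal(200)])
print(rank_principal_features(X, labels))  # feature 0 ranked first
```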
null | null | 2405.15885 | null | null | http://arxiv.org/pdf/2405.15885v1 | 2024-05-24T19:08:30Z | 2024-05-24T19:08:30Z | Diffusion Bridge Implicit Models | Denoising diffusion bridge models (DDBMs) are a powerful variant of diffusion models for interpolating between two arbitrary paired distributions given as endpoints. Despite their promising performance in tasks like image translation, DDBMs require a computationally intensive sampling process that involves the simulation of a (stochastic) differential equation through hundreds of network evaluations. In this work, we present diffusion bridge implicit models (DBIMs) for accelerated sampling of diffusion bridges without extra training. We generalize DDBMs via a class of non-Markovian diffusion bridges defined on the discretized timesteps concerning sampling, which share the same training objective as DDBMs. These generalized diffusion bridges give rise to generative processes ranging from stochastic to deterministic (i.e., an implicit probabilistic model) while being up to 25$\times$ faster than the vanilla sampler of DDBMs. Moreover, the deterministic sampling procedure yielded by DBIMs enables faithful encoding and reconstruction by a booting noise used in the initial sampling step, and allows us to perform semantically meaningful interpolation in image translation tasks by regarding the booting noise as the latent variable. | [
"['Kaiwen Zheng' 'Guande He' 'Jianfei Chen' 'Fan Bao' 'Jun Zhu']"
] |
null | null | 2405.15891 | null | null | http://arxiv.org/pdf/2405.15891v2 | 2024-06-13T17:56:53Z | 2024-05-24T19:22:09Z | Score Distillation via Reparametrized DDIM | While 2D diffusion models generate realistic, high-detail images, 3D shape generation methods like Score Distillation Sampling (SDS) built on these 2D diffusion models produce cartoon-like, over-smoothed shapes. To help explain this discrepancy, we show that the image guidance used in Score Distillation can be understood as the velocity field of a 2D denoising generative process, up to the choice of a noise term. In particular, after a change of variables, SDS resembles a high-variance version of Denoising Diffusion Implicit Models (DDIM) with a differently-sampled noise term: SDS introduces noise i.i.d. randomly at each step, while DDIM infers it from the previous noise predictions. This excessive variance can lead to over-smoothing and unrealistic outputs. We show that a better noise approximation can be recovered by inverting DDIM in each SDS update step. This modification makes SDS's generative process for 2D images almost identical to DDIM. In 3D, it removes over-smoothing, preserves higher-frequency detail, and brings the generation quality closer to that of 2D samplers. Experimentally, our method achieves better or similar 3D generation quality compared to other state-of-the-art Score Distillation methods, all without training additional neural networks or requiring multi-view supervision, and providing useful insights into the relationship between 2D and 3D asset generation with diffusion models. | [
"['Artem Lukoianov' 'Haitz Sáez de Ocáriz Borde' 'Kristjan Greenewald'\n 'Vitor Campagnolo Guizilini' 'Timur Bagautdinov' 'Vincent Sitzmann'\n 'Justin Solomon']"
] |
null | null | 2405.15894 | null | null | http://arxiv.org/pdf/2405.15894v1 | 2024-05-24T19:32:48Z | 2024-05-24T19:32:48Z | Derivatives of Stochastic Gradient Descent | We consider stochastic optimization problems where the objective depends on some parameter, as commonly found in hyperparameter optimization for instance. We investigate the behavior of the derivatives of the iterates of Stochastic Gradient Descent (SGD) with respect to that parameter and show that they are driven by an inexact SGD recursion on a different objective function, perturbed by the convergence of the original SGD. This enables us to establish that the derivatives of SGD converge to the derivative of the solution mapping in terms of mean squared error whenever the objective is strongly convex. Specifically, we demonstrate that with constant step-sizes, these derivatives stabilize within a noise ball centered at the solution derivative, and that with vanishing step-sizes they exhibit $O(\log(k)^2 / k)$ convergence rates. Additionally, we prove exponential convergence in the interpolation regime. Our theoretical findings are illustrated by numerical experiments on synthetic tasks. A toy forward-mode recursion is sketched below. | [
"['Franck Iutzeler' 'Edouard Pauwels' 'Samuel Vaiter']"
] |
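A toy Python sketch of the coupled recursion discussed in the entry above: alongside the SGD iterates we propagate their derivative with respect to a parameter, on an illustrative stochastic quadratic whose solution derivative is known in closed form. The objective and constants are assumptions for demonstration only.

```python
import numpy as np

# Toy objective: f_xi(x; theta) = 0.5 * ||x - theta * xi||^2, with xi random.
# Solution mapping: x*(theta) = theta * E[xi], so dx*/dtheta = E[xi] = (1,...,1).
rng = np.random.default_rng(0)
theta, gamma, d = 1.5, 0.05, 3
x = np.zeros(d)
dx = np.zeros(d)  # derivative of the iterate with respect to theta

for k in range(4000):
    xi = 1.0 + 0.1 * rng.standard_normal(d)
    grad = x - theta * xi        # nabla_x f_xi(x; theta)
    # Differentiating the SGD update gives an inexact SGD recursion on dx:
    #   dx <- dx - gamma * (nabla_xx f @ dx + nabla_theta nabla_x f)
    x = x - gamma * grad
    dx = dx - gamma * (dx - xi)  # Hessian is the identity; cross term is -xi

print(dx.round(2))  # stabilizes in a noise ball around (1, 1, 1)
```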
null | null | 2405.15895 | null | null | http://arxiv.org/pdf/2405.15895v1 | 2024-05-24T19:33:05Z | 2024-05-24T19:33:05Z | Predicting the Impact of Model Expansion through the Minima Manifold: A
Loss Landscape Perspective | The optimal model for a given task is often challenging to determine, requiring training multiple models from scratch which becomes prohibitive as dataset and model sizes grow. A more efficient alternative is to reuse smaller pre-trained models by expanding them, however, this is not widely adopted as how this impacts training dynamics remains poorly understood. While prior works have introduced statistics to measure these effects, they remain flawed. To rectify this, we offer a new approach for understanding and quantifying the impact of expansion through the lens of the loss landscape, which has been shown to contain a manifold of linearly connected minima. Building on this new perspective, we propose a metric to study the impact of expansion by estimating the size of the manifold. Experimental results show a clear relationship between gains in performance and manifold size, enabling the comparison of candidate models and presenting a first step towards expanding models more reliably based on geometric properties of the loss landscape. | [
"['Pranshu Malviya' 'Jerry Huang' 'Quentin Fournier' 'Sarath Chandar']"
] |
null | null | 2405.15903 | null | null | http://arxiv.org/pdf/2405.15903v1 | 2024-05-24T19:58:25Z | 2024-05-24T19:58:25Z | UnitNorm: Rethinking Normalization for Transformers in Time Series | Normalization techniques are crucial for enhancing Transformer models' performance and stability in time series analysis tasks, yet traditional methods like batch and layer normalization often lead to issues such as token shift, attention shift, and sparse attention. We propose UnitNorm, a novel approach that scales input vectors by their norms and modulates attention patterns, effectively circumventing these challenges. Grounded in existing normalization frameworks, UnitNorm's effectiveness is demonstrated across diverse time series analysis tasks, including forecasting, classification, and anomaly detection, via a rigorous evaluation on 6 state-of-the-art models and 10 datasets. Notably, UnitNorm shows superior performance, especially in scenarios requiring robust attention mechanisms and contextual comprehension, evidenced by significant improvements of up to a 1.46 decrease in MSE for forecasting and a 4.89% increase in accuracy for classification. This work not only calls for a reevaluation of normalization strategies in time series Transformers but also sets a new direction for enhancing model performance and stability. The source code is available at https://anonymous.4open.science/r/UnitNorm-5B84. A minimal norm-scaling sketch follows this entry. | [
"['Nan Huang' 'Christian Kümmerle' 'Xiang Zhang']"
] |
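A minimal PyTorch sketch of the core operation described above: scaling each token vector by its own norm. The sqrt(d) rescaling factor and the omission of the paper's attention-modulation details are assumptions of this sketch.

```python
import torch

class UnitNormSketch(torch.nn.Module):
    """Scale each token embedding to norm sqrt(d_model). Unlike layer
    normalization there is no mean subtraction, so token statistics are
    not shifted; only the vector length is standardized."""
    def __init__(self, eps: float = 1e-6):
        super().__init__()
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        norm = x.norm(dim=-1, keepdim=True)
        return x * (x.shape[-1] ** 0.5) / (norm + self.eps)

x = torch.randn(2, 16, 64)
y = UnitNormSketch()(x)
print(y.norm(dim=-1).mean())  # close to sqrt(64) = 8
```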
null | null | 2405.15908 | null | null | http://arxiv.org/pdf/2405.15908v1 | 2024-05-24T20:05:12Z | 2024-05-24T20:05:12Z | Knowledge-Informed Auto-Penetration Testing Based on Reinforcement
Learning with Reward Machine | Automated penetration testing (AutoPT) based on reinforcement learning (RL) has proven its ability to improve the efficiency of vulnerability identification in information systems. However, RL-based PT encounters several challenges, including poor sampling efficiency, intricate reward specification, and limited interpretability. To address these issues, we propose a knowledge-informed AutoPT framework called DRLRM-PT, which leverages reward machines (RMs) to encode domain knowledge as guidelines for training a PT policy. In our study, we specifically focus on lateral movement as a PT case study and formulate it as a partially observable Markov decision process (POMDP) guided by RMs. We design two RMs based on the MITRE ATT&CK knowledge base for lateral movement. To solve the POMDP and optimize the PT policy, we employ the deep Q-learning algorithm with RM (DQRM). The experimental results demonstrate that the DQRM agent exhibits higher training efficiency in PT compared to agents without knowledge embedding. Moreover, RMs encoding more detailed domain knowledge demonstrate better PT performance compared to RMs with simpler knowledge. A toy reward machine is sketched below. | [
"['Yuanliang Li' 'Hanzheng Dai' 'Jun Yan']"
] |
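To illustrate what a reward machine is in the setting above: a small finite automaton over high-level events that emits rewards on transitions. The states and events below are invented for illustration; the paper's RMs are built from the MITRE ATT&CK knowledge base for lateral movement.

```python
class RewardMachine:
    """A finite automaton over abstract events; each transition carries a
    reward that shapes the RL agent's otherwise sparse signal."""
    def __init__(self):
        # (state, event) -> (next_state, reward); unknown events self-loop.
        self.delta = {
            ("start", "credentials_found"): ("creds", 0.2),
            ("creds", "host_discovered"):   ("host", 0.3),
            ("host",  "lateral_move_ok"):   ("done", 1.0),
        }
        self.state = "start"

    def step(self, event: str) -> float:
        self.state, reward = self.delta.get((self.state, event),
                                            (self.state, 0.0))
        return reward

rm = RewardMachine()
for e in ["scan", "credentials_found", "host_discovered", "lateral_move_ok"]:
    r = rm.step(e)
    print(f"{e}: state={rm.state}, reward={r}")
```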
null | null | 2405.15911 | null | null | http://arxiv.org/pdf/2405.15911v1 | 2024-05-24T20:10:10Z | 2024-05-24T20:10:10Z | Learning accurate and interpretable decision trees | Decision trees are a popular tool in machine learning and yield easy-to-understand models. Several techniques have been proposed in the literature for learning a decision tree classifier, with different techniques working well for data from different domains. In this work, we develop approaches to design decision tree learning algorithms given repeated access to data from the same domain. We propose novel parameterized classes of node splitting criteria in top-down algorithms, which interpolate between the popular entropy- and Gini impurity-based criteria, and provide theoretical bounds on the number of samples needed to learn the splitting function appropriate for the data at hand. We also study the sample complexity of tuning prior parameters in Bayesian decision tree learning, and extend our results to decision tree regression. We further consider the problem of tuning hyperparameters in pruning the decision tree for classical pruning algorithms including min-cost complexity pruning. We also study the interpretability of the learned decision trees and introduce a data-driven approach for optimizing the explainability versus accuracy trade-off using decision trees. Finally, we demonstrate the significance of our approach on real world datasets by learning data-specific decision trees which are simultaneously more accurate and interpretable. A classical interpolating family is sketched below. | [
"['Maria-Florina Balcan' 'Dravyansh Sharma']"
] |
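One classical family that interpolates between the two criteria named above is the Tsallis entropy: the limit q -> 1 recovers Shannon entropy and q = 2 yields Gini impurity. Whether this matches the paper's exact parameterized classes is an assumption; the sketch below only illustrates the idea of a tunable splitting criterion.

```python
import numpy as np

def tsallis(p, q: float) -> float:
    """Tsallis entropy of a class-probability vector p: equals Shannon
    entropy in the limit q -> 1 and Gini impurity at q = 2."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if abs(q - 1.0) < 1e-9:
        return float(-np.sum(p * np.log(p)))
    return float((1.0 - np.sum(p ** q)) / (q - 1.0))

p = [0.7, 0.3]
print(tsallis(p, 1.0))  # Shannon entropy, ~0.611 nats
print(tsallis(p, 2.0))  # Gini impurity, 1 - (0.49 + 0.09) = 0.42
# A data-driven learner would tune q on repeated problem instances.
```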
null | null | 2405.15912 | null | null | http://arxiv.org/pdf/2405.15912v1 | 2024-05-24T20:15:53Z | 2024-05-24T20:15:53Z | Uncertainty Quantification for Neurosymbolic Programs via Compositional
Conformal Prediction | Machine learning has become an effective tool for automatically annotating unstructured data (e.g., images) with structured labels (e.g., object detections). As a result, a new programming paradigm called neurosymbolic programming has emerged where users write queries against these predicted annotations. However, due to the intrinsic fallibility of machine learning models, these programs currently lack any notion of correctness. In many domains, users may want some kind of conservative guarantee that the results of their queries contain all possibly relevant instances. Conformal prediction has emerged as a promising strategy for quantifying uncertainty in machine learning by modifying models to predict sets of labels instead of individual labels; it provides a probabilistic guarantee that the prediction set contains the true label with high probability. We propose a novel framework for adapting conformal prediction to neurosymbolic programs; our strategy is to represent prediction sets as abstract values in some abstract domain, and then to use abstract interpretation to propagate prediction sets through the program. Our strategy satisfies three key desiderata: (i) correctness (i.e., the program outputs a prediction set that contains the true output with high probability), (ii) compositionality (i.e., we can quantify uncertainty separately for different modules and then compose them together), and (iii) structured values (i.e., we can provide uncertainty quantification for structured values such as lists). When the full program is available ahead of time, we propose an optimization that incorporates conformal prediction at intermediate program points to reduce imprecision in abstract interpretation. We evaluate our approach on programs that take MNIST and MS-COCO images as input, demonstrating that it produces reasonably sized prediction sets while satisfying a coverage guarantee. A split conformal sketch follows this entry. | [
"['Ramya Ramalingam' 'Sangdon Park' 'Osbert Bastani']"
] |
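A minimal Python sketch of the split conformal step that produces the prediction sets mentioned above; propagating these sets through a program by abstract interpretation is the paper's contribution and is elided here. The score function and coverage level are standard choices, not necessarily the paper's.

```python
import numpy as np

def conformal_threshold(cal_probs, cal_labels, alpha=0.1):
    """Calibrate on held-out data: the nonconformity score is one minus
    the probability assigned to the true label."""
    scores = 1.0 - cal_probs[np.arange(len(cal_labels)), cal_labels]
    k = int(np.ceil((len(scores) + 1) * (1 - alpha)))
    return np.sort(scores)[min(k, len(scores)) - 1]

def prediction_set(probs, tau):
    """All labels whose score is within the calibrated threshold; contains
    the true label with probability at least 1 - alpha."""
    return set(np.flatnonzero(1.0 - probs <= tau).tolist())

rng = np.random.default_rng(0)
cal_probs = rng.dirichlet(np.ones(10), size=500)   # stand-in classifier outputs
cal_labels = rng.integers(0, 10, size=500)
tau = conformal_threshold(cal_probs, cal_labels)
print(prediction_set(rng.dirichlet(np.ones(10)), tau))
```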
null | null | 2405.15913 | null | null | http://arxiv.org/pdf/2405.15913v1 | 2024-05-24T20:19:15Z | 2024-05-24T20:19:15Z | Scaling up the Banded Matrix Factorization Mechanism for Differentially
Private ML | DP-BandMF offers a powerful approach to differentially private machine learning, balancing privacy amplification with noise correlation for optimal noise reduction. However, its scalability has been limited to settings where the number of training iterations is less than $10^4$. In this work, we present techniques that significantly extend DP-BandMF's reach, enabling use in settings with over $10^6$ training iterations. Our enhanced implementation, coupled with extensive experiments, provides clear guidelines on selecting the optimal number of bands. These insights offer practitioners a deeper understanding of DP-BandMF's performance and how to maximize its utility for privacy-preserving machine learning. | [
"['Ryan McKenna']"
] |
null | null | 2405.15920 | null | null | http://arxiv.org/pdf/2405.15920v1 | 2024-05-24T20:30:14Z | 2024-05-24T20:30:14Z | SF-DQN: Provable Knowledge Transfer using Successor Feature for Deep
Reinforcement Learning | This paper studies the transfer reinforcement learning (RL) problem where multiple RL problems have different reward functions but share the same underlying transition dynamics. In this setting, the Q-function of each RL problem (task) can be decomposed into a successor feature (SF) and a reward mapping: the former characterizes the transition dynamics, and the latter characterizes the task-specific reward function. This Q-function decomposition, coupled with a policy improvement operator known as generalized policy improvement (GPI), reduces the sample complexity of finding the optimal Q-function, and thus the SF & GPI framework exhibits promising empirical performance compared to traditional RL methods like Q-learning. However, its theoretical foundations remain largely unestablished, especially when learning the successor features using deep neural networks (SF-DQN). This paper studies the provable knowledge transfer using SF-DQN in transfer RL problems. We establish the first convergence analysis with provable generalization guarantees for SF-DQN with GPI. The theory reveals that SF-DQN with GPI outperforms conventional RL approaches, such as deep Q-network, in terms of both faster convergence rate and better generalization. Numerical experiments on real and synthetic RL tasks support the superior performance of SF-DQN & GPI, aligning with our theoretical findings. A minimal GPI rule is sketched below. | [
"['Shuai Zhang' 'Heshan Devaka Fernando' 'Miao Liu' 'Keerthiram Murugesan'\n 'Songtao Lu' 'Pin-Yu Chen' 'Tianyi Chen' 'Meng Wang']"
] |
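A minimal sketch of the GPI rule analyzed above: each task's Q-function is the inner product of shared successor features with a task-specific reward vector, and the policy acts greedily with respect to the maximum over known tasks. Shapes and data are illustrative assumptions.

```python
import numpy as np

def gpi_action(psi_sa: np.ndarray, task_weights: np.ndarray) -> int:
    """psi_sa: (num_actions, d) successor features at the current state;
    task_weights: (num_tasks, d) reward weights of previously learned
    tasks. Q_i(s, a) = psi(s, a) . w_i; GPI maximizes over tasks, then
    picks the greedy action."""
    q = psi_sa @ task_weights.T           # (num_actions, num_tasks)
    return int(np.argmax(q.max(axis=1)))  # best action under the best task

rng = np.random.default_rng(0)
psi_sa = rng.standard_normal((4, 8))   # 4 actions, 8 feature dimensions
w_tasks = rng.standard_normal((3, 8))  # 3 previously learned source tasks
print(gpi_action(psi_sa, w_tasks))
```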
null | null | 2405.15925 | null | null | http://arxiv.org/pdf/2405.15925v1 | 2024-05-24T20:33:59Z | 2024-05-24T20:33:59Z | MUCM-Net: A Mamba Powered UCM-Net for Skin Lesion Segmentation | Skin lesion segmentation is key for early skin cancer detection. Challenges in automatic segmentation from dermoscopic images include variations in color, texture, and artifacts of indistinct lesion boundaries. Deep learning methods like CNNs and U-Net have shown promise in addressing these issues. To further aid early diagnosis, especially on mobile devices with limited computing power, we present MUCM-Net. This efficient model combines Mamba State-Space Models with our UCM-Net architecture for improved feature learning and segmentation. MUCM-Net's Mamba-UCM Layer is optimized for mobile deployment, offering high accuracy with low computational needs. Tested on ISIC datasets, it outperforms other methods in accuracy and computational efficiency, making it a scalable tool for early detection in settings with limited resources. To facilitate accessibility and further research in mobile health diagnostics and the fight against skin cancer, the MUCM-Net source code is available at https://github.com/chunyuyuan/MUCM-Net | [
"['Chunyu Yuan' 'Dongfang Zhao' 'Sos S. Agaian']"
] |
null | null | 2405.15926 | null | null | http://arxiv.org/pdf/2405.15926v1 | 2024-05-24T20:34:18Z | 2024-05-24T20:34:18Z | Dissecting the Interplay of Attention Paths in a Statistical Mechanics
Theory of Transformers | Despite the remarkable empirical performance of Transformers, their theoretical understanding remains elusive. Here, we consider a deep multi-head self-attention network, that is closely related to Transformers yet analytically tractable. We develop a statistical mechanics theory of Bayesian learning in this model, deriving exact equations for the network's predictor statistics under the finite-width thermodynamic limit, i.e., $N,P\rightarrow\infty$, $P/N=\mathcal{O}(1)$, where $N$ is the network width and $P$ is the number of training examples. Our theory shows that the predictor statistics are expressed as a sum of independent kernels, each one pairing different 'attention paths', defined as information pathways through different attention heads across layers. The kernels are weighted according to a 'task-relevant kernel combination' mechanism that aligns the total kernel with the task labels. As a consequence, this interplay between attention paths enhances generalization performance. Experiments confirm our findings on both synthetic and real-world sequence classification tasks. Finally, our theory explicitly relates the kernel combination mechanism to properties of the learned weights, allowing for a qualitative transfer of its insights to models trained via gradient descent. As an illustration, we demonstrate an efficient size reduction of the network, by pruning those attention heads that are deemed less relevant by our theory. | [
"['Lorenzo Tiberi' 'Francesca Mignacco' 'Kazuki Irie' 'Haim Sompolinsky']"
] |
null | null | 2405.15928 | null | null | http://arxiv.org/pdf/2405.15928v1 | 2024-05-24T20:37:02Z | 2024-05-24T20:37:02Z | PatchProt: Hydrophobic patch prediction using protein foundation models | Hydrophobic patches on protein surfaces play important functional roles in protein-protein and protein-ligand interactions. Large hydrophobic surfaces are also involved in the progression of aggregation diseases. Predicting exposed hydrophobic patches from a protein sequence has been shown to be a difficult task. Fine-tuning foundation models allows for adapting a model to the specific nuances of a new task using a much smaller dataset. Additionally, multi-task deep learning offers a promising solution for addressing data gaps, simultaneously outperforming single-task methods. In this study, we harnessed a recently released leading large language model, ESM-2. Efficient fine-tuning of ESM-2 was achieved by leveraging a recently developed parameter-efficient fine-tuning method. This approach enabled comprehensive training of model layers without excessive parameters and without the need to include a computationally expensive multiple sequence analysis. We explored several related tasks, at local (residue) and global (protein) levels, to improve the representation of the model. As a result, our fine-tuned ESM-2 model, PatchProt, can not only predict hydrophobic patch areas but also outperform existing methods at predicting primary tasks, including secondary structure and surface accessibility predictions. Importantly, our analysis shows that including related local tasks can improve predictions on more difficult global tasks. This research sets a new standard for sequence-based protein property prediction and highlights the remarkable potential of fine-tuning foundation models, enriching the model representation by training over related tasks. | [
"['Dea Gogishvili' 'Emmanuel Minois-Genin' 'Jan van Eck' 'Sanne Abeln']"
] |
null | null | 2405.15934 | null | null | http://arxiv.org/pdf/2405.15934v1 | 2024-05-24T20:47:58Z | 2024-05-24T20:47:58Z | Clustering Survival Data using a Mixture of Non-parametric Experts | Survival analysis aims to predict the timing of future events across various fields, from medical outcomes to customer churn. However, the integration of clustering into survival analysis, particularly for precision medicine, remains underexplored. This study introduces SurvMixClust, a novel algorithm for survival analysis that integrates clustering with survival function prediction within a unified framework. SurvMixClust learns latent representations for clustering while also predicting individual survival functions using a mixture of non-parametric experts. Our evaluations on five public datasets show that SurvMixClust creates balanced clusters with distinct survival curves, outperforms clustering baselines, and competes with non-clustering survival models in predictive accuracy, as measured by the time-dependent c-index and log-rank metrics. | [
"['Gabriel Buginga' 'Edmundo de Souza e Silva']"
] |
null | null | 2405.15941 | null | null | http://arxiv.org/pdf/2405.15941v1 | 2024-05-24T21:09:19Z | 2024-05-24T21:09:19Z | A Unified Theory of Stochastic Proximal Point Methods without Smoothness | This paper presents a comprehensive analysis of a broad range of variations of the stochastic proximal point method (SPPM). Proximal point methods have attracted considerable interest owing to their numerical stability and robustness against imperfect tuning, a trait not shared by the dominant stochastic gradient descent (SGD) algorithm. A framework of assumptions that we introduce encompasses methods employing techniques such as variance reduction and arbitrary sampling. A cornerstone of our general theoretical approach is a parametric assumption on the iterates, correction and control vectors. We establish a single theorem that ensures linear convergence under this assumption and the $\mu$-strong convexity of the loss function, and without the need to invoke smoothness. This integral theorem reinstates the best known complexity and convergence guarantees for several existing methods, which demonstrates the robustness of our approach. We expand our study by developing three new variants of SPPM, and through numerical experiments we elucidate various properties inherent to them. A closed-form proximal step is sketched below. | [
"['Peter Richtárik' 'Abdurakhmon Sadiev' 'Yury Demidovich']"
] |
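For intuition on the method family analyzed above, here is a Python sketch of a stochastic proximal point step on least squares, where the per-sample prox has a closed form; the data and step size are illustrative. Note the iteration stays stable even for a deliberately large step size, in line with the robustness to tuning mentioned in the abstract.

```python
import numpy as np

# For f_i(x) = 0.5 * (a_i . x - b_i)^2 the proximal step has a closed form:
#   prox_{g f_i}(x) = x - g * (a_i . x - b_i) / (1 + g * ||a_i||^2) * a_i
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 5))
b = A @ np.ones(5) + 0.01 * rng.standard_normal(200)

x, gamma = np.zeros(5), 5.0  # deliberately large step size
for _ in range(20000):
    i = rng.integers(len(b))
    a = A[i]
    x = x - gamma * (a @ x - b[i]) / (1.0 + gamma * a @ a) * a  # exact prox

print(x.round(3))  # close to the ones vector despite the large step size
```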