Dataset columns (name: dtype):
categories: string
doi: string
id: string
year: float64
venue: string
link: string
updated: string
published: string
title: string
abstract: string
authors: sequence
null
null
2406.01257
null
null
http://arxiv.org/pdf/2406.01257v1
2024-06-03T12:14:47Z
2024-06-03T12:14:47Z
What makes unlearning hard and what to do about it
Machine unlearning is the problem of removing the effect of a subset of training data (the "forget set") from a trained model without damaging the model's utility, e.g., to comply with users' requests to delete their data, or to remove mislabeled, poisoned or otherwise problematic data. With unlearning research still in its infancy, many fundamental open questions exist: Are there interpretable characteristics of forget sets that substantially affect the difficulty of the problem? How do these characteristics affect different state-of-the-art algorithms? With this paper, we present the first investigation aiming to answer these questions. We identify two key factors affecting unlearning difficulty and the performance of unlearning algorithms. Evaluation on forget sets that isolate these identified factors reveals previously-unknown behaviours of state-of-the-art algorithms that don't materialize on random forget sets. Based on our insights, we develop a framework coined Refined-Unlearning Meta-algorithm (RUM) that encompasses: (i) refining the forget set into homogenized subsets, according to different characteristics; and (ii) a meta-algorithm that employs existing algorithms to unlearn each subset and finally delivers a model that has unlearned the overall forget set. We find that RUM substantially improves top-performing unlearning algorithms. Overall, we view our work as an important step in (i) deepening our scientific understanding of unlearning and (ii) revealing new pathways to improving the state-of-the-art.
[ "['Kairan Zhao' 'Meghdad Kurmanji' 'George-Octavian Bărbulescu'\n 'Eleni Triantafillou' 'Peter Triantafillou']" ]
null
null
2406.01274
null
null
http://arxiv.org/pdf/2406.01274v2
2024-06-25T18:10:15Z
2024-06-03T12:40:30Z
Expected Grad-CAM: Towards gradient faithfulness
Although input-gradient techniques have evolved to mitigate and tackle the challenges associated with gradients, modern gradient-weighted CAM approaches still rely on vanilla gradients, which are inherently susceptible to the saturation phenomenon. Even though recent enhancements have incorporated counterfactual gradient strategies as a mitigating measure, these local explanation techniques still exhibit a lack of sensitivity to their baseline parameter. Our work proposes a gradient-weighted CAM augmentation that tackles both the saturation and sensitivity problems by reshaping the gradient computation, incorporating two well-established and provable approaches: Expected Gradients and kernel smoothing. By revisiting the original formulation as the smoothed expectation of the perturbed integrated gradients, one can concurrently construct more faithful, localized and robust explanations which minimize infidelity. Through fine modulation of the perturbation distribution, it is possible to regulate the complexity characteristic of the explanation, selectively discriminating stable features. Our technique, Expected Grad-CAM, differently from recent works, exclusively optimizes the gradient computation, purposefully designed as an enhanced substitute of the foundational Grad-CAM algorithm and any method built therefrom. Quantitative and qualitative evaluations have been conducted to assess the effectiveness of our method.
[ "['Vincenzo Buono' 'Peyman Sheikholharam Mashhadi' 'Mahmoud Rahat'\n 'Prayag Tiwari' 'Stefan Byttner']" ]
null
null
2406.01275
null
null
http://arxiv.org/pdf/2406.01275v1
2024-06-03T12:44:55Z
2024-06-03T12:44:55Z
Lifting Factor Graphs with Some Unknown Factors
Lifting exploits symmetries in probabilistic graphical models by using a representative for indistinguishable objects, allowing query answering to be carried out more efficiently while maintaining exact answers. In this paper, we investigate how lifting enables us to perform probabilistic inference for factor graphs containing factors whose potentials are unknown. We introduce the Lifting Factor Graphs with Some Unknown Factors (LIFAGU) algorithm to identify symmetric subgraphs in a factor graph containing unknown factors, thereby enabling the transfer of known potentials to unknown potentials to ensure a well-defined semantics and allow for (lifted) probabilistic inference.
[ "['Malte Luttermann' 'Ralf Möller' 'Marcel Gehrke']" ]
null
null
2406.01278
null
null
http://arxiv.org/pdf/2406.01278v1
2024-06-03T12:47:48Z
2024-06-03T12:47:48Z
fruit-SALAD: A Style Aligned Artwork Dataset to reveal similarity perception in image embeddings
The notion of visual similarity is essential for computer vision, and in applications and studies revolving around vector embeddings of images. However, the scarcity of benchmark datasets poses a significant hurdle in exploring how these models perceive similarity. Here we introduce Style Aligned Artwork Datasets (SALADs), along with an example, the fruit-SALAD, with 10,000 images of fruit depictions. This combined semantic category and style benchmark comprises 100 instances each of 10 easy-to-recognize fruit categories, across 10 easily distinguishable styles. Leveraging a systematic pipeline of generative image synthesis, this visually diverse yet balanced benchmark demonstrates salient differences in semantic category and style similarity weights across various computational models, including machine learning models, feature extraction algorithms, and complexity measures, as well as conceptual models for reference. This meticulously designed dataset offers a controlled and balanced platform for the comparative analysis of similarity perception. The SALAD framework allows the comparison of how these models perform semantic category and style recognition tasks to go beyond the level of anecdotal knowledge, making it robustly quantifiable and qualitatively interpretable.
[ "['Tillmann Ohm' 'Andres Karjus' 'Mikhail Tamm' 'Maximilian Schich']" ]
null
null
2406.01282
null
null
http://arxiv.org/pdf/2406.01282v2
2024-06-07T06:53:43Z
2024-06-03T12:50:58Z
Continuous Geometry-Aware Graph Diffusion via Hyperbolic Neural PDE
While the Hyperbolic Graph Neural Network (HGNN) has recently emerged as a powerful tool for dealing with hierarchical graph data, limitations of scalability and efficiency hinder it from generalizing to deep models. In this paper, by envisioning depth as a continuous-time embedding evolution, we decouple the HGNN and reframe the information propagation as a partial differential equation, letting node-wise attention undertake the role of diffusivity within the Hyperbolic Neural PDE (HPDE). By introducing theoretical principles, e.g., field and flow, gradient, divergence, and diffusivity, on a non-Euclidean manifold for HPDE integration, we discuss both implicit and explicit discretization schemes to formulate numerical HPDE solvers. Further, we propose the Hyperbolic Graph Diffusion Equation (HGDE) -- a flexible vector flow function that can be integrated to obtain expressive hyperbolic node embeddings. By analyzing the potential energy decay of embeddings, we demonstrate that HGDE is capable of modeling both low- and high-order proximity with the benefit of local-global diffusivity functions. Experiments on node classification, link prediction, and image-text classification tasks verify the superiority of the proposed method, which consistently outperforms various competitive models by a significant margin.
[ "['Jiaxu Liu' 'Xinping Yi' 'Sihao Wu' 'Xiangyu Yin' 'Tianle Zhang'\n 'Xiaowei Huang' 'Shi Jin']" ]
null
null
2406.01285
null
null
http://arxiv.org/pdf/2406.01285v1
2024-06-03T12:53:37Z
2024-06-03T12:53:37Z
Large Language Models as Recommender Systems: A Study of Popularity Bias
The issue of popularity bias -- where popular items are disproportionately recommended, overshadowing less popular but potentially relevant items -- remains a significant challenge in recommender systems. Recent advancements have seen the integration of general-purpose Large Language Models (LLMs) into the architecture of such systems. This integration raises concerns that it might exacerbate popularity bias, given that the LLM's training data is likely dominated by popular items. However, it simultaneously presents a novel opportunity to address the bias via prompt tuning. Our study explores this dichotomy, examining whether LLMs contribute to or can alleviate popularity bias in recommender systems. We introduce a principled way to measure popularity bias by discussing existing metrics and proposing a novel metric that fulfills a series of desiderata. Based on our new metric, we compare a simple LLM-based recommender to traditional recommender systems on a movie recommendation task. We find that the LLM recommender exhibits less popularity bias, even without any explicit mitigation.
[ "['Jan Malte Lichtenberg' 'Alexander Buchholz' 'Pola Schwöbel']" ]
null
null
2406.01288
null
null
http://arxiv.org/pdf/2406.01288v1
2024-06-03T12:59:17Z
2024-06-03T12:59:17Z
Improved Few-Shot Jailbreaking Can Circumvent Aligned Language Models and Their Defenses
Recently, Anil et al. (2024) show that many-shot (up to hundreds of) demonstrations can jailbreak state-of-the-art LLMs by exploiting their long-context capability. Nevertheless, is it possible to use few-shot demonstrations to efficiently jailbreak LLMs within limited context sizes? While vanilla few-shot jailbreaking may be inefficient, we propose improved techniques such as injecting special system tokens like [/INST] and employing demo-level random search from a collected demo pool. These simple techniques result in surprisingly effective jailbreaking against aligned LLMs (even with advanced defenses). For example, our method achieves >80% (mostly >95%) ASRs on Llama-2-7B and Llama-3-8B without multiple restarts, even if the models are enhanced by strong defenses such as perplexity detection and/or SmoothLLM, which is challenging for suffix-based jailbreaking. In addition, we conduct comprehensive and elaborate (e.g., making sure to use correct system prompts) evaluations against other aligned LLMs and advanced defenses, where our method consistently achieves nearly 100% ASRs. Our code is available at https://github.com/sail-sg/I-FSJ.
[ "['Xiaosen Zheng' 'Tianyu Pang' 'Chao Du' 'Qian Liu' 'Jing Jiang' 'Min Lin']" ]
null
null
2406.01290
null
null
http://arxiv.org/pdf/2406.01290v2
2024-06-05T06:15:14Z
2024-06-03T13:01:09Z
Resource-constrained Fairness
Access to resources strongly constrains the decisions we make. While we might wish to offer every student a scholarship, or schedule every patient for follow-up meetings with a specialist, limited resources mean that this is not possible. Existing tools for fair machine learning ignore these key constraints, with the majority of methods disregarding any finite resource limitations under which decisions are made. Our research introduces the concept of "resource-constrained fairness" and quantifies the cost of fairness within this framework. We demonstrate that the level of available resources significantly influences this cost, a factor that has been overlooked in previous evaluations.
[ "['Sofie Goethals' 'Eoin Delaney' 'Brent Mittelstadt' 'Chris Russell']" ]
null
null
2406.01315
null
null
http://arxiv.org/pdf/2406.01315v1
2024-06-03T13:27:51Z
2024-06-03T13:27:51Z
Scale-Free Image Keypoints Using Differentiable Persistent Homology
In computer vision, keypoint detection is a fundamental task, with applications spanning from robotics to image retrieval; however, existing learning-based methods suffer from scale dependency and lack flexibility. This paper introduces a novel approach that leverages Morse theory and persistent homology, powerful tools rooted in algebraic topology. We propose a novel loss function based on the recent introduction of a notion of subgradient in persistent homology, paving the way toward topological learning. Our detector, MorseDet, is the first topology-based learning model for feature detection, which achieves competitive performance in keypoint repeatability and introduces a principled and theoretically robust approach to the problem.
[ "['Giovanni Barbarani' 'Francesco Vaccarino' 'Gabriele Trivigno'\n 'Marco Guerra' 'Gabriele Berton' 'Carlo Masone']" ]
null
null
2406.01317
null
null
http://arxiv.org/pdf/2406.01317v2
2024-06-28T13:27:36Z
2024-06-03T13:29:36Z
The Intelligible and Effective Graph Neural Additive Networks
Graph Neural Networks (GNNs) have emerged as the predominant approach for learning over graph-structured data. However, most GNNs operate as black-box models and require post-hoc explanations, which may not suffice in high-stakes scenarios where transparency is crucial. In this paper, we present a GNN that is interpretable by design. Our model, Graph Neural Additive Network (GNAN), is a novel extension of the interpretable class of Generalized Additive Models, and can be visualized and fully understood by humans. GNAN is designed to be fully interpretable, allowing both global and local explanations at the feature and graph levels through direct visualization of the model. These visualizations describe the exact way the model uses the relationships between the target variable, the features, and the graph. We demonstrate the intelligibility of GNANs in a series of examples on different tasks and datasets. In addition, we show that the accuracy of GNAN is on par with black-box GNNs, making it suitable for critical applications where transparency is essential, alongside high accuracy.
[ "['Maya Bechler-Speicher' 'Amir Globerson' 'Ran Gilad-Bachrach']" ]
null
null
2406.01321
null
null
http://arxiv.org/pdf/2406.01321v1
2024-06-03T13:42:10Z
2024-06-03T13:42:10Z
Sequence-to-Sequence Multi-Modal Speech In-Painting
Speech in-painting is the task of regenerating missing audio contents using reliable context information. Despite various recent studies in multi-modal perception of audio in-painting, there is still a need for an effective infusion of visual and auditory information in speech in-painting. In this paper, we introduce a novel sequence-to-sequence model that leverages the visual information to in-paint audio signals via an encoder-decoder architecture. The encoder plays the role of a lip-reader for facial recordings and the decoder takes both encoder outputs as well as the distorted audio spectrograms to restore the original speech. Our model outperforms an audio-only speech in-painting model and has comparable results with a recent multi-modal speech in-painter in terms of speech quality and intelligibility metrics for distortions of 300 ms to 1500 ms duration, which proves the effectiveness of the introduced multi-modality in speech in-painting.
[ "['Mahsa Kadkhodaei Elyaderani' 'Shahram Shirani']" ]
null
null
2406.01345
null
null
http://arxiv.org/pdf/2406.01345v1
2024-06-03T14:08:04Z
2024-06-03T14:08:04Z
BMRS: Bayesian Model Reduction for Structured Pruning
Modern neural networks are often massively overparameterized leading to high compute costs during training and at inference. One effective method to improve both the compute and energy efficiency of neural networks while maintaining good performance is structured pruning, where full network structures (e.g. neurons or convolutional filters) that have limited impact on the model output are removed. In this work, we propose Bayesian Model Reduction for Structured pruning (BMRS), a fully end-to-end Bayesian method of structured pruning. BMRS is based on two recent methods: Bayesian structured pruning with multiplicative noise, and Bayesian model reduction (BMR), a method which allows efficient comparison of Bayesian models under a change in prior. We present two realizations of BMRS derived from different priors which yield different structured pruning characteristics: 1) BMRS_N with the truncated log-normal prior, which offers reliable compression rates and accuracy without the need for tuning any thresholds and 2) BMRS_U with the truncated log-uniform prior that can achieve more aggressive compression based on the boundaries of truncation. Overall, we find that BMRS offers a theoretically grounded approach to structured pruning of neural networks yielding both high compression rates and accuracy. Experiments on multiple datasets and neural networks of varying complexity showed that the two BMRS methods offer a competitive performance-efficiency trade-off compared to other pruning methods.
[ "['Dustin Wright' 'Christian Igel' 'Raghavendra Selvan']" ]
null
null
2406.01352
null
null
http://arxiv.org/pdf/2406.01352v1
2024-06-03T14:16:56Z
2024-06-03T14:16:56Z
Position Paper: An Inner Interpretability Framework for AI Inspired by Lessons from Cognitive Neuroscience
Inner Interpretability is a promising emerging field tasked with uncovering the inner mechanisms of AI systems, though how to develop these mechanistic theories is still much debated. Moreover, recent critiques raise issues that question its usefulness to advance the broader goals of AI. However, it has been overlooked that these issues resemble those that have been grappled with in another field: Cognitive Neuroscience. Here we draw the relevant connections and highlight lessons that can be transferred productively between fields. Based on these, we propose a general conceptual framework and give concrete methodological strategies for building mechanistic explanations in AI inner interpretability research. With this conceptual framework, Inner Interpretability can fend off critiques and position itself on a productive path to explain AI systems.
[ "['Martina G. Vilas' 'Federico Adolfi' 'David Poeppel' 'Gemma Roig']" ]
null
null
2406.01361
null
null
http://arxiv.org/pdf/2406.01361v1
2024-06-03T14:25:29Z
2024-06-03T14:25:29Z
Learning to Play Atari in a World of Tokens
Model-based reinforcement learning agents utilizing transformers have shown improved sample efficiency due to their ability to model extended context, resulting in more accurate world models. However, for complex reasoning and planning tasks, these methods primarily rely on continuous representations. This complicates modeling of discrete properties of the real world such as disjoint object classes between which interpolation is not plausible. In this work, we introduce discrete abstract representations for transformer-based learning (DART), a sample-efficient method utilizing discrete representations for modeling both the world and learning behavior. We incorporate a transformer-decoder for auto-regressive world modeling and a transformer-encoder for learning behavior by attending to task-relevant cues in the discrete representation of the world model. For handling partial observability, we aggregate information from past time steps as memory tokens. DART outperforms previous state-of-the-art methods that do not use look-ahead search on the Atari 100k sample efficiency benchmark with a median human-normalized score of 0.790 and beats humans in 9 out of 26 games. We release our code at https://pranaval.github.io/DART/.
[ "['Pranav Agarwal' 'Sheldon Andrews' 'Samira Ebrahimi Kahou']" ]
null
null
2406.01365
null
null
http://arxiv.org/pdf/2406.01365v1
2024-06-03T14:32:39Z
2024-06-03T14:32:39Z
From Feature Visualization to Visual Circuits: Effect of Adversarial Model Manipulation
Understanding the inner working functionality of large-scale deep neural networks is challenging yet crucial in several high-stakes applications. Mechanistic interpretability is an emergent field that tackles this challenge, often by identifying human-understandable subgraphs in deep neural networks known as circuits. In vision-pretrained models, these subgraphs are usually interpreted by visualizing their node features through a popular technique called feature visualization. Recent works have analyzed the stability of different feature visualization types under the adversarial model manipulation framework. This paper starts by addressing limitations in existing works by proposing a novel attack called ProxPulse that simultaneously manipulates the two types of feature visualizations. Surprisingly, when analyzing these attacks under the umbrella of visual circuits, we find that visual circuits show some robustness to ProxPulse. We, therefore, introduce a new attack based on ProxPulse that unveils the manipulability of visual circuits, shedding light on their lack of robustness. The effectiveness of these attacks is validated using pre-trained AlexNet and ResNet-50 models on ImageNet.
[ "['Geraldin Nanfack' 'Michael Eickenberg' 'Eugene Belilovsky']" ]
null
null
2406.01378
null
null
http://arxiv.org/pdf/2406.01378v1
2024-06-03T14:42:31Z
2024-06-03T14:42:31Z
A Theory of Learnability for Offline Decision Making
We study the problem of offline decision making, which focuses on learning decisions from datasets only partially correlated with the learning objective. While previous research has extensively studied specific offline decision making problems like offline reinforcement learning (RL) and off-policy evaluation (OPE), a unified framework and theory remain absent. To address this gap, we introduce a unified framework termed Decision Making with Offline Feedback (DMOF), which captures a wide range of offline decision making problems including offline RL, OPE, and offline partially observable Markov decision processes (POMDPs). For the DMOF framework, we introduce a hardness measure called the Offline Estimation Coefficient (OEC), which measures the learnability of offline decision making problems and is also reflected in the derived minimax lower bounds. Additionally, we introduce an algorithm called Empirical Decision with Divergence (EDD), for which we establish both an instance-dependent upper bound and a minimax upper bound. The minimax upper bound almost matches the lower bound determined by the OEC. Finally, we show that EDD achieves a fast convergence rate (i.e., a rate scaling as $1/N$, where $N$ is the sample size) for specific settings such as supervised learning and Markovian sequential problems~(e.g., MDPs) with partial coverage.
[ "['Chenjie Mao' 'Qiaosheng Zhang']" ]
null
null
2406.01386
null
null
http://arxiv.org/pdf/2406.01386v1
2024-06-03T14:48:53Z
2024-06-03T14:48:53Z
Combinatorial Multivariant Multi-Armed Bandits with Applications to Episodic Reinforcement Learning and Beyond
We introduce a novel framework of combinatorial multi-armed bandits (CMAB) with multivariant and probabilistically triggering arms (CMAB-MT), where the outcome of each arm is a $d$-dimensional multivariant random variable and the feedback follows a general arm triggering process. Compared with existing CMAB works, CMAB-MT not only enhances the modeling power but also allows improved results by leveraging distinct statistical properties for multivariant random variables. For CMAB-MT, we propose a general 1-norm multivariant and triggering probability-modulated smoothness condition, and an optimistic CUCB-MT algorithm built upon this condition. Our framework can include many important problems as applications, such as episodic reinforcement learning (RL) and probabilistic maximum coverage for goods distribution, all of which meet the above smoothness condition and achieve matching or improved regret bounds compared to existing works. Through our new framework, we build the first connection between the episodic RL and CMAB literature, by offering a new angle to solve the episodic RL through the lens of CMAB, which may encourage more interactions between these two important directions.
[ "['Xutong Liu' 'Siwei Wang' 'Jinhang Zuo' 'Han Zhong' 'Xuchuang Wang'\n 'Zhiyong Wang' 'Shuai Li' 'Mohammad Hajiesmaili' 'John C. S. Lui'\n 'Wei Chen']" ]
null
null
2406.01389
null
null
http://arxiv.org/pdf/2406.01389v2
2024-06-26T15:42:57Z
2024-06-03T14:51:27Z
RL in Latent MDPs is Tractable: Online Guarantees via Off-Policy Evaluation
In many real-world decision problems there is partially observed, hidden or latent information that remains fixed throughout an interaction. Such decision problems can be modeled as Latent Markov Decision Processes (LMDPs), where a latent variable is selected at the beginning of an interaction and is not disclosed to the agent. In the last decade, there has been significant progress in solving LMDPs under different structural assumptions. However, for general LMDPs, there is no known learning algorithm that provably matches the existing lower bound (Kwon et al., 2021). We introduce the first sample-efficient algorithm for LMDPs without any additional structural assumptions. Our result builds off a new perspective on the role of off-policy evaluation guarantees and coverage coefficients in LMDPs, a perspective that has been overlooked in the context of exploration in partially observed environments. Specifically, we establish a novel off-policy evaluation lemma and introduce a new coverage coefficient for LMDPs. Then, we show how these can be used to derive near-optimal guarantees of an optimistic exploration algorithm. These results, we believe, can be valuable for a wide range of interactive learning problems beyond LMDPs, and especially, for partially observed environments.
[ "['Jeongyeol Kwon' 'Shie Mannor' 'Constantine Caramanis' 'Yonathan Efroni']" ]
null
null
2406.01400
null
null
http://arxiv.org/pdf/2406.01400v1
2024-06-03T15:03:31Z
2024-06-03T15:03:31Z
Efficient Computation Using Spatial-Photonic Ising Machines: Utilizing Low-Rank and Circulant Matrix Constraints
We explore the potential of spatial-photonic Ising machines (SPIMs) to address computationally intensive Ising problems that employ low-rank and circulant coupling matrices. Our results indicate that the performance of SPIMs is critically affected by the rank and precision of the coupling matrices. By developing and assessing advanced decomposition techniques, we expand the range of problems SPIMs can solve, overcoming the limitations of traditional Mattis-type matrices. Our approach accommodates a diverse array of coupling matrices, including those with inherently low ranks, applicable to complex NP-complete problems. We explore the practical benefits of low-rank approximation in optimization tasks, particularly in financial optimization, to demonstrate the real-world applications of SPIMs. Finally, we evaluate the computational limitations imposed by SPIM hardware precision and suggest strategies to optimize the performance of these systems within these constraints.
[ "['Richard Zhipeng Wang' 'James S. Cummins' 'Marvin Syed' 'Nikita Stroev'\n 'George Pastras' 'Jason Sakellariou' 'Symeon Tsintzos'\n 'Alexis Askitopoulos' 'Daniele Veraldi' 'Marcello Calvanese Strinati'\n 'Silvia Gentilini' 'Davide Pierangeli' 'Claudio Conti'\n 'Natalia G. Berloff']" ]
null
null
2406.01402
null
null
http://arxiv.org/pdf/2406.01402v1
2024-06-03T15:04:47Z
2024-06-03T15:04:47Z
Mixture of Rationale: Multi-Modal Reasoning Mixture for Visual Question Answering
Zero-shot visual question answering (VQA) is a challenging task that requires reasoning across modalities. While some existing methods rely on a single rationale within the Chain of Thoughts (CoT) framework, they may fall short of capturing the complexity of the VQA problem. On the other hand, some other methods that use multiple rationales may still suffer from low diversity, poor modality alignment, and inefficient retrieval and fusion. In response to these challenges, we propose Mixture of Rationales (MoR), a novel multi-modal reasoning method that mixes multiple rationales for VQA. MoR uses a single frozen Vision-and-Language Pre-trained Model (VLPM) to dynamically generate, retrieve and fuse multi-modal thoughts. We evaluate MoR on two challenging VQA datasets, i.e. NLVR2 and OKVQA, with two representative backbones, OFA and VL-T5. MoR achieves a 12.43% accuracy improvement on NLVR2, and a 2.45% accuracy improvement on OKVQA-S (the science and technology category of OKVQA).
[ "['Tao Li' 'Linjun Shou' 'Xuejun Liu']" ]
null
null
2406.01411
null
null
http://arxiv.org/pdf/2406.01411v1
2024-06-03T15:10:01Z
2024-06-03T15:10:01Z
Using Constraints to Discover Sparse and Alternative Subgroup Descriptions
Subgroup-discovery methods allow users to obtain simple descriptions of interesting regions in a dataset. Using constraints in subgroup discovery can enhance interpretability even further. In this article, we focus on two types of constraints: First, we limit the number of features used in subgroup descriptions, making the latter sparse. Second, we propose the novel optimization problem of finding alternative subgroup descriptions, which cover a similar set of data objects as a given subgroup but use different features. We describe how to integrate both constraint types into heuristic subgroup-discovery methods. Further, we propose a novel Satisfiability Modulo Theories (SMT) formulation of subgroup discovery as a white-box optimization problem, which allows solver-based search for subgroups and is open to a variety of constraint types. Additionally, we prove that both constraint types lead to an NP-hard optimization problem. Finally, we employ 27 binary-classification datasets to compare heuristic and solver-based search for unconstrained and constrained subgroup discovery. We observe that heuristic search methods often yield high-quality subgroups within a short runtime, also in scenarios with constraints.
[ "['Jakob Bach']" ]
null
null
2406.01414
null
null
http://arxiv.org/pdf/2406.01414v1
2024-06-03T15:13:21Z
2024-06-03T15:13:21Z
CE-NAS: An End-to-End Carbon-Efficient Neural Architecture Search Framework
This work presents a novel approach to neural architecture search (NAS) that aims to increase carbon efficiency for the model design process. The proposed framework CE-NAS addresses the key challenge of high carbon cost associated with NAS by exploring the carbon emission variations of energy and energy differences of different NAS algorithms. At the high level, CE-NAS leverages a reinforcement-learning agent to dynamically adjust GPU resources based on carbon intensity, predicted by a time-series transformer, to balance energy-efficient sampling and energy-intensive evaluation tasks. Furthermore, CE-NAS leverages a recently proposed multi-objective optimizer to effectively reduce the NAS search space. We demonstrate the efficacy of CE-NAS in lowering carbon emissions while achieving SOTA results for both NAS datasets and open-domain NAS tasks. For example, on the HW-NasBench dataset, CE-NAS reduces carbon emissions by up to 7.22X while maintaining a search efficiency comparable to vanilla NAS. For open-domain NAS tasks, CE-NAS achieves SOTA results with 97.35% top-1 accuracy on CIFAR-10 with only 1.68M parameters and a carbon consumption of 38.53 lbs of CO2. On ImageNet, our searched model achieves 80.6% top-1 accuracy with a 0.78 ms TensorRT latency using FP16 on NVIDIA V100, consuming only 909.86 lbs of CO2, making it comparable to other one-shot-based NAS baselines.
[ "['Yiyang Zhao' 'Yunzhuo Liu' 'Bo Jiang' 'Tian Guo']" ]
null
null
2406.01416
null
null
http://arxiv.org/pdf/2406.01416v1
2024-06-03T15:16:02Z
2024-06-03T15:16:02Z
Adapting Conformal Prediction to Distribution Shifts Without Labels
Conformal prediction (CP) enables machine learning models to output prediction sets with guaranteed coverage rate, assuming exchangeable data. Unfortunately, the exchangeability assumption is frequently violated due to distribution shifts in practice, and the challenge is often compounded by the lack of ground truth labels at test time. Focusing on classification in this paper, our goal is to improve the quality of CP-generated prediction sets using only unlabeled data from the test domain. This is achieved by two new methods called ECP and EACP, that adjust the score function in CP according to the base model's uncertainty on the unlabeled test data. Through extensive experiments on a number of large-scale datasets and neural network architectures, we show that our methods provide consistent improvement over existing baselines and nearly match the performance of supervised algorithms.
[ "['Kevin Kasa' 'Zhiyu Zhang' 'Heng Yang' 'Graham W. Taylor']" ]
null
null
2406.01417
null
null
http://arxiv.org/pdf/2406.01417v1
2024-06-03T15:16:09Z
2024-06-03T15:16:09Z
Mixup Augmentation with Multiple Interpolations
Mixup and its variants form a popular class of data augmentation techniques. Using a random sample pair, it generates a new sample by linear interpolation of the inputs and labels. However, generating only one single interpolation may limit its augmentation ability. In this paper, we propose a simple yet effective extension called multi-mix, which generates multiple interpolations from a sample pair. With an ordered sequence of generated samples, multi-mix can better guide the training process than standard mixup. Moreover, theoretically, this can also reduce the stochastic gradient variance. Extensive experiments on a number of synthetic and large-scale data sets demonstrate that multi-mix outperforms various mixup variants and non-mixup-based baselines in terms of generalization, robustness, and calibration.
[ "['Lifeng Shen' 'Jincheng Yu' 'Hansi Yang' 'James T. Kwok']" ]
null
null
2406.01421
null
null
http://arxiv.org/abs/2406.01421v1
2024-06-03T15:20:05Z
2024-06-03T15:20:05Z
Problematizing AI Omnipresence in Landscape Architecture
This position paper argues for, and offers, a critical lens through which to examine the current AI frenzy in the landscape architecture profession. In it, the authors propose five archetypes or mental modes that landscape architects might inhabit when thinking about AI. Rather than limiting judgments of AI use to a single axis of acceleration, these archetypes and corresponding narratives exist along a relational spectrum and are permeable, allowing LAs to take on and switch between them according to context. We model these relationships between the archetypes and their contributions to AI advancement using a causal loop diagram (CLD), and with those interactions argue that more nuanced ways of approaching AI might also open new modes of practice in the new digital economy.
[ "['Phillip Fernberg' 'Zihao Zhang']" ]
null
null
2406.01423
null
null
http://arxiv.org/pdf/2406.01423v1
2024-06-03T15:24:15Z
2024-06-03T15:24:15Z
Value Improved Actor Critic Algorithms
Many modern reinforcement learning algorithms build on the actor-critic (AC) framework: iterative improvement of a policy (the actor) using policy improvement operators and iterative approximation of the policy's value (the critic). In contrast, the popular value-based algorithm family employs improvement operators in the value update, to iteratively improve the value function directly. In this work, we propose a general extension to the AC framework that employs two separate improvement operators: one applied to the policy in the spirit of policy-based algorithms and one applied to the value in the spirit of value-based algorithms, which we dub Value-Improved AC (VI-AC). We design two practical VI-AC algorithms based on the popular online off-policy AC algorithms TD3 and DDPG. We evaluate VI-TD3 and VI-DDPG on the Mujoco benchmark and find that both improve upon or match the performance of their respective baselines in all environments tested.
[ "['Yaniv Oren' 'Moritz A. Zanger' 'Pascal R. van der Vaart'\n 'Matthijs T. J. Spaan' 'Wendelin Bohmer']" ]
null
null
2406.01424
null
null
http://arxiv.org/pdf/2406.01424v1
2024-06-03T15:25:13Z
2024-06-03T15:25:13Z
Universal In-Context Approximation By Prompting Fully Recurrent Models
Zero-shot and in-context learning enable solving tasks without model fine-tuning, making them essential for developing generative model solutions. Therefore, it is crucial to understand whether a pretrained model can be prompted to approximate any function, i.e., whether it is a universal in-context approximator. While it was recently shown that transformer models do possess this property, these results rely on their attention mechanism. Hence, these findings do not apply to fully recurrent architectures like RNNs, LSTMs, and the increasingly popular SSMs. We demonstrate that RNNs, LSTMs, GRUs, Linear RNNs, and linear gated architectures such as Mamba and Hawk/Griffin can also serve as universal in-context approximators. To streamline our argument, we introduce a programming language called LSRL that compiles to these fully recurrent architectures. LSRL may be of independent interest for further studies of fully recurrent models, such as constructing interpretability benchmarks. We also study the role of multiplicative gating and observe that architectures incorporating such gating (e.g., LSTMs, GRUs, Hawk/Griffin) can implement certain operations more stably, making them more viable candidates for practical in-context universal approximation.
[ "['Aleksandar Petrov' 'Tom A. Lamb' 'Alasdair Paren' 'Philip H. S. Torr'\n 'Adel Bibi']" ]
null
null
2406.01435
null
null
http://arxiv.org/pdf/2406.01435v1
2024-06-03T15:28:12Z
2024-06-03T15:28:12Z
Learning Analysis of Kernel Ridgeless Regression with Asymmetric Kernel Learning
Ridgeless regression has garnered attention among researchers, particularly in light of the "Benign Overfitting" phenomenon, where models interpolating noisy samples demonstrate robust generalization. However, kernel ridgeless regression does not always perform well due to the lack of flexibility. This paper enhances kernel ridgeless regression with Locally-Adaptive-Bandwidths (LAB) RBF kernels, incorporating kernel learning techniques to improve performance in both experiments and theory. For the first time, we demonstrate that functions learned from LAB RBF kernels belong to an integral space of Reproducing Kernel Hilbert Spaces (RKHSs). Despite the absence of explicit regularization in the proposed model, its optimization is equivalent to solving an $\ell_0$-regularized problem in the integral space of RKHSs, elucidating the origin of its generalization ability. Taking an approximation analysis viewpoint, we introduce an $\ell_q$-norm analysis technique (with $0<q<1$) to derive the learning rate for the proposed model under mild conditions. This result deepens our theoretical understanding, explaining that our algorithm's robust approximation ability arises from the large capacity of the integral space of RKHSs, while its generalization ability is ensured by sparsity, controlled by the number of support vectors. Experimental results on both synthetic and real datasets validate our theoretical conclusions.
[ "['Fan He' 'Mingzhen He' 'Lei Shi' 'Xiaolin Huang' 'Johan A. K. Suykens']" ]
null
null
2406.01438
null
null
http://arxiv.org/pdf/2406.01438v2
2024-06-20T08:11:12Z
2024-06-03T15:29:38Z
Asynchronous Byzantine Federated Learning
Federated learning (FL) enables a set of geographically distributed clients to collectively train a model through a server. Classically, the training process is synchronous, but it can be made asynchronous to maintain its speed in the presence of slow clients and in heterogeneous networks. The vast majority of Byzantine fault-tolerant FL systems however rely on a synchronous training process. Our solution is one of the first Byzantine-resilient and asynchronous FL algorithms that does not require an auxiliary server dataset and is not delayed by stragglers, which are shortcomings of previous works. Intuitively, the server in our solution waits to receive a minimum number of updates from clients on its latest model to safely update it, and is later able to safely leverage the updates that late clients might send. We compare the performance of our solution with state-of-the-art algorithms on both image and text datasets under gradient inversion, perturbation, and backdoor attacks. Our results indicate that our solution trains a model faster than previous synchronous FL solutions and, in the presence of Byzantine clients, maintains a higher accuracy than previous asynchronous FL solutions, by up to 1.54x under perturbation attacks and up to 1.75x under gradient inversion attacks.
[ "['Bart Cox' 'Abele Mălan' 'Lydia Y. Chen' 'Jérémie Decouchant']" ]
null
null
2406.01439
null
null
http://arxiv.org/pdf/2406.01439v2
2024-06-20T12:04:28Z
2024-06-03T15:29:46Z
Asynchronous Multi-Server Federated Learning for Geo-Distributed Clients
Federated learning (FL) systems enable multiple clients to train a machine learning model iteratively through synchronously exchanging the intermediate model weights with a single server. The scalability of such FL systems can be limited by two factors: server idle time due to synchronous communication and the risk of a single server becoming the bottleneck. In this paper, we propose a new FL architecture, to our knowledge, the first multi-server FL system that is entirely asynchronous, and therefore addresses these two limitations simultaneously. Our solution keeps both servers and clients continuously active. As in previous multi-server methods, clients interact solely with their nearest server, ensuring efficient update integration into the model. Differently, however, servers also periodically update each other asynchronously, and never postpone interactions with clients. We compare our solution to three representative baselines - FedAvg, FedAsync and HierFAVG - on the MNIST and CIFAR-10 image classification datasets and on the WikiText-2 language modeling dataset. Our solution converges to similar or higher accuracy levels than previous baselines and requires 61% less time to do so in geo-distributed settings.
[ "['Yuncong Zuo' 'Bart Cox' 'Lydia Y. Chen' 'Jérémie Decouchant']" ]
null
null
2406.01446
null
null
http://arxiv.org/pdf/2406.01446v1
2024-06-03T15:38:40Z
2024-06-03T15:38:40Z
Enabling ASR for Low-Resource Languages: A Comprehensive Dataset Creation Approach
In recent years, automatic speech recognition (ASR) systems have significantly improved, especially in languages with a vast amount of transcribed speech data. However, ASR systems tend to perform poorly for low-resource languages with fewer resources, such as minority and regional languages. This study introduces a novel pipeline designed to generate ASR training datasets from audiobooks, which typically feature a single transcript associated with hours-long audios. The common structure of these audiobooks poses a unique challenge due to the extensive length of audio segments, whereas optimal ASR training requires segments ranging from 4 to 15 seconds. To address this, we propose a method for effectively aligning audio with its corresponding text and segmenting it into lengths suitable for ASR training. Our approach simplifies data preparation for ASR systems in low-resource languages and demonstrates its application through a case study involving the Armenian language. Our method, which is "portable" to many low-resource languages, not only mitigates the issue of data scarcity but also enhances the performance of ASR models for underrepresented languages.
[ "['Ara Yeroyan' 'Nikolay Karpov']" ]
null
null
2406.01455
null
null
http://arxiv.org/pdf/2406.01455v1
2024-06-03T15:43:29Z
2024-06-03T15:43:29Z
Automatic Fused Multimodal Deep Learning for Plant Identification
Plant classification is vital for ecological conservation and agricultural productivity, enhancing our understanding of plant growth dynamics and aiding species preservation. The advent of deep learning (DL) techniques has revolutionized this field by enabling autonomous feature extraction, significantly reducing the dependence on manual expertise. However, conventional DL models often rely solely on single data sources, failing to capture the full biological diversity of plant species comprehensively. Recent research has turned to multimodal learning to overcome this limitation by integrating multiple data types, which enriches the representation of plant characteristics. This shift introduces the challenge of determining the optimal point for modality fusion. In this paper, we introduce a pioneering multimodal DL-based approach for plant classification with automatic modality fusion. Utilizing the multimodal fusion architecture search, our method integrates images from multiple plant organs (flowers, leaves, fruits, and stems) into a cohesive model. Our method achieves 83.48% accuracy on 956 classes of the PlantCLEF2015 dataset, surpassing state-of-the-art methods. It outperforms late fusion by 11.07% and is more robust to missing modalities. We validate our model against established benchmarks using standard performance metrics and McNemar's test, further underscoring its superiority.
[ "['Alfreds Lapkovskis' 'Natalia Nefedova' 'Ali Beikmohammadi']" ]
null
null
2406.01457
null
null
http://arxiv.org/pdf/2406.01457v1
2024-06-03T15:43:57Z
2024-06-03T15:43:57Z
Differentially Private Tabular Data Synthesis using Large Language Models
Synthetic tabular data generation with differential privacy is a crucial problem to enable data sharing with formal privacy. Despite a rich history of methodological research and development, developing differentially private tabular data generators that can provide realistic synthetic datasets remains challenging. This paper introduces DP-LLMTGen -- a novel framework for differentially private tabular data synthesis that leverages pretrained large language models (LLMs). DP-LLMTGen models sensitive datasets using a two-stage fine-tuning procedure with a novel loss function specifically designed for tabular data. Subsequently, it generates synthetic data through sampling the fine-tuned LLMs. Our empirical evaluation demonstrates that DP-LLMTGen outperforms a variety of existing mechanisms across multiple datasets and privacy settings. Additionally, we conduct an ablation study and several experimental analyses to deepen our understanding of LLMs in addressing this important problem. Finally, we highlight the controllable generation ability of DP-LLMTGen through a fairness-constrained generation setting.
[ "['Toan V. Tran' 'Li Xiong']" ]
null
null
2406.01461
null
null
http://arxiv.org/pdf/2406.01461v1
2024-06-03T15:50:32Z
2024-06-03T15:50:32Z
Hardness of Learning Neural Networks under the Manifold Hypothesis
The manifold hypothesis presumes that high-dimensional data lies on or near a low-dimensional manifold. While the utility of encoding geometric structure has been demonstrated empirically, rigorous analysis of its impact on the learnability of neural networks is largely missing. Several recent results have established hardness results for learning feedforward and equivariant neural networks under i.i.d. Gaussian or uniform Boolean data distributions. In this paper, we investigate the hardness of learning under the manifold hypothesis. We ask which minimal assumptions on the curvature and regularity of the manifold, if any, render the learning problem efficiently learnable. We prove that learning is hard under input manifolds of bounded curvature by extending proofs of hardness in the SQ and cryptographic settings for Boolean data inputs to the geometric setting. On the other hand, we show that additional assumptions on the volume of the data manifold alleviate these fundamental limitations and guarantee learnability via a simple interpolation argument. Notable instances of this regime are manifolds which can be reliably reconstructed via manifold learning. Looking forward, we comment on and empirically explore intermediate regimes of manifolds, which have heterogeneous features commonly found in real world data.
[ "['Bobak T. Kiani' 'Jason Wang' 'Melanie Weber']" ]
null
null
2406.01462
null
null
http://arxiv.org/pdf/2406.01462v1
2024-06-03T15:51:04Z
2024-06-03T15:51:04Z
Understanding Preference Fine-Tuning Through the Lens of Coverage
Learning from human preference data has emerged as the dominant paradigm for fine-tuning large language models (LLMs). The two most common families of techniques -- online reinforcement learning (RL) such as Proximal Policy Optimization (PPO) and offline contrastive methods such as Direct Preference Optimization (DPO) -- were positioned as equivalent in prior work due to the fact that both have to start from the same offline preference dataset. To further expand our theoretical understanding of the similarities and differences between online and offline techniques for preference fine-tuning, we conduct a rigorous analysis through the lens of dataset coverage, a concept that captures how the training data covers the test distribution and is widely used in RL. We prove that a global coverage condition is both necessary and sufficient for offline contrastive methods to converge to the optimal policy, but a weaker partial coverage condition suffices for online RL methods. This separation provides one explanation of why online RL methods can perform better than offline methods, especially when the offline preference data is not diverse enough. Finally, motivated by our preceding theoretical observations, we derive a hybrid preference optimization (HyPO) algorithm that uses offline data for contrastive-based preference optimization and online data for KL regularization. Theoretically and empirically, we demonstrate that HyPO is more performant than its pure offline counterpart DPO, while still preserving its computation and memory efficiency.
[ "['Yuda Song' 'Gokul Swamy' 'Aarti Singh' 'J. Andrew Bagnell' 'Wen Sun']" ]
null
null
2406.01468
null
null
http://arxiv.org/pdf/2406.01468v1
2024-06-03T15:57:29Z
2024-06-03T15:57:29Z
Understanding Token Probability Encoding in Output Embeddings
In this paper, we investigate the output token probability information in the output embedding of language models. We provide an approximate common log-linear encoding of output token probabilities within the output embedding vectors and demonstrate that it is accurate and sparse when the output space is large and output logits are concentrated. Based on such findings, we edit the encoding in output embedding to modify the output probability distribution accurately. Moreover, the sparsity we find in output probability encoding suggests that a large number of dimensions in the output embedding do not contribute to causal language modeling. Therefore, we attempt to delete the output-unrelated dimensions and find more than 30% of the dimensions can be deleted without significant movement in output distribution and degeneration on sequence generation. Additionally, in training dynamics, we use such encoding as a probe and find that the output embeddings capture token frequency information in early steps, even before an obvious convergence starts.
[ "['Hakaze Cho' 'Yoshihiro Sakai' 'Kenshiro Tanaka' 'Mariko Kato'\n 'Naoya Inoue']" ]
null
null
2406.01471
null
null
http://arxiv.org/pdf/2406.01471v1
2024-06-03T15:59:19Z
2024-06-03T15:59:19Z
Inverse design of photonic surfaces on Inconel via multi-fidelity machine learning ensemble framework and high throughput femtosecond laser processing
We demonstrate a multi-fidelity (MF) machine learning ensemble framework for the inverse design of photonic surfaces, trained on a dataset of 11,759 samples that we fabricate using high throughput femtosecond laser processing. The MF ensemble combines an initial low fidelity model for generating design solutions, with a high fidelity model that refines these solutions through local optimization. The combined MF ensemble can generate multiple disparate sets of laser-processing parameters that can each produce the same target input spectral emissivity with high accuracy (root mean squared errors < 2%). SHapley Additive exPlanations analysis shows transparent model interpretability of the complex relationship between laser parameters and spectral emissivity. Finally, the MF ensemble is experimentally validated by fabricating and evaluating photonic surface designs that it generates for improved efficiency energy harvesting devices. Our approach provides a powerful tool for advancing the inverse design of photonic surfaces in energy harvesting applications.
[ "['Luka Grbcic' 'Minok Park' 'Mahmoud Elzouka' 'Ravi Prasher'\n 'Juliane Müller' 'Costas P. Grigoropoulos' 'Sean D. Lubner'\n 'Vassilia Zorba' 'Wibe Albert de Jong']" ]
null
null
2406.01477
null
null
http://arxiv.org/pdf/2406.01477v1
2024-06-03T16:06:12Z
2024-06-03T16:06:12Z
Finding Optimally Robust Data Mixtures via Concave Maximization
Training on mixtures of data distributions is now common in many modern machine learning pipelines, useful for performing well on several downstream tasks. Group distributionally robust optimization (group DRO) is one popular way to learn mixture weights for training a specific model class, but group DRO methods suffer for non-linear models due to non-convex loss functions and when the models are non-parametric. We address these challenges by proposing to solve a more general DRO problem, giving a method we call MixMax. MixMax selects mixture weights by maximizing a particular concave objective with entropic mirror ascent, and, crucially, we prove that optimally fitting this mixture distribution over the set of bounded predictors returns a group DRO optimal model. Experimentally, we tested MixMax on a sequence modeling task with transformers and on a variety of non-parametric learning problems. In all instances MixMax matched or outperformed the standard data mixing and group DRO baselines, and in particular, MixMax improved the performance of XGBoost over the only baseline, data balancing, for variations of the ACSIncome and CelebA annotations datasets.
[ "['Anvith Thudi' 'Chris J. Maddison']" ]
null
null
2406.01478
null
null
http://arxiv.org/pdf/2406.01478v1
2024-06-03T16:06:23Z
2024-06-03T16:06:23Z
Stochastic Newton Proximal Extragradient Method
Stochastic second-order methods achieve fast local convergence in strongly convex optimization by using noisy Hessian estimates to precondition the gradient. However, these methods typically reach superlinear convergence only when the stochastic Hessian noise diminishes, increasing per-iteration costs over time. Recent work in [arXiv:2204.09266] addressed this with a Hessian averaging scheme that achieves superlinear convergence without higher per-iteration costs. Nonetheless, the method has slow global convergence, requiring up to $\tilde{O}(\kappa^2)$ iterations to reach the superlinear rate of $\tilde{O}((1/t)^{t/2})$, where $\kappa$ is the problem's condition number. In this paper, we propose a novel stochastic Newton proximal extragradient method that improves these bounds, achieving a faster global linear rate and reaching the same fast superlinear rate in $\tilde{O}(\kappa)$ iterations. We accomplish this by extending the Hybrid Proximal Extragradient (HPE) framework, achieving fast global and local convergence rates for strongly convex functions with access to a noisy Hessian oracle.
[ "['Ruichen Jiang' 'Michał Dereziński' 'Aryan Mokhtari']" ]
null
null
2406.01481
null
null
http://arxiv.org/pdf/2406.01481v1
2024-06-03T16:07:52Z
2024-06-03T16:07:52Z
Learning from Streaming Data when Users Choose
In digital markets comprised of many competing services, each user chooses between multiple service providers according to their preferences, and the chosen service makes use of the user data to incrementally improve its model. The service providers' models influence which service the user will choose at the next time step, and the user's choice, in return, influences the model update, leading to a feedback loop. In this paper, we formalize the above dynamics and develop a simple and efficient decentralized algorithm to locally minimize the overall user loss. Theoretically, we show that our algorithm asymptotically converges to stationary points of the overall loss almost surely. We also experimentally demonstrate the utility of our algorithm with real world data.
[ "['Jinyan Su' 'Sarah Dean']" ]
null
null
2406.01484
null
null
http://arxiv.org/pdf/2406.01484v1
2024-06-03T16:09:34Z
2024-06-03T16:09:34Z
Online Optimization Perspective on First-Order and Zero-Order Decentralized Nonsmooth Nonconvex Stochastic Optimization
We investigate the finite-time analysis of finding $(\delta,\epsilon)$-stationary points for nonsmooth nonconvex objectives in decentralized stochastic optimization. A set of agents aim at minimizing a global function using only their local information by interacting over a network. We present a novel algorithm, called Multi Epoch Decentralized Online Learning (ME-DOL), for which we establish the sample complexity in various settings. First, using a recently proposed online-to-nonconvex technique, we show that our algorithm recovers the optimal convergence rate of smooth nonconvex objectives. We then extend our analysis to the nonsmooth setting, building on properties of randomized smoothing and Goldstein-subdifferential sets. We establish the sample complexity of $O(\delta^{-1}\epsilon^{-3})$, which to the best of our knowledge is the first finite-time guarantee for decentralized nonsmooth nonconvex stochastic optimization in the first-order setting (without weak-convexity), matching its optimal centralized counterpart. We further prove the same rate for the zero-order oracle setting without using variance reduction.
[ "['Emre Sahinoglu' 'Shahin Shahrampour']" ]
null
null
2406.01494
null
null
http://arxiv.org/pdf/2406.01494v1
2024-06-03T16:21:29Z
2024-06-03T16:21:29Z
Robust Classification by Coupling Data Mollification with Label Smoothing
Introducing training-time augmentations is a key technique to enhance generalization and prepare deep neural networks against test-time corruptions. Inspired by the success of generative diffusion models, we propose a novel approach coupling data augmentation, in the form of image noising and blurring, with label smoothing to align predicted label confidences with image degradation. The method is simple to implement, introduces negligible overheads, and can be combined with existing augmentations. We demonstrate improved robustness and uncertainty quantification on the corrupted image benchmarks of the CIFAR and TinyImageNet datasets.
[ "['Markus Heinonen' 'Ba-Hien Tran' 'Michael Kampffmeyer'\n 'Maurizio Filippone']" ]
null
null
2406.01506
null
null
http://arxiv.org/pdf/2406.01506v1
2024-06-03T16:34:01Z
2024-06-03T16:34:01Z
The Geometry of Categorical and Hierarchical Concepts in Large Language Models
Understanding how semantic meaning is encoded in the representation spaces of large language models is a fundamental problem in interpretability. In this paper, we study the two foundational questions in this area. First, how are categorical concepts, such as {'mammal', 'bird', 'reptile', 'fish'}, represented? Second, how are hierarchical relations between concepts encoded? For example, how is the fact that 'dog' is a kind of 'mammal' encoded? We show how to extend the linear representation hypothesis to answer these questions. We find a remarkably simple structure: simple categorical concepts are represented as simplices, hierarchically related concepts are orthogonal in a sense we make precise, and (in consequence) complex concepts are represented as polytopes constructed from direct sums of simplices, reflecting the hierarchical structure. We validate these theoretical results on the Gemma large language model, estimating representations for 957 hierarchically related concepts using data from WordNet.
[ "['Kiho Park' 'Yo Joong Choe' 'Yibo Jiang' 'Victor Veitch']" ]
null
null
2406.01517
null
null
http://arxiv.org/pdf/2406.01517v1
2024-06-03T16:48:25Z
2024-06-03T16:48:25Z
Beyond symmetrization: effective adjacency matrices and renormalization for (un)signed directed graphs
To address the peculiarities of directed and/or signed graphs, new Laplacian operators have emerged. For instance, in the case of directionality, we encounter the magnetic operator, dilation (which is underexplored), operators based on random walks, and so forth. The definition of these new operators leads to the need for new studies and concepts, and consequently, the development of new computational tools. But is this really necessary? In this work, we define the concept of effective adjacency matrices that arise from the definition of deformed Laplacian operators such as magnetic, dilation, and signal. These effective matrices allow mapping generic graphs to a family of unsigned, undirected graphs, enabling the application of the well-explored toolkit of measures, machine learning methods, and renormalization groups of undirected graphs. To explore the interplay between deformed operators and effective matrices, we show how the Hodge-Helmholtz decomposition can assist us in navigating this complexity.
[ "['Bruno Messias Farias de Resende']" ]
null
null
2406.01521
null
null
http://arxiv.org/pdf/2406.01521v1
2024-06-03T16:51:57Z
2024-06-03T16:51:57Z
MOSEAC: Streamlined Variable Time Step Reinforcement Learning
Traditional reinforcement learning (RL) methods typically employ a fixed control loop, where each cycle corresponds to an action. This rigidity poses challenges in practical applications, as the optimal control frequency is task-dependent. A suboptimal choice can lead to high computational demands and reduced exploration efficiency. Variable Time Step Reinforcement Learning (VTS-RL) addresses these issues by using adaptive frequencies for the control loop, executing actions only when necessary. This approach, rooted in reactive programming principles, reduces computational load and extends the action space by including action durations. However, VTS-RL's implementation is often complicated by the need to tune multiple hyperparameters that govern exploration in the multi-objective action-duration space (i.e., balancing task performance and number of time steps to achieve a goal). To overcome these challenges, we introduce the Multi-Objective Soft Elastic Actor-Critic (MOSEAC) method. This method features an adaptive reward scheme that adjusts hyperparameters based on observed trends in task rewards during training. This scheme reduces the complexity of hyperparameter tuning, requiring a single hyperparameter to guide exploration, thereby simplifying the learning process and lowering deployment costs. We validate the MOSEAC method through simulations in a Newtonian kinematics environment, demonstrating high task and training performance with fewer time steps, ultimately lowering energy consumption. This validation shows that MOSEAC streamlines RL algorithm deployment by automatically tuning the agent control loop frequency using a single parameter. Its principles can be applied to enhance any RL algorithm, making it a versatile solution for various applications.
[ "['Dong Wang' 'Giovanni Beltrame']" ]
null
null
2406.01523
null
null
http://arxiv.org/pdf/2406.01523v1
2024-06-03T16:53:07Z
2024-06-03T16:53:07Z
Predicting the fatigue life of asphalt concrete using neural networks
Asphalt concrete's (AC) durability and maintenance demands are strongly influenced by its fatigue life. Traditional methods for determining this characteristic are both resource-intensive and time-consuming. This study employs artificial neural networks (ANNs) to predict AC fatigue life, focusing on the impact of strain level, binder content, and air-void content. Leveraging a substantial dataset, we tailored our models to effectively handle the wide range of fatigue life data, typically represented on a logarithmic scale. The mean square logarithmic error was utilized as the loss function to enhance prediction accuracy across all levels of fatigue life. Through comparative analysis of various hyperparameters, we developed a machine-learning model that captures the complex relationships within the data. Our findings demonstrate that higher binder content significantly enhances fatigue life, while the influence of air-void content is more variable, depending on binder levels. Most importantly, this study provides insights into the intricacies of using ANNs for modeling, showcasing their potential utility with larger datasets. The codes developed and the data used in this study are provided as open source on a GitHub repository, with a link included in the paper for full access.
[ "['Jakub Houlík' 'Jan Valentin' 'Václav Nežerka']" ]
null
null
2406.01528
null
null
http://arxiv.org/pdf/2406.01528v2
2024-07-07T11:30:50Z
2024-06-03T16:58:17Z
Physics-Informed Neural Networks for Dynamic Process Operations with Limited Physical Knowledge and Data
In chemical engineering, process data are expensive to acquire, and complex phenomena are difficult to fully model. We explore the use of physics-informed neural networks (PINNs) for dynamic processes with incomplete mechanistic semi-explicit differential-algebraic equation systems and scarce process data. In particular, we focus on estimating states for which neither direct observational data nor constitutive equations are available. We propose an easy-to-apply heuristic to assess whether estimation of such states may be possible. As numerical examples, we consider a continuously stirred tank reactor and a liquid-liquid separator. We find that PINNs can infer unmeasured states with reasonable accuracy, and they generalize better in low-data scenarios than purely data-driven models. We thus show that PINNs are capable of modeling processes when relatively few experimental data and only partially known mechanistic descriptions are available, and conclude that they constitute a promising avenue that warrants further investigation.
[ "['Mehmet Velioglu' 'Song Zhai' 'Sophia Rupprecht' 'Alexander Mitsos'\n 'Andreas Jupke' 'Manuel Dahmen']" ]
null
null
2406.01529
null
null
http://arxiv.org/pdf/2406.01529v1
2024-06-03T16:59:48Z
2024-06-03T16:59:48Z
How to Count Coughs: An Event-Based Framework for Evaluating Automatic Cough Detection Algorithm Performance
Chronic cough disorders are widespread and challenging to assess because they rely on subjective patient questionnaires about cough frequency. Wearable devices running Machine Learning (ML) algorithms are promising for quantifying daily coughs, providing clinicians with objective metrics to track symptoms and evaluate treatments. However, there is a mismatch between state-of-the-art metrics for cough counting algorithms and the information relevant to clinicians. Most works focus on distinguishing cough from non-cough samples, which does not directly provide clinically relevant outcomes such as the number of cough events or their temporal patterns. In addition, typical metrics such as specificity and accuracy can be biased by class imbalance. We propose using event-based evaluation metrics aligned with clinical guidelines on significant cough counting endpoints. We use an ML classifier to illustrate the shortcomings of traditional sample-based accuracy measurements, highlighting their variance due to dataset class imbalance and sample window length. We also present an open-source event-based evaluation framework to test algorithm performance in identifying cough events and rejecting false positives. We provide examples and best practice guidelines in event-based cough counting as a necessary first step to assess algorithm performance with clinical relevance.
[ "['Lara Orlandic' 'Jonathan Dan' 'Jerome Thevenot' 'Tomas Teijeiro'\n 'Alain Sauty' 'David Atienza']" ]
null
null
2406.01539
null
null
http://arxiv.org/pdf/2406.01539v2
2024-06-10T17:22:08Z
2024-06-03T17:16:11Z
Physics-informed deep learning and compressive collocation for high-dimensional diffusion-reaction equations: practical existence theory and numerics
On the forefront of scientific computing, Deep Learning (DL), i.e., machine learning with Deep Neural Networks (DNNs), has emerged as a powerful new tool for solving Partial Differential Equations (PDEs). It has been observed that DNNs are particularly well suited to weakening the effect of the curse of dimensionality, a term coined by Richard E. Bellman in the late '50s to describe challenges such as the exponential dependence of the sample complexity, i.e., the number of samples required to solve an approximation problem, on the dimension of the ambient space. However, although DNNs have been used to solve PDEs since the '90s, the literature underpinning their mathematical efficiency in terms of numerical analysis (i.e., stability, accuracy, and sample complexity) is only recently beginning to emerge. In this paper, we leverage recent advancements in function approximation using sparsity-based techniques and random sampling to develop and analyze an efficient high-dimensional PDE solver based on DL. We show, both theoretically and numerically, that it can compete with a novel stable and accurate compressive spectral collocation method. In particular, we demonstrate a new practical existence theorem, which establishes the existence of a class of trainable DNNs with suitable bounds on the network architecture and a sufficient condition on the sample complexity, with logarithmic or, at worst, linear scaling in dimension, such that the resulting networks stably and accurately approximate a diffusion-reaction PDE with high probability.
[ "['Simone Brugiapaglia' 'Nick Dexter' 'Samir Karam' 'Weiqi Wang']" ]
null
null
2406.01544
null
null
http://arxiv.org/pdf/2406.01544v1
2024-06-03T17:25:18Z
2024-06-03T17:25:18Z
Learning from Mistakes: a Weakly-supervised Method for Mitigating the Distribution Shift in Autonomous Vehicle Planning
The planning problem constitutes a fundamental aspect of the autonomous driving framework. Recent strides in representation learning have empowered vehicles to comprehend their surrounding environments, thereby facilitating the integration of learning-based planning strategies. Among these approaches, Imitation Learning stands out due to its notable training efficiency. However, traditional Imitation Learning methodologies encounter challenges associated with the co-variate shift phenomenon. We propose Learn from Mistakes (LfM) as a remedy to address this issue. The essence of LfM lies in deploying a pre-trained planner across diverse scenarios. Instances where the planner deviates from its immediate objectives, such as maintaining a safe distance from obstacles or adhering to traffic rules, are flagged as mistakes. The environments corresponding to these mistakes are categorized as out-of-distribution states and compiled into a new dataset termed closed-loop mistakes dataset. Notably, the absence of expert annotations for the closed-loop data precludes the applicability of standard imitation learning approaches. To facilitate learning from the closed-loop mistakes, we introduce Validity Learning, a weakly supervised method, which aims to discern valid trajectories within the current environmental context. Experimental evaluations conducted on the InD and Nuplan datasets reveal substantial enhancements in closed-loop metrics such as Progress and Collision Rate, underscoring the effectiveness of the proposed methodology.
[ "['Fazel Arasteh' 'Mohammed Elmahgiubi' 'Behzad Khamidehi'\n 'Hamidreza Mirkhani' 'Weize Zhang' 'Kasra Rezaee']" ]
null
null
2406.01552
null
null
http://arxiv.org/pdf/2406.01552v1
2024-06-03T17:32:43Z
2024-06-03T17:32:43Z
Learning equivariant tensor functions with applications to sparse vector recovery
This work characterizes equivariant polynomial functions from tuples of tensor inputs to tensor outputs. Loosely motivated by physics, we focus on equivariant functions with respect to the diagonal action of the orthogonal group on tensors. We show how to extend this characterization to other linear algebraic groups, including the Lorentz and symplectic groups. Our goal behind these characterizations is to define equivariant machine learning models. In particular, we focus on the sparse vector estimation problem. This problem has been broadly studied in the theoretical computer science literature, and explicit spectral methods, derived by techniques from sum-of-squares, can be shown to recover sparse vectors under certain assumptions. Our numerical results show that the proposed equivariant machine learning models can learn spectral methods that outperform the best theoretically known spectral methods in some regimes. The experiments also suggest that learned spectral methods can solve the problem in settings that have not yet been theoretically analyzed. This is an example of a promising direction in which theory can inform machine learning models and machine learning models could inform theory.
[ "['Wilson G. Gregory' 'Josué Tonelli-Cueto' 'Nicholas F. Marshall'\n 'Andrew S. Lee' 'Soledad Villar']" ]
null
null
2406.01561
null
null
http://arxiv.org/pdf/2406.01561v2
2024-06-22T04:48:56Z
2024-06-03T17:44:11Z
Long and Short Guidance in Score identity Distillation for One-Step Text-to-Image Generation
Diffusion-based text-to-image generation models trained on extensive text-image pairs have shown the capacity to generate photorealistic images consistent with textual descriptions. However, a significant limitation of these models is their slow sample generation, which requires iterative refinement through the same network. In this paper, we enhance Score identity Distillation (SiD) by developing long and short classifier-free guidance (LSG) to efficiently distill pretrained Stable Diffusion models without using real training data. SiD aims to optimize a model-based explicit score matching loss, utilizing a score-identity-based approximation alongside the proposed LSG for practical computation. By training exclusively with fake images synthesized with its one-step generator, SiD equipped with LSG rapidly improves FID and CLIP scores, achieving state-of-the-art FID performance while maintaining a competitive CLIP score. Specifically, its data-free distillation of Stable Diffusion 1.5 achieves a record low FID of 8.15 on the COCO-2014 validation set, with a CLIP score of 0.304 at an LSG scale of 1.5, and a FID of 9.56 with a CLIP score of 0.313 at an LSG scale of 2. Our SiD-LSG code and distilled one-step text-to-image generators are available at https://github.com/mingyuanzhou/SiD-LSG
[ "['Mingyuan Zhou' 'Zhendong Wang' 'Huangjie Zheng' 'Hai Huang']" ]
null
null
2406.01562
null
null
http://arxiv.org/pdf/2406.01562v1
2024-06-03T17:45:19Z
2024-06-03T17:45:19Z
A New View on Planning in Online Reinforcement Learning
This paper investigates a new approach to model-based reinforcement learning using background planning: mixing (approximate) dynamic programming updates and model-free updates, similar to the Dyna architecture. Background planning with learned models is often worse than model-free alternatives, such as Double DQN, even though the former uses significantly more memory and computation. The fundamental problem is that learned models can be inaccurate and often generate invalid states, especially when iterated many steps. In this paper, we avoid this limitation by constraining background planning to a set of (abstract) subgoals and learning only local, subgoal-conditioned models. This goal-space planning (GSP) approach is more computationally efficient, naturally incorporates temporal abstraction for faster long-horizon planning and avoids learning the transition dynamics entirely. We show that our GSP algorithm can propagate value from an abstract space in a manner that helps a variety of base learners learn significantly faster in different domains.
[ "['Kevin Roice' 'Parham Mohammad Panahi' 'Scott M. Jordan' 'Adam White'\n 'Martha White']" ]
null
null
2406.01566
null
null
http://arxiv.org/pdf/2406.01566v1
2024-06-03T17:47:53Z
2024-06-03T17:47:53Z
Helix: Distributed Serving of Large Language Models via Max-Flow on Heterogeneous GPUs
This paper introduces Helix, a distributed system for high-throughput, low-latency large language model (LLM) serving on heterogeneous GPU clusters. A key idea behind Helix is to formulate inference computation of LLMs over heterogeneous GPUs and network connections as a max-flow problem for a directed, weighted graph, whose nodes represent GPU instances and edges capture both GPU and network heterogeneity through their capacities. Helix then uses a mixed integer linear programming (MILP) algorithm to discover highly optimized strategies to serve LLMs. This approach allows Helix to jointly optimize model placement and request scheduling, two highly entangled tasks in heterogeneous LLM serving. Our evaluation on several heterogeneous cluster settings ranging from 24 to 42 GPU nodes shows that Helix improves serving throughput by up to 2.7$\times$ and reduces prompting and decoding latency by up to 2.8$\times$ and 1.3$\times$, respectively, compared to the best existing approaches.
[ "['Yixuan Mei' 'Yonghao Zhuang' 'Xupeng Miao' 'Juncheng Yang' 'Zhihao Jia'\n 'Rashmi Vinayak']" ]
null
null
2406.01570
null
null
http://arxiv.org/pdf/2406.01570v1
2024-06-03T17:51:33Z
2024-06-03T17:51:33Z
Single Trajectory Conformal Prediction
We study the performance of risk-controlling prediction sets (RCPS), an empirical risk minimization-based formulation of conformal prediction, with a single trajectory of temporally correlated data from an unknown stochastic dynamical system. First, we use the blocking technique to show that RCPS attains performance guarantees similar to those enjoyed in the iid setting whenever data is generated by asymptotically stationary and contractive dynamics. Next, we use the decoupling technique to characterize the graceful degradation in RCPS guarantees when the data generating process deviates from stationarity and contractivity. We conclude by discussing how these tools could be used toward a unified analysis of online and offline conformal prediction algorithms, which are currently treated with very different tools.
[ "['Brian Lee' 'Nikolai Matni']" ]
null
null
2406.01572
null
null
http://arxiv.org/pdf/2406.01572v1
2024-06-03T17:51:54Z
2024-06-03T17:51:54Z
Unlocking Guidance for Discrete State-Space Diffusion and Flow Models
Generative models on discrete state-spaces have a wide range of potential applications, particularly in the domain of natural sciences. In continuous state-spaces, controllable and flexible generation of samples with desired properties has been realized using guidance on diffusion and flow models. However, these guidance approaches are not readily amenable to discrete state-space models. Consequently, we introduce a general and principled method for applying guidance on such models. Our method depends on leveraging continuous-time Markov processes on discrete state-spaces, which unlocks computational tractability for sampling from a desired guided distribution. We demonstrate the utility of our approach, Discrete Guidance, on a range of applications including guided generation of images, small-molecules, DNA sequences and protein sequences.
[ "['Hunter Nisonoff' 'Junhao Xiong' 'Stephan Allenspach'\n 'Jennifer Listgarten']" ]
null
null
2406.01575
null
null
http://arxiv.org/pdf/2406.01575v1
2024-06-03T17:54:39Z
2024-06-03T17:54:39Z
Stochastic Bilevel Optimization with Lower-Level Contextual Markov Decision Processes
In various applications, the optimal policy in a strategic decision-making problem depends both on the environmental configuration and exogenous events. For these settings, we introduce Bilevel Optimization with Contextual Markov Decision Processes (BO-CMDP), a stochastic bilevel decision-making model, where the lower level consists of solving a contextual Markov Decision Process (CMDP). BO-CMDP can be viewed as a Stackelberg Game where the leader and a random context beyond the leader's control together decide the setup of (many) MDPs that (potentially multiple) followers best respond to. This framework extends beyond traditional bilevel optimization and finds relevance in diverse fields such as model design for MDPs, tax design, reward shaping and dynamic mechanism design. We propose a stochastic Hyper Policy Gradient Descent (HPGD) algorithm to solve BO-CMDP, and demonstrate its convergence. Notably, HPGD only utilizes observations of the followers' trajectories. Therefore, it allows followers to use any training procedure and the leader to be agnostic of the specific algorithm used, which aligns with various real-world scenarios. We further consider the setting when the leader can influence the training of followers and propose an accelerated algorithm. We empirically demonstrate the performance of our algorithm.
[ "['Vinzenz Thoma' 'Barna Pasztor' 'Andreas Krause' 'Giorgia Ramponi'\n 'Yifan Hu']" ]
null
null
2406.01577
null
null
http://arxiv.org/pdf/2406.01577v1
2024-06-03T17:54:58Z
2024-06-03T17:54:58Z
An Equivalence Between Static and Dynamic Regret Minimization
We study the problem of dynamic regret minimization in online convex optimization, in which the objective is to minimize the difference between the cumulative loss of an algorithm and that of an arbitrary sequence of comparators. While the literature on this topic is very rich, a unifying framework for the analysis and design of these algorithms is still missing. In this paper, \emph{we show that dynamic regret minimization is equivalent to static regret minimization in an extended decision space}. Using this simple observation, we show that there is a frontier of lower bounds trading off penalties due to the variance of the losses and penalties due to variability of the comparator sequence, and provide a framework for achieving any of the guarantees along this frontier. As a result, we prove for the first time that adapting to the squared path-length of an arbitrary sequence of comparators to achieve regret $R_{T}(u_{1},\dots,u_{T})\le O(\sqrt{T\sum_{t} \|u_{t}-u_{t+1}\|^{2}})$ is impossible. However, we prove that it is possible to adapt to a new notion of variability based on the locally-smoothed squared path-length of the comparator sequence, and provide an algorithm guaranteeing dynamic regret of the form $R_{T}(u_{1},\dots,u_{T})\le \tilde O(\sqrt{T\sum_{i}\|\bar u_{i}-\bar u_{i+1}\|^{2}})$. Up to polylogarithmic terms, the new notion of variability is never worse than the classic one involving the path-length.
[ "['Andrew Jacobsen' 'Francesco Orabona']" ]
null
null
2406.01581
null
null
http://arxiv.org/pdf/2406.01581v1
2024-06-03T17:56:58Z
2024-06-03T17:56:58Z
Neural network learns low-dimensional polynomials with SGD near the information-theoretic limit
We study the problem of gradient descent learning of a single-index target function $f_*(\boldsymbol{x}) = \textstyle\sigma_*\left(\langle\boldsymbol{x},\boldsymbol{\theta}\rangle\right)$ under isotropic Gaussian data in $\mathbb{R}^d$, where the link function $\sigma_*:\mathbb{R}\to\mathbb{R}$ is an unknown degree $q$ polynomial with information exponent $p$ (defined as the lowest degree in the Hermite expansion). Prior works showed that gradient-based training of neural networks can learn this target with $n\gtrsim d^{\Theta(p)}$ samples, and such statistical complexity is predicted to be necessary by the correlational statistical query lower bound. Surprisingly, we prove that a two-layer neural network optimized by an SGD-based algorithm learns $f_*$ of arbitrary polynomial link function with a sample and runtime complexity of $n \asymp T \asymp C(q) \cdot d\,\mathrm{polylog}\, d$, where constant $C(q)$ only depends on the degree of $\sigma_*$, regardless of information exponent; this dimension dependence matches the information theoretic limit up to polylogarithmic factors. Core to our analysis is the reuse of minibatch in the gradient computation, which gives rise to higher-order information beyond correlational queries.
[ "['Jason D. Lee' 'Kazusato Oko' 'Taiji Suzuki' 'Denny Wu']" ]
null
null
2406.01583
null
null
http://arxiv.org/pdf/2406.01583v1
2024-06-03T17:58:43Z
2024-06-03T17:58:43Z
Decomposing and Interpreting Image Representations via Text in ViTs Beyond CLIP
Recent works have explored how individual components of the CLIP-ViT model contribute to the final representation by leveraging the shared image-text representation space of CLIP. These components, such as attention heads and MLPs, have been shown to capture distinct image features like shape, color or texture. However, understanding the role of these components in arbitrary vision transformers (ViTs) is challenging. To this end, we introduce a general framework which can identify the roles of various components in ViTs beyond CLIP. Specifically, we (a) automate the decomposition of the final representation into contributions from different model components, and (b) linearly map these contributions to CLIP space to interpret them via text. Additionally, we introduce a novel scoring function to rank components by their importance with respect to specific features. Applying our framework to various ViT variants (e.g. DeiT, DINO, DINOv2, Swin, MaxViT), we gain insights into the roles of different components concerning particular image features. These insights facilitate applications such as image retrieval using text descriptions or reference images, visualizing token importance heatmaps, and mitigating spurious correlations.
[ "['Sriram Balasubramanian' 'Samyadeep Basu' 'Soheil Feizi']" ]
null
null
2406.01588
null
null
http://arxiv.org/pdf/2406.01588v1
2024-06-03T17:59:30Z
2024-06-03T17:59:30Z
nn2poly: An R Package for Converting Neural Networks into Interpretable Polynomials
The nn2poly package provides the implementation in R of the NN2Poly method to explain and interpret feed-forward neural networks by means of polynomial representations that predict in an equivalent manner as the original network. Through the obtained polynomial coefficients, the effect and importance of each variable and their interactions on the output can be represented. This capability of capturing interactions is a key aspect usually missing from most Explainable Artificial Intelligence (XAI) methods, especially if they rely on expensive computations that can be amplified when used on large neural networks. The package provides integration with the main deep learning framework packages in R (tensorflow and torch), allowing a user-friendly application of the NN2Poly algorithm. Furthermore, nn2poly provides implementation of the required weight constraints to be used during the network training in those same frameworks. Other neural network packages can also be used by including their weights in list format. Polynomials obtained with nn2poly can also be used to predict with new data or be visualized through its own plot method. Simulations are provided exemplifying the usage of the package alongside a comparison with other approaches available in R to interpret neural networks.
[ "['Pablo Morala' 'Jenny Alexandra Cifuentes' 'Rosa E. Lillo' 'Iñaki Ucar']" ]
null
null
2406.01589
null
null
http://arxiv.org/pdf/2406.01589v1
2024-06-03T17:59:33Z
2024-06-03T17:59:33Z
Tilting the Odds at the Lottery: the Interplay of Overparameterisation and Curricula in Neural Networks
A wide range of empirical and theoretical works have shown that overparameterisation can amplify the performance of neural networks. According to the lottery ticket hypothesis, overparameterised networks have an increased chance of containing a sub-network that is well-initialised to solve the task at hand. A more parsimonious approach, inspired by animal learning, consists in guiding the learner towards solving the task by curating the order of the examples, i.e. providing a curriculum. However, this learning strategy seems to be hardly beneficial in deep learning applications. In this work, we undertake an analytical study that connects curriculum learning and overparameterisation. In particular, we investigate their interplay in the online learning setting for a 2-layer network in the XOR-like Gaussian Mixture problem. Our results show that a high degree of overparameterisation -- while simplifying the problem -- can limit the benefit from curricula, providing a theoretical account of the ineffectiveness of curricula in deep learning.
[ "['Stefano Sarao Mannelli' 'Yaraslau Ivashinka' 'Andrew Saxe'\n 'Luca Saglietti']" ]
null
null
2406.01592
null
null
http://arxiv.org/pdf/2406.01592v1
2024-06-03T17:59:43Z
2024-06-03T17:59:43Z
Text-guided Controllable Mesh Refinement for Interactive 3D Modeling
We propose a novel technique for adding geometric details to an input coarse 3D mesh guided by a text prompt. Our method is composed of three stages. First, we generate a single-view RGB image conditioned on the input coarse geometry and the input text prompt. This single-view image generation step allows the user to pre-visualize the result and offers stronger conditioning for subsequent multi-view generation. Second, we use our novel multi-view normal generation architecture to jointly generate six different views of the normal images. The joint view generation reduces inconsistencies and leads to sharper details. Third, we optimize our mesh with respect to all views and generate a fine, detailed geometry as output. The resulting method produces an output within seconds and offers explicit user control over the coarse structure, pose, and desired details of the resulting 3D mesh. Project page: https://text-mesh-refinement.github.io.
[ "['Yun-Chun Chen' 'Selena Ling' 'Zhiqin Chen' 'Vladimir G. Kim'\n 'Matheus Gadelha' 'Alec Jacobson']" ]
null
null
2406.01594
null
null
http://arxiv.org/pdf/2406.01594v1
2024-06-03T17:59:53Z
2024-06-03T17:59:53Z
DiffUHaul: A Training-Free Method for Object Dragging in Images
Text-to-image diffusion models have proven effective for solving many image editing tasks. However, the seemingly straightforward task of seamlessly relocating objects within a scene remains surprisingly challenging. Existing methods addressing this problem often struggle to function reliably in real-world scenarios due to lacking spatial reasoning. In this work, we propose a training-free method, dubbed DiffUHaul, that harnesses the spatial understanding of a localized text-to-image model, for the object dragging task. Blindly manipulating layout inputs of the localized model tends to cause low editing performance due to the intrinsic entanglement of object representation in the model. To this end, we first apply attention masking in each denoising step to make the generation more disentangled across different objects and adopt the self-attention sharing mechanism to preserve the high-level object appearance. Furthermore, we propose a new diffusion anchoring technique: in the early denoising steps, we interpolate the attention features between source and target images to smoothly fuse new layouts with the original appearance; in the later denoising steps, we pass the localized features from the source images to the interpolated images to retain fine-grained object details. To adapt DiffUHaul to real-image editing, we apply a DDPM self-attention bucketing that can better reconstruct real images with the localized model. Finally, we introduce an automated evaluation pipeline for this task and showcase the efficacy of our method. Our results are reinforced through a user preference study.
[ "['Omri Avrahami' 'Rinon Gal' 'Gal Chechik' 'Ohad Fried' 'Dani Lischinski'\n 'Arash Vahdat' 'Weili Nie']" ]
null
null
2406.01599
null
null
http://arxiv.org/pdf/2406.01599v1
2024-04-25T18:22:22Z
2024-04-25T18:22:22Z
Markov Chain Monte Carlo with Gaussian Process Emulation for a 1D Hemodynamics Model of CTEPH
Microvascular disease is a contributor to persistent pulmonary hypertension in those with chronic thromboembolic pulmonary hypertension (CTEPH). The heterogeneous nature of the micro- and macrovascular defects motivates the use of personalized computational models, which can predict flow dynamics within multiple generations of the arterial tree and into the microvasculature. Our study uses computational hemodynamics models and Gaussian processes for rapid, subject-specific calibration using retrospective data from a large animal model of CTEPH. Our subject-specific predictions shed light on microvascular dysfunction and arterial wall shear stress changes in CTEPH.
[ "['Amirreza Kachabi' 'Mitchel J. Colebank' 'Sofia Altieri Correa'\n 'Naomi C. Chesler']" ]
null
null
2406.01601
null
null
http://arxiv.org/pdf/2406.01601v1
2024-05-21T14:42:18Z
2024-05-21T14:42:18Z
Backpropogation-Free Multi-modal On-Device Model Adaptation via Cloud-Device Collaboration
In our increasingly interconnected world, where intelligent devices continually amass copious personalized multi-modal data, a pressing need arises to deliver high-quality, personalized device-aware services. However, this endeavor presents a multifaceted challenge to prevailing artificial intelligence (AI) systems primarily rooted in the cloud. As these systems grapple with shifting data distributions between the cloud and devices, the traditional approach of fine-tuning-based adaptation (FTA) suffers from the following issues: the costly and time-consuming data annotation required by FTA and the looming risk of model overfitting. To surmount these challenges, we introduce a Universal On-Device Multi-modal Model Adaptation Framework, revolutionizing on-device model adaptation by striking a balance between efficiency and effectiveness. The framework features the Fast Domain Adaptor (FDA) hosted in the cloud, providing tailored parameters for the Lightweight Multi-modal Model on devices. To enhance adaptability across multi-modal tasks, the AnchorFrame Distribution Reasoner (ADR) minimizes communication costs. Our contributions, encapsulated in the Cloud-Device Collaboration Multi-modal Parameter Generation (CDC-MMPG) framework, represent a pioneering solution for on-Device Multi-modal Model Adaptation (DMMA). Extensive experiments validate the efficiency and effectiveness of our method, particularly in video question answering and retrieval tasks, driving forward the integration of intelligent devices into our daily lives.
[ "['Wei Ji' 'Li Li' 'Zheqi Lv' 'Wenqiao Zhang' 'Mengze Li' 'Zhen Wan'\n 'Wenqiang Lei' 'Roger Zimmermann']" ]
null
null
2406.01603
null
null
http://arxiv.org/pdf/2406.01603v1
2024-05-24T07:43:00Z
2024-05-24T07:43:00Z
Privacy-preserving recommender system using the data collaboration analysis for distributed datasets
In order to provide high-quality recommendations for users, it is desirable to share and integrate multiple datasets held by different parties. However, when sharing such distributed datasets, we need to protect personal and confidential information contained in the datasets. To this end, we establish a framework for privacy-preserving recommender systems using the data collaboration analysis of distributed datasets. Numerical experiments with two public rating datasets demonstrate that our privacy-preserving method for rating prediction can improve the prediction accuracy for distributed datasets. This study opens up new possibilities for privacy-preserving techniques in recommender systems.
[ "['Tomoya Yanagi' 'Shunnosuke Ikeda' 'Noriyoshi Sukegawa' 'Yuichi Takano']" ]
null
null
2406.01609
null
null
http://arxiv.org/pdf/2406.01609v1
2024-05-28T04:22:28Z
2024-05-28T04:22:28Z
Judgement Citation Retrieval using Contextual Similarity
Traditionally in the domain of legal research, the retrieval of pertinent citations from intricate case descriptions has demanded manual effort and keyword-based search applications that mandate expertise in understanding legal jargon. Legal case descriptions hold pivotal information for legal professionals and researchers, necessitating more efficient and automated approaches. We propose a methodology that combines natural language processing (NLP) and machine learning techniques to enhance the organization and utilization of legal case descriptions. This approach revolves around the creation of textual embeddings with the help of state-of-art embedding models. Our methodology addresses two primary objectives: unsupervised clustering and supervised citation retrieval, both designed to automate the citation extraction process. Although the proposed methodology can be used for any dataset, we employed the Supreme Court of The United States (SCOTUS) dataset, yielding remarkable results. Our methodology achieved an impressive accuracy rate of 90.9%. By automating labor-intensive processes, we pave the way for a more efficient, time-saving, and accessible landscape in legal research, benefiting legal professionals, academics, and researchers.
[ "['Akshat Mohan Dasula' 'Hrushitha Tigulla' 'Preethika Bhukya']" ]
null
null
2406.01611
null
null
http://arxiv.org/pdf/2406.01611v1
2024-05-29T18:19:37Z
2024-05-29T18:19:37Z
System-2 Recommenders: Disentangling Utility and Engagement in Recommendation Systems via Temporal Point-Processes
Recommender systems are an important part of the modern human experience whose influence ranges from the food we eat to the news we read. Yet, there is still debate as to what extent recommendation platforms are aligned with the user goals. A core issue fueling this debate is the challenge of inferring a user utility based on engagement signals such as likes, shares, watch time etc., which are the primary metrics used by platforms to optimize content. This is because users' utility-driven decision-processes (which we refer to as System-2), e.g., reading news that are relevant for them, are often confounded by their impulsive decision-processes (which we refer to as System-1), e.g., spending time on click-bait news. As a result, it is difficult to infer whether an observed engagement is utility-driven or impulse-driven. In this paper we explore a new approach to recommender systems where we infer user utility based on their return probability to the platform rather than engagement signals. Our intuition is that users tend to return to a platform in the long run if it creates utility for them, while pure engagement-driven interactions that do not add utility may affect user return in the short term but will not have a lasting effect. We propose a generative model in which past content interactions impact the arrival rates of users based on a self-exciting Hawkes process. These arrival rates to the platform are a combination of both System-1 and System-2 decision processes. The System-2 arrival intensity depends on the utility and has a long-lasting effect, while the System-1 intensity depends on the instantaneous gratification and tends to vanish rapidly. We show analytically that given samples it is possible to disentangle System-1 and System-2 and allow content optimization based on user utility. We conduct experiments on synthetic data to demonstrate the effectiveness of our approach.
[ "['Arpit Agarwal' 'Nicolas Usunier' 'Alessandro Lazaric'\n 'Maximilian Nickel']" ]
null
null
2406.01617
null
null
http://arxiv.org/pdf/2406.01617v1
2024-05-31T10:57:25Z
2024-05-31T10:57:25Z
LightCPPgen: An Explainable Machine Learning Pipeline for Rational Design of Cell Penetrating Peptides
Cell-penetrating peptides (CPPs) are powerful vectors for the intracellular delivery of a diverse array of therapeutic molecules. Despite their potential, the rational design of CPPs remains a challenging task that often requires extensive experimental efforts and iterations. In this study, we introduce an innovative approach for the de novo design of CPPs, leveraging the strengths of machine learning (ML) and optimization algorithms. Our strategy, named LightCPPgen, integrates a LightGBM-based predictive model with a genetic algorithm (GA), enabling the systematic generation and optimization of CPP sequences. At the core of our methodology is the development of an accurate, efficient, and interpretable predictive model, which utilizes 20 explainable features to shed light on the critical factors influencing CPP translocation capacity. The CPP predictive model works synergistically with an optimization algorithm, which is tuned to enhance computational efficiency while maintaining optimization performance. The GA solutions specifically target the candidate sequences' penetrability score, while trying to maximize similarity with the original non-penetrating peptide in order to retain its original biological and physicochemical properties. By prioritizing the synthesis of only the most promising CPP candidates, LightCPPgen can drastically reduce the time and cost associated with wet lab experiments. In summary, our research makes a substantial contribution to the field of CPP design, offering a robust framework that combines ML and optimization techniques to facilitate the rational design of penetrating peptides, by enhancing the explainability and interpretability of the design process.
[ "['Gabriele Maroni' 'Filip Stojceski' 'Lorenzo Pallante' 'Marco A. Deriu'\n 'Dario Piga' 'Gianvito Grasso']" ]
null
null
2406.01622
null
null
http://arxiv.org/pdf/2406.01622v1
2024-05-31T21:39:51Z
2024-05-31T21:39:51Z
Sifting through the Noise: A Survey of Diffusion Probabilistic Models and Their Applications to Biomolecules
Diffusion probabilistic models have made their way into a number of high-profile applications since their inception. In particular, there has been a wave of research into using diffusion models in the prediction and design of biomolecular structures and sequences. Their growing ubiquity makes it imperative for researchers in these fields to understand them. This paper serves as a general overview for the theory behind these models and the current state of research. We first introduce diffusion models and discuss common motifs used when applying them to biomolecules. We then present the significant outcomes achieved through the application of these models in generative and predictive tasks. This survey aims to provide readers with a comprehensive understanding of the increasingly critical role of diffusion models.
[ "['Trevor Norton' 'Debswapna Bhattacharya']" ]
null
null
2406.01624
null
null
http://arxiv.org/abs/2406.01624v2
2024-06-05T22:21:55Z
2024-06-01T00:39:55Z
Unveiling Hidden Factors: Explainable AI for Feature Boosting in Speech Emotion Recognition
Speech emotion recognition (SER) has gained significant attention due to its applications in several fields, such as mental health, education, and human-computer interaction. However, the accuracy of SER systems is hindered by high-dimensional feature sets that may contain irrelevant and redundant information. To overcome this challenge, this study proposes an iterative feature boosting approach for SER that emphasizes feature relevance and explainability to enhance machine learning model performance. Our approach involves meticulous feature selection and analysis to build efficient SER systems. In addressing our main problem through model explainability, we employ a feature evaluation loop with Shapley values to iteratively refine feature sets. This process strikes a balance between model performance and transparency, which enables a comprehensive understanding of the model's predictions. The proposed approach offers several advantages, including the identification and removal of irrelevant and redundant features, leading to a more effective model. Additionally, it promotes explainability, facilitating comprehension of the model's predictions and the identification of crucial features for emotion determination. The effectiveness of the proposed method is validated on the SER benchmarks of the Toronto emotional speech set (TESS), Berlin Database of Emotional Speech (EMO-DB), Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS), and Surrey Audio-Visual Expressed Emotion (SAVEE) datasets, outperforming state-of-the-art methods. To the best of our knowledge, this is the first work to incorporate model explainability into an SER framework. The source code of this paper is publicly available at https://github.com/alaaNfissi/Unveiling-Hidden-Factors-Explainable-AI-for-Feature-Boosting-in-Speech-Emotion-Recognition.
[ "['Alaa Nfissi' 'Wassim Bouachir' 'Nizar Bouguila' 'Brian Mishara']" ]
null
null
2406.01627
null
null
http://arxiv.org/pdf/2406.01627v2
2024-06-05T10:51:22Z
2024-06-01T08:01:05Z
GenBench: A Benchmarking Suite for Systematic Evaluation of Genomic Foundation Models
The Genomic Foundation Model (GFM) paradigm is expected to facilitate the extraction of generalizable representations from massive genomic data, thereby enabling their application across a spectrum of downstream applications. Despite advancements, the lack of an evaluation framework makes it difficult to ensure equitable assessment, owing to differences in experimental settings, model intricacy, benchmark datasets, and reproducibility challenges. In the absence of standardization, comparative analyses risk becoming biased and unreliable. To surmount this impasse, we introduce GenBench, a comprehensive benchmarking suite specifically tailored for evaluating the efficacy of Genomic Foundation Models. GenBench offers a modular and expandable framework that encapsulates a variety of state-of-the-art methodologies. It provides systematic evaluations on datasets spanning diverse biological domains, with a particular emphasis on both short-range and long-range genomic tasks, including the three most important classes of DNA tasks covering coding regions, non-coding regions, and genome structure. Moreover, we provide a nuanced analysis of the interplay between model architecture and dataset characteristics on task-specific performance. Our findings reveal an interesting observation: independent of the number of parameters, the discernible difference in preference between the attention-based and convolution-based models on short- and long-range tasks may provide insights into the future design of GFM.
[ "['Zicheng Liu' 'Jiahui Li' 'Siyuan Li' 'Zelin Zang' 'Cheng Tan'\n 'Yufei Huang' 'Yajing Bai' 'Stan Z. Li']" ]
null
null
2406.01630
null
null
http://arxiv.org/pdf/2406.01630v1
2024-06-01T11:36:29Z
2024-06-01T11:36:29Z
Equivariant amortized inference of poses for cryo-EM
Cryo-EM is a vital technique for determining 3D structure of biological molecules such as proteins and viruses. The cryo-EM reconstruction problem is challenging due to the high noise levels, the missing poses of particles, and the computational demands of processing large datasets. A promising solution to these challenges lies in the use of amortized inference methods, which have shown particular efficacy in pose estimation for large datasets. However, these methods also encounter convergence issues, often necessitating sophisticated initialization strategies or engineered solutions for effective convergence. Building upon the existing cryoAI pipeline, which employs a symmetric loss function to address convergence problems, this work explores the emergence and persistence of these issues within the pipeline. Additionally, we explore the impact of equivariant amortized inference on enhancing convergence. Our investigations reveal that, when applied to simulated data, a pipeline incorporating an equivariant encoder not only converges faster and more frequently than the standard approach but also demonstrates superior performance in terms of pose estimation accuracy and the resolution of the reconstructed volume. Notably, $D_4$-equivariant encoders make the symmetric loss superfluous and, therefore, allow for a more efficient reconstruction pipeline.
[ "['Larissa de Ruijter' 'Gabriele Cesa']" ]
null
null
2406.01631
null
null
http://arxiv.org/pdf/2406.01631v1
2024-06-01T11:56:08Z
2024-06-01T11:56:08Z
An LLM-based Recommender System Environment
Reinforcement learning (RL) has gained popularity in the realm of recommender systems due to its ability to optimize long-term rewards and guide users in discovering relevant content. However, the successful implementation of RL in recommender systems is challenging because of several factors, including the limited availability of online data for training on-policy methods. This scarcity requires expensive human interaction for online model training. Furthermore, the development of effective evaluation frameworks that accurately reflect the quality of models remains a fundamental challenge in recommender systems. To address these challenges, we propose a comprehensive framework for synthetic environments that simulate human behavior by harnessing the capabilities of large language models (LLMs). We complement our framework with in-depth ablation studies and demonstrate its effectiveness with experiments on movie and book recommendations. By utilizing LLMs as synthetic users, this work introduces a modular and novel framework for training RL-based recommender systems. The software, including the RL environment, is publicly available.
[ "['Nathan Corecco' 'Giorgio Piatti' 'Luca A. Lanzendörfer'\n 'Flint Xiaofeng Fan' 'Roger Wattenhofer']" ]
null
null
2406.01633
null
null
http://arxiv.org/pdf/2406.01633v1
2024-06-01T15:54:45Z
2024-06-01T15:54:45Z
On Overcoming Miscalibrated Conversational Priors in LLM-based Chatbots
We explore the use of Large Language Model (LLM-based) chatbots to power recommender systems. We observe that the chatbots respond poorly when they encounter under-specified requests (e.g., they make incorrect assumptions, hedge with a long response, or refuse to answer). We conjecture that such miscalibrated response tendencies (i.e., conversational priors) can be attributed to LLM fine-tuning using annotators -- single-turn annotations may not capture multi-turn conversation utility, and the annotators' preferences may not even be representative of users interacting with a recommender system. We first analyze public LLM chat logs to conclude that query under-specification is common. Next, we study synthetic recommendation problems with configurable latent item utilities and frame them as Partially Observed Decision Processes (PODP). We find that pre-trained LLMs can be sub-optimal for PODPs and derive better policies that clarify under-specified queries when appropriate. Then, we re-calibrate LLMs by prompting them with learned control messages to approximate the improved policy. Finally, we show empirically that our lightweight learning approach effectively uses logged conversation data to re-calibrate the response strategies of LLM-based chatbots for recommendation tasks.
[ "['Christine Herlihy' 'Jennifer Neville' 'Tobias Schnabel'\n 'Adith Swaminathan']" ]
null
null
2406.01638
null
null
http://arxiv.org/pdf/2406.01638v3
2024-06-14T01:39:29Z
2024-06-03T00:27:29Z
TimeCMA: Towards LLM-Empowered Time Series Forecasting via Cross-Modality Alignment
The widespread adoption of scalable mobile sensing has led to large amounts of time series data for real-world applications. A fundamental application is multivariate time series forecasting (MTSF), which aims to predict future time series values based on historical observations. Existing MTSF methods suffer from limited parameterization and small-scale training data. Recently, Large language models (LLMs) have been introduced in time series, which achieve promising forecasting performance but incur heavy computational costs. To solve these challenges, we propose TimeCMA, an LLM-empowered framework for time series forecasting with cross-modality alignment. We design a dual-modality encoding module with two branches, where the time series encoding branch extracts relatively low-quality yet pure embeddings of time series through an inverted Transformer. In addition, the LLM-empowered encoding branch wraps the same time series as prompts to obtain high-quality yet entangled prompt embeddings via a Pre-trained LLM. Then, we design a cross-modality alignment module to retrieve high-quality and pure time series embeddings from the prompt embeddings. Moreover, we develop a time series forecasting module to decode the aligned embeddings while capturing dependencies among multiple variables for forecasting. Notably, we tailor the prompt to encode sufficient temporal information into a last token and design the last token embedding storage to reduce computational costs. Extensive experiments on real data offer insight into the accuracy and efficiency of the proposed framework.
[ "['Chenxi Liu' 'Qianxiong Xu' 'Hao Miao' 'Sun Yang' 'Lingzheng Zhang'\n 'Cheng Long' 'Ziyue Li' 'Rui Zhao']" ]
null
null
2406.01645
null
null
http://arxiv.org/pdf/2406.01645v1
2024-06-03T12:24:24Z
2024-06-03T12:24:24Z
FNP: Fourier Neural Processes for Arbitrary-Resolution Data Assimilation
Data assimilation is a vital component in modern global medium-range weather forecasting systems to obtain the best estimation of the atmospheric state by combining the short-term forecast and observations. Recently, AI-based data assimilation approaches have attracted increasing attention for their significant advantages over traditional techniques in terms of computational consumption. However, existing AI-based data assimilation methods can only handle observations with a specific resolution, lacking the compatibility and generalization ability to assimilate observations with other resolutions. Considering that complex real-world observations often have different resolutions, we propose the \textit{\textbf{Fourier Neural Processes}} (FNP) for \textit{arbitrary-resolution data assimilation} in this paper. Leveraging the efficiency of the designed modules and flexible structure of neural processes, FNP achieves state-of-the-art results in assimilating observations with varying resolutions, and also exhibits increasing advantages over the counterparts as the resolution and the amount of observations increase. Moreover, our FNP trained on a fixed resolution can directly handle the assimilation of observations with out-of-distribution resolutions and the observational information reconstruction task without additional fine-tuning, demonstrating its excellent generalization ability across data resolutions as well as across tasks.
[ "['Kun Chen' 'Tao Chen' 'Peng Ye' 'Hao Chen' 'Kang Chen' 'Tao Han'\n 'Wanli Ouyang' 'Lei Bai']" ]
null
null
2406.01646
null
null
http://arxiv.org/pdf/2406.01646v1
2024-06-03T12:33:27Z
2024-06-03T12:33:27Z
iKAN: Global Incremental Learning with KAN for Human Activity Recognition Across Heterogeneous Datasets
This work proposes an incremental learning (IL) framework for wearable sensor human activity recognition (HAR) that tackles two challenges simultaneously: catastrophic forgetting and non-uniform inputs. The scalable framework, iKAN, pioneers IL with Kolmogorov-Arnold Networks (KAN) to replace multi-layer perceptrons as the classifier, leveraging the local plasticity and global stability of splines. To adapt KAN for HAR, iKAN uses task-specific feature branches and a feature redistribution layer. Unlike existing IL methods that primarily adjust the output dimension or the number of classifier nodes to adapt to new tasks, iKAN focuses on expanding the feature extraction branches to accommodate new inputs from different sensor modalities while maintaining consistent dimensions and the number of classifier outputs. Continual learning across six public HAR datasets demonstrated the iKAN framework's incremental learning performance, with a last performance of 84.9% (weighted F1 score) and an average incremental performance of 81.34%, significantly outperforming two existing incremental learning methods, EWC (51.42%) and experience replay (59.92%).
[ "['Mengxi Liu' 'Sizhen Bian' 'Bo Zhou' 'Paul Lukowicz']" ]
null
null
2406.01647
null
null
http://arxiv.org/pdf/2406.01647v1
2024-06-03T12:58:29Z
2024-06-03T12:58:29Z
An Analysis under a Unified Formulation of Learning Algorithms with Output Constraints
Neural networks (NN) perform well in diverse tasks, but sometimes produce nonsensical results to humans. Most NN models "solely" learn from (input, output) pairs, occasionally conflicting with human knowledge. Many studies indicate that injecting human knowledge by reducing output constraints during training can improve model performance and reduce constraint violations. While there have been several attempts to compare different existing algorithms under the same programming framework, there has been no previous work that categorizes learning algorithms with output constraints in a unified manner. Our contributions are as follows: (1) We categorize the previous studies based on three axes: type of constraint loss used (e.g. probabilistic soft logic, REINFORCE), exploration strategy of constraint-violating examples, and integration mechanism of learning signals from main task and constraint. (2) We propose new algorithms to integrate the information of main task and constraint injection, inspired by continual-learning algorithms. (3) Furthermore, we propose the $H\beta$-score as a metric for considering the main task metric and constraint violation simultaneously. To provide a thorough analysis, we examine all the algorithms on three NLP tasks: natural language inference (NLI), synthetic transduction examples (STE), and semantic role labeling (SRL). We explore and reveal the key factors of various algorithms associated with achieving high $H\beta$-scores.
[ "['Mooho Song' 'Jay-Yoon Lee']" ]
null
null
2406.01649
null
null
http://arxiv.org/pdf/2406.01649v1
2024-06-03T14:27:46Z
2024-06-03T14:27:46Z
CoLa-DCE -- Concept-guided Latent Diffusion Counterfactual Explanations
Recent advancements in generative AI have introduced novel prospects and practical implementations. Diffusion models in particular show their strength in generating diverse and, at the same time, realistic features, positioning them well for generating counterfactual explanations for computer vision models. Answering "what if" questions of what needs to change to make an image classifier change its prediction, counterfactual explanations align well with human understanding and consequently help in making model behavior more comprehensible. Current methods succeed in generating authentic counterfactuals, but lack transparency as feature changes are not directly perceivable. To address this limitation, we introduce Concept-guided Latent Diffusion Counterfactual Explanations (CoLa-DCE). CoLa-DCE generates concept-guided counterfactuals for any classifier with a high degree of control regarding concept selection and spatial conditioning. The counterfactuals achieve increased granularity through minimal feature changes. The reference feature visualization ensures better comprehensibility, while the feature localization provides increased transparency of "where" changed "what". We demonstrate the advantages of our approach in minimality and comprehensibility across multiple image classification models and datasets and provide insights into how our CoLa-DCE explanations help comprehend model errors like misclassification cases.
[ "['Franz Motzkus' 'Christian Hellert' 'Ute Schmid']" ]
null
null
2406.01650
null
null
http://arxiv.org/pdf/2406.01650v1
2024-06-03T14:43:54Z
2024-06-03T14:43:54Z
TAGMol: Target-Aware Gradient-guided Molecule Generation
3D generative models have shown significant promise in structure-based drug design (SBDD), particularly in discovering ligands tailored to specific target binding sites. Existing algorithms often focus primarily on ligand-target binding, characterized by binding affinity. Moreover, models trained solely on the target-ligand distribution may fall short in addressing the broader objectives of drug discovery, such as the development of novel ligands with desired properties like drug-likeness and synthesizability, underscoring the multifaceted nature of the drug design process. To overcome these challenges, we decouple the problem into molecular generation and property prediction. The latter synergistically guides the diffusion sampling process, facilitating guided diffusion and resulting in the creation of meaningful molecules with the desired properties. We call this guided molecular generation process TAGMol. Through experiments on benchmark datasets, TAGMol demonstrates superior performance compared to state-of-the-art baselines, achieving a 22% improvement in average Vina Score and yielding favorable outcomes in essential auxiliary properties. This establishes TAGMol as a comprehensive framework for drug generation.
[ "['Vineeth Dorna' 'D. Subhalingam' 'Keshav Kolluru' 'Shreshth Tuli'\n 'Mrityunjay Singh' 'Saurabh Singal' 'N. M. Anoop Krishnan' 'Sayan Ranu']" ]
null
null
2406.01651
null
null
http://arxiv.org/pdf/2406.01651v1
2024-06-03T14:48:54Z
2024-06-03T14:48:54Z
FusionDTI: Fine-grained Binding Discovery with Token-level Fusion for Drug-Target Interaction
Predicting drug-target interaction (DTI) is critical in the drug discovery process. Despite remarkable advances in recent DTI models through the integration of representations from diverse drug and target encoders, such models often struggle to capture the fine-grained interactions between drugs and proteins, i.e., the binding of specific drug atoms (or substructures) to key amino acids of proteins, which is crucial for understanding the binding mechanisms and optimising drug design. To address this issue, this paper introduces a novel model, called FusionDTI, which uses a token-level Fusion module to effectively learn fine-grained information for Drug-Target Interaction. In particular, our FusionDTI model uses the SELFIES representation of drugs to mitigate sequence fragment invalidation and incorporates the structure-aware (SA) vocabulary of target proteins to address the limitation of amino acid sequences in structural information, additionally leveraging pre-trained language models extensively trained on large-scale biomedical datasets as encoders to capture the complex information of drugs and targets. Experiments on three well-known benchmark datasets show that our proposed FusionDTI model achieves the best performance in DTI prediction compared with seven existing state-of-the-art baselines. Furthermore, our case study indicates that FusionDTI could highlight the potential binding sites, enhancing the explainability of the DTI prediction.
[ "['Zhaohan Meng' 'Zaiqiao Meng' 'Iadh Ounis']" ]
null
null
2406.01652
null
null
http://arxiv.org/pdf/2406.01652v1
2024-06-03T15:47:34Z
2024-06-03T15:47:34Z
Distributional bias compromises leave-one-out cross-validation
Cross-validation is a common method for estimating the predictive performance of machine learning models. In a data-scarce regime, where one typically wishes to maximize the number of instances used for training the model, an approach called "leave-one-out cross-validation" is often used. In this design, a separate model is built for predicting each data instance after training on all other instances. Since this results in a single test data point available per model trained, predictions are aggregated across the entire dataset to calculate common rank-based performance metrics such as the area under the receiver operating characteristic or precision-recall curves. In this work, we demonstrate that this approach creates a negative correlation between the average label of each training fold and the label of its corresponding test instance, a phenomenon that we term distributional bias. As machine learning models tend to regress to the mean of their training data, this distributional bias tends to negatively impact performance evaluation and hyperparameter optimization. We show that this effect generalizes to leave-P-out cross-validation and persists across a wide range of modeling and evaluation approaches, and that it can lead to a bias against stronger regularization. To address this, we propose a generalizable rebalanced cross-validation approach that corrects for distributional bias. We demonstrate that our approach improves cross-validation performance evaluation in synthetic simulations and in several published leave-one-out analyses.
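A minimal sketch (my own illustration, not the paper's code) of the distributional bias described above: under leave-one-out splits, the mean label of each training fold is a decreasing affine function of the held-out label, so for binary labels the two are perfectly anti-correlated.

```python
import numpy as np

# Under leave-one-out CV, each training fold's mean label is (S - y_i)/(n-1),
# where S is the total label sum: leaving out a positive lowers the fold mean,
# producing a negative correlation between fold mean and held-out label.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=200).astype(float)   # binary labels

fold_means = np.array([np.delete(y, i).mean() for i in range(len(y))])
r = np.corrcoef(fold_means, y)[0, 1]
print(f"correlation(training-fold mean, held-out label) = {r:.3f}")  # ~ -1.0
```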
[ "['George I. Austin' \"Itsik Pe'er\" 'Tal Korem']" ]
null
null
2406.01653
null
null
http://arxiv.org/pdf/2406.01653v1
2024-06-03T16:26:24Z
2024-06-03T16:26:24Z
An efficient Wasserstein-distance approach for reconstructing jump-diffusion processes using parameterized neural networks
We analyze the Wasserstein distance ($W$-distance) between two probability distributions associated with two multidimensional jump-diffusion processes. Specifically, we analyze a temporally decoupled squared $W_2$-distance, which provides both upper and lower bounds associated with the discrepancies in the drift, diffusion, and jump amplitude functions between the two jump-diffusion processes. Then, we propose a temporally decoupled squared $W_2$-distance method for efficiently reconstructing unknown jump-diffusion processes from data using parameterized neural networks. We further show its performance can be enhanced by utilizing prior information on the drift function of the jump-diffusion process. The effectiveness of our proposed reconstruction method is demonstrated across several examples and applications.
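As a hedged illustration of the temporally decoupled construction, the sketch below compares two sets of one-dimensional trajectories by summing, over time steps, the closed-form squared $W_2$ distance between the empirical marginals (sorted-sample matching). The paper's multidimensional bounds and neural-network reconstruction are not reproduced here; this is only the one-dimensional special case.

```python
import numpy as np

def decoupled_w2_squared(X, Y):
    """X, Y: arrays of shape (n_paths, n_times) with equal n_paths.
    At each time step, the 1-D squared W2 between empirical marginals is the
    mean squared difference of sorted samples; results are summed over time."""
    Xs, Ys = np.sort(X, axis=0), np.sort(Y, axis=0)
    per_time = np.mean((Xs - Ys) ** 2, axis=0)
    return per_time.sum()

rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, size=(500, 20))   # trajectories from one process
Y = rng.normal(0.5, 1.0, size=(500, 20))   # trajectories from another
print(decoupled_w2_squared(X, Y))          # roughly 20 * 0.25 = 5 here
```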
[ "['Mingtao Xia' 'Xiangting Li' 'Qijing Shen' 'Tom Chou']" ]
null
null
2406.01655
null
null
http://arxiv.org/pdf/2406.01655v1
2024-06-03T17:27:40Z
2024-06-03T17:27:40Z
TinySV: Speaker Verification in TinyML with On-device Learning
TinyML is a novel area of machine learning that gained huge momentum in the last few years thanks to the ability to execute machine learning algorithms on tiny devices (such as Internet-of-Things or embedded systems). Interestingly, research in this area has focused on the efficient execution of the inference phase of TinyML models on tiny devices, while very few solutions for on-device learning of TinyML models are available in the literature due to the significant overhead introduced by the learning algorithms. The aim of this paper is to introduce a new type of adaptive TinyML solution that can be used in tasks, such as the presented \textit{Tiny Speaker Verification} (TinySV), that need to be tackled with an on-device learning algorithm. Achieving this goal required (i) reducing the memory and computational demand of TinyML learning algorithms, and (ii) designing a TinyML learning algorithm operating with few and possibly unlabelled training data. The proposed TinySV solution relies on a two-layer hierarchical TinyML architecture comprising a Keyword Spotting module and an Adaptive Speaker Verification module. We evaluated the effectiveness and efficiency of the proposed TinySV solution on a dataset collected expressly for the task and tested the proposed solution on a real-world IoT device (Infineon PSoC 62S2 Wi-Fi BT Pioneer Kit).
[ "['Massimo Pavan' 'Gioele Mombelli' 'Francesco Sinacori' 'Manuel Roveri']" ]
null
null
2406.01660
null
null
http://arxiv.org/pdf/2406.01660v3
2024-06-07T17:25:12Z
2024-06-03T17:53:25Z
Self-Improving Robust Preference Optimization
Both online and offline RLHF methods, such as PPO and DPO, have been extremely successful in aligning AI with human preferences. Despite their success, the existing methods suffer from a fundamental problem: their optimal solution is highly task-dependent (i.e., not robust to out-of-distribution (OOD) tasks). Here we address this challenge by proposing Self-Improving Robust Preference Optimization (SRPO), a practical and mathematically principled offline RLHF framework that is completely robust to changes in the task. The key idea of SRPO is to cast the problem of learning from human preferences as a self-improvement process, which can be mathematically expressed in terms of a min-max objective that aims at joint optimization of the self-improvement policy and the generative policy in an adversarial fashion. The solution to this optimization problem is independent of the training task and thus robust to changes in the task. We then show that this objective can be re-expressed in the form of a non-adversarial offline loss which can be optimized using standard supervised optimization techniques at scale without any need for a reward model or online inference. We show the effectiveness of SRPO in terms of AI Win-Rate (WR) against human (GOLD) completions. In particular, when SRPO is evaluated on the OOD XSUM dataset, it outperforms the celebrated DPO by a clear margin of 15% after 5 self-revisions, achieving a WR of 90%.
[ "['Eugene Choi' 'Arash Ahmadian' 'Matthieu Geist' 'Oilvier Pietquin'\n 'Mohammad Gheshlaghi Azar']" ]
null
null
2406.01661
null
null
http://arxiv.org/pdf/2406.01661v1
2024-06-03T17:55:02Z
2024-06-03T17:55:02Z
A Diffusion Model Framework for Unsupervised Neural Combinatorial Optimization
Learning to sample from intractable distributions over discrete sets without relying on corresponding training data is a central problem in a wide range of fields, including Combinatorial Optimization. Currently, popular deep learning-based approaches rely primarily on generative models that yield exact sample likelihoods. This work introduces a method that lifts this restriction and opens the possibility to employ highly expressive latent variable models like diffusion models. Our approach is conceptually based on a loss that upper bounds the reverse Kullback-Leibler divergence and evades the requirement of exact sample likelihoods. We experimentally validate our approach in data-free Combinatorial Optimization and demonstrate that our method achieves a new state-of-the-art on a wide range of benchmark problems.
[ "['Sebastian Sanokowski' 'Sepp Hochreiter' 'Sebastian Lehner']" ]
null
null
2406.01663
null
null
http://arxiv.org/pdf/2406.01663v1
2024-06-03T18:00:00Z
2024-06-03T18:00:00Z
An efficient solution to Hidden Markov Models on trees with coupled branches
Hidden Markov Models (HMMs) are powerful tools for modeling sequential data, where the underlying states evolve in a stochastic manner and are only indirectly observable. Traditional HMM approaches are well-established for linear sequences, and have been extended to other structures such as trees. In this paper, we extend the framework of HMMs on trees to address scenarios where the tree-like structure of the data includes coupled branches -- a common feature in biological systems where entities within the same lineage exhibit dependent characteristics. We develop a dynamic programming algorithm that efficiently solves the likelihood, decoding, and parameter learning problems for tree-based HMMs with coupled branches. Our approach scales polynomially with the number of states and nodes, making it computationally feasible for a wide range of applications and does not suffer from the underflow problem. We demonstrate our algorithm by applying it to simulated data and propose self-consistency checks for validating the assumptions of the model used for inference. This work not only advances the theoretical understanding of HMMs on trees but also provides a practical tool for analyzing complex biological data where dependencies between branches cannot be ignored.
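For context, the standard leaf-to-root recursion for an HMM on a tree with independent branches fits in a few lines. The sketch below is that baseline, with names of my own choosing; it is not the paper's coupled-branch algorithm, which changes how sibling branches are combined.

```python
import numpy as np

def tree_hmm_loglik(children, obs, pi, A, B, root=0):
    """children: dict node -> list of child nodes; obs: dict node -> symbol index;
    pi: (K,) root prior; A: (K, K) parent->child transitions; B: (K, M) emissions."""
    def beta(v):
        # beta(v)[k] = P(observations in the subtree of v | state of v = k)
        b = B[:, obs[v]].copy()
        for c in children.get(v, []):
            b *= A @ beta(c)        # sum over the child's state, independently per branch
        return b

    return float(np.log(pi @ beta(root)))

# toy example: root node 0 with two leaf children
children = {0: [1, 2]}
obs = {0: 0, 1: 1, 2: 1}
pi = np.array([0.6, 0.4])
A = np.array([[0.9, 0.1], [0.2, 0.8]])
B = np.array([[0.7, 0.3], [0.1, 0.9]])
print(tree_hmm_loglik(children, obs, pi, A, B))
```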
[ "['Farzan Vafa' 'Sahand Hormoz']" ]
null
null
2406.01698
null
null
http://arxiv.org/pdf/2406.01698v1
2024-06-03T18:00:50Z
2024-06-03T18:00:50Z
Demystifying Platform Requirements for Diverse LLM Inference Use Cases
Large language models (LLMs) have shown remarkable performance across a wide range of applications, often outperforming human experts. However, deploying these parameter-heavy models efficiently for diverse inference use cases requires carefully designed hardware platforms with ample computing, memory, and network resources. With LLM deployment scenarios and models evolving at breakneck speed, the hardware requirements to meet SLOs remain an open research question. In this work, we present an analytical tool, GenZ, to study the relationship between LLM inference performance and various platform design parameters. Our analysis provides insights into configuring platforms for different LLM workloads and use cases. We quantify the platform requirements to support SOTA LLMs like LLaMA and GPT-4 under diverse serving settings. Furthermore, we project the hardware capabilities needed to enable future LLMs potentially exceeding hundreds of trillions of parameters. The trends and insights derived from GenZ can guide AI engineers deploying LLMs as well as computer architects designing next-generation hardware accelerators and platforms. Ultimately, this work sheds light on the platform design considerations for unlocking the full potential of large language models across a spectrum of applications. The source code is available at https://github.com/abhibambhaniya/GenZ-LLM-Analyzer .
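A back-of-the-envelope sketch of the kind of platform analysis described above (this is not the GenZ tool itself, and the accelerator numbers are hypothetical): per-token decode latency of a dense LLM is lower-bounded by the larger of its compute-roofline and memory-bandwidth-roofline terms.

```python
def decode_latency_per_token(n_params, peak_tflops, mem_bw_gbs, bytes_per_param=2):
    """Roofline lower bound on decode latency per generated token.
    Assumes ~2 FLOPs per parameter per token and fp16/bf16 weights (2 bytes each)."""
    flops = 2.0 * n_params
    t_compute = flops / (peak_tflops * 1e12)                     # compute-bound time
    t_memory = (n_params * bytes_per_param) / (mem_bw_gbs * 1e9) # weight-streaming time
    return max(t_compute, t_memory)                              # whichever roofline binds

# e.g. a 70B-parameter model on a hypothetical 300 TFLOP/s, 2 TB/s accelerator
t = decode_latency_per_token(70e9, peak_tflops=300, mem_bw_gbs=2000)
print(f"~{t * 1e3:.1f} ms/token lower bound")   # memory-bound in this example (~70 ms)
```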
[ "['Abhimanyu Bambhaniya' 'Ritik Raj' 'Geonhwa Jeong' 'Souvik Kundu'\n 'Sudarshan Srinivasan' 'Midhilesh Elavazhagan' 'Madhu Kumar'\n 'Tushar Krishna']" ]
null
null
2406.01708
null
null
http://arxiv.org/pdf/2406.01708v1
2024-06-03T18:04:37Z
2024-06-03T18:04:37Z
Model for Peanuts: Hijacking ML Models without Training Access is Possible
The massive deployment of Machine Learning (ML) models has been accompanied by the emergence of several attacks that threaten their trustworthiness and raise ethical and societal concerns such as invasion of privacy, discrimination risks, and lack of accountability. Model hijacking is one of these attacks, where the adversary aims to hijack a victim model to execute a different task than its original one. Model hijacking can cause accountability and security risks since a hijacked model owner can be framed for having their model offering illegal or unethical services. Prior state-of-the-art works consider model hijacking as a training time attack, whereby an adversary requires access to the ML model training to execute their attack. In this paper, we consider a stronger threat model where the attacker has no access to the training phase of the victim model. Our intuition is that ML models, typically over-parameterized, might (unintentionally) learn more than the intended task for which they are trained. We propose a simple approach for model hijacking at inference time named SnatchML to classify unknown input samples using distance measures in the latent space of the victim model to previously known samples associated with the hijacking task classes. SnatchML empirically shows that benign pre-trained models can execute tasks that are semantically related to the initial task. Surprisingly, this can be true even for hijacking tasks unrelated to the original task. We also explore different methods to mitigate this risk. We first propose a novel approach we call meta-unlearning, designed to help the model unlearn a potentially malicious task while training on the original task dataset. We also provide insights on over-parameterization as one possible inherent factor that makes model hijacking easier, and we accordingly propose a compression-based countermeasure against this attack.
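A hedged sketch of the inference-time idea described above: classify samples for a new (hijacking) task by comparing latent features against the centroids of a few known examples per hijacking class. Here the "latent features" are synthetic stand-ins for the victim model's penultimate-layer activations; the real attack would extract them from the victim model.

```python
import numpy as np

def fit_centroids(feats, labels):
    """One centroid per hijacking-task class from a few known labeled examples."""
    classes = np.unique(labels)
    return classes, np.stack([feats[labels == c].mean(axis=0) for c in classes])

def predict(feats, classes, centroids):
    """Assign each query to the nearest class centroid in latent space."""
    d = np.linalg.norm(feats[:, None, :] - centroids[None, :, :], axis=-1)
    return classes[d.argmin(axis=1)]

# usage with synthetic "latent" features standing in for victim-model activations
rng = np.random.default_rng(0)
support = rng.normal(size=(40, 64)); support_y = rng.integers(0, 4, size=40)
queries = rng.normal(size=(10, 64))
classes, cents = fit_centroids(support, support_y)
print(predict(queries, classes, cents))
```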
[ "['Mahmoud Ghorbel' 'Halima Bouzidi' 'Ioan Marius Bilasco' 'Ihsen Alouani']" ]
null
null
2406.01727
null
null
http://arxiv.org/pdf/2406.01727v1
2024-06-03T18:39:27Z
2024-06-03T18:39:27Z
Federated Learning-based Collaborative Wideband Spectrum Sensing and Scheduling for UAVs in UTM Systems
In this paper, we propose a data-driven framework for collaborative wideband spectrum sensing and scheduling for networked unmanned aerial vehicles (UAVs), which act as the secondary users (SUs) to opportunistically utilize detected "spectrum holes". Our overall framework consists of three main stages. Firstly, in the model training stage, we explore dataset generation in a multi-cell environment and the training of a machine learning (ML) model using the federated learning (FL) architecture. Unlike the existing studies on FL for wireless that presume datasets are readily available for training, we propose a novel architecture that directly integrates wireless dataset generation, which involves capturing I/Q samples from over-the-air signals in a multi-cell environment, into the FL training process. Secondly, in the collaborative spectrum inference stage, we propose a collaborative spectrum fusion strategy that is compatible with the unmanned aircraft system traffic management (UTM) ecosystem. Finally, in the spectrum scheduling stage, we leverage reinforcement learning (RL) solutions to dynamically allocate the detected spectrum holes to the secondary users. To evaluate the proposed methods, we establish a comprehensive simulation framework that generates a near-realistic synthetic dataset using the MATLAB LTE toolbox by incorporating base-station (BS) locations in a chosen area of interest, performing ray-tracing, and emulating the primary users' channel usage in terms of I/Q samples. This evaluation methodology provides a flexible framework to generate large spectrum datasets that could be used for developing ML/AI-based spectrum management solutions for aerial devices.
[ "['Sravan Reddy Chintareddy' 'Keenan Roach' 'Kenny Cheung'\n 'Morteza Hashemi']" ]
null
null
2406.01733
null
null
http://arxiv.org/pdf/2406.01733v1
2024-06-03T18:49:57Z
2024-06-03T18:49:57Z
Learning-to-Cache: Accelerating Diffusion Transformer via Layer Caching
Diffusion Transformers have recently demonstrated unprecedented generative capabilities for various tasks. The encouraging results, however, come with the cost of slow inference, since each denoising step requires inference on a transformer model with a large scale of parameters. In this study, we make an interesting and somewhat surprising observation: the computation of a large proportion of layers in the diffusion transformer, through introducing a caching mechanism, can be readily removed even without updating the model parameters. In the case of U-ViT-H/2, for example, we may remove up to 93.68% of the computation in the cache steps (46.84% for all steps), with less than 0.01 drop in FID. To achieve this, we introduce a novel scheme, named Learning-to-Cache (L2C), that learns to conduct caching in a dynamic manner for diffusion transformers. Specifically, by leveraging the identical structure of layers in transformers and the sequential nature of diffusion, we explore redundant computations between timesteps by treating each layer as the fundamental unit for caching. To address the challenge of the exponential search space in deep models for identifying layers to cache and remove, we propose a novel differentiable optimization objective. An input-invariant yet timestep-variant router is then optimized, which can finally produce a static computation graph. Experimental results show that L2C largely outperforms samplers such as DDIM and DPM-Solver, alongside prior cache-based methods, at the same inference speed.
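A hedged sketch of the caching idea (not the L2C implementation): per-layer outputs are cached at one denoising step and reused at the next wherever a router decides not to recompute. The learned, differentiable router of the paper is replaced here by a fixed boolean mask, and the blocks are toy residual MLPs rather than transformer blocks.

```python
import torch
import torch.nn as nn

class CachedBlocks(nn.Module):
    """Residual blocks whose outputs can be reused from the previous timestep."""
    def __init__(self, blocks):
        super().__init__()
        self.blocks = nn.ModuleList(blocks)
        self.cache = [None] * len(blocks)

    def forward(self, x, recompute_mask):
        for i, blk in enumerate(self.blocks):
            if recompute_mask[i] or self.cache[i] is None:
                self.cache[i] = blk(x)       # fresh computation, store in cache
            x = x + self.cache[i]            # otherwise reuse the cached residual
        return x

blocks = [nn.Sequential(nn.LayerNorm(64), nn.Linear(64, 64), nn.GELU()) for _ in range(4)]
model = CachedBlocks(blocks)
x = torch.randn(2, 16, 64)
_ = model(x, recompute_mask=[True] * 4)                    # step t: compute everything
y = model(x, recompute_mask=[True, False, False, True])    # step t-1: skip two blocks
print(y.shape)
```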
[ "['Xinyin Ma' 'Gongfan Fang' 'Michael Bi Mi' 'Xinchao Wang']" ]
null
null
2406.01753
null
null
http://arxiv.org/pdf/2406.01753v1
2024-06-03T19:43:06Z
2024-06-03T19:43:06Z
Optimizing the Optimal Weighted Average: Efficient Distributed Sparse Classification
While distributed training is often viewed as a solution to optimizing linear models on increasingly large datasets, inter-machine communication costs of popular distributed approaches can dominate as data dimensionality increases. Recent work on non-interactive algorithms shows that approximate solutions for linear models can be obtained efficiently with only a single round of communication among machines. However, this approximation often degenerates as the number of machines increases. In this paper, building on the recent optimal weighted average method, we introduce a new technique, ACOWA, that allows an extra round of communication to achieve noticeably better approximation quality with minor runtime increases. Results show that for sparse distributed logistic regression, ACOWA obtains solutions that are more faithful to the empirical risk minimizer and attain substantially higher accuracy than other distributed algorithms.
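As background for the method above, here is a minimal sketch (assumptions of my own: plain gradient-descent logistic regression per machine, weights proportional to local sample counts) of the non-interactive baseline in which each machine fits a local model and the server forms a single weighted average in one round of communication; ACOWA's optimal weighting and extra round are not reproduced.

```python
import numpy as np

def local_logreg(X, y, lam=1e-2, iters=500, lr=0.1):
    """L2-regularized logistic regression fit by plain gradient descent."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * (X.T @ (p - y) / len(y) + lam * w)
    return w

def weighted_average(models, counts):
    """Single-round server-side aggregation of per-machine solutions."""
    return np.average(np.stack(models), axis=0, weights=np.asarray(counts, dtype=float))

rng = np.random.default_rng(0)
w_true = rng.normal(size=20)
shards = []
for n in (300, 500, 800):                       # three "machines" with local shards
    X = rng.normal(size=(n, 20))
    y = (rng.uniform(size=n) < 1 / (1 + np.exp(-X @ w_true))).astype(float)
    shards.append((X, y))

models = [local_logreg(X, y) for X, y in shards]
w_avg = weighted_average(models, [len(y) for _, y in shards])
print(np.corrcoef(w_avg, w_true)[0, 1])         # typically close to 1
```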
[ "['Fred Lu' 'Ryan R. Curtin' 'Edward Raff' 'Francis Ferraro' 'James Holt']" ]
null
null
2406.01755
null
null
http://arxiv.org/pdf/2406.01755v1
2024-06-03T19:44:47Z
2024-06-03T19:44:47Z
Sparser, Better, Deeper, Stronger: Improving Sparse Training with Exact Orthogonal Initialization
Static sparse training aims to train sparse models from scratch, achieving remarkable results in recent years. A key design choice is given by the sparse initialization, which determines the trainable sub-network through a binary mask. Existing methods mainly select such a mask based on a predefined dense initialization. Such an approach may not efficiently leverage the mask's potential impact on the optimization. An alternative direction, inspired by research into dynamical isometry, is to introduce orthogonality in the sparse subnetwork, which helps in stabilizing the gradient signal. In this work, we propose Exact Orthogonal Initialization (EOI), a novel sparse orthogonal initialization scheme based on composing random Givens rotations. Contrary to other existing approaches, our method provides exact (not approximated) orthogonality and enables the creation of layers with arbitrary densities. We demonstrate the superior effectiveness and efficiency of EOI through experiments, consistently outperforming common sparse initialization techniques. Our method enables training highly sparse 1000-layer MLP and CNN networks without residual connections or normalization techniques, emphasizing the crucial role of weight initialization in static sparse training alongside sparse mask selection. The code is available at https://github.com/woocash2/sparser-better-deeper-stronger
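A minimal sketch of the construction underlying the initialization described above: composing random Givens rotations yields a matrix that is exactly orthogonal and, with few rotations, sparse. How the paper controls density and maps this to layer shapes is not reproduced here, so the function name and parameters are assumptions for illustration only.

```python
import numpy as np

def random_givens_orthogonal(n, n_rotations, rng):
    """Compose random Givens rotations; the result is exactly orthogonal."""
    Q = np.eye(n)
    for _ in range(n_rotations):
        i, j = rng.choice(n, size=2, replace=False)
        theta = rng.uniform(0.0, 2.0 * np.pi)
        c, s = np.cos(theta), np.sin(theta)
        G = np.eye(n)
        G[i, i], G[j, j] = c, c
        G[i, j], G[j, i] = -s, s
        Q = G @ Q            # each rotation preserves exact orthogonality
    return Q

rng = np.random.default_rng(0)
Q = random_givens_orthogonal(8, n_rotations=6, rng=rng)
print(np.allclose(Q.T @ Q, np.eye(8)))   # True: exact orthogonality
print(np.mean(np.abs(Q) > 1e-12))        # density stays below 1 with few rotations
```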
[ "['Aleksandra Irena Nowak' 'Łukasz Gniecki' 'Filip Szatkowski'\n 'Jacek Tabor']" ]
null
null
2406.01757
null
null
http://arxiv.org/pdf/2406.01757v1
2024-06-03T19:52:41Z
2024-06-03T19:52:41Z
Position: Cracking the Code of Cascading Disparity Towards Marginalized Communities
The rise of foundation models holds immense promise for advancing AI, but this progress may amplify existing risks and inequalities, leaving marginalized communities behind. In this position paper, we argue that disparities towards marginalized communities - performance, representation, privacy, robustness, interpretability and safety - are not isolated concerns but rather interconnected elements of a cascading disparity phenomenon. We contrast foundation models with traditional models and highlight the potential for exacerbated disparity against marginalized communities. Moreover, we emphasize the unique threat of cascading impacts in foundation models, where interconnected disparities can trigger long-lasting negative consequences, specifically for the people on the margin. We define marginalized communities within the machine learning context and explore the multifaceted nature of disparities. We analyze the sources of these disparities, tracing them from data creation, training and deployment procedures to highlight the complex technical and socio-technical landscape. To address this pressing crisis, we conclude with a set of calls to action to mitigate disparity at its source.
[ "['Golnoosh Farnadi' 'Mohammad Havaei' 'Negar Rostamzadeh']" ]
null
null
2406.01762
null
null
http://arxiv.org/pdf/2406.01762v1
2024-06-03T20:05:04Z
2024-06-03T20:05:04Z
Non-Asymptotic Analysis for Single-Loop (Natural) Actor-Critic with Compatible Function Approximation
Actor-critic (AC) is a powerful method for learning an optimal policy in reinforcement learning, where the critic uses algorithms, e.g., temporal difference (TD) learning with function approximation, to evaluate the current policy and the actor updates the policy along an approximate gradient direction using information from the critic. This paper provides the \textit{tightest} non-asymptotic convergence bounds for both the AC and natural AC (NAC) algorithms. Specifically, existing studies show that AC converges to an $\epsilon+\varepsilon_{\text{critic}}$ neighborhood of stationary points with the best known sample complexity of $\mathcal{O}(\epsilon^{-2})$ (up to a log factor), and NAC converges to an $\epsilon+\varepsilon_{\text{critic}}+\sqrt{\varepsilon_{\text{actor}}}$ neighborhood of the global optimum with the best known sample complexity of $\mathcal{O}(\epsilon^{-3})$, where $\varepsilon_{\text{critic}}$ is the approximation error of the critic and $\varepsilon_{\text{actor}}$ is the approximation error induced by the insufficient expressive power of the parameterized policy class. This paper analyzes the convergence of both AC and NAC algorithms with compatible function approximation. Our analysis eliminates the term $\varepsilon_{\text{critic}}$ from the error bounds while still achieving the best known sample complexities. Moreover, we focus on the challenging single-loop setting with a single Markovian sample trajectory. Our major technical novelty lies in analyzing the stochastic bias due to policy-dependent and time-varying compatible function approximation in the critic, and handling the non-ergodicity of the MDP due to the single Markovian sample trajectory. Numerical results are also provided in the appendix.
[ "['Yudan Wang' 'Yue Wang' 'Yi Zhou' 'Shaofeng Zou']" ]
null
null
2406.01766
null
null
http://arxiv.org/pdf/2406.01766v1
2024-06-03T20:15:28Z
2024-06-03T20:15:28Z
How Does Gradient Descent Learn Features -- A Local Analysis for Regularized Two-Layer Neural Networks
The ability of learning useful features is one of the major advantages of neural networks. Although recent works show that neural networks can operate in a neural tangent kernel (NTK) regime that does not allow feature learning, many works also demonstrate the potential for neural networks to go beyond the NTK regime and perform feature learning. Recently, a line of work highlighted the feature learning capabilities of the early stages of gradient-based training. In this paper we consider another mechanism for feature learning via gradient descent through a local convergence analysis. We show that once the loss is below a certain threshold, gradient descent with a carefully regularized objective will capture ground-truth directions. Our results demonstrate that feature learning not only happens at the initial gradient steps, but can also occur towards the end of training.
[ "['Mo Zhou' 'Rong Ge']" ]
null
null
2406.01774
null
null
http://arxiv.org/pdf/2406.01774v1
2024-06-03T20:33:17Z
2024-06-03T20:33:17Z
Efficient Data Distribution Estimation for Accelerated Federated Learning
Federated Learning (FL) is a privacy-preserving machine learning paradigm where a global model is trained in-situ across a large number of distributed edge devices. These systems often comprise millions of user devices, and only a subset of available devices can be used for training in each epoch. Designing a device selection strategy is challenging, given that devices are highly heterogeneous in both their system resources and training data. This heterogeneity makes device selection very crucial for timely model convergence and sufficient model accuracy. To tackle the FL client heterogeneity problem, various client selection algorithms have been developed, showing promising performance improvement in terms of model coverage and accuracy. In this work, we study the overhead of client selection algorithms in a large-scale FL environment. Then we propose an efficient data distribution summary calculation algorithm to reduce the overhead in a real-world large-scale FL environment. The evaluation shows that our proposed solution could achieve up to a 30x reduction in data summary time, and up to a 360x reduction in clustering time.
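A hedged illustration of the kind of per-client data-distribution summary used for client selection: each client reports a normalized label histogram and the server groups similar clients with a simple k-means step. This is a generic pipeline under my own assumptions, not the paper's summary-calculation algorithm.

```python
import numpy as np

def label_histogram(labels, n_classes):
    """Normalized label histogram as a lightweight per-client data summary."""
    h = np.bincount(labels, minlength=n_classes).astype(float)
    return h / h.sum()

def kmeans(X, k, iters=20, seed=0):
    """Plain k-means over client summaries; returns a cluster index per client."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        assign = np.linalg.norm(X[:, None] - centers[None], axis=-1).argmin(1)
        centers = np.stack([X[assign == c].mean(0) if np.any(assign == c)
                            else centers[c] for c in range(k)])
    return assign

rng = np.random.default_rng(0)
clients = [rng.integers(0, 10, size=int(rng.integers(50, 200))) for _ in range(30)]
summaries = np.stack([label_histogram(c, 10) for c in clients])
print(kmeans(summaries, k=3))    # cluster assignment used to guide client selection
```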
[ "['Yuanli Wang' 'Lei Huang']" ]
null
null
2406.01781
null
null
http://arxiv.org/pdf/2406.01781v1
2024-06-03T20:52:34Z
2024-06-03T20:52:34Z
DEFT: Efficient Finetuning of Conditional Diffusion Models by Learning the Generalised $h$-transform
Generative modelling paradigms based on denoising diffusion processes have emerged as leading candidates for conditional sampling in inverse problems. In many real-world applications, we often have access to large, expensively trained unconditional diffusion models, which we aim to exploit for improving conditional sampling. Most recent approaches are motivated heuristically and lack a unifying framework, obscuring connections between them. Further, they often suffer from issues such as being very sensitive to hyperparameters, being expensive to train or needing access to weights hidden behind a closed API. In this work, we unify conditional training and sampling using the mathematically well-understood Doob's h-transform. This new perspective allows us to unify many existing methods under a common umbrella. Under this framework, we propose DEFT (Doob's h-transform Efficient FineTuning), a new approach for conditional generation that simply fine-tunes a very small network to quickly learn the conditional $h$-transform, while keeping the larger unconditional network unchanged. DEFT is much faster than existing baselines while achieving state-of-the-art performance across a variety of linear and non-linear benchmarks. On image reconstruction tasks, we achieve speedups of up to 1.6$\times$, while having the best perceptual quality on natural images and reconstruction performance on medical images.
[ "['Alexander Denker' 'Francisco Vargas' 'Shreyas Padhy' 'Kieran Didi'\n 'Simon Mathis' 'Vincent Dutordoir' 'Riccardo Barbano' 'Emile Mathieu'\n 'Urszula Julia Komorowska' 'Pietro Lio']" ]