categories | doi | id | year | venue | link | updated | published | title | abstract | authors
---|---|---|---|---|---|---|---|---|---|---
string | string | string | float64 | string | string | string | string | string | string | list
null | null | 2405.14267 | null | null | http://arxiv.org/pdf/2405.14267v1 | 2024-05-23T07:45:48Z | 2024-05-23T07:45:48Z | A Gap in Time: The Challenge of Processing Heterogeneous IoT Point Data
in Buildings | The growing need for sustainable energy solutions has driven the integration of digitalized buildings into the power grid, utilizing Internet-of-Things technology to optimize building performance and energy efficiency. However, incorporating IoT point data within deep-learning frameworks for energy management presents a complex challenge, predominantly due to the inherent data heterogeneity. This paper comprehensively analyzes the multifaceted heterogeneity present in real-world building IoT data streams. We meticulously dissect the heterogeneity across multiple dimensions, encompassing ontology, etiology, temporal irregularity, spatial diversity, and their combined effects on the IoT point data distribution. In addition, experiments using state-of-the-art forecasting models are conducted to evaluate the impact of this heterogeneity on the performance of deep-learning models for forecasting tasks. By charting the diversity along these dimensions, we illustrate the challenges and delineate pathways for future research to leverage this heterogeneity as a resource rather than a roadblock. This exploration sets the stage for advancing the predictive abilities of deep-learning algorithms and catalyzing the evolution of intelligent energy-efficient buildings. | [
"['Xiachong Lin' 'Arian Prabowo' 'Imran Razzak' 'Hao Xue' 'Matthew Amos'\n 'Sam Behrens' 'Stephen White' 'Flora D. Salim']"
] |
null | null | 2405.14270 | null | null | http://arxiv.org/pdf/2405.14270v1 | 2024-05-23T07:48:00Z | 2024-05-23T07:48:00Z | Sparse $L^1$-Autoencoders for Scientific Data Compression | Scientific datasets present unique challenges for machine learning-driven compression methods, including more stringent requirements on accuracy and mitigation of potential invalidating artifacts. Drawing on results from compressed sensing and rate-distortion theory, we introduce effective data compression methods by developing autoencoders using high dimensional latent spaces that are $L^1$-regularized to obtain sparse low dimensional representations. We show how these information-rich latent spaces can be used to mitigate blurring and other artifacts to obtain highly effective data compression methods for scientific data. We demonstrate our methods for small-angle scattering (SAS) datasets, showing they can achieve compression ratios around two orders of magnitude and in some cases better. Our compression methods show promise for use in addressing current bottlenecks in transmission, storage, and analysis in high-performance distributed computing environments. This is central to processing the large volume of SAS data being generated at shared experimental facilities around the world to support scientific investigations. Our approaches provide general ways for obtaining specialized compression methods for targeted scientific datasets. | [
"['Matthias Chung' 'Rick Archibald' 'Paul Atzberger' 'Jack Michael Solomon']"
] |
null | null | 2405.14273 | null | null | http://arxiv.org/pdf/2405.14273v1 | 2024-05-23T07:51:05Z | 2024-05-23T07:51:05Z | A fast algorithm to minimize prediction loss of the optimal solution in
inverse optimization problem of MILP | This paper tackles the problem of minimizing the prediction loss of the optimal solution (PLS) of the MILP with given data, which is one of the inverse optimization problems. While existing methods can approximately solve this problem, their implementation in the high-dimensional case to minimize the PLS is computationally expensive because they are inefficient in reducing the prediction loss of weights (PLW). We propose a fast algorithm for minimizing the PLS of MILP. To this end, we reduce the problem of minimizing the PLS to that of minimizing the suboptimality loss (SL), which is convex. If the PLS does not vanish, we can adapt the SL to have the estimated loss (SPO loss) with a positive lower bound, which enables us to evaluate the PLW. Consequently, we prove that the proposed algorithm can effectively reduce the PLW and achieve the minimum value of PLS. Our numerical experiments demonstrated that our algorithm successfully achieved the minimum PLS. Compared to existing methods, our algorithm exhibited a smaller dimensionality effect and minimized the PLS in less than 1/7 the number of iterations. Especially in high dimensions, our algorithm significantly improved the PLS by more than two orders of magnitude compared to existing algorithms. | [
"['Akira Kitaoka']"
] |
null | null | 2405.14285 | null | null | http://arxiv.org/pdf/2405.14285v1 | 2024-05-23T08:00:45Z | 2024-05-23T08:00:45Z | Computing the Bias of Constant-step Stochastic Approximation with
Markovian Noise | We study stochastic approximation algorithms with Markovian noise and constant step-size $\alpha$. We develop a method based on infinitesimal generator comparisons to study the bias of the algorithm, which is the expected difference between $\theta_n$ -- the value at iteration $n$ -- and $\theta^*$ -- the unique equilibrium of the corresponding ODE. We show that, under some smoothness conditions, this bias is of order $O(\alpha)$. Furthermore, we show that the time-averaged bias is equal to $\alpha V + O(\alpha^2)$, where $V$ is a constant characterized by a Lyapunov equation, showing that $\mathbb{E}[\bar{\theta}_n] \approx \theta^* + V\alpha + O(\alpha^2)$, where $\bar{\theta}_n = (1/n)\sum_{k=1}^n \theta_k$ is the Polyak-Ruppert average. We also show that $\bar{\theta}_n$ converges with high probability around $\theta^* + \alpha V$. We illustrate how to combine this with Richardson-Romberg extrapolation to derive an iterative scheme with a bias of order $O(\alpha^2)$. | [
"['Sebastian Allmeier' 'Nicolas Gast']"
] |
null | null | 2405.14286 | null | null | http://arxiv.org/pdf/2405.14286v1 | 2024-05-23T08:01:25Z | 2024-05-23T08:01:25Z | Co-Representation Neural Hypergraph Diffusion for Edge-Dependent Node
Classification | Hypergraphs are widely employed to represent complex higher-order relationships in real-world applications. Most hypergraph learning research focuses on node- or edge-level tasks. A practically relevant but more challenging task, edge-dependent node classification (ENC), has only recently been proposed. In ENC, a node can have different labels across different hyperedges, which requires the modeling of node-hyperedge pairs instead of single nodes or hyperedges. Existing solutions for this task are based on message passing and model within-edge and within-node interactions as multi-input single-output functions. This brings three limitations: (1) non-adaptive representation size, (2) node/edge agnostic messages, and (3) insufficient interactions among nodes or hyperedges. To tackle these limitations, we develop CoNHD, a new solution based on hypergraph diffusion. Specifically, we first extend hypergraph diffusion using node-hyperedge co-representations. This extension explicitly models both within-edge and within-node interactions as multi-input multi-output functions using two equivariant diffusion operators. To avoid handcrafted regularization functions, we propose a neural implementation for the co-representation hypergraph diffusion process. Extensive experiments demonstrate the effectiveness and efficiency of the proposed CoNHD model. | [
"['Yijia Zheng' 'Marcel Worring']"
] |
null | null | 2405.14291 | null | null | http://arxiv.org/pdf/2405.14291v1 | 2024-05-23T08:09:21Z | 2024-05-23T08:09:21Z | Variational Bayes for Federated Continual Learning | Federated continual learning (FCL) has received increasing attention due to its potential in handling real-world streaming data, characterized by evolving data distributions and varying client classes over time. The constraints of storage limitations and privacy concerns confine local models to exclusively access the present data within each learning cycle. Consequently, this restriction induces performance degradation in model training on previous data, termed "catastrophic forgetting". However, existing FCL approaches need to identify or know changes in data distribution, which is difficult in the real world. To lift these limitations, this paper directs attention to a broader continuous framework. Within this framework, we introduce Federated Bayesian Neural Network (FedBNN), a versatile and efficacious framework employing a variational Bayesian neural network across all clients. Our method continually integrates knowledge from local and historical data distributions into a single model, adeptly learning from new data distributions while retaining performance on historical distributions. We rigorously evaluate FedBNN's performance against prevalent methods in federated learning and continual learning using various metrics. Experimental analyses across diverse datasets demonstrate that FedBNN achieves state-of-the-art results in mitigating forgetting. | [
"['Dezhong Yao' 'Sanmu Li' 'Yutong Dai' 'Zhiqiang Xu' 'Shengshan Hu'\n 'Peilin Zhao' 'Lichao Sun']"
] |
null | null | 2405.14297 | null | null | http://arxiv.org/pdf/2405.14297v1 | 2024-05-23T08:18:30Z | 2024-05-23T08:18:30Z | Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient
Transformer Models | The Sparse Mixture of Experts (SMoE) has been widely employed to enhance the efficiency of training and inference for Transformer-based foundational models, yielding promising results. However, the performance of SMoE heavily depends on the choice of hyper-parameters, such as the number of experts and the number of experts to be activated (referred to as top-k), resulting in significant computational overhead due to the extensive model training required to search over various hyper-parameter configurations. As a remedy, we introduce the Dynamic Mixture of Experts (DynMoE) technique. DynMoE incorporates (1) a novel gating method that enables each token to automatically determine the number of experts to activate, and (2) an adaptive process that automatically adjusts the number of experts during training. Extensive numerical results across Vision, Language, and Vision-Language tasks demonstrate the effectiveness of our approach in achieving competitive performance compared to GMoE for vision and language tasks, and MoE-LLaVA for vision-language tasks, while maintaining efficiency by activating fewer parameters. Our code is available at https://github.com/LINs-lab/DynMoE. | [
"['Yongxin Guo' 'Zhenglin Cheng' 'Xiaoying Tang' 'Tao Lin']"
] |
null | null | 2405.14302 | null | null | http://arxiv.org/pdf/2405.14302v1 | 2024-05-23T08:22:00Z | 2024-05-23T08:22:00Z | Graphcode: Learning from multiparameter persistent homology using graph
neural networks | We introduce graphcodes, a novel multi-scale summary of the topological properties of a dataset that is based on the well-established theory of persistent homology. Graphcodes handle datasets that are filtered along two real-valued scale parameters. Such multi-parameter topological summaries are usually based on complicated theoretical foundations and are difficult to compute; in contrast, graphcodes yield an informative and interpretable summary and can be computed as efficiently as one-parameter summaries. Moreover, a graphcode is simply an embedded graph and can therefore be readily integrated into machine learning pipelines using graph neural networks. We describe such a pipeline and demonstrate that graphcodes achieve better classification accuracy than state-of-the-art approaches on various datasets. | [
"['Michael Kerber' 'Florian Russold']"
] |
null | null | 2405.14303 | null | null | http://arxiv.org/pdf/2405.14303v1 | 2024-05-23T08:23:22Z | 2024-05-23T08:23:22Z | Similarity-Navigated Conformal Prediction for Graph Neural Networks | Graph Neural Networks have achieved remarkable accuracy in semi-supervised node classification tasks. However, these results lack reliable uncertainty estimates. Conformal prediction methods provide a theoretical guarantee for node classification tasks, ensuring that the conformal prediction set contains the ground-truth label with a desired probability (e.g., 95%). In this paper, we empirically show that for each node, aggregating the non-conformity scores of nodes with the same label can improve the efficiency of conformal prediction sets. This observation motivates us to propose a novel algorithm named Similarity-Navigated Adaptive Prediction Sets (SNAPS), which aggregates the non-conformity scores based on feature similarity and structural neighborhood. The key idea behind SNAPS is that nodes with high feature similarity or direct connections tend to have the same label. By incorporating adaptive similar nodes information, SNAPS can generate compact prediction sets and increase the singleton hit ratio (correct prediction sets of size one). Moreover, we theoretically provide a finite-sample coverage guarantee of SNAPS. Extensive experiments demonstrate the superiority of SNAPS, improving the efficiency of prediction sets and singleton hit ratio while maintaining valid coverage. | [
"['Jianqing Song' 'Jianguo Huang' 'Wenyu Jiang' 'Baoming Zhang'\n 'Shuangjie Li' 'Chongjun Wang']"
] |
null | null | 2405.14307 | null | null | http://arxiv.org/pdf/2405.14307v1 | 2024-05-23T08:28:44Z | 2024-05-23T08:28:44Z | AdaGMLP: AdaBoosting GNN-to-MLP Knowledge Distillation | Graph Neural Networks (GNNs) have revolutionized graph-based machine learning, but their heavy computational demands pose challenges for latency-sensitive edge devices in practical industrial applications. In response, a new wave of methods, collectively known as GNN-to-MLP Knowledge Distillation, has emerged. They aim to transfer GNN-learned knowledge to a more efficient MLP student, which offers faster, resource-efficient inference while maintaining competitive performance compared to GNNs. However, these methods face significant challenges in situations with insufficient training data and incomplete test data, limiting their applicability in real-world applications. To address these challenges, we propose AdaGMLP, an AdaBoosting GNN-to-MLP Knowledge Distillation framework. It leverages an ensemble of diverse MLP students trained on different subsets of labeled nodes, addressing the issue of insufficient training data. Additionally, it incorporates a Node Alignment technique for robust predictions on test data with missing or incomplete features. Our experiments on seven benchmark datasets with different settings demonstrate that AdaGMLP outperforms existing G2M methods, making it suitable for a wide range of latency-sensitive real-world applications. We have submitted our code to the GitHub repository (https://github.com/WeigangLu/AdaGMLP-KDD24). | [
"['Weigang Lu' 'Ziyu Guan' 'Wei Zhao' 'Yaming Yang']"
] |
null | null | 2405.14313 | null | null | http://arxiv.org/pdf/2405.14313v1 | 2024-05-23T08:33:07Z | 2024-05-23T08:33:07Z | Smooth Pseudo-Labeling | Semi-Supervised Learning (SSL) seeks to leverage large amounts of non-annotated data along with the smallest amount possible of annotated data in order to achieve the same level of performance as if all data were annotated. A fruitful method in SSL is Pseudo-Labeling (PL), which, however, suffers from the important drawback that the associated loss function has discontinuities in its derivatives, which cause instabilities in performance when labels are very scarce. In the present work, we address this drawback with the introduction of a Smooth Pseudo-Labeling (SPL) loss function. It consists in adding a multiplicative factor in the loss function that smooths out the discontinuities in the derivative due to thresholding. In our experiments, we test our improvements on FixMatch and show that it significantly improves the performance in the regime of scarce labels, without addition of any modules, hyperparameters, or computational overhead. In the more stable regime of abundant labels, performance remains at the same level. Robustness with respect to variation of hyperparameters and training parameters is also significantly improved. Moreover, we introduce a new benchmark, where labeled images are selected randomly from the whole dataset, without imposing representation of each class proportional to its frequency in the dataset. We see that the smooth version of FixMatch does appear to perform better than the original, non-smooth implementation. However, more importantly, we notice that both implementations do not necessarily see their performance improve when labeled images are added, an important issue in the design of SSL algorithms that should be addressed so that Active Learning algorithms become more reliable and explainable. | [
"['Nikolaos Karaliolios' 'Hervé Le Borgne' 'Florian Chabot']"
] |
null | null | 2405.14314 | null | null | http://arxiv.org/pdf/2405.14314v2 | 2024-05-26T02:31:15Z | 2024-05-23T08:33:19Z | Towards Efficient LLM Grounding for Embodied Multi-Agent Collaboration | Grounding the reasoning ability of large language models (LLMs) for embodied tasks is challenging due to the complexity of the physical world. Especially, LLM planning for multi-agent collaboration requires communication of agents or credit assignment as the feedback to re-adjust the proposed plans and achieve effective coordination. However, existing methods that overly rely on physical verification or self-reflection suffer from excessive and inefficient querying of LLMs. In this paper, we propose a novel framework for multi-agent collaboration that introduces Reinforced Advantage feedback (ReAd) for efficient self-refinement of plans. Specifically, we perform critic regression to learn a sequential advantage function from LLM-planned data, and then treat the LLM planner as an optimizer to generate actions that maximize the advantage function. It endows the LLM with the foresight to discern whether the action contributes to accomplishing the final task. We provide theoretical analysis by extending advantage-weighted regression in reinforcement learning to multi-agent systems. Experiments on Overcooked-AI and a difficult variant of RoCoBench show that ReAd surpasses baselines in success rate, and also significantly decreases the interaction steps of agents and query rounds of LLMs, demonstrating its high efficiency for grounding LLMs. More results are given at https://read-llm.github.io/. | [
"['Yang Zhang' 'Shixin Yang' 'Chenjia Bai' 'Fei Wu' 'Xiu Li' 'Zhen Wang'\n 'Xuelong Li']"
] |
null | null | 2405.14318 | null | null | http://arxiv.org/pdf/2405.14318v1 | 2024-05-23T08:43:09Z | 2024-05-23T08:43:09Z | Adaptive Retention & Correction for Continual Learning | Continual learning, also known as lifelong learning or incremental learning, refers to the process by which a model learns from a stream of incoming data over time. A common problem in continual learning is the classification layer's bias towards the most recent task. Traditionally, methods have relied on incorporating data from past tasks during training to mitigate this issue. However, the recent shift in continual learning to memory-free environments has rendered these approaches infeasible. In this study, we propose a solution focused on the testing phase. We first introduce a simple Out-of-Task Detection method, OTD, designed to accurately identify samples from past tasks during testing. Leveraging OTD, we then propose: (1) an Adaptive Retention mechanism for dynamically tuning the classifier layer on past task data; (2) an Adaptive Correction mechanism for revising predictions when the model classifies data from previous tasks into classes from the current task. We name our approach Adaptive Retention & Correction (ARC). While designed for memory-free environments, ARC also proves effective in memory-based settings. Extensive experiments show that our proposed method can be plugged into virtually any existing continual learning approach without requiring any modifications to its training procedure. Specifically, when integrated with state-of-the-art approaches, ARC achieves an average performance increase of 2.7% and 2.6% on the CIFAR-100 and Imagenet-R datasets, respectively. | [
"['Haoran Chen' 'Micah Goldblum' 'Zuxuan Wu' 'Yu-Gang Jiang']"
] |
null | null | 2405.14331 | null | null | http://arxiv.org/pdf/2405.14331v1 | 2024-05-23T09:00:59Z | 2024-05-23T09:00:59Z | LucidPPN: Unambiguous Prototypical Parts Network for User-centric
Interpretable Computer Vision | Prototypical parts networks combine the power of deep learning with the explainability of case-based reasoning to make accurate, interpretable decisions. They follow the 'this looks like that' reasoning, representing each prototypical part with patches from training images. However, a single image patch comprises multiple visual features, such as color, shape, and texture, making it difficult for users to identify which feature is important to the model. To reduce this ambiguity, we introduce the Lucid Prototypical Parts Network (LucidPPN), a novel prototypical parts network that separates color prototypes from other visual features. Our method employs two reasoning branches: one for non-color visual features, processing grayscale images, and another focusing solely on color information. This separation allows us to clarify whether the model's decisions are based on color, shape, or texture. Additionally, LucidPPN identifies prototypical parts corresponding to semantic parts of classified objects, making comparisons between data classes more intuitive, e.g., when two bird species might differ primarily in belly color. Our experiments demonstrate that the two branches are complementary and together achieve results comparable to baseline methods. More importantly, LucidPPN generates less ambiguous prototypical parts, enhancing user understanding. | [
"['Mateusz Pach' 'Dawid Rymarczyk' 'Koryna Lewandowska' 'Jacek Tabor'\n 'Bartosz Zieliński']"
] |
null | null | 2405.14335 | null | null | http://arxiv.org/pdf/2405.14335v1 | 2024-05-23T09:07:27Z | 2024-05-23T09:07:27Z | Logarithmic Smoothing for Pessimistic Off-Policy Evaluation, Selection
and Learning | This work investigates the offline formulation of the contextual bandit problem, where the goal is to leverage past interactions collected under a behavior policy to evaluate, select, and learn new, potentially better-performing, policies. Motivated by critical applications, we move beyond point estimators. Instead, we adopt the principle of pessimism, where we construct upper bounds that assess a policy's worst-case performance, enabling us to confidently select and learn improved policies. Precisely, we introduce novel, fully empirical concentration bounds for a broad class of importance weighting risk estimators. These bounds are general enough to cover most existing estimators and pave the way for the development of new ones. In particular, our pursuit of the tightest bound within this class motivates a novel estimator (LS) that logarithmically smooths large importance weights. The bound for LS is provably tighter than all its competitors, and naturally results in improved policy selection and learning strategies. Extensive policy evaluation, selection, and learning experiments highlight the versatility and favorable performance of LS. | [
"['Otmane Sakhi' 'Imad Aouali' 'Pierre Alquier' 'Nicolas Chopin']"
] |
null | null | 2405.14352 | null | null | http://arxiv.org/pdf/2405.14352v1 | 2024-05-23T09:24:33Z | 2024-05-23T09:24:33Z | Explaining Graph Neural Networks via Structure-aware Interaction Index | The Shapley value is a prominent tool for interpreting black-box machine learning models thanks to its strong theoretical foundation. However, for models with structured inputs, such as graph neural networks, existing Shapley-based explainability approaches either focus solely on node-wise importance or neglect the graph structure when perturbing the input instance. This paper introduces the Myerson-Taylor interaction index that internalizes the graph structure into attributing the node values and the interaction values among nodes. Unlike the Shapley-based methods, the Myerson-Taylor index decomposes coalitions into components satisfying a pre-chosen connectivity criterion. We prove that the Myerson-Taylor index is the unique one that satisfies a system of five natural axioms accounting for graph structure and high-order interaction among nodes. Leveraging these properties, we propose Myerson-Taylor Structure-Aware Graph Explainer (MAGE), a novel explainer that uses the second-order Myerson-Taylor index to identify the most important motifs influencing the model prediction, both positively and negatively. Extensive experiments on various graph datasets and models demonstrate that our method consistently provides superior subgraph explanations compared to state-of-the-art methods. | [
"['Ngoc Bui' 'Hieu Trung Nguyen' 'Viet Anh Nguyen' 'Rex Ying']"
] |
null | null | 2405.14355 | null | null | http://arxiv.org/pdf/2405.14355v1 | 2024-05-23T09:29:00Z | 2024-05-23T09:29:00Z | Retrieval-Augmented Mining of Temporal Logic Specifications from Data | The integration of cyber-physical systems (CPS) into everyday life raises the critical necessity of ensuring their safety and reliability. An important step in this direction is requirement mining, i.e. inferring formally specified system properties from observed behaviors, in order to discover knowledge about the system. Signal Temporal Logic (STL) offers a concise yet expressive language for specifying requirements, particularly suited for CPS, where behaviors are typically represented as time series data. This work addresses the task of learning STL requirements from observed behaviors in a data-driven manner, focusing on binary classification, i.e. on inferring properties of the system which are able to discriminate between regular and anomalous behaviour, and that can be used both as classifiers and as monitors of the compliance of the CPS to desirable specifications. We present a novel framework that combines Bayesian Optimization (BO) and Information Retrieval (IR) techniques to simultaneously learn both the structure and the parameters of STL formulae, without restrictions on the STL grammar. Specifically, we propose a framework that leverages a dense vector database containing semantic-preserving continuous representations of millions of formulae, queried to facilitate the mining of requirements inside a BO loop. We demonstrate the effectiveness of our approach in several signal classification applications, showing its ability to extract interpretable insights from system executions and advance the state-of-the-art in requirement mining for CPS. | [
"['Gaia Saveri' 'Luca Bortolussi']"
] |
null | null | 2405.14366 | null | null | http://arxiv.org/pdf/2405.14366v1 | 2024-05-23T09:43:52Z | 2024-05-23T09:43:52Z | MiniCache: KV Cache Compression in Depth Dimension for Large Language
Models | A critical approach for efficiently deploying computationally demanding large language models (LLMs) is Key-Value (KV) caching. The KV cache stores key-value states of previously generated tokens, significantly reducing the need for repetitive computations and thereby lowering latency in autoregressive generation. However, the size of the KV cache grows linearly with sequence length, posing challenges for applications requiring long context input and extensive sequence generation. In this paper, we present a simple yet effective approach, called MiniCache, to compress the KV cache across layers from a novel depth perspective, significantly reducing the memory footprint for LLM inference. Our approach is based on the observation that KV cache states exhibit high similarity between the adjacent layers in the middle-to-deep portion of LLMs. To facilitate merging, we propose disentangling the states into the magnitude and direction components, interpolating the directions of the state vectors while preserving their lengths unchanged. Furthermore, we introduce a token retention strategy to keep highly distinct state pairs unmerged, thus preserving the information with minimal additional storage overhead. Our MiniCache is training-free and general, complementing existing KV cache compression strategies, such as quantization and sparsity. We conduct a comprehensive evaluation of MiniCache utilizing various models including LLaMA-2, LLaMA-3, Phi-3, Mistral, and Mixtral across multiple benchmarks, demonstrating its exceptional performance in achieving superior compression ratios and high throughput. On the ShareGPT dataset, LLaMA-2-7B with 4-bit MiniCache achieves a remarkable compression ratio of up to 5.02x, enhances inference throughput by approximately 5x, and reduces the memory footprint by 41% compared to the FP16 full cache baseline, all while maintaining near-lossless performance. | [
"['Akide Liu' 'Jing Liu' 'Zizheng Pan' 'Yefei He' 'Gholamreza Haffari'\n 'Bohan Zhuang']"
] |
null | null | 2405.14369 | null | null | http://arxiv.org/pdf/2405.14369v1 | 2024-05-23T09:45:57Z | 2024-05-23T09:45:57Z | RoPINN: Region Optimized Physics-Informed Neural Networks | Physics-informed neural networks (PINNs) have been widely applied to solve partial differential equations (PDEs) by enforcing outputs and gradients of deep models to satisfy target equations. Due to the limitation of numerical computation, PINNs are conventionally optimized on finite selected points. However, since PDEs are usually defined on continuous domains, solely optimizing models on scattered points may be insufficient to obtain an accurate solution for the whole domain. To mitigate this inherent deficiency of the default scatter-point optimization, this paper proposes and theoretically studies a new training paradigm as region optimization. Concretely, we propose to extend the optimization process of PINNs from isolated points to their continuous neighborhood regions, which can theoretically decrease the generalization error, especially for hidden high-order constraints of PDEs. A practical training algorithm, Region Optimized PINN (RoPINN), is seamlessly derived from this new paradigm, which is implemented by a straightforward but effective Monte Carlo sampling method. By calibrating the sampling process into trust regions, RoPINN finely balances sampling efficiency and generalization error. Experimentally, RoPINN consistently boosts the performance of diverse PINNs on a wide range of PDEs without extra backpropagation or gradient calculation. | [
"['Haixu Wu' 'Huakun Luo' 'Yuezhou Ma' 'Jianmin Wang' 'Mingsheng Long']"
] |
null | null | 2405.14372 | null | null | http://arxiv.org/pdf/2405.14372v1 | 2024-05-23T09:48:48Z | 2024-05-23T09:48:48Z | Learning Constrained Markov Decision Processes With Non-stationary
Rewards and Constraints | In constrained Markov decision processes (CMDPs) with adversarial rewards and constraints, a well-known impossibility result prevents any algorithm from attaining both sublinear regret and sublinear constraint violation, when competing against a best-in-hindsight policy that satisfies constraints on average. In this paper, we show that this negative result can be eased in CMDPs with non-stationary rewards and constraints, by providing algorithms whose performances smoothly degrade as non-stationarity increases. Specifically, we propose algorithms attaining $\tilde{\mathcal{O}}(\sqrt{T} + C)$ regret and positive constraint violation under bandit feedback, where $C$ is a corruption value measuring the environment non-stationarity. This can be $\Theta(T)$ in the worst case, coherently with the impossibility result for adversarial CMDPs. First, we design an algorithm with the desired guarantees when $C$ is known. Then, in the case $C$ is unknown, we show how to obtain the same results by embedding such an algorithm in a general meta-procedure. This is of independent interest, as it can be applied to any non-stationary constrained online learning setting. | [
"['Francesco Emanuele Stradi' 'Anna Lunghi' 'Matteo Castiglioni'\n 'Alberto Marchesi' 'Nicola Gatti']"
] |
null | null | 2405.14374 | null | null | http://arxiv.org/pdf/2405.14374v1 | 2024-05-23T09:50:04Z | 2024-05-23T09:50:04Z | State-Constrained Offline Reinforcement Learning | Traditional offline reinforcement learning methods predominantly operate in a batch-constrained setting. This confines the algorithms to a specific state-action distribution present in the dataset, reducing the effects of distributional shift but restricting the algorithm greatly. In this paper, we alleviate this limitation by introducing a novel framework named *state-constrained* offline reinforcement learning. By exclusively focusing on the dataset's state distribution, our framework significantly enhances learning potential and reduces previous limitations. The proposed setting not only broadens the learning horizon but also improves the ability to combine different trajectories from the dataset effectively, a desirable property inherent in offline reinforcement learning. Our research is underpinned by solid theoretical findings that pave the way for subsequent advancements in this domain. Additionally, we introduce StaCQ, a deep learning algorithm that is both performance-driven on the D4RL benchmark datasets and closely aligned with our theoretical propositions. StaCQ establishes a strong baseline for forthcoming explorations in state-constrained offline reinforcement learning. | [
"['Charles A. Hepburn' 'Yue Jin' 'Giovanni Montana']"
] |
null | null | 2405.14377 | null | null | http://arxiv.org/pdf/2405.14377v1 | 2024-05-23T09:52:15Z | 2024-05-23T09:52:15Z | CoMERA: Computing- and Memory-Efficient Training via Rank-Adaptive
Tensor Optimization | Training large AI models such as deep learning recommendation systems and foundation language (or multi-modal) models consumes massive GPU resources and computing time. The high training cost has become affordable only to big tech companies, meanwhile also causing increasing concerns about the environmental impact. This paper presents CoMERA, a Computing- and Memory-Efficient training method via Rank-Adaptive tensor optimization. CoMERA achieves end-to-end rank-adaptive tensor-compressed training via a multi-objective optimization formulation, and improves the training to provide both a high compression ratio and excellent accuracy in the training process. Our optimized numerical computation (e.g., optimized tensorized embedding and tensor-vector contractions) and GPU implementation eliminate part of the run-time overhead in the tensorized training on GPU. This leads to, for the first time, 2-3$\times$ speedup per training epoch compared with standard training. CoMERA also outperforms the recent GaLore in terms of both memory and computing efficiency. Specifically, CoMERA is $2\times$ faster per training epoch and $9\times$ more memory-efficient than GaLore on a tested six-encoder transformer with single-batch training. With further HPC optimization, CoMERA may significantly reduce the training cost of large language models. | [
"['Zi Yang' 'Samridhi Choudhary' 'Xinfeng Xie' 'Cao Gao'\n 'Siegfried Kunzmann' 'Zheng Zhang']"
] |
null | null | 2405.14384 | null | null | http://arxiv.org/pdf/2405.14384v1 | 2024-05-23T10:01:39Z | 2024-05-23T10:01:39Z | Reliable Trajectory Prediction and Uncertainty Quantification with
Conditioned Diffusion Models | This work introduces the conditioned Vehicle Motion Diffusion (cVMD) model, a novel network architecture for highway trajectory prediction using diffusion models. The proposed model ensures the drivability of the predicted trajectory by integrating non-holonomic motion constraints and physical constraints into the generative prediction module. Central to the architecture of cVMD is its capacity to perform uncertainty quantification, a feature that is crucial in safety-critical applications. By integrating the quantified uncertainty into the prediction process, the cVMD's trajectory prediction performance is improved considerably. The model's performance was evaluated using the publicly available highD dataset. Experiments show that the proposed architecture achieves competitive trajectory prediction accuracy compared to state-of-the-art models, while providing guaranteed drivable trajectories and uncertainty quantification. | [
"['Marion Neumeier' 'Sebastian Dorn' 'Michael Botsch' 'Wolfgang Utschick']"
] |
null | null | 2405.14392 | null | null | http://arxiv.org/pdf/2405.14392v1 | 2024-05-23T10:08:19Z | 2024-05-23T10:08:19Z | Markovian Flow Matching: Accelerating MCMC with Continuous Normalizing
Flows | Continuous normalizing flows (CNFs) learn the probability path between a reference and a target density by modeling the vector field generating said path using neural networks. Recently, Lipman et al. (2022) introduced a simple and inexpensive method for training CNFs in generative modeling, termed flow matching (FM). In this paper, we re-purpose this method for probabilistic inference by incorporating Markovian sampling methods in evaluating the FM objective and using the learned probability path to improve Monte Carlo sampling. We propose a sequential method, which uses samples from a Markov chain to fix the probability path defining the FM objective. We augment this scheme with an adaptive tempering mechanism that allows the discovery of multiple modes in the target. Under mild assumptions, we establish convergence to a local optimum of the FM objective, discuss improvements in the convergence rate, and illustrate our methods on synthetic and real-world examples. | [
"['Alberto Cabezas' 'Louis Sharrock' 'Christopher Nemeth']"
] |
null | null | 2405.14399 | null | null | http://arxiv.org/pdf/2405.14399v1 | 2024-05-23T10:19:10Z | 2024-05-23T10:19:10Z | Endowing Interpretability for Neural Cognitive Diagnosis by Efficient
Kolmogorov-Arnold Networks | In the realm of intelligent education, cognitive diagnosis plays a crucial role in subsequent recommendation tasks attributed to the revealed students' proficiency in knowledge concepts. Although neural network-based neural cognitive diagnosis models (CDMs) have exhibited significantly better performance than traditional models, neural cognitive diagnosis is criticized for the poor model interpretability due to the multi-layer perceptron (MLP) employed, even with the monotonicity assumption. Therefore, this paper proposes to empower the interpretability of neural cognitive diagnosis models through efficient Kolmogorov-Arnold networks (KANs), named KAN2CD, where KANs are designed to enhance interpretability in two manners. Specifically, in the first manner, KANs are directly used to replace the MLPs in existing neural CDMs; while in the second manner, the student embedding, exercise embedding, and concept embedding are directly processed by several KANs, and then their outputs are further combined and learned in a unified KAN to get final predictions. To overcome the slow training of KANs, we modify the implementation of the original KANs to accelerate training. Experiments on four real-world datasets show that the proposed KAN2CD exhibits better performance than traditional CDMs, and even retains a slight performance lead over existing neural CDMs. More importantly, the learned structures of KANs enable the proposed KAN2CD to offer interpretability as good as that of traditional CDMs, which is superior to existing neural CDMs. Besides, the training cost of the proposed KAN2CD is competitive with existing models. | [
"['Shangshang Yang' 'Linrui Qin' 'Xiaoshan Yu']"
] |
null | null | 2405.14402 | null | null | http://arxiv.org/pdf/2405.14402v1 | 2024-05-23T10:21:05Z | 2024-05-23T10:21:05Z | Exact Gauss-Newton Optimization for Training Deep Neural Networks | We present EGN, a stochastic second-order optimization algorithm that combines the generalized Gauss-Newton (GN) Hessian approximation with low-rank linear algebra to compute the descent direction. Leveraging the Duncan-Guttman matrix identity, the parameter update is obtained by factorizing a matrix which has the size of the mini-batch. This is particularly advantageous for large-scale machine learning problems where the dimension of the neural network parameter vector is several orders of magnitude larger than the batch size. Additionally, we show how improvements such as line search, adaptive regularization, and momentum can be seamlessly added to EGN to further accelerate the algorithm. Moreover, under mild assumptions, we prove that our algorithm converges to an $\epsilon$-stationary point at a linear rate. Finally, our numerical experiments demonstrate that EGN consistently exceeds, or at worst matches, the generalization performance of well-tuned SGD, Adam, and SGN optimizers across various supervised and reinforcement learning tasks. | [
"['Mikalai Korbit' 'Adeyemi D. Adeoye' 'Alberto Bemporad' 'Mario Zanon']"
] |
null | null | 2405.14407 | null | null | http://arxiv.org/pdf/2405.14407v1 | 2024-05-23T10:26:18Z | 2024-05-23T10:26:18Z | Gradient Transformation: Towards Efficient and Model-Agnostic Unlearning
for Dynamic Graph Neural Networks | Graph unlearning has emerged as an essential tool for safeguarding user privacy and mitigating the negative impacts of undesirable data. Meanwhile, the advent of dynamic graph neural networks (DGNNs) marks a significant advancement due to their superior capability in learning from dynamic graphs, which encapsulate spatial-temporal variations in diverse real-world applications (e.g., traffic forecasting). With the increasing prevalence of DGNNs, it becomes imperative to investigate the implementation of dynamic graph unlearning. However, current graph unlearning methodologies are designed for GNNs operating on static graphs and exhibit limitations including their serving in a pre-processing manner and impractical resource demands. Furthermore, the adaptation of these methods to DGNNs presents non-trivial challenges, owing to the distinctive nature of dynamic graphs. To this end, we propose an effective, efficient, model-agnostic, and post-processing method to implement DGNN unlearning. Specifically, we first define the unlearning requests and formulate dynamic graph unlearning in the context of continuous-time dynamic graphs. After conducting a role analysis on the unlearning data, the remaining data, and the target DGNN model, we propose a method called Gradient Transformation and a loss function to map the unlearning request to the desired parameter update. Evaluations on six real-world datasets and state-of-the-art DGNN backbones demonstrate its outperformance in effectiveness (e.g., limited performance drop, or even clear improvement) and efficiency (e.g., up to 7.23$\times$ speed-up), as well as its potential advantages in handling future unlearning requests (e.g., up to 32.59$\times$ speed-up). | [
"['He Zhang' 'Bang Wu' 'Xiangwen Yang' 'Xingliang Yuan' 'Chengqi Zhang'\n 'Shirui Pan']"
] |
null | null | 2405.14422 | null | null | http://arxiv.org/pdf/2405.14422v3 | 2024-07-11T19:40:20Z | 2024-05-23T10:43:20Z | Unraveling overoptimism and publication bias in ML-driven science | Machine Learning (ML) is increasingly used across many disciplines with impressive reported results. However, recent studies suggest that the published performance of ML models is often overoptimistic. Validity concerns are underscored by findings of an inverse relationship between sample size and reported accuracy in published ML models, contrasting with the theory of learning curves where accuracy should improve or remain stable with increasing sample size. This paper investigates factors contributing to overoptimism in ML-driven science, focusing on overfitting and publication bias. We introduce a novel stochastic model for observed accuracy, integrating parametric learning curves and the aforementioned biases. We construct an estimator that corrects for these biases in observed data. Theoretical and empirical results show that our framework can estimate the underlying learning curve, providing realistic performance assessments from published results. Applying the model to meta-analyses of classifications of neurological conditions, we estimate the inherent limits of ML-based prediction in each domain. | [
"['Pouria Saidi' 'Gautam Dasarathy' 'Visar Berisha']"
] |
null | null | 2405.14425 | null | null | http://arxiv.org/pdf/2405.14425v2 | 2024-06-10T11:30:28Z | 2024-05-23T10:48:30Z | When predict can also explain: few-shot prediction to select better
neural latents | Latent variable models serve as powerful tools to infer underlying dynamics from observed neural activity. However, due to the absence of ground truth data, prediction benchmarks are often employed as proxies. In this study, we reveal the limitations of the widely-used 'co-smoothing' prediction framework and propose an improved few-shot prediction approach that encourages more accurate latent dynamics. Utilizing a student-teacher setup with Hidden Markov Models, we demonstrate that the high co-smoothing model space can encompass models with arbitrary extraneous dynamics within their latent representations. To address this, we introduce a secondary metric -- a few-shot version of co-smoothing. This involves performing regression from the latent variables to held-out channels in the data using fewer trials. Our results indicate that among models with near-optimal co-smoothing, those with extraneous dynamics underperform in the few-shot co-smoothing compared to 'minimal' models devoid of such dynamics. We also provide analytical insights into the origin of this phenomenon. We further validate our findings on real neural data using two state-of-the-art methods: LFADS and STNDT. In the absence of ground truth, we suggest a proxy measure to quantify extraneous dynamics. By cross-decoding the latent variables of all model pairs with high co-smoothing, we identify models with minimal extraneous dynamics. We find a correlation between few-shot co-smoothing performance and this new measure. In summary, we present a novel prediction metric designed to yield latent variables that more accurately reflect the ground truth, offering a significant improvement for latent dynamics inference. | [
"['Kabir Dabholkar' 'Omri Barak']"
] |
null | null | 2405.14428 | null | null | http://arxiv.org/pdf/2405.14428v1 | 2024-05-23T10:54:14Z | 2024-05-23T10:54:14Z | Mitigating Quantization Errors Due to Activation Spikes in GLU-Based
LLMs | Modern large language models (LLMs) have established state-of-the-art performance through architectural improvements, but still require significant computational cost for inference. In an effort to reduce the inference cost, post-training quantization (PTQ) has become a popular approach, quantizing weights and activations to lower precision, such as INT8. In this paper, we reveal the challenges of activation quantization in GLU variants, which are widely used in the feed-forward networks (FFNs) of modern LLMs, such as the LLaMA family. The problem is that severe local quantization errors, caused by excessive magnitudes of activation in GLU variants, significantly degrade the performance of the quantized LLM. We denote these activations as activation spikes. Our further observations reveal a systematic pattern of activation spikes: 1) The activation spikes occur in the FFN of specific layers, particularly in the early and late layers, 2) The activation spikes are confined to a couple of tokens, rather than being shared across a sequence. Based on our observations, we propose two empirical methods, Quantization-free Module (QFeM) and Quantization-free Prefix (QFeP), to isolate the activation spikes during quantization. Our extensive experiments validate the effectiveness of the proposed methods for the activation quantization, especially with a coarse-grained scheme, of the latest LLMs with GLU variants, including LLaMA-2/3, Mistral, Mixtral, SOLAR, and Gemma. In particular, our methods enhance the current alleviation techniques (e.g., SmoothQuant) that fail to control the activation spikes. Code is available at https://github.com/onnoo/activation-spikes. | [
"['Jaewoo Yang' 'Hayun Kim' 'Younghoon Kim']"
] |
null | null | 2405.14432 | null | null | http://arxiv.org/pdf/2405.14432v2 | 2024-05-27T07:25:40Z | 2024-05-23T11:00:31Z | Boosting Robustness by Clipping Gradients in Distributed Learning | Robust distributed learning consists in achieving good learning performance despite the presence of misbehaving workers. State-of-the-art (SOTA) robust distributed gradient descent (Robust-DGD) methods, relying on robust aggregation, have been proven to be optimal: Their learning error matches the lower bound established under the standard heterogeneity model of $(G, B)$-gradient dissimilarity. The learning guarantee of SOTA Robust-DGD cannot be further improved when model initialization is done arbitrarily. However, we show that it is possible to circumvent the lower bound, and improve the learning performance, when the workers' gradients at model initialization are assumed to be bounded. We prove this by proposing pre-aggregation clipping of workers' gradients, using a novel scheme called adaptive robust clipping (ARC). Incorporating ARC in Robust-DGD provably improves the learning, under the aforementioned assumption on model initialization. The factor of improvement is prominent when the tolerable fraction of misbehaving workers approaches the breakdown point. ARC induces this improvement by constricting the search space, while preserving the robustness property of the original aggregation scheme at the same time. We validate this theoretical finding through exhaustive experiments on benchmark image classification tasks. | [
"['Youssef Allouah' 'Rachid Guerraoui' 'Nirupam Gupta' 'Ahmed Jellouli'\n 'Geovani Rizk' 'John Stephan']"
] |
null | null | 2405.14436 | null | null | http://arxiv.org/pdf/2405.14436v1 | 2024-05-23T11:05:42Z | 2024-05-23T11:05:42Z | LARS-VSA: A Vector Symbolic Architecture For Learning with Abstract
Rules | Human cognition excels at symbolic reasoning, deducing abstract rules from limited samples. This has been explained using symbolic and connectionist approaches, inspiring the development of a neuro-symbolic architecture that combines both paradigms. In parallel, recent studies have proposed the use of a "relational bottleneck" that separates object-level features from abstract rules, allowing learning from limited amounts of data. While powerful, it is vulnerable to the curse of compositionality, meaning that object representations with similar features tend to interfere with each other. In this paper, we leverage hyperdimensional computing, which is inherently robust to such interference, to build a compositional architecture. We adapt the "relational bottleneck" strategy to a high-dimensional space, incorporating explicit vector binding operations between symbols and relational representations. Additionally, we design a novel high-dimensional attention mechanism that leverages this relational representation. Our system benefits from the low overhead of operations in hyperdimensional space, making it significantly more efficient than the state of the art when evaluated on a variety of test datasets, while maintaining higher or equal accuracy. | [
"['Mohamed Mejri' 'Chandramouli Amarnath' 'Abhijit Chatterjee']"
] |
null | null | 2405.14438 | null | null | http://arxiv.org/pdf/2405.14438v1 | 2024-05-23T11:10:32Z | 2024-05-23T11:10:32Z | LoRA-Ensemble: Efficient Uncertainty Modelling for Self-attention
Networks | Numerous crucial tasks in real-world decision-making rely on machine learning algorithms with calibrated uncertainty estimates. However, modern methods often yield overconfident and uncalibrated predictions. Various approaches involve training an ensemble of separate models to quantify the uncertainty related to the model itself, known as epistemic uncertainty. In an explicit implementation, the ensemble approach has high computational cost and high memory requirements. This particular challenge is evident in state-of-the-art neural networks such as transformers, where even a single network is already demanding in terms of compute and memory. Consequently, efforts are made to emulate the ensemble model without actually instantiating separate ensemble members, referred to as implicit ensembling. We introduce LoRA-Ensemble, a parameter-efficient deep ensemble method for self-attention networks, which is based on Low-Rank Adaptation (LoRA). LoRA was initially developed for efficient LLM fine-tuning; we extend it to an implicit ensembling approach. By employing a single pre-trained self-attention network with weights shared across all members, we train member-specific low-rank matrices for the attention projections. Our method exhibits superior calibration compared to explicit ensembles and achieves similar or better accuracy across various prediction tasks and datasets. | [
"['Michelle Halbheer' 'Dominik J. Mühlematter' 'Alexander Becker'\n 'Dominik Narnhofer' 'Helge Aasen' 'Konrad Schindler'\n 'Mehmet Ozgur Turkoglu']"
] |
null | null | 2405.14440 | null | null | http://arxiv.org/pdf/2405.14440v1 | 2024-05-23T11:14:35Z | 2024-05-23T11:14:35Z | Bayesian Adaptive Calibration and Optimal Design | The process of calibrating computer models of natural phenomena is essential for applications in the physical sciences, where plenty of domain knowledge can be embedded into simulations and then calibrated against real observations. Current machine learning approaches, however, mostly rely on rerunning simulations over a fixed set of designs available in the observed data, potentially neglecting informative correlations across the design space and requiring a large amount of simulations. Instead, we consider the calibration process from the perspective of Bayesian adaptive experimental design and propose a data-efficient algorithm to run maximally informative simulations within a batch-sequential process. At each round, the algorithm jointly estimates the parameters of the posterior distribution and optimal designs by maximising a variational lower bound of the expected information gain. The simulator is modelled as a sample from a Gaussian process, which allows us to correlate simulations and observed data with the unknown calibration parameters. We show the benefits of our method when compared to related approaches across synthetic and real-data problems. | [
"['Rafael Oliveira' 'Dino Sejdinovic' 'David Howard' 'Edwin Bonilla']"
] |
null | null | 2405.14446 | null | null | http://arxiv.org/pdf/2405.14446v2 | 2024-05-27T10:59:22Z | 2024-05-23T11:25:19Z | Worldwide Federated Training of Language Models | The reliance of language model training on massive amounts of computation and vast datasets scraped from potentially low-quality, copyrighted, or sensitive data has come into question practically, legally, and ethically. Federated learning provides a plausible alternative by enabling previously untapped data to be voluntarily gathered from collaborating organizations. However, when scaled globally, federated learning requires collaboration across heterogeneous legal, security, and privacy regimes while accounting for the inherent locality of language data; this further exacerbates the established challenge of federated statistical heterogeneity. We propose a Worldwide Federated Language Model Training (WorldLM) system based on federations of federations, where each federation has the autonomy to account for factors such as its industry, operating jurisdiction, or competitive environment. WorldLM enables such autonomy in the presence of statistical heterogeneity via partial model localization by allowing sub-federations to attentively aggregate key layers from their constituents. Furthermore, it can adaptively share information across federations via residual layer embeddings. Evaluations of language modeling on naturally heterogeneous datasets show that WorldLM outperforms standard federations by up to $1.91\times$, approaches the personalized performance of fully local models, and maintains these advantages under privacy-enhancing techniques. | [
"['Alex Iacob' 'Lorenzo Sani' 'Bill Marino' 'Preslav Aleksandrov'\n 'William F. Shen' 'Nicholas Donald Lane']"
] |
null | null | 2405.14449 | null | null | http://arxiv.org/pdf/2405.14449v1 | 2024-05-23T11:29:33Z | 2024-05-23T11:29:33Z | Adversarial Schrödinger Bridge Matching | The Schrödinger Bridge (SB) problem offers a powerful framework for combining optimal transport and diffusion models. A promising recent approach to solve the SB problem is the Iterative Markovian Fitting (IMF) procedure, which alternates between Markovian and reciprocal projections of continuous-time stochastic processes. However, the model built by the IMF procedure has a long inference time due to using many steps of numerical solvers for stochastic differential equations. To address this limitation, we propose a novel Discrete-time IMF (D-IMF) procedure in which learning of stochastic processes is replaced by learning just a few transition probabilities in discrete time. Its great advantage is that in practice it can be naturally implemented using the Denoising Diffusion GAN (DD-GAN), an already well-established adversarial generative modeling technique. We show that our D-IMF procedure can provide the same quality of unpaired domain translation as the IMF, using only several generation steps instead of hundreds. | [
"['Nikita Gushchin' 'Daniil Selikhanovych' 'Sergei Kholkin'\n 'Evgeny Burnaev' 'Alexander Korotin']"
] |
null | null | 2405.14453 | null | null | http://arxiv.org/pdf/2405.14453v1 | 2024-05-23T11:35:23Z | 2024-05-23T11:35:23Z | Domain-specific augmentations with resolution agnostic self-attention
mechanism improves choroid segmentation in optical coherence tomography
images | The choroid is a key vascular layer of the eye, supplying oxygen to the retinal photoreceptors. Non-invasive enhanced depth imaging optical coherence tomography (EDI-OCT) has recently improved access and visualisation of the choroid, making it an exciting frontier for discovering novel vascular biomarkers in ophthalmology and wider systemic health. However, current methods to measure the choroid often require use of multiple, independent semi-automatic and deep learning-based algorithms which are not made open-source. Previously, Choroidalyzer -- an open-source, fully automatic deep learning method trained on 5,600 OCT B-scans from 385 eyes -- was developed to fully segment and quantify the choroid in EDI-OCT images, thus addressing these issues. Using the same dataset, we propose a Robust, Resolution-agnostic and Efficient Attention-based network for CHoroid segmentation (REACH). REACHNet leverages multi-resolution training with domain-specific data augmentation to promote generalisation, and uses a lightweight architecture with resolution-agnostic self-attention which is not only faster than Choroidalyzer's previous network (4 images/s vs. 2.75 images/s on a standard laptop CPU), but has greater performance for segmenting the choroid region, vessels and fovea (Dice coefficient for region 0.9769 vs. 0.9749, vessels 0.8612 vs. 0.8192 and fovea 0.8243 vs. 0.3783) due to its improved hyperparameter configuration and model training pipeline. REACHNet can be used with Choroidalyzer as a drop-in replacement for the original model and will be made available upon publication. | [
"['Jamie Burke' 'Justin Engelmann' 'Charlene Hamid' 'Diana Moukaddem'\n 'Dan Pugh' 'Neeraj Dhaun' 'Amos Storkey' 'Niall Strang' 'Stuart King'\n 'Tom MacGillivray' 'Miguel O. Bernabeu' 'Ian J. C. MacCormick']"
] |
null | null | 2405.14457 | null | null | http://arxiv.org/pdf/2405.14457v1 | 2024-05-23T11:38:38Z | 2024-05-23T11:38:38Z | Tighter Privacy Auditing of DP-SGD in the Hidden State Threat Model | Machine learning models can be trained with formal privacy guarantees via differentially private optimizers such as DP-SGD. In this work, we study such privacy guarantees when the adversary only accesses the final model, i.e., intermediate model updates are not released. In the existing literature, this hidden state threat model exhibits a significant gap between the lower bound provided by empirical privacy auditing and the theoretical upper bound provided by privacy accounting. To challenge this gap, we propose to audit this threat model with adversaries that craft a gradient sequence to maximize the privacy loss of the final model without accessing intermediate models. We demonstrate experimentally how this approach consistently outperforms prior attempts at auditing the hidden state model. When the crafted gradient is inserted at every optimization step, our results imply that releasing only the final model does not amplify privacy, providing a novel negative result. On the other hand, when the crafted gradient is not inserted at every step, we show strong evidence that a privacy amplification phenomenon emerges in the general non-convex setting (albeit weaker than in convex regimes), suggesting that existing privacy upper bounds can be improved. | [
"['Tudor Cebere' 'Aurélien Bellet' 'Nicolas Papernot']"
] |
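The hidden-state threat model in the entry above lends itself to a toy simulation: an adversary inserts a crafted gradient at the clipping norm into every DP-SGD step, and the auditor, seeing only the final model, tests along the canary direction. This is a minimal sketch on an assumed quadratic objective, not the paper's actual attack construction.

```python
import numpy as np

rng = np.random.default_rng(0)
d, steps, lr, clip, sigma = 50, 200, 0.1, 1.0, 1.0
canary = np.zeros(d); canary[0] = clip   # crafted gradient at the clip norm

def dp_sgd(insert_canary: bool) -> np.ndarray:
    """Run DP-SGD on a toy quadratic; only the final model is released."""
    w = np.zeros(d)
    for _ in range(steps):
        g = w - 1.0                                        # grad of 0.5*||w - 1||^2
        g *= min(1.0, clip / (np.linalg.norm(g) + 1e-12))  # gradient clipping
        if insert_canary:
            g = g + canary                                 # adversarial contribution
        g += sigma * clip * rng.normal(size=d)             # Gaussian mechanism
        w -= lr * g
    return w

# The auditor distinguishes the two worlds from the final model alone.
print(dp_sgd(True) @ canary, dp_sgd(False) @ canary)
```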
null | null | 2405.14467 | null | null | http://arxiv.org/pdf/2405.14467v1 | 2024-05-23T11:54:27Z | 2024-05-23T11:54:27Z | Segformer++: Efficient Token-Merging Strategies for High-Resolution
Semantic Segmentation | Utilizing transformer architectures for semantic segmentation of high-resolution images is hindered by the attention's quadratic computational complexity in the number of tokens. A solution to this challenge involves decreasing the number of tokens through token merging, which has exhibited remarkable enhancements in inference speed, training efficiency, and memory utilization for image classification tasks. In this paper, we explore various token merging strategies within the framework of the Segformer architecture and perform experiments on multiple semantic segmentation and human pose estimation datasets. Notably, without model re-training, we achieve, for example, a 61% inference acceleration on the Cityscapes dataset while maintaining the mIoU performance. Consequently, this paper facilitates the deployment of transformer-based architectures on resource-constrained devices and in real-time applications. | [
"['Daniel Kienzle' 'Marco Kantonis' 'Robin Schön' 'Rainer Lienhart']"
] |
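Token merging of the kind explored in the Segformer++ entry above can be sketched with a greedy stand-in for bipartite soft matching: average the r most similar token pairs and keep the rest. The function below is an illustrative simplification, not the paper's exact merging scheme.

```python
import torch

def merge_tokens(x: torch.Tensor, r: int) -> torch.Tensor:
    """Greedily average the r most similar token pairs: (n, c) -> (n - r, c)."""
    n = x.shape[0]
    xn = torch.nn.functional.normalize(x, dim=-1)
    sim = xn @ xn.T                       # cosine similarity between tokens
    sim.fill_diagonal_(float("-inf"))
    used, pairs = set(), []
    for idx in torch.argsort(sim.flatten(), descending=True).tolist():
        i, j = divmod(idx, n)
        if i in used or j in used:
            continue                      # keep merged pairs disjoint
        used.update((i, j)); pairs.append((i, j))
        if len(pairs) == r:
            break
    merged = [x[list(p)].mean(0) for p in pairs]
    kept = [x[k] for k in range(n) if k not in used]
    return torch.stack(merged + kept)

tokens = torch.randn(196, 64)             # e.g. a 14x14 patch grid
print(merge_tokens(tokens, r=49).shape)   # torch.Size([147, 64])
```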
null | null | 2405.14468 | null | null | http://arxiv.org/pdf/2405.14468v1 | 2024-05-23T11:55:49Z | 2024-05-23T11:55:49Z | Neural Collapse versus Low-rank Bias: Is Deep Neural Collapse Really
Optimal? | Deep neural networks (DNNs) exhibit a surprising structure in their final layer known as neural collapse (NC), and a growing body of works has currently investigated the propagation of neural collapse to earlier layers of DNNs -- a phenomenon called deep neural collapse (DNC). However, existing theoretical results are restricted to special cases: linear models, only two layers or binary classification. In contrast, we focus on non-linear models of arbitrary depth in multi-class classification and reveal a surprising qualitative shift. As soon as we go beyond two layers or two classes, DNC stops being optimal for the deep unconstrained features model (DUFM) -- the standard theoretical framework for the analysis of collapse. The main culprit is a low-rank bias of multi-layer regularization schemes: this bias leads to optimal solutions of even lower rank than the neural collapse. We support our theoretical findings with experiments on both DUFM and real data, which show the emergence of the low-rank structure in the solution found by gradient descent. | [
"['Peter Súkeník' 'Marco Mondelli' 'Christoph Lampert']"
] |
null | null | 2405.14469 | null | null | http://arxiv.org/pdf/2405.14469v1 | 2024-05-23T11:56:05Z | 2024-05-23T11:56:05Z | Generalization of Hamiltonian algorithms | The paper proves generalization results for a class of stochastic learning algorithms. The method applies whenever the algorithm generates an absolutely continuous distribution relative to some a priori measure and the Radon-Nikodym derivative has subgaussian concentration. Applications are bounds for the Gibbs algorithm and randomizations of stable deterministic algorithms as well as PAC-Bayesian bounds with data-dependent priors. | [
"['Andreas Maurer']"
] |
null | null | 2405.14472 | null | null | http://arxiv.org/pdf/2405.14472v2 | 2024-05-30T07:29:58Z | 2024-05-23T12:00:35Z | SolNet: Open-source deep learning models for photovoltaic power
forecasting across the globe | Deep learning models have gained increasing prominence in recent years in the field of solar photovoltaic (PV) forecasting. One drawback of these models is that they require a lot of high-quality data to perform well. This is often infeasible in practice, due to poor measurement infrastructure in legacy systems and the rapid build-up of new solar systems across the world. This paper proposes SolNet: a novel, general-purpose, multivariate solar power forecaster, which addresses these challenges by using a two-step forecasting pipeline which incorporates transfer learning from abundant synthetic data generated from PVGIS, before fine-tuning on observational data. Using actual production data from hundreds of sites in the Netherlands, Australia and Belgium, we show that SolNet improves forecasting performance over data-scarce settings as well as baseline models. We find transfer learning benefits to be the strongest when only limited observational data is available. At the same time, we provide several guidelines and considerations for transfer learning practitioners, as our results show that weather data, seasonal patterns, amount of synthetic data and possible mis-specification in source location can have a major impact on the results. The SolNet models created in this way are applicable for any land-based solar photovoltaic system across the planet where simulated and observed data can be combined to obtain improved forecasting capabilities. | [
"['Joris Depoortere' 'Johan Driesen' 'Johan Suykens' 'Hussain Syed Kazmi']"
] |
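The two-step pipeline in the SolNet entry above — pre-train on abundant synthetic PVGIS data, then fine-tune on scarce site measurements — can be sketched as follows. The model, data shapes, and hyperparameters are toy placeholders, not the paper's configuration.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(24, 64), nn.ReLU(), nn.Linear(64, 1))

def fit(x, y, epochs, lr):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        nn.functional.mse_loss(model(x), y).backward()
        opt.step()

x_syn, y_syn = torch.randn(5000, 24), torch.randn(5000, 1)  # stands in for PVGIS simulations
x_obs, y_obs = torch.randn(100, 24), torch.randn(100, 1)    # scarce observational data

fit(x_syn, y_syn, epochs=50, lr=1e-3)   # step 1: transfer from synthetic source
fit(x_obs, y_obs, epochs=20, lr=1e-4)   # step 2: fine-tune with a smaller learning rate
```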
null | null | 2405.14473 | null | null | http://arxiv.org/pdf/2405.14473v1 | 2024-05-23T12:02:54Z | 2024-05-23T12:02:54Z | Poisson Variational Autoencoder | Variational autoencoders (VAE) employ Bayesian inference to interpret sensory inputs, mirroring processes that occur in primate vision across both ventral (Higgins et al., 2021) and dorsal (Vafaii et al., 2023) pathways. Despite their success, traditional VAEs rely on continuous latent variables, which deviates sharply from the discrete nature of biological neurons. Here, we developed the Poisson VAE (P-VAE), a novel architecture that combines principles of predictive coding with a VAE that encodes inputs into discrete spike counts. Combining Poisson-distributed latent variables with predictive coding introduces a metabolic cost term in the model loss function, suggesting a relationship with sparse coding which we verify empirically. Additionally, we analyze the geometry of learned representations, contrasting the P-VAE to alternative VAE models. We find that the P-VAE encodes its inputs in relatively higher dimensions, facilitating linear separability of categories in a downstream classification task with a much better (5x) sample efficiency. Our work provides an interpretable computational framework to study brain-like sensory processing and paves the way for a deeper understanding of perception as an inferential process. | [
"['Hadi Vafaii' 'Dekel Galor' 'Jacob L. Yates']"
] |
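The metabolic cost in the P-VAE entry above can be made concrete: the KL divergence between two Poisson distributions has a closed form whose rate term penalizes expected spike counts. The sketch below shows only that rate penalty under assumed names (`poisson_kl`, `pvae_loss`); the paper's full architecture additionally involves predictive coding and reparameterized spike counts.

```python
import torch

def poisson_kl(rate: torch.Tensor, prior_rate: float = 1.0) -> torch.Tensor:
    # KL( Poisson(rate) || Poisson(prior_rate) )
    #   = rate * log(rate / prior_rate) + prior_rate - rate
    return rate * torch.log(rate / prior_rate) + prior_rate - rate

def pvae_loss(x, x_hat, rate, beta=1.0):
    recon = torch.nn.functional.mse_loss(x_hat, x, reduction="sum")
    return recon + beta * poisson_kl(rate.clamp_min(1e-6)).sum()
```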
null | null | 2405.14477 | null | null | http://arxiv.org/pdf/2405.14477v1 | 2024-05-23T12:06:00Z | 2024-05-23T12:06:00Z | LiteVAE: Lightweight and Efficient Variational Autoencoders for Latent
Diffusion Models | Advances in latent diffusion models (LDMs) have revolutionized high-resolution image generation, but the design space of the autoencoder that is central to these systems remains underexplored. In this paper, we introduce LiteVAE, a family of autoencoders for LDMs that leverage the 2D discrete wavelet transform to enhance scalability and computational efficiency over standard variational autoencoders (VAEs) with no sacrifice in output quality. We also investigate the training methodologies and the decoder architecture of LiteVAE and propose several enhancements that improve the training dynamics and reconstruction quality. Our base LiteVAE model matches the quality of the established VAEs in current LDMs with a six-fold reduction in encoder parameters, leading to faster training and lower GPU memory requirements, while our larger model outperforms VAEs of comparable complexity across all evaluated metrics (rFID, LPIPS, PSNR, and SSIM). | [
"['Seyedmorteza Sadat' 'Jakob Buhmann' 'Derek Bradley' 'Otmar Hilliges'\n 'Romann M. Weber']"
] |
null | null | 2405.14492 | null | null | http://arxiv.org/pdf/2405.14492v1 | 2024-05-23T12:25:22Z | 2024-05-23T12:25:22Z | Iterative Methods for Full-Scale Gaussian Process Approximations for
Large Spatial Data | Gaussian processes are flexible probabilistic regression models which are widely used in statistics and machine learning. However, a drawback is their limited scalability to large data sets. To alleviate this, we consider full-scale approximations (FSAs) that combine predictive process methods and covariance tapering, thus approximating both global and local structures. We show how iterative methods can be used to reduce the computational costs for calculating likelihoods, gradients, and predictive distributions with FSAs. We introduce a novel preconditioner and show that it accelerates the conjugate gradient method's convergence speed and mitigates its sensitivity with respect to the FSA parameters and the eigenvalue structure of the original covariance matrix, and we demonstrate empirically that it outperforms a state-of-the-art pivoted Cholesky preconditioner. Further, we present a novel, accurate, and fast way to calculate predictive variances relying on stochastic estimations and iterative methods. In both simulated and real-world data experiments, we find that our proposed methodology achieves the same accuracy as Cholesky-based computations with a substantial reduction in computational time. Finally, we also compare different approaches for determining inducing points in predictive process and FSA models. All methods are implemented in a free C++ software library with high-level Python and R packages. | [
"['Tim Gyger' 'Reinhard Furrer' 'Fabio Sigrist']"
] |
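The entry above hinges on solving large linear systems with the (preconditioned) conjugate gradient method. A generic PCG routine, with a simple Jacobi preconditioner standing in for the paper's specialized one, shows where the preconditioner enters.

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-8, max_iter=500):
    """Preconditioned conjugate gradients for A x = b; M_inv approximates A^{-1}."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv @ r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv @ r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

rng = np.random.default_rng(0)
K = rng.normal(size=(200, 200))
A = K @ K.T + 200 * np.eye(200)      # symmetric positive definite test matrix
M_inv = np.diag(1.0 / np.diag(A))    # Jacobi preconditioner
b = rng.normal(size=200)
print(np.linalg.norm(A @ pcg(A, b, M_inv) - b))
```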
null | null | 2405.14494 | null | null | http://arxiv.org/pdf/2405.14494v1 | 2024-05-23T12:26:25Z | 2024-05-23T12:26:25Z | Entrywise error bounds for low-rank approximations of kernel matrices | In this paper, we derive entrywise error bounds for low-rank approximations of kernel matrices obtained using the truncated eigen-decomposition (or singular value decomposition). While this approximation is well-known to be optimal with respect to the spectral and Frobenius norm error, little is known about the statistical behaviour of individual entries. Our error bounds fill this gap. A key technical innovation is a delocalisation result for the eigenvectors of the kernel matrix corresponding to small eigenvalues, which takes inspiration from the field of Random Matrix Theory. Finally, we validate our theory with an empirical study of a collection of synthetic and real-world datasets. | [
"['Alexander Modell']"
] |
null | null | 2405.14496 | null | null | http://arxiv.org/pdf/2405.14496v1 | 2024-05-23T12:28:16Z | 2024-05-23T12:28:16Z | Hybrid Global Causal Discovery with Local Search | Learning the unique directed acyclic graph corresponding to an unknown causal model is a challenging task. Methods based on functional causal models can identify a unique graph, but either suffer from the curse of dimensionality or impose strong parametric assumptions. To address these challenges, we propose a novel hybrid approach for global causal discovery in observational data that leverages local causal substructures. We first present a topological sorting algorithm that leverages ancestral relationships in linear structural equation models to establish a compact top-down hierarchical ordering, encoding more causal information than linear orderings produced by existing methods. We demonstrate that this approach generalizes to nonlinear settings with arbitrary noise. We then introduce a nonparametric constraint-based algorithm that prunes spurious edges by searching for local conditioning sets, achieving greater accuracy than current methods. We provide theoretical guarantees for correctness and worst-case polynomial time complexities, with empirical validation on synthetic data. | [
"['Sujai Hiremath' 'Jacqueline R. M. A. Maasch' 'Mengxiao Gao'\n 'Promit Ghosal' 'Kyra Gan']"
] |
null | null | 2405.14505 | null | null | http://arxiv.org/abs/2405.14505v1 | 2024-05-23T12:43:06Z | 2024-05-23T12:43:06Z | Explainable automatic industrial carbon footprint estimation from bank
transaction classification using natural language processing | Concerns about the effect of greenhouse gases have motivated the development of certification protocols to quantify the industrial carbon footprint (CF). These protocols are manual, work-intensive, and expensive. All of the above have led to a shift towards automatic data-driven approaches to estimate the CF, including Machine Learning (ML) solutions. Unfortunately, the decision-making processes involved in these solutions lack transparency from the end user's point of view, who must blindly trust their outcomes compared to intelligible traditional manual approaches. In this research, manual and automatic methodologies for CF estimation were reviewed, taking into account their transparency limitations. This analysis led to the proposal of a new explainable ML solution for automatic CF calculations through bank transaction classification. Consideration should be given to the fact that no previous research has considered the explainability of bank transaction classification for this purpose. For classification, different ML models have been employed based on their promising performance in the literature, such as Support Vector Machine, Random Forest, and Recursive Neural Networks. The results obtained were in the 90% range for accuracy, precision, and recall evaluation metrics. From their decision paths, the proposed solution estimates the CO2 emissions associated with bank transactions. The explainability methodology is based on an agnostic evaluation of the influence of the input terms extracted from the descriptions of transactions using locally interpretable models. The explainability terms were automatically validated using a similarity metric over the descriptions of the target categories. In conclusion, the explanation performance is satisfactory in terms of the proximity of the explanations to the associated activity sector descriptions. | [
"['Jaime González-González' 'Silvia García-Méndez'\n 'Francisco de Arriba-Pérez' 'Francisco J. González-Castaño'\n 'Óscar Barba-Seara']"
] |
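An illustrative-only pipeline in the spirit of the entry above: classify transaction descriptions with a TF-IDF + linear SVM model, then map the predicted sector to an emission factor. The tiny dataset and the factor table are fabricated placeholders, not values from the paper.

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

texts = ["fuel station purchase", "office electricity bill", "air travel booking"]
labels = ["transport", "energy", "transport"]

clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(texts, labels)

KG_CO2_PER_EUR = {"transport": 0.45, "energy": 0.30}    # assumed emission factors
sector = clf.predict(["diesel refill highway"])[0]
print(sector, 120.0 * KG_CO2_PER_EUR[sector], "kg CO2e")  # a 120 EUR transaction
```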
null | null | 2405.14507 | null | null | http://arxiv.org/pdf/2405.14507v1 | 2024-05-23T12:45:29Z | 2024-05-23T12:45:29Z | Unchosen Experts Can Contribute Too: Unleashing MoE Models' Power by
Self-Contrast | Mixture-of-Experts (MoE) has emerged as a prominent architecture for scaling model size while maintaining computational efficiency. In MoE, each token in the input sequence activates a different subset of experts determined by a routing mechanism. However, the unchosen experts in MoE models do not contribute to the output, potentially leading to underutilization of the model's capacity. In this work, we first conduct exploratory studies to demonstrate that increasing the number of activated experts does not necessarily improve and can even degrade the output quality. Then, we show that output distributions from an MoE model using different routing strategies substantially differ, indicating that different experts do not always act synergistically. Motivated by these findings, we propose Self-Contrast Mixture-of-Experts (SCMoE), a training-free strategy that utilizes unchosen experts in a self-contrast manner during inference. In SCMoE, the next-token probabilities are determined by contrasting the outputs from strong and weak activation using the same MoE model. Our method is conceptually simple and computationally lightweight, as it incurs minimal latency compared to greedy decoding. Experiments on several benchmarks (GSM8K, StrategyQA, MBPP and HumanEval) demonstrate that SCMoE can consistently enhance Mixtral 8x7B's reasoning capability across various domains. For example, it improves the accuracy on GSM8K from 61.79 to 66.94. Moreover, combining SCMoE with self-consistency yields additional gains, increasing major@20 accuracy from 75.59 to 78.31. | [
"['Chufan Shi' 'Cheng Yang' 'Xinyu Zhu' 'Jiahao Wang' 'Taiqiang Wu'\n 'Siheng Li' 'Deng Cai' 'Yujiu Yang' 'Yu Meng']"
] |
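The self-contrast rule described above fits in a few lines: next-token scores amplify what a "strong" routing of the MoE knows beyond a "weak" routing of the same model. The names `strong_logits`, `weak_logits`, and `beta` are illustrative assumptions; the paper's exact combination rule may differ.

```python
import torch

def scmoe_logits(strong_logits: torch.Tensor, weak_logits: torch.Tensor,
                 beta: float = 1.0) -> torch.Tensor:
    # amplify the strong routing's advantage over the weak routing
    return strong_logits + beta * (strong_logits - weak_logits)

strong = torch.randn(32000)   # e.g. logits under the default top-2 routing
weak = torch.randn(32000)     # e.g. logits under a weaker routing strategy
next_token = scmoe_logits(strong, weak).argmax()
```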
null | null | 2405.14517 | null | null | http://arxiv.org/pdf/2405.14517v1 | 2024-05-23T12:54:25Z | 2024-05-23T12:54:25Z | Identity Inference from CLIP Models using Only Textual Data | The widespread usage of large-scale multimodal models like CLIP has heightened concerns about the leakage of personally identifiable information (PII). Existing methods for identity inference in CLIP models, i.e., to detect the presence of a person's PII used for training a CLIP model, require querying the model with full PII, including textual descriptions of the person and corresponding images (e.g., the name and the face photo of the person). However, this may lead to a potential privacy breach of the image, as it may not have been seen by the target model yet. Additionally, traditional membership inference attacks (MIAs) train shadow models to mimic the behaviors of the target model, which incurs high computational costs, especially for large CLIP models. To address these challenges, we propose a textual unimodal detector (TUNI) in CLIP models, a novel method for ID inference that 1) queries the target model with only text data; and 2) does not require training shadow models. Firstly, we develop a feature extraction algorithm, guided by the CLIP model, to extract features from a text description. TUNI starts by randomly generating textual gibberish strings that were clearly not utilized for training, and leverages their feature vectors to train a system of anomaly detectors. During inference, the feature vector of each test text is fed into the anomaly detectors to determine if the person's PII is in the training set (abnormal) or not (normal). Moreover, TUNI can be further strengthened by integrating real images associated with the tested individuals, if available at the detector. Extensive experiments of TUNI across various CLIP model architectures and datasets demonstrate its superior performance over baselines, albeit with only text data. | [
"['Songze Li' 'Ruoxi Cheng' 'Xiaojun Jia']"
] |
null | null | 2405.14519 | null | null | http://arxiv.org/pdf/2405.14519v1 | 2024-05-23T13:01:36Z | 2024-05-23T13:01:36Z | A New Formulation for Zeroth-Order Optimization of Adversarial EXEmples
in Malware Detection | Machine learning malware detectors are vulnerable to adversarial EXEmples, i.e. carefully-crafted Windows programs tailored to evade detection. Unlike other adversarial problems, attacks in this context must be functionality-preserving, a constraint which is challenging to address. As a consequence, heuristic algorithms that inject new content, either randomly picked or harvested from legitimate programs, are typically used. In this paper, we show how attacks on learning-based malware detectors can be cast within a zeroth-order optimization framework which allows functionality-preserving manipulations to be incorporated. This permits the deployment of sound and efficient gradient-free optimization algorithms, which come with theoretical guarantees and allow for minimal hyper-parameter tuning. As a by-product, we propose and study ZEXE, a novel zero-order attack against Windows malware detection. Compared to state-of-the-art techniques, ZEXE provides a drastic improvement in the evasion rate, while reducing the size of the injected content to less than one third. | [
"['Marco Rando' 'Luca Demetrio' 'Lorenzo Rosasco' 'Fabio Roli']"
] |
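The gradient-free machinery such attacks build on is generic: estimate a descent direction from black-box queries alone. A standard two-point zeroth-order estimator, here minimizing a placeholder objective rather than a real detector's score, looks like this.

```python
import numpy as np

def zo_gradient(f, x, mu=1e-2, n_dirs=20, rng=np.random.default_rng(0)):
    """Two-point zeroth-order gradient estimate of a black-box f at x."""
    g = np.zeros_like(x)
    for _ in range(n_dirs):
        u = rng.normal(size=x.shape)
        g += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
    return g / n_dirs

f = lambda x: np.sum(x ** 2)       # placeholder for a detector's confidence score
x = np.ones(8)
for _ in range(100):               # plain zeroth-order descent
    x -= 0.05 * zo_gradient(f, x)
print(round(f(x), 6))
```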
null | null | 2405.14521 | null | null | http://arxiv.org/pdf/2405.14521v1 | 2024-05-23T13:03:23Z | 2024-05-23T13:03:23Z | Synthetic Data Generation for Intersectional Fairness by Leveraging
Hierarchical Group Structure | In this paper, we introduce a data augmentation approach specifically tailored to enhance intersectional fairness in classification tasks. Our method capitalizes on the hierarchical structure inherent to intersectionality, by viewing groups as intersections of their parent categories. This perspective allows us to augment data for smaller groups by learning a transformation function that combines data from these parent groups. Our empirical analysis, conducted on four diverse datasets including both text and images, reveals that classifiers trained with this data augmentation approach achieve superior intersectional fairness and are more robust to "leveling down" when compared to methods optimizing traditional group fairness metrics. | [
"['Gaurav Maheshwari' 'Aurélien Bellet' 'Pascal Denis' 'Mikaela Keller']"
] |
null | null | 2405.14522 | null | null | http://arxiv.org/pdf/2405.14522v1 | 2024-05-23T13:03:26Z | 2024-05-23T13:03:26Z | Explaining Black-box Model Predictions via Two-level Nested Feature
Attributions with Consistency Property | Techniques that explain the predictions of black-box machine learning models are crucial to make the models transparent, thereby increasing trust in AI systems. The input features to the models often have a nested structure that consists of high- and low-level features, and each high-level feature is decomposed into multiple low-level features. For such inputs, both high-level feature attributions (HiFAs) and low-level feature attributions (LoFAs) are important for better understanding the model's decision. In this paper, we propose a model-agnostic local explanation method that effectively exploits the nested structure of the input to estimate the two-level feature attributions simultaneously. A key idea of the proposed method is to introduce the consistency property that should exist between the HiFAs and LoFAs, thereby bridging the separate optimization problems for estimating them. Thanks to this consistency property, the proposed method can produce HiFAs and LoFAs that are both faithful to the black-box models and consistent with each other, using a smaller number of queries to the models. In experiments on image classification in multiple instance learning and text classification using language models, we demonstrate that the HiFAs and LoFAs estimated by the proposed method are accurate, faithful to the behaviors of the black-box models, and provide consistent explanations. | [
"['Yuya Yoshikawa' 'Masanari Kimura' 'Ryotaro Shimizu' 'Yuki Saito']"
] |
null | null | 2405.14527 | null | null | http://arxiv.org/pdf/2405.14527v2 | 2024-07-03T12:39:58Z | 2024-05-23T13:11:49Z | ArchesWeather: An efficient AI weather forecasting model at 1.5°
resolution | One of the guiding principles for designing AI-based weather forecasting systems is to embed physical constraints as inductive priors in the neural network architecture. A popular prior is locality, where the atmospheric data is processed with local neural interactions, like 3D convolutions or 3D local attention windows as in Pangu-Weather. On the other hand, some works have shown great success in weather forecasting without this locality principle, at the cost of a much higher parameter count. In this paper, we show that the 3D local processing in Pangu-Weather is computationally sub-optimal. We design ArchesWeather, a transformer model that combines 2D attention with a column-wise attention-based feature interaction module, and demonstrate that this design improves forecasting skill. ArchesWeather is trained at 1.5° resolution and 24h lead time, with a training budget of a few GPU-days and a lower inference cost than competing methods. An ensemble of four of our models shows better RMSE scores than the IFS HRES and is competitive with the 1.4° 50-member NeuralGCM ensemble for one- to three-day-ahead forecasting. Our code and models are publicly available at https://github.com/gcouairon/ArchesWeather. | [
"['Guillaume Couairon' 'Christian Lessig' 'Anastase Charantonis'\n 'Claire Monteleoni']"
] |
null | null | 2405.14528 | null | null | http://arxiv.org/pdf/2405.14528v1 | 2024-05-23T13:14:08Z | 2024-05-23T13:14:08Z | Towards Privacy-Aware and Personalised Assistive Robots: A User-Centred
Approach | The global increase in the elderly population necessitates innovative long-term care solutions to improve the quality of life for vulnerable individuals while reducing caregiver burdens. Assistive robots, leveraging advancements in Machine Learning, offer promising personalised support. However, their integration into daily life raises significant privacy concerns. Widely used frameworks like the Robot Operating System (ROS) historically lack inherent privacy mechanisms, complicating data-driven approaches in robotics. This research pioneers user-centric, privacy-aware technologies such as Federated Learning (FL) to advance assistive robotics. FL enables collaborative learning without sharing sensitive data, addressing privacy and scalability issues. This work includes developing solutions for smart wheelchair assistance, enhancing user independence and well-being. By tackling challenges related to non-stationary data and heterogeneous environments, the research aims to improve personalisation and user experience. Ultimately, it seeks to lead the responsible integration of assistive robots into society, enhancing the quality of life for elderly and care-dependent individuals. | [
"['Fernando E. Casado']"
] |
null | null | 2405.14532 | null | null | http://arxiv.org/pdf/2405.14532v1 | 2024-05-23T13:18:51Z | 2024-05-23T13:18:51Z | Aligning Embeddings and Geometric Random Graphs: Informational Results
and Computational Approaches for the Procrustes-Wasserstein Problem | The Procrustes-Wasserstein problem consists in matching two high-dimensional point clouds in an unsupervised setting, and has many applications in natural language processing and computer vision. We consider a planted model with two datasets $X,Y$ that consist of $n$ datapoints in $\mathbb{R}^d$, where $Y$ is a noisy version of $X$, up to an orthogonal transformation and a relabeling of the data points. This setting is related to the graph alignment problem in geometric models. In this work, we focus on the Euclidean transport cost between the point clouds as a measure of performance for the alignment. We first establish information-theoretic results, in the high ($d \gg \log n$) and low ($d \ll \log n$) dimensional regimes. We then study computational aspects and propose the Ping-Pong algorithm, alternately estimating the orthogonal transformation and the relabeling, initialized via a Frank-Wolfe convex relaxation. We give sufficient conditions for the method to retrieve the planted signal after a single step. We provide experimental results to compare the proposed approach with the state-of-the-art method of Grave et al. (2019). | [
"['Mathieu Even' 'Luca Ganassali' 'Jakob Maier' 'Laurent Massoulié']"
] |
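The Ping-Pong algorithm above alternates two classical subproblems: orthogonal Procrustes (solved in closed form by an SVD) and relabeling (solved by linear assignment). A sketch under simplified assumptions — identity initialization instead of the Frank-Wolfe relaxation:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def ping_pong(X, Y, iters=20):
    perm = np.arange(len(X))
    for _ in range(iters):
        U, _, Vt = np.linalg.svd(X.T @ Y[perm])   # orthogonal Procrustes step
        Q = U @ Vt
        cost = -(X @ Q) @ Y.T                     # relabeling step: maximize inner products
        _, perm = linear_sum_assignment(cost)
    return Q, perm

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
Q_true = np.linalg.qr(rng.normal(size=(5, 5)))[0]
Y = (X @ Q_true + 0.01 * rng.normal(size=X.shape))[rng.permutation(100)]
Q, perm = ping_pong(X, Y)
print(np.linalg.norm(X @ Q - Y[perm]))            # small residual on recovery
```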
null | null | 2405.14536 | null | null | http://arxiv.org/pdf/2405.14536v1 | 2024-05-23T13:22:17Z | 2024-05-23T13:22:17Z | Regressor-free Molecule Generation to Support Drug Response Prediction | Drug response prediction (DRP) is a crucial phase in drug discovery, and the most important metric for its evaluation is the IC50 score. DRP results are heavily dependent on the quality of the generated molecules. Existing molecule generation methods typically employ classifier-based guidance, enabling sampling within the IC50 classification range. However, these methods fail to ensure the effectiveness of the sampling space, generating numerous ineffective molecules. Through experimental and theoretical study, we hypothesize that conditional generation based on the target IC50 score can obtain a more effective sampling space. As a result, we introduce regressor-free guidance molecule generation to ensure sampling within a more effective space and support DRP. Regressor-free guidance combines a diffusion model's score estimation with a regression controller model's gradient based on number labels. To effectively map regression labels between drugs and cell lines, we design a common-sense numerical knowledge graph that constrains the order of text representations. Experimental results on the real-world dataset for the DRP task demonstrate our method's effectiveness in drug discovery. The code is available at: https://anonymous.4open.science/r/RMCD-DBD1. | [
"['Kun Li' 'Xiuwen Gong' 'Shirui Pan' 'Jia Wu' 'Bo Du' 'Wenbin Hu']"
] |
null | null | 2405.14540 | null | null | http://arxiv.org/pdf/2405.14540v1 | 2024-05-23T13:22:59Z | 2024-05-23T13:22:59Z | This Too Shall Pass: Removing Stale Observations in Dynamic Bayesian
Optimization | Bayesian Optimization (BO) has proven to be very successful at optimizing a static, noisy, costly-to-evaluate black-box function $f : \mathcal{S} \to \mathbb{R}$. However, optimizing a black-box which is also a function of time (i.e., a dynamic function) $f : \mathcal{S} \times \mathcal{T} \to \mathbb{R}$ remains a challenge, since a dynamic Bayesian Optimization (DBO) algorithm has to keep track of the optimum over time. This changes the nature of the optimization problem in at least three aspects: (i) querying an arbitrary point in $\mathcal{S} \times \mathcal{T}$ is impossible, (ii) past observations become less and less relevant for keeping track of the optimum as time goes by and (iii) the DBO algorithm must have a high sampling frequency so it can collect enough relevant observations to keep track of the optimum through time. In this paper, we design a Wasserstein distance-based criterion able to quantify the relevancy of an observation with respect to future predictions. Then, we leverage this criterion to build W-DBO, a DBO algorithm able to remove irrelevant observations from its dataset on the fly, thus maintaining simultaneously a good predictive performance and a high sampling frequency, even in continuous-time optimization tasks with unknown horizon. Numerical experiments establish the superiority of W-DBO, which outperforms state-of-the-art methods by a comfortable margin. | [
"['Anthony Bardou' 'Patrick Thiran' 'Giovanni Ranieri']"
] |
null | null | 2405.14544 | null | null | http://arxiv.org/pdf/2405.14544v1 | 2024-05-23T13:24:38Z | 2024-05-23T13:24:38Z | Nuclear Norm Regularization for Deep Learning | Penalizing the nuclear norm of a function's Jacobian encourages it to locally behave like a low-rank linear map. Such functions vary locally along only a handful of directions, making the Jacobian nuclear norm a natural regularizer for machine learning problems. However, this regularizer is intractable for high-dimensional problems, as it requires computing a large Jacobian matrix and taking its singular value decomposition. We show how to efficiently penalize the Jacobian nuclear norm using techniques tailor-made for deep learning. We prove that for functions parametrized as compositions $f = g \circ h$, one may equivalently penalize the average squared Frobenius norm of $Jg$ and $Jh$. We then propose a denoising-style approximation that avoids the Jacobian computations altogether. Our method is simple, efficient, and accurate, enabling Jacobian nuclear norm regularization to scale to high-dimensional deep learning problems. We complement our theory with an empirical study of our regularizer's performance and investigate applications to denoising and representation learning. | [
"['Christopher Scarvelis' 'Justin Solomon']"
] |
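The compositional identity stated above — for $f = g \circ h$, penalize the average squared Frobenius norms of $Jg$ and $Jh$ — pairs naturally with a Hutchinson-style probe, since $\mathbb{E}_v \|Jv\|^2 = \|J\|_F^2$ for $v \sim \mathcal{N}(0, I)$. This single-probe sketch shows that surrogate only; the paper's denoising-style approximation avoids Jacobian-vector products altogether.

```python
import torch

def sq_fro_jacobian(fn, x):
    """One-sample Hutchinson estimate of the squared Jacobian Frobenius norm."""
    v = torch.randn_like(x)                        # random probe vector
    _, jvp = torch.autograd.functional.jvp(fn, (x,), (v,), create_graph=True)
    return jvp.pow(2).sum() / x.shape[0]           # E ||J v||^2 = ||J||_F^2

h = torch.nn.Sequential(torch.nn.Linear(10, 32), torch.nn.Tanh())
g = torch.nn.Linear(32, 3)

x = torch.randn(8, 10)
z = h(x)
task_loss = g(z).pow(2).mean()                     # placeholder objective
reg = 0.5 * (sq_fro_jacobian(h, x) + sq_fro_jacobian(g, z.detach()))
(task_loss + 1e-3 * reg).backward()
```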
null | null | 2405.14545 | null | null | http://arxiv.org/pdf/2405.14545v1 | 2024-05-23T13:25:20Z | 2024-05-23T13:25:20Z | A Cross-Field Fusion Strategy for Drug-Target Interaction Prediction | Drug-target interaction (DTI) prediction is a critical component of the drug discovery process. In the drug development engineering field, predicting novel drug-target interactions is extremely crucial. However, although existing methods have achieved high accuracy levels in predicting known drugs and drug targets, they fail to utilize global protein information during DTI prediction. This leads to an inability to effectively predict the interactions between novel drugs and their targets. As a result, a cross-field information fusion strategy is employed to acquire local and global protein information. Thus, we propose SiamDTI, a siamese drug-target interaction prediction method that utilizes a double-channel network structure for cross-field supervised learning. Experimental results on three benchmark datasets demonstrate that SiamDTI achieves higher accuracy levels than other state-of-the-art (SOTA) methods on novel drugs and targets. Additionally, SiamDTI's performance with known drugs and targets is comparable to that of SOTA approaches. The code is available at https://anonymous.4open.science/r/DDDTI-434D. | [
"['Hongzhi Zhang' 'Xiuwen Gong' 'Shirui Pan' 'Jia Wu' 'Bo Du' 'Wenbin Hu']"
] |
null | null | 2405.14547 | null | null | http://arxiv.org/pdf/2405.14547v1 | 2024-05-23T13:25:41Z | 2024-05-23T13:25:41Z | Causal Effect Identification in a Sub-Population with Latent Variables | The s-ID problem seeks to compute a causal effect in a specific sub-population from the observational data pertaining to the same sub-population (Abouei et al., 2023). This problem has been addressed when all the variables in the system are observable. In this paper, we consider an extension of the s-ID problem that allows for the presence of latent variables. To tackle the challenges induced by the presence of latent variables in a sub-population, we first extend the classical relevant graphical definitions, such as c-components and Hedges, initially defined for the so-called ID problem (Pearl, 1995; Tian & Pearl, 2002), to their new counterparts. Subsequently, we propose a sound algorithm for the s-ID problem with latent variables. | [
"['Amir Mohammad Abouei' 'Ehsan Mokhtarian' 'Negar Kiyavash'\n 'Matthias Grossglauser']"
] |
null | null | 2405.14555 | null | null | http://arxiv.org/pdf/2405.14555v4 | 2024-06-03T16:43:16Z | 2024-05-23T13:35:34Z | Subtle Biases Need Subtler Measures: Dual Metrics for Evaluating
Representative and Affinity Bias in Large Language Models | Research on Large Language Models (LLMs) has often neglected subtle biases that, although less apparent, can significantly influence the models' outputs toward particular social narratives. This study addresses two such biases within LLMs: representative bias, which denotes a tendency of LLMs to generate outputs that mirror the experiences of certain identity groups, and affinity bias, reflecting the models' evaluative preferences for specific narratives or viewpoints. We introduce two novel metrics to measure these biases: the Representative Bias Score (RBS) and the Affinity Bias Score (ABS), and present the Creativity-Oriented Generation Suite (CoGS), a collection of open-ended tasks such as short story writing and poetry composition, designed with customized rubrics to detect these subtle biases. Our analysis uncovers marked representative biases in prominent LLMs, with a preference for identities associated with being white, straight, and men. Furthermore, our investigation of affinity bias reveals distinctive evaluative patterns within each model, akin to 'bias fingerprints'. This trend is also seen in human evaluators, highlighting a complex interplay between human and machine bias perceptions. | [
"['Abhishek Kumar' 'Sarfaroz Yunusov' 'Ali Emami']"
] |
null | null | 2405.14558 | null | null | http://arxiv.org/pdf/2405.14558v1 | 2024-05-23T13:37:26Z | 2024-05-23T13:37:26Z | FUSE: Fast Unified Simulation and Estimation for PDEs | The joint prediction of continuous fields and statistical estimation of the underlying discrete parameters is a common problem for many physical systems, governed by PDEs. Hitherto, it has been separately addressed by employing operator learning surrogates for field prediction while using simulation-based inference (and its variants) for statistical parameter determination. Here, we argue that solving both problems within the same framework can lead to consistent gains in accuracy and robustness. To this end, we propose a novel and flexible formulation of the operator learning problem that allows jointly predicting continuous quantities and inferring distributions of discrete parameters, and thus amortizing the cost of both the inverse and the surrogate models to a joint pre-training step. We present the capabilities of the proposed methodology for predicting continuous and discrete biomarkers in full-body haemodynamics simulations under different levels of missing information. We also consider a test case for atmospheric large-eddy simulation of a two-dimensional dry cold bubble, where we infer both continuous time-series and information about the system's conditions. We present comparisons against different baselines to showcase significantly increased accuracy in both the inverse and the surrogate tasks. | [
"['Levi E. Lingsch' 'Dana Grund' 'Siddhartha Mishra' 'Georgios Kissas']"
] |
null | null | 2405.14567 | null | null | http://arxiv.org/pdf/2405.14567v2 | 2024-05-24T02:22:21Z | 2024-05-23T13:43:29Z | EHRMamba: Towards Generalizable and Scalable Foundation Models for
Electronic Health Records | Transformers have significantly advanced the modeling of Electronic Health Records (EHR), yet their deployment in real-world healthcare is limited by several key challenges. Firstly, the quadratic computational cost and insufficient context length of these models pose significant obstacles for hospitals in processing the extensive medical histories typical in EHR data. Additionally, existing models employ separate finetuning for each clinical task, complicating maintenance in healthcare environments. Moreover, these models focus exclusively on either clinical prediction or EHR forecasting, lacking the flexibility to perform well across both. To overcome these limitations, we introduce EHRMamba, a robust foundation model built on the Mamba architecture. EHRMamba can process sequences up to four times longer than previous models due to its linear computational cost. We also introduce a novel approach to Multitask Prompted Finetuning (MTF) for EHR data, which enables EHRMamba to simultaneously learn multiple clinical tasks in a single finetuning phase, significantly enhancing deployment and cross-task generalization. Furthermore, our model leverages the HL7 FHIR data standard to simplify integration into existing hospital systems. Alongside EHRMamba, we open-source Odyssey, a toolkit designed to support the development and deployment of EHR foundation models, with an emphasis on data standardization and interpretability. Our evaluations on the MIMIC-IV dataset demonstrate that EHRMamba advances state-of-the-art performance across 6 major clinical tasks and excels in EHR forecasting, marking a significant leap forward in the field. | [
"['Adibvafa Fallahpour' 'Mahshid Alinoori' 'Arash Afkanpour'\n 'Amrit Krishnan']"
] |
null | null | 2405.14573 | null | null | http://arxiv.org/pdf/2405.14573v2 | 2024-06-10T17:30:49Z | 2024-05-23T13:48:54Z | AndroidWorld: A Dynamic Benchmarking Environment for Autonomous Agents | Autonomous agents that execute human tasks by controlling computers can enhance human productivity and application accessibility. However, progress in this field will be driven by realistic and reproducible benchmarks. We present AndroidWorld, a fully functional Android environment that provides reward signals for 116 programmatic tasks across 20 real-world Android apps. Unlike existing interactive environments, which provide a static test set, AndroidWorld dynamically constructs tasks that are parameterized and expressed in natural language in unlimited ways, thus enabling testing on a much larger and more realistic suite of tasks. Reward signals are derived from the computer's system state, making them durable across task variations and extensible across different apps. To demonstrate AndroidWorld's benefits and mode of operation, we introduce a new computer control agent, M3A. M3A can complete 30.6% of the AndroidWorld's tasks, leaving ample room for future work. Furthermore, we adapt a popular desktop web agent to work on Android, which we find to be less effective on mobile, suggesting future research is needed to achieve universal, cross-domain agents. Finally, we conduct a robustness analysis by testing M3A against a range of task variations on a representative subset of tasks, demonstrating that variations in task parameters can significantly alter a task's complexity and, consequently, an agent's performance, highlighting the importance of testing agents under diverse conditions. AndroidWorld and the experiments in this paper are available at https://github.com/google-research/android_world. | [
"['Christopher Rawles' 'Sarah Clinckemaillie' 'Yifan Chang'\n 'Jonathan Waltz' 'Gabrielle Lau' 'Marybeth Fair' 'Alice Li'\n 'William Bishop' 'Wei Li' 'Folawiyo Campbell-Ajala' 'Daniel Toyama'\n 'Robert Berry' 'Divya Tyamagundlu' 'Timothy Lillicrap' 'Oriana Riva']"
] |
null | null | 2405.14574 | null | null | http://arxiv.org/pdf/2405.14574v1 | 2024-05-23T13:49:37Z | 2024-05-23T13:49:37Z | Learning with Fitzpatrick Losses | Fenchel-Young losses are a family of convex loss functions, encompassing the squared, logistic and sparsemax losses, among others. Each Fenchel-Young loss is implicitly associated with a link function, for mapping model outputs to predictions. For instance, the logistic loss is associated with the soft argmax link function. Can we build new loss functions associated with the same link function as Fenchel-Young losses? In this paper, we introduce Fitzpatrick losses, a new family of convex loss functions based on the Fitzpatrick function. A well-known theoretical tool in maximal monotone operator theory, the Fitzpatrick function naturally leads to a refined Fenchel-Young inequality, making Fitzpatrick losses tighter than Fenchel-Young losses, while maintaining the same link function for prediction. As an example, we introduce the Fitzpatrick logistic loss and the Fitzpatrick sparsemax loss, counterparts of the logistic and the sparsemax losses. This yields two new tighter losses associated with the soft argmax and the sparse argmax, two of the most ubiquitous output layers used in machine learning. We study in detail the properties of Fitzpatrick losses and, in particular, show that they can be seen as Fenchel-Young losses using a modified, target-dependent generating function. We demonstrate the effectiveness of Fitzpatrick losses for label proportion estimation. | [
"['Seta Rakotomandimby' 'Jean-Philippe Chancelier' 'Michel de Lara'\n 'Mathieu Blondel']"
] |
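For reference, the Fitzpatrick function of a maximal monotone operator $A$ — reproduced here from the standard definition (Fitzpatrick, 1988), not from the paper itself — is

```latex
F_A(x, x^*) \;=\; \sup_{(y,\, y^*) \in \operatorname{gra} A}
  \big( \langle x, y^* \rangle + \langle y, x^* \rangle - \langle y, y^* \rangle \big).
```

It satisfies $F_A(x, x^*) \ge \langle x, x^* \rangle$ everywhere, with equality exactly on the graph of $A$; taking $A = \partial\Omega$ sandwiches the pairing between $F_A$ and the Fenchel sum $\Omega(x) + \Omega^*(x^*)$, which is the tightening the abstract alludes to.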
null | null | 2405.14577 | null | null | http://arxiv.org/pdf/2405.14577v1 | 2024-05-23T13:51:55Z | 2024-05-23T13:51:55Z | Representation noising effectively prevents harmful fine-tuning on LLMs | Releasing open-source large language models (LLMs) presents a dual-use risk since bad actors can easily fine-tune these models for harmful purposes. Even without the open release of weights, weight stealing and fine-tuning APIs make closed models vulnerable to harmful fine-tuning attacks (HFAs). While safety measures like preventing jailbreaks and improving safety guardrails are important, such measures can easily be reversed through fine-tuning. In this work, we propose Representation Noising (RepNoise), a defence mechanism that is effective even when attackers have access to the weights and the defender no longer has any control. RepNoise works by removing information about harmful representations such that it is difficult to recover them during fine-tuning. Importantly, our defence is also able to generalize across different subsets of harm that have not been seen during the defence process. Our method does not degrade the general capability of LLMs and retains the ability to train the model on harmless tasks. We provide empirical evidence that the effectiveness of our defence lies in its "depth": the degree to which information about harmful representations is removed across all layers of the LLM. | [
"['Domenic Rosati' 'Jan Wehner' 'Kai Williams' 'Łukasz Bartoszcze'\n 'David Atanasov' 'Robie Gonzales' 'Subhabrata Majumdar' 'Carsten Maple'\n 'Hassan Sajjad' 'Frank Rudzicz']"
] |
null | null | 2405.14578 | null | null | http://arxiv.org/pdf/2405.14578v3 | 2024-06-04T09:11:13Z | 2024-05-23T13:52:36Z | Surge Phenomenon in Optimal Learning Rate and Batch Size Scaling | In current deep learning tasks, Adam-style optimizers such as Adam, Adagrad, RMSProp, Adafactor, and Lion have been widely used as alternatives to SGD-style optimizers. These optimizers typically update model parameters using the sign of gradients, resulting in more stable convergence curves. The learning rate and the batch size are the most critical hyperparameters for optimizers, which require careful tuning to enable effective convergence. Previous research has shown that the optimal learning rate increases linearly or follows similar rules with batch size for SGD-style optimizers. However, this conclusion is not applicable to Adam-style optimizers. In this paper, we elucidate the connection between optimal learning rates and batch sizes for Adam-style optimizers through both theoretical analysis and extensive experiments. First, we derive the scaling law between batch sizes and optimal learning rates in the sign-of-gradient case, proving that the optimal learning rate first rises and then falls as the batch size increases. Moreover, the peak value of the surge will gradually move toward the larger batch size as training progresses. Second, we conducted experiments on various CV and NLP tasks and verified the correctness of the scaling law. | [
"['Shuaipeng Li' 'Penghao Zhao' 'Hailin Zhang' 'Xingwu Sun' 'Hao Wu'\n 'Dian Jiao' 'Weiyan Wang' 'Chengjun Liu' 'Zheng Fang' 'Jinbao Xue'\n 'Yangyu Tao' 'Bin Cui' 'Di Wang']"
] |
null | null | 2405.14596 | null | null | http://arxiv.org/pdf/2405.14596v1 | 2024-05-23T14:11:26Z | 2024-05-23T14:11:26Z | Linear Mode Connectivity in Differentiable Tree Ensembles | Linear Mode Connectivity (LMC) refers to the phenomenon that performance remains consistent for linearly interpolated models in the parameter space. For independently optimized model pairs from different random initializations, achieving LMC is considered crucial for validating the stable success of the non-convex optimization in modern machine learning models and for facilitating practical parameter-based operations such as model merging. While LMC has been achieved for neural networks by considering the permutation invariance of neurons in each hidden layer, its attainment for other models remains an open question. In this paper, we first achieve LMC for soft tree ensembles, which are tree-based differentiable models extensively used in practice. We show the necessity of incorporating two invariances: subtree flip invariance and splitting order invariance, which do not exist in neural networks but are inherent to tree architectures, in addition to permutation invariance of trees. Moreover, we demonstrate that it is even possible to exclude such additional invariances while keeping LMC by designing decision list-based tree architectures, where such invariances do not exist by definition. Our findings indicate the significance of accounting for architecture-specific invariances in achieving LMC. | [
"['Ryuichi Kanoh' 'Mahito Sugiyama']"
] |
null | null | 2405.14597 | null | null | http://arxiv.org/pdf/2405.14597v2 | 2024-05-28T07:17:47Z | 2024-05-23T14:12:58Z | Integer Scale: A Free Lunch for Faster Fine-grained Quantization of LLMs | We introduce Integer Scale, a novel post-training quantization scheme for large language models that effectively resolves the inference bottleneck in current fine-grained quantization approaches while maintaining similar accuracies. Integer Scale is a free lunch as it requires no extra calibration or fine-tuning which will otherwise incur additional costs. It can be used plug-and-play for most fine-grained quantization methods. Its integration results in at most 1.85x end-to-end speed boost over the original counterpart with comparable accuracy. Additionally, due to the orchestration of the proposed Integer Scale and fine-grained quantization, we resolved the quantization difficulty for Mixtral-8x7B and LLaMA-3 models with negligible performance degradation, and it comes with an end-to-end speed boost of 2.13x, and 2.31x compared with their FP16 versions respectively. | [
"['Qingyuan Li' 'Ran Meng' 'Yiduo Li' 'Bo Zhang' 'Yifan Lu' 'Yerui Sun'\n 'Lin Ma' 'Yuchen Xie']"
] |
null | null | 2405.14598 | null | null | http://arxiv.org/pdf/2405.14598v2 | 2024-05-24T15:21:13Z | 2024-05-23T14:13:16Z | Visual Echoes: A Simple Unified Transformer for Audio-Visual Generation | In recent years, with the realistic generation results and a wide range of personalized applications, diffusion-based generative models have gained considerable attention in both visual and audio generation areas. Compared to the considerable advancements of text2image or text2audio generation, research in audio2visual or visual2audio generation has been relatively slow. The recent audio-visual generation methods usually resort to huge large language models or composable diffusion models. Instead of designing another giant model for audio-visual generation, in this paper we take a step back and show that a simple and lightweight generative transformer, not yet fully investigated in multi-modal generation, can achieve excellent results on image2audio generation. The transformer operates in the discrete audio and visual Vector-Quantized GAN space, and is trained in a mask-denoising manner. After training, classifier-free guidance can be deployed off the shelf, achieving better performance without any extra training or modification. Since the transformer model is modality-symmetric, it could also be directly deployed for audio2image generation and co-generation. In the experiments, we show that our simple method surpasses recent image2audio generation methods. Generated audio samples can be found at https://docs.google.com/presentation/d/1ZtC0SeblKkut4XJcRaDsSTuCRIXB3ypxmSi7HTY3IyQ/ | [
"['Shiqi Yang' 'Zhi Zhong' 'Mengjie Zhao' 'Shusuke Takahashi'\n 'Masato Ishii' 'Takashi Shibuya' 'Yuki Mitsufuji']"
] |
null | null | 2405.14599 | null | null | http://arxiv.org/pdf/2405.14599v1 | 2024-05-23T14:14:27Z | 2024-05-23T14:14:27Z | Neuroexplicit Diffusion Models for Inpainting of Optical Flow Fields | Deep learning has revolutionized the field of computer vision by introducing large-scale neural networks with millions of parameters. Training these networks requires massive datasets and leads to opaque models that can fail to generalize. At the other extreme, models designed from partial differential equations (PDEs) embed specialized domain knowledge into mathematical equations and usually rely on few manually chosen hyperparameters. This makes them transparent by construction, and if designed and calibrated carefully, they can generalize well to unseen scenarios. In this paper, we show how to bring model- and data-driven approaches together by combining the explicit PDE-based approaches with convolutional neural networks to obtain the best of both worlds. We illustrate a joint architecture for the task of inpainting optical flow fields and show that the combination of model- and data-driven modeling leads to an effective architecture. Our model outperforms both fully explicit and fully data-driven baselines in terms of reconstruction quality, robustness and amount of required training data. Averaging the endpoint error across different mask densities, our method outperforms the explicit baselines by 11-27%, the GAN baseline by 47% and the Probabilistic Diffusion baseline by 42%. With that, our method sets a new state of the art for inpainting of optical flow fields from random masks. | [
"['Tom Fischer' 'Pascal Peter' 'Joachim Weickert' 'Eddy Ilg']"
] |
null | null | 2405.14600 | null | null | http://arxiv.org/pdf/2405.14600v1 | 2024-05-23T14:16:44Z | 2024-05-23T14:16:44Z | Discretization of continuous input spaces in the hippocampal autoencoder | The hippocampus has been associated with both spatial cognition and episodic memory formation, but integrating these functions into a unified framework remains challenging. Here, we demonstrate that forming discrete memories of visual events in sparse autoencoder neurons can produce spatial tuning similar to hippocampal place cells. We then show that the resulting very high-dimensional code enables neurons to discretize and tile the underlying image space with minimal overlap. Additionally, we extend our results to the auditory domain, showing that neurons similarly tile the frequency space in an experience-dependent manner. Lastly, we show that reinforcement learning agents can effectively perform various visuo-spatial cognitive tasks using these sparse, very high-dimensional representations. | [
"['Adrian F. Amil' 'Ismael T. Freire' 'Paul F. M. J. Verschure']"
] |
null | null | 2405.14602 | null | null | http://arxiv.org/pdf/2405.14602v3 | 2024-05-28T09:55:11Z | 2024-05-23T14:17:01Z | Controllable Continual Test-Time Adaptation | Continual Test-Time Adaptation (CTTA) is an emerging and challenging task where a model trained in a source domain must adapt to continuously changing conditions during testing, without access to the original source data. CTTA is prone to error accumulation due to uncontrollable domain shifts, leading to blurred decision boundaries between categories. Existing CTTA methods primarily focus on suppressing domain shifts, which proves inadequate during the unsupervised test phase. In contrast, we introduce a novel approach that guides rather than suppresses these shifts. Specifically, we propose $\textbf{C}$ontrollable $\textbf{Co}$ntinual $\textbf{T}$est-$\textbf{T}$ime $\textbf{A}$daptation (C-CoTTA), which explicitly prevents any single category from encroaching on others, thereby mitigating the mutual influence between categories caused by uncontrollable shifts. Moreover, our method reduces the sensitivity of the model to domain transformations, thereby minimizing the magnitude of category shifts. Extensive quantitative experiments demonstrate the effectiveness of our method, while qualitative analyses, such as t-SNE plots, confirm the theoretical validity of our approach. | [
"['Ziqi Shi' 'Fan Lyu' 'Ye Liu' 'Fanhua Shang' 'Fuyuan Hu' 'Wei Feng'\n 'Zhang Zhang' 'Liang Wang']"
] |
null | null | 2405.14608 | null | null | http://arxiv.org/pdf/2405.14608v1 | 2024-05-23T14:21:35Z | 2024-05-23T14:21:35Z | ShapeFormer: Shapelet Transformer for Multivariate Time Series
Classification | Multivariate time series classification (MTSC) has attracted significant research attention due to its diverse real-world applications. Recently, exploiting transformers for MTSC has achieved state-of-the-art performance. However, existing methods focus on generic features, providing a comprehensive understanding of data, but they ignore class-specific features crucial for learning the representative characteristics of each class. This leads to poor performance in the case of imbalanced datasets or datasets with similar overall patterns but differing in minor class-specific details. In this paper, we propose a novel Shapelet Transformer (ShapeFormer), which comprises class-specific and generic transformer modules to capture both of these features. In the class-specific module, we introduce a discovery method to extract the discriminative subsequences of each class (i.e. shapelets) from the training set. We then propose a Shapelet Filter to learn the difference features between these shapelets and the input time series. We found that the difference feature for each shapelet contains important class-specific features, as it shows a significant distinction between its class and others. In the generic module, convolution filters are used to extract generic features that contain information to distinguish among all classes. For each module, we employ the transformer encoder to capture the correlation between their features. As a result, the combination of two transformer modules allows our model to exploit the power of both types of features, thereby enhancing the classification performance. Our experiments on 30 UEA MTSC datasets demonstrate that ShapeFormer has achieved the highest accuracy ranking compared to state-of-the-art methods. The code is available at https://github.com/xuanmay2701/shapeformer. | [
"['Xuan-May Le' 'Ling Luo' 'Uwe Aickelin' 'Minh-Tuan Tran']"
] |
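As a hedged illustration of what a shapelet "difference feature" can look like, the sketch below slides a candidate shapelet over a univariate series and returns the element-wise difference at the best-matching offset. The paper's learned Shapelet Filter goes beyond this fixed nearest-window match; the series, shapelet, and Euclidean distance here are illustrative assumptions.

```python
import numpy as np

def shapelet_match(series, shapelet):
    """Slide a shapelet over a univariate series; return the element-wise
    difference feature at the best-matching window, its distance, and the
    offset (small distance = the series contains a close copy of the pattern)."""
    L = len(shapelet)
    windows = np.lib.stride_tricks.sliding_window_view(series, L)
    d = np.linalg.norm(windows - shapelet, axis=1)
    best = int(np.argmin(d))
    return windows[best] - shapelet, d[best], best

rng = np.random.default_rng(1)
shapelet = np.sin(np.linspace(0, np.pi, 20))      # class-discriminative pattern
series = rng.normal(0, 0.3, 100)
series[40:60] += shapelet                          # embed the pattern
diff, dist, pos = shapelet_match(series, shapelet)
print(f"best offset={pos}, distance={dist:.3f}")   # expect offset near 40
```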
null | null | 2405.14616 | null | null | http://arxiv.org/pdf/2405.14616v1 | 2024-05-23T14:27:07Z | 2024-05-23T14:27:07Z | TimeMixer: Decomposable Multiscale Mixing for Time Series Forecasting | Time series forecasting is used in a wide range of applications, such as traffic planning and weather forecasting. However, real-world time series usually present intricate temporal variations, making forecasting extremely challenging. Going beyond the mainstream paradigms of plain decomposition and multiperiodicity analysis, we analyze temporal variations from a novel view of multiscale mixing, which is based on an intuitive but important observation that time series present distinct patterns at different sampling scales. The microscopic and the macroscopic information are reflected in fine and coarse scales respectively, and thereby complex variations can be inherently disentangled. Based on this observation, we propose TimeMixer as a fully MLP-based architecture with Past-Decomposable-Mixing (PDM) and Future-Multipredictor-Mixing (FMM) blocks to take full advantage of disentangled multiscale series in both past extraction and future prediction phases. Concretely, PDM applies the decomposition to multiscale series and further mixes the decomposed seasonal and trend components in fine-to-coarse and coarse-to-fine directions separately, which successively aggregates the microscopic seasonal and macroscopic trend information. FMM further ensembles multiple predictors to utilize complementary forecasting capabilities in multiscale observations. Consequently, TimeMixer is able to achieve consistent state-of-the-art performance in both long-term and short-term forecasting tasks with favorable run-time efficiency. | [
"['Shiyu Wang' 'Haixu Wu' 'Xiaoming Shi' 'Tengge Hu' 'Huakun Luo'\n 'Lintao Ma' 'James Y. Zhang' 'Jun Zhou']"
] |
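The two ingredients PDM builds on, multiscale views and a seasonal/trend split, can be sketched in a few lines. This assumes average pooling for downsampling and a moving-average trend, which are common choices; TimeMixer's actual mixing layers and hyperparameters are not reproduced here.

```python
import torch
import torch.nn.functional as F

def multiscale_views(x, n_scales=3):
    """x: (batch, length) series -> list of progressively coarser views."""
    views = [x]
    for _ in range(n_scales - 1):
        views.append(F.avg_pool1d(views[-1].unsqueeze(1), 2).squeeze(1))
    return views

def trend_seasonal(x, k=25):
    """Moving-average decomposition: trend = smoothed series, seasonal = rest.
    k must be odd so the replicate-padded pooling preserves the length."""
    pad = k // 2
    xp = F.pad(x.unsqueeze(1), (pad, pad), mode="replicate")
    trend = F.avg_pool1d(xp, k, stride=1).squeeze(1)
    return trend, x - trend

t = torch.linspace(0, 8 * torch.pi, 256)
x = (torch.sin(t) + 0.01 * t).unsqueeze(0)   # seasonal part + slow trend
for i, v in enumerate(multiscale_views(x)):
    trend, season = trend_seasonal(v)
    print(f"scale {i}: len={v.shape[-1]}, trend var={trend.var().item():.4f}")
```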
null | null | 2405.14620 | null | null | http://arxiv.org/pdf/2405.14620v1 | 2024-05-23T14:29:15Z | 2024-05-23T14:29:15Z | Closed-form Symbolic Solutions: A New Perspective on Solving Partial
Differential Equations | Solving partial differential equations (PDEs) in Euclidean space with closed-form symbolic solutions has long been a dream for mathematicians. Inspired by deep learning, Physics-Informed Neural Networks (PINNs) have shown great promise in numerically solving PDEs. However, since PINNs essentially approximate solutions within the continuous function space, their numerical solutions fall short in both precision and interpretability compared to symbolic solutions. This paper proposes a novel framework: a closed-form \textbf{Sym}bolic framework for \textbf{PDE}s (SymPDE), exploring the use of deep reinforcement learning to directly obtain symbolic solutions for PDEs. SymPDE alleviates the challenges PINNs face in fitting high-frequency and steeply changing functions. To our knowledge, no prior work has implemented this approach. Experiments on solving Poisson's equation and the heat equation in time-independent and spatiotemporal dynamical systems respectively demonstrate that SymPDE can provide accurate closed-form symbolic solutions for various types of PDEs. | [
"['Shu Wei' 'Yanjie Li' 'Lina Yu' 'Min Wu' 'Weijun Li' 'Meilan Hao'\n 'Wenqiang Li' 'Jingyi Liu' 'Yusong Deng']"
] |
null | null | 2405.14622 | null | null | http://arxiv.org/pdf/2405.14622v3 | 2024-05-31T16:37:53Z | 2024-05-23T14:30:33Z | Calibrated Self-Rewarding Vision Language Models | Large Vision-Language Models (LVLMs) have made substantial progress by integrating pre-trained large language models (LLMs) and vision models through instruction tuning. Despite these advancements, LVLMs often exhibit the hallucination phenomenon, where generated text responses appear linguistically plausible but contradict the input image, indicating a misalignment between image and text pairs. This misalignment arises because the model tends to prioritize textual information over visual input, even when both the language model and visual representations are of high quality. Existing methods leverage additional models or human annotations to curate preference data and enhance modality alignment through preference optimization. These approaches may not effectively reflect the target LVLM's preferences, making the curated preferences easily distinguishable. Our work addresses these challenges by proposing the Calibrated Self-Rewarding (CSR) approach, which enables the model to self-improve by iteratively generating candidate responses, evaluating the reward for each response, and curating preference data for fine-tuning. In the reward modeling, we employ a step-wise strategy and incorporate visual constraints into the self-rewarding process to place greater emphasis on visual input. Empirical results demonstrate that CSR enhances performance and reduces hallucinations across ten benchmarks and tasks, achieving substantial improvements over existing methods by 7.62%. Our empirical results are further supported by rigorous theoretical analysis, under mild assumptions, verifying the effectiveness of introducing visual constraints into the self-rewarding paradigm. Additionally, CSR shows compatibility with different vision-language models and the ability to incrementally improve performance through iterative fine-tuning. Our data and code are available at https://github.com/YiyangZhou/CSR. | [
"['Yiyang Zhou' 'Zhiyuan Fan' 'Dongjie Cheng' 'Sihan Yang' 'Zhaorun Chen'\n 'Chenhang Cui' 'Xiyao Wang' 'Yun Li' 'Linjun Zhang' 'Huaxiu Yao']"
] |
null | null | 2405.14623 | null | null | http://arxiv.org/pdf/2405.14623v2 | 2024-06-10T14:30:19Z | 2024-05-23T14:31:53Z | U-TELL: Unsupervised Task Expert Lifelong Learning | Continual learning (CL) models are designed to learn new tasks arriving sequentially without re-training the network. However, real-world ML applications have very limited label information and these models suffer from catastrophic forgetting. To address these issues, we propose an unsupervised CL model with task experts called Unsupervised Task Expert Lifelong Learning (U-TELL) to continually learn the data arriving in a sequence while addressing catastrophic forgetting. During training of U-TELL, we introduce a new expert on arrival of a new task. Our proposed architecture has task experts, a structured data generator and a task assigner. Each task expert is composed of three blocks: i) a variational autoencoder to capture the task distribution and perform data abstraction, ii) a k-means clustering module, and iii) a structure extractor to preserve the latent task data signature. During testing, the task assigner selects a suitable expert to perform clustering. U-TELL does not store or replay task samples; instead, we use generated structured samples to train the task assigner. We compared U-TELL with five SOTA unsupervised CL methods. U-TELL outperformed all baselines on seven benchmarks and one industry dataset for various CL scenarios with a training time over 6 times faster than the best performing baseline. | [
"['Indu Solomon' 'Aye Phyu Phyu Aung' 'Uttam Kumar' 'Senthilnath Jayavelu']"
] |
null | null | 2405.14629 | null | null | http://arxiv.org/pdf/2405.14629v1 | 2024-05-23T14:35:56Z | 2024-05-23T14:35:56Z | Which Experiences Are Influential for RL Agents? Efficiently Estimating
The Influence of Experiences | In reinforcement learning (RL) with experience replay, experiences stored in a replay buffer influence the RL agent's performance. Information about the influence of these experiences is valuable for various purposes, such as identifying experiences that negatively influence poorly performing RL agents. One method for estimating the influence of experiences is the leave-one-out (LOO) method. However, this method is usually computationally prohibitive. In this paper, we present Policy Iteration with Turn-over Dropout (PIToD), which efficiently estimates the influence of experiences. We evaluate how accurately PIToD estimates the influence of experiences and its efficiency compared to LOO. We then apply PIToD to amend poorly performing RL agents, i.e., we use PIToD to estimate negatively influential experiences for the RL agents and to delete the influence of these experiences. We show that RL agents' performance is significantly improved via amendments with PIToD. | [
"['Takuya Hiraoka' 'Guanquan Wang' 'Takashi Onishi' 'Yoshimasa Tsuruoka']"
] |
null | null | 2405.14630 | null | null | http://arxiv.org/pdf/2405.14630v1 | 2024-05-23T14:36:52Z | 2024-05-23T14:36:52Z | Bounds for the smallest eigenvalue of the NTK for arbitrary spherical
data of arbitrary dimension | Bounds on the smallest eigenvalue of the neural tangent kernel (NTK) are a key ingredient in the analysis of neural network optimization and memorization. However, existing results require distributional assumptions on the data and are limited to a high-dimensional setting, where the input dimension $d_0$ scales at least logarithmically in the number of samples $n$. In this work we remove both of these requirements and instead provide bounds in terms of a measure of the collinearity of the data: notably these bounds hold with high probability even when $d_0$ is held constant versus $n$. We prove our results through a novel application of the hemisphere transform. | [
"['Kedar Karhadkar' 'Michael Murray' 'Guido Montúfar']"
] |
null | null | 2405.14632 | null | null | http://arxiv.org/pdf/2405.14632v1 | 2024-05-23T14:39:35Z | 2024-05-23T14:39:35Z | Reinforcement Learning for Fine-tuning Text-to-speech Diffusion Models | Recent advancements in generative models have sparked significant interest within the machine learning community. Particularly, diffusion models have demonstrated remarkable capabilities in synthesizing images and speech. Studies such as those by Lee et al. [19], Black et al. [4], Wang et al. [36], and Fan et al. [8] illustrate that Reinforcement Learning with Human Feedback (RLHF) can enhance diffusion models for image synthesis. However, due to architectural differences between these models and those employed in speech synthesis, it remains uncertain whether RLHF could similarly benefit speech synthesis models. In this paper, we explore the practical application of RLHF to diffusion-based text-to-speech synthesis, leveraging the mean opinion score (MOS) as predicted by the UTokyo-SaruLab MOS prediction system [29] as a proxy loss. We introduce diffusion model loss-guided RL policy optimization (DLPO) and compare it against other RLHF approaches, employing the NISQA speech quality and naturalness assessment model [21] and human preference experiments for further evaluation. Our results show that RLHF can enhance diffusion-based text-to-speech synthesis models, and, moreover, DLPO can better improve diffusion models in generating natural, high-quality speech audio. | [
"['Jingyi Chen' 'Ju-Seung Byun' 'Micha Elsner' 'Andrew Perrault']"
] |
null | null | 2405.14645 | null | null | http://arxiv.org/pdf/2405.14645v2 | 2024-05-26T21:03:09Z | 2024-05-23T14:47:07Z | Lagrangian Neural Networks for Reversible Dissipative Evolution | There is growing attention given to utilizing Lagrangian and Hamiltonian mechanics with network training in order to incorporate physics into the network. Most commonly, conservative systems are modeled, in which there are no frictional losses, so the system may be run forward and backward in time without requiring regularization. This work addresses systems in which the reverse direction is ill-posed because of the dissipation that occurs in forward evolution. The novelty is the use of the Morse-Feshbach Lagrangian, which models dissipative dynamics by doubling the number of dimensions of the system in order to create a mirror latent representation that would counterbalance the dissipation of the observable system, making it a conservative system, albeit embedded in a larger space. We start from their formal approach by defining a new Dissipative Lagrangian, such that the unknown matrices in the Euler-Lagrange equations arise as partial derivatives of the Lagrangian with respect to only the observables. We then train a network from simulated training data for dissipative systems such as Fickian diffusion that arise in materials sciences. It is shown by experiments that the systems can be evolved in both forward and reverse directions without regularization beyond that provided by the Morse-Feshbach Lagrangian. Experiments on dissipative systems, such as Fickian diffusion, demonstrate the degree to which dynamics can be reversed. | [
"['Veera Sundararaghavan' 'Megna N. Shah' 'Jeff P. Simmons']"
] |
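For intuition, the textbook single-oscillator instance of this mirror-image construction (Bateman's dual system, which the Morse-Feshbach formulation generalizes) can be written in closed form. The paper works with learned, higher-dimensional analogues, so this is background rather than the authors' model.

```latex
% Doubled Lagrangian for a damped oscillator: the observable x dissipates,
% while the mirror variable y absorbs the loss, so the pair is conservative.
L(x, y, \dot{x}, \dot{y})
  = m\,\dot{x}\dot{y}
  + \tfrac{\gamma}{2}\left(x\dot{y} - \dot{x}y\right)
  - k\,x y
% Euler-Lagrange equations:
%   variation in y :  m\ddot{x} + \gamma\dot{x} + kx = 0   (damped, observable)
%   variation in x :  m\ddot{y} - \gamma\dot{y} + ky = 0   (anti-damped mirror)
```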
null | null | 2405.14650 | null | null | http://arxiv.org/pdf/2405.14650v1 | 2024-05-23T14:50:59Z | 2024-05-23T14:50:59Z | PhiNets: Brain-inspired Non-contrastive Learning Based on Temporal
Prediction Hypothesis | SimSiam is a prominent self-supervised learning method that achieves impressive results in various vision tasks under static environments. However, it has two critical issues: high sensitivity to hyperparameters, especially weight decay, and unsatisfactory performance in online and continual learning, where neuroscientists believe that powerful memory functions are necessary, as in brains. In this paper, we propose PhiNet, inspired by a hippocampal model based on the temporal prediction hypothesis. Unlike SimSiam, which aligns two augmented views of the original image, PhiNet integrates an additional predictor block that estimates the original image representation to imitate the CA1 region in the hippocampus. Moreover, we model the neocortex inspired by the Complementary Learning Systems theory with a momentum encoder block as a slow learner, which works as long-term memory. We demonstrate through analysing the learning dynamics that PhiNet benefits from the additional predictor to prevent the complete collapse of learned representations, a notorious challenge in non-contrastive learning. This dynamics analysis may partially corroborate why this hippocampal model is biologically plausible. Experimental results demonstrate that PhiNet is more robust to weight decay and performs better than SimSiam in memory-intensive tasks like online and continual learning. | [
"['Satoki Ishikawa' 'Makoto Yamada' 'Han Bao' 'Yuki Takezawa']"
] |
null | null | 2405.14655 | null | null | http://arxiv.org/pdf/2405.14655v1 | 2024-05-23T14:53:54Z | 2024-05-23T14:53:54Z | Multi-turn Reinforcement Learning from Preference Human Feedback | Reinforcement Learning from Human Feedback (RLHF) has become the standard approach for aligning Large Language Models (LLMs) with human preferences, allowing LLMs to demonstrate remarkable abilities in various tasks. Existing methods work by emulating the preferences at the single decision (turn) level, limiting their capabilities in settings that require planning or multi-turn interactions to achieve a long-term goal. In this paper, we address this issue by developing novel methods for Reinforcement Learning (RL) from preference feedback between two full multi-turn conversations. In the tabular setting, we present a novel mirror-descent-based policy optimization algorithm for the general multi-turn preference-based RL problem, and prove its convergence to Nash equilibrium. To evaluate performance, we create a new environment, Education Dialogue, where a teacher agent guides a student in learning a random topic, and show that a deep RL variant of our algorithm outperforms RLHF baselines. Finally, we show that in an environment with explicit rewards, our algorithm recovers the same performance as a reward-based RL baseline, despite relying solely on a weaker preference signal. | [
"['Lior Shani' 'Aviv Rosenberg' 'Asaf Cassel' 'Oran Lang'\n 'Daniele Calandriello' 'Avital Zipori' 'Hila Noga' 'Orgad Keller'\n 'Bilal Piot' 'Idan Szpektor' 'Avinatan Hassidim' 'Yossi Matias'\n 'Rémi Munos']"
] |
null | null | 2405.14657 | null | null | http://arxiv.org/pdf/2405.14657v1 | 2024-05-23T14:55:18Z | 2024-05-23T14:55:18Z | Heteroscedastic Preferential Bayesian Optimization with Informative
Noise Distributions | Preferential Bayesian optimization (PBO) is a sample-efficient framework for learning human preferences between candidate designs. PBO classically relies on homoscedastic noise models to represent human aleatoric uncertainty. Yet, such noise fails to accurately capture the varying levels of human aleatoric uncertainty, particularly when the user possesses partial knowledge among different pairs of candidates. For instance, a chemist with solid expertise in glucose-related molecules may easily compare two compounds from that family while struggling to compare alcohol-related molecules. Currently, PBO overlooks this uncertainty during the search for a new candidate through the maximization of the acquisition function, consequently underestimating the risk associated with human uncertainty. To address this issue, we propose a heteroscedastic noise model to capture human aleatoric uncertainty. This model adaptively assigns noise levels based on the distance of a specific input to a predefined set of reliable inputs known as anchors provided by the human. Anchors encapsulate partial knowledge and offer insight into the comparative difficulty of evaluating different candidate pairs. Such a model can be seamlessly integrated into the acquisition function, thus leading to candidate design pairs that elegantly trade informativeness and ease of comparison for the human expert. We perform an extensive empirical evaluation of the proposed approach, demonstrating a consistent improvement over homoscedastic PBO. | [
"['Marshal Arijona Sinaga' 'Julien Martinelli' 'Vikas Garg' 'Samuel Kaski']"
] |
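A minimal sketch of the stated idea, noise that grows with distance from the nearest anchor, assuming a simple affine monotone map and Euclidean distance; the function name and parameters below are hypothetical, and the paper's noise model and its integration into the acquisition function are richer than this.

```python
import numpy as np

def anchored_noise(x, anchors, sigma_min=0.1, alpha=1.0):
    """Aleatoric noise level for a candidate x: small near any reliable
    anchor input, growing with the distance to the closest anchor.
    (A simple monotone map; only the qualitative dependence matters here.)"""
    d = np.min(np.linalg.norm(anchors - x, axis=1))
    return sigma_min + alpha * d

anchors = np.array([[0.0, 0.0], [1.0, 1.0]])   # inputs the expert knows well
for x in [np.array([0.1, 0.0]), np.array([3.0, 3.0])]:
    print(x, "-> comparison noise", round(anchored_noise(x, anchors), 3))
```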
null | null | 2405.14660 | null | null | http://arxiv.org/pdf/2405.14660v1 | 2024-05-23T14:57:52Z | 2024-05-23T14:57:52Z | Implicit In-context Learning | In-context Learning (ICL) empowers large language models (LLMs) to adapt to unseen tasks during inference by prefixing a few demonstration examples prior to test queries. Despite its versatility, ICL incurs substantial computational and memory overheads compared to zero-shot learning and is susceptible to the selection and order of demonstration examples. In this work, we introduce Implicit In-context Learning (I2CL), an innovative paradigm that addresses the challenges associated with traditional ICL by absorbing demonstration examples within the activation space. I2CL first generates a condensed vector representation, namely a context vector, from the demonstration examples. It then integrates the context vector during inference by injecting a linear combination of the context vector and query activations into the model's residual streams. Empirical evaluation on nine real-world tasks across three model architectures demonstrates that I2CL achieves few-shot performance with zero-shot cost and exhibits robustness against the variation of demonstration examples. Furthermore, I2CL facilitates a novel representation of "task-ids", enhancing task similarity detection and enabling effective transfer learning. We provide a comprehensive analysis of I2CL, offering deeper insights into its mechanisms and broader implications for ICL. The source code is available at: https://github.com/LzVv123456/I2CL. | [
"['Zhuowei Li' 'Zihao Xu' 'Ligong Han' 'Yunhe Gao' 'Song Wen' 'Di Liu'\n 'Hao Wang' 'Dimitris N. Metaxas']"
] |
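The mechanism lends itself to a compact sketch: distill demonstrations into one "context vector" per layer (here simply the mean residual-stream activation) and inject a scaled copy during the query's forward pass. The toy blocks, mean pooling, and fixed injection coefficient are assumptions standing in for the paper's learned combination.

```python
import torch
import torch.nn as nn

class Block(nn.Module):
    """Toy residual block standing in for a transformer layer."""
    def __init__(self, d):
        super().__init__()
        self.ff = nn.Sequential(nn.Linear(d, d), nn.GELU(), nn.Linear(d, d))
    def forward(self, h, inject=None, lam=0.1):
        out = h + self.ff(h)                     # residual stream
        if inject is not None:                   # I2CL-style injection
            out = out + lam * inject
        return out

d, n_layers = 16, 4
blocks = nn.ModuleList([Block(d) for _ in range(n_layers)])

def run(x, context=None):
    h = x
    for i, blk in enumerate(blocks):
        h = blk(h, inject=None if context is None else context[i])
    return h

# 1) distill demonstrations into per-layer context vectors (mean activations)
demos = torch.randn(8, d)
context, h = [], demos
for blk in blocks:
    h = blk(h)
    context.append(h.mean(dim=0))                # condensed "context vector"

# 2) inference on a query at zero-shot cost, demos absorbed into activations
query = torch.randn(1, d)
print(run(query, context).shape)                 # torch.Size([1, 16])
```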
null | null | 2405.14664 | null | null | http://arxiv.org/pdf/2405.14664v3 | 2024-05-28T20:18:16Z | 2024-05-23T15:02:11Z | Fisher Flow Matching for Generative Modeling over Discrete Data | Generative modeling over discrete data has recently seen numerous success stories, with applications spanning language modeling, biological sequence design, and graph-structured molecular data. The predominant generative modeling paradigm for discrete data is still autoregressive, with more recent alternatives based on diffusion or flow-matching falling short of their impressive performance in continuous data settings, such as image or video generation. In this work, we introduce Fisher-Flow, a novel flow-matching model for discrete data. Fisher-Flow takes a manifestly geometric perspective by considering categorical distributions over discrete data as points residing on a statistical manifold equipped with its natural Riemannian metric: the $\textit{Fisher-Rao metric}$. As a result, we demonstrate that discrete data itself can be continuously reparameterised to points on the positive orthant of the $d$-hypersphere $\mathbb{S}^d_+$, which allows us to define flows that map any source distribution to a target in a principled manner by transporting mass along (closed-form) geodesics of $\mathbb{S}^d_+$. Furthermore, the learned flows in Fisher-Flow can be further bootstrapped by leveraging Riemannian optimal transport, leading to improved training dynamics. We prove that the gradient flow induced by Fisher-Flow is optimal in reducing the forward KL divergence. We evaluate Fisher-Flow on an array of synthetic and diverse real-world benchmarks, including designing DNA Promoter and DNA Enhancer sequences. Empirically, we find that Fisher-Flow improves over prior diffusion and flow-matching models on these benchmarks. | [
"['Oscar Davis' 'Samuel Kessler' 'Mircea Petrache' 'İsmail İlkan Ceylan'\n 'Michael Bronstein' 'Avishek Joey Bose']"
] |
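The sphere map and closed-form geodesics mentioned here are standard differential geometry and easy to sketch: the square-root map sends the probability simplex to the positive orthant of the unit sphere (where the Fisher-Rao metric becomes the round sphere metric), and great-circle interpolation (slerp) gives the geodesics. The concrete distributions below are illustrative; Fisher-Flow's learned vector field is not shown.

```python
import numpy as np

def to_sphere(p):
    """Map a categorical distribution p (on the simplex) to the positive
    orthant of the unit hypersphere; sum(p) = 1 implies ||sqrt(p)|| = 1."""
    return np.sqrt(p)

def geodesic(u, v, t):
    """Closed-form great-circle geodesic between unit vectors u, v (slerp)."""
    theta = np.arccos(np.clip(u @ v, -1.0, 1.0))
    return (np.sin((1 - t) * theta) * u + np.sin(t * theta) * v) / np.sin(theta)

p_src = np.array([0.7, 0.2, 0.1])     # source categorical distribution
p_tgt = np.array([0.1, 0.1, 0.8])     # target
u, v = to_sphere(p_src), to_sphere(p_tgt)
for t in (0.0, 0.5, 1.0):
    q = geodesic(u, v, t) ** 2        # square to return to the simplex
    print(t, q.round(3), q.sum().round(6))  # stays a valid distribution
```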
null | null | 2405.14669 | null | null | http://arxiv.org/pdf/2405.14669v1 | 2024-05-23T15:06:02Z | 2024-05-23T15:06:02Z | Efficiency for Free: Ideal Data Are Transportable Representations | Data, the seminal opportunity and challenge in modern machine learning, currently constrains the scalability of representation learning and impedes the pace of model evolution. Existing paradigms tackle the issue of learning efficiency over massive datasets from the perspective of self-supervised learning and dataset distillation independently, while neglecting the untapped potential of accelerating representation learning from an intermediate standpoint. In this work, we delve into defining the ideal data properties from both optimization and generalization perspectives. We propose that model-generated representations, despite being trained on diverse tasks and architectures, converge to a shared linear space, facilitating effective linear transport between models. Furthermore, we demonstrate that these representations exhibit properties conducive to the formation of ideal data. The theoretical/empirical insights therein inspire us to propose a Representation Learning Accelerator (ReLA), which leverages a task- and architecture-agnostic, yet publicly available, free model to form a dynamic data subset and thus accelerate (self-)supervised learning. For instance, employing a CLIP ViT B/16 as a prior model for dynamic data generation, ReLA-aided BYOL can train a ResNet-50 from scratch with 50% of ImageNet-1K, yielding performance surpassing that of training on the full dataset. Additionally, employing a ResNet-18 pre-trained on CIFAR-10 can enhance ResNet-50 training on 10% of ImageNet-1K, resulting in a 7.7% increase in accuracy. | [
"['Peng Sun' 'Yi Jiang' 'Tao Lin']"
] |
null | null | 2405.14670 | null | null | http://arxiv.org/pdf/2405.14670v1 | 2024-05-23T15:07:21Z | 2024-05-23T15:07:21Z | Overcoming the Challenges of Batch Normalization in Federated Learning | Batch normalization has proven to be a very beneficial mechanism to accelerate the training and improve the accuracy of deep neural networks in centralized environments. Yet, the scheme faces significant challenges in federated learning, especially under high data heterogeneity. Essentially, the main challenges arise from external covariate shifts and inconsistent statistics across clients. We introduce in this paper Federated BatchNorm (FBN), a novel scheme that restores the benefits of batch normalization in federated learning. FBN ensures that the batch normalization during training is consistent with what would be achieved in a centralized execution, hence preserving the distribution of the data, and providing running statistics that accurately approximate the global statistics. FBN thereby reduces the external covariate shift and matches the evaluation performance of the centralized setting. We also show that, with a slight increase in complexity, we can robustify FBN to mitigate erroneous statistics and potentially adversarial attacks. | [
"['Rachid Guerraoui' 'Rafael Pinot' 'Geovani Rizk' 'John Stephan'\n 'François Taiani']"
] |
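One way to see what "statistics consistent with a centralized execution" requires: per-client means and variances can be pooled exactly via the law of total variance. The sketch below checks that identity on synthetic heterogeneous clients; FBN itself addresses the harder problem of doing this inside training, which is not shown.

```python
import numpy as np

def global_batchnorm_stats(means, vars_, counts):
    """Pool per-client batch statistics into the statistics a centralized
    run would compute (law of total variance)."""
    counts = np.asarray(counts, dtype=float)
    w = counts / counts.sum()
    mu = (w * np.asarray(means)).sum()
    var = (w * (np.asarray(vars_) + np.asarray(means) ** 2)).sum() - mu ** 2
    return mu, var

# heterogeneous clients: local stats differ wildly, pooled stats are global
rng = np.random.default_rng(0)
data = [rng.normal(loc, 1.0, n) for loc, n in [(-2, 100), (0, 300), (5, 50)]]
mu, var = global_batchnorm_stats([d.mean() for d in data],
                                 [d.var() for d in data],
                                 [len(d) for d in data])
all_data = np.concatenate(data)
print(mu - all_data.mean(), var - all_data.var())   # both ~0 (exact pooling)
```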
null | null | 2405.14677 | null | null | http://arxiv.org/pdf/2405.14677v1 | 2024-05-23T15:12:15Z | 2024-05-23T15:12:15Z | RectifID: Personalizing Rectified Flow with Anchored Classifier Guidance | Customizing diffusion models to generate identity-preserving images from user-provided reference images is an intriguing new problem. The prevalent approaches typically require training on extensive domain-specific images to achieve identity preservation, which lacks flexibility across different use cases. To address this issue, we exploit classifier guidance, a training-free technique that steers diffusion models using an existing classifier, for personalized image generation. Our study shows that based on a recent rectified flow framework, the major limitation of vanilla classifier guidance in requiring a special classifier can be resolved with a simple fixed-point solution, allowing flexible personalization with off-the-shelf image discriminators. Moreover, its solving procedure proves to be stable when anchored to a reference flow trajectory, with a convergence guarantee. The derived method is implemented on rectified flow with different off-the-shelf image discriminators, delivering advantageous personalization results for human faces, live subjects, and certain objects. Code is available at https://github.com/feifeiobama/RectifID. | [
"['Zhicheng Sun' 'Zhenhao Yang' 'Yang Jin' 'Haozhe Chi' 'Kun Xu' 'Kun Xu'\n 'Liwei Chen' 'Hao Jiang' 'Di Zhang' 'Yang Song' 'Kun Gai' 'Yadong Mu']"
] |
null | null | 2405.14681 | null | null | http://arxiv.org/pdf/2405.14681v1 | 2024-05-23T15:15:17Z | 2024-05-23T15:15:17Z | Recursive PAC-Bayes: A Frequentist Approach to Sequential Prior Updates
with No Information Loss | PAC-Bayesian analysis is a frequentist framework for incorporating prior knowledge into learning. It was inspired by Bayesian learning, which allows sequential data processing and naturally turns posteriors from one processing step into priors for the next. However, despite two and a half decades of research, the ability to update priors sequentially without losing confidence information along the way remained elusive for PAC-Bayes. While PAC-Bayes allows construction of data-informed priors, the final confidence intervals depend only on the number of points that were not used for the construction of the prior, whereas confidence information in the prior, which is related to the number of points used to construct the prior, is lost. This limits the possibility and benefit of sequential prior updates, because the final bounds depend only on the size of the final batch. We present a novel and, in retrospect, surprisingly simple and powerful PAC-Bayesian procedure that allows sequential prior updates with no information loss. The procedure is based on a novel decomposition of the expected loss of randomized classifiers. The decomposition rewrites the loss of the posterior as an excess loss relative to a downscaled loss of the prior plus the downscaled loss of the prior, which is bounded recursively. As a side result, we also present a generalization of the split-kl and PAC-Bayes-split-kl inequalities to discrete random variables, which we use for bounding the excess losses, and which can be of independent interest. In empirical evaluation, the new procedure significantly outperforms the state of the art. | [
"['Yi-Shan Wu' 'Yijie Zhang' 'Badr-Eddine Chérief-Abdellatif'\n 'Yevgeny Seldin']"
] |
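Written out symbolically, the decomposition the abstract describes is the following elementary identity, with $\gamma \in (0,1)$ a downscaling factor, $\pi_{t-1}$ the prior from the previous step, and $\rho_t$ the current posterior. The paper's contribution is bounding the first term with fresh data and recursing on the second; the notation here is schematic rather than the paper's exact statement.

```latex
% Posterior loss = excess loss over a downscaled prior loss
%                + the downscaled prior loss (bounded recursively):
\mathbb{E}_{\rho_t}[L(h)]
  = \underbrace{\mathbb{E}_{\rho_t}[L(h)]
      - \gamma\,\mathbb{E}_{\pi_{t-1}}[L(h)]}_{\text{excess loss: bound with fresh data}}
  + \underbrace{\gamma\,\mathbb{E}_{\pi_{t-1}}[L(h)]}_{\text{recurse on the prior}},
\qquad \gamma \in (0, 1).
```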
null | null | 2405.14689 | null | null | http://arxiv.org/pdf/2405.14689v2 | 2024-05-29T08:18:03Z | 2024-05-23T15:25:56Z | Cascade of phase transitions in the training of Energy-based models | In this paper, we investigate the feature encoding process in a prototypical energy-based generative model, the Restricted Boltzmann Machine (RBM). We start with an analytical investigation using simplified architectures and data structures, and end with numerical analysis of real trainings on real datasets. Our study tracks the evolution of the model's weight matrix through its singular value decomposition, revealing a series of phase transitions associated with the progressive learning of the principal modes of the empirical probability distribution. The model first learns the center of mass of the modes and then progressively resolves all modes through a cascade of phase transitions. We first describe this process analytically, in a controlled setup that allows us to study the training dynamics exactly. We then validate our theoretical results by training the Bernoulli-Bernoulli RBM on real data sets. By using data sets of increasing dimension, we show that learning indeed leads to sharp phase transitions in the high-dimensional limit. Moreover, we propose and test a mean-field finite-size scaling hypothesis. This shows that the first phase transition is in the same universality class as the one we studied analytically, which is reminiscent of the mean-field paramagnetic-to-ferromagnetic phase transition. | [
"['Dimitrios Bachtis' 'Giulio Biroli' 'Aurélien Decelle' 'Beatriz Seoane']"
] |
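A hedged sketch of the measurement at the heart of the study: train a tiny Bernoulli-Bernoulli RBM with a mean-field CD-1 shortcut on clustered binary data and print the leading singular values of the weight matrix as training proceeds. The sizes, learning rate, omitted biases, and toy two-mode data are assumptions; with them one can watch singular values grow from near zero one after another.

```python
import numpy as np

rng = np.random.default_rng(0)
n_v, n_h, lr = 20, 16, 0.05
W = 0.01 * rng.normal(size=(n_v, n_h))          # weights only, biases omitted

# toy data with two "modes": noisy copies of two binary prototypes
protos = rng.random((2, n_v)) < 0.5
data = np.array([p ^ (rng.random(n_v) < 0.05)
                 for p in protos[rng.integers(0, 2, 500)]])

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for epoch in range(200):
    v0 = data[rng.integers(0, len(data), 64)].astype(float)
    h0 = sigmoid(v0 @ W)                         # hidden probabilities
    v1 = (sigmoid(h0 @ W.T) > rng.random((64, n_v))).astype(float)  # CD-1 step
    h1 = sigmoid(v1 @ W)
    W += lr * (v0.T @ h0 - v1.T @ h1) / 64       # contrastive-divergence update
    if epoch % 50 == 0:
        s = np.linalg.svd(W, compute_uv=False)
        print(epoch, s[:3].round(3))   # watch modes "switch on" progressively
```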
null | null | 2405.14714 | null | null | http://arxiv.org/pdf/2405.14714v1 | 2024-05-23T15:46:34Z | 2024-05-23T15:46:34Z | Defining error accumulation in ML atmospheric simulators | Machine learning (ML) has recently shown significant promise in modelling atmospheric systems, such as the weather. Many of these ML models are autoregressive, and error accumulation in their forecasts is a key problem. However, there is no clear definition of what `error accumulation' actually entails. In this paper, we propose a definition and an associated metric to measure it. Our definition distinguishes between errors which are due to model deficiencies, which we may hope to fix, and those due to the intrinsic properties of atmospheric systems (chaos, unobserved variables), which are not fixable. We illustrate the usefulness of this definition by proposing a simple regularization loss penalty inspired by it. This approach shows performance improvements (according to RMSE and spread/skill) in a selection of atmospheric systems, including the real-world weather prediction task. | [
"['Raghul Parthipan' 'Mohit Anand' 'Hannah M. Christensen'\n 'J. Scott Hosking' 'Damon J. Wischik']"
] |
null | null | 2405.14719 | null | null | http://arxiv.org/pdf/2405.14719v1 | 2024-05-23T15:48:46Z | 2024-05-23T15:48:46Z | Decision-Focused Forecasting: Decision Losses for Multistage
Optimisation | Decision-focused learning has emerged as a promising approach for decision making under uncertainty by training the upstream predictive aspect of the pipeline with respect to the quality of the downstream decisions. Most existing work has focused on single-stage problems. Many real-world decision problems are more appropriately modelled using multistage optimisation, as contextual information such as prices or demand is revealed over time and decisions now have a bearing on future decisions. We propose decision-focused forecasting, a multiple-implicit-layer model which in its training accounts for the intertemporal decision effects of forecasts using differentiable optimisation. The recursive model reflects a fully differentiable multistage optimisation approach. We present an analysis of the gradients produced by this model, showing the adjustments made to account for the state-path caused by forecasting. We demonstrate an application of the model to an energy storage arbitrage task and report that our model outperforms existing approaches. | [
"['Egon Peršak' 'Miguel F. Anjos']"
] |
null | null | 2405.14725 | null | null | http://arxiv.org/pdf/2405.14725v1 | 2024-05-23T15:54:03Z | 2024-05-23T15:54:03Z | A Systematic and Formal Study of the Impact of Local Differential
Privacy on Fairness: Preliminary Results | Machine learning (ML) algorithms rely primarily on the availability of training data, and, depending on the domain, these data may include sensitive information about the data providers, thus leading to significant privacy issues. Differential privacy (DP) is the predominant solution for privacy-preserving ML, and the local model of DP is the preferred choice when the server or the data collector are not trusted. Recent experimental studies have shown that local DP can impact ML prediction for different subgroups of individuals, thus affecting fair decision-making. However, the results are conflicting in the sense that some studies show a positive impact of privacy on fairness while others show a negative one. In this work, we conduct a systematic and formal study of the effect of local DP on fairness. Specifically, we perform a quantitative study of how the fairness of the decisions made by the ML model changes under local DP for different levels of privacy and data distributions. In particular, we provide bounds in terms of the joint distributions and the privacy level, delimiting the extent to which local DP can impact the fairness of the model. We characterize the cases in which privacy reduces discrimination and those with the opposite effect. We validate our theoretical findings on synthetic and real-world datasets. Our results are preliminary in the sense that, for now, we study only the case of one sensitive attribute, and only statistical disparity, conditional statistical disparity, and equal opportunity difference. | [
"['Karima Makhlouf' 'Tamara Stefanovic' 'Heber H. Arcolezi'\n 'Catuscia Palamidessi']"
] |
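To ground the setup, here is a hedged sketch of the standard local-DP primitive involved (k-ary randomized response) together with the statistical disparity metric the abstract names. The synthetic data and the choice to privatize the sensitive attribute are illustrative assumptions, not the paper's experimental protocol.

```python
import numpy as np

def krr(values, k, eps, rng):
    """k-ary randomized response: report the true value with probability
    e^eps / (e^eps + k - 1), otherwise a uniformly random *other* value."""
    p_keep = np.exp(eps) / (np.exp(eps) + k - 1)
    keep = rng.random(len(values)) < p_keep
    noise = (values + rng.integers(1, k, len(values))) % k
    return np.where(keep, values, noise)

def statistical_disparity(y, a):
    """|P(y=1 | a=0) - P(y=1 | a=1)|."""
    return abs(y[a == 0].mean() - y[a == 1].mean())

rng = np.random.default_rng(0)
a = rng.integers(0, 2, 20000)                              # sensitive attribute
y = (rng.random(20000) < np.where(a == 0, 0.7, 0.4)).astype(int)  # biased labels
print("true disparity:", round(statistical_disparity(y, a), 3))
for eps in (0.5, 1.0, 3.0):
    a_dp = krr(a, k=2, eps=eps, rng=rng)                   # locally privatized
    print(f"eps={eps}: disparity measured on noisy attribute:",
          round(statistical_disparity(y, a_dp), 3))
```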
null | null | 2405.14728 | null | null | http://arxiv.org/pdf/2405.14728v1 | 2024-05-23T15:55:38Z | 2024-05-23T15:55:38Z | Intervention and Conditioning in Causal Bayesian Networks | Causal models are crucial for understanding complex systems and identifying causal relationships among variables. Even though causal models are extremely popular, conditional probability calculation of formulas involving interventions poses significant challenges. In the case of Causal Bayesian Networks (CBNs), Pearl assumes autonomy of mechanisms that determine interventions to calculate a range of probabilities. We show that by making simple yet often realistic independence assumptions, it is possible to uniquely estimate the probability of an interventional formula (including the well-studied notions of probability of sufficiency and necessity). We discuss when these assumptions are appropriate. Importantly, in many cases of interest, when the assumptions are appropriate, these probability estimates can be evaluated using observational data, which carries immense significance in scenarios where conducting experiments is impractical or unfeasible. | [
"['Sainyam Galhotra' 'Joseph Y. Halpern']"
] |
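For orientation, the classical identification results this line of work generalizes (Pearl): under exogeneity of $X$ and monotonicity ($Y_{x'} \le Y_x$), the probabilities of causation reduce to observational conditionals. The paper's own assumptions differ, so this is background rather than its result.

```latex
% Probability of necessity-and-sufficiency and probability of necessity,
% identified from observational data under exogeneity + monotonicity:
\mathrm{PNS} \;=\; P(y_x) - P(y_{x'}) \;=\; P(y \mid x) - P(y \mid x'),
\qquad
\mathrm{PN} \;=\; \frac{P(y \mid x) - P(y \mid x')}{P(y \mid x)}.
```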
null | null | 2405.14730 | null | null | http://arxiv.org/pdf/2405.14730v1 | 2024-05-23T15:57:11Z | 2024-05-23T15:57:11Z | Embedding Compression for Efficient Re-Identification | Real-world re-identification (ReID) algorithms aim to map new observations of an object to previously recorded instances. These systems are often constrained by the quantity and size of the stored embeddings. To combat this scaling problem, we attempt to shrink the size of these vectors by using a variety of compression techniques. In this paper, we benchmark quantization-aware training along with three different dimension reduction methods: iterative structured pruning, slicing the embeddings at initialization, and using low-rank embeddings. We find that ReID embeddings can be compressed by up to 96x with minimal drop in performance. This implies that modern re-identification paradigms do not fully leverage the high-dimensional latent space, opening up further research to increase the capabilities of these systems. | [
"['Luke McDermott']"
] |
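The simplest of the benchmarked reductions, slicing, is easy to sketch: keep only the first few coordinates and retrieve by cosine similarity. The synthetic identities below are an assumption used to keep the script self-contained; real ReID embeddings behave differently in detail.

```python
import numpy as np

def top1_accuracy(queries, gallery, labels_q, labels_g):
    """Cosine-similarity retrieval: match each query to its nearest gallery entry."""
    qn = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    gn = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    nearest = (qn @ gn.T).argmax(axis=1)
    return (labels_g[nearest] == labels_q).mean()

rng = np.random.default_rng(0)
n_ids, d = 200, 512
centers = rng.normal(size=(n_ids, d))             # one "identity" per center
gallery = centers + 0.3 * rng.normal(size=(n_ids, d))
queries = centers + 0.3 * rng.normal(size=(n_ids, d))
ids = np.arange(n_ids)
for keep in (512, 64, 16, 8):                     # slice to the first dims
    acc = top1_accuracy(queries[:, :keep], gallery[:, :keep], ids, ids)
    print(f"{d // keep:>3}x compression: top-1 = {acc:.3f}")
```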
null | null | 2405.14734 | null | null | http://arxiv.org/pdf/2405.14734v2 | 2024-07-08T17:55:24Z | 2024-05-23T16:01:46Z | SimPO: Simple Preference Optimization with a Reference-Free Reward | Direct Preference Optimization (DPO) is a widely used offline preference optimization algorithm that reparameterizes reward functions in reinforcement learning from human feedback (RLHF) to enhance simplicity and training stability. In this work, we propose SimPO, a simpler yet more effective approach. The effectiveness of SimPO is attributed to a key design: using the average log probability of a sequence as the implicit reward. This reward formulation better aligns with model generation and eliminates the need for a reference model, making it more compute and memory efficient. Additionally, we introduce a target reward margin to the Bradley-Terry objective to encourage a larger margin between the winning and losing responses, further enhancing the algorithm's performance. We compare SimPO to DPO and its latest variants across various state-of-the-art training setups, including both base and instruction-tuned models like Mistral and Llama3. We evaluate on extensive instruction-following benchmarks, including AlpacaEval 2, MT-Bench, and the recent challenging Arena-Hard benchmark. Our results demonstrate that SimPO consistently and significantly outperforms existing approaches without substantially increasing response length. Specifically, SimPO outperforms DPO by up to 6.4 points on AlpacaEval 2 and by up to 7.5 points on Arena-Hard. Our top-performing model, built on Llama3-8B-Instruct, achieves a remarkable 53.7 length-controlled win rate on AlpacaEval 2 -- surpassing Claude 3 Opus on the leaderboard, and a 36.5 win rate on Arena-Hard -- making it the strongest 8B open-source model. | [
"['Yu Meng' 'Mengzhou Xia' 'Danqi Chen']"
] |
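The two design choices named here translate directly into a loss. The sketch below follows the abstract's description (a length-normalized implicit reward plus a target margin inside a Bradley-Terry objective); the scaling constant beta and the toy numbers are assumptions, not the paper's tuned values.

```python
import torch
import torch.nn.functional as F

def simpo_loss(logp_chosen, logp_rejected, len_chosen, len_rejected,
               beta=2.0, gamma=0.5):
    """SimPO objective: the implicit reward is the length-normalized
    (average) sequence log-probability, and a target margin gamma is added
    to the Bradley-Terry objective. No reference model is needed."""
    r_w = beta * logp_chosen / len_chosen       # reward of winning response
    r_l = beta * logp_rejected / len_rejected   # reward of losing response
    return -F.logsigmoid(r_w - r_l - gamma).mean()

# toy batch: summed token log-probs and lengths of each response pair
logp_w = torch.tensor([-40.0, -55.0])
logp_l = torch.tensor([-80.0, -60.0])
loss = simpo_loss(logp_w, logp_l, torch.tensor([20.0, 25.0]),
                  torch.tensor([30.0, 24.0]))
print(loss.item())
```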
null | null | 2405.14736 | null | null | http://arxiv.org/pdf/2405.14736v1 | 2024-05-23T16:02:30Z | 2024-05-23T16:02:30Z | GIFT: Unlocking Full Potential of Labels in Distilled Dataset at
Near-zero Cost | Recent advancements in dataset distillation have demonstrated the significant benefits of employing soft labels generated by pre-trained teacher models. In this paper, we introduce a novel perspective by emphasizing the full utilization of labels. We first conduct a comprehensive comparison of various loss functions for soft label utilization in dataset distillation, revealing that the model trained on the synthetic dataset exhibits high sensitivity to the choice of loss function for soft label utilization. This finding highlights the necessity of a universal loss function for training models on synthetic datasets. Building on these insights, we introduce an extremely simple yet surprisingly effective plug-and-play approach, GIFT, which encompasses soft label refinement and a cosine similarity-based loss function to efficiently leverage full label information. Extensive experiments demonstrate that GIFT consistently enhances state-of-the-art dataset distillation methods across datasets of various scales without incurring additional computational costs. For instance, on ImageNet-1K with IPC = 10, GIFT improves the SOTA method RDED by 3.9% and 1.8% on ConvNet and ResNet-18, respectively. Code: https://github.com/LINs-lab/GIFT. | [
"['Xinyi Shang' 'Peng Sun' 'Tao Lin']"
] |
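The cosine-similarity loss named in the abstract is straightforward to write down. In the sketch below, the smoothing used as the "refinement" step is a stand-in assumption for the paper's actual soft-label refinement, which the abstract does not specify.

```python
import torch
import torch.nn.functional as F

def gift_loss(student_logits, soft_labels, eps=0.1):
    """Cosine-similarity loss between student predictions and refined soft
    labels. (Refinement here: mix the teacher distribution with uniform
    mass; a stand-in for the paper's label-refinement step.)"""
    n = soft_labels.shape[1]
    refined = (1 - eps) * soft_labels + eps / n          # soft label refinement
    p = F.softmax(student_logits, dim=1)
    return (1 - F.cosine_similarity(p, refined, dim=1)).mean()

teacher = F.softmax(torch.randn(8, 10), dim=1)   # soft labels from a teacher
logits = torch.randn(8, 10, requires_grad=True)  # student predictions
loss = gift_loss(logits, teacher)
loss.backward()
print(loss.item())
```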