categories | doi | id | year | venue | link | updated | published | title | abstract | authors
string | string | string | float64 | string | string | string | string | string | string | list
---|---|---|---|---|---|---|---|---|---|---|
null | null | 2405.13512 | null | null | http://arxiv.org/pdf/2405.13512v1 | 2024-05-22T10:14:47Z | 2024-05-22T10:14:47Z | Coverage Path Planning for Thermal Interface Materials | Thermal management of power electronics and Electronic Control Units is crucial in times of increasing power densities and limited assembly space. Electric and autonomous vehicles are a prominent application field. Thermal Interface Materials are used to transfer heat from a semiconductor to a heatsink. They are applied along a dispense path onto the semiconductor and spread over its entire surface once the heatsink is joined. To plan this application path, design engineers typically perform an iterative trial-and-error procedure of elaborate simulations and manual experiments. We propose a fully automated optimization approach, which clearly outperforms the current manual path planning and respects all relevant manufacturing constraints. An optimum dispense path increases the reliability of the thermal interface and makes the manufacturing more sustainable by reducing material waste. We show results on multiple real products from automotive series production, including an experimental validation on actual series manufacturing equipment. | [
"['Simon Baeuerle' 'Andreas Steimer' 'Ralf Mikut']"
] |
null | null | 2405.13515 | null | null | http://arxiv.org/pdf/2405.13515v1 | 2024-05-22T10:19:34Z | 2024-05-22T10:19:34Z | Multi-Scale Feature Fusion Quantum Depthwise Convolutional Neural
Networks for Text Classification | In recent years, with the development of quantum machine learning, quantum neural networks (QNNs) have gained increasing attention in the field of natural language processing (NLP) and have achieved a series of promising results. However, most existing QNN models focus on the architectures of quantum recurrent neural network (QRNN) and self-attention mechanism (QSAM). In this work, we propose a novel QNN model based on quantum convolution. We develop the quantum depthwise convolution that significantly reduces the number of parameters and lowers computational complexity. We also introduce the multi-scale feature fusion mechanism to enhance model performance by integrating word-level and sentence-level features. Additionally, we propose the quantum word embedding and quantum sentence embedding, which provide embedding vectors more efficiently. Through experiments on two benchmark text classification datasets, we demonstrate our model outperforms a wide range of state-of-the-art QNN models. Notably, our model achieves a new state-of-the-art test accuracy of 96.77% on the RP dataset. We also show the advantages of our quantum model over its classical counterparts in its ability to improve test accuracy using fewer parameters. Finally, an ablation test confirms the effectiveness of the multi-scale feature fusion mechanism and quantum depthwise convolution in enhancing model performance. | [
"['Yixiong Chen' 'Weichuan Fang']"
] |
null | null | 2405.13516 | null | null | http://arxiv.org/pdf/2405.13516v2 | 2024-06-04T08:21:05Z | 2024-05-22T10:21:50Z | LIRE: listwise reward enhancement for preference alignment | Recently, tremendous strides have been made to align the generation of Large Language Models (LLMs) with human values to mitigate toxic or unhelpful content. Leveraging Reinforcement Learning from Human Feedback (RLHF) proves effective and is widely adopted by researchers. However, implementing RLHF is complex, and its sensitivity to hyperparameters renders achieving stable performance and scalability challenging. Furthermore, prevailing approaches to preference alignment primarily concentrate on pairwise comparisons, with limited exploration into multi-response scenarios, thereby overlooking the potential richness within the candidate pool. For the above reasons, we propose a new approach: Listwise Reward Enhancement for Preference Alignment (LIRE), a gradient-based reward optimization approach that incorporates the offline rewards of multiple responses into a streamlined listwise framework, thus eliminating the need for online sampling during training. LIRE is straightforward to implement, requiring minimal parameter tuning, and seamlessly aligns with the pairwise paradigm while naturally extending to multi-response scenarios. Moreover, we introduce a self-enhancement algorithm aimed at iteratively refining the reward during training. Our experiments demonstrate that LIRE consistently outperforms existing methods across several benchmarks on dialogue and summarization tasks, with good transferability to out-of-distribution data, assessed using proxy reward models and human annotators. | [
"['Mingye Zhu' 'Yi Liu' 'Lei Zhang' 'Junbo Guo' 'Zhendong Mao']"
] |
null | null | 2405.13522 | null | null | http://arxiv.org/pdf/2405.13522v2 | 2024-05-24T15:10:27Z | 2024-05-22T10:45:50Z | Beyond Trend and Periodicity: Guiding Time Series Forecasting with
Textual Cues | This work introduces a novel Text-Guided Time Series Forecasting (TGTSF) task. By integrating textual cues, such as channel descriptions and dynamic news, TGTSF addresses the critical limitations of traditional methods that rely purely on historical data. To support this task, we propose TGForecaster, a robust baseline model that fuses textual cues and time series data using cross-attention mechanisms. We then present four meticulously curated benchmark datasets to validate the proposed framework, ranging from simple periodic data to complex, event-driven fluctuations. Our comprehensive evaluations demonstrate that TGForecaster consistently achieves state-of-the-art performance, highlighting the transformative potential of incorporating textual information into time series forecasting. This work not only pioneers a novel forecasting task but also establishes a new benchmark for future research, driving advancements in multimodal data integration for time series models. | [
"['Zhijian Xu' 'Yuxuan Bian' 'Jianyuan Zhong' 'Xiangyu Wen' 'Qiang Xu']"
] |
null | null | 2405.13526 | null | null | http://arxiv.org/pdf/2405.13526v1 | 2024-05-22T10:51:12Z | 2024-05-22T10:51:12Z | Understanding Virtual Nodes: Oversmoothing, Oversquashing, and Node
Heterogeneity | Message passing neural networks (MPNNs) have been shown to have limitations in terms of expressivity and modeling long-range interactions. Augmenting MPNNs with a virtual node (VN) removes the locality constraint of the layer aggregation and has been found to improve performance on a range of benchmarks. We provide a comprehensive theoretical analysis of the role of VNs and benefits thereof, through the lenses of oversmoothing, oversquashing, and sensitivity analysis. First, in contrast to prior belief, we find that VNs typically avoid replicating anti-smoothing approaches to maintain expressive power. Second, we characterize, precisely, how the improvement afforded by VNs on the mixing abilities of the network and hence in mitigating oversquashing, depends on the underlying topology. Finally, we highlight that, unlike Graph-Transformers (GT), classical instantiations of the VN are often constrained to assign uniform importance to different nodes. Consequently, we propose a variant of VN with the same computational complexity, which can have different sensitivity to nodes based on the graph structure. We show that this is an extremely effective and computationally efficient baseline on graph-level tasks. | [
"['Joshua Southern' 'Francesco Di Giovanni' 'Michael Bronstein'\n 'Johannes F. Lutzeyer']"
] |
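The virtual-node augmentation discussed in the abstract above has a very small core. The sketch below (plain NumPy, dense adjacency, invented helper name) shows the classical uniform VN construction that the paper analyzes before proposing its sensitivity-aware variant:

```python
import numpy as np

def add_virtual_node(adj: np.ndarray) -> np.ndarray:
    """Append one virtual node connected to every existing node.

    After augmentation, a single round of message passing routes
    information through node n to the whole graph, removing the
    locality constraint of the layer aggregation. Illustrative
    sketch only; the paper's non-uniform variant is not shown.
    """
    n = adj.shape[0]
    out = np.zeros((n + 1, n + 1), dtype=adj.dtype)
    out[:n, :n] = adj  # keep the original edges
    out[n, :n] = 1     # virtual node -> every node
    out[:n, n] = 1     # every node -> virtual node
    return out

# A 3-node path graph gains the hub node 3.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
print(add_virtual_node(A))
```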
null | null | 2405.13535 | null | null | http://arxiv.org/pdf/2405.13535v2 | 2024-05-24T06:31:55Z | 2024-05-22T11:11:42Z | Generalized Laplace Approximation | In recent years, the inconsistency in Bayesian deep learning has garnered increasing attention. Tempered or generalized posterior distributions often offer a direct and effective solution to this issue. However, understanding the underlying causes and evaluating the effectiveness of generalized posteriors remain active areas of research. In this study, we introduce a unified theoretical framework to attribute Bayesian inconsistency to model misspecification and inadequate priors. We interpret the generalization of the posterior with a temperature factor as a correction for misspecified models through adjustments to the joint probability model, and the recalibration of priors by redistributing probability mass on models within the hypothesis space using data samples. Additionally, we highlight a distinctive feature of Laplace approximation, which ensures that the generalized normalizing constant can be treated as invariant, unlike the typical scenario in general Bayesian learning where this constant varies with model parameters post-generalization. Building on this insight, we propose the generalized Laplace approximation, which involves a simple adjustment to the computation of the Hessian matrix of the regularized loss function. This method offers a flexible and scalable framework for obtaining high-quality posterior distributions. We assess the performance and properties of the generalized Laplace approximation on state-of-the-art neural networks and real-world datasets. | [
"['Yinsong Chen' 'Samson S. Yu' 'Zhong Li' 'Chee Peng Lim']"
] |
null | null | 2405.13536 | null | null | http://arxiv.org/pdf/2405.13536v1 | 2024-05-22T11:14:00Z | 2024-05-22T11:14:00Z | Attention Mechanisms Don't Learn Additive Models: Rethinking Feature
Importance for Transformers | We address the critical challenge of applying feature attribution methods to the transformer architecture, which dominates current applications in natural language processing and beyond. Traditional attribution methods in explainable AI (XAI) explicitly or implicitly rely on linear or additive surrogate models to quantify the impact of input features on a model's output. In this work, we formally prove an alarming incompatibility: transformers are structurally incapable of aligning with popular surrogate models for feature attribution, undermining the grounding of these conventional explanation methodologies. To address this discrepancy, we introduce the Softmax-Linked Additive Log-Odds Model (SLALOM), a novel surrogate model specifically designed to align with the transformer framework. Unlike existing methods, SLALOM demonstrates the capacity to deliver a range of faithful and insightful explanations across both synthetic and real-world datasets. Showing that diverse explanations computed from SLALOM outperform common surrogate explanations on different tasks, we highlight the need for task-specific feature attributions rather than a one-size-fits-all approach. | [
"['Tobias Leemann' 'Alina Fastowski' 'Felix Pfeiffer' 'Gjergji Kasneci']"
] |
null | null | 2405.13541 | null | null | http://arxiv.org/pdf/2405.13541v1 | 2024-05-22T11:23:03Z | 2024-05-22T11:23:03Z | Annotation-Efficient Preference Optimization for Language Model
Alignment | Preference optimization is a standard approach to fine-tuning large language models to align with human preferences. The quality, diversity, and quantity of the preference dataset are critical to the effectiveness of preference optimization. However, obtaining a large amount of high-quality and diverse preference annotations is difficult in many applications. This raises the question of how to use the limited annotation budget to create an effective preference dataset. To this end, we propose Annotation-Efficient Preference Optimization (AEPO). Instead of exhaustively annotating preference over all available response texts, AEPO selects a subset of responses that maximizes quality and diversity from the available responses, and then annotates preference over the selected ones. In this way, AEPO focuses the annotation budget on labeling preference over a smaller subset of responses with diversity and of high quality. We evaluate the performance of Direct Preference Optimization (DPO) using AEPO and show that it outperforms models trained using a standard DPO with the same annotation budget. Our code is available at https://github.com/CyberAgentAILab/annotation-efficient-po | [
"['Yuu Jinnai' 'Ukyo Honda']"
] |
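The subset-selection step at the heart of AEPO can be illustrated with a generic greedy quality-plus-diversity heuristic. This is a rough sketch under invented names and a simplified objective, not the paper's exact criterion:

```python
import numpy as np

def select_responses(emb: np.ndarray, quality: np.ndarray, k: int, lam: float = 0.5):
    """Greedily pick k responses, trading off quality against
    redundancy (max cosine similarity to anything already chosen).
    emb: (n, d) response embeddings; quality: (n,) scores."""
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    chosen = [int(np.argmax(quality))]           # seed with the best response
    while len(chosen) < k:
        sim = emb @ emb[chosen].T                # (n, |chosen|) similarities
        score = quality - lam * sim.max(axis=1)  # quality minus redundancy
        score[chosen] = -np.inf                  # forbid repeats
        chosen.append(int(np.argmax(score)))
    return chosen

rng = np.random.default_rng(0)
print(select_responses(rng.normal(size=(20, 8)), rng.random(20), k=4))
```

Preference annotation would then be restricted to the k selected responses, concentrating the labeling budget as the abstract describes.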
null | null | 2405.13551 | null | null | http://arxiv.org/pdf/2405.13551v1 | 2024-05-22T11:39:11Z | 2024-05-22T11:39:11Z | Large Language Models are Effective Priors for Causal Graph Discovery | Causal structure discovery from observations can be improved by integrating background knowledge provided by an expert to reduce the hypothesis space. Recently, Large Language Models (LLMs) have begun to be considered as sources of prior information given the low cost of querying them relative to a human expert. In this work, firstly, we propose a set of metrics for assessing LLM judgments for causal graph discovery independently of the downstream algorithm. Secondly, we systematically study a set of prompting designs that allows the model to specify priors about the structure of the causal graph. Finally, we present a general methodology for the integration of LLM priors in graph discovery algorithms, finding that they help improve performance on common-sense benchmarks and especially when used for assessing edge directionality. Our work highlights the potential as well as the shortcomings of the use of LLMs in this problem space. | [
"['Victor-Alexandru Darvariu' 'Stephen Hailes' 'Mirco Musolesi']"
] |
null | null | 2405.13557 | null | null | http://arxiv.org/pdf/2405.13557v1 | 2024-05-22T11:44:57Z | 2024-05-22T11:44:57Z | MotionCraft: Physics-based Zero-Shot Video Generation | Generating videos with realistic and physically plausible motion is one of the main recent challenges in computer vision. While diffusion models are achieving compelling results in image generation, video diffusion models are limited by heavy training and huge models, resulting in videos that are still biased to the training dataset. In this work we propose MotionCraft, a new zero-shot video generator to craft physics-based and realistic videos. MotionCraft is able to warp the noise latent space of an image diffusion model, such as Stable Diffusion, by applying an optical flow derived from a physics simulation. We show that warping the noise latent space results in coherent application of the desired motion while allowing the model to generate missing elements consistent with the scene evolution, which would otherwise result in artefacts or missing content if the flow was applied in the pixel space. We compare our method with the state-of-the-art Text2Video-Zero reporting qualitative and quantitative improvements, demonstrating the effectiveness of our approach to generate videos with finely-prescribed complex motion dynamics. Project page: https://mezzelfo.github.io/MotionCraft/ | [
"['Luca Savant Aira' 'Antonio Montanaro' 'Emanuele Aiello' 'Diego Valsesia'\n 'Enrico Magli']"
] |
null | null | 2405.13568 | null | null | http://arxiv.org/pdf/2405.13568v1 | 2024-05-22T12:05:17Z | 2024-05-22T12:05:17Z | CPE-Identifier: Automated CPE identification and CVE summaries
annotation with Deep Learning and NLP | With the drastic increase in the number of new vulnerabilities in the National Vulnerability Database (NVD) every year, the workload for NVD analysts to associate the Common Platform Enumeration (CPE) with the Common Vulnerabilities and Exposures (CVE) summaries becomes increasingly laborious and slow. The delay causes organisations, which depend on NVD for vulnerability management and security measurement, to be more vulnerable to zero-day attacks. Thus, it is essential to come up with a technique and tool to extract the CPEs in the CVE summaries accurately and quickly. In this work, we propose the CPE-Identifier system, an automated CPE annotating and extracting system for CVE summaries. The system can be used as a tool to identify CPE entities from new CVE text inputs. Moreover, we also automate the data generating and labeling processes using deep learning models. Due to the complexity of the CVE texts, new technical terminologies appear frequently. To identify novel words in future CVE texts, we apply Natural Language Processing (NLP) Named Entity Recognition (NER) to identify new technical jargon in the text. Our proposed model achieves an F1 score of 95.48%, an accuracy score of 99.13%, a precision of 94.83%, and a recall of 96.14%. We show that it outperforms prior works on automated CVE-CPE labeling by more than 9% on all metrics. | [
"['Wanyu Hu' 'Vrizlynn L. L. Thing']"
] |
null | null | 2405.13574 | null | null | http://arxiv.org/pdf/2405.13574v1 | 2024-05-22T12:11:12Z | 2024-05-22T12:11:12Z | Reinforcement Learning for Adaptive MCMC | An informal observation, made by several authors, is that the adaptive design of a Markov transition kernel has the flavour of a reinforcement learning task. Yet, to date it has remained unclear how to actually exploit modern reinforcement learning technologies for adaptive MCMC. The aim of this paper is to set out a general framework, called Reinforcement Learning Metropolis--Hastings, that is theoretically supported and empirically validated. Our principal focus is on learning fast-mixing Metropolis--Hastings transition kernels, which we cast as deterministic policies and optimise via a policy gradient. Control of the learning rate provably ensures conditions for ergodicity are satisfied. The methodology is used to construct a gradient-free sampler that outperforms a popular gradient-free adaptive Metropolis--Hastings algorithm on $\approx 90\%$ of tasks in the PosteriorDB benchmark. | [
"['Congye Wang' 'Wilson Chen' 'Heishiro Kanagawa' 'Chris. J. Oates']"
] |
null | null | 2405.13575 | null | null | http://arxiv.org/pdf/2405.13575v2 | 2024-05-28T02:14:18Z | 2024-05-22T12:12:20Z | PDMLP: Patch-based Decomposed MLP for Long-Term Time Series Forecasting | Recent studies have attempted to refine the Transformer architecture to demonstrate its effectiveness in Long-Term Time Series Forecasting (LTSF) tasks. Despite surpassing many linear forecasting models with ever-improving performance, we remain skeptical of Transformers as a solution for LTSF. We attribute the effectiveness of these models largely to the adopted Patch mechanism, which enhances sequence locality to an extent yet fails to fully address the loss of temporal information inherent to the permutation-invariant self-attention mechanism. Further investigation suggests that simple linear layers augmented with the Patch mechanism may outperform complex Transformer-based LTSF models. Moreover, diverging from models that use channel independence, our research underscores the importance of cross-variable interactions in enhancing the performance of multivariate time series forecasting. The interaction information between variables is highly valuable but has been misapplied in past studies, leading to suboptimal cross-variable models. Based on these insights, we propose a novel and simple Patch-based Decomposed MLP (PDMLP) for LTSF tasks. Specifically, we employ simple moving averages to extract smooth components and noise-containing residuals from time series data, engaging in semantic information interchange through channel mixing and specializing in random noise with channel independence processing. The PDMLP model consistently achieves state-of-the-art results on several real-world datasets. We hope this surprising finding will spur new research directions in the LTSF field and pave the way for more efficient and concise solutions. | [
"['Peiwang Tang' 'Weitai Zhang']"
] |
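The moving-average decomposition that PDMLP applies before channel mixing is easy to reproduce. A minimal sketch (edge padding is an assumption of this illustration, not a detail from the paper):

```python
import numpy as np

def decompose(x: np.ndarray, window: int = 25):
    """Split a 1-D series into a smooth moving-average component and a
    noise-containing residual, mirroring the decomposition described
    in the abstract above."""
    pad = window // 2
    padded = np.pad(x, (pad, window - 1 - pad), mode="edge")
    kernel = np.ones(window) / window
    smooth = np.convolve(padded, kernel, mode="valid")  # same length as x
    return smooth, x - smooth

t = np.linspace(0, 8 * np.pi, 512)
series = np.sin(t) + 0.1 * np.random.default_rng(0).normal(size=t.size)
trend, residual = decompose(series)
```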
null | null | 2405.13584 | null | null | http://arxiv.org/pdf/2405.13584v1 | 2024-05-22T12:27:24Z | 2024-05-22T12:27:24Z | Emulating Full Client Participation: A Long-Term Client Selection
Strategy for Federated Learning | Client selection significantly affects the system convergence efficiency and is a crucial problem in federated learning. Existing methods often select clients by evaluating each round individually and overlook the necessity for long-term optimization, resulting in suboptimal performance and potential fairness issues. In this study, we propose a novel client selection strategy designed to emulate the performance achieved with full client participation. In a single round, we select clients by minimizing the gradient-space estimation error between the client subset and the full client set. In multi-round selection, we introduce a novel individual fairness constraint, which ensures that clients with similar data distributions have similar frequencies of being selected. This constraint guides the client selection process from a long-term perspective. We employ Lyapunov optimization and submodular functions to efficiently identify the optimal subset of clients, and provide a theoretical analysis of the convergence ability. Experiments demonstrate that the proposed strategy significantly improves both accuracy and fairness compared to previous methods while also exhibiting efficiency by incurring minimal time overhead. | [
"['Qingming Li' 'Juzheng Miao' 'Puning Zhao' 'Li Zhou' 'Shouling Ji'\n 'Bowen Zhou' 'Furui Liu']"
] |
null | null | 2405.13586 | null | null | http://arxiv.org/pdf/2405.13586v1 | 2024-05-22T12:30:25Z | 2024-05-22T12:30:25Z | Bond Graphs for multi-physics informed Neural Networks for multi-variate
time series | In the trend of hybrid Artificial Intelligence (AI) techniques, Physics-Informed Machine Learning has seen a growing interest. It operates mainly by imposing a data, learning or inductive bias with simulation data, Partial Differential Equations or equivariance and invariance properties. While these models have shown great success on tasks involving one physical domain such as fluid dynamics, existing methods still struggle on tasks with complex multi-physical and multi-domain phenomena. To address this challenge, we propose to leverage Bond Graphs, a multi-physics modeling approach, together with Graph Neural Networks. We thus propose the Neural Bond Graph Encoder (NBgE), a model-agnostic physics-informed encoder tailored for multi-physics systems. It provides a unified framework for any multi-physics informed AI with a graph encoder readable by any deep learning model. Our experiments on two challenging multi-domain physical systems - a Direct Current Motor and the Respiratory system - demonstrate the effectiveness of our approach on a multi-variate time series forecasting task. | [
"['Alexis-Raja Brachet' 'Pierre-Yves Richard' 'Céline Hudelot']"
] |
null | null | 2405.13587 | null | null | http://arxiv.org/pdf/2405.13587v1 | 2024-05-22T12:34:04Z | 2024-05-22T12:34:04Z | Exact Gradients for Stochastic Spiking Neural Networks Driven by Rough
Signals | We introduce a mathematically rigorous framework based on rough path theory to model stochastic spiking neural networks (SSNNs) as stochastic differential equations with event discontinuities (Event SDEs) and driven by càdlàg rough paths. Our formalism is general enough to allow for potential jumps to be present both in the solution trajectories as well as in the driving noise. We then identify a set of sufficient conditions ensuring the existence of pathwise gradients of solution trajectories and event times with respect to the network's parameters and show how these gradients satisfy a recursive relation. Furthermore, we introduce a general-purpose loss function defined by means of a new class of signature kernels indexed on càdlàg rough paths and use it to train SSNNs as generative models. We provide an end-to-end autodifferentiable solver for Event SDEs and make its implementation available as part of the $\texttt{diffrax}$ library. Our framework is, to our knowledge, the first enabling gradient-based training of SSNNs with noise affecting both the spike timing and the network's dynamics. | [
"['Christian Holberg' 'Cristopher Salvi']"
] |
null | null | 2405.13592 | null | null | http://arxiv.org/pdf/2405.13592v2 | 2024-05-27T09:43:50Z | 2024-05-22T12:40:57Z | Almost sure convergence rates of stochastic gradient methods under
gradient domination | Stochastic gradient methods are among the most important algorithms in training machine learning problems. While classical assumptions such as strong convexity allow a simple analysis, they are rarely satisfied in applications. In recent years, global and local gradient domination properties have been shown to be a more realistic replacement of strong convexity. They were proved to hold in diverse settings such as (simple) policy gradient methods in reinforcement learning and training of deep neural networks with analytic activation functions. We prove almost sure convergence rates $f(X_n)-f^*\in o\big(n^{-\frac{1}{4\beta-1}+\epsilon}\big)$ of the last iterate for stochastic gradient descent (with and without momentum) under global and local $\beta$-gradient domination assumptions. The almost sure rates get arbitrarily close to recent rates in expectation. Finally, we demonstrate how to apply our results to the training task in both supervised and reinforcement learning. | [
"['Simon Weissmann' 'Sara Klein' 'Waïss Azizian' 'Leif Döring']"
] |
null | null | 2405.13599 | null | null | http://arxiv.org/pdf/2405.13599v1 | 2024-05-22T12:50:56Z | 2024-05-22T12:50:56Z | LogRCA: Log-based Root Cause Analysis for Distributed Services | To assist IT service developers and operators in managing their increasingly complex service landscapes, there is a growing effort to leverage artificial intelligence in operations. To speed up troubleshooting, log anomaly detection has received much attention in particular, dealing with the identification of log events that indicate the reasons for a system failure. However, faults often propagate extensively within systems, which can result in a large number of anomalies being detected by existing approaches. In this case, it can remain very challenging for users to quickly identify the actual root cause of a failure. We propose LogRCA, a novel method for identifying a minimal set of log lines that together describe a root cause. LogRCA uses a semi-supervised learning approach to deal with rare and unknown errors and is designed to handle noisy data. We evaluated our approach on a large-scale production log data set of 44.3 million log lines, which contains 80 failures, whose root causes were labeled by experts. LogRCA consistently outperforms baselines based on deep learning and statistical analysis in terms of precision and recall to detect candidate root causes. In addition, we investigated the impact of our deployed data balancing approach, demonstrating that it considerably improves performance on rare failures. | [
"['Thorsten Wittkopp' 'Philipp Wiesner' 'Odej Kao']"
] |
null | null | 2405.13602 | null | null | http://arxiv.org/pdf/2405.13602v1 | 2024-05-22T12:53:12Z | 2024-05-22T12:53:12Z | COTET: Cross-view Optimal Transport for Knowledge Graph Entity Typing | Knowledge graph entity typing (KGET) aims to infer missing entity type instances in knowledge graphs. Previous research has predominantly centered around leveraging contextual information associated with entities, which provides valuable clues for inference. However, they have long ignored the dual nature of information inherent in entities, encompassing both high-level coarse-grained cluster knowledge and fine-grained type knowledge. This paper introduces Cross-view Optimal Transport for knowledge graph Entity Typing (COTET), a method that effectively incorporates the information on how types are clustered into the representation of entities and types. COTET comprises three modules: i) Multi-view Generation and Encoder, which captures structured knowledge at different levels of granularity through entity-type, entity-cluster, and type-cluster-type perspectives; ii) Cross-view Optimal Transport, transporting view-specific embeddings to a unified space by minimizing the Wasserstein distance from a distributional alignment perspective; iii) Pooling-based Entity Typing Prediction, employing a mixture pooling mechanism to aggregate prediction scores from diverse neighbors of an entity. Additionally, we introduce a distribution-based loss function to mitigate the occurrence of false negatives during training. Extensive experiments demonstrate the effectiveness of COTET when compared to existing baselines. | [
"['Zhiwei Hu' 'Víctor Gutiérrez-Basulto' 'Zhiliang Xiang' 'Ru Li'\n 'Jeff Z. Pan']"
] |
null | null | 2405.13609 | null | null | http://arxiv.org/pdf/2405.13609v1 | 2024-05-22T13:01:37Z | 2024-05-22T13:01:37Z | Tackling Decision Processes with Non-Cumulative Objectives using
Reinforcement Learning | Markov decision processes (MDPs) are used to model a wide variety of applications ranging from game playing over robotics to finance. Their optimal policy typically maximizes the expected sum of rewards given at each step of the decision process. However, a large class of problems does not fit straightforwardly into this framework: Non-cumulative Markov decision processes (NCMDPs), where instead of the expected sum of rewards, the expected value of an arbitrary function of the rewards is maximized. Example functions include the maximum of the rewards or their mean divided by their standard deviation. In this work, we introduce a general mapping of NCMDPs to standard MDPs. This allows all techniques developed to find optimal policies for MDPs, such as reinforcement learning or dynamic programming, to be directly applied to the larger class of NCMDPs. Focusing on reinforcement learning, we show applications in a diverse set of tasks, including classical control, portfolio optimization in finance, and discrete optimization problems. Given our approach, we can improve both final performance and training time compared to relying on standard MDPs. | [
"['Maximilian Nägele' 'Jan Olle' 'Thomas Fösel' 'Remmy Zen'\n 'Florian Marquardt']"
] |
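One concrete instance of the NCMDP-to-MDP mapping sketched above: for f(rewards) = max, augment the state with the running maximum and emit its increments as rewards, so the ordinary cumulative return telescopes to the maximum. The wrapper below is an illustrative sketch assuming a gym-style reset/step API, not the paper's general construction:

```python
class MaxRewardWrapper:
    """Turn an NCMDP with objective f(rewards) = max into a standard
    MDP: observations are augmented with the running max (keeping the
    process Markovian) and the shaped reward is the running max's
    increment, so sum(shaped) == max(r_1, ..., r_T)."""

    def __init__(self, env):
        self.env = env
        self._max = None  # running statistic; None until the first reward

    def reset(self):
        self._max = None
        return (self.env.reset(), 0.0)  # 0.0 is a placeholder statistic

    def step(self, action):
        obs, r, done, info = self.env.step(action)
        if self._max is None:
            shaped, self._max = r, r          # first reward seeds the max
        else:
            shaped = max(0.0, r - self._max)  # increment of the running max
            self._max = max(self._max, r)
        return (obs, self._max), shaped, done, info
```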
null | null | 2405.13629 | null | null | http://arxiv.org/pdf/2405.13629v1 | 2024-05-22T13:26:26Z | 2024-05-22T13:26:26Z | Maximum Entropy Reinforcement Learning via Energy-Based Normalizing Flow | Existing Maximum-Entropy (MaxEnt) Reinforcement Learning (RL) methods for continuous action spaces are typically formulated based on actor-critic frameworks and optimized through alternating steps of policy evaluation and policy improvement. In the policy evaluation steps, the critic is updated to capture the soft Q-function. In the policy improvement steps, the actor is adjusted in accordance with the updated soft Q-function. In this paper, we introduce a new MaxEnt RL framework modeled using Energy-Based Normalizing Flows (EBFlow). This framework integrates the policy evaluation steps and the policy improvement steps, resulting in a single objective training process. Our method enables the calculation of the soft value function used in the policy evaluation target without Monte Carlo approximation. Moreover, this design supports the modeling of multi-modal action distributions while facilitating efficient action sampling. To evaluate the performance of our method, we conducted experiments on the MuJoCo benchmark suite and a number of high-dimensional robotic tasks simulated by Omniverse Isaac Gym. The evaluation results demonstrate that our method achieves superior performance compared to widely-adopted representative baselines. | [
"['Chen-Hao Chao' 'Chien Feng' 'Wei-Fang Sun' 'Cheng-Kuang Lee' 'Simon See'\n 'Chun-Yi Lee']"
] |
null | null | 2405.13632 | null | null | http://arxiv.org/pdf/2405.13632v1 | 2024-05-22T13:30:01Z | 2024-05-22T13:30:01Z | Task agnostic continual learning with Pairwise layer architecture | Most of the dominant approaches to continual learning are based on either memory replay, parameter isolation, or regularization techniques that require task boundaries to calculate task statistics. We propose a static architecture-based method that doesn't use any of these. We show that we can improve the continual learning performance by replacing the final layer of our networks with our pairwise interaction layer. The pairwise interaction layer uses sparse representations from a Winner-take-all style activation function to find the relevant correlations in the hidden layer representations. The networks using this architecture show competitive performance in MNIST and FashionMNIST-based continual image classification experiments. We demonstrate this in an online streaming continual learning setup where the learning system cannot access task labels or boundaries. | [
"['Santtu Keskinen']"
] |
null | null | 2405.13637 | null | null | http://arxiv.org/pdf/2405.13637v2 | 2024-05-24T13:14:40Z | 2024-05-22T13:36:48Z | Curriculum Direct Preference Optimization for Diffusion and Consistency
Models | Direct Preference Optimization (DPO) has been proposed as an effective and efficient alternative to reinforcement learning from human feedback (RLHF). In this paper, we propose a novel and enhanced version of DPO based on curriculum learning for text-to-image generation. Our method is divided into two training stages. First, a ranking of the examples generated for each prompt is obtained by employing a reward model. Then, increasingly difficult pairs of examples are sampled and provided to a text-to-image generative (diffusion or consistency) model. Generated samples that are far apart in the ranking are considered to form easy pairs, while those that are close in the ranking form hard pairs. In other words, we use the rank difference between samples as a measure of difficulty. The sampled pairs are split into batches according to their difficulty levels, which are gradually used to train the generative model. Our approach, Curriculum DPO, is compared against state-of-the-art fine-tuning approaches on three benchmarks, outperforming the competing methods in terms of text alignment, aesthetics and human preference. Our code is available at https://anonymous.4open.science/r/Curriculum-DPO-EE14. | [
"['Florinel-Alin Croitoru' 'Vlad Hondru' 'Radu Tudor Ionescu' 'Nicu Sebe'\n 'Mubarak Shah']"
] |
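The curriculum itself, ordering preference pairs by rank gap, takes only a few lines. A hedged sketch with invented names (reward scoring and batch scheduling omitted):

```python
def curriculum_pairs(samples, rewards):
    """Form all (winner, loser) pairs for one prompt and order them
    easy-to-hard, using the rank difference under the reward scores
    as the difficulty proxy described in the abstract above."""
    order = sorted(range(len(samples)), key=lambda i: rewards[i], reverse=True)
    pairs = []
    for a in range(len(order)):
        for b in range(a + 1, len(order)):
            gap = b - a  # large gap = far apart in ranking = easy pair
            pairs.append((samples[order[a]], samples[order[b]], gap))
    pairs.sort(key=lambda p: -p[2])  # easy pairs first
    return pairs

print(curriculum_pairs(["s0", "s1", "s2", "s3"], [0.1, 0.9, 0.4, 0.7])[:3])
```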
null | null | 2405.13639 | null | null | http://arxiv.org/pdf/2405.13639v1 | 2024-05-22T13:38:47Z | 2024-05-22T13:38:47Z | On Hardware-efficient Inference in Probabilistic Circuits | Probabilistic circuits (PCs) offer a promising avenue to perform embedded reasoning under uncertainty. They support efficient and exact computation of various probabilistic inference tasks by design. Hence, hardware-efficient computation of PCs is highly interesting for edge computing applications. As computations in PCs are based on arithmetic with probability values, they are typically performed in the log domain to avoid underflow. Unfortunately, performing the log operation on hardware is costly. Hence, prior work has focused on computations in the linear domain, resulting in high resolution and energy requirements. This work proposes the first dedicated approximate computing framework for PCs that allows for low-resolution logarithm computations. We leverage Addition As Int, resulting in linear PC computation with simple hardware elements. Further, we provide a theoretical approximation error analysis and present an error compensation mechanism. Empirically, our method obtains up to 357x and 649x energy reduction on custom hardware for evidence and MAP queries respectively with little or no computational error. | [
"['Lingyun Yao' 'Martin Trapp' 'Jelin Leslin' 'Gaurav Singh' 'Peng Zhang'\n 'Karthekeyan Periasamy' 'Martin Andraud']"
] |
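The "Addition As Int" trick the abstract leverages rests on a classical observation: the IEEE-754 bit pattern of a positive float is approximately a scaled and biased log2 of its value, so integer addition of bit patterns approximates multiplication. The software sketch below illustrates that idea (this is our reading of the named technique; worst-case relative error is on the order of ten percent, and only positive normal floats are handled):

```python
import struct

def float_bits(x: float) -> int:
    return struct.unpack("<I", struct.pack("<f", x))[0]

def bits_to_float(b: int) -> float:
    return struct.unpack("<f", struct.pack("<I", b & 0xFFFFFFFF))[0]

BIAS = 127 << 23  # exponent bias, expressed in bit-space

def approx_mul(a: float, b: float) -> float:
    """Approximate a*b with one integer addition: bit patterns act as
    cheap fixed-point logarithms, so adding them (minus one bias)
    approximates a multiply, i.e. an addition in the log domain."""
    return bits_to_float(float_bits(a) + float_bits(b) - BIAS)

print(approx_mul(0.25, 0.5), 0.25 * 0.5)  # exact here: 0.125 0.125
```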
null | null | 2405.13640 | null | null | http://arxiv.org/pdf/2405.13640v1 | 2024-05-22T13:39:33Z | 2024-05-22T13:39:33Z | Knowledge Graph Reasoning with Self-supervised Reinforcement Learning | Reinforcement learning (RL) is an effective method of finding reasoning pathways in incomplete knowledge graphs (KGs). To overcome the challenges of a large action space, a self-supervised pre-training method is proposed to warm up the policy network before the RL training stage. To alleviate the distributional mismatch issue in general self-supervised RL (SSRL), in our supervised learning (SL) stage, the agent selects actions based on the policy network and learns from generated labels; this self-generation of labels is the intuition behind the name self-supervised. With this training framework, the information density of our SL objective is increased and the agent is prevented from getting stuck with the early rewarded paths. Our self-supervised RL (SSRL) method improves the performance of RL by pairing it with the wide coverage achieved by SL during pretraining, since the breadth of the SL objective makes it infeasible to train an agent with that alone. We show that our SSRL model meets or exceeds current state-of-the-art results on all Hits@k and mean reciprocal rank (MRR) metrics on four large benchmark KG datasets. This SSRL method can be used as a plug-in for any RL architecture for a KGR task. We adopt two RL architectures, i.e., MINERVA and MultiHopKG as our baseline RL models and experimentally show that our SSRL model consistently outperforms both baselines on all of these four KG reasoning tasks. Full code for the paper available at https://github.com/owenonline/Knowledge-Graph-Reasoning-with-Self-supervised-Reinforcement-Learning. | [
"['Ying Ma' 'Owen Burns' 'Mingqiu Wang' 'Gang Li' 'Nan Du'\n 'Laurent El Shafey' 'Liqiang Wang' 'Izhak Shafran' 'Hagen Soltau']"
] |
null | null | 2405.13646 | null | null | http://arxiv.org/pdf/2405.13646v1 | 2024-05-22T13:50:42Z | 2024-05-22T13:50:42Z | A Transformer variant for multi-step forecasting of water level and
hydrometeorological sensitivity analysis based on explainable artificial
intelligence technology | Understanding the combined influences of meteorological and hydrological factors on water level and flood events is essential, particularly in today's changing climate environments. Transformer, as one of the cutting-edge deep learning methods, offers an effective approach to modeling intricate nonlinear processes, enabling the extraction of key features and the prediction of water levels. EXplainable Artificial Intelligence (XAI) methods play important roles in enhancing the understanding of how different factors impact water levels. In this study, we propose a Transformer variant by integrating a sparse attention mechanism and introducing a nonlinear output layer for the decoder module. The variant model is utilized for multi-step forecasting of water level, by considering meteorological and hydrological factors simultaneously. It is shown that the variant model outperforms the traditional Transformer across different lead times with respect to various evaluation metrics. The sensitivity analyses based on XAI technology demonstrate the significant influence of meteorological factors on water level evolution, in which temperature is shown to be the most dominant meteorological factor. Therefore, incorporating both meteorological and hydrological factors is necessary for reliable hydrological prediction and flood prevention. In the meantime, XAI technology provides insights into certain predictions, which is beneficial for understanding the prediction results and evaluating their reasonableness. | [
"['Mingyu Liu' 'Nana Bao' 'Xingting Yan' 'Chenyang Li' 'Kai Peng']"
] |
null | null | 2405.13666 | null | null | http://arxiv.org/pdf/2405.13666v1 | 2024-05-22T14:07:25Z | 2024-05-22T14:07:25Z | Generalization Bounds for Dependent Data using Online-to-Batch
Conversion | In this work, we give generalization bounds of statistical learning algorithms trained on samples drawn from a dependent data source, both in expectation and with high probability, using the Online-to-Batch conversion paradigm. We show that the generalization error of statistical learners in the dependent data setting is equivalent to the generalization error of statistical learners in the i.i.d. setting up to a term that depends on the decay rate of the underlying mixing stochastic process and is independent of the complexity of the statistical learner. Our proof techniques involve defining a new notion of stability of online learning algorithms based on Wasserstein distances and employing "near-martingale" concentration bounds for dependent random variables to arrive at appropriate upper bounds for the generalization error of statistical learners trained on dependent data. | [
"['Sagnik Chatterjee' 'Manuj Mukherjee' 'Alhad Sethi']"
] |
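For readers unfamiliar with the Online-to-Batch paradigm the abstract builds on: an online learner is run once over the sample and its averaged iterate is returned as the statistical predictor, so online regret controls statistical risk. A toy sketch (linear predictor and squared loss are assumptions of this illustration):

```python
import numpy as np

def online_to_batch(stream, lr: float = 0.1) -> np.ndarray:
    """Run online gradient descent over (x, y) pairs and return the
    average of the iterates, the classical Online-to-Batch conversion."""
    d = stream[0][0].shape[0]
    w = np.zeros(d)
    avg = np.zeros(d)
    for t, (x, y) in enumerate(stream, start=1):
        grad = 2.0 * (w @ x - y) * x     # gradient of (w.x - y)^2
        w -= lr / np.sqrt(t) * grad      # online update
        avg += (w - avg) / t             # running mean of iterates w_1..w_t
    return avg

rng = np.random.default_rng(1)
w_true = np.array([1.0, -2.0])
stream = [(x, float(x @ w_true)) for x in rng.normal(size=(200, 2))]
print(online_to_batch(stream))  # roughly [1, -2]
```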
null | null | 2405.13670 | null | null | http://arxiv.org/pdf/2405.13670v1 | 2024-05-22T14:11:10Z | 2024-05-22T14:11:10Z | GNN-based Anomaly Detection for Encoded Network Traffic | This early research report explores the possibility of using Graph Neural Networks (GNNs) for anomaly detection in internet traffic data enriched with information. While recent studies have made significant progress in using GNNs for anomaly detection in finance, multivariate time-series, and biochemistry domains, there is limited research in the context of network flow data. In this report, we explore the idea of leveraging information-enriched features extracted from network flow packet data to improve the performance of GNNs in anomaly detection. The idea is to utilize feature encoding (binary, numerical, and string) to capture the relationships between the network components, allowing the GNN to learn latent relationships and better identify anomalies. | [
"['Anasuya Chattopadhyay' 'Daniel Reti' 'Hans D. Schotten']"
] |
null | null | 2405.13677 | null | null | http://arxiv.org/pdf/2405.13677v1 | 2024-05-22T14:20:56Z | 2024-05-22T14:20:56Z | Naturally Private Recommendations with Determinantal Point Processes | Often we consider machine learning models or statistical analysis methods which we endeavour to alter, by introducing a randomized mechanism, to make the model conform to a differential privacy constraint. However, certain models can often be implicitly differentially private or require significantly fewer alterations. In this work, we discuss Determinantal Point Processes (DPPs), which are dispersion models that balance recommendations based on both the popularity and the diversity of the content. We introduce DPPs, derive and discuss the alterations required for them to satisfy $\epsilon$-differential privacy and provide an analysis of their sensitivity. We conclude by proposing simple alternatives to DPPs which would make them more efficient with respect to their privacy-utility trade-off. | [
"['Jack Fitzsimons' 'Agustín Freitas Pasqualini' 'Robert Pisarczyk'\n 'Dmitrii Usynin']"
] |
null | null | 2405.13682 | null | null | http://arxiv.org/pdf/2405.13682v1 | 2024-05-22T14:25:02Z | 2024-05-22T14:25:02Z | Constructive Universal Approximation Theorems for Deep Joint-Equivariant
Networks by Schur's Lemma | We present a unified constructive universal approximation theorem covering a wide range of learning machines, including both shallow and deep neural networks, based on group representation theory. Constructive here means that the distribution of parameters is given in a closed-form expression (called the ridgelet transform). Contrary to the case of shallow models, expressive power analysis of deep models has been conducted in a case-by-case manner. Recently, Sonoda et al. (2023a,b) developed a systematic method to show a constructive approximation theorem from scalar-valued joint-group-invariant feature maps, covering a formal deep network. However, each hidden layer was formalized as an abstract group action, so it was not possible to cover real deep networks defined by composites of nonlinear activation functions. In this study, we extend the method to vector-valued joint-group-equivariant feature maps, so as to cover such real networks. | [
"['Sho Sonoda' 'Yuka Hashimoto' 'Isao Ishikawa' 'Masahiro Ikeda']"
] |
null | null | 2405.13692 | null | null | http://arxiv.org/pdf/2405.13692v1 | 2024-05-22T14:38:48Z | 2024-05-22T14:38:48Z | Challenging Gradient Boosted Decision Trees with Tabular Transformers
for Fraud Detection at Booking.com | Transformer-based neural networks, empowered by Self-Supervised Learning (SSL), have demonstrated unprecedented performance across various domains. However, related literature suggests that tabular Transformers may struggle to outperform classical Machine Learning algorithms, such as Gradient Boosted Decision Trees (GBDT). In this paper, we aim to challenge GBDTs with tabular Transformers on a typical task faced in e-commerce, namely fraud detection. Our study is additionally motivated by the problem of selection bias, often occurring in real-life fraud detection systems. It is caused by the production system affecting which subset of traffic becomes labeled. This issue is typically addressed by randomly sampling a small part of the whole production data, referred to as a Control Group. This subset follows a target distribution of production data and therefore is usually preferred for training classification models with standard ML algorithms. Our methodology leverages the capabilities of Transformers to learn transferable representations using all available data by means of SSL, giving it an advantage over classical methods. Furthermore, we conduct large-scale experiments, pre-training tabular Transformers on vast amounts of data instances and fine-tuning them on smaller target datasets. The proposed approach outperforms heavily tuned GBDTs by a considerable margin in the Average Precision (AP) score. Pre-trained models show more consistent performance than the ones trained from scratch when fine-tuning data is limited. Moreover, they require noticeably less labeled data for reaching performance comparable to their GBDT competitor that utilizes the whole dataset. | [
"['Sergei Krutikov' 'Bulat Khaertdinov' 'Rodion Kiriukhin'\n 'Shubham Agrawal' 'Kees Jan De Vries']"
] |
null | null | 2405.13693 | null | null | http://arxiv.org/pdf/2405.13693v1 | 2024-05-22T14:39:07Z | 2024-05-22T14:39:07Z | Uncovering Algorithmic Discrimination: An Opportunity to Revisit the
Comparator | Causal reasoning, in particular counterfactual reasoning, plays a central role in testing for discrimination. Counterfactual reasoning materializes in what is known as the counterfactual model of discrimination: when testing for discrimination, we compare the discrimination complainant with the discrimination comparator, where the comparator is a similar (or similarly situated) profile to that of the complainant, used for testing the complainant's discrimination claim. In this paper, we revisit the comparator by presenting two kinds of comparators based on the sort of causal intervention we want to represent. We present the ceteris paribus and the mutatis mutandis comparator, where the former is the standard and the latter is a new kind of comparator. We argue for the use of the mutatis mutandis comparator, which is built on the notion of fairness given the difference, for testing future algorithmic discrimination cases. | [
"['Jose M. Alvarez' 'Salvatore Ruggieri']"
] |
null | null | 2405.13698 | null | null | http://arxiv.org/pdf/2405.13698v1 | 2024-05-22T14:43:02Z | 2024-05-22T14:43:02Z | How to set AdamW's weight decay as you scale model and dataset size | We show that weights learned by AdamW can be understood as an exponential moving average (EMA) of recent updates. This gives critical insights for how to set the weight decay in AdamW, and how the weight decay should scale with model and dataset size. In particular, the key hyperparameter for an exponential moving average is the EMA timescale. Intuitively, the EMA timescale can be understood as the number of recent iterations the EMA averages over. Given a fixed learning rate, there is a one-to-one mapping from the EMA timescale to the usual weight decay hyperparameter. Thus, choosing an EMA timescale implicitly sets the weight decay. Importantly, there are natural guidelines for sensible values for the EMA timescale: we need to average over all datapoints, so the EMA timescale should not be (much) smaller than 1 epoch, and we need to forget early updates, so the EMA timescale should not be (much) bigger than the total number of training epochs. In our experiments, we find that optimal EMA timescales are consistent with these guidelines, as are the hyperparameters chosen in recent large-scale LLM pretraining runs (e.g. Llama 1+2 and Stable LM). Critically, these guidelines suggest that the optimal EMA timescale should not change (much) as we scale the model and dataset. That implies that as the dataset size increases, the optimal weight decay should fall. Moreover, as the model size increases, the optimal weight decay should also increase (if we follow the muP recommendation for scaling the learning rate). | [
"['Xi Wang' 'Laurence Aitchison']"
] |
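The one-to-one mapping stated in the abstract can be turned into a worked rule of thumb. Under our reading, AdamW shrinks weights by a factor of (1 - lr*wd) per step, giving an EMA timescale of roughly 1/(lr*wd) iterations; fixing the timescale in epochs and inverting yields the decay. A small sketch (the exact constants are an assumption of this illustration):

```python
def weight_decay_for_timescale(lr: float, epochs: float, steps_per_epoch: int) -> float:
    """Choose AdamW's weight decay so the implied EMA timescale
    1 / (lr * weight_decay) equals the target number of iterations."""
    tau_iters = epochs * steps_per_epoch
    return 1.0 / (lr * tau_iters)

# Average over ~10 epochs of a 1000-step/epoch run at lr = 1e-3:
print(weight_decay_for_timescale(1e-3, epochs=10, steps_per_epoch=1000))  # 0.1
# Doubling the dataset doubles steps_per_epoch and halves the optimal
# decay, matching the scaling behaviour described in the abstract.
print(weight_decay_for_timescale(1e-3, epochs=10, steps_per_epoch=2000))  # 0.05
```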
null | null | 2405.13699 | null | null | http://arxiv.org/pdf/2405.13699v1 | 2024-05-22T14:43:29Z | 2024-05-22T14:43:29Z | Uncertainty-aware Evaluation of Auxiliary Anomalies with the Expected
Anomaly Posterior | Anomaly detection is the task of identifying examples that do not behave as expected. Because anomalies are rare and unexpected events, collecting real anomalous examples is often challenging in several applications. In addition, learning an anomaly detector with limited (or no) anomalies often yields poor prediction performance. One option is to employ auxiliary synthetic anomalies to improve the model training. However, synthetic anomalies may be of poor quality: anomalies that are unrealistic or indistinguishable from normal samples may deteriorate the detector's performance. Unfortunately, no existing methods quantify the quality of auxiliary anomalies. We fill in this gap and propose the expected anomaly posterior (EAP), an uncertainty-based score function that measures the quality of auxiliary anomalies by quantifying the total uncertainty of an anomaly detector. Experimentally on 40 benchmark datasets of images and tabular data, we show that EAP outperforms 12 adapted data quality estimators in the majority of cases. | [
"['Lorenzo Perini' 'Maja Rudolph' 'Sabrina Schmedding' 'Chen Qiu']"
] |
null | null | 2405.13707 | null | null | http://arxiv.org/pdf/2405.13707v1 | 2024-05-22T14:57:09Z | 2024-05-22T14:57:09Z | Rethinking and Accelerating Graph Condensation: A Training-Free Approach
with Class Partition | The increasing prevalence of large-scale graphs poses a significant challenge for graph neural network training, attributed to their substantial computational requirements. In response, graph condensation (GC) emerges as a promising data-centric solution aiming to substitute the large graph with a small yet informative condensed graph to facilitate data-efficient GNN training. However, existing GC methods suffer from intricate optimization processes, necessitating excessive computing resources. In this paper, we revisit existing GC optimization strategies and identify two pervasive issues: 1. various GC optimization strategies converge to class-level node feature matching between the original and condensed graphs, making the optimization target coarse-grained despite the complex computations; 2. to bridge the original and condensed graphs, existing GC methods rely on a Siamese graph network architecture that requires time-consuming bi-level optimization with iterative gradient computations. To overcome these issues, we propose a training-free GC framework termed Class-partitioned Graph Condensation (CGC), which refines the node feature matching from the class-to-class paradigm into a novel class-to-node paradigm. Remarkably, this refinement also simplifies the GC optimization as a class partition problem, which can be efficiently solved by any clustering methods. Moreover, CGC incorporates a pre-defined graph structure to enable a closed-form solution for condensed node features, eliminating the back-and-forth gradient descent in existing GC approaches without sacrificing accuracy. Extensive experiments demonstrate that CGC achieves state-of-the-art performance with a more efficient condensation process. For instance, compared with the seminal GC method (i.e., GCond), CGC condenses the largest Reddit graph within 10 seconds, achieving a 2,680X speedup and a 1.4% accuracy increase. | [
"['Xinyi Gao' 'Tong Chen' 'Wentao Zhang' 'Junliang Yu' 'Guanhua Ye'\n 'Quoc Viet Hung Nguyen' 'Hongzhi Yin']"
] |
null | null | 2405.13710 | null | null | http://arxiv.org/pdf/2405.13710v1 | 2024-05-22T14:59:50Z | 2024-05-22T14:59:50Z | Optimizing Lymphocyte Detection in Breast Cancer Whole Slide Imaging
through Data-Centric Strategies | Efficient and precise quantification of lymphocytes in histopathology slides is imperative for the characterization of the tumor microenvironment and immunotherapy response insights. We developed a data-centric optimization pipeline that attains strong lymphocyte detection performance using an off-the-shelf YOLOv5 model, without any architectural modifications. Our contribution, which relies on strategic dataset augmentation, includes novel biological upsampling and custom visual cohesion transformations tailored to the unique properties of tissue imagery, and enables dramatic improvements in model performance. Our optimization reveals a pivotal realization: given intensive customization, standard computational pathology models can achieve high-capability biomarker development without increasing the architectural complexity. We showcase the value of this approach in the context of breast cancer, where our strategies lead to strong lymphocyte detection performance, echoing a broadly impactful paradigm shift. Furthermore, our data curation techniques enable crucial histological analysis benchmarks, highlighting improved generalization potential. | [
"['Amine Marzouki' 'Zhuxian Guo' 'Qinghe Zeng' 'Camille Kurtz'\n 'Nicolas Loménie']"
] |
null | null | 2405.13711 | null | null | http://arxiv.org/pdf/2405.13711v1 | 2024-05-22T15:01:05Z | 2024-05-22T15:01:05Z | VAE-Var: Variational-Autoencoder-Enhanced Variational Assimilation | Data assimilation refers to a set of algorithms designed to compute the optimal estimate of a system's state by refining the prior prediction (known as background states) using observed data. Variational assimilation methods rely on the maximum likelihood approach to formulate a variational cost, with the optimal state estimate derived by minimizing this cost. Although traditional variational methods have achieved great success and have been widely used in many numerical weather prediction centers, they generally assume Gaussian errors in the background states, which limits the accuracy of these algorithms due to the inherent inaccuracies of this assumption. In this paper, we introduce VAE-Var, a novel variational algorithm that leverages a variational autoencoder (VAE) to model a non-Gaussian estimate of the background error distribution. We theoretically derive the variational cost under the VAE estimation and present the general formulation of VAE-Var; we implement VAE-Var on low-dimensional chaotic systems and demonstrate through experimental results that VAE-Var consistently outperforms traditional variational assimilation methods in terms of accuracy across various observational settings. | [
"['Yi Xiao' 'Qilong Jia' 'Wei Xue' 'Lei Bai']"
] |
null | null | 2405.13712 | null | null | http://arxiv.org/pdf/2405.13712v2 | 2024-07-08T21:12:54Z | 2024-05-22T15:04:06Z | Learning Diffusion Priors from Observations by Expectation Maximization | Diffusion models recently proved to be remarkable priors for Bayesian inverse problems. However, training these models typically requires access to large amounts of clean data, which could prove difficult in some settings. In this work, we present a novel method based on the expectation-maximization algorithm for training diffusion models from incomplete and noisy observations only. Unlike previous works, our method leads to proper diffusion models, which is crucial for downstream tasks. As part of our method, we propose and motivate a new posterior sampling scheme for unconditional diffusion models. We present empirical evidence supporting the effectiveness of our method. | [
"['François Rozet' 'Gérôme Andry' 'François Lanusse' 'Gilles Louppe']"
] |
null | null | 2405.13718 | null | null | http://arxiv.org/pdf/2405.13718v1 | 2024-05-22T15:09:41Z | 2024-05-22T15:09:41Z | Upper and lower memory capacity bounds of transformers for next-token
prediction | Given a sequence of tokens, such as words, the task of next-token prediction is to predict the next-token conditional probability distribution. Decoder-only transformers have become effective models for this task, but their properties are still not fully understood. In particular, the largest number of distinct context sequences that a decoder-only transformer can interpolate next-token distributions for has not been established. To fill this gap, we prove upper and lower bounds on this number, which are equal up to a multiplicative constant. We prove these bounds in the general setting where next-token distributions can be arbitrary as well as the empirical setting where they are calculated from a finite number of document sequences. Our lower bounds are for one-layer transformers and our proofs highlight an important injectivity property satisfied by self-attention. Furthermore, we provide numerical evidence that the minimal number of parameters for memorization is sufficient for being able to train the model to the entropy lower bound. | [
"['Liam Madden' 'Curtis Fox' 'Christos Thrampoulidis']"
] |
null | null | 2405.13721 | null | null | http://arxiv.org/pdf/2405.13721v1 | 2024-05-22T15:12:14Z | 2024-05-22T15:12:14Z | Connectivity Shapes Implicit Regularization in Matrix Factorization
Models for Matrix Completion | Matrix factorization models have been extensively studied as a valuable test-bed for understanding the implicit biases of overparameterized models. Although both low nuclear norm and low rank regularization have been studied for these models, a unified understanding of when, how, and why they achieve different implicit regularization effects remains elusive. In this work, we systematically investigate the implicit regularization of matrix factorization for solving matrix completion problems. We empirically discover that the connectivity of observed data plays a crucial role in the implicit bias, with a transition from low nuclear norm to low rank as data shifts from disconnected to connected with increased observations. We identify a hierarchy of intrinsic invariant manifolds in the loss landscape that guide the training trajectory to evolve from low-rank to higher-rank solutions. Based on this finding, we theoretically characterize the training trajectory as following the hierarchical invariant manifold traversal process, generalizing the characterization of Li et al. (2020) to include the disconnected case. Furthermore, we establish conditions that guarantee minimum nuclear norm, closely aligning with our experimental findings, and we provide a dynamics characterization condition for ensuring minimum rank. Our work reveals the intricate interplay between data connectivity, training dynamics, and implicit regularization in matrix factorization models. | [
"['Zhiwei Bai' 'Jiajie Zhao' 'Yaoyu Zhang']"
] |
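The phenomenon studied in 2405.13721 above can be probed with a short experiment: gradient descent on an overparameterized factorization for matrix completion, tracking the nuclear norm and effective rank of the result. Varying the density of `mask` changes whether the observation pattern is connected, which per the abstract shifts the implicit bias; the sizes, learning rate, and initialization scale below are our own choices.

```python
# Gradient descent on X = U V^T for matrix completion; with small
# initialization, the trajectory is biased toward low complexity.
import numpy as np

rng = np.random.default_rng(0)
n, lr = 10, 0.01
M = rng.normal(size=(n, 1)) @ rng.normal(size=(1, n))  # rank-1 ground truth
mask = rng.random((n, n)) < 0.3                        # observed entries

U = 1e-3 * rng.normal(size=(n, n))                     # overparameterized and
V = 1e-3 * rng.normal(size=(n, n))                     # small initialization
for step in range(20000):
    R = (U @ V.T - M) * mask                           # residual on observations
    U, V = U - lr * R @ V, V - lr * R.T @ U            # full-batch GD

s = np.linalg.svd(U @ V.T, compute_uv=False)
print("nuclear norm:", round(s.sum(), 3),
      "effective rank:", int((s > 1e-3 * s[0]).sum()))
```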
null | null | 2405.13726 | null | null | http://arxiv.org/pdf/2405.13726v1 | 2024-05-22T15:20:27Z | 2024-05-22T15:20:27Z | Score-based Generative Models with Adaptive Momentum | Score-based generative models have demonstrated significant practical success in data-generating tasks. The models establish a diffusion process that perturbs the ground truth data to Gaussian noise and then learn the reverse process to transform noise into data. However, existing denoising methods such as Langevin dynamics and numerical stochastic differential equation solvers enjoy randomness but generate data slowly, requiring a large number of score function evaluations, while ordinary differential equation solvers enjoy faster sampling but lack the randomness that can benefit sample quality. To this end, motivated by Stochastic Gradient Descent (SGD) optimization methods and the close connection between the model sampling process and SGD, we propose adaptive momentum sampling to accelerate the transformation process without introducing additional hyperparameters. Theoretically, we prove that our method converges under given conditions. In addition, we empirically show that our sampler can produce more faithful images/graphs in a small number of sampling steps with a 2 to 5 times speed-up and obtain competitive scores compared to the baselines on image and graph generation tasks. | [
"['Ziqing Wen' 'Xiaoge Deng' 'Ping Luo' 'Tao Sun' 'Dongsheng Li']"
] |
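The paper's exact adaptive-momentum update is not reproduced here; as a purely illustrative sketch of the idea in 2405.13726 above, the following grafts an SGD-style heavy-ball buffer onto annealed Langevin sampling for a 1-D Gaussian target with a closed-form score.

```python
# Illustrative only: heavy-ball momentum added to annealed Langevin sampling;
# the paper's actual adaptive-momentum rule may differ.
import numpy as np

rng = np.random.default_rng(0)

def score_fn(x, s):
    # exact score of N(5, s^2 + 1): stands in for a learned score network
    return (5.0 - x) / (s**2 + 1.0)

x = 10.0 * rng.normal(size=1000)          # start from wide noise
v = np.zeros_like(x)                      # momentum buffer, as in SGD
beta = 0.8
for s in np.linspace(10.0, 0.01, 200):    # annealed noise levels
    eps = 0.05 * s**2                     # step size shrinks with the noise
    v = beta * v + eps * score_fn(x, s)   # accumulate momentum across steps
    x = x + v + np.sqrt(2.0 * eps) * rng.normal(size=x.shape)

print(round(x.mean(), 2), round(x.std(), 2))  # concentrates near the mode at 5
```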
null | null | 2405.13729 | null | null | http://arxiv.org/pdf/2405.13729v2 | 2024-05-24T07:05:59Z | 2024-05-22T15:23:10Z | ComboStoc: Combinatorial Stochasticity for Diffusion Generative Models | In this paper, we study an under-explored but important factor of diffusion generative models, i.e., the combinatorial complexity. Data samples are generally high-dimensional, and for various structured generation tasks, there are additional attributes that are combined and associated with data samples. We show that the space spanned by the combination of dimensions and attributes is insufficiently sampled by the existing training schemes of diffusion generative models, causing degraded test-time performance. We present a simple fix to this problem by constructing stochastic processes that fully exploit the combinatorial structures, hence the name ComboStoc. Using this simple strategy, we show that network training is significantly accelerated across diverse data modalities, including images and 3D structured shapes. Moreover, ComboStoc enables a new way of test-time generation which uses unsynchronized time steps for different dimensions and attributes, thus allowing for varying degrees of control over them. | [
"['Rui Xu' 'Jiepeng Wang' 'Hao Pan' 'Yang Liu' 'Xin Tong' 'Shiqing Xin'\n 'Changhe Tu' 'Taku Komura' 'Wenping Wang']"
] |
null | null | 2405.13731 | null | null | http://arxiv.org/pdf/2405.13731v1 | 2024-05-22T15:24:48Z | 2024-05-22T15:24:48Z | Control, Transport and Sampling: Towards Better Loss Design | Leveraging connections between diffusion-based sampling, optimal transport, and optimal stochastic control through their shared links to the Schrödinger bridge problem, we propose novel objective functions that can be used to transport $\nu$ to $\mu$, and consequently sample from the target $\mu$, via optimally controlled dynamics. We highlight the importance of the pathwise perspective and the role various optimality conditions on the path measure can play in the design of valid training losses, the careful choice of which offers numerical advantages in practical implementation. | [
"['Qijia Jiang' 'David Nabergoj']"
] |
null | null | 2405.13735 | null | null | http://arxiv.org/pdf/2405.13735v2 | 2024-05-24T19:29:48Z | 2024-05-22T15:28:43Z | Transfer of Safety Controllers Through Learning Deep Inverse Dynamics
Model | Control barrier certificates have proven effective in formally guaranteeing the safety of the control systems. However, designing a control barrier certificate is a time-consuming and computationally expensive endeavor that requires expert input in the form of domain knowledge and mathematical maturity. Additionally, when a system undergoes slight changes, the new controller and its correctness certificate need to be recomputed, incurring similar computational challenges as those faced during the design of the original controller. Prior approaches have utilized transfer learning to transfer safety guarantees in the form of a barrier certificate while maintaining the control invariant. Unfortunately, in practical settings, the source and the target environments often deviate substantially in their control inputs, rendering the aforementioned approach impractical. To address this challenge, we propose integrating \emph{inverse dynamics} -- a neural network that suggests required action given a desired successor state -- of the target system with the barrier certificate of the source system to provide formal proof of safety. In addition, we propose a validity condition that, when met, guarantees correctness of the controller. We demonstrate the effectiveness of our approach through three case studies. | [
"['Alireza Nadali' 'Ashutosh Trivedi' 'Majid Zamani']"
] |
null | null | 2405.13738 | null | null | http://arxiv.org/pdf/2405.13738v1 | 2024-05-22T15:29:45Z | 2024-05-22T15:29:45Z | Memory capacity of three-layer neural networks with non-polynomial
activations | The minimal number of neurons required for a feedforward neural network to interpolate $n$ generic input-output pairs from $\mathbb{R}^d \times \mathbb{R}$ is $\Theta(\sqrt{n})$. While previous results have shown that $\Theta(\sqrt{n})$ neurons are sufficient, they have been limited to logistic, Heaviside, and rectified linear unit (ReLU) as the activation function. Using a different approach, we prove that $\Theta(\sqrt{n})$ neurons are sufficient as long as the activation function is real analytic at a point and not a polynomial there. Thus, the only practical activation functions that our result does not apply to are piecewise polynomials. Importantly, this means that activation functions can be freely chosen in a problem-dependent manner without loss of interpolation power. | [
"['Liam Madden']"
] |
null | null | 2405.13740 | null | null | http://arxiv.org/pdf/2405.13740v1 | 2024-05-22T15:31:09Z | 2024-05-22T15:31:09Z | Mining Action Rules for Defect Reduction Planning | Defect reduction planning plays a vital role in enhancing software quality and minimizing software maintenance costs. By training a black-box machine learning model and "explaining" its predictions, explainable AI for software engineering aims to identify the code characteristics that impact maintenance risks. However, post-hoc explanations do not always faithfully reflect what the original model computes. In this paper, we introduce CounterACT, a Counterfactual ACTion rule mining approach that can generate defect reduction plans without black-box models. By leveraging action rules, CounterACT provides a course of action that can be considered as a counterfactual explanation for the class (e.g., buggy or not buggy) assigned to a piece of code. We compare the effectiveness of CounterACT with the original action rule mining algorithm and six established defect reduction approaches on 9 software projects. Our evaluation is based on (a) overlap scores between proposed code changes and actual developer modifications; (b) improvement scores in future releases; and (c) the precision, recall, and F1-score of the plans. Our results show that, compared to competing approaches, CounterACT's explainable plans achieve higher overlap scores at the release level (median 95%) and commit level (median 85.97%), and they offer a better trade-off between precision and recall (median F1-score 88.12%). Finally, we venture beyond planning and explore leveraging Large Language Models (LLMs) for generating code edits from our generated plans. Our results show that suggested LLM code edits supported by our plans are actionable and are more likely to pass relevant test cases than vanilla LLM code recommendations. | [
"['Khouloud Oueslati' 'Gabriel Laberge' 'Maxime Lamothe' 'Foutse Khomh']"
] |
null | null | 2405.13746 | null | null | http://arxiv.org/pdf/2405.13746v2 | 2024-05-24T03:17:41Z | 2024-05-22T15:32:38Z | CG-FedLLM: How to Compress Gradients in Federated Fine-tuning for Large
Language Models | The success of current Large Language Models (LLMs) hinges on extensive training data that is collected and stored centrally, called Centralized Learning (CL). However, such a collection manner poses a privacy threat, and one potential solution is Federated Learning (FL), which transfers gradients, not raw data, among clients. Unlike traditional networks, FL for LLMs incurs significant communication costs due to their tremendous parameters. This study introduces an innovative approach to compress gradients to improve communication efficiency during LLM FL, formulating the new FL pipeline named CG-FedLLM. This approach integrates an encoder on the client side to acquire the compressed gradient features and a decoder on the server side to reconstruct the gradients. We also developed a novel training strategy that comprises Temporal-ensemble Gradient-Aware Pre-training (TGAP) to identify characteristic gradients of the target model and Federated AutoEncoder-Involved Fine-tuning (FAF) to compress gradients adaptively. Extensive experiments confirm that our approach reduces communication costs and improves performance (e.g., an average 3-point improvement compared with traditional CL- and FL-based fine-tuning with LLaMA on a well-recognized benchmark, C-Eval). This improvement arises because our encoder-decoder, trained via TGAP and FAF, can filter gradients while selectively preserving critical features. Furthermore, we present a series of experimental analyses focusing on the signal-to-noise ratio, compression rate, and robustness within this privacy-centric framework, providing insight into developing more efficient and secure LLMs. | [
"['Huiwen Wu' 'Xiaohan Li' 'Deyi Zhang' 'Xiaogang Xu' 'Jiafei Wu'\n 'Puning Zhao' 'Zhe Liu']"
] |
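The communication pattern in 2405.13746 above reduces to a simple round trip, sketched below with an untrained linear autoencoder (in the paper, the encoder and decoder are trained via TGAP and FAF); all dimensions and names are illustrative.

```python
# Round trip: clients encode gradients, the server decodes before aggregating.
import torch

torch.manual_seed(0)
grad_dim, code_dim, num_clients = 4096, 256, 8

encoder = torch.nn.Linear(grad_dim, code_dim)   # client side (16x compression)
decoder = torch.nn.Linear(code_dim, grad_dim)   # server side

client_grads = [torch.randn(grad_dim) for _ in range(num_clients)]
with torch.no_grad():
    codes = [encoder(g) for g in client_grads]          # sent over the network
    update = torch.stack([decoder(c) for c in codes]).mean(dim=0)  # aggregate
print(update.shape)   # torch.Size([4096])
```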
null | null | 2405.13753 | null | null | http://arxiv.org/pdf/2405.13753v2 | 2024-06-06T09:27:11Z | 2024-05-22T15:38:30Z | A Dynamic Model of Performative Human-ML Collaboration: Theory and
Empirical Evidence | Machine learning (ML) models are increasingly used in various applications, from recommendation systems in e-commerce to diagnosis prediction in healthcare. In this paper, we present a novel dynamic framework for thinking about the deployment of ML models in a performative, human-ML collaborative system. In our framework, the introduction of ML recommendations changes the data generating process of human decisions, which are only a proxy to the ground truth and which are then used to train future versions of the model. We show that this dynamic process in principle can converge to different stable points, i.e. where the ML model and the Human+ML system have the same performance. Some of these stable points are suboptimal with respect to the actual ground truth. We conduct an empirical user study with 1,408 participants to showcase this process. In the study, humans solve instances of the knapsack problem with the help of machine learning predictions. This is an ideal setting because we can see how ML models learn to imitate human decisions and how this learning process converges to a stable point. We find that for many levels of ML performance, humans can improve the ML predictions to dynamically reach an equilibrium performance that is around 92% of the maximum knapsack value. We also find that the equilibrium performance could be even higher if humans rationally followed the ML recommendations. Finally, we test whether monetary incentives can increase the quality of human decisions, but we fail to find any positive effect. Our results have practical implications for the deployment of ML models in contexts where human decisions may deviate from the indisputable ground truth. | [
"['Tom Sühr' 'Samira Samadi' 'Chiara Farronato']"
] |
null | null | 2405.13755 | null | null | http://arxiv.org/pdf/2405.13755v1 | 2024-05-22T15:39:05Z | 2024-05-22T15:39:05Z | Offline RL via Feature-Occupancy Gradient Ascent | We study offline Reinforcement Learning in large infinite-horizon discounted Markov Decision Processes (MDPs) when the reward and transition models are linearly realizable under a known feature map. Starting from the classic linear-program formulation of the optimal control problem in MDPs, we develop a new algorithm that performs a form of gradient ascent in the space of feature occupancies, defined as the expected feature vectors that can potentially be generated by executing policies in the environment. We show that the resulting simple algorithm satisfies strong computational and sample complexity guarantees, achieved under the least restrictive data coverage assumptions known in the literature. In particular, we show that the sample complexity of our method scales optimally with the desired accuracy level and depends on a weak notion of coverage that only requires the empirical feature covariance matrix to cover a single direction in the feature space (as opposed to covering a full subspace). Additionally, our method is easy to implement and requires no prior knowledge of the coverage ratio (or even an upper bound on it), which altogether make it the strongest known algorithm for this setting to date. | [
"['Gergely Neu' 'Nneka Okolo']"
] |
null | null | 2405.13757 | null | null | http://arxiv.org/pdf/2405.13757v1 | 2024-05-22T15:39:31Z | 2024-05-22T15:39:31Z | A label-free and data-free training strategy for vasculature
segmentation in serial sectioning OCT data | Serial sectioning Optical Coherence Tomography (sOCT) is a high-throughput, label-free microscopic imaging technique that is becoming increasingly popular to study post-mortem neurovasculature. Quantitative analysis of the vasculature requires highly accurate segmentation; however, sOCT has a low signal-to-noise ratio and displays a wide range of contrasts and artifacts that depend on acquisition parameters. Furthermore, labeled data is scarce and extremely time-consuming to generate. Here, we leverage synthetic datasets of vessels to train a deep learning segmentation model. We construct the vessels with semi-realistic splines that simulate the vascular geometry and compare our model with realistic vascular labels generated by constrained constructive optimization. Both approaches yield similar Dice scores, although with very different false positive and false negative rates. This method addresses the complexity inherent in OCT images and paves the way for more accurate and efficient analysis of neurovascular structures. | [
"['Etienne Chollet' 'Yael Balbastre' 'Caroline Magnain' 'Bruce Fischl'\n 'Hui Wang']"
] |
null | null | 2405.13758 | null | null | http://arxiv.org/pdf/2405.13758v1 | 2024-05-22T15:39:54Z | 2024-05-22T15:39:54Z | Counterfactual Gradients-based Quantification of Prediction Trust in
Neural Networks | The widespread adoption of deep neural networks in machine learning calls for an objective quantification of esoteric trust. In this paper we propose GradTrust, a classification trust measure for large-scale neural networks at inference. The proposed method utilizes variance of counterfactual gradients, i.e. the required changes in the network parameters if the label were different. We show that GradTrust is superior to existing techniques for detecting misprediction rates on $50000$ images from ImageNet validation dataset. Depending on the network, GradTrust detects images where either the ground truth is incorrect or ambiguous, or the classes are co-occurring. We extend GradTrust to Video Action Recognition on Kinetics-400 dataset. We showcase results on $14$ architectures pretrained on ImageNet and $5$ architectures pretrained on Kinetics-400. We observe the following: (i) simple methodologies like negative log likelihood and margin classifiers outperform state-of-the-art uncertainty and out-of-distribution detection techniques for misprediction rates, and (ii) the proposed GradTrust is in the Top-2 performing methods on $37$ of the considered $38$ experimental modalities. The code is available at: https://github.com/olivesgatech/GradTrust | [
"['Mohit Prabhushankar' 'Ghassan AlRegib']"
] |
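One plausible reading of the GradTrust computation in 2405.13758 above: for each candidate label, compute the gradient the network would receive if that label were the truth, then summarize the spread of these counterfactual gradients. The restriction to the last layer and the variance summary are assumptions made for illustration, not the paper's exact recipe.

```python
# Counterfactual gradients: one backward pass per candidate label.
import torch

torch.manual_seed(0)
model = torch.nn.Sequential(torch.nn.Linear(32, 64), torch.nn.ReLU(),
                            torch.nn.Linear(64, 10))
x = torch.randn(1, 32)                        # a single test input
last_layer_weight = model[-1].weight

counterfactual_grads = []
for label in range(10):                       # sweep over all possible labels
    loss = torch.nn.functional.cross_entropy(model(x), torch.tensor([label]))
    (g,) = torch.autograd.grad(loss, last_layer_weight)
    counterfactual_grads.append(g.flatten())

G = torch.stack(counterfactual_grads)         # (num_labels, num_params)
score = G.var(dim=0).mean().item()            # spread of counterfactual grads
print(f"GradTrust-style score: {score:.6f}")
```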
null | null | 2405.13759 | null | null | http://arxiv.org/pdf/2405.13759v1 | 2024-05-22T15:40:05Z | 2024-05-22T15:40:05Z | Enhancing Multiscale Simulations with Constitutive Relations-Aware Deep
Operator Networks | Multiscale problems are widely observed across diverse domains in physics and engineering. Translating these problems into numerical simulations and solving them using numerical schemes, e.g. the finite element method, is costly due to the demand of solving initial boundary-value problems at multiple scales. On the other hand, multiscale finite element computations are commended for their ability to integrate micro-structural properties into macroscopic computational analyses using homogenization techniques. Recently, neural operator-based surrogate models have shown trustworthy performance for solving a wide range of partial differential equations. In this work, we propose a hybrid method in which we utilize deep operator networks for surrogate modeling of the microscale physics. This allows us to embed the constitutive relations of the microscale into the model architecture and to predict microscale strains and stresses based on the prescribed macroscale strain inputs. Furthermore, numerical homogenization is carried out to obtain the macroscale quantities of interest. We apply the proposed approach to quasi-static problems of solid mechanics. The results demonstrate that our constitutive relations-aware DeepONet can yield accurate solutions even when being confronted with a restricted dataset during model development. | [
"['Hamidreza Eivazi' 'Mahyar Alikhani' 'Jendrik-Alexander Tröger'\n 'Stefan Wittek' 'Stefan Hartmann' 'Andreas Rausch']"
] |
null | null | 2405.13762 | null | null | http://arxiv.org/pdf/2405.13762v1 | 2024-05-22T15:47:14Z | 2024-05-22T15:47:14Z | A Versatile Diffusion Transformer with Mixture of Noise Levels for
Audiovisual Generation | Training diffusion models for audiovisual sequences allows for a range of generation tasks by learning conditional distributions of various input-output combinations of the two modalities. Nevertheless, this strategy often requires training a separate model for each task, which is expensive. Here, we propose a novel training approach to effectively learn arbitrary conditional distributions in the audiovisual space. Our key contribution lies in how we parameterize the diffusion timestep in the forward diffusion process. Instead of the standard fixed diffusion timestep, we propose applying variable diffusion timesteps across the temporal dimension and across modalities of the inputs. This formulation offers flexibility to introduce variable noise levels for various portions of the input, hence the term mixture of noise levels. We propose a transformer-based audiovisual latent diffusion model and show that it can be trained in a task-agnostic fashion using our approach to enable a variety of audiovisual generation tasks at inference time. Experiments demonstrate the versatility of our method in tackling cross-modal and multimodal interpolation tasks in the audiovisual space. Notably, our proposed approach surpasses baselines in generating temporally and perceptually consistent samples conditioned on the input. Project page: avdit2024.github.io | [
"['Gwanghyun Kim' 'Alonso Martinez' 'Yu-Chuan Su' 'Brendan Jou'\n 'José Lezama' 'Agrim Gupta' 'Lijun Yu' 'Lu Jiang' 'Aren Jansen'\n 'Jacob Walker' 'Krishna Somandepalli']"
] |
null | null | 2405.13763 | null | null | http://arxiv.org/pdf/2405.13763v1 | 2024-05-22T15:47:35Z | 2024-05-22T15:47:35Z | Banded Square Root Matrix Factorization for Differentially Private Model
Training | Current state-of-the-art methods for differentially private model training are based on matrix factorization techniques. However, these methods suffer from high computational overhead because they require numerically solving a demanding optimization problem to determine an approximately optimal factorization prior to the actual model training. In this work, we present a new matrix factorization approach, BSR, which overcomes this computational bottleneck. By exploiting properties of the standard matrix square root, BSR can efficiently handle even large-scale problems. For the key scenario of stochastic gradient descent with momentum and weight decay, we even derive analytical expressions for BSR that render the computational overhead negligible. We prove bounds on the approximation quality that hold both in the centralized and in the federated learning setting. Our numerical experiments demonstrate that models trained using BSR perform on par with the best existing methods, while completely avoiding their computational overhead. | [
"['Nikita Kalinin' 'Christoph Lampert']"
] |
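Some of the structure behind 2405.13763 above is easy to exhibit for the plain prefix-sum workload (no momentum or weight decay): the square root of the lower-triangular all-ones matrix is lower-triangular Toeplitz with central-binomial coefficients, and banding it yields a cheap approximate factorization. This is a simplified sketch of our own; the paper's BSR construction is more general.

```python
# Square root of the prefix-sum workload matrix, then banded truncation.
import numpy as np

n, band = 8, 4
A = np.tril(np.ones((n, n)))            # prefix-sum workload: A @ x = cumsum(x)

# Coefficients of (1 - t)^{-1/2}: c_0 = 1, c_k = c_{k-1} * (2k - 1) / (2k),
# so S = sum_k c_k * (k-th subdiagonal shift) satisfies S @ S == A exactly.
c = np.ones(n)
for k in range(1, n):
    c[k] = c[k - 1] * (2 * k - 1) / (2 * k)
S = sum(c[k] * np.eye(n, k=-k) for k in range(n))

S_banded = np.triu(S, -(band - 1))      # keep only `band` diagonals of the root
print("exact root:", np.allclose(S @ S, A))
print("banded approximation error:", np.abs(S_banded @ S_banded - A).max())
```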
null | null | 2405.13765 | null | null | http://arxiv.org/pdf/2405.13765v1 | 2024-05-22T15:50:40Z | 2024-05-22T15:50:40Z | On the stability of second order gradient descent for time varying
convex functions | Gradient-based optimization algorithms deployed in Machine Learning (ML) applications are often analyzed and compared by their convergence rates or regret bounds. While these rates and bounds convey valuable information, they do not always directly translate to stability guarantees. Stability and similar concepts, like robustness, will become ever more important as we move towards deploying models in real-time and safety-critical systems. In this work we build upon the results in Gaudio et al. 2021 and Moreu and Annaswamy 2022 for second-order gradient descent when applied to explicitly time-varying cost functions and provide more general stability guarantees. These more general results can aid in the design and certification of these optimization schemes so as to help ensure safe and reliable deployment for real-time learning applications. We also hope that the techniques provided here will stimulate and cross-fertilize the analysis that occurs on the same algorithms from the online learning and stochastic optimization communities. | [
"['Travis E. Gibson' 'Sawal Acharya' 'Anjali Parashar' 'Joseph E. Gaudio'\n 'Anurdha M. Annaswamy']"
] |
null | null | 2405.13771 | null | null | http://arxiv.org/pdf/2405.13771v1 | 2024-05-22T15:57:44Z | 2024-05-22T15:57:44Z | Multi-Dataset Multi-Task Learning for COVID-19 Prognosis | In the fight against the COVID-19 pandemic, leveraging artificial intelligence to predict disease outcomes from chest radiographic images represents a significant scientific aim. The challenge, however, lies in the scarcity of large, labeled datasets with compatible tasks for training deep learning models without leading to overfitting. Addressing this issue, we introduce a novel multi-dataset multi-task training framework that predicts COVID-19 prognostic outcomes from chest X-rays (CXR) by integrating correlated datasets from disparate sources, distant from conventional multi-task learning approaches, which rely on datasets with multiple and correlated labeling schemes. Our framework hypothesizes that assessing severity scores enhances the model's ability to classify prognostic severity groups, thereby improving its robustness and predictive power. The proposed architecture comprises a deep convolutional network that receives inputs from two publicly available CXR datasets, AIforCOVID for severity prognostic prediction and BRIXIA for severity score assessment, and branches into task-specific fully connected output networks. Moreover, we propose a multi-task loss function, incorporating an indicator function, to exploit multi-dataset integration. The effectiveness and robustness of the proposed approach are demonstrated through significant performance improvements in prognosis classification tasks across 18 different convolutional neural network backbones in different evaluation strategies. This improvement is evident over single-task baselines and standard transfer learning strategies, supported by extensive statistical analysis, showing great application potential. | [
"['Filippo Ruffini' 'Lorenzo Tronchin' 'Zhuoru Wu' 'Wenting Chen'\n 'Paolo Soda' 'Linlin Shen' 'Valerio Guarrasi']"
] |
null | null | 2405.13779 | null | null | http://arxiv.org/pdf/2405.13779v1 | 2024-05-22T16:07:05Z | 2024-05-22T16:07:05Z | Robust Disaster Assessment from Aerial Imagery Using Text-to-Image
Synthetic Data | We present a simple and efficient method to leverage emerging text-to-image generative models in creating large-scale synthetic supervision for the task of damage assessment from aerial images. While significant recent advances have resulted in improved techniques for damage assessment using aerial or satellite imagery, they still suffer from poor robustness to domains where manual labeled data is unavailable, directly impacting post-disaster humanitarian assistance in such under-resourced geographies. Our contribution towards improving domain robustness in this scenario is two-fold. Firstly, we leverage the text-guided mask-based image editing capabilities of generative models and build an efficient and easily scalable pipeline to generate thousands of post-disaster images from low-resource domains. Secondly, we propose a simple two-stage training approach to train robust models while using manual supervision from different source domains along with the generated synthetic target domain data. We validate the strength of our proposed framework under cross-geography domain transfer setting from xBD and SKAI images in both single-source and multi-source settings, achieving significant improvements over a source-only baseline in each case. | [
"['Tarun Kalluri' 'Jihyeon Lee' 'Kihyuk Sohn' 'Sahil Singla'\n 'Manmohan Chandraker' 'Joseph Xu' 'Jeremiah Liu']"
] |
null | null | 2405.13785 | null | null | http://arxiv.org/pdf/2405.13785v1 | 2024-05-22T16:11:29Z | 2024-05-22T16:11:29Z | Efficient Two-Stage Gaussian Process Regression Via Automatic Kernel
Search and Subsampling | Gaussian Process Regression (GPR) is widely used in statistics and machine learning for prediction tasks requiring uncertainty measures. Its efficacy depends on the appropriate specification of the mean function, covariance kernel function, and associated hyperparameters. Severe misspecifications can lead to inaccurate results and problematic consequences, especially in safety-critical applications. However, a systematic approach to handle these misspecifications is lacking in the literature. In this work, we propose a general framework to address these issues. Firstly, we introduce a flexible two-stage GPR framework that separates mean prediction and uncertainty quantification (UQ) to prevent mean misspecification, which can introduce bias into the model. Secondly, kernel function misspecification is addressed through a novel automatic kernel search algorithm, supported by theoretical analysis, that selects the optimal kernel from a candidate set. Additionally, we propose a subsampling-based warm-start strategy for hyperparameter initialization to improve efficiency and avoid hyperparameter misspecification. With much lower computational cost, our subsampling-based strategy can yield competitive or better performance than training exclusively on the full dataset. Combining all these components, we recommend two GPR methods, exact and scalable, designed to match available computational resources and specific UQ requirements. Extensive evaluation on real-world datasets, including UCI benchmarks and a safety-critical medical case study, demonstrates the robustness and precision of our methods. | [
"['Shifan Zhao' 'Jiaying Lu' 'Ji Yang' 'Edmond Chow' 'Yuanzhe Xi']"
] |
null | null | 2405.13787 | null | null | http://arxiv.org/pdf/2405.13787v1 | 2024-05-22T16:12:28Z | 2024-05-22T16:12:28Z | Disentangle Sample Size and Initialization Effect on Perfect
Generalization for Single-Neuron Target | Overparameterized models like deep neural networks have the intriguing ability to recover target functions with fewer sampled data points than parameters (see arXiv:2307.08921). To gain insights into this phenomenon, we concentrate on a single-neuron target recovery scenario, offering a systematic examination of how initialization and sample size influence the performance of two-layer neural networks. Our experiments reveal that a smaller initialization scale is associated with improved generalization, and we identify a critical quantity called the "initial imbalance ratio" that governs training dynamics and generalization under small initialization, supported by theoretical proofs. Additionally, we empirically delineate two critical thresholds in sample size--termed the "optimistic sample size" and the "separation sample size"--that align with the theoretical frameworks established by (see arXiv:2307.08921 and arXiv:2309.00508). Our results indicate a transition in the model's ability to recover the target function: below the optimistic sample size, recovery is unattainable; at the optimistic sample size, recovery becomes attainable albeit with a set of initialization of zero measure. Upon reaching the separation sample size, the set of initialization that can successfully recover the target function shifts from zero to positive measure. These insights, derived from a simplified context, provide a perspective on the intricate yet decipherable complexities of perfect generalization in overparameterized neural networks. | [
"['Jiajie Zhao' 'Zhiwei Bai' 'Yaoyu Zhang']"
] |
null | null | 2405.13791 | null | null | http://arxiv.org/pdf/2405.13791v1 | 2024-05-22T16:14:37Z | 2024-05-22T16:14:37Z | Multi-Type Point Cloud Autoencoder: A Complete Equivariant Embedding for
Molecule Conformation and Pose | The point cloud is a flexible representation for a wide variety of data types, and is a particularly natural fit for the 3D conformations of molecules. Extant molecule embedding/representation schemes typically focus on internal degrees of freedom, ignoring the global 3D orientation. For tasks that depend on knowledge of both molecular conformation and 3D orientation, such as the generation of molecular dimers, clusters, or condensed phases, we require a representation which is provably complete in the types and positions of atomic nuclei and roto-inversion equivariant with respect to the input point cloud. We develop, train, and evaluate a new type of autoencoder, molecular O(3) encoding net (Mo3ENet), for multi-type point clouds, for which we propose a new reconstruction loss, capitalizing on a Gaussian mixture representation of the input and output point clouds. Mo3ENet is end-to-end equivariant, meaning the learned representation can be manipulated on O(3), a practical bonus for downstream learning tasks. An appropriately trained Mo3ENet latent space comprises a universal embedding for scalar and vector molecule property prediction tasks, as well as other downstream tasks incorporating the 3D molecular pose. | [
"['Michael Kilgour' 'Jutta Rogal' 'Mark Tuckerman']"
] |
null | null | 2405.13794 | null | null | http://arxiv.org/pdf/2405.13794v1 | 2024-05-22T16:17:03Z | 2024-05-22T16:17:03Z | Conditioning diffusion models by explicit forward-backward bridging | Given an unconditional diffusion model $\pi(x, y)$, using it to perform conditional simulation $\pi(x \mid y)$ is still largely an open question and is typically achieved by learning conditional drifts to the denoising SDE after the fact. In this work, we express conditional simulation as an inference problem on an augmented space corresponding to a partial SDE bridge. This perspective allows us to implement efficient and principled particle Gibbs and pseudo-marginal samplers marginally targeting the conditional distribution $\pi(x \mid y)$. Contrary to existing methodology, our methods do not introduce any additional approximation to the unconditional diffusion model aside from the Monte Carlo error. We showcase the benefits and drawbacks of our approach on a series of synthetic and real data examples. | [
"['Adrien Corenflos' 'Zheng Zhao' 'Simo Särkkä' 'Jens Sjölund'\n 'Thomas B. Schön']"
] |
null | null | 2405.13796 | null | null | http://arxiv.org/pdf/2405.13796v3 | 2024-05-29T11:02:48Z | 2024-05-22T16:21:02Z | Generalizing Weather Forecast to Fine-grained Temporal Scales via
Physics-AI Hybrid Modeling | Data-driven artificial intelligence (AI) models have made significant advancements in weather forecasting, particularly in medium-range and nowcasting. However, most data-driven weather forecasting models are black-box systems that focus on learning data mapping rather than fine-grained physical evolution in the time dimension. Consequently, the limitations in the temporal scale of datasets prevent these models from forecasting at finer time scales. This paper proposes a physics-AI hybrid model (i.e., WeatherGFT) which Generalizes weather forecasts to Finer-grained Temporal scales beyond the training dataset. Specifically, we employ a carefully designed PDE kernel to simulate physical evolution on a small time scale (e.g., 300 seconds) and use parallel neural networks with a learnable router for bias correction. Furthermore, we introduce a lead-time-aware training framework to promote the generalization of the model at different lead times. The weight analysis of physics-AI modules indicates that physics conducts the major evolution while AI performs corrections adaptively. Extensive experiments show that WeatherGFT, trained on an hourly dataset, achieves state-of-the-art performance across multiple lead times and exhibits the capability to generalize to 30-minute forecasts. | [
"['Wanghan Xu' 'Fenghua Ling' 'Wenlong Zhang' 'Tao Han' 'Hao Chen'\n 'Wanli Ouyang' 'Lei Bai']"
] |
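A cartoon of the hybrid step in 2405.13796 above: a PDE kernel advances the state over a small time step and a learnable router gates a neural correction. The 1-D upwind advection kernel and all sizes are our own toy choices, not the paper's atmospheric physics.

```python
# Physics step (toy 1-D upwind advection) plus router-gated neural correction.
import torch

torch.manual_seed(0)
nx, dt, dx, vel = 64, 0.1, 1.0, 1.0

def pde_kernel(u):
    # first-order upwind step for du/dt + vel * du/dx = 0 (CFL = 0.1, stable)
    return u - vel * dt / dx * (u - torch.roll(u, 1))

correction = torch.nn.Sequential(torch.nn.Linear(nx, nx), torch.nn.Tanh(),
                                 torch.nn.Linear(nx, nx))
router = torch.nn.Sequential(torch.nn.Linear(nx, nx), torch.nn.Sigmoid())

def hybrid_step(u):
    u_phys = pde_kernel(u)                # physics carries the evolution
    gate = router(u)                      # learned per-node weight in (0, 1)
    return u_phys + gate * correction(u)  # AI adds an adaptive bias correction

u = torch.sin(torch.linspace(0.0, 6.28, nx))
with torch.no_grad():
    for _ in range(30):                   # many small steps => lead times finer
        u = hybrid_step(u)                # than the training data's resolution
print(u.shape)
```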
null | null | 2405.13805 | null | null | http://arxiv.org/pdf/2405.13805v1 | 2024-05-22T16:32:20Z | 2024-05-22T16:32:20Z | Perceptual Fairness in Image Restoration | Fairness in image restoration tasks is the desire to treat different sub-groups of images equally well. Existing definitions of fairness in image restoration are highly restrictive. They consider a reconstruction to be a correct outcome for a group (e.g., women) only if it falls within the group's set of ground truth images (e.g., natural images of women); otherwise, it is considered entirely incorrect. Consequently, such definitions are prone to controversy, as errors in image restoration can manifest in various ways. In this work we offer an alternative approach towards fairness in image restoration, by considering the Group Perceptual Index (GPI), which we define as the statistical distance between the distribution of the group's ground truth images and the distribution of their reconstructions. We assess the fairness of an algorithm by comparing the GPI of different groups, and say that it achieves perfect Perceptual Fairness (PF) if the GPIs of all groups are identical. We motivate and theoretically study our new notion of fairness, draw its connection to previous ones, and demonstrate its utility on state-of-the-art face image super-resolution algorithms. | [
"['Guy Ohayon' 'Michael Elad' 'Tomer Michaeli']"
] |
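The GPI comparison in 2405.13805 above can be illustrated with a Fréchet-style distance between Gaussians fitted to stand-in features; the paper's actual choice of statistical distance and feature space may differ.

```python
# Group Perceptual Index via a Frechet distance between fitted Gaussians.
import numpy as np
from scipy import linalg

rng = np.random.default_rng(0)

def frechet_distance(X, Y):
    mu_x, mu_y = X.mean(axis=0), Y.mean(axis=0)
    C_x, C_y = np.cov(X, rowvar=False), np.cov(Y, rowvar=False)
    covmean = linalg.sqrtm(C_x @ C_y).real
    return float(((mu_x - mu_y) ** 2).sum() + np.trace(C_x + C_y - 2 * covmean))

for group, noise in [("group_A", 0.3), ("group_B", 0.6)]:
    truth = rng.normal(size=(500, 16))                    # ground-truth features
    recon = truth + noise * rng.normal(size=truth.shape)  # restored features
    print(group, "GPI:", round(frechet_distance(truth, recon), 3))
# Perfect Perceptual Fairness would mean equal GPIs across the groups;
# here group_B's noisier reconstructions yield a larger GPI.
```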
null | null | 2405.13806 | null | null | http://arxiv.org/pdf/2405.13806v1 | 2024-05-22T16:32:27Z | 2024-05-22T16:32:27Z | Advancing Graph Convolutional Networks via General Spectral Wavelets | Spectral graph convolution, an important tool of data filtering on graphs, relies on two essential decisions; selecting spectral bases for signal transformation and parameterizing the kernel for frequency analysis. While recent techniques mainly focus on standard Fourier transform and vector-valued spectral functions, they fall short in flexibility to describe specific signal distribution for each node, and expressivity of spectral function. In this paper, we present a novel wavelet-based graph convolution network, namely WaveGC, which integrates multi-resolution spectral bases and a matrix-valued filter kernel. Theoretically, we establish that WaveGC can effectively capture and decouple short-range and long-range information, providing superior filtering flexibility, surpassing existing graph convolutional networks and graph Transformers (GTs). To instantiate WaveGC, we introduce a novel technique for learning general graph wavelets by separately combining odd and even terms of Chebyshev polynomials. This approach strictly satisfies wavelet admissibility criteria. Our numerical experiments showcase the capabilities of the new network. By replacing the Transformer part in existing architectures with WaveGC, we consistently observe improvements in both short-range and long-range tasks. This underscores the effectiveness of the proposed model in handling different scenarios. Our code is available at https://github.com/liun-online/WaveGC. | [
"['Nian Liu' 'Xiaoxin He' 'Thomas Laurent' 'Francesco Di Giovanni'\n 'Michael M. Bronstein' 'Xavier Bresson']"
] |
null | null | 2405.13810 | null | null | http://arxiv.org/pdf/2405.13810v1 | 2024-05-22T16:41:21Z | 2024-05-22T16:41:21Z | Leveraging 2D Information for Long-term Time Series Forecasting with
Vanilla Transformers | Time series prediction is crucial for understanding and forecasting complex dynamics in various domains, ranging from finance and economics to climate and healthcare. Based on the Transformer architecture, one approach involves encoding multiple variables from the same timestamp into a single temporal token to model global dependencies. In contrast, another approach embeds the time points of individual series into separate variate tokens. The former method faces challenges in learning variate-centric representations, while the latter risks missing essential temporal information critical for accurate forecasting. In our work, we introduce GridTST, a model that combines the benefits of the two approaches using innovative multi-directional attentions based on a vanilla Transformer. We regard the input time series data as a grid, where the $x$-axis represents the time steps and the $y$-axis represents the variates. A vertical slicing of this grid combines the variates at each time step into a \textit{time token}, while a horizontal slicing embeds the individual series across all time steps into a \textit{variate token}. Correspondingly, a \textit{horizontal attention mechanism} focuses on time tokens to comprehend the correlations between data at various time steps, while a \textit{vertical}, variate-aware \textit{attention} is employed to grasp multivariate correlations. This combination enables efficient processing of information across both time and variate dimensions, thereby enhancing the model's analytical strength. We also integrate the patch technique, segmenting time tokens into subseries-level patches, ensuring that local semantic information is retained in the embedding. The GridTST model consistently delivers state-of-the-art performance across various real-world datasets. | [
"['Xin Cheng' 'Xiuying Chen' 'Shuqi Li' 'Di Luo' 'Xun Wang' 'Dongyan Zhao'\n 'Rui Yan']"
] |
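A skeleton of the grid view in 2405.13810 above, with one attention pass over time tokens and one over variate tokens; the embedding scheme and the ordering of the two passes are simplifications of ours.

```python
# Two-directional attention over the (time x variate) grid of one series.
import torch

T, V, d = 96, 7, 32                     # time steps, variates, model width
x = torch.randn(T, V)                   # one multivariate series as a grid

time_embed = torch.nn.Linear(V, d)      # vertical slice   -> "time token"
var_embed = torch.nn.Linear(T, d)       # horizontal slice -> "variate token"
attn_time = torch.nn.MultiheadAttention(d, num_heads=4, batch_first=True)
attn_var = torch.nn.MultiheadAttention(d, num_heads=4, batch_first=True)

time_tokens = time_embed(x).unsqueeze(0)     # (1, T, d): one token per step
var_tokens = var_embed(x.T).unsqueeze(0)     # (1, V, d): one token per series

h_time, _ = attn_time(time_tokens, time_tokens, time_tokens)  # across time
h_var, _ = attn_var(var_tokens, var_tokens, var_tokens)       # across variates
print(h_time.shape, h_var.shape)
```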
null | null | 2405.13812 | null | null | http://arxiv.org/pdf/2405.13812v1 | 2024-05-22T16:42:32Z | 2024-05-22T16:42:32Z | Interpretable Multivariate Time Series Forecasting Using Neural Fourier
Transform | Multivariate time series forecasting is a pivotal task in several domains, including financial planning, medical diagnostics, and climate science. This paper presents the Neural Fourier Transform (NFT) algorithm, which combines multi-dimensional Fourier transforms with Temporal Convolutional Network layers to improve both the accuracy and interpretability of forecasts. The Neural Fourier Transform is empirically validated on fourteen diverse datasets, showing superior performance across multiple forecasting horizons and lookbacks, setting new benchmarks in the field. This work advances multivariate time series forecasting by providing a model that is both interpretable and highly predictive, making it a valuable tool for both practitioners and researchers. The code for this study is publicly available. | [
"['Noam Koren' 'Kira Radinsky']"
] |
null | null | 2405.13817 | null | null | http://arxiv.org/pdf/2405.13817v1 | 2024-05-22T16:47:03Z | 2024-05-22T16:47:03Z | Thermodynamic Natural Gradient Descent | Second-order training methods have better convergence properties than gradient descent but are rarely used in practice for large-scale training due to their computational overhead. This can be viewed as a hardware limitation (imposed by digital computers). Here we show that natural gradient descent (NGD), a second-order method, can have a similar computational complexity per iteration to a first-order method, when employing appropriate hardware. We present a new hybrid digital-analog algorithm for training neural networks that is equivalent to NGD in a certain parameter regime but avoids prohibitively costly linear system solves. Our algorithm exploits the thermodynamic properties of an analog system at equilibrium, and hence requires an analog thermodynamic computer. The training occurs in a hybrid digital-analog loop, where the gradient and Fisher information matrix (or any other positive semi-definite curvature matrix) are calculated at given time intervals while the analog dynamics take place. We numerically demonstrate the superiority of this approach over state-of-the-art digital first- and second-order training methods on classification tasks and language model fine-tuning tasks. | [
"['Kaelan Donatella' 'Samuel Duffield' 'Maxwell Aifer' 'Denis Melanson'\n 'Gavin Crooks' 'Patrick J. Coles']"
] |
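The digital reference computation that the analog hardware in 2405.13817 above is meant to accelerate: each natural-gradient step solves a regularized linear system in the Fisher matrix. The toy quadratic problem below stands in for a neural network loss.

```python
# Plain (digital) natural gradient descent on a toy quadratic; the linear
# solve per step is the bottleneck the thermodynamic loop is designed to replace.
import numpy as np

rng = np.random.default_rng(0)
dim, lr, lam = 50, 0.5, 1e-3
theta = rng.normal(size=dim)

def grad_and_fisher(theta):
    J = rng.normal(size=(200, dim)) / np.sqrt(200)   # per-example Jacobian
    g = J.T @ (J @ theta)                            # gradient of 0.5*||J theta||^2
    F = J.T @ J                                      # empirical Fisher / GGN
    return g, F

for step in range(100):
    g, F = grad_and_fisher(theta)
    d = np.linalg.solve(F + lam * np.eye(dim), g)    # costly O(dim^3) solve
    theta = theta - lr * d                           # NGD update

print(round(float(np.linalg.norm(theta)), 6))        # approaches the optimum 0
```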
null | null | 2405.13818 | null | null | http://arxiv.org/pdf/2405.13818v1 | 2024-05-22T16:48:47Z | 2024-05-22T16:48:47Z | Identifiability of Differential-Algebraic Systems | Data-driven modeling of dynamical systems often faces numerous data-related challenges. A fundamental requirement is the existence of a unique set of parameters for a chosen model structure, an issue commonly referred to as identifiability. Although this problem is well studied for ordinary differential equations (ODEs), few studies have focused on the more general class of systems described by differential-algebraic equations (DAEs). Examples of DAEs include dynamical systems with algebraic equations representing conservation laws or approximating fast dynamics. This work introduces a novel identifiability test for models characterized by nonlinear DAEs. Unlike previous approaches, our test only requires prior knowledge of the system equations and does not need nonlinear transformation, index reduction, or numerical integration of the DAEs. We employed our identifiability analysis across a diverse range of DAE models, illustrating how system identifiability depends on the choices of sensors, experimental conditions, and model structures. Given the added challenges involved in identifying DAEs when compared to ODEs, we anticipate that our findings will have broad applicability and contribute significantly to the development and validation of data-driven methods for DAEs and other structure-preserving models. | [
"['Arthur N. Montanari' 'François Lamoline' 'Robert Bereza'\n 'Jorge Gonçalves']"
] |
null | null | 2405.13832 | null | null | http://arxiv.org/pdf/2405.13832v1 | 2024-05-22T16:59:50Z | 2024-05-22T16:59:50Z | Federated Learning in Healthcare: Model Misconducts, Security,
Challenges, Applications, and Future Research Directions -- A Systematic
Review | Data privacy has become a major concern in healthcare due to the increasing digitization of medical records and data-driven medical research. Protecting sensitive patient information from breaches and unauthorized access is critical, as such incidents can have severe legal and ethical complications. Federated Learning (FL) addresses this concern by enabling multiple healthcare institutions to collaboratively learn from decentralized data without sharing it. FL's scope in healthcare covers areas such as disease prediction, treatment customization, and clinical trial research. However, implementing FL poses challenges, including model convergence in non-IID (independent and identically distributed) data environments, communication overhead, and managing multi-institutional collaborations. A systematic review of FL in healthcare is necessary to evaluate how effectively FL can provide privacy while maintaining the integrity and usability of medical data analysis. In this study, we analyze existing literature on FL applications in healthcare. We explore the current state of model security practices, identify prevalent challenges, and discuss practical applications and their implications. Additionally, the review highlights promising future research directions to refine FL implementations, enhance data security protocols, and expand FL's use to broader healthcare applications, which will benefit future researchers and practitioners. | [
"['Md Shahin Ali' 'Md Manjurul Ahsan' 'Lamia Tasnim' 'Sadia Afrin'\n 'Koushik Biswas' 'Md Maruf Hossain' 'Md Mahfuz Ahmed' 'Ronok Hashan'\n 'Md Khairul Islam' 'Shivakumar Raman']"
] |
null | null | 2405.13846 | null | null | http://arxiv.org/pdf/2405.13846v1 | 2024-05-22T17:14:03Z | 2024-05-22T17:14:03Z | Regression Trees Know Calculus | Regression trees have emerged as a preeminent tool for solving real-world regression problems due to their ability to deal with nonlinearities, interaction effects and sharp discontinuities. In this article, we rather study regression trees applied to well-behaved, differentiable functions, and determine the relationship between node parameters and the local gradient of the function being approximated. We find a simple estimate of the gradient which can be efficiently computed using quantities exposed by popular tree learning libraries. This allows the tools developed in the context of differentiable algorithms, like neural nets and Gaussian processes, to be deployed to tree-based models. To demonstrate this, we study measures of model sensitivity defined in terms of integrals of gradients and demonstrate how to compute them for regression trees using the proposed gradient estimates. Quantitative and qualitative numerical experiments reveal the capability of gradients estimated by regression trees to improve predictive analysis, solve tasks in uncertainty quantification, and provide interpretation of model behavior. | [
"['Nathan Wycoff']"
] |
null | null | 2405.13848 | null | null | http://arxiv.org/pdf/2405.13848v1 | 2024-05-22T17:19:30Z | 2024-05-22T17:19:30Z | Maximum Manifold Capacity Representations in State Representation
Learning | The expanding research on manifold-based self-supervised learning (SSL) builds on the manifold hypothesis, which suggests that the inherent complexity of high-dimensional data can be unraveled through lower-dimensional manifold embeddings. Capitalizing on this, DeepInfomax with an unbalanced atlas (DIM-UA) has emerged as a powerful tool and yielded impressive results for state representations in reinforcement learning. Meanwhile, Maximum Manifold Capacity Representation (MMCR) presents a new frontier for SSL by optimizing class separability via manifold compression. However, MMCR demands extensive input views, resulting in significant computational costs and protracted pre-training durations. Bridging this gap, we present an innovative integration of MMCR into existing SSL methods, incorporating a discerning regularization strategy that enhances the lower bound of mutual information. We also propose a novel state representation learning method extending DIM-UA, embedding a nuclear norm loss to enforce manifold consistency robustly. On experimentation with the Atari Annotated RAM Interface, our method improves DIM-UA significantly with the same number of target encoding dimensions. The mean F1 score averaged over categories is 78% compared to 75% of DIM-UA. There are also compelling gains when implementing SimCLR and Barlow Twins. This supports our SSL innovation as a paradigm shift, enabling more nuanced high-dimensional data representations. | [
"['Li Meng' 'Morten Goodwin' 'Anis Yazidi' 'Paal Engelstad']"
] |
null | null | 2405.13850 | null | null | http://arxiv.org/pdf/2405.13850v1 | 2024-05-22T17:23:15Z | 2024-05-22T17:23:15Z | Enhancing lattice kinetic schemes for fluid dynamics with
Lattice-Equivariant Neural Networks | We present a new class of equivariant neural networks, hereby dubbed Lattice-Equivariant Neural Networks (LENNs), designed to satisfy local symmetries of a lattice structure. Our approach develops within a recently introduced framework aimed at learning neural network-based surrogate models of Lattice Boltzmann collision operators. Whenever neural networks are employed to model physical systems, respecting symmetries and equivariance properties has been shown to be key for accuracy, numerical stability, and performance. Here, hinging on ideas from group representation theory, we define trainable layers whose algebraic structure is equivariant with respect to the symmetries of the lattice cell. Our method naturally allows for efficient implementations, both in terms of memory usage and computational costs, supporting scalable training/testing for lattices in two spatial dimensions and higher, as the size of the symmetry group grows. We validate and test our approach considering 2D and 3D flowing dynamics, both in laminar and turbulent regimes. We compare with group-averaged symmetric networks and with plain, non-symmetric networks, showing how our approach unlocks the (a-posteriori) accuracy and training stability of the former models, and the train/inference speed of the latter networks (LENNs are about one order of magnitude faster than group-averaged networks in 3D). Our work opens towards practical utilization of machine learning-augmented Lattice Boltzmann CFD in real-world simulations. | [
"['Giulio Ortali' 'Alessandro Gabbana' 'Imre Atmodimedjo'\n 'Alessandro Corbetta']"
] |
null | null | 2405.13854 | null | null | http://arxiv.org/pdf/2405.13854v1 | 2024-05-22T17:29:12Z | 2024-05-22T17:29:12Z | On the dynamics of convolutional recurrent neural networks near their
critical point | We examine the dynamical properties of a single-layer convolutional recurrent network with a smooth sigmoidal activation function, for small values of the inputs and when the convolution kernel is unitary, so all eigenvalues lie exactly on the unit circle. Such networks have a variety of hallmark properties: the outputs depend on the inputs via compressive nonlinearities such as cubic roots, and both the timescales of relaxation and the length-scales of signal propagation depend sensitively on the inputs as power laws, both diverging as the input goes to 0. The basic dynamical mechanism is that inputs to the network generate ongoing activity, which in turn controls how additional inputs or signals propagate spatially or attenuate in time. We present analytical solutions for the steady states when the network is forced with a single oscillation and when a background value creates a steady state of ongoing activity, and derive the relationships shaping the value of the temporal decay and spatial propagation length as a function of this background value. | [
"['Aditi Chandra' 'Marcelo O. Magnasco']"
] |
null | null | 2405.13858 | null | null | http://arxiv.org/pdf/2405.13858v1 | 2024-05-22T17:33:51Z | 2024-05-22T17:33:51Z | Carbon Connect: An Ecosystem for Sustainable Computing | Computing is at a moment of profound opportunity. Emerging applications -- such as capable artificial intelligence, immersive virtual realities, and pervasive sensor systems -- drive unprecedented demand for computing. Despite recent advances toward net zero carbon emissions, the computing industry's gross energy usage continues to rise at an alarming rate, outpacing the growth of new energy installations and renewable energy deployments. A shift towards sustainability is needed to spark a transformation in how computer systems are manufactured, allocated, and consumed. Carbon Connect envisions coordinated research thrusts that produce design and management strategies for sustainable, next-generation computer systems. These strategies must flatten and then reverse growth trajectories for computing power and carbon for society's most rapidly growing applications such as artificial intelligence and virtual spaces. We will require accurate models for carbon accounting in computing technology. For embodied carbon, we must re-think conventional design strategies -- over-provisioned monolithic servers, frequent hardware refresh cycles, custom silicon -- and adopt life-cycle design strategies that more effectively reduce, reuse and recycle hardware at scale. For operational carbon, we must not only embrace renewable energy but also design systems to use that energy more efficiently. Finally, new hardware design and management strategies must be cognizant of the economic policy and regulatory landscape, aligning private initiatives with societal goals. Many of these broader goals will require computer scientists to develop deep, enduring collaborations with researchers in economics, law, and industrial ecology to spark change in broader practice. | [
"['Benjamin C. Lee' 'David Brooks' 'Arthur van Benthem' 'Udit Gupta'\n 'Gage Hills' 'Vincent Liu' 'Benjamin Pierce' 'Christopher Stewart'\n 'Emma Strubell' 'Gu-Yeon Wei' 'Adam Wierman' 'Yuan Yao' 'Minlan Yu']"
] |
null | null | 2405.13861 | null | null | http://arxiv.org/pdf/2405.13861v2 | 2024-05-26T21:27:03Z | 2024-05-22T17:38:16Z | Transformers Learn Temporal Difference Methods for In-Context
Reinforcement Learning | In-context learning refers to the learning ability of a model during inference time without adapting its parameters. The input (i.e., prompt) to the model (e.g., transformers) consists of both a context (i.e., instance-label pairs) and a query instance. The model is then able to output a label for the query instance according to the context during inference. A possible explanation for in-context learning is that the forward pass of (linear) transformers implements iterations of gradient descent on the instance-label pairs in the context. In this paper, we prove by construction that transformers can also implement temporal difference (TD) learning in the forward pass, a phenomenon we refer to as in-context TD. We demonstrate the emergence of in-context TD after training the transformer with a multi-task TD algorithm, accompanied by theoretical analysis. Furthermore, we prove that transformers are expressive enough to implement many other policy evaluation algorithms in the forward pass, including residual gradient, TD with eligibility trace, and average-reward TD. | [
"['Jiuqi Wang' 'Ethan Blaser' 'Hadi Daneshmand' 'Shangtong Zhang']"
] |
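For reference, the sketch below implements classical linear TD(0) policy evaluation -- the algorithm the paper shows transformers can emulate in their forward pass -- in its ordinary weight-space form. The Markov chain, features, and rewards are synthetic placeholders, and this is not the paper's transformer construction.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, d, gamma, alpha = 10, 4, 0.9, 0.05
Phi = rng.normal(size=(n_states, d))                 # state features
P = rng.dirichlet(np.ones(n_states), size=n_states)  # transition probabilities
r = rng.normal(size=n_states)                        # expected rewards

w, s = np.zeros(d), 0
for _ in range(50_000):
    s_next = rng.choice(n_states, p=P[s])
    td_error = r[s] + gamma * Phi[s_next] @ w - Phi[s] @ w
    w += alpha * td_error * Phi[s]                   # TD(0) update
    s = s_next

# Ground truth v = (I - gamma P)^{-1} r; linear TD converges to its projection
# onto span(Phi), so the match is only approximate when d < n_states.
v_true = np.linalg.solve(np.eye(n_states) - gamma * P, r)
print(np.round(Phi @ w, 2), np.round(v_true, 2), sep="\n")
```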
null | null | 2405.13863 | null | null | http://arxiv.org/pdf/2405.13863v1 | 2024-05-22T17:44:07Z | 2024-05-22T17:44:07Z | Dynamic Model Predictive Shielding for Provably Safe Reinforcement
Learning | Among approaches for provably safe reinforcement learning, Model Predictive Shielding (MPS) has proven effective at complex tasks in continuous, high-dimensional state spaces, by leveraging a backup policy to ensure safety when the learned policy attempts to take risky actions. However, while MPS can ensure safety both during and after training, it often hinders task progress due to the conservative and task-oblivious nature of backup policies. This paper introduces Dynamic Model Predictive Shielding (DMPS), which optimizes reinforcement learning objectives while maintaining provable safety. DMPS employs a local planner to dynamically select safe recovery actions that maximize both short-term progress and long-term rewards. Crucially, the planner and the neural policy play a synergistic role in DMPS. When planning recovery actions for ensuring safety, the planner utilizes the neural policy to estimate long-term rewards, allowing it to observe beyond its short-term planning horizon. Conversely, the neural policy under training learns from the recovery plans proposed by the planner, converging to policies that are both high-performing and safe in practice. This approach guarantees safety during and after training, with bounded recovery regret that decreases exponentially with planning horizon depth. Experimental results demonstrate that DMPS converges to policies that rarely require shield interventions after training and achieve higher rewards compared to several state-of-the-art baselines. | [
"['Arko Banerjee' 'Kia Rahmani' 'Joydeep Biswas' 'Isil Dillig']"
] |
null | null | 2405.13866 | null | null | http://arxiv.org/pdf/2405.13866v1 | 2024-05-22T17:47:14Z | 2024-05-22T17:47:14Z | Koopcon: A new approach towards smarter and less complex learning | In the era of big data, the sheer volume and complexity of datasets pose significant challenges in machine learning, particularly in image processing tasks. This paper introduces an innovative Autoencoder-based Dataset Condensation Model backed by Koopman operator theory that effectively packs large datasets into compact, information-rich representations. Inspired by the predictive coding mechanisms of the human brain, our model leverages a novel approach to encode and reconstruct data, maintaining essential features and label distributions. The condensation process utilizes an autoencoder neural network architecture, coupled with Optimal Transport theory and Wasserstein distance, to minimize the distributional discrepancies between the original and synthesized datasets. We present a two-stage implementation strategy: first, condensing the large dataset into a smaller synthesized subset; second, evaluating the synthesized data by training a classifier and comparing its performance with a classifier trained on an equivalent subset of the original data. Our experimental results demonstrate that the classifiers trained on condensed data exhibit comparable performance to those trained on the original datasets, thus affirming the efficacy of our condensation model. This work not only contributes to the reduction of computational resources but also paves the way for efficient data handling in constrained environments, marking a significant step forward in data-efficient machine learning. | [
"['Vahid Jebraeeli' 'Bo Jiang' 'Derya Cansever' 'Hamid Krim']"
] |
null | null | 2405.13867 | null | null | http://arxiv.org/pdf/2405.13867v1 | 2024-05-22T17:48:17Z | 2024-05-22T17:48:17Z | Scaling-laws for Large Time-series Models | Scaling laws for large language models (LLMs) have provided useful guidance on how to train ever larger models for predictable performance gains. Time series forecasting shares a similar sequential structure to language, and is amenable to large-scale transformer architectures. Here we show that foundational decoder-only time series transformer models exhibit analogous scaling behavior to LLMs, while architectural details (aspect ratio and number of heads) have a minimal effect over broad ranges. We assemble a large corpus of heterogeneous time series data on which to train, and establish, for the first time, power-law scaling relations with respect to parameter count, dataset size, and training compute, spanning five orders of magnitude. | [
"['Thomas D. P. Edwards' 'James Alvey' 'Justin Alsing' 'Nam H. Nguyen'\n 'Benjamin D. Wandelt']"
] |
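Power-law scaling relations of the kind reported above are typically estimated by linear regression in log-log space. A minimal sketch, using synthetic loss values rather than the paper's data:

```python
import numpy as np

# Fit a power law L(N) = a * N^(-alpha) by linear regression in log-log space.
# Parameter counts and losses below are synthetic placeholders.
rng = np.random.default_rng(1)
N = np.array([1e6, 3e6, 1e7, 3e7, 1e8, 3e8])                 # model sizes
L = 2.1 * N ** -0.08 * np.exp(rng.normal(0, 0.01, N.size))   # noisy losses

slope, intercept = np.polyfit(np.log(N), np.log(L), deg=1)
print(f"alpha ~ {-slope:.3f}   a ~ {np.exp(intercept):.3f}")  # ~0.080, ~2.1
```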
null | null | 2405.13868 | null | null | http://arxiv.org/pdf/2405.13868v1 | 2024-05-22T17:50:04Z | 2024-05-22T17:50:04Z | Automatically Identifying Local and Global Circuits with Linear
Computation Graphs | Circuit analysis of a given model behavior is a central task in mechanistic interpretability. We introduce our circuit discovery pipeline with sparse autoencoders (SAEs) and a variant called skip SAEs. With these two modules inserted into the model, the model's computation graph with respect to OV and MLP circuits becomes strictly linear. Our methods do not require linear approximation to compute the causal effect of each node. This fine-grained graph enables identifying both end-to-end and local circuits accounting for either logits or intermediate features. We can scalably apply this pipeline with a technique called Hierarchical Attribution. We analyze three kinds of circuits in GPT2-Small, namely bracket, induction and Indirect Object Identification circuits. Our results reveal new findings underlying existing discoveries. | [
"['Xuyang Ge' 'Fukang Zhu' 'Wentao Shu' 'Junxuan Wang' 'Zhengfu He'\n 'Xipeng Qiu']"
] |
null | null | 2405.13879 | null | null | http://arxiv.org/pdf/2405.13879v1 | 2024-05-22T17:59:44Z | 2024-05-22T17:59:44Z | FACT or Fiction: Can Truthful Mechanisms Eliminate Federated Free
Riding? | Standard federated learning (FL) approaches are vulnerable to the free-rider dilemma: participating agents can contribute little to nothing yet receive a well-trained aggregated model. While prior mechanisms attempt to solve the free-rider dilemma, none have addressed the issue of truthfulness. In practice, adversarial agents can provide false information to the server in order to cheat its way out of contributing to federated training. In an effort to make free-riding-averse federated mechanisms truthful, and consequently less prone to breaking down in practice, we propose FACT. FACT is the first federated mechanism that: (1) eliminates federated free riding by using a penalty system, (2) ensures agents provide truthful information by creating a competitive environment, and (3) encourages agent participation by offering better performance than training alone. Empirically, FACT avoids free-riding when agents are untruthful, and reduces agent loss by over 4x. | [
"['Marco Bornstein' 'Amrit Singh Bedi' 'Abdirisak Mohamed' 'Furong Huang']"
] |
null | null | 2405.13888 | null | null | http://arxiv.org/pdf/2405.13888v1 | 2024-05-22T18:00:41Z | 2024-05-22T18:00:41Z | Marrying Causal Representation Learning with Dynamical Systems for
Science | Causal representation learning promises to extend causal models to hidden causal variables from raw entangled measurements. However, most progress has focused on proving identifiability results in different settings, and we are not aware of any successful real-world application. At the same time, the field of dynamical systems has benefited from deep learning and scaled to countless applications, but does not allow parameter identification. In this paper, we draw a clear connection between the two and their key assumptions, allowing us to apply identifiable methods developed in causal representation learning to dynamical systems. At the same time, we can leverage scalable differentiable solvers developed for differential equations to build models that are both identifiable and practical. Overall, we learn explicitly controllable models that isolate the trajectory-specific parameters for further downstream tasks such as out-of-distribution classification or treatment effect estimation. We experiment with a wind simulator with partially known factors of variation. We also apply the resulting model to real-world climate data and successfully answer downstream causal questions in line with existing literature on climate change. | [
"['Dingling Yao' 'Caroline Muller' 'Francesco Locatello']"
] |
null | null | 2405.13899 | null | null | http://arxiv.org/pdf/2405.13899v1 | 2024-05-22T18:11:57Z | 2024-05-22T18:11:57Z | Symmetric Linear Bandits with Hidden Symmetry | High-dimensional linear bandits with low-dimensional structure have received considerable attention in recent studies due to their practical significance. The most common structure in the literature is sparsity. However, it may not be available in practice. Symmetry, where the reward is invariant under certain groups of transformations on the set of arms, is another important inductive bias in the high-dimensional case that covers many standard structures, including sparsity. In this work, we study high-dimensional symmetric linear bandits where the symmetry is hidden from the learner, and the correct symmetry needs to be learned in an online setting. We examine the structure of a collection of hidden symmetries and provide a method based on model selection within the collection of low-dimensional subspaces. Our algorithm achieves a regret bound of $O(d_0^{1/3} T^{2/3} \log(d))$, where $d$ is the ambient dimension which is potentially very large, and $d_0$ is the dimension of the true low-dimensional subspace such that $d_0 \ll d$. With an extra assumption on well-separated models, we can further improve the regret to $O(d_0 \sqrt{T \log(d)})$. | [
"['Nam Phuong Tran' 'The Anh Ta' 'Debmalya Mandal' 'Long Tran-Thanh']"
] |
null | null | 2405.13900 | null | null | http://arxiv.org/pdf/2405.13900v1 | 2024-05-22T18:13:38Z | 2024-05-22T18:13:38Z | Rehearsal-free Federated Domain-incremental Learning | We introduce a rehearsal-free federated domain incremental learning framework, RefFiL, based on a global prompt-sharing paradigm to alleviate catastrophic forgetting challenges in federated domain-incremental learning, where unseen domains are continually learned. Typical methods for mitigating forgetting, such as the use of additional datasets and the retention of private data from earlier tasks, are not viable in federated learning (FL) due to devices' limited resources. Our method, RefFiL, addresses this by learning domain-invariant knowledge and incorporating various domain-specific prompts from the domains represented by different FL participants. A key feature of RefFiL is the generation of local fine-grained prompts by our domain adaptive prompt generator, which effectively learns from local domain knowledge while maintaining distinctive boundaries on a global scale. We also introduce a domain-specific prompt contrastive learning loss that differentiates between locally generated prompts and those from other domains, enhancing RefFiL's precision and effectiveness. Compared to existing methods, RefFiL significantly alleviates catastrophic forgetting without requiring extra memory space, making it ideal for privacy-sensitive and resource-constrained devices. | [
"['Rui Sun' 'Haoran Duan' 'Jiahua Dong' 'Varun Ojha' 'Tejal Shah'\n 'Rajiv Ranjan']"
] |
null | null | 2405.13901 | null | null | http://arxiv.org/pdf/2405.13901v2 | 2024-05-28T17:56:12Z | 2024-05-22T18:15:42Z | DCT-Based Decorrelated Attention for Vision Transformers | Central to the Transformer architectures' effectiveness is the self-attention mechanism, a function that maps queries, keys, and values into a high-dimensional vector space. However, training the attention weights of queries, keys, and values is non-trivial from a state of random initialization. In this paper, we propose two methods. (i) We first address the initialization problem of Vision Transformers by introducing a simple, yet highly innovative, initialization approach utilizing Discrete Cosine Transform (DCT) coefficients. Our proposed DCT-based attention initialization marks a significant gain compared to traditional initialization strategies, offering a robust foundation for the attention mechanism. Our experiments reveal that the DCT-based initialization enhances the accuracy of Vision Transformers in classification tasks. (ii) We also recognize that since DCT effectively decorrelates image information in the frequency domain, this decorrelation is useful for compression because it allows the quantization step to discard many of the higher-frequency components. Based on this observation, we propose a novel DCT-based compression technique for the attention function of Vision Transformers. Since high-frequency DCT coefficients usually correspond to noise, we truncate the high-frequency DCT components of the input patches. Our DCT-based compression reduces the size of weight matrices for queries, keys, and values. While maintaining the same level of accuracy, our DCT-compressed Swin Transformers achieve a considerable decrease in the computational overhead. | [
"['Hongyi Pan' 'Emadeldeen Hamdan' 'Xin Zhu' 'Koushik Biswas'\n 'Ahmet Enis Cetin' 'Ulas Bagci']"
] |
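To make the DCT-based initialization and truncation idea concrete, the sketch below builds an orthonormal DCT-II basis and uses it as a hypothetical initializer for a projection matrix, keeping only low-frequency rows as a crude stand-in for the compression step; the paper's exact procedure is not reproduced here.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (row k = frequency k)."""
    k = np.arange(n)[:, None]
    j = np.arange(n)[None, :]
    C = np.cos(np.pi * (2 * j + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    C[0] /= np.sqrt(2.0)   # rescale DC row so that C @ C.T = I
    return C

d, m = 64, 16
W_init = dct_matrix(d)    # hypothetical DCT initialization of a d x d projection
W_trunc = W_init[:m]      # keep m low-frequency rows: a crude compression step

x = np.random.default_rng(0).normal(size=d)   # stand-in patch embedding
assert np.allclose(W_init @ W_init.T, np.eye(d))
print(W_trunc @ x)        # low-frequency projection of the patch vector
```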
null | null | 2405.13902 | null | null | http://arxiv.org/pdf/2405.13902v2 | 2024-06-06T08:29:31Z | 2024-05-22T18:17:20Z | LOGIN: A Large Language Model Consulted Graph Neural Network Training
Framework | Recent prevailing works on graph machine learning typically follow a similar methodology that involves designing advanced variants of graph neural networks (GNNs) to maintain the superior performance of GNNs on different graphs. In this paper, we aim to streamline the GNN design process and leverage the advantages of Large Language Models (LLMs) to improve the performance of GNNs on downstream tasks. We formulate a new paradigm, coined "LLMs-as-Consultants," which integrates LLMs with GNNs in an interactive manner. A framework named LOGIN (LLM Consulted GNN training) is instantiated, empowering the interactive utilization of LLMs within the GNN training process. First, we attentively craft concise prompts for spotted nodes, carrying comprehensive semantic and topological information, and serving as input to LLMs. Second, we refine GNNs by devising a complementary coping mechanism that utilizes the responses from LLMs, depending on their correctness. We empirically evaluate the effectiveness of LOGIN on node classification tasks across both homophilic and heterophilic graphs. The results illustrate that even basic GNN architectures, when employed within the proposed LLMs-as-Consultants paradigm, can achieve comparable performance to advanced GNNs with intricate designs. Our codes are available at https://github.com/QiaoYRan/LOGIN. | [
"['Yiran Qiao' 'Xiang Ao' 'Yang Liu' 'Jiarong Xu' 'Xiaoqian Sun' 'Qing He']"
] |
null | null | 2405.13910 | null | null | http://arxiv.org/pdf/2405.13910v2 | 2024-05-27T18:05:55Z | 2024-05-22T18:34:25Z | Learning Latent Space Hierarchical EBM Diffusion Models | This work studies the learning problem of the energy-based prior model and the multi-layer generator model. The multi-layer generator model, which contains multiple layers of latent variables organized in a top-down hierarchical structure, typically assumes the Gaussian prior model. Such a prior model can be limited in modelling expressivity, which results in a gap between the generator posterior and the prior model, known as the prior hole problem. Recent works have explored learning the energy-based (EBM) prior model as a second-stage, complementary model to bridge the gap. However, the EBM defined on a multi-layer latent space can be highly multi-modal, which makes sampling from such marginal EBM prior challenging in practice, resulting in ineffectively learned EBM. To tackle the challenge, we propose to leverage the diffusion probabilistic scheme to mitigate the burden of EBM sampling and thus facilitate EBM learning. Our extensive experiments demonstrate a superior performance of our diffusion-learned EBM prior on various challenging tasks. | [
"['Jiali Cui' 'Tian Han']"
] |
null | null | 2405.13912 | null | null | http://arxiv.org/pdf/2405.13912v1 | 2024-05-22T18:38:10Z | 2024-05-22T18:38:10Z | Matrix Denoising with Doubly Heteroscedastic Noise: Fundamental Limits
and Optimal Spectral Methods | We study the matrix denoising problem of estimating the singular vectors of a rank-$1$ signal corrupted by noise with both column and row correlations. Existing works are either unable to pinpoint the exact asymptotic estimation error or, when they do so, the resulting approaches (e.g., based on whitening or singular value shrinkage) remain vastly suboptimal. On top of this, most of the literature has focused on the special case of estimating the left singular vector of the signal when the noise only possesses row correlation (one-sided heteroscedasticity). In contrast, our work establishes the information-theoretic and algorithmic limits of matrix denoising with doubly heteroscedastic noise. We characterize the exact asymptotic minimum mean square error, and design a novel spectral estimator with rigorous optimality guarantees: under a technical condition, it attains positive correlation with the signals whenever information-theoretically possible and, for one-sided heteroscedasticity, it also achieves the Bayes-optimal error. Numerical experiments demonstrate the significant advantage of our theoretically principled method over the state of the art. The proofs draw connections with statistical physics and approximate message passing, departing drastically from standard random matrix theory techniques. | [
"['Yihan Zhang' 'Marco Mondelli']"
] |
null | null | 2405.13915 | null | null | http://arxiv.org/pdf/2405.13915v1 | 2024-05-22T18:41:11Z | 2024-05-22T18:41:11Z | HeteGraph-Mamba: Heterogeneous Graph Learning via Selective State Space
Model | We propose a heterogeneous graph mamba network (HGMN) as the first exploration in leveraging the selective state space models (SSSMs) for heterogeneous graph learning. Compared with the literature, our HGMN overcomes two major challenges: (i) capturing long-range dependencies among heterogeneous nodes and (ii) adapting SSSMs to heterogeneous graph data. Our key contribution is a general graph architecture that can handle heterogeneous nodes in real-world scenarios, followed by an efficient computation flow. Methodologically, we introduce a two-level efficient tokenization approach that first captures long-range dependencies within identical node types, and subsequently across all node types. Empirically, we conduct comparisons between our framework and 19 state-of-the-art methods on heterogeneous benchmarks. The extensive comparisons demonstrate that our framework outperforms other methods in both accuracy and efficiency. | [
"['Zhenyu Pan' 'Yoonsung Jeong' 'Xiaoda Liu' 'Han Liu']"
] |
null | null | 2405.13919 | null | null | http://arxiv.org/pdf/2405.13919v1 | 2024-05-22T18:49:11Z | 2024-05-22T18:49:11Z | Fair Online Bilateral Trade | In online bilateral trade, a platform posts prices to incoming pairs of buyers and sellers that have private valuations for a certain good. If the price is lower than the buyers' valuation and higher than the sellers' valuation, then a trade takes place. Previous work focused on the platform perspective, with the goal of setting prices maximizing the gain from trade (the sum of sellers' and buyers' utilities). Gain from trade is, however, potentially unfair to traders, as they may receive highly uneven shares of the total utility. In this work we enforce fairness by rewarding the platform with the fair gain from trade, defined as the minimum between sellers' and buyers' utilities. After showing that any no-regret learning algorithm designed to maximize the sum of the utilities may fail badly with fair gain from trade, we present our main contribution: a complete characterization of the regret regimes for fair gain from trade when, after each interaction, the platform only learns whether each trader accepted the current price. Specifically, we prove the following regret bounds: $\Theta(\ln T)$ in the deterministic setting, $\Omega(T)$ in the stochastic setting, and $\tilde{\Theta}(T^{2/3})$ in the stochastic setting when sellers' and buyers' valuations are independent of each other. We conclude by providing tight regret bounds when, after each interaction, the platform is allowed to observe the true traders' valuations. | [
"['François Bachoc' 'Nicolò Cesa-Bianchi' 'Tommaso Cesari'\n 'Roberto Colomboni']"
] |
null | null | 2405.13922 | null | null | http://arxiv.org/pdf/2405.13922v1 | 2024-05-22T18:52:09Z | 2024-05-22T18:52:09Z | Towards Certification of Uncertainty Calibration under Adversarial
Attacks | Since neural classifiers are known to be sensitive to adversarial perturbations that alter their accuracy, \textit{certification methods} have been developed to provide provable guarantees on the insensitivity of their predictions to such perturbations. Furthermore, in safety-critical applications, the frequentist interpretation of the confidence of a classifier (also known as model calibration) can be of utmost importance. This property can be measured via the Brier score or the expected calibration error. We show that attacks can significantly harm calibration, and thus propose certified calibration as worst-case bounds on calibration under adversarial perturbations. Specifically, we produce analytic bounds for the Brier score and approximate bounds via the solution of a mixed-integer program on the expected calibration error. Finally, we propose novel calibration attacks and demonstrate how they can improve model calibration through \textit{adversarial calibration training}. | [
"['Cornelius Emde' 'Francesco Pinto' 'Thomas Lukasiewicz'\n 'Philip H. S. Torr' 'Adel Bibi']"
] |
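The two calibration metrics named in the abstract are standard and easy to compute. A minimal sketch of the (uncertified) Brier score and binned expected calibration error, on synthetic predictions:

```python
import numpy as np

def brier_score(probs, labels):
    """Mean squared error between predicted probabilities and one-hot labels."""
    onehot = np.eye(probs.shape[1])[labels]
    return np.mean(np.sum((probs - onehot) ** 2, axis=1))

def expected_calibration_error(probs, labels, n_bins=10):
    """Standard binned ECE: |accuracy - confidence| weighted by bin mass."""
    conf = probs.max(axis=1)
    correct = (probs.argmax(axis=1) == labels).astype(float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())
    return ece

rng = np.random.default_rng(0)
logits = rng.normal(size=(1000, 5))                   # synthetic classifier outputs
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
labels = rng.integers(0, 5, size=1000)
print(brier_score(probs, labels), expected_calibration_error(probs, labels))
```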
null | null | 2405.13931 | null | null | http://arxiv.org/pdf/2405.13931v1 | 2024-05-22T18:59:42Z | 2024-05-22T18:59:42Z | A Methodology to Identify Physical or Computational Experiment
Conditions for Uncertainty Mitigation | Complex engineering systems require integration of simulation of sub-systems and calculation of metrics to drive design decisions. This paper introduces a methodology for designing computational or physical experiments for system-level uncertainty mitigation purposes. The methodology follows a previously determined problem ontology, where physical, functional and modeling architectures are decided upon. By carrying out sensitivity analysis techniques utilizing system-level tools, critical epistemic uncertainties can be identified. Afterwards, a framework is introduced to design specific computational and physical experimentation for generating new knowledge about parameters, and for uncertainty mitigation. The methodology is demonstrated through a case study on an early-stage design Blended-Wing-Body (BWB) aircraft concept, showcasing how aerostructures analyses can be leveraged for mitigating system-level uncertainty, by computer experiments or guiding physical experimentation. The proposed methodology is versatile enough to tackle uncertainty management across various design challenges, highlighting the potential for more risk-informed design processes. | [
"['Efe Y. Yarbasi' 'Dimitri N. Mavris']"
] |
null | null | 2405.13934 | null | null | http://arxiv.org/pdf/2405.13934v3 | 2024-05-28T10:04:50Z | 2024-05-22T19:06:39Z | Text-Free Multi-domain Graph Pre-training: Toward Graph Foundation
Models | Given the ubiquity of graph data, it is intriguing to ask: Is it possible to train a graph foundation model on a broad range of graph data across diverse domains? A major hurdle toward this goal lies in the fact that graphs from different domains often exhibit profoundly divergent characteristics. Although there have been some initial efforts in integrating multi-domain graphs for pre-training, they primarily rely on textual descriptions to align the graphs, limiting their application to text-attributed graphs. Moreover, different source domains may conflict or interfere with each other, and their relevance to the target domain can vary significantly. To address these issues, we propose MDGPT, a text-free Multi-Domain Graph Pre-Training and adaptation framework designed to exploit multi-domain knowledge for graph learning. First, we propose a set of domain tokens to align features across source domains for synergistic pre-training. Second, we propose dual prompts, consisting of a unifying prompt and a mixing prompt, to further adapt the target domain with unified multi-domain knowledge and a tailored mixture of domain-specific knowledge. Finally, we conduct extensive experiments involving six public datasets to evaluate and analyze MDGPT, which outperforms prior art by up to 37.9%. | [
"['Xingtong Yu' 'Chang Zhou' 'Yuan Fang' 'Xinming Zhang']"
] |
null | null | 2405.13937 | null | null | http://arxiv.org/pdf/2405.13937v5 | 2024-07-03T02:06:07Z | 2024-05-22T19:10:24Z | DyGPrompt: Learning Feature and Time Prompts on Dynamic Graphs | Dynamic graphs are pervasive in the real world, modeling dynamic relations between objects across various fields. For dynamic graph modeling, dynamic graph neural networks (DGNNs) have emerged as a mainstream technique, which are generally pre-trained on the link prediction task, leaving a significant gap from the objectives of downstream tasks such as node classification. To bridge the gap, prompt-based learning has gained traction on graphs. However, existing efforts focus on static graphs, neglecting the evolution of dynamic graphs. In this paper, we propose DyGPrompt, a novel pre-training and prompting framework for dynamic graph modeling. First, we design dual prompts to address the gap in both task objectives and dynamic variations across pre-training and downstream tasks. Second, we recognize that node and time features mutually characterize each other, and propose dual condition-nets to model the evolving node-time patterns in downstream tasks. Finally, we thoroughly evaluate and analyze DyGPrompt through extensive experiments on three public datasets. | [
"['Xingtong Yu' 'Zhenghao Liu' 'Yuan Fang' 'Xinming Zhang']"
] |
null | null | 2405.13938 | null | null | http://arxiv.org/pdf/2405.13938v1 | 2024-05-22T19:11:28Z | 2024-05-22T19:11:28Z | eXmY: A Data Type and Technique for Arbitrary Bit Precision Quantization | eXmY is a novel data type for quantization of ML models. It supports both arbitrary bit widths and arbitrary integer and floating point formats. For example, it seamlessly supports 3, 5, 6, 7, 9 bit formats. For a specific bit width, say 7, it defines all possible formats, e.g., e0m6, e1m5, e2m4, e3m3, e4m2, e5m1 and e6m0. For non-power-of-two bit widths, e.g., 5, 6, 7, we created a novel encoding and decoding scheme which achieves perfect compression, byte addressability and is amenable to sharding and vector processing. We implemented libraries for emulation, encoding and decoding tensors and checkpoints in C++, TensorFlow, JAX and PAX. For optimal performance, the codecs use SIMD instructions on CPUs and vector instructions on TPUs and GPUs. eXmY is also a technique that exploits the statistical distribution of exponents in tensors. It can be used to quantize weights, static and dynamic activations, gradients, master weights and optimizer state. It can reduce memory (CPU DRAM and accelerator HBM), network and disk storage and transfers. It can increase multi-tenancy and accelerate compute. eXmY has been deployed in production for almost 2 years. | [
"['Aditya Agrawal' 'Matthew Hedlund' 'Blake Hechtman']"
] |
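As a rough illustration of what an eXmY format denotes, the sketch below enumerates the value grid of a toy IEEE-style eAmB format and rounds floats onto it. This naive enumeration is an assumption for illustration only: it is not the paper's perfect-compression codec, and the bias/subnormal conventions are guesses at a plausible layout.

```python
import numpy as np

def exmy_grid(e_bits, m_bits):
    """Non-negative values of a toy IEEE-style eXmY format (no inf/nan).
    Naive reference enumeration, not the eXmY codec."""
    bias = (1 << (e_bits - 1)) - 1 if e_bits > 0 else 0
    vals = []
    for e in range(1 << e_bits):
        for m in range(1 << m_bits):
            if e == 0:   # subnormals (and zero)
                vals.append(m * 2.0 ** (1 - bias - m_bits))
            else:        # normals, implicit leading 1
                vals.append((1 + m / (1 << m_bits)) * 2.0 ** (e - bias))
    return np.unique(vals)

def quantize(x, grid):
    """Round-to-nearest onto the signed grid."""
    signed = np.unique(np.concatenate([-grid, grid]))
    return signed[np.abs(np.asarray(x)[:, None] - signed[None, :]).argmin(axis=1)]

grid = exmy_grid(e_bits=3, m_bits=3)   # e3m3: sign + 3 exp + 3 mantissa = 7 bits
x = np.array([0.013, -0.4, 1.7, 25.0])
print(quantize(x, grid))
```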
null | null | 2405.13939 | null | null | http://arxiv.org/pdf/2405.13939v1 | 2024-05-22T19:13:05Z | 2024-05-22T19:13:05Z | Principal eigenstate classical shadows | Given many copies of an unknown quantum state $\rho$, we consider the task of learning a classical description of its principal eigenstate. Namely, assuming that $\rho$ has an eigenstate $|\phi\rangle$ with (unknown) eigenvalue $\lambda > 1/2$, the goal is to learn a (classical shadows style) classical description of $|\phi\rangle$ which can later be used to estimate expectation values $\langle \phi | O | \phi \rangle$ for any $O$ in some class of observables. We consider the sample-complexity setting in which generating a copy of $\rho$ is expensive, but joint measurements on many copies of the state are possible. We present a protocol for this task scaling with the principal eigenvalue $\lambda$ and show that it is optimal within a space of natural approaches, e.g., applying quantum state purification followed by a single-copy classical shadows scheme. Furthermore, when $\lambda$ is sufficiently close to $1$, the performance of our algorithm is optimal -- matching the sample complexity for pure state classical shadows. | [
"['Daniel Grier' 'Hakop Pashayan' 'Luke Schaeffer']"
] |
null | null | 2405.13944 | null | null | http://arxiv.org/pdf/2405.13944v1 | 2024-05-22T19:19:09Z | 2024-05-22T19:19:09Z | A Survey on Design-space Dimensionality Reduction Methods for Shape
Optimization | The rapidly evolving field of engineering design of functional surfaces necessitates sophisticated tools to manage the inherent complexity of high-dimensional design spaces. This review delves into the field of design-space dimensionality reduction techniques tailored for shape optimization, bridging traditional methods and cutting-edge technologies. Dissecting the spectrum of these techniques, from classical linear approaches like principal component analysis to more nuanced nonlinear methods such as autoencoders, the discussion extends to innovative physics-informed methods that integrate physical data into the dimensionality reduction process, enhancing the predictive accuracy and relevance of reduced models. By integrating these methods into optimization frameworks, it is shown how they significantly mitigate the curse of dimensionality, streamline computational processes, and refine the exploration and optimization of complex functional surfaces. The survey provides a classification of methods and highlights the transformative impact of these techniques in simplifying design challenges, thereby fostering more efficient and effective engineering solutions. | [
"['Andrea Serani' 'Matteo Diez']"
] |
null | null | 2405.13947 | null | null | http://arxiv.org/pdf/2405.13947v1 | 2024-05-22T19:27:03Z | 2024-05-22T19:27:03Z | Leader Reward for POMO-Based Neural Combinatorial Optimization | Deep neural networks based on reinforcement learning (RL) for solving combinatorial optimization (CO) problems are developing rapidly and have shown a tendency to approach or even outperform traditional solvers. However, existing methods overlook an important distinction: CO problems differ from other traditional problems in that they focus solely on the optimal solution provided by the model within a specific length of time, rather than considering the overall quality of all solutions generated by the model. In this paper, we propose Leader Reward and apply it during two different training phases of the Policy Optimization with Multiple Optima (POMO) model to enhance the model's ability to generate optimal solutions. This approach is applicable not only to a variety of CO problems, such as the Traveling Salesman Problem (TSP), the Capacitated Vehicle Routing Problem (CVRP), and the Flexible Flow Shop Problem (FFSP), but also works well with other POMO-based models and inference-phase strategies. We demonstrate that Leader Reward greatly improves the quality of the optimal solutions generated by the model. Specifically, we reduce POMO's gap to the optimum by more than 100 times on TSP100 with almost no additional computational overhead. | [
"['Chaoyang Wang' 'Pengzhi Cheng' 'Jingze Li' 'Weiwei Sun']"
] |
null | null | 2405.13950 | null | null | http://arxiv.org/pdf/2405.13950v1 | 2024-05-22T19:33:58Z | 2024-05-22T19:33:58Z | Actor-critic algorithms for fiber sampling problems | We propose an actor-critic algorithm for a family of complex problems arising in algebraic statistics and discrete optimization. The core task is to produce a sample from a finite subset of the non-negative integer lattice defined by a high-dimensional polytope. We translate the problem into a Markov decision process and devise an actor-critic reinforcement learning (RL) algorithm to learn a set of good moves that can be used for sampling. We prove that the actor-critic algorithm converges to an approximately optimal sampling policy. To tackle complexity issues that typically arise in these sampling problems, and to allow the RL to function at scale, our solution strategy takes three steps: decomposing the starting point of the sample, using RL on each induced subproblem, and reconstructing to obtain a sample in the original polytope. In this setup, the proof of convergence applies to each subproblem in the decomposition. We test the method in two regimes. In statistical applications, a high-dimensional polytope arises as the support set for the reference distribution in a model/data fit test for a broad family of statistical models for categorical data. We demonstrate how RL can be used for model fit testing problems for data sets for which traditional MCMC samplers converge too slowly due to problem size and sparsity structure. To test the robustness of the algorithm and explore its generalization properties, we apply it to synthetically generated data of various sizes and sparsity levels. | [
"['Ivan Gvozdanović' 'Sonja Petrović']"
] |
null | null | 2405.13952 | null | null | http://arxiv.org/pdf/2405.13952v1 | 2024-05-22T19:36:55Z | 2024-05-22T19:36:55Z | Spectral Adapter: Fine-Tuning in Spectral Space | Recent developments in Parameter-Efficient Fine-Tuning (PEFT) methods for pretrained deep neural networks have captured widespread interest. In this work, we study the enhancement of current PEFT methods by incorporating the spectral information of pretrained weight matrices into the fine-tuning procedure. We investigate two spectral adaptation mechanisms, namely additive tuning and orthogonal rotation of the top singular vectors, both of which first carry out a Singular Value Decomposition (SVD) of the pretrained weights and then fine-tune the top spectral space. We provide a theoretical analysis of spectral fine-tuning and show that our approach improves the rank capacity of low-rank adapters given a fixed trainable parameter budget. We show through extensive experiments that the proposed fine-tuning model enables better parameter efficiency and tuning performance as well as benefits multi-adapter fusion. The code will be open-sourced for reproducibility. | [
"['Fangzhao Zhang' 'Mert Pilanci']"
] |
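A miniature of the additive spectral-tuning idea: decompose a pretrained weight once, then expose only a small shift over the top-k singular values as trainable parameters. Shapes, k, and the single "update" below are illustrative assumptions; the paper's orthogonal rotation of singular vectors is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 128))              # stand-in pretrained weight
U, S, Vt = np.linalg.svd(W, full_matrices=False)

k = 8
delta = np.zeros(k)                          # the only trainable parameters

def adapted(delta):
    S_new = S.copy()
    S_new[:k] += delta                       # tune only the top spectral space
    return (U * S_new) @ Vt                  # reassemble U diag(S) V^T

delta += 0.1                                 # one illustrative "gradient step"
print(np.linalg.norm(adapted(delta) - W))    # change confined to top-k directions
```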
null | null | 2405.13954 | null | null | http://arxiv.org/pdf/2405.13954v1 | 2024-05-22T19:39:05Z | 2024-05-22T19:39:05Z | What is Your Data Worth to GPT? LLM-Scale Data Valuation with Influence
Functions | Large language models (LLMs) are trained on a vast amount of human-written data, but data providers often remain uncredited. In response to this issue, data valuation (or data attribution), which quantifies the contribution or value of each data point to the model output, has been discussed as a potential solution. Nevertheless, applying existing data valuation methods to recent LLMs and their vast training datasets has been largely limited by prohibitive compute and memory costs. In this work, we focus on influence functions, a popular gradient-based data valuation method, and significantly improve its scalability with an efficient gradient projection strategy called LoGra that leverages the gradient structure in backpropagation. We then provide a theoretical motivation of gradient projection approaches to influence functions to promote trust in the data valuation process. Lastly, we lower the barrier to implementing data valuation systems by introducing LogIX, a software package that can transform existing training code into data valuation code with minimal effort. In our data valuation experiments, LoGra achieves competitive accuracy against more expensive baselines while showing up to 6,500x improvement in throughput and 5x reduction in GPU memory usage when applied to Llama3-8B-Instruct and the 1B-token dataset. | [
"['Sang Keun Choe' 'Hwijeen Ahn' 'Juhan Bae' 'Kewen Zhao' 'Minsoo Kang'\n 'Youngseog Chung' 'Adithya Pratapa' 'Willie Neiswanger' 'Emma Strubell'\n 'Teruko Mitamura' 'Jeff Schneider' 'Eduard Hovy' 'Roger Grosse'\n 'Eric Xing']"
] |
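The core trick of gradient-projection approaches to influence functions can be sketched generically: score each training example by the inner product of its down-projected gradient with the down-projected gradient of a query loss. The sketch below uses a random projection and synthetic gradients, drops the inverse-Hessian preconditioner (an identity-Hessian assumption), and is not LoGra itself.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n_train = 10_000, 64, 1_000            # param dim, projected dim, examples

P = rng.normal(size=(k, d)) / np.sqrt(k)     # Johnson-Lindenstrauss style projection
G_train = rng.normal(size=(n_train, d))      # stand-in per-example training gradients
g_query = rng.normal(size=d)                 # stand-in gradient of the query loss

scores = (G_train @ P.T) @ (P @ g_query)     # inner products in projected space
print("top-5 most valuable examples:", np.argsort(scores)[::-1][:5])
```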