Dataset schema (column: dtype):

categories: string
doi: string
id: string
year: float64
venue: string
link: string
updated: string
published: string
title: string
abstract: string
authors: sequence

Note: the categories, doi, year, and venue fields are null for every record below, so they are omitted from the per-record entries.

id: 2406.02614
link: http://arxiv.org/pdf/2406.02614v2
updated: 2024-06-06T01:38:45Z
published: 2024-06-03T08:42:00Z
title: Frequency Enhanced Pre-training for Cross-city Few-shot Traffic Forecasting
abstract:
The field of Intelligent Transportation Systems (ITS) relies on accurate traffic forecasting to enable various downstream applications. However, developing cities often face challenges in collecting sufficient training traffic data due to limited resources and outdated infrastructure. Recognizing this obstacle, the concept of cross-city few-shot forecasting has emerged as a viable approach. While previous cross-city few-shot forecasting methods ignore the frequency similarity between cities, we observe that traffic data is more similar across cities in the frequency domain. Based on this observation, we propose a \textbf{F}requency \textbf{E}nhanced \textbf{P}re-training Framework for \textbf{Cross}-city Few-shot Forecasting (\textbf{FEPCross}). FEPCross has a pre-training stage and a fine-tuning stage. In the pre-training stage, we propose a novel Cross-Domain Spatial-Temporal Encoder that incorporates information from the time and frequency domains and train it with self-supervised tasks encompassing reconstruction and contrastive objectives. In the fine-tuning stage, we design modules to enrich training samples and maintain a momentum-updated graph structure, thereby mitigating the risk of overfitting to the few-shot training data. Empirical evaluations performed on real-world traffic datasets validate the exceptional efficacy of FEPCross, outperforming existing approaches of diverse categories and demonstrating characteristics that foster the progress of cross-city few-shot forecasting.
[ "['Zhanyu Liu' 'Jianrong Ding' 'Guanjie Zheng']" ]

id: 2406.02615
link: http://arxiv.org/pdf/2406.02615v1
updated: 2024-06-03T08:51:25Z
published: 2024-06-03T08:51:25Z
title: A hybrid numerical methodology coupling Reduced Order Modeling and Graph Neural Networks for non-parametric geometries: applications to structural dynamics problems
abstract:
This work introduces a new approach for accelerating the numerical analysis of time-domain partial differential equations (PDEs) governing complex physical systems. The methodology is based on a combination of a classical reduced-order modeling (ROM) framework and recently-introduced Graph Neural Networks (GNNs), where the latter is trained on highly heterogeneous databases of varying numerical discretization sizes. The proposed techniques are shown to be particularly suitable for non-parametric geometries, ultimately enabling the treatment of a diverse range of geometries and topologies. Performance studies are presented in an application context related to the design of aircraft seats and their corresponding mechanical responses to shocks, where the main motivation is to reduce the computational burden and enable the rapid design iteration for such problems that entail non-parametric geometries. The methods proposed here are straightforwardly applicable to other scientific or engineering problems requiring a large number of finite element-based numerical simulations, with the potential to significantly enhance efficiency while maintaining reasonable accuracy.
[ "['Victor Matray' 'Faisal Amlani' 'Frédéric Feyel' 'David Néron']" ]

id: 2406.02616
link: http://arxiv.org/pdf/2406.02616v3
updated: 2024-06-08T08:41:32Z
published: 2024-06-03T09:41:42Z
title: Adaptive Layer Splitting for Wireless LLM Inference in Edge Computing: A Model-Based Reinforcement Learning Approach
abstract:
Optimizing the deployment of large language models (LLMs) in edge computing environments is critical for enhancing privacy and computational efficiency. Toward efficient wireless LLM inference in edge computing, this study comprehensively analyzes the impact of different splitting points in mainstream open-source LLMs. On this basis, this study introduces a framework taking inspiration from model-based reinforcement learning (MBRL) to determine the optimal splitting point across the edge and user equipment (UE). By incorporating a reward surrogate model, our approach significantly reduces the computational cost of frequent performance evaluations. Extensive simulations demonstrate that this method effectively balances inference performance and computational load under varying network conditions, providing a robust solution for LLM deployment in decentralized settings.
[ "['Yuxuan Chen' 'Rongpeng Li' 'Xiaoxue Yu' 'Zhifeng Zhao' 'Honggang Zhang']" ]

id: 2406.02619
link: http://arxiv.org/pdf/2406.02619v1
updated: 2024-06-03T17:55:41Z
published: 2024-06-03T17:55:41Z
title: Unelicitable Backdoors in Language Models via Cryptographic Transformer Circuits
abstract:
The rapid proliferation of open-source language models significantly increases the risks of downstream backdoor attacks. These backdoors can introduce dangerous behaviours during model deployment and can evade detection by conventional cybersecurity monitoring systems. In this paper, we introduce a novel class of backdoors in autoregressive transformer models that, in contrast to prior art, are unelicitable in nature. Unelicitability prevents the defender from triggering the backdoor, making it impossible to evaluate or detect ahead of deployment even if given full white-box access and using automated techniques, such as red-teaming or certain formal verification methods. We show that our novel construction is not only unelicitable thanks to using cryptographic techniques, but also has favourable robustness properties. We confirm these properties in empirical investigations, and provide evidence that our backdoors can withstand state-of-the-art mitigation strategies. Additionally, we expand on previous work by showing that our universal backdoors, while not completely undetectable in white-box settings, can be harder to detect than some existing designs. By demonstrating the feasibility of seamlessly integrating backdoors into transformer models, this paper fundamentally questions the efficacy of pre-deployment detection strategies. This offers new insights into the offence-defence balance in AI safety and security.
[ "['Andis Draguns' 'Andrew Gritsevskiy' 'Sumeet Ramesh Motwani'\n 'Charlie Rogers-Smith' 'Jeffrey Ladish' 'Christian Schroeder de Witt']" ]

id: 2406.02625
link: http://arxiv.org/pdf/2406.02625v1
updated: 2024-06-03T21:48:57Z
published: 2024-06-03T21:48:57Z
title: Progressive Inference: Explaining Decoder-Only Sequence Classification Models Using Intermediate Predictions
abstract:
This paper proposes Progressive Inference - a framework to compute input attributions to explain the predictions of decoder-only sequence classification models. Our work is based on the insight that the classification head of a decoder-only Transformer model can be used to make intermediate predictions by evaluating them at different points in the input sequence. Due to the causal attention mechanism, these intermediate predictions only depend on the tokens seen before the inference point, allowing us to obtain the model's prediction on a masked input sub-sequence, with negligible computational overheads. We develop two methods to provide sub-sequence level attributions using this insight. First, we propose Single Pass-Progressive Inference (SP-PI), which computes attributions by taking the difference between consecutive intermediate predictions. Second, we exploit a connection with Kernel SHAP to develop Multi Pass-Progressive Inference (MP-PI). MP-PI uses intermediate predictions from multiple masked versions of the input to compute higher quality attributions. Our studies on a diverse set of models trained on text classification tasks show that SP-PI and MP-PI provide significantly better attributions compared to prior work.
[ "['Sanjay Kariyappa' 'Freddy Lécué' 'Saumitra Mishra' 'Christopher Pond'\n 'Daniele Magazzeni' 'Manuela Veloso']" ]

id: 2406.02628
link: http://arxiv.org/pdf/2406.02628v1
updated: 2024-06-04T00:06:42Z
published: 2024-06-04T00:06:42Z
title: Replicability in High Dimensional Statistics
abstract:
The replicability crisis is a major issue across nearly all areas of empirical science, calling for the formal study of replicability in statistics. Motivated in this context, [Impagliazzo, Lei, Pitassi, and Sorrell STOC 2022] introduced the notion of replicable learning algorithms, and gave basic procedures for $1$-dimensional tasks including statistical queries. In this work, we study the computational and statistical cost of replicability for several fundamental high dimensional statistical tasks, including multi-hypothesis testing and mean estimation. Our main contribution establishes a computational and statistical equivalence between optimal replicable algorithms and high dimensional isoperimetric tilings. As a consequence, we obtain matching sample complexity upper and lower bounds for replicable mean estimation of distributions with bounded covariance, resolving an open problem of [Bun, Gaboardi, Hopkins, Impagliazzo, Lei, Pitassi, Sivakumar, and Sorrell, STOC 2023] and for the $N$-Coin Problem, resolving a problem of [Karbasi, Velegkas, Yang, and Zhou, NeurIPS 2023] up to log factors. While our equivalence is computational, allowing us to shave log factors in sample complexity from the best known efficient algorithms, efficient isoperimetric tilings are not known. To circumvent this, we introduce several relaxed paradigms that do allow for sample and computationally efficient algorithms, including allowing pre-processing, adaptivity, and approximate replicability. In these cases we give efficient algorithms matching or beating the best known sample complexity for mean estimation and the coin problem, including a generic procedure that reduces the standard quadratic overhead of replicability to linear in expectation.
[ "['Max Hopkins' 'Russell Impagliazzo' 'Daniel Kane' 'Sihan Liu'\n 'Christopher Ye']" ]

id: 2406.02629
link: http://arxiv.org/pdf/2406.02629v1
updated: 2024-06-04T00:55:06Z
published: 2024-06-04T00:55:06Z
title: SSNet: A Lightweight Multi-Party Computation Scheme for Practical Privacy-Preserving Machine Learning Service in the Cloud
abstract:
As privacy-preserving becomes a pivotal aspect of deep learning (DL) development, multi-party computation (MPC) has gained prominence for its efficiency and strong security. However, the practicality of current MPC frameworks is limited, especially when dealing with large neural networks, exemplified by the prolonged execution time of 25.8 seconds for secure inference on ResNet-152. The primary challenge lies in the reliance of current MPC approaches on additive secret sharing, which incurs significant communication overhead with non-linear operations such as comparisons. Furthermore, additive sharing scales poorly with the number of parties. In contrast, the evolving landscape of MPC necessitates accommodating a larger number of compute parties and ensuring robust performance against malicious activities or computational failures. In light of these challenges, we propose SSNet, which, for the first time, employs Shamir's secret sharing (SSS) as the backbone of an MPC-based ML framework. We meticulously develop all framework primitives and operations for secure DL models tailored to seamlessly integrate with the SSS scheme. SSNet demonstrates the ability to scale up party numbers straightforwardly and embeds strategies to authenticate the computation correctness without incurring significant performance overhead. Additionally, SSNet introduces masking strategies designed to reduce communication overhead associated with non-linear operations. We conduct comprehensive experimental evaluations on commercial cloud computing infrastructure from Amazon AWS, as well as across diverse prevalent DNN models and datasets. SSNet demonstrates a substantial performance boost, achieving speed-ups ranging from 3x to 14x compared to SOTA MPC frameworks. Moreover, SSNet also represents the first framework that is evaluated on a five-party computation setup, in the context of secure DL inference.
[ "['Shijin Duan' 'Chenghong Wang' 'Hongwu Peng' 'Yukui Luo' 'Wujie Wen'\n 'Caiwen Ding' 'Xiaolin Xu']" ]

id: 2406.02632
link: http://arxiv.org/pdf/2406.02632v1
updated: 2024-06-04T03:22:52Z
published: 2024-06-04T03:22:52Z
title: Redefining DDoS Attack Detection Using A Dual-Space Prototypical Network-Based Approach
abstract:
Distributed Denial of Service (DDoS) attacks pose an increasingly substantial cybersecurity threat to organizations across the globe. In this paper, we introduce a new deep learning-based technique for detecting DDoS attacks, a paramount cybersecurity challenge with evolving complexity and scale. Specifically, we propose a new dual-space prototypical network that leverages a unique dual-space loss function to enhance detection accuracy for various attack patterns through geometric and angular similarity measures. This approach capitalizes on the strengths of representation learning within the latent space (a lower-dimensional representation of data that captures complex patterns for machine learning analysis), improving the model's adaptability and sensitivity towards varying DDoS attack vectors. Our comprehensive evaluation spans multiple training environments, including offline training, simulated online training, and prototypical network scenarios, to validate the model's robustness under diverse data abundance and scarcity conditions. The Multilayer Perceptron (MLP) with Attention, trained with our dual-space prototypical design over a reduced training set, achieves an average accuracy of 94.85% and an F1-Score of 94.71% across our tests, showcasing its effectiveness in dynamic and constrained real-world scenarios.
[ "['Fernando Martinez' 'Mariyam Mapkar' 'Ali Alfatemi' 'Mohamed Rahouti'\n 'Yufeng Xin' 'Kaiqi Xiong' 'Nasir Ghani']" ]

id: 2406.02633
link: http://arxiv.org/pdf/2406.02633v1
updated: 2024-06-04T04:03:17Z
published: 2024-06-04T04:03:17Z
title: Edit Distance Robust Watermarks for Language Models
abstract:
Motivated by the problem of detecting AI-generated text, we consider the problem of watermarking the output of language models with provable guarantees. We aim for watermarks which satisfy: (a) undetectability, a cryptographic notion introduced by Christ, Gunn & Zamir (2024) which stipulates that it is computationally hard to distinguish watermarked language model outputs from the model's actual output distribution; and (b) robustness to channels which introduce a constant fraction of adversarial insertions, substitutions, and deletions to the watermarked text. Earlier schemes could only handle stochastic substitutions and deletions, and thus we are aiming for a more natural and appealing robustness guarantee that holds with respect to edit distance. Our main result is a watermarking scheme which achieves both undetectability and robustness to edits when the alphabet size for the language model is allowed to grow as a polynomial in the security parameter. To derive such a scheme, we follow an approach introduced by Christ & Gunn (2024), which proceeds via first constructing pseudorandom codes satisfying undetectability and robustness properties analogous to those above; our key idea is to handle adversarial insertions and deletions by interpreting the symbols as indices into the codeword, which we call indexing pseudorandom codes. Additionally, our codes rely on weaker computational assumptions than used in previous work. Then we show that there is a generic transformation from such codes over large alphabets to watermarking schemes for arbitrary language models.
[ "['Noah Golowich' 'Ankur Moitra']" ]

id: 2406.02635
link: http://arxiv.org/pdf/2406.02635v2
updated: 2024-06-13T03:08:23Z
published: 2024-06-04T05:36:29Z
title: Evidentially Calibrated Source-Free Time-Series Domain Adaptation with Temporal Imputation
abstract:
Source-free domain adaptation (SFDA) aims to adapt a model pre-trained on a labeled source domain to an unlabeled target domain without access to source data, preserving the source domain's privacy. While SFDA is prevalent in computer vision, it remains largely unexplored in time series analysis. Existing SFDA methods, designed for visual data, struggle to capture the inherent temporal dynamics of time series, hindering adaptation performance. This paper proposes MAsk And imPUte (MAPU), a novel and effective approach for time series SFDA. MAPU addresses the critical challenge of temporal consistency by introducing a novel temporal imputation task. This task involves randomly masking time series signals and leveraging a dedicated temporal imputer to recover the original signal within the learned embedding space, bypassing the complexities of noisy raw data. Notably, MAPU is the first method to explicitly address temporal consistency in the context of time series SFDA. Additionally, it offers seamless integration with existing SFDA methods, providing greater flexibility. We further introduce E-MAPU, which incorporates evidential uncertainty estimation to address the overconfidence issue inherent in softmax predictions. To achieve that, we leverage evidential deep learning to obtain a better-calibrated pre-trained model and adapt the target encoder to map out-of-support target samples to a new feature representation closer to the source domain's support. This fosters better alignment, ultimately enhancing adaptation performance. Extensive experiments on five real-world time series datasets demonstrate that both MAPU and E-MAPU achieve significant performance gains compared to existing methods. These results highlight the effectiveness of our proposed approaches for tackling various time series domain adaptation problems.
[ "['Mohamed Ragab' 'Peiliang Gong' 'Emadeldeen Eldele' 'Wenyu Zhang'\n 'Min Wu' 'Chuan-Sheng Foo' 'Daoqiang Zhang' 'Xiaoli Li' 'Zhenghua Chen']" ]

id: 2406.02638
link: http://arxiv.org/pdf/2406.02638v2
updated: 2024-06-10T17:22:33Z
published: 2024-06-04T09:07:58Z
title: EchoMamba4Rec: Harmonizing Bidirectional State Space Models with Spectral Filtering for Advanced Sequential Recommendation
abstract:
Predicting user preferences and sequential dependencies based on historical behavior is the core goal of sequential recommendation. Although attention-based models have shown effectiveness in this field, they often struggle with inference inefficiency due to the quadratic computational complexity inherent in attention mechanisms, especially with long-range behavior sequences. Drawing inspiration from the recent advancements of state space models (SSMs) in control theory, which provide a robust framework for modeling and controlling dynamic systems, we introduce EchoMamba4Rec. Control theory emphasizes the use of SSMs for managing long-range dependencies and maintaining inferential efficiency through structured state matrices. EchoMamba4Rec leverages these control relationships in sequential recommendation and integrates bi-directional processing with frequency-domain filtering to capture complex patterns and dependencies in user interaction data more effectively. Our model benefits from the ability of state space models (SSMs) to learn and perform parallel computations, significantly enhancing computational efficiency and scalability. It features a bi-directional Mamba module that incorporates both forward and reverse Mamba components, leveraging information from both past and future interactions. Additionally, a filter layer operates in the frequency domain using learnable Fast Fourier Transform (FFT) and learnable filters, followed by an inverse FFT to refine item embeddings and reduce noise. We also integrate Gated Linear Units (GLU) to dynamically control information flow, enhancing the model's expressiveness and training stability. Experimental results demonstrate that EchoMamba significantly outperforms existing models, providing more accurate and personalized recommendations.
[ "['Yuda Wang' 'Xuxin He' 'Shengxin Zhu']" ]

id: 2406.02642
link: http://arxiv.org/pdf/2406.02642v1
updated: 2024-06-04T10:59:43Z
published: 2024-06-04T10:59:43Z
title: E-ICL: Enhancing Fine-Grained Emotion Recognition through the Lens of Prototype Theory
abstract:
In-context learning (ICL) achieves remarkable performance in various domains such as knowledge acquisition, commonsense reasoning, and semantic understanding. However, its performance significantly deteriorates for emotion detection tasks, especially fine-grained emotion recognition. The underlying reasons for this remain unclear. In this paper, we identify the reasons behind ICL's poor performance from the perspective of prototype theory and propose a method to address this issue. Specifically, we conduct extensive pilot experiments and find that ICL conforms to the prototype theory on fine-grained emotion recognition. Based on this theory, we uncover the following deficiencies in ICL: (1) It relies on prototypes (example-label pairs) that are semantically similar but emotionally inaccurate to predict emotions. (2) It is prone to interference from irrelevant categories, affecting the accuracy and robustness of the predictions. To address these issues, we propose an Emotion Context Learning method (E-ICL) on fine-grained emotion recognition. E-ICL relies on more emotionally accurate prototypes to predict categories by referring to emotionally similar examples with dynamic labels. Simultaneously, E-ICL employs an exclusionary emotion prediction strategy to avoid interference from irrelevant categories, thereby increasing its accuracy and robustness. Note that the entire process is accomplished with the assistance of a plug-and-play emotion auxiliary model, without additional training. Experiments on the fine-grained emotion datasets EDOS, Empathetic-Dialogues, EmpatheticIntent, and GoEmotions show that E-ICL achieves superior emotion prediction performance. Furthermore, even when the emotion auxiliary model used is smaller than 10% of the size of the LLMs, E-ICL can still boost the performance of LLMs by over 4% on multiple datasets.
[ "['Zhou Yang' 'Zhaochun Ren' 'Chenglong Ye' 'Yufeng Wang' 'Haizhou Sun'\n 'Chao Chen' 'Xiaofei Zhu' 'Yunbing Wu' 'Xiangwen Liao']" ]

id: 2406.02645
link: http://arxiv.org/pdf/2406.02645v1
updated: 2024-06-04T13:11:49Z
published: 2024-06-04T13:11:49Z
title: Astral: training physics-informed neural networks with error majorants
abstract:
The primal approach to physics-informed learning is residual minimization. We argue that residual is, at best, an indirect measure of the error of the approximate solution and propose to train with an error majorant instead. Since an error majorant provides a direct upper bound on the error, one can reliably estimate how close the PiNN is to the exact solution and stop the optimization process when the desired accuracy is reached. We call the loss function associated with the error majorant $\textbf{Astral}$: neur$\textbf{A}$l a po$\textbf{ST}$erio$\textbf{RI}$ function$\textbf{A}$l Loss. To compare Astral and residual loss functions, we illustrate how error majorants can be derived for various PDEs and conduct experiments with diffusion equations (including anisotropic and in the L-shaped domain), the convection-diffusion equation, temporal discretization of Maxwell's equation, and a magnetostatics problem. The results indicate that the Astral loss is competitive with the residual loss, typically leading to faster convergence and lower error (e.g., for Maxwell's equations, we observe an order of magnitude better relative error and training time). We also report that the error estimate obtained with the Astral loss is usually tight enough to be informative, e.g., for a highly anisotropic equation, on average, Astral overestimates error by a factor of $1.5$, and for convection-diffusion by a factor of $1.7$.
[ "['Vladimir Fanaskov' 'Tianchi Yu' 'Alexander Rudikov' 'Ivan Oseledets']" ]

id: 2406.02648
link: http://arxiv.org/pdf/2406.02648v1
updated: 2024-06-04T14:16:52Z
published: 2024-06-04T14:16:52Z
title: Exploring Effects of Hyperdimensional Vectors for Tsetlin Machines
abstract:
Tsetlin machines (TMs) have been successful in several application domains, operating with high efficiency on Boolean representations of the input data. However, Booleanizing complex data structures such as sequences, graphs, images, signal spectra, chemical compounds, and natural language is not trivial. In this paper, we propose a hypervector (HV) based method for expressing arbitrarily large sets of concepts associated with any input data. Using a hyperdimensional space to build vectors drastically expands the capacity and flexibility of the TM. We demonstrate how images, chemical compounds, and natural language text are encoded according to the proposed method, and how the resulting HV-powered TM can achieve significantly higher accuracy and faster learning on well-known benchmarks. Our results open up a new research direction for TMs, namely how to expand and exploit the benefits of operating in hyperspace, including new booleanization strategies, optimization of TM inference and learning, as well as new TM applications.
[ "['Vojtech Halenka' 'Ahmed K. Kadhim' 'Paul F. A. Clarke' 'Bimal Bhattarai'\n 'Rupsa Saha' 'Ole-Christoffer Granmo' 'Lei Jiao' 'Per-Arne Andersen']" ]

id: 2406.02649
link: http://arxiv.org/pdf/2406.02649v1
updated: 2024-06-04T14:20:38Z
published: 2024-06-04T14:20:38Z
title: Keyword-Guided Adaptation of Automatic Speech Recognition
abstract:
Automatic Speech Recognition (ASR) technology has made significant progress in recent years, providing accurate transcription across various domains. However, some challenges remain, especially in noisy environments and specialized jargon. In this paper, we propose a novel approach for improved jargon word recognition by contextual biasing Whisper-based models. We employ a keyword spotting model that leverages the Whisper encoder representation to dynamically generate prompts for guiding the decoder during the transcription process. We introduce two approaches to effectively steer the decoder towards these prompts: KG-Whisper, which is aimed at fine-tuning the Whisper decoder, and KG-Whisper-PT, which learns a prompt prefix. Our results show a significant improvement in the recognition accuracy of specified keywords and in reducing the overall word error rates. Specifically, in unseen language generalization, we demonstrate an average WER improvement of 5.1% over Whisper.
[ "['Aviv Shamsian' 'Aviv Navon' 'Neta Glazer' 'Gill Hetz' 'Joseph Keshet']" ]

id: 2406.02650
link: http://arxiv.org/pdf/2406.02650v1
updated: 2024-06-04T15:35:08Z
published: 2024-06-04T15:35:08Z
title: By Fair Means or Foul: Quantifying Collusion in a Market Simulation with Deep Reinforcement Learning
abstract:
In the rapidly evolving landscape of eCommerce, Artificial Intelligence (AI) based pricing algorithms, particularly those utilizing Reinforcement Learning (RL), are becoming increasingly prevalent. This rise has led to an inextricable pricing situation with the potential for market collusion. Our research employs an experimental oligopoly model of repeated price competition, systematically varying the environment to cover scenarios from basic economic theory to subjective consumer demand preferences. We also introduce a novel demand framework that enables the implementation of various demand models, allowing for a weighted blending of different models. In contrast to existing research in this domain, we aim to investigate the strategies and emerging pricing patterns developed by the agents, which may lead to a collusive outcome. Furthermore, we investigate a scenario where agents cannot observe their competitors' prices. Finally, we provide a comprehensive legal analysis across all scenarios. Our findings indicate that RL-based AI agents converge to a collusive state characterized by the charging of supracompetitive prices, without necessarily requiring inter-agent communication. Implementing alternative RL algorithms, altering the number of agents or simulation settings, and restricting the scope of the agents' observation space does not significantly impact the collusive market outcome behavior.
[ "['Michael Schlechtinger' 'Damaris Kosack' 'Franz Krause' 'Heiko Paulheim']" ]

id: 2406.02651
link: http://arxiv.org/pdf/2406.02651v1
updated: 2024-06-04T15:39:41Z
published: 2024-06-04T15:39:41Z
title: RoutePlacer: An End-to-End Routability-Aware Placer with Graph Neural Network
abstract:
Placement is a critical and challenging step of modern chip design, with routability being an essential indicator of placement quality. Current routability-oriented placers typically apply an iterative two-stage approach, wherein the first stage generates a placement solution, and the second stage provides non-differentiable routing results to heuristically improve the solution quality. This method hinders jointly optimizing the routability aspect during placement. To address this problem, this work introduces RoutePlacer, an end-to-end routability-aware placement method. It trains RouteGNN, a customized graph neural network, to efficiently and accurately predict routability by capturing and fusing geometric and topological representations of placements. Well-trained RouteGNN then serves as a differentiable approximation of routability, enabling end-to-end gradient-based routability optimization. In addition, RouteGNN can improve two-stage placers as a plug-and-play alternative to external routers. Our experiments on DREAMPlace, an open-source AI4EDA platform, show that RoutePlacer can reduce Total Overflow by up to 16% while maintaining routed wirelength, compared to the state-of-the-art; integrating RouteGNN within two-stage placers leads to a 44% reduction in Total Overflow without compromising wirelength.
[ "['Yunbo Hou' 'Haoran Ye' 'Yingxue Zhang' 'Siyuan Xu' 'Guojie Song']" ]

id: 2406.02652
link: http://arxiv.org/pdf/2406.02652v1
updated: 2024-06-04T16:14:19Z
published: 2024-06-04T16:14:19Z
title: RepCNN: Micro-sized, Mighty Models for Wakeword Detection
abstract:
Always-on machine learning models require a very low memory and compute footprint. Their restricted parameter count limits the model's capacity to learn, and the effectiveness of the usual training algorithms to find the best parameters. Here we show that a small convolutional model can be better trained by first refactoring its computation into a larger redundant multi-branched architecture. Then, for inference, we algebraically re-parameterize the trained model into the single-branched form with fewer parameters for a lower memory footprint and compute cost. Using this technique, we show that our always-on wake-word detector model, RepCNN, provides a good trade-off between latency and accuracy during inference. RepCNN re-parameterized models are 43% more accurate than a uni-branch convolutional model while having the same runtime. RepCNN also meets the accuracy of complex architectures like BC-ResNet, while having 2x lower peak memory usage and 10x faster runtime.
[ "['Arnav Kundu' 'Prateeth Nayak' 'Hywel Richards' 'Priyanka Padmanabhan'\n 'Devang Naik']" ]

id: 2406.02653
link: http://arxiv.org/pdf/2406.02653v1
updated: 2024-06-04T16:38:11Z
published: 2024-06-04T16:38:11Z
title: Pancreatic Tumor Segmentation as Anomaly Detection in CT Images Using Denoising Diffusion Models
abstract:
Despite the advances in medicine, cancer has remained a formidable challenge. Particularly in the case of pancreatic tumors, characterized by their diversity and late diagnosis, early detection poses a significant challenge crucial for effective treatment. The advancement of deep learning techniques, particularly supervised algorithms, has significantly propelled pancreatic tumor detection in the medical field. However, supervised deep learning approaches necessitate extensive labeled medical images for training, yet acquiring such annotations is both limited and costly. Conversely, weakly supervised anomaly detection methods, requiring only image-level annotations, have garnered interest. Existing methodologies predominantly hinge on generative adversarial networks (GANs) or autoencoder models, which can be complex to train and may face difficulties in accurately preserving fine image details. This research presents a novel approach to pancreatic tumor detection, employing weak supervision anomaly detection through denoising diffusion algorithms. By incorporating a deterministic iterative process of adding and removing noise along with classifier guidance, the method enables seamless translation of images between diseased and healthy subjects, resulting in detailed anomaly maps without requiring complex training protocols and segmentation masks. This study explores denoising diffusion models as a recent advancement over traditional generative models like GANs, contributing to the field of pancreatic tumor detection. Recognizing the low survival rates of pancreatic cancer, this study emphasizes the need for continued research to leverage diffusion models' efficiency in medical segmentation tasks.
[ "['Reza Babaei' 'Samuel Cheng' 'Theresa Thai' 'Shangqing Zhao']" ]

id: 2406.02654
link: http://arxiv.org/pdf/2406.02654v2
updated: 2024-07-06T02:06:49Z
published: 2024-06-04T16:39:02Z
title: kNN Classification of Malware Data Dependency Graph Features
abstract:
Explainability of classification results is dependent upon the features used for classification. Data dependency graph features representing data movement are directly correlated with operational semantics and are subject to fine-grained analysis. This study obtains accurate classification from the use of features tied to structure and semantics. By training an accurate model using labeled data, this feature representation of semantics is shown to be correlated with ground truth labels. This was performed using non-parametric learning with a novel feature representation on a large-scale dataset, the Kaggle 2015 Malware dataset. The features used enable fine-grained analysis, an increase in resolution, and explainable inferences. This allows the body of the term frequency distribution to be further analyzed and provides an increase in feature resolution over term frequency features. This method obtains high accuracy from the analysis of a single instruction, a method that can be repeated for additional instructions to obtain further increases in accuracy. This study evaluates the hypothesis that the semantic representation and analysis of structure are able to make accurate predictions and are also correlated with ground truth labels. Additionally, similarity in the metric space can be calculated directly without prior training. Our results provide evidence that data dependency graphs accurately capture both semantic and structural information for increased explainability in classification results.
[ "['John Musgrave' 'Anca Ralescu']" ]

id: 2406.02657
link: http://arxiv.org/pdf/2406.02657v1
updated: 2024-06-04T17:45:26Z
published: 2024-06-04T17:45:26Z
title: Block Transformer: Global-to-Local Language Modeling for Fast Inference
abstract:
This paper presents the Block Transformer architecture which adopts hierarchical global-to-local modeling to autoregressive transformers to mitigate the inference bottlenecks of self-attention. To apply self-attention, the key-value (KV) cache of all previous sequences must be retrieved from memory at every decoding step. Thereby, this KV cache IO becomes a significant bottleneck in batch inference. We notice that these costs stem from applying self-attention on the global context, therefore we isolate the expensive bottlenecks of global modeling to lower layers and apply fast local modeling in upper layers. To mitigate the remaining costs in the lower layers, we aggregate input tokens into fixed size blocks and then apply self-attention at this coarse level. Context information is aggregated into a single embedding to enable upper layers to decode the next block of tokens, without global attention. Free of global attention bottlenecks, the upper layers can fully utilize the compute hardware to maximize inference throughput. By leveraging global and local modules, the Block Transformer architecture demonstrates 10-20x gains in inference throughput compared to vanilla transformers with equivalent perplexity. Our work introduces a new approach to optimize language model inference through novel application of global-to-local modeling. Code is available at https://github.com/itsnamgyu/block-transformer.
[ "['Namgyu Ho' 'Sangmin Bae' 'Taehyeon Kim' 'Hyunjik Jo' 'Yireun Kim'\n 'Tal Schuster' 'Adam Fisch' 'James Thorne' 'Se-Young Yun']" ]

id: 2406.02663
link: http://arxiv.org/pdf/2406.02663v1
updated: 2024-06-04T18:00:00Z
published: 2024-06-04T18:00:00Z
title: Symmetric Kernels with Non-Symmetric Data: A Data-Agnostic Learnability Bound
abstract:
Kernel ridge regression (KRR) and Gaussian processes (GPs) are fundamental tools in statistics and machine learning with recent applications to highly over-parameterized deep neural networks. The ability of these tools to learn a target function is directly related to the eigenvalues of their kernel sampled on the input data. Targets having support on higher eigenvalues are more learnable. While kernels are often highly symmetric objects, the data is often not. Thus kernel symmetry seems to have little to no bearing on the above eigenvalues or learnability, making spectral analysis on real-world data challenging. Here, we show that contrary to this common lore, one may use eigenvalues and eigenfunctions associated with highly idealized data-measures to bound learnability on realistic data. As a demonstration, we give a theoretical lower bound on the sample complexity of copying heads for kernels associated with generic transformers acting on natural language.
[ "['Itay Lavie' 'Zohar Ringel']" ]

id: 2406.02696
link: http://arxiv.org/pdf/2406.02696v1
updated: 2024-06-04T18:15:44Z
published: 2024-06-04T18:15:44Z
title: iQRL -- Implicitly Quantized Representations for Sample-efficient Reinforcement Learning
abstract:
Learning representations for reinforcement learning (RL) has shown much promise for continuous control. We propose an efficient representation learning method using only a self-supervised latent-state consistency loss. Our approach employs an encoder and a dynamics model to map observations to latent states and predict future latent states, respectively. We achieve high performance and prevent representation collapse by quantizing the latent representation such that the rank of the representation is empirically preserved. Our method, named iQRL: implicitly Quantized Reinforcement Learning, is straightforward, compatible with any model-free RL algorithm, and demonstrates excellent performance by outperforming other recently proposed representation learning methods in continuous control benchmarks from DeepMind Control Suite.
[ "['Aidan Scannell' 'Kalle Kujanpää' 'Yi Zhao' 'Mohammadreza Nakhaei'\n 'Arno Solin' 'Joni Pajarinen']" ]

id: 2406.02699
link: http://arxiv.org/pdf/2406.02699v1
updated: 2024-06-04T18:25:15Z
published: 2024-06-04T18:25:15Z
title: Operational Latent Spaces
abstract:
We investigate the construction of latent spaces through self-supervised learning to support semantically meaningful operations. Analogous to operational amplifiers, these "operational latent spaces" (OpLaS) not only demonstrate semantic structure such as clustering but also support common transformational operations with inherent semantic meaning. Some operational latent spaces are found to have arisen "unintentionally" in the progress toward some (other) self-supervised learning objective, in which unintended but still useful properties are discovered among the relationships of points in the space. Other spaces may be constructed "intentionally" by developers stipulating certain kinds of clustering or transformations intended to produce the desired structure. We focus on the intentional creation of operational latent spaces via self-supervised learning, including the introduction of rotation operators via a novel "FiLMR" layer, which can be used to enable ring-like symmetries found in some musical constructions.
[ "['Scott H. Hawley' 'Austin R. Tackett']" ]

id: 2406.02706
link: http://arxiv.org/pdf/2406.02706v1
updated: 2024-06-04T18:36:11Z
published: 2024-06-04T18:36:11Z
title: Window to Wall Ratio Detection using SegFormer
abstract:
Window to Wall Ratios (WWR) are key to assessing the energy, daylight and ventilation performance of buildings. Studies have shown that window area has a large impact on building performance and simulation. However, data to set up these environmental models and simulations is typically not available. Instead, a standard 40% WWR is typically assumed for all buildings. This paper leverages existing computer vision window detection methods to predict WWR of buildings from external street view images using semantic segmentation, demonstrating the potential for adapting established computer vision techniques in architectural applications.
[ "['Zoe De Simone' 'Sayandeep Biswas' 'Oscar Wu']" ]

id: 2406.02711
link: http://arxiv.org/pdf/2406.02711v1
updated: 2024-06-04T18:54:10Z
published: 2024-06-04T18:54:10Z
title: Self-Trained Model for ECG Complex Delineation
abstract:
Electrocardiogram (ECG) delineation plays a crucial role in assisting cardiologists with accurate diagnoses. Prior research studies have explored various methods, including the application of deep learning techniques, to achieve precise delineation. However, existing approaches face limitations primarily related to dataset size and robustness. In this paper, we introduce a dataset for ECG delineation and propose a novel self-trained method aimed at leveraging a vast amount of unlabeled ECG data. Our approach involves the pseudolabeling of unlabeled data using a neural network trained on our dataset. Subsequently, we train the model on the newly labeled samples to enhance the quality of delineation. We conduct experiments demonstrating that our dataset is a valuable resource for training robust models and that our proposed self-trained method improves the prediction quality of ECG delineation.
[ "['Aram Avetisyan' 'Nikolas Khachaturov' 'Ariana Asatryan'\n 'Shahane Tigranyan' 'Yury Markin']" ]

id: 2406.02716
link: http://arxiv.org/pdf/2406.02716v1
updated: 2024-06-04T18:59:42Z
published: 2024-06-04T18:59:42Z
title: Optimal Rates for DP-SCO with a Single Epoch and Large Batches
abstract:
The most common algorithms for differentially private (DP) machine learning (ML) are all based on stochastic gradient descent, for example, DP-SGD. These algorithms achieve DP by treating each gradient as an independent private query. However, this independence can cause us to overpay in privacy loss because we don't analyze the entire gradient trajectory. In this work, we propose a new DP algorithm, which we call Accelerated-DP-SRGD (DP stochastic recursive gradient descent), that enables us to break this independence and only pay for privacy in the gradient difference, i.e., in the new information at the current step. Our algorithm achieves the optimal DP-stochastic convex optimization (DP-SCO) error (up to polylog factors) using only a single epoch over the dataset, and converges at Nesterov's accelerated rate. Our algorithm can be run in at most $\sqrt{n}$ batch gradient steps with batch size at least $\sqrt{n}$, unlike prior work which required $O(n)$ queries with mostly constant batch sizes. To achieve this, our algorithm combines three key ingredients: a variant of stochastic recursive gradients (SRG), accelerated gradient descent, and correlated noise generation from DP continual counting. Finally, we also show that our algorithm improves over existing SoTA on multi-class logistic regression on MNIST and CIFAR-10.
[ "['Christopher A. Choquette-Choo' 'Arun Ganesh' 'Abhradeep Thakurta']" ]

id: 2406.02726
link: http://arxiv.org/pdf/2406.02726v1
updated: 2024-06-04T19:08:40Z
published: 2024-06-04T19:08:40Z
title: Temporal Graph Learning Recurrent Neural Network for Traffic Forecasting
abstract:
Accurate traffic flow forecasting is a crucial research topic in transportation management. However, it is a challenging problem due to rapidly changing traffic conditions, high nonlinearity of traffic flow, and complex spatial and temporal correlations of road networks. Most existing studies either try to capture the spatial dependencies between roads using the same semantic graph over different time steps, or assume all sensors on the roads are equally likely to be connected regardless of the distance between them. However, we observe that the spatial dependencies between roads indeed change over time, and two distant roads are not likely to be helpful to each other when predicting the traffic flow, both of which limit the performance of existing studies. In this paper, we propose Temporal Graph Learning Recurrent Neural Network (TGLRN) to address these problems. More precisely, to effectively model the nature of time series, we leverage Recurrent Neural Networks (RNNs) to dynamically construct a graph at each time step, thereby capturing the time-evolving spatial dependencies between roads (i.e., microscopic view). Simultaneously, we provide the Adaptive Structure Information to the model, ensuring that close and consecutive sensors are considered to be more important for predicting the traffic flow (i.e., macroscopic view). Furthermore, to endow TGLRN with robustness, we introduce an edge sampling strategy when constructing the graph at each time step, which eventually leads to further improvements on the model performance. Experimental results on four commonly used real-world benchmark datasets show the effectiveness of TGLRN.
[ "['Sanghyun Lee' 'Chanyoung Park']" ]

id: 2406.02732
link: http://arxiv.org/pdf/2406.02732v1
updated: 2024-06-04T19:18:05Z
published: 2024-06-04T19:18:05Z
title: GEFL: Extended Filtration Learning for Graph Classification
abstract:
Extended persistence is a technique from topological data analysis to obtain global multiscale topological information from a graph. This includes information about connected components and cycles that are captured by the so-called persistence barcodes. We introduce extended persistence into a supervised learning framework for graph classification. Global topological information, in the form of a barcode with four different types of bars and their explicit cycle representatives, is combined into the model by the readout function which is computed by extended persistence. The entire model is end-to-end differentiable. We use a link-cut tree data structure and parallelism to lower the complexity of computing extended persistence, obtaining a speedup of more than 60x over the state-of-the-art for extended persistence computation. This makes extended persistence feasible for machine learning. We show that, under certain conditions, extended persistence surpasses both the WL[1] graph isomorphism test and 0-dimensional barcodes in terms of expressivity because it adds more global (topological) information. In particular, arbitrarily long cycles can be represented, which is difficult for finite receptive field message passing graph neural networks. Furthermore, we show the effectiveness of our method on real world datasets compared to many existing recent graph representation learning methods.
[ "['Simon Zhang' 'Soham Mukherjee' 'Tamal K. Dey']" ]

id: 2406.02736
link: http://arxiv.org/pdf/2406.02736v1
updated: 2024-06-04T19:35:44Z
published: 2024-06-04T19:35:44Z
title: Synthetic Data Outliers: Navigating Identity Disclosure
abstract:
Multiple synthetic data generation models have emerged, among which deep learning models have become the vanguard due to their ability to capture the underlying characteristics of the original data. However, the resemblance of the synthetic to the original data raises important questions on the protection of individuals' privacy. As synthetic data is perceived as a means to fully protect personal information, most current related work disregards the impact of re-identification risk. In particular, limited attention has been given to exploring outliers, despite their privacy relevance. In this work, we analyze the privacy of synthetic data w.r.t. outliers. Our main findings suggest that outliers re-identification via linkage attack is feasible and easily achieved. Furthermore, additional safeguards such as differential privacy can prevent re-identification, albeit at the expense of the data utility.
[ "['Carolina Trindade' 'Luís Antunes' 'Tânia Carvalho' 'Nuno Moniz']" ]

id: 2406.02740
link: http://arxiv.org/pdf/2406.02740v1
updated: 2024-06-04T19:42:19Z
published: 2024-06-04T19:42:19Z
title: Long Range Propagation on Continuous-Time Dynamic Graphs
abstract:
Learning Continuous-Time Dynamic Graphs (C-TDGs) requires accurately modeling spatio-temporal information on streams of irregularly sampled events. While many methods have been proposed recently, we find that most message passing-, recurrent- or self-attention-based methods perform poorly on long-range tasks. These tasks require correlating information that occurred "far" away from the current event, either spatially (higher-order node information) or along the time dimension (events occurred in the past). To address long-range dependencies, we introduce Continuous-Time Graph Anti-Symmetric Network (CTAN). Grounded within the ordinary differential equations framework, our method is designed for efficient propagation of information. In this paper, we show how CTAN's (i) long-range modeling capabilities are substantiated by theoretical findings and how (ii) its empirical performance on synthetic long-range benchmarks and real-world benchmarks is superior to other methods. Our results motivate CTAN's ability to propagate long-range information in C-TDGs as well as the inclusion of long-range tasks as part of temporal graph models evaluation.
[ "['Alessio Gravina' 'Giulio Lovisotto' 'Claudio Gallicchio' 'Davide Bacciu'\n 'Claas Grohnfeldt']" ]

id: 2406.02742
link: http://arxiv.org/pdf/2406.02742v1
updated: 2024-06-04T19:50:05Z
published: 2024-06-04T19:50:05Z
title: Tolerant Algorithms for Learning with Arbitrary Covariate Shift
abstract:
We study the problem of learning under arbitrary distribution shift, where the learner is trained on a labeled set from one distribution but evaluated on a different, potentially adversarially generated test distribution. We focus on two frameworks: PQ learning [Goldwasser, A. Kalai, Y. Kalai, Montasser NeurIPS 2020], allowing abstention on adversarially generated parts of the test distribution, and TDS learning [Klivans, Stavropoulos, Vasilyan COLT 2024], permitting abstention on the entire test distribution if distribution shift is detected. All prior known algorithms either rely on learning primitives that are computationally hard even for simple function classes, or end up abstaining entirely even in the presence of a tiny amount of distribution shift. We address both these challenges for natural function classes, including intersections of halfspaces and decision trees, and standard training distributions, including Gaussians. For PQ learning, we give efficient learning algorithms, while for TDS learning, our algorithms can tolerate moderate amounts of distribution shift. At the core of our approach is an improved analysis of spectral outlier-removal techniques from learning with nasty noise. Our analysis can (1) handle arbitrarily large fraction of outliers, which is crucial for handling arbitrary distribution shifts, and (2) obtain stronger bounds on polynomial moments of the distribution after outlier removal, yielding new insights into polynomial regression under distribution shifts. Lastly, our techniques lead to novel results for tolerant testable learning [Rubinfeld and Vasilyan STOC 2023], and learning with nasty noise.
[ "['Surbhi Goel' 'Abhishek Shetty' 'Konstantinos Stavropoulos'\n 'Arsen Vasilyan']" ]

id: 2406.02744
link: http://arxiv.org/pdf/2406.02744v1
updated: 2024-06-04T19:57:47Z
published: 2024-06-04T19:57:47Z
title: DPDR: Gradient Decomposition and Reconstruction for Differentially Private Deep Learning
abstract:
Differentially Private Stochastic Gradient Descent (DP-SGD) is a prominent paradigm for preserving privacy in deep learning. It ensures privacy by perturbing gradients with random noise calibrated to their entire norm at each training step. However, this perturbation suffers from a sub-optimal performance: it repeatedly wastes privacy budget on the general converging direction shared among gradients from different batches, which we refer to as common knowledge, yet yields little information gain. Motivated by this, we propose a differentially private training framework with early gradient decomposition and reconstruction (DPDR), which enables more efficient use of the privacy budget. In essence, it boosts model utility by focusing on incremental information protection and recycling the privatized common knowledge learned from previous gradients at early training steps. Concretely, DPDR incorporates three steps. First, it disentangles common knowledge and incremental information in current gradients by decomposing them based on previous noisy gradients. Second, most privacy budget is spent on protecting incremental information for higher information gain. Third, the model is updated with the gradient reconstructed from recycled common knowledge and noisy incremental information. Theoretical analysis and extensive experiments show that DPDR outperforms state-of-the-art baselines on both convergence rate and accuracy.
[ "['Yixuan Liu' 'Li Xiong' 'Yuhan Liu' 'Yujie Gu' 'Ruixuan Liu' 'Hong Chen']" ]

id: 2406.02745
link: http://arxiv.org/pdf/2406.02745v1
updated: 2024-06-04T20:01:39Z
published: 2024-06-04T20:01:39Z
title: Measuring Stochastic Data Complexity with Boltzmann Influence Functions
abstract:
Estimating the uncertainty of a model's prediction on a test point is a crucial part of ensuring reliability and calibration under distribution shifts. A minimum description length approach to this problem uses the predictive normalized maximum likelihood (pNML) distribution, which considers every possible label for a data point, and decreases confidence in a prediction if other labels are also consistent with the model and training data. In this work we propose IF-COMP, a scalable and efficient approximation of the pNML distribution that linearizes the model with a temperature-scaled Boltzmann influence function. IF-COMP can be used to produce well-calibrated predictions on test points as well as measure complexity in both labelled and unlabelled settings. We experimentally validate IF-COMP on uncertainty calibration, mislabel detection, and OOD detection tasks, where it consistently matches or beats strong baseline methods.
[ "['Nathan Ng' 'Roger Grosse' 'Marzyeh Ghassemi']" ]

id: 2406.02756
link: http://arxiv.org/pdf/2406.02756v1
updated: 2024-06-04T20:21:45Z
published: 2024-06-04T20:21:45Z
title: Aligning Large Language Models via Fine-grained Supervision
abstract:
Pre-trained large-scale language models (LLMs) excel at producing coherent articles, yet their outputs may be untruthful, toxic, or fail to align with user expectations. Current approaches focus on using reinforcement learning with human feedback (RLHF) to improve model alignment, which works by transforming coarse human preferences of LLM outputs into a feedback signal that guides the model learning process. However, because this approach operates on sequence-level feedback, it lacks the precision to identify the exact parts of the output affecting user preferences. To address this gap, we propose a method to enhance LLM alignment through fine-grained token-level supervision. Specifically, we ask annotators to minimally edit less preferred responses within the standard reward modeling dataset to make them more favorable, ensuring changes are made only where necessary while retaining most of the original content. The refined dataset is used to train a token-level reward model, which is then used for training our fine-grained Proximal Policy Optimization (PPO) model. Our experiment results demonstrate that this approach can achieve up to an absolute improvement of $5.1\%$ in LLM performance, in terms of win rate against the reference model, compared with the traditional PPO model.
[ "['Dehong Xu' 'Liang Qiu' 'Minseok Kim' 'Faisal Ladhak' 'Jaeyoung Do']" ]

id: 2406.02761
link: http://arxiv.org/pdf/2406.02761v1
updated: 2024-06-04T20:28:02Z
published: 2024-06-04T20:28:02Z
title: Multi-layer Learnable Attention Mask for Multimodal Tasks
abstract:
While the Self-Attention mechanism in the Transformer model has proven to be effective in many domains, we observe that it is less effective in more diverse settings (e.g. multimodality) due to the varying granularity of each token and the high computational demands of lengthy sequences. To address the challenges, we introduce the Learnable Attention Mask (LAM), strategically designed to globally regulate attention maps and prioritize critical tokens within the sequence. Leveraging the Self-Attention module in a BERT-like transformer network, our approach adeptly captures associations between tokens. The extension of the LAM to a multi-layer version accommodates the varied information aspects embedded at each layer of the Transformer network. Comprehensive experimental validation on various datasets, such as MADv2, QVHighlights, ImageNet 1K, and MSRVTT, demonstrates the efficacy of the LAM, exemplifying its ability to enhance model performance while mitigating redundant computations. This pioneering approach presents a significant advancement in enhancing the understanding of complex scenarios, such as in movie understanding.
[ "['Wayner Barrios' 'SouYoung Jin']" ]
null
null
2406.02764
null
null
http://arxiv.org/pdf/2406.02764v1
2024-06-04T20:33:22Z
2024-06-04T20:33:22Z
Adaptive Preference Scaling for Reinforcement Learning with Human Feedback
Reinforcement learning from human feedback (RLHF) is a prevalent approach to align AI systems with human values by learning rewards from human preference data. Due to various reasons, however, such data typically takes the form of rankings over pairs of trajectory segments, which fails to capture the varying strengths of preferences across different pairs. In this paper, we propose a novel adaptive preference loss, underpinned by distributionally robust optimization (DRO), designed to address this uncertainty in preference strength. By incorporating an adaptive scaling parameter into the loss for each pair, our method increases the flexibility of the reward function. Specifically, it assigns small scaling parameters to pairs with ambiguous preferences, leading to more comparable rewards, and large scaling parameters to those with clear preferences for more distinct rewards. Computationally, our proposed loss function is strictly convex and univariate with respect to each scaling parameter, enabling its efficient optimization through a simple second-order algorithm. Our method is versatile and can be readily adapted to various preference optimization frameworks, including direct preference optimization (DPO). Our experiments with robotic control and natural language generation with large language models (LLMs) show that our method not only improves policy performance but also aligns reward function selection more closely with policy optimization, simplifying the hyperparameter tuning process.
[ "['Ilgee Hong' 'Zichong Li' 'Alexander Bukharin' 'Yixiao Li'\n 'Haoming Jiang' 'Tianbao Yang' 'Tuo Zhao']" ]
null
null
2406.02765
null
null
http://arxiv.org/pdf/2406.02765v2
2024-06-21T09:14:03Z
2024-06-04T20:33:29Z
Discovering Dynamic Symbolic Policies with Genetic Programming
Artificial intelligence (AI) techniques are increasingly being applied to solve control problems. However, control systems developed in AI are often black-box methods, in that it is not clear how and why they generate their outputs. A lack of transparency can be problematic for control tasks in particular, because it complicates the identification of biases or errors, which in turn negatively influences the user's confidence in the system. To improve the interpretability and transparency in control systems, the black-box structure can be replaced with white-box symbolic policies described by mathematical expressions. Genetic programming offers a gradient-free method to optimise the structure of non-differentiable mathematical expressions. In this paper, we show that genetic programming can be used to discover symbolic control systems. This is achieved by learning a symbolic representation of a function that transforms observations into control signals. We consider both systems that implement static control policies without memory and systems that implement dynamic memory-based control policies. In the case of the latter, the discovered function becomes the state equation of a differential equation, which allows for evidence integration. Our results show that symbolic policies are discovered that perform comparably with black-box policies on a variety of control tasks. Furthermore, the additional value of the memory capacity in the dynamic policies is demonstrated in experiments where static policies fall short. Overall, we demonstrate that white-box symbolic policies can be optimised with genetic programming, while offering the interpretability and transparency that black-box models lack.
[ "['Sigur de Vries' 'Sander Keemink' 'Marcel van Gerven']" ]
null
null
2406.02767
null
null
http://arxiv.org/abs/2406.02767v1
2024-06-04T20:36:16Z
2024-06-04T20:36:16Z
Spatial and social situation-aware transformer-based trajectory prediction of autonomous systems
Autonomous transportation systems such as road vehicles or vessels require the consideration of the static and dynamic environment to dislocate without collision. Anticipating the behavior of an agent in a given situation is required to adequately react to it in time. Developing deep learning-based models has recently become the dominant approach to motion prediction. The social environment is often considered through a CNN-LSTM-based sub-module processing a $\textit{social tensor}$ that includes information on the past trajectories of surrounding agents. For the proposed transformer-based trajectory prediction model, an alternative, computationally more efficient social tensor definition and processing is suggested. It considers the interdependencies between target and surrounding agents at each time step directly, instead of relying on information from the last hidden LSTM states of individually processed agents. A transformer-based sub-module, the Social Tensor Transformer, is integrated into the overall prediction model. It is responsible for enriching the target agent's dislocation features with social interaction information obtained from the social tensor. For awareness of spatial limitations, dislocation features are defined in relation to the navigable area. This replaces additional, computationally expensive map processing sub-modules. An ablation study shows that, for longer prediction horizons, the deviation of the predicted trajectory from the ground truth is lower compared to a spatially and socially agnostic model. Even if the performance gain from a spatial-only to a spatial and social context-sensitive model is small in terms of common error measures, visualizing the results shows that the proposed model is in fact able to predict reactions to surrounding agents and explicitly allows for interpretable behavior.
[ "['Kathrin Donandt' 'Dirk Söffker']" ]
null
null
2406.02769
null
null
http://arxiv.org/pdf/2406.02769v1
2024-06-04T20:37:17Z
2024-06-04T20:37:17Z
Precise asymptotics of reweighted least-squares algorithms for linear diagonal networks
The classical iteratively reweighted least-squares (IRLS) algorithm aims to recover an unknown signal from linear measurements by performing a sequence of weighted least squares problems, where the weights are recursively updated at each step. Varieties of this algorithm have been shown to achieve favorable empirical performance and theoretical guarantees for sparse recovery and $\ell_p$-norm minimization. Recently, some preliminary connections have also been made between IRLS and certain types of non-convex linear neural network architectures that are observed to exploit low-dimensional structure in high-dimensional linear models. In this work, we provide a unified asymptotic analysis for a family of algorithms that encompasses IRLS, the recently proposed lin-RFM algorithm (which was motivated by feature learning in neural networks), and the alternating minimization algorithm on linear diagonal neural networks. Our analysis operates in a "batched" setting with i.i.d. Gaussian covariates and shows that, with appropriately chosen reweighting policy, the algorithm can achieve favorable performance in only a handful of iterations. We also extend our results to the case of group-sparse recovery and show that leveraging this structure in the reweighting scheme provably improves test error compared to coordinate-wise reweighting.
[ "['Chiraag Kaushik' 'Justin Romberg' 'Vidya Muthukumar']" ]
null
null
2406.02770
null
null
http://arxiv.org/abs/2406.02770v1
2024-06-04T20:37:30Z
2024-06-04T20:37:30Z
Short-term Inland Vessel Trajectory Prediction with Encoder-Decoder Models
Accurate vessel trajectory prediction is necessary for safe and efficient navigation. Deep learning-based prediction models, especially encoder-decoders, are rarely applied to inland navigation specifically. Approaches from the maritime domain cannot directly be transferred to river navigation due to river-specific factors influencing driving behavior. Different encoder-decoder architectures, including a transformer encoder-decoder, are compared herein for predicting the next positions of inland vessels, given not only spatio-temporal information from AIS, but also river-specific features. The results show that the reformulation of the regression task as a classification problem and the inclusion of river-specific features yield the lowest displacement errors. The standard LSTM encoder-decoder outperforms the transformer encoder-decoder for the data considered, but is computationally more expensive. In this study, a transformer-based encoder-decoder model is applied to the problem of ship trajectory prediction for the first time. Here, a feature vector using the river-specific context of navigation input parameters is established. Future studies can build on the proposed models, investigate the improvement of the computationally more efficient transformer, e.g., through further hyper-parameter optimization, and use additional river-specific information in the context representation to further increase prediction accuracy.
[ "['Kathrin Donandt' 'Karim Böttger' 'Dirk Söffker']" ]
null
null
2406.02771
null
null
http://arxiv.org/abs/2406.02771v1
2024-06-04T20:39:14Z
2024-06-04T20:39:14Z
Improved context-sensitive transformer model for inland vessel trajectory prediction
Physics-related and model-based vessel trajectory prediction is highly accurate but requires specific knowledge of the vessel under consideration, which is not always practical. Machine learning-based trajectory prediction models do not require expert knowledge, but rely on the implicit knowledge extracted from massive amounts of data. Several deep learning (DL) methods for vessel trajectory prediction have recently been suggested. The DL models developed typically only process information about the (dis)location of vessels defined with respect to a global reference system. In the context of inland navigation, this can be problematic, since without knowledge of the limited navigable space, unrealistic trajectories are likely to be predicted. If spatial constraints are introduced, e.g., by implementing an additional submodule to process map data, however, overall complexity increases. Instead of processing the vessel displacement information on the one hand and the spatial information on the other, this paper proposes merging both types of information. Here, fairway-related and navigation-related displacement information are used directly. In this way, the previously proposed context-sensitive Classification Transformer (CSCT) shows an improved spatial awareness. Additionally, the CSCT is adapted to assess the model uncertainty by enabling dropout during inference. This approach is trained on different inland waterways to analyze its generalizability. As the improved CSCT obtains lower prediction errors and makes it possible to estimate the trustworthiness of each prediction, it is more suitable for safety-critical applications in inland navigation than previously developed models.
[ "['Kathrin Donandt' 'Karim Böttger' 'Dirk Söffker']" ]
null
null
2406.02772
null
null
http://arxiv.org/pdf/2406.02772v1
2024-06-04T20:40:06Z
2024-06-04T20:40:06Z
Hyperbolic Benchmarking Unveils Network Topology-Feature Relationship in GNN Performance
Graph Neural Networks (GNNs) have excelled in predicting graph properties in various applications ranging from identifying trends in social networks to drug discovery and malware detection. With the abundance of new architectures and increased complexity, GNNs are becoming highly specialized when tested on a few well-known datasets. However, how the performance of GNNs depends on the topological and feature properties of graphs is still an open question. In this work, we introduce a comprehensive benchmarking framework for graph machine learning, focusing on the performance of GNNs across varied network structures. Utilizing the geometric soft configuration model in hyperbolic space, we generate synthetic networks with realistic topological properties and node feature vectors. This approach enables us to assess the impact of network properties, such as topology-feature correlation, degree distributions, local density of triangles (or clustering), and homophily, on the effectiveness of different GNN architectures. Our results highlight the dependency of model performance on the interplay between network structure and node features, providing insights for model selection in various scenarios. This study contributes to the field by offering a versatile tool for evaluating GNNs, thereby assisting in developing and selecting suitable models based on specific data characteristics.
[ "['Roya Aliakbarisani' 'Robert Jankowski' 'M. Ángeles Serrano'\n 'Marián Boguñá']" ]
null
null
2406.02773
null
null
http://arxiv.org/pdf/2406.02773v2
2024-06-07T06:10:04Z
2024-06-04T20:40:27Z
Cyclic Sparse Training: Is it Enough?
The success of iterative pruning methods in achieving state-of-the-art sparse networks has largely been attributed to improved mask identification and an implicit regularization induced by pruning. We challenge this hypothesis and instead posit that their repeated cyclic training schedules enable improved optimization. To verify this, we show that pruning at initialization is significantly boosted by repeated cyclic training, even outperforming standard iterative pruning methods. We conjecture that the dominant mechanism behind this is better exploration of the loss landscape, leading to a lower training loss. However, at high sparsity, repeated cyclic training alone is not enough for competitive performance. A strong coupling between learnt parameter initialization and mask seems to be required. Standard methods obtain this coupling via expensive pruning-training iterations, starting from a dense network. To achieve this with sparse training instead, we propose SCULPT-ing, i.e., repeated cyclic training of any sparse mask followed by a single pruning step to couple the parameters and the mask, which is able to match the performance of state-of-the-art iterative pruning methods in the high sparsity regime at reduced computational cost.
[ "['Advait Gadhikar' 'Sree Harsha Nelaturu' 'Rebekka Burkholz']" ]
null
null
2406.02775
null
null
http://arxiv.org/pdf/2406.02775v1
2024-06-04T20:45:20Z
2024-06-04T20:45:20Z
Diagnostic Digital Twin for Anomaly Detection in Floating Offshore Wind Energy
The demand for condition-based and predictive maintenance is rising across industries, especially for remote, high-value, and high-risk assets. In this article, the diagnostic digital twin concept is introduced, discussed, and implemented for a floating offshore turbine. A diagnostic digital twin is a virtual representation of an asset that combines real-time data and models to monitor damage, detect anomalies, and diagnose failures, thereby enabling condition-based and predictive maintenance. By applying diagnostic digital twins to offshore assets, unexpected failures can be alleviated, but the implementation can prove challenging. Here, a diagnostic digital twin is implemented for an operational floating offshore wind turbine. The asset is monitored through measurements. Unsupervised learning methods are employed to build a normal operation model, detect anomalies, and provide a fault diagnosis. Warnings and diagnoses are sent through text messages, and a more detailed diagnosis can be accessed in a virtual reality interface. The diagnostic digital twin successfully detected an anomaly with high confidence hours before a failure occurred. The paper concludes by discussing diagnostic digital twins in the broader context of offshore engineering. The presented approach can be generalized to other offshore assets to improve maintenance and increase the lifetime, efficiency, and sustainability of offshore assets.
[ "['Florian Stadtmann' 'Adil Rasheed']" ]
null
null
2406.02778
null
null
http://arxiv.org/pdf/2406.02778v2
2024-06-06T01:31:53Z
2024-06-04T20:48:33Z
MS-IMAP -- A Multi-Scale Graph Embedding Approach for Interpretable Manifold Learning
Deriving meaningful representations from complex, high-dimensional data in unsupervised settings is crucial across diverse machine learning applications. This paper introduces a framework for multi-scale graph network embedding based on spectral graph wavelets that employs a contrastive learning approach. A significant feature of the proposed embedding is its capacity to establish a correspondence between the embedding space and the input feature space which aids in deriving feature importance of the original features. We theoretically justify our approach and demonstrate that, in Paley-Wiener spaces on combinatorial graphs, the spectral graph wavelets operator offers greater flexibility and better control over smoothness properties compared to the Laplacian operator. We validate the effectiveness of our proposed graph embedding on a variety of public datasets through a range of downstream tasks, including clustering and unsupervised feature importance.
[ "['Shay Deutsch' 'Lionel Yelibi' 'Alex Tong Lin' 'Arjun Ravi Kannan']" ]
null
null
2406.02780
null
null
http://arxiv.org/pdf/2406.02780v1
2024-06-04T20:51:04Z
2024-06-04T20:51:04Z
LADI v2: Multi-label Dataset and Classifiers for Low-Altitude Disaster Imagery
ML-based computer vision models are promising tools for supporting emergency management operations following natural disasters. Aerial photographs taken from small manned and unmanned aircraft can be available soon after a disaster and provide valuable information from multiple perspectives for situational awareness and damage assessment applications. However, emergency managers often face challenges finding the most relevant photos among the tens of thousands that may be taken after an incident. While ML-based solutions could enable more effective use of aerial photographs, there is still a lack of training data for imagery of this type from multiple perspectives and for multiple hazard types. To address this, we present the LADI v2 (Low Altitude Disaster Imagery version 2) dataset, a curated set of about 10,000 disaster images captured in the United States by the Civil Air Patrol (CAP) in response to federally-declared emergencies (2015-2023) and annotated for multi-label classification by trained CAP volunteers. We also provide two pretrained baseline classifiers and compare their performance to state-of-the-art vision-language models in multi-label classification. The data and code are released publicly to support the development of computer vision models for emergency management research and applications.
[ "['Samuel Scheele' 'Katherine Picchione' 'Jeffrey Liu']" ]
null
null
2406.02785
null
null
http://arxiv.org/pdf/2406.02785v1
2024-06-04T21:08:07Z
2024-06-04T21:08:07Z
Event-horizon-scale Imaging of M87* under Different Assumptions via Deep Generative Image Priors
Reconstructing images from the Event Horizon Telescope (EHT) observations of M87*, the supermassive black hole at the center of the galaxy M87, depends on a prior to impose desired image statistics. However, given the impossibility of directly observing black holes, there is no clear choice for a prior. We present a framework for flexibly designing a range of priors, each bringing different biases to the image reconstruction. These priors can be weak (e.g., impose only basic natural-image statistics) or strong (e.g., impose assumptions of black-hole structure). Our framework uses Bayesian inference with score-based priors, which are data-driven priors arising from a deep generative model that can learn complicated image distributions. Using our Bayesian imaging approach with sophisticated data-driven priors, we can assess how visual features and uncertainty of reconstructed images change depending on the prior. In addition to simulated data, we image the real EHT M87* data and discuss how recovered features are influenced by the choice of prior.
[ "['Berthy T. Feng' 'Katherine L. Bouman' 'William T. Freeman']" ]
null
null
2406.02787
null
null
http://arxiv.org/pdf/2406.02787v1
2024-06-04T21:25:06Z
2024-06-04T21:25:06Z
Disentangling Logic: The Role of Context in Large Language Model Reasoning Capabilities
This study intends to systematically disentangle pure logic reasoning and text understanding by investigating the contrast across abstract and contextualized logical problems from a comprehensive set of domains. We explore whether LLMs demonstrate genuine reasoning capabilities across various domains when the underlying logical structure remains constant. We focus on two main questions: (1) Can abstract logical problems alone accurately benchmark an LLM's reasoning ability in real-world scenarios, disentangled from contextual support in practical settings? (2) Does fine-tuning LLMs on abstract logic problems generalize to contextualized logic problems and vice versa? To investigate these questions, we focus on standard propositional logic, specifically propositional deductive and abductive logic reasoning. In particular, we construct instantiated datasets for deductive and abductive reasoning with 4 levels of difficulty, encompassing 12 distinct categories or domains based on the categorization of Wikipedia. Our experiments aim to provide insights into disentangling context in logical reasoning and the true reasoning capabilities of LLMs and their generalization potential. The code and dataset are available at: https://github.com/agiresearch/ContextHub.
[ "['Wenyue Hua' 'Kaijie Zhu' 'Lingyao Li' 'Lizhou Fan' 'Shuhang Lin'\n 'Mingyu Jin' 'Haochen Xue' 'Zelong Li' 'JinDong Wang' 'Yongfeng Zhang']" ]
null
null
2406.02789
null
null
http://arxiv.org/pdf/2406.02789v1
2024-06-04T21:26:29Z
2024-06-04T21:26:29Z
Private Stochastic Convex Optimization with Heavy Tails: Near-Optimality from Simple Reductions
We study the problem of differentially private stochastic convex optimization (DP-SCO) with heavy-tailed gradients, where we assume a $k^{\text{th}}$-moment bound on the Lipschitz constants of sample functions rather than a uniform bound. We propose a new reduction-based approach that enables us to obtain the first optimal rates (up to logarithmic factors) in the heavy-tailed setting, achieving error $G_2 \cdot \frac{1}{\sqrt{n}} + G_k \cdot (\frac{\sqrt{d}}{n\epsilon})^{1 - \frac{1}{k}}$ under $(\epsilon, \delta)$-approximate differential privacy, up to a mild $\textup{polylog}(\frac{1}{\delta})$ factor, where $G_2^2$ and $G_k^k$ are the $2^{\text{nd}}$ and $k^{\text{th}}$ moment bounds on sample Lipschitz constants, nearly-matching a lower bound of [Lowy and Razaviyayn 2023]. We further give a suite of private algorithms in the heavy-tailed setting which improve upon our basic result under additional assumptions, including an optimal algorithm under a known-Lipschitz constant assumption, a near-linear time algorithm for smooth functions, and an optimal linear time algorithm for smooth generalized linear models.
[ "['Hilal Asi' 'Daogao Liu' 'Kevin Tian']" ]
null
null
2406.02790
null
null
http://arxiv.org/pdf/2406.02790v1
2024-06-04T21:27:43Z
2024-06-04T21:27:43Z
Building Socially-Equitable Public Models
Public models offer predictions to a variety of downstream tasks and have played a crucial role in various AI applications, showcasing their proficiency in accurate predictions. However, the exclusive emphasis on prediction accuracy may not align with the diverse end objectives of downstream agents. Recognizing the public model's predictions as a service, we advocate for integrating the objectives of downstream agents into the optimization process. Concretely, to address performance disparities and foster fairness among heterogeneous agents in training, we propose a novel Equitable Objective. This objective, coupled with a policy gradient algorithm, is crafted to train the public model to produce a more equitable/uniform performance distribution across downstream agents, each with their unique concerns. Both theoretical analysis and empirical case studies have proven the effectiveness of our method in advancing performance equity across diverse downstream agents utilizing the public model for their decision-making. Codes and datasets are released at https://github.com/Ren-Research/Socially-Equitable-Public-Models.
[ "['Yejia Liu' 'Jianyi Yang' 'Pengfei Li' 'Tongxin Li' 'Shaolei Ren']" ]
null
null
2406.02797
null
null
http://arxiv.org/pdf/2406.02797v1
2024-06-04T21:48:30Z
2024-06-04T21:48:30Z
Auditing Privacy Mechanisms via Label Inference Attacks
We propose reconstruction advantage measures to audit label privatization mechanisms. A reconstruction advantage measure quantifies the increase in an attacker's ability to infer the true label of an unlabeled example when provided with a private version of the labels in a dataset (e.g., aggregate of labels from different users or noisy labels output by randomized response), compared to an attacker that only observes the feature vectors, but may have prior knowledge of the correlation between features and labels. We consider two such auditing measures: one additive, and one multiplicative. These incorporate previous approaches taken in the literature on empirical auditing and differential privacy. The measures allow us to place a variety of proposed privatization schemes -- some differentially private, some not -- on the same footing. We analyze these measures theoretically under a distributional model which encapsulates reasonable adversarial settings. We also quantify their behavior empirically on real and simulated prediction tasks. Across a range of experimental settings, we find that differentially private schemes dominate or match the privacy-utility tradeoff of more heuristic approaches.
[ "['Róbert István Busa-Fekete' 'Travis Dick' 'Claudio Gentile'\n 'Andrés Muñoz Medina' 'Adam Smith' 'Marika Swanberg']" ]
null
null
2406.02804
null
null
http://arxiv.org/pdf/2406.02804v1
2024-06-04T22:08:24Z
2024-06-04T22:08:24Z
$\texttt{ACCORD}$: Closing the Commonsense Measurability Gap
We present $\texttt{ACCORD}$, a framework and benchmark suite for disentangling the commonsense grounding and reasoning abilities of large language models (LLMs) through controlled, multi-hop counterfactuals. $\texttt{ACCORD}$ introduces formal elements to commonsense reasoning to explicitly control and quantify reasoning complexity beyond the typical 1 or 2 hops. Uniquely, $\texttt{ACCORD}$ can automatically generate benchmarks of arbitrary reasoning complexity, and so it scales with future LLM improvements. Benchmarking state-of-the-art LLMs -- including GPT-4o (2024-05-13), Llama-3-70B-Instruct, and Mixtral-8x22B-Instruct-v0.1 -- shows performance degrading to random chance with only moderate scaling, leaving substantial headroom for improvement. We release a leaderboard of the benchmark suite tested in this work, as well as code for automatically generating more complex benchmarks.
[ "['François Roewer-Després' 'Jinyue Feng' 'Zining Zhu' 'Frank Rudzicz']" ]
null
null
2406.02806
null
null
http://arxiv.org/pdf/2406.02806v2
2024-06-08T20:35:12Z
2024-06-04T22:22:39Z
Randomized Geometric Algebra Methods for Convex Neural Networks
We introduce randomized algorithms to Clifford's Geometric Algebra, generalizing randomized linear algebra to hypercomplex vector spaces. This novel approach has many implications in machine learning, including training neural networks to global optimality via convex optimization. Additionally, we consider fine-tuning large language model (LLM) embeddings as a key application area, exploring the intersection of geometric algebra and modern AI techniques. In particular, we conduct a comparative analysis of the robustness of transfer learning via embeddings, such as OpenAI GPT models and BERT, using traditional methods versus our novel approach based on convex optimization. We test our convex optimization transfer learning method across a variety of case studies, employing different embeddings (GPT-4 and BERT embeddings) and different text classification datasets (IMDb, Amazon Polarity Dataset, and GLUE) with a range of hyperparameter settings. Our results demonstrate that convex optimization and geometric algebra not only enhance the performance of LLMs but also offer a more stable and reliable method of transfer learning via embeddings.
[ "['Yifei Wang' 'Sungyoon Kim' 'Paul Chu' 'Indu Subramaniam' 'Mert Pilanci']" ]
null
null
2406.02820
null
null
http://arxiv.org/pdf/2406.02820v1
2024-06-04T23:39:08Z
2024-06-04T23:39:08Z
ORACLE: Leveraging Mutual Information for Consistent Character Generation with LoRAs in Diffusion Models
Text-to-image diffusion models have recently taken center stage as pivotal tools in promoting visual creativity across an array of domains such as comic book artistry, children's literature, game development, and web design. These models harness the power of artificial intelligence to convert textual descriptions into vivid images, thereby enabling artists and creators to bring their imaginative concepts to life with unprecedented ease. However, one of the significant hurdles that persist is the challenge of maintaining consistency in character generation across diverse contexts. Variations in textual prompts, even if minor, can yield vastly different visual outputs, posing a considerable problem in projects that require a uniform representation of characters throughout. In this paper, we introduce a novel framework designed to produce consistent character representations from a single text prompt across diverse settings. Through both quantitative and qualitative analyses, we demonstrate that our framework outperforms existing methods in generating characters with consistent visual identities, underscoring its potential to transform creative industries. By addressing the critical challenge of character consistency, we not only enhance the practical utility of these models but also broaden the horizons for artistic and creative expression.
[ "['Kiymet Akdemir' 'Pinar Yanardag']" ]
null
null
2406.02826
null
null
http://arxiv.org/pdf/2406.02826v1
2024-06-05T00:11:20Z
2024-06-05T00:11:20Z
Exploring Robustness in Doctor-Patient Conversation Summarization: An Analysis of Out-of-Domain SOAP Notes
Summarizing medical conversations poses unique challenges due to the specialized domain and the difficulty of collecting in-domain training data. In this study, we investigate the performance of state-of-the-art doctor-patient conversation generative summarization models on out-of-domain data. We divide the summarization models of doctor-patient conversations into two configurations: (1) a general model, without specifying subjective (S), objective (O), assessment (A), and plan (P) notes; (2) a SOAP-oriented model that generates a summary with SOAP sections. We analyzed the limitations and strengths of fine-tuned language model-based methods and GPTs on both configurations. We also conducted a Linguistic Inquiry and Word Count analysis to compare the SOAP notes from different datasets. The results exhibit a strong correlation for reference notes across different datasets, indicating that format mismatch (i.e., discrepancies in word distribution) is not the main cause of the performance decline on out-of-domain data. Lastly, a detailed analysis of SOAP notes is included to provide insights into missing information and hallucinations introduced by the models.
[ "['Yu-Wen Chen' 'Julia Hirschberg']" ]
null
null
2406.02827
null
null
http://arxiv.org/pdf/2406.02827v1
2024-06-05T00:13:38Z
2024-06-05T00:13:38Z
Stochastic Diffusion: A Diffusion Probabilistic Model for Stochastic Time Series Forecasting
Recent innovations in diffusion probabilistic models have paved the way for significant progress in image, text and audio generation, leading to their applications in generative time series forecasting. However, leveraging such abilities to model highly stochastic time series data remains a challenge. In this paper, we propose a novel Stochastic Diffusion (StochDiff) model which learns data-driven prior knowledge at each time step by utilizing the representational power of the stochastic latent spaces to model the variability of the multivariate time series data. The learnt prior knowledge helps the model to capture complex temporal dynamics and the inherent uncertainty of the data. This improves its ability to model highly stochastic time series data. Through extensive experiments on real-world datasets, we demonstrate the effectiveness of our proposed model on stochastic time series forecasting. Additionally, we showcase an application of our model for real-world surgical guidance, highlighting its potential to benefit the medical community.
[ "['Yuansan Liu' 'Sudanthi Wijewickrema' 'Dongting Hu' 'Christofer Bester'\n \"Stephen O'Leary\" 'James Bailey']" ]
null
null
2406.02832
null
null
http://arxiv.org/pdf/2406.02832v1
2024-06-05T00:54:03Z
2024-06-05T00:54:03Z
Efficient Minimum Bayes Risk Decoding using Low-Rank Matrix Completion Algorithms
Minimum Bayes Risk (MBR) decoding is a powerful decoding strategy widely used for text generation tasks, but its quadratic computational complexity limits its practical application. This paper presents a novel approach for approximating MBR decoding using matrix completion techniques, focusing on the task of machine translation. We formulate MBR decoding as a matrix completion problem, where the utility metric scores between candidate hypotheses and pseudo-reference translations form a low-rank matrix. First, we empirically show that the score matrices indeed have a low-rank structure. Then, we exploit this by only computing a random subset of the scores and efficiently recovering the missing entries in the matrix by applying the Alternating Least Squares (ALS) algorithm, thereby enabling a fast approximation of the MBR decoding process. Our experimental results on machine translation tasks demonstrate that the proposed method requires only 1/16 of the utility metric computations of vanilla MBR decoding while achieving equal translation quality measured by COMET22 on the WMT22 dataset (en<>de and en<>ru). We also benchmark our method against other approximation methods, showing quality gains over them.
[ "['Firas Trabelsi' 'David Vilar' 'Mara Finkelstein' 'Markus Freitag']" ]
null
null
2406.02838
null
null
http://arxiv.org/pdf/2406.02838v1
2024-06-05T01:28:53Z
2024-06-05T01:28:53Z
You Only Accept Samples Once: Fast, Self-Correcting Stochastic Variational Inference
We introduce YOASOVI, an algorithm for performing fast, self-correcting stochastic optimization for Variational Inference (VI) on large Bayesian hierarchical models. To accomplish this, we take advantage of available information on the objective function used for stochastic VI at each iteration and replace regular Monte Carlo sampling with acceptance sampling. Rather than spending computational resources drawing and evaluating a large sample for the gradient, we draw only one sample and accept it with probability proportional to the expected improvement in the objective. This paper develops two versions of the algorithm: the first based on a naive intuition, and another building up the algorithm as a Metropolis-type scheme. Empirical results based on simulations and benchmark datasets for multivariate Gaussian mixture models show that YOASOVI consistently converges faster (in clock time) and within better optimal neighborhoods than both regularized Monte Carlo and Quasi-Monte Carlo VI algorithms.
[ "['Dominic B. Dayta']" ]
null
null
2406.02841
null
null
http://arxiv.org/pdf/2406.02841v1
2024-06-05T01:31:50Z
2024-06-05T01:31:50Z
Conditional Idempotent Generative Networks
We propose Conditional Idempotent Generative Networks (CIGN), a novel approach that expands upon Idempotent Generative Networks (IGN) to enable conditional generation. While IGNs offer efficient single-pass generation, they lack the ability to control the content of the generated data. CIGNs address this limitation by incorporating conditioning mechanisms, allowing users to steer the generation process towards specific types of data. We establish the theoretical foundations for CIGNs, outlining their scope, loss function design, and evaluation metrics. We then present two potential architectures for implementing CIGNs: channel conditioning and filter conditioning. Finally, we discuss experimental results on the MNIST dataset, demonstrating the effectiveness of both approaches. Our findings pave the way for further exploration of CIGNs on larger datasets and with more powerful computing resources to determine the optimal implementation strategy.
[ "['Niccolò Ronchetti']" ]
null
null
2406.02847
null
null
http://arxiv.org/pdf/2406.02847v2
2024-06-06T06:15:29Z
2024-06-05T01:47:40Z
Exact Conversion of In-Context Learning to Model Weights in Linearized-Attention Transformers
In-Context Learning (ICL) has been a powerful emergent property of large language models that has attracted increasing attention in recent years. In contrast to regular gradient-based learning, ICL is highly interpretable and does not require parameter updates. In this paper, we show that, for linearized transformer networks, ICL can be made explicit and permanent through the inclusion of bias terms. We mathematically demonstrate the equivalence between a model with ICL demonstration prompts and the same model with the additional bias terms. Our algorithm (ICLCA) allows for exact conversion in an inexpensive manner. Existing methods are not exact and require expensive parameter updates. We demonstrate the efficacy of our approach through experiments that show the exact incorporation of ICL tokens into a linear transformer. We further suggest how our method can be adapted to achieve cheap approximate conversion of ICL tokens, even in regular transformer networks that are not linearized. Our experiments on GPT-2 show that, even though the conversion is only approximate, the model still gains valuable context from the included bias terms.
[ "['Brian K Chen' 'Tianyang Hu' 'Hui Jin' 'Hwee Kuan Lee' 'Kenji Kawaguchi']" ]
null
null
2406.02858
null
null
http://arxiv.org/pdf/2406.02858v1
2024-06-05T02:15:55Z
2024-06-05T02:15:55Z
TSPDiffuser: Diffusion Models as Learned Samplers for Traveling Salesperson Path Planning Problems
This paper presents TSPDiffuser, a novel data-driven path planner for traveling salesperson path planning problems (TSPPPs) in environments rich with obstacles. Given a set of destinations within obstacle maps, our objective is to efficiently find the shortest possible collision-free path that visits all the destinations. In TSPDiffuser, we train a diffusion model on a large collection of TSPPP instances and their respective solutions to generate plausible paths for unseen problem instances. The model can then be employed as a learned sampler to construct a roadmap that contains potential solutions with a small number of nodes and edges. This approach enables efficient and accurate estimation of traveling costs between destinations, effectively addressing the primary computational challenge in solving TSPPPs. Experimental evaluations with diverse synthetic and real-world indoor/outdoor environments demonstrate the effectiveness of TSPDiffuser over existing methods in terms of the trade-off between solution quality and computational time requirements.
[ "['Ryo Yonetani']" ]
null
null
2406.02867
null
null
http://arxiv.org/pdf/2406.02867v1
2024-06-05T02:30:29Z
2024-06-05T02:30:29Z
Oscillations enhance time-series prediction in reservoir computing with feedback
Reservoir computing, a machine learning framework used for modeling the brain, can predict temporal data with few observations and minimal computational resources. However, it is difficult to accurately reproduce the long-term target time series because the reservoir system becomes unstable. This predictive capability is required for a wide variety of time-series processing tasks, including predictions of motor timing and chaotic dynamical systems. This study proposes oscillation-driven reservoir computing (ODRC) with feedback, where oscillatory signals are fed into a reservoir network to stabilize the network activity and induce complex reservoir dynamics. The ODRC can reproduce long-term target time series more accurately than conventional reservoir computing methods on motor timing and chaotic time-series prediction tasks. Furthermore, it generates a time series similar to the target even in periods it has not experienced; that is, it can learn abstract generative rules from limited observations. Given these significant improvements made by this simple and computationally inexpensive implementation, the ODRC could serve as a practical model for various time series data. Moreover, we discuss the biological implications of the ODRC, considering it as a model of neural oscillations and their cerebellar processors.
[ "['Yuji Kawai' 'Takashi Morita' 'Jihoon Park' 'Minoru Asada']" ]
null
null
2406.02872
null
null
http://arxiv.org/pdf/2406.02872v2
2024-06-10T02:45:41Z
2024-06-05T02:43:41Z
Combinatorial Optimization with Automated Graph Neural Networks
In recent years, graph neural networks (GNNs) have become increasingly popular for solving NP-hard combinatorial optimization (CO) problems, such as maximum cut and maximum independent set. The core idea behind these methods is to represent a CO problem as a graph and then use GNNs to learn the node/graph embedding with combinatorial information. Although these methods have achieved promising results, given a specific CO problem, the design of GNN architectures still requires heavy manual work with domain knowledge. Existing automated GNNs are mostly focused on traditional graph learning problems, making them inapplicable to solving NP-hard CO problems. To this end, we present a new class of \textbf{AUTO}mated \textbf{G}NNs for solving \textbf{NP}-hard problems, namely \textbf{AutoGNP}. We represent CO problems by GNNs and focus on two specific problems, i.e., mixed integer linear programming and quadratic unconstrained binary optimization. The idea of AutoGNP is to use graph neural architecture search algorithms to automatically find the best GNNs for a given NP-hard combinatorial optimization problem. Compared with existing graph neural architecture search algorithms, AutoGNP utilizes two-hop operators in the architecture search space. Moreover, AutoGNP utilizes simulated annealing and a strict early stopping policy to avoid locally optimal solutions. Empirical results on benchmark combinatorial problems demonstrate the superiority of our proposed model.
[ "['Yang Liu' 'Peng Zhang' 'Yang Gao' 'Chuan Zhou' 'Zhao Li' 'Hongyang Chen']" ]
null
null
2406.02873
null
null
http://arxiv.org/pdf/2406.02873v1
2024-06-05T02:44:14Z
2024-06-05T02:44:14Z
Prediction-powered Generalization of Causal Inferences
Causal inferences from a randomized controlled trial (RCT) may not pertain to a target population where some effect modifiers have a different distribution. Prior work studies generalizing the results of a trial to a target population with no outcome but covariate data available. We show how the limited size of trials makes generalization a statistically infeasible task, as it requires estimating complex nuisance functions. We develop generalization algorithms that supplement the trial data with a prediction model learned from an additional observational study (OS), without making any assumptions on the OS. We theoretically and empirically show that our methods facilitate better generalization when the OS is high-quality, and remain robust when it is not, e.g., when it suffers from unmeasured confounding.
[ "['Ilker Demirel' 'Ahmed Alaa' 'Anthony Philippakis' 'David Sontag']" ]
null
null
2406.02875
null
null
http://arxiv.org/pdf/2406.02875v2
2024-06-06T13:13:35Z
2024-06-05T02:50:27Z
Leveraging KANs For Enhanced Deep Koopman Operator Discovery
Multi-layer perceptrons (MLPs) have been extensively utilized in discovering Deep Koopman operators for linearizing nonlinear dynamics. With the emergence of Kolmogorov-Arnold Networks (KANs) as a more efficient and accurate alternative to the MLP neural network, we propose a comparison of the performance of each network type in the context of learning Koopman operators with control. In this work, we propose a KAN-based deep Koopman framework with applications to an orbital Two-Body Problem (2BP) and the pendulum for data-driven discovery of linear system dynamics. KANs were found to be superior in nearly all aspects of training: learning 31 times faster, being 15 times more parameter efficient, and predicting 1.25 times more accurately than MLP deep neural networks (DNNs) in the case of the 2BP. Thus, KANs show potential as an efficient tool in the development of Deep Koopman Theory.
[ "['George Nehma' 'Madhur Tiwari']" ]
null
null
2406.02877
null
null
http://arxiv.org/pdf/2406.02877v1
2024-06-05T02:52:22Z
2024-06-05T02:52:22Z
FedStaleWeight: Buffered Asynchronous Federated Learning with Fair Aggregation via Staleness Reweighting
Federated Learning (FL) endeavors to harness decentralized data while preserving privacy, facing challenges of performance, scalability, and collaboration. Asynchronous Federated Learning (AFL) methods have emerged as promising alternatives to their synchronous counterparts bounded by the slowest agent, yet they add additional challenges in convergence guarantees, fairness with respect to compute heterogeneity, and incorporation of staleness in aggregated updates. Specifically, AFL biases model training heavily towards agents who can produce updates faster, leaving slower agents behind, who often also have differently distributed data which is not learned by the global model. Naively upweighting introduces incentive issues, where truly fast-updating agents may falsely report updates at a slower speed to increase their contribution to model training. We introduce FedStaleWeight, an algorithm addressing fairness in aggregating asynchronous client updates by employing average staleness to compute fair re-weightings. FedStaleWeight reframes asynchronous federated learning aggregation as a mechanism design problem, devising a weighting strategy that incentivizes truthful compute speed reporting without favoring faster update-producing agents by upweighting agent updates based on staleness. Leveraging only observed agent update staleness, FedStaleWeight results in more equitable aggregation on a per-agent basis. We both provide theoretical convergence guarantees in the smooth, non-convex setting and empirically compare FedStaleWeight against the commonly used asynchronous FedBuff with gradient averaging, demonstrating how it achieves stronger fairness, expediting convergence to a higher global model accuracy. Finally, we provide an open-source test bench to facilitate exploration of buffered AFL aggregation strategies, fostering further research in asynchronous federated learning paradigms.
[ "['Jeffrey Ma' 'Alan Tu' 'Yiling Chen' 'Vijay Janapa Reddi']" ]
null
null
2406.02883
null
null
http://arxiv.org/pdf/2406.02883v1
2024-06-05T03:00:47Z
2024-06-05T03:00:47Z
Nonlinear Transformations Against Unlearnable Datasets
Automated scraping stands out as a common method for collecting data in deep learning models without the authorization of data owners. Recent studies have begun to tackle the privacy concerns associated with this data collection method. Notable approaches include Deepconfuse, error-minimizing, error-maximizing (also known as adversarial poisoning), Neural Tangent Generalization Attack, synthetic, autoregressive, One-Pixel Shortcut, Self-Ensemble Protection, Entangled Features, Robust Error-Minimizing, Hypocritical, and TensorClog. The data generated by those approaches, called "unlearnable" examples, are meant to prevent deep learning models from "learning" them. In this research, we investigate and devise an effective nonlinear transformation framework and conduct extensive experiments to demonstrate that a deep neural network can effectively learn from the data/examples traditionally considered unlearnable produced by the above twelve approaches. The resulting approach improves the ability to break unlearnable data compared to the linearly separable technique recently proposed by researchers. Specifically, our extensive experiments show that the improvement ranges from 0.34% to 249.59% for the unlearnable CIFAR10 datasets generated by those twelve data protection approaches, except for One-Pixel Shortcut. Moreover, the proposed framework achieves over 100% improvement of test accuracy for the Autoregressive and REM approaches compared to the linearly separable technique. Our findings suggest that these approaches are inadequate in preventing unauthorized uses of data in machine learning models. There is an urgent need to develop more robust protection mechanisms that effectively thwart an attacker from accessing data without proper authorization from the owners.
[ "['Thushari Hapuarachchi' 'Jing Lin' 'Kaiqi Xiong' 'Mohamed Rahouti'\n 'Gitte Ost']" ]
null
null
2406.02888
null
null
http://arxiv.org/pdf/2406.02888v2
2024-06-11T01:51:57Z
2024-06-05T03:08:46Z
HYDRA: Model Factorization Framework for Black-Box LLM Personalization
Personalization has emerged as a critical research area in modern intelligent systems, focusing on mining users' behavioral history and adapting to their preferences for delivering tailored experiences. Despite the remarkable few-shot capabilities exhibited by black-box large language models (LLMs), the inherent opacity of their model parameters presents significant challenges in aligning the generated output with individual expectations. Existing solutions have primarily focused on prompt design to incorporate user-specific profiles and behaviors; however, such approaches often struggle to generalize effectively due to their inability to capture shared knowledge among all users. To address these challenges, we propose HYDRA, a model factorization framework that captures both user-specific behavior patterns from historical data and shared general knowledge among all users to deliver personalized generation. In order to capture user-specific behavior patterns, we first train a reranker to prioritize the most useful information from top-retrieved relevant historical records. By combining the prioritized history with the corresponding query, we train an adapter to align the output with individual user-specific preferences, eliminating the reliance on access to inherent model parameters of black-box LLMs. Both the reranker and the adapter can be decomposed into a base model with multiple user-specific heads, resembling a hydra. The base model maintains shared knowledge across users, while the multiple personal heads capture user-specific preferences. Experimental results demonstrate that HYDRA outperforms existing state-of-the-art prompt-based methods by an average relative improvement of 9.01% across five diverse personalization tasks in the LaMP benchmark. Our implementation is available at https://github.com/night-chen/HYDRA.
[ "['Yuchen Zhuang' 'Haotian Sun' 'Yue Yu' 'Rushi Qiang' 'Qifan Wang'\n 'Chao Zhang' 'Bo Dai']" ]
null
null
2406.02890
null
null
http://arxiv.org/pdf/2406.02890v1
2024-06-05T03:11:44Z
2024-06-05T03:11:44Z
Representation Learning For Efficient Deep Multi-Agent Reinforcement Learning
Sample efficiency remains a key challenge in multi-agent reinforcement learning (MARL). A promising approach is to learn a meaningful latent representation space through auxiliary learning objectives alongside the MARL objective to aid in learning a successful control policy. In our work, we present MAPO-LSO (Multi-Agent Policy Optimization with Latent Space Optimization) which applies a form of comprehensive representation learning devised to supplement MARL training. Specifically, MAPO-LSO proposes a multi-agent extension of transition dynamics reconstruction and self-predictive learning that constructs a latent state optimization scheme that can be trivially extended to current state-of-the-art MARL algorithms. Empirical results demonstrate MAPO-LSO to show notable improvements in sample efficiency and learning performance compared to its vanilla MARL counterpart without any additional MARL hyperparameter tuning on a diverse suite of MARL tasks.
[ "['Dom Huh' 'Prasant Mohapatra']" ]
null
null
2406.02891
null
null
http://arxiv.org/pdf/2406.02891v1
2024-06-05T03:17:48Z
2024-06-05T03:17:48Z
A Bi-metric Framework for Fast Similarity Search
We propose a new "bi-metric" framework for designing nearest neighbor data structures. Our framework assumes two dissimilarity functions: a ground-truth metric that is accurate but expensive to compute, and a proxy metric that is cheaper but less accurate. In both theory and practice, we show how to construct data structures using only the proxy metric such that the query procedure achieves the accuracy of the expensive metric, while only using a limited number of calls to both metrics. Our theoretical results instantiate this framework for two popular nearest neighbor search algorithms: DiskANN and Cover Tree. In both cases we show that, as long as the proxy metric used to construct the data structure approximates the ground-truth metric up to a bounded factor, our data structure achieves arbitrarily good approximation guarantees with respect to the ground-truth metric. On the empirical side, we apply the framework to the text retrieval problem with two dissimilarity functions evaluated by ML models with vastly different computational costs. We observe that for almost all data sets in the MTEB benchmark, our approach achieves a considerably better accuracy-efficiency tradeoff than the alternatives, such as re-ranking.
[ "['Haike Xu' 'Sandeep Silwal' 'Piotr Indyk']" ]
null
null
2406.02900
null
null
http://arxiv.org/pdf/2406.02900v1
2024-06-05T03:41:37Z
2024-06-05T03:41:37Z
Scaling Laws for Reward Model Overoptimization in Direct Alignment Algorithms
Reinforcement Learning from Human Feedback (RLHF) has been crucial to the recent success of Large Language Models (LLMs); however, it is often a complex and brittle process. In the classical RLHF framework, a reward model is first trained to represent human preferences, which is in turn used by an online reinforcement learning (RL) algorithm to optimize the LLM. A prominent issue with such methods is \emph{reward over-optimization} or \emph{reward hacking}, where performance as measured by the learned proxy reward model increases, but true quality plateaus or even deteriorates. Direct Alignment Algorithms (DAAs) like Direct Preference Optimization have emerged as alternatives to the classical RLHF pipeline by circumventing the reward modeling phase. However, although DAAs do not use a separate proxy reward model, they still commonly deteriorate from over-optimization. While the so-called reward hacking phenomenon is not well-defined for DAAs, we still uncover similar trends: at higher KL budgets, DAA algorithms exhibit degradation patterns similar to their classic RLHF counterparts. In particular, we find that DAA methods deteriorate not only across a wide range of KL budgets but also often before even a single epoch of the dataset is completed. Through extensive empirical experimentation, this work formulates and formalizes the reward over-optimization or hacking problem for DAAs and explores its consequences across objectives, training regimes, and model scales.
[ "['Rafael Rafailov' 'Yaswanth Chittepu' 'Ryan Park' 'Harshit Sikchi'\n 'Joey Hejna' 'Bradley Knox' 'Chelsea Finn' 'Scott Niekum']" ]
null
null
2406.02913
null
null
http://arxiv.org/pdf/2406.02913v1
2024-06-05T04:07:35Z
2024-06-05T04:07:35Z
Zeroth-Order Fine-Tuning of LLMs with Extreme Sparsity
Zeroth-order optimization (ZO) is a memory-efficient strategy for fine-tuning Large Language Models using only forward passes. However, the application of ZO fine-tuning in memory-constrained settings such as mobile phones and laptops is still challenging since full precision forward passes are infeasible. In this study, we address this limitation by integrating sparsity and quantization into ZO fine-tuning of LLMs. Specifically, we investigate the feasibility of fine-tuning an extremely small subset of LLM parameters using ZO. This approach allows the majority of un-tuned parameters to be quantized to accommodate the constraint of limited device memory. Our findings reveal that the pre-training process can identify a set of "sensitive parameters" that can guide the ZO fine-tuning of LLMs on downstream tasks. Our results demonstrate that fine-tuning the 0.1% of parameters identified as sensitive with ZO can outperform full ZO fine-tuning, while offering a wall-clock time speedup. Additionally, we show that ZO fine-tuning targeting these 0.1% sensitive parameters, combined with 4-bit quantization, enables efficient ZO fine-tuning of a Llama2-7B model on a GPU device with less than 8 GiB of memory and with notably reduced latency.
[ "['Wentao Guo' 'Jikai Long' 'Yimeng Zeng' 'Zirui Liu' 'Xinyu Yang'\n 'Yide Ran' 'Jacob R. Gardner' 'Osbert Bastani' 'Christopher De Sa'\n 'Xiaodong Yu' 'Beidi Chen' 'Zhaozhuo Xu']" ]
null
null
2406.02915
null
null
http://arxiv.org/pdf/2406.02915v1
2024-06-05T04:08:41Z
2024-06-05T04:08:41Z
Visual-Text Cross Alignment: Refining the Similarity Score in Vision-Language Models
It has recently been discovered that using a pre-trained vision-language model (VLM), e.g., CLIP, to align a whole query image with several finer text descriptions generated by a large language model can significantly enhance zero-shot performance. However, in this paper, we empirically find that the finer descriptions tend to align more effectively with local areas of the query image rather than the whole image, and then we theoretically validate this finding. Thus, we present a method called weighted visual-text cross alignment (WCA). This method begins with a localized visual prompting technique, designed to identify local visual areas within the query image. The local visual areas are then cross-aligned with the finer descriptions by creating a similarity matrix using the pre-trained VLM. To determine how well a query image aligns with each category, we develop a score function based on the weighted similarities in this matrix. Extensive experiments demonstrate that our method significantly improves zero-shot performance across various datasets, achieving results that are even comparable to few-shot learning methods.
[ "['Jinhao Li' 'Haopeng Li' 'Sarah Erfani' 'Lei Feng' 'James Bailey'\n 'Feng Liu']" ]
null
null
2406.02917
null
null
http://arxiv.org/pdf/2406.02917v1
2024-06-05T04:10:36Z
2024-06-05T04:10:36Z
A comprehensive and FAIR comparison between MLP and KAN representations for differential equations and operator networks
Kolmogorov-Arnold Networks (KANs) were recently introduced as an alternative representation model to MLPs. Herein, we employ KANs to construct physics-informed machine learning models (PIKANs) and deep operator models (DeepOKANs) for solving differential equations for forward and inverse problems. In particular, we compare them with physics-informed neural networks (PINNs) and deep operator networks (DeepONets), which are based on the standard MLP representation. We find that although the original KANs based on the B-spline parameterization lack accuracy and efficiency, modified versions based on low-order orthogonal polynomials have performance comparable to PINNs and DeepONets, although they still lack robustness, as they may diverge for different random seeds or for higher-order orthogonal polynomials. We visualize their corresponding loss landscapes and analyze their learning dynamics using information bottleneck theory. Our study follows the FAIR principles so that other researchers can use our benchmarks to further advance this emerging topic.
[ "['Khemraj Shukla' 'Juan Diego Toscano' 'Zhicheng Wang' 'Zongren Zou'\n 'George Em Karniadakis']" ]
null
null
2406.02921
null
null
http://arxiv.org/pdf/2406.02921v2
2024-06-11T04:11:56Z
2024-06-05T04:20:17Z
Text Injection for Neural Contextual Biasing
Neural contextual biasing effectively improves automatic speech recognition (ASR) for crucial phrases within a speaker's context, particularly those that are infrequent in the training data. This work proposes contextual text injection (CTI) to enhance contextual ASR. CTI leverages not only the paired speech-text data, but also a much larger corpus of unpaired text to optimize the ASR model and its biasing component. Unpaired text is converted into speech-like representations and used to guide the model's attention towards relevant bias phrases. Moreover, we introduce a contextual text-injected (CTI) minimum word error rate (MWER) training, which minimizes the expected WER caused by contextual biasing when unpaired text is injected into the model. Experiments show that CTI with 100 billion text sentences can achieve up to 43.3% relative WER reduction from a strong neural biasing model. CTI-MWER provides a further relative improvement of 23.5%.
[ "['Zhong Meng' 'Zelin Wu' 'Rohit Prabhavalkar' 'Cal Peyser' 'Weiran Wang'\n 'Nanxin Chen' 'Tara N. Sainath' 'Bhuvana Ramabhadran']" ]
null
null
2406.02924
null
null
http://arxiv.org/pdf/2406.02924v1
2024-06-05T04:25:23Z
2024-06-05T04:25:23Z
Pruner-Zero: Evolving Symbolic Pruning Metric from scratch for Large Language Models
Despite their remarkable capabilities, Large Language Models (LLMs) face deployment challenges due to their extensive size. Pruning methods drop a subset of weights to accelerate inference, but many of them require retraining, which is prohibitively expensive and computationally demanding. Recently, post-training pruning approaches have introduced novel metrics, enabling the pruning of LLMs without retraining. However, these metrics require the involvement of human experts and tedious trial and error. To efficiently identify superior pruning metrics, we develop an automatic framework for searching symbolic pruning metrics using genetic programming. In particular, we devise an elaborate search space encompassing the existing pruning metrics to discover potential symbolic pruning metrics. We propose an opposing-operation simplification strategy to increase the diversity of the population. In this way, Pruner-Zero allows the auto-generation of symbolic pruning metrics. Based on the search results, we explore the correlation between pruning metrics and post-pruning performance and summarize some principles. Extensive experiments on LLaMA and LLaMA-2 on language modeling and zero-shot tasks demonstrate that our Pruner-Zero obtains superior performance over SOTA post-training pruning methods. Code at: \url{https://github.com/pprp/Pruner-Zero}.
[ "['Peijie Dong' 'Lujun Li' 'Zhenheng Tang' 'Xiang Liu' 'Xinglin Pan'\n 'Qiang Wang' 'Xiaowen Chu']" ]
null
null
2406.02925
null
null
http://arxiv.org/pdf/2406.02925v2
2024-06-15T15:58:22Z
2024-06-05T04:25:56Z
Task Arithmetic can Mitigate Synthetic-to-Real Gap in Automatic Speech Recognition
Synthetic data is widely used in speech recognition due to the availability of text-to-speech models, which facilitate adapting models to previously unseen text domains. However, existing methods degrade in performance when an automatic speech recognition (ASR) model is fine-tuned on synthetic data, owing to the distributional shift commonly referred to as the synthetic-to-real gap. In this paper, we find that task vector arithmetic is effective at mitigating this gap. Our proposed method, the SYN2REAL task vector, yields an average improvement of 10.03% in word error rate over baselines on the SLURP dataset. Additionally, we show that averaging SYN2REAL task vectors obtained from real speech in multiple domains can further adapt the original ASR model to perform better on the target text domain.
[ "['Hsuan Su' 'Hua Farn' 'Fan-Yun Sun' 'Shang-Tse Chen' 'Hung-yi Lee']" ]
null
null
2406.02927
null
null
http://arxiv.org/pdf/2406.02927v1
2024-06-05T04:28:57Z
2024-06-05T04:28:57Z
Multivariate Physics-Informed Convolutional Autoencoder for Anomaly Detection in Power Distribution Systems with High Penetration of DERs
Despite the relentless progress of deep learning models in analyzing system conditions under cyber-physical events, their abilities are limited in the power system domain due to data availability issues, the cost of data acquisition, and the lack of interpretation and extrapolation for data beyond the training windows. In addition, the integration of distributed energy resources (DERs) such as wind and solar generation increases the complexity and nonlinear nature of power systems. Therefore, an interpretable and reliable methodology is of utmost need to increase the confidence of power system operators and their situational awareness for making reliable decisions. This has led to the development of physics-informed neural network (PINN) models as more interpretable, trustworthy, and robust models, where the underlying physical laws are integrated into the training process of neural network models to achieve improved performance. This paper proposes a multivariate physics-informed convolutional autoencoder (PIConvAE) model to detect cyber anomalies in power distribution systems with unbalanced configurations and high penetration of DERs. The physical laws are integrated through a customized loss function that embeds the underlying Kirchhoff's circuit laws into the training process of the autoencoder. The performance of the multivariate PIConvAE model is evaluated on two unbalanced power distribution grids, the IEEE 123-bus system and a real-world feeder in Riverside, CA. The results show the exceptional performance of the proposed method in detecting various cyber anomalies in both systems. In addition, the model's effectiveness is evaluated in data-scarcity scenarios with different training data ratios. Finally, the model's performance is compared with existing machine learning models, where the PIConvAE model surpasses other models with considerably higher detection metrics.
[ "['Mehdi Jabbari Zideh' 'Sarika Khushalani Solanki']" ]
null
null
2406.02929
null
null
http://arxiv.org/pdf/2406.02929v1
2024-06-05T04:37:06Z
2024-06-05T04:37:06Z
Exploring Data Efficiency in Zero-Shot Learning with Diffusion Models
Zero-Shot Learning (ZSL) aims to enable classifiers to identify unseen classes by enhancing data efficiency at the class level. This is achieved by generating image features from pre-defined semantics of unseen classes. However, most current approaches heavily depend on the number of samples from seen classes, i.e. they do not consider instance-level effectiveness. In this paper, we demonstrate that limited seen examples generally result in deteriorated performance of generative models. To overcome these challenges, we propose ZeroDiff, a Diffusion-based Generative ZSL model. This unified framework incorporates diffusion models to improve data efficiency at both the class and instance levels. Specifically, for instance-level effectiveness, ZeroDiff utilizes a forward diffusion chain to transform limited data into an expanded set of noised data. For class-level effectiveness, we design a two-branch generation structure that consists of a Diffusion-based Feature Generator (DFG) and a Diffusion-based Representation Generator (DRG). DFG focuses on learning and sampling the distribution of cross-entropy-based features, whilst DRG learns the supervised contrastive-based representation to boost the zero-shot capabilities of DFG. Additionally, we employ three discriminators to evaluate generated features from various aspects and introduce a Wasserstein-distance-based mutual learning loss to transfer knowledge among discriminators, thereby enhancing guidance for generation. Demonstrated through extensive experiments on three popular ZSL benchmarks, our ZeroDiff not only achieves significant improvements over existing ZSL methods but also maintains robust performance even with scarce training data. Code will be released upon acceptance.
[ "['Zihan Ye' 'Shreyank N. Gowda' 'Xiaobo Jin' 'Xiaowei Huang' 'Haotian Xu'\n 'Yaochu Jin' 'Kaizhu Huang']" ]
null
null
2406.02939
null
null
http://arxiv.org/pdf/2406.02939v1
2024-06-05T04:54:36Z
2024-06-05T04:54:36Z
Achieving Near-Optimal Convergence for Distributed Minimax Optimization with Adaptive Stepsizes
In this paper, we show that applying adaptive methods directly to distributed minimax problems can result in non-convergence due to inconsistency in locally computed adaptive stepsizes. To address this challenge, we propose D-AdaST, a Distributed Adaptive minimax method with Stepsize Tracking. The key strategy is to employ an adaptive stepsize tracking protocol involving the transmission of two extra (scalar) variables. This protocol ensures consistency among the stepsizes of nodes, eliminating the steady-state error caused by the lack of stepsize coordination that commonly exists in vanilla distributed adaptive methods, and thus guarantees exact convergence. For nonconvex-strongly-concave distributed minimax problems, we characterize the specific transient times that ensure time-scale separation of stepsizes and quasi-independence of networks, leading to a near-optimal convergence rate of $\tilde{\mathcal{O}}\left(\epsilon^{-(4+\delta)}\right)$ for any small $\delta > 0$, matching that of the centralized counterpart. To the best of our knowledge, D-AdaST is the first distributed adaptive method achieving near-optimal convergence without knowing any problem-dependent parameters for nonconvex minimax problems. Extensive experiments are conducted to validate our theoretical results.
[ "['Yan Huang' 'Xiang Li' 'Yipeng Shen' 'Niao He' 'Jinming Xu']" ]
null
null
2406.02953
null
null
http://arxiv.org/pdf/2406.02953v1
2024-06-05T05:22:32Z
2024-06-05T05:22:32Z
GraphAlign: Pretraining One Graph Neural Network on Multiple Graphs via Feature Alignment
Graph self-supervised learning (SSL) holds considerable promise for mining and learning with graph-structured data. Yet, a significant challenge in graph SSL lies in the feature discrepancy among graphs across different domains. In this work, we aim to pretrain one graph neural network (GNN) on a varied collection of graphs endowed with rich node features and subsequently apply the pretrained GNN to unseen graphs. We present a general GraphAlign method that can be seamlessly integrated into the existing graph SSL framework. To align feature distributions across disparate graphs, GraphAlign designs alignment strategies for feature encoding and normalization, alongside a mixture-of-feature-expert module. Extensive experiments show that GraphAlign empowers existing graph SSL frameworks to pretrain a unified and powerful GNN across multiple graphs, showcasing performance superiority on both in-domain and out-of-domain graphs.
[ "['Zhenyu Hou' 'Haozhan Li' 'Yukuo Cen' 'Jie Tang' 'Yuxiao Dong']" ]
null
null
2406.02958
null
null
http://arxiv.org/pdf/2406.02958v1
2024-06-05T05:27:02Z
2024-06-05T05:27:02Z
PrE-Text: Training Language Models on Private Federated Data in the Age of LLMs
On-device training is currently the most common approach for training machine learning (ML) models on private, distributed user data. Despite this, on-device training has several drawbacks: (1) most user devices are too small to train large models on-device, (2) on-device training is communication- and computation-intensive, and (3) on-device training can be difficult to debug and deploy. To address these problems, we propose Private Evolution-Text (PrE-Text), a method for generating differentially private (DP) synthetic textual data. First, we show that across multiple datasets, training small models (models that fit on user devices) with PrE-Text synthetic data outperforms small models trained on-device under practical privacy regimes ($\epsilon=1.29$, $\epsilon=7.58$). We achieve these results while using 9$\times$ fewer rounds, 6$\times$ less client computation per round, and 100$\times$ less communication per round. Second, finetuning large models on PrE-Text's DP synthetic data improves large language model (LLM) performance on private data across the same range of privacy budgets. Altogether, these results suggest that training on DP synthetic data can be a better option than training a model on-device on private distributed data. Code is available at https://github.com/houcharlie/PrE-Text.
[ "['Charlie Hou' 'Akshat Shrivastava' 'Hongyuan Zhan' 'Rylan Conway'\n 'Trang Le' 'Adithya Sagar' 'Giulia Fanti' 'Daniel Lazar']" ]
null
null
2406.02959
null
null
http://arxiv.org/pdf/2406.02959v1
2024-06-05T05:27:29Z
2024-06-05T05:27:29Z
Adversarial Moment-Matching Distillation of Large Language Models
Knowledge distillation (KD) has been shown to be highly effective in guiding a student model with a larger teacher model and achieving practical benefits in improving the computational and memory efficiency for large language models (LLMs). State-of-the-art KD methods for LLMs mostly rely on minimizing explicit distribution distance between teacher and student probability predictions. Instead of optimizing these mandatory behaviour cloning objectives, we explore an imitation learning strategy for KD of LLMs. In particular, we minimize the imitation gap by matching the action-value moments of the teacher's behavior from both on- and off-policy perspectives. To achieve this action-value moment-matching goal, we propose an adversarial training algorithm to jointly estimate the moment-matching distance and optimize the student policy to minimize it. Results from both task-agnostic instruction-following experiments and task-specific experiments demonstrate the effectiveness of our method and achieve new state-of-the-art performance.
[ "['Chen Jia']" ]
null
null
2406.02969
null
null
http://arxiv.org/pdf/2406.02969v1
2024-06-05T05:53:50Z
2024-06-05T05:53:50Z
Filtered not Mixed: Stochastic Filtering-Based Online Gating for Mixture of Large Language Models
We propose MoE-F -- a formalised mechanism for combining $N$ pre-trained expert Large Language Models (LLMs) in online time-series prediction tasks by adaptively forecasting the best weighting of LLM predictions at every time step. Our mechanism leverages the conditional information in each expert's running performance to forecast the best combination of LLMs for predicting the time series in its next step. Diverging from static (learned) Mixture of Experts (MoE) methods, MoE-F employs time-adaptive stochastic filtering techniques to combine experts. By framing the expert selection problem as a finite state-space, continuous-time Hidden Markov model (HMM), we can leverage the Wonham-Shiryaev filter. Our approach first constructs $N$ parallel filters corresponding to each of the $N$ individual LLMs. Each filter proposes its best combination of LLMs, given the information that it has access to. Subsequently, the $N$ filter outputs are aggregated to optimize a lower bound for the loss of the aggregated LLMs, which can be optimized in closed form, thus generating our ensemble predictor. Our contributions here are: (I) the MoE-F algorithm -- deployable as a plug-and-play filtering harness, (II) theoretical optimality guarantees of the proposed filtering-based gating algorithm, and (III) empirical evaluation and ablative results using state-of-the-art foundational and MoE LLMs on a real-world Financial Market Movement task, where MoE-F attains a remarkable 17% absolute and 48.5% relative F1-measure improvement over the next best performing individual LLM expert.
[ "['Raeid Saqur' 'Anastasis Kratsios' 'Florian Krach' 'Yannick Limmer'\n 'Jacob-Junqi Tian' 'John Willes' 'Blanka Horvath' 'Frank Rudzicz']" ]
null
null
2406.02970
null
null
http://arxiv.org/pdf/2406.02970v1
2024-06-05T05:54:56Z
2024-06-05T05:54:56Z
Which exceptional low-dimensional projections of a Gaussian point cloud can be found in polynomial time?
Given $d$-dimensional standard Gaussian vectors $\boldsymbol{x}_1, \dots, \boldsymbol{x}_n$, we consider the set of all empirical distributions of its $m$-dimensional projections, for $m$ a fixed constant. Diaconis and Freedman (1984) proved that, if $n/d \to \infty$, all such distributions converge to the standard Gaussian distribution. In contrast, we study the proportional asymptotics, whereby $n, d \to \infty$ with $n/d \to \alpha \in (0, \infty)$. In this case, the projection of the data points along a typical random subspace is again Gaussian, but the set $\mathscr{F}_{m,\alpha}$ of all probability distributions that are asymptotically feasible as $m$-dimensional projections contains non-Gaussian distributions corresponding to exceptional subspaces. Non-rigorous methods from statistical physics yield an indirect characterization of $\mathscr{F}_{m,\alpha}$ in terms of a generalized Parisi formula. Motivated by the goal of putting this formula on a rigorous basis, and to understand whether these projections can be found efficiently, we study the subset $\mathscr{F}^{\rm alg}_{m,\alpha} \subseteq \mathscr{F}_{m,\alpha}$ of distributions that can be realized by a class of iterative algorithms. We prove that this set is characterized by a certain stochastic optimal control problem, and obtain a dual characterization of this problem in terms of a variational principle that extends Parisi's formula. As a byproduct, we obtain computationally achievable values for a class of random optimization problems including `generalized spherical perceptron' models.
[ "['Andrea Montanari' 'Kangjie Zhou']" ]
null
null
2406.02979
null
null
http://arxiv.org/pdf/2406.02979v1
2024-06-05T06:22:11Z
2024-06-05T06:22:11Z
Efficient User Sequence Learning for Online Services via Compressed Graph Neural Networks
Learning representations of user behavior sequences is crucial for various online services, such as online fraudulent transaction detection mechanisms. Graph Neural Networks (GNNs) have been extensively applied to model sequence relationships, and extract information from similar sequences. While user behavior sequence data volume is usually huge for online applications, directly applying GNN models may lead to substantial computational overhead during both the training and inference stages and make it challenging to meet real-time requirements for online services. In this paper, we leverage graph compression techniques to alleviate the efficiency issue. Specifically, we propose a novel unified framework called ECSeq, to introduce graph compression techniques into relation modeling for user sequence representation learning. The key module of ECSeq is sequence relation modeling, which explores relationships among sequences to enhance sequence representation learning, and employs graph compression algorithms to achieve high efficiency and scalability. ECSeq also exhibits plug-and-play characteristics, seamlessly augmenting pre-trained sequence representation models without modifications. Empirical experiments on both sequence classification and regression tasks demonstrate the effectiveness of ECSeq. Specifically, with an additional training time of tens of seconds in total on 100,000+ sequences and inference time preserved within $10^{-4}$ seconds/sample, ECSeq improves the prediction R@P$_{0.9}$ of the widely used LSTM by $\sim 5\%$.
[ "['Yucheng Wu' 'Liyue Chen' 'Yu Cheng' 'Shuai Chen' 'Jinyu Xu' 'Leye Wang']" ]
null
null
2406.02980
null
null
http://arxiv.org/pdf/2406.02980v1
2024-06-05T06:23:11Z
2024-06-05T06:23:11Z
Tensor Polynomial Additive Model
Additive models can be used for interpretable machine learning for their clarity and simplicity. However, in classical models for high-order data, the vectorization operation disrupts the data structure, which may lead to degenerated accuracy and increased computational complexity. To deal with these problems, we propose the tensor polynomial additive model (TPAM). It retains the multidimensional structure information of high-order inputs with tensor representation. The model parameter compression is achieved using a hierarchical and low-order symmetric tensor approximation. In this way, complex high-order feature interactions can be captured with fewer parameters. Moreover, TPAM preserves the inherent interpretability of additive models, facilitating transparent decision-making and the extraction of meaningful feature values. Additionally, leveraging TPAM's transparency and ability to handle higher-order features, we use it as a post-processing module for other interpretation models by introducing two variants for class activation maps. Experimental results on a series of datasets demonstrate that TPAM can enhance accuracy by up to 30% and compression rates by up to 5 times, while maintaining good interpretability.
[ "['Yang Chen' 'Ce Zhu' 'Jiani Liu' 'Yipeng Liu']" ]
null
null
2406.02981
null
null
http://arxiv.org/pdf/2406.02981v2
2024-06-07T08:44:52Z
2024-06-05T06:23:49Z
Local vs. Global Interpretability: A Computational Complexity Perspective
The local and global interpretability of various ML models has been studied extensively in recent years. However, despite significant progress in the field, many known results remain informal or lack sufficient mathematical rigor. We propose a framework for bridging this gap, by using computational complexity theory to assess local and global perspectives of interpreting ML models. We begin by proving two novel insights that are essential for our analysis: (1) a duality between local and global forms of explanations; and (2) the inherent uniqueness of certain global explanation forms. We then use these insights to evaluate the complexity of computing explanations, across three model types representing the extremes of the interpretability spectrum: (1) linear models; (2) decision trees; and (3) neural networks. Our findings offer insights into both the local and global interpretability of these models. For instance, under standard complexity assumptions such as P != NP, we prove that selecting global sufficient subsets in linear models is computationally harder than selecting local subsets. Interestingly, with neural networks and decision trees, the opposite is true: it is harder to carry out this task locally than globally. We believe that our findings demonstrate how examining explainability through a computational complexity lens can help us develop a more rigorous grasp of the inherent interpretability of ML models.
[ "['Shahaf Bassan' 'Guy Amir' 'Guy Katz']" ]
null
null
2406.02996
null
null
http://arxiv.org/pdf/2406.02996v1
2024-06-05T06:52:29Z
2024-06-05T06:52:29Z
Quantifying Task Priority for Multi-Task Optimization
The goal of multi-task learning is to learn diverse tasks within a single unified network. As each task has its own unique objective function, conflicts emerge during training, resulting in negative transfer among them. Earlier research identified these conflicting gradients in shared parameters between tasks and attempted to realign them in the same direction. However, we prove that such optimization strategies lead to sub-optimal Pareto solutions due to their inability to accurately determine the individual contributions of each parameter across various tasks. In this paper, we propose the concept of task priority to evaluate parameter contributions across different tasks. To learn task priority, we identify the type of connections related to links between parameters influenced by task-specific losses during backpropagation. The strength of connections is gauged by the magnitude of parameters to determine task priority. Based on these, we present a new method named connection strength-based optimization for multi-task learning which consists of two phases. The first phase learns the task priority within the network, while the second phase modifies the gradients while upholding this priority. This ultimately leads to finding new Pareto optimal solutions for multiple tasks. Through extensive experiments, we show that our approach greatly enhances multi-task performance in comparison to earlier gradient manipulation methods.
[ "['Wooseong Jeong' 'Kuk-Jin Yoon']" ]
null
null
2406.02997
null
null
http://arxiv.org/pdf/2406.02997v2
2024-06-12T09:06:10Z
2024-06-05T06:53:16Z
Residual Connections and Normalization Can Provably Prevent Oversmoothing in GNNs
Residual connections and normalization layers have become standard design choices for graph neural networks (GNNs), and were proposed as solutions to mitigate the oversmoothing problem in GNNs. However, how exactly these methods help alleviate the oversmoothing problem from a theoretical perspective is not well understood. In this work, we provide a formal and precise characterization of (linearized) GNNs with residual connections and normalization layers. We establish that (a) for residual connections, the incorporation of the initial features at each layer can prevent the signal from becoming too smooth, and determines the subspace of possible node representations; (b) batch normalization prevents a complete collapse of the output embedding space to a one-dimensional subspace through the individual rescaling of each column of the feature matrix. This results in the convergence of node representations to the top-$k$ eigenspace of the message-passing operator; (c) moreover, we show that the centering step of a normalization layer -- which can be understood as a projection -- alters the graph signal in message-passing in such a way that relevant information can become harder to extract. We therefore introduce a novel, principled normalization layer called GraphNormv2 in which the centering step is learned such that it does not distort the original graph signal in an undesirable way. Experimental results confirm the effectiveness of our method.
[ "['Michael Scholkemper' 'Xinyi Wu' 'Ali Jadbabaie' 'Michael T. Schaub']" ]
null
null
2406.03006
null
null
http://arxiv.org/pdf/2406.03006v1
2024-06-05T07:13:52Z
2024-06-05T07:13:52Z
Quantum Algorithms and Lower Bounds for Finite-Sum Optimization
Finite-sum optimization has wide applications in machine learning, covering important problems such as support vector machines, regression, etc. In this paper, we initiate the study of solving finite-sum optimization problems by quantum computing. Specifically, let $f_1, \ldots, f_n \colon \mathbb{R}^d \to \mathbb{R}$ be $\ell$-smooth convex functions and $\psi \colon \mathbb{R}^d \to \mathbb{R}$ be a $\mu$-strongly convex proximal function. The goal is to find an $\epsilon$-optimal point for $F(\mathbf{x}) = \frac{1}{n}\sum_{i=1}^n f_i(\mathbf{x}) + \psi(\mathbf{x})$. We give a quantum algorithm with complexity $\tilde{O}\big(n + \sqrt{d} + \sqrt{\ell/\mu}\big(n^{1/3}d^{1/3} + n^{-2/3}d^{5/6}\big)\big)$, improving the classical tight bound $\tilde{\Theta}\big(n + \sqrt{n\ell/\mu}\big)$. We also prove a quantum lower bound $\tilde{\Omega}(n + n^{3/4}(\ell/\mu)^{1/4})$ when $d$ is large enough. Both our quantum upper and lower bounds can extend to the cases where $\psi$ is not necessarily strongly convex, or each $f_i$ is Lipschitz but not necessarily smooth. In addition, when $F$ is nonconvex, our quantum algorithm can find an $\epsilon$-critical point using $\tilde{O}(n + \ell(d^{1/3}n^{1/3} + \sqrt{d})/\epsilon^2)$ queries.
[ "['Yexin Zhang' 'Chenyi Zhang' 'Cong Fang' 'Liwei Wang' 'Tongyang Li']" ]
null
null
2406.03007
null
null
http://arxiv.org/pdf/2406.03007v1
2024-06-05T07:14:28Z
2024-06-05T07:14:28Z
BadAgent: Inserting and Activating Backdoor Attacks in LLM Agents
With the prosperity of large language models (LLMs), powerful LLM-based intelligent agents have been developed to provide customized services with a set of user-defined tools. State-of-the-art methods for constructing LLM agents adopt trained LLMs and further fine-tune them on data for the agent task. However, we show that such methods are vulnerable to our proposed backdoor attacks named BadAgent on various agent tasks, where a backdoor can be embedded by fine-tuning on the backdoor data. At test time, the attacker can manipulate the deployed LLM agents to execute harmful operations by showing the trigger in the agent input or environment. To our surprise, our proposed attack methods are extremely robust even after fine-tuning on trustworthy data. Though backdoor attacks have been studied extensively in natural language processing, to the best of our knowledge, we could be the first to study them on LLM agents that are more dangerous due to the permission to use external tools. Our work demonstrates the clear risk of constructing LLM agents based on untrusted LLMs or data. Our code is public at https://github.com/DPamK/BadAgent
[ "['Yifei Wang' 'Dizhan Xue' 'Shengjie Zhang' 'Shengsheng Qian']" ]
null
null
2406.03012
null
null
http://arxiv.org/pdf/2406.03012v1
2024-06-05T07:20:06Z
2024-06-05T07:20:06Z
Analyzing the Influence of Training Samples on Explanations
EXplainable AI (XAI) constitutes a popular method to analyze the reasoning of AI systems by explaining their decision-making, e.g. providing a counterfactual explanation of how to achieve recourse. However, in cases such as unexpected explanations, the user might be interested in learning about the cause of this explanation -- e.g. properties of the utilized training data that are responsible for the observed explanation. Under the umbrella of data valuation, first approaches have been proposed that estimate the influence of data samples on a given model. In this work, we take a slightly different stance, as we are interested in the influence of single samples on a model explanation rather than the model itself. Hence, we propose the novel problem of identifying training data samples that have a high influence on a given explanation (or related quantity) and investigate the particular case of differences in the cost of the recourse between protected groups. For this, we propose an algorithm that identifies such influential training samples.
[ "['André Artelt' 'Barbara Hammer']" ]
null
null
2406.03030
null
null
http://arxiv.org/pdf/2406.03030v1
2024-06-05T07:57:17Z
2024-06-05T07:57:17Z
From Tarzan to Tolkien: Controlling the Language Proficiency Level of LLMs for Content Generation
We study the problem of controlling the difficulty level of text generated by Large Language Models (LLMs) for contexts where end-users are not fully proficient, such as language learners. Using a novel framework, we evaluate the effectiveness of several key approaches for this task, including few-shot prompting, supervised finetuning, and reinforcement learning (RL), utilising both GPT-4 and open-source alternatives like Llama2-7B and Mistral-7B. Our findings reveal a large performance gap between GPT-4 and the open-source models when using prompt-based strategies. However, we show how to bridge this gap with a careful combination of finetuning and RL alignment. Our best model, CALM (CEFR-Aligned Language Model), surpasses the performance of GPT-4 and other strategies, at only a fraction of the cost. We further validate the quality of our results through a small-scale human study.
[ "['Ali Malik' 'Stephen Mayhew' 'Chris Piech' 'Klinton Bicknell']" ]
null
null
2406.03033
null
null
http://arxiv.org/pdf/2406.03033v1
2024-06-05T08:02:40Z
2024-06-05T08:02:40Z
Optimal Multi-Fidelity Best-Arm Identification
In bandit best-arm identification, an algorithm is tasked with finding the arm with highest mean reward with a specified accuracy as fast as possible. We study multi-fidelity best-arm identification, in which the algorithm can choose to sample an arm at a lower fidelity (less accurate mean estimate) for a lower cost. Several methods have been proposed for tackling this problem, but their optimality remains elusive, notably due to loose lower bounds on the total cost needed to identify the best arm. Our first contribution is a tight, instance-dependent lower bound on the cost complexity. The study of the optimization problem featured in the lower bound provides new insights to devise computationally efficient algorithms, and leads us to propose a gradient-based approach with asymptotically optimal cost complexity. We demonstrate the benefits of the new algorithm compared to existing methods in experiments. Our theoretical and empirical findings also shed light on an intriguing concept of optimal fidelity for each arm.
[ "['Riccardo Poiani' 'Rémy Degenne' 'Emilie Kaufmann'\n 'Alberto Maria Metelli' 'Marcello Restelli']" ]
null
null
2406.03044
null
null
http://arxiv.org/pdf/2406.03044v1
2024-06-05T08:15:09Z
2024-06-05T08:15:09Z
Population Transformer: Learning Population-level Representations of Intracranial Activity
We present a self-supervised framework that learns population-level codes for intracranial neural recordings at scale, unlocking the benefits of representation learning for a key neuroscience recording modality. The Population Transformer (PopT) lowers the amount of data required for decoding experiments, while increasing accuracy, even on never-before-seen subjects and tasks. We address two key challenges in developing PopT: sparse electrode distribution and varying electrode location across patients. PopT stacks on top of pretrained representations and enhances downstream tasks by enabling learned aggregation of multiple spatially-sparse data channels. Beyond decoding, we interpret the pretrained PopT and fine-tuned models to show how it can be used to provide neuroscience insights learned from massive amounts of data. We release a pretrained PopT to enable off-the-shelf improvements in multi-channel intracranial data decoding and interpretability, and code is available at https://github.com/czlwang/PopulationTransformer.
[ "['Geeling Chau' 'Christopher Wang' 'Sabera Talukder'\n 'Vighnesh Subramaniam' 'Saraswati Soedarmadji' 'Yisong Yue' 'Boris Katz'\n 'Andrei Barbu']" ]
null
null
2406.03052
null
null
http://arxiv.org/pdf/2406.03052v1
2024-06-05T08:26:53Z
2024-06-05T08:26:53Z
Are Your Models Still Fair? Fairness Attacks on Graph Neural Networks via Node Injections
Despite the remarkable capabilities demonstrated by Graph Neural Networks (GNNs) in graph-related tasks, recent research has revealed the fairness vulnerabilities in GNNs when facing malicious adversarial attacks. However, all existing fairness attacks require manipulating the connectivity between existing nodes, which may be prohibited in reality. To this end, we introduce a Node Injection-based Fairness Attack (NIFA), exploring the vulnerabilities of GNN fairness in such a more realistic setting. In detail, NIFA first designs two insightful principles for node injection operations, namely the uncertainty-maximization principle and homophily-increase principle, and then optimizes injected nodes' feature matrix to further ensure the effectiveness of fairness attacks. Comprehensive experiments on three real-world datasets consistently demonstrate that NIFA can significantly undermine the fairness of mainstream GNNs, even including fairness-aware GNNs, by injecting merely 1% of nodes. We sincerely hope that our work can stimulate increasing attention from researchers on the vulnerability of GNN fairness, and encourage the development of corresponding defense mechanisms.
[ "['Zihan Luo' 'Hong Huang' 'Yongkang Zhou' 'Jiping Zhang' 'Nuo Chen']" ]
null
null
2406.03057
null
null
http://arxiv.org/pdf/2406.03057v1
2024-06-05T08:33:09Z
2024-06-05T08:33:09Z
BWS: Best Window Selection Based on Sample Scores for Data Pruning across Broad Ranges
Data subset selection aims to find a smaller yet informative subset of a large dataset that can approximate the full-dataset training, addressing challenges associated with training neural networks on large-scale datasets. However, existing methods tend to specialize in either high or low selection ratio regimes, lacking a universal approach that consistently achieves competitive performance across a broad range of selection ratios. We introduce a universal and efficient data subset selection method, Best Window Selection (BWS), by proposing a method to choose the best window subset from samples ordered based on their difficulty scores. This approach offers flexibility by allowing the choice of window intervals that span from easy to difficult samples. Furthermore, we provide an efficient mechanism for selecting the best window subset by evaluating its quality using kernel ridge regression. Our experimental results demonstrate the superior performance of BWS compared to other baselines across a broad range of selection ratios on datasets including CIFAR-10/100 and ImageNet, in scenarios involving both training from random initialization and fine-tuning of pre-trained models.
[ "['Hoyong Choi' 'Nohyun Ki' 'Hye Won Chung']" ]
null
null
2406.03059
null
null
http://arxiv.org/pdf/2406.03059v1
2024-06-05T08:37:41Z
2024-06-05T08:37:41Z
Efficient Exploration of the Rashomon Set of Rule Set Models
Today, as increasingly complex predictive models are developed, simple rule sets remain a crucial tool to obtain interpretable predictions and drive high-stakes decision making. However, a single rule set provides a partial representation of a learning task. An emerging paradigm in interpretable machine learning aims at exploring the Rashomon set of all models exhibiting near-optimal performance. Existing work on Rashomon-set exploration focuses on exhaustive search of the Rashomon set for particular classes of models, which can be a computationally challenging task. On the other hand, exhaustive enumeration leads to redundancy that often is not necessary, and a representative sample or an estimate of the size of the Rashomon set is sufficient for many applications. In this work, we propose, for the first time, efficient methods to explore the Rashomon set of rule set models with or without exhaustive search. Extensive experiments demonstrate the effectiveness of the proposed methods in a variety of scenarios.
[ "['Martino Ciaperoni' 'Han Xiao' 'Aristides Gionis']" ]