categories (string) | doi (string) | id (string) | year (float64) | venue (string) | link (string) | updated (string) | published (string) | title (string) | abstract (string) | authors (list)
---|---|---|---|---|---|---|---|---|---|---|
null | null | 2405.05890 | null | null | http://arxiv.org/pdf/2405.05890v1 | 2024-05-09T16:42:39Z | 2024-05-09T16:42:39Z | Safe Exploration Using Bayesian World Models and Log-Barrier
Optimization | A major challenge in deploying reinforcement learning in online tasks is ensuring that safety is maintained throughout the learning process. In this work, we propose CERL, a new method for solving constrained Markov decision processes while keeping the policy safe during learning. Our method leverages Bayesian world models and suggests policies that are pessimistic w.r.t. the model's epistemic uncertainty. This makes CERL robust towards model inaccuracies and leads to safe exploration during learning. In our experiments, we demonstrate that CERL outperforms the current state-of-the-art in terms of safety and optimality in solving CMDPs from image observations. | [
"['Yarden As' 'Bhavya Sukhija' 'Andreas Krause']"
]
|
null | null | 2405.05906 | null | null | http://arxiv.org/abs/2405.05906v1 | 2024-05-09T17:02:06Z | 2024-05-09T17:02:06Z | Deep Multi-Task Learning for Malware Image Classification | Malicious software is a pernicious global problem. A novel multi-task learning framework is proposed in this paper for malware image classification, enabling accurate and fast malware detection. We generate bitmap (BMP) and PNG images from malware features, which we feed to a deep learning classifier. Our state-of-the-art multi-task learning approach has been tested on a new dataset, for which we have collected approximately 100,000 benign and malicious PE, APK, Mach-o, and ELF examples. Experiments on seven tasks, each tested separately with four activation functions (ReLU, LeakyReLU, PReLU, and ELU), demonstrate that PReLU gives the highest accuracy, more than 99.87% on all tasks. Our model can effectively detect a variety of obfuscation methods such as packing, encryption, and instruction overlapping, strengthening the claimed benefits of our model, in addition to matching state-of-the-art methods in terms of accuracy. | [
"['Ahmed Bensaoud' 'Jugal Kalita']"
]
|
null | null | 2405.05925 | null | null | http://arxiv.org/pdf/2405.05925v2 | 2024-07-05T06:48:50Z | 2024-05-09T17:15:09Z | FuXi-ENS: A machine learning model for medium-range ensemble weather
forecasting | Ensemble forecasting is crucial for improving weather predictions, especially for forecasts of extreme events. Constructing an ensemble prediction system (EPS) based on conventional NWP models is highly computationally expensive. ML models have emerged as valuable tools for deterministic weather forecasts, providing forecasts with significantly reduced computational requirements and even surpassing the forecast performance of traditional NWP models. However, challenges arise when applying ML models to ensemble forecasting. Recent ML models, such as GenCast and SEEDS, rely on the ERA5 EDA or operational NWP ensemble members for forecast generation. Their spatial resolution is also considered too coarse for many applications. To overcome these limitations, we introduce FuXi-ENS, an advanced ML model designed to deliver 6-hourly global ensemble weather forecasts up to 15 days. This model runs at a significantly increased spatial resolution of 0.25°, incorporating 5 atmospheric variables at 13 pressure levels, along with 13 surface variables. By leveraging the inherent probabilistic nature of the Variational AutoEncoder (VAE), FuXi-ENS optimizes a loss function that combines the CRPS and the KL divergence between the predicted and target distributions, facilitating the incorporation of flow-dependent perturbations in both initial conditions and forecasts. This innovative approach makes FuXi-ENS an advancement over traditional methods that use an L1 loss combined with the KL loss in standard VAE models for ensemble weather forecasting. Results demonstrate that FuXi-ENS outperforms ensemble forecasts from the ECMWF, a world-leading NWP model, in CRPS for 98.1% of the 360 combinations of variable and forecast lead time. This achievement underscores the potential of the FuXi-ENS model to enhance ensemble weather forecasts, offering a promising direction for further development in this field. | [
"['Xiaohui Zhong' 'Lei Chen' 'Hao Li' 'Jun Liu' 'Xu Fan' 'Jie Feng'\n 'Kan Dai' 'Jing-Jia Luo' 'Jie Wu' 'Yuan Qi' 'Bo Lu']"
]
|
null | null | 2405.05934 | null | null | http://arxiv.org/pdf/2405.05934v1 | 2024-05-09T17:16:54Z | 2024-05-09T17:16:54Z | Theoretical Guarantees of Data Augmented Last Layer Retraining Methods | Ensuring fair predictions across many distinct subpopulations in the training data can be prohibitive for large models. Recently, simple linear last layer retraining strategies, in combination with data augmentation methods such as upweighting, downsampling and mixup, have been shown to achieve state-of-the-art performance for worst-group accuracy, which quantifies accuracy for the least prevalent subpopulation. For linear last layer retraining and the abovementioned augmentations, we present the optimal worst-group accuracy when modeling the distribution of the latent representations (input to the last layer) as Gaussian for each subpopulation. We evaluate and verify our results for both synthetic and large publicly available datasets. | [
"['Monica Welfert' 'Nathan Stromberg' 'Lalitha Sankar']"
]
|
null | null | 2405.05941 | null | null | http://arxiv.org/pdf/2405.05941v1 | 2024-05-09T17:30:16Z | 2024-05-09T17:30:16Z | Evaluating Real-World Robot Manipulation Policies in Simulation | The field of robotics has made significant advances towards generalist robot manipulation policies. However, real-world evaluation of such policies is not scalable and faces reproducibility challenges, which are likely to worsen as policies broaden the spectrum of tasks they can perform. We identify control and visual disparities between real and simulated environments as key challenges for reliable simulated evaluation and propose approaches for mitigating these gaps without needing to craft full-fidelity digital twins of real-world environments. We then employ these approaches to create SIMPLER, a collection of simulated environments for manipulation policy evaluation on common real robot setups. Through paired sim-and-real evaluations of manipulation policies, we demonstrate strong correlation between policy performance in SIMPLER environments and in the real world. Additionally, we find that SIMPLER evaluations accurately reflect real-world policy behavior modes such as sensitivity to various distribution shifts. We open-source all SIMPLER environments along with our workflow for creating new environments at https://simpler-env.github.io to facilitate research on general-purpose manipulation policies and simulated evaluation frameworks. | [
"['Xuanlin Li' 'Kyle Hsu' 'Jiayuan Gu' 'Karl Pertsch' 'Oier Mees'\n 'Homer Rich Walke' 'Chuyuan Fu' 'Ishikaa Lunawat' 'Isabel Sieh'\n 'Sean Kirmani' 'Sergey Levine' 'Jiajun Wu' 'Chelsea Finn' 'Hao Su'\n 'Quan Vuong' 'Ted Xiao']"
]
|
null | null | 2405.05950 | null | null | http://arxiv.org/pdf/2405.05950v1 | 2024-05-09T17:40:09Z | 2024-05-09T17:40:09Z | Federated Combinatorial Multi-Agent Multi-Armed Bandits | This paper introduces a federated learning framework tailored for online combinatorial optimization with bandit feedback. In this setting, agents select subsets of arms, observe noisy rewards for these subsets without accessing individual arm information, and can cooperate and share information at specific intervals. Our framework transforms any offline resilient single-agent $(\alpha-\epsilon)$-approximation algorithm, having a complexity of $\tilde{\mathcal{O}}(\frac{\psi}{\epsilon^\beta})$ (where the logarithm is omitted), for some function $\psi$ and constant $\beta$, into an online multi-agent algorithm with $m$ communicating agents and an $\alpha$-regret of no more than $\tilde{\mathcal{O}}(m^{-\frac{1}{3+\beta}} \psi^{\frac{1}{3+\beta}} T^{\frac{2+\beta}{3+\beta}})$. This approach not only eliminates the $\epsilon$ approximation error but also ensures sublinear growth with respect to the time horizon $T$ and demonstrates a linear speedup with an increasing number of communicating agents. Additionally, the algorithm is notably communication-efficient, requiring only a sublinear number of communication rounds, quantified as $\tilde{\mathcal{O}}\left(\psi T^{\frac{\beta}{\beta+1}}\right)$. Furthermore, the framework has been successfully applied to online stochastic submodular maximization using various offline algorithms, yielding the first results for both single-agent and multi-agent settings and recovering specialized single-agent theoretical guarantees. We empirically validate our approach on a stochastic data summarization problem, illustrating the effectiveness of the proposed framework, even in single-agent scenarios. | [
"['Fares Fourati' 'Mohamed-Slim Alouini' 'Vaneet Aggarwal']"
]
|
null | null | 2405.05959 | null | null | http://arxiv.org/abs/2405.05959v2 | 2024-06-17T08:54:51Z | 2024-05-09T17:55:16Z | Self-Supervised Learning of Time Series Representation via Diffusion
Process and Imputation-Interpolation-Forecasting Mask | Time Series Representation Learning (TSRL) focuses on generating informative representations for various Time Series (TS) modeling tasks. Traditional Self-Supervised Learning (SSL) methods in TSRL fall into four main categories: reconstructive, adversarial, contrastive, and predictive, each with a common challenge of sensitivity to noise and intricate data nuances. Recently, diffusion-based methods have shown advanced generative capabilities. However, they primarily target specific application scenarios like imputation and forecasting, leaving a gap in leveraging diffusion models for generic TSRL. Our work, Time Series Diffusion Embedding (TSDE), bridges this gap as the first diffusion-based SSL TSRL approach. TSDE segments TS data into observed and masked parts using an Imputation-Interpolation-Forecasting (IIF) mask. It applies a trainable embedding function, featuring dual-orthogonal Transformer encoders with a crossover mechanism, to the observed part. We train a reverse diffusion process conditioned on the embeddings, designed to predict noise added to the masked part. Extensive experiments demonstrate TSDE's superiority in imputation, interpolation, forecasting, anomaly detection, classification, and clustering. We also conduct an ablation study, present embedding visualizations, and compare inference speed, further substantiating TSDE's efficiency and validity in learning representations of TS data. | [
"['Zineb Senane' 'Lele Cao' 'Valentin Leonhard Buchner' 'Yusuke Tashiro'\n 'Lei You' 'Pawel Herman' 'Mats Nordahl' 'Ruibo Tu'\n 'Vilhelm von Ehrenheim']"
]
|
null | null | 2405.05962 | null | null | http://arxiv.org/pdf/2405.05962v2 | 2024-07-05T14:01:11Z | 2024-05-09T17:58:25Z | Age Aware Scheduling for Differentially-Private Federated Learning | This paper explores differentially-private federated learning (FL) across time-varying databases, delving into a nuanced three-way tradeoff involving age, accuracy, and differential privacy (DP). Emphasizing the potential advantages of scheduling, we propose an optimization problem aimed at meeting DP requirements while minimizing the loss difference between the aggregated model and the model obtained without DP constraints. To harness the benefits of scheduling, we introduce an age-dependent upper bound on the loss, leading to the development of an age-aware scheduling design. Simulation results underscore the superior performance of our proposed scheme compared to FL with classic DP, which does not consider scheduling as a design factor. This research contributes insights into the interplay of age, accuracy, and DP in federated learning, with practical implications for scheduling strategies. | [
"['Kuan-Yu Lin' 'Hsuan-Yin Lin' 'Yu-Pin Hsu' 'Yu-Chih Huang']"
]
|
null | null | 2405.05967 | null | null | http://arxiv.org/pdf/2405.05967v2 | 2024-06-13T18:28:54Z | 2024-05-09T17:59:40Z | Distilling Diffusion Models into Conditional GANs | We propose a method to distill a complex multistep diffusion model into a single-step conditional GAN student model, dramatically accelerating inference while preserving image quality. Our approach interprets diffusion distillation as a paired image-to-image translation task, using noise-to-image pairs of the diffusion model's ODE trajectory. For efficient regression loss computation, we propose E-LatentLPIPS, a perceptual loss operating directly in the diffusion model's latent space, utilizing an ensemble of augmentations. Furthermore, we adapt a diffusion model to construct a multi-scale discriminator with a text alignment loss to build an effective conditional GAN-based formulation. E-LatentLPIPS converges more efficiently than many existing distillation methods, even accounting for dataset construction costs. We demonstrate that our one-step generator outperforms cutting-edge one-step diffusion distillation models -- DMD, SDXL-Turbo, and SDXL-Lightning -- on the zero-shot COCO benchmark. | [
"['Minguk Kang' 'Richard Zhang' 'Connelly Barnes' 'Sylvain Paris'\n 'Suha Kwak' 'Jaesik Park' 'Eli Shechtman' 'Jun-Yan Zhu' 'Taesung Park']"
]
|
null | null | 2405.05968 | null | null | http://arxiv.org/pdf/2405.05968v2 | 2024-07-08T17:20:19Z | 2024-05-09T17:59:55Z | A Universal Growth Rate for Learning with Smooth Surrogate Losses | This paper presents a comprehensive analysis of the growth rate of $H$-consistency bounds (and excess error bounds) for various surrogate losses used in classification. We prove a square-root growth rate near zero for smooth margin-based surrogate losses in binary classification, providing both upper and lower bounds under mild assumptions. This result also translates to excess error bounds. Our lower bound requires weaker conditions than those in previous work for excess error bounds, and our upper bound is entirely novel. Moreover, we extend this analysis to multi-class classification with a series of novel results, demonstrating a universal square-root growth rate for smooth comp-sum and constrained losses, covering common choices for training neural networks in multi-class classification. Given this universal rate, we turn to the question of choosing among different surrogate losses. We first examine how $H$-consistency bounds vary across surrogates based on the number of classes. Next, ignoring constants and focusing on behavior near zero, we identify minimizability gaps as the key differentiating factor in these bounds. Thus, we thoroughly analyze these gaps, to guide surrogate loss selection, covering: comparisons across different comp-sum losses, conditions where gaps become zero, and general conditions leading to small gaps. Additionally, we demonstrate the key role of minimizability gaps in comparing excess error bounds and $H$-consistency bounds. | [
"['Anqi Mao' 'Mehryar Mohri' 'Yutao Zhong']"
]
|
null | null | 2405.05980 | null | null | http://arxiv.org/pdf/2405.05980v1 | 2024-05-07T10:04:08Z | 2024-05-07T10:04:08Z | Overcoming challenges of translating deep-learning models for
glioblastoma: the ZGBM consortium | Objective: To report imaging protocol and scheduling variance in routine care of glioblastoma patients in order to demonstrate challenges of integrating deep-learning models in glioblastoma care pathways. Additionally, to understand the most common imaging studies and image contrasts to inform the development of potentially robust deep-learning models. Methods: MR imaging data were analysed from a random sample of five patients from the prospective cohort across five participating sites of the ZGBM consortium. Reported clinical and treatment data alongside DICOM header information were analysed to understand treatment pathway imaging schedules. Results: All sites perform all structural imaging at every stage in the pathway except for the presurgical study, where in some sites only contrast-enhanced T1-weighted imaging is performed. Diffusion MRI is the most common non-structural imaging type, performed at every site. Conclusion: The imaging protocol and scheduling varies across the UK, making it challenging to develop machine-learning models that could perform robustly at other centres. Structural imaging is performed most consistently across all centres. Advances in knowledge: Successful translation of deep-learning models will likely be based on structural post-treatment imaging unless there is significant effort made to standardise non-structural or peri-operative imaging protocols and schedules. | [
"['Haris Shuaib' 'Gareth J Barker' 'Peter Sasieni' 'Enrico De Vita'\n 'Alysha Chelliah' 'Roman Andrei' 'Keyoumars Ashkan' 'Erica Beaumont'\n 'Lucy Brazil' 'Chris Rowland-Hill' 'Yue Hui Lau' 'Aysha Luis'\n 'James Powell' 'Angela Swampillai' 'Sean Tenant' 'Stefanie C Thust'\n 'Stephen Wastling' 'Tom Young' 'Thomas C Booth']"
]
|
null | null | 2405.05981 | null | null | http://arxiv.org/pdf/2405.05981v1 | 2024-05-07T10:54:20Z | 2024-05-07T10:54:20Z | Scalable physical source-to-field inference with hypernetworks | We present a generative model that amortises computation for the field around e.g. gravitational or magnetic sources. Exact numerical calculation has either computational complexity $\mathcal{O}(M \times N)$ in the number of sources and field evaluation points, or requires a fixed evaluation grid to exploit fast Fourier transforms. Using an architecture where a hypernetwork produces an implicit representation of the field around a source collection, our model instead performs as $\mathcal{O}(M + N)$, achieves accuracy of $\sim\!4\%-6\%$, and allows evaluation at arbitrary locations for arbitrary numbers of sources, greatly increasing the speed of e.g. physics simulations. We also examine a model relating to the physical properties of the output field and develop two-dimensional examples to demonstrate its application. The code for these models and experiments is available at https://github.com/cmt-dtu-energy/hypermagnetics. | [
"['Berian James' 'Stefan Pollok' 'Ignacio Peis' 'Jes Frellsen'\n 'Rasmus Bjørk']"
]
|
null | null | 2405.05983 | null | null | http://arxiv.org/pdf/2405.05983v1 | 2024-05-08T03:18:46Z | 2024-05-08T03:18:46Z | Real-Time Pill Identification for the Visually Impaired Using Deep
Learning | The prevalence of mobile technology offers unique opportunities for addressing healthcare challenges, especially for individuals with visual impairments. This paper explores the development and implementation of a deep learning-based mobile application designed to assist blind and visually impaired individuals in real-time pill identification. Utilizing the YOLO framework, the application aims to accurately recognize and differentiate between various pill types through real-time image processing on mobile devices. The system incorporates Text-to-Speech (TTS) to provide immediate auditory feedback, enhancing usability and independence for visually impaired users. Our study evaluates the application's effectiveness in terms of detection accuracy and user experience, highlighting its potential to improve medication management and safety among the visually impaired community. Keywords: Deep Learning; YOLO Framework; Mobile Application; Visual Impairment; Pill Identification; Healthcare | [
"['Bo Dang' 'Wenchao Zhao' 'Yufeng Li' 'Danqing Ma' 'Qixuan Yu'\n 'Elly Yijun Zhu']"
]
|
null | null | 2405.05984 | null | null | http://arxiv.org/pdf/2405.05984v1 | 2024-05-08T03:35:52Z | 2024-05-08T03:35:52Z | Few-Shot Class Incremental Learning via Robust Transformer Approach | Few-Shot Class-Incremental Learning presents an extension of the Class-Incremental Learning problem in which a model faces data scarcity in addition to the catastrophic forgetting (CF) problem. This remains an open problem because recent works are built upon convolutional neural networks, which perform sub-optimally compared to transformer approaches. Our paper presents the Robust Transformer Approach (ROBUSTA), built upon the Compact Convolution Transformer. The issue of overfitting due to few samples is overcome with the notion of the stochastic classifier, where the classifier's weights are sampled from a distribution with mean and variance vectors, thus increasing the likelihood of correct classifications, and with a batch-norm layer to stabilize the training process. The issue of CF is addressed with the idea of delta parameters, small task-specific trainable parameters, while keeping the backbone network frozen. A non-parametric approach is developed to infer the delta parameters for the model's predictions. The prototype rectification approach is applied to avoid biased prototype calculations due to data scarcity. The advantage of ROBUSTA is demonstrated through a series of experiments on benchmark problems, where it is capable of outperforming prior art by large margins without any data augmentation protocols. | [
"['Naeem Paeedeh' 'Mahardhika Pratama' 'Sunu Wibirama' 'Wolfgang Mayer'\n 'Zehong Cao' 'Ryszard Kowalczyk']"
]
|
null | null | 2405.05985 | null | null | http://arxiv.org/pdf/2405.05985v1 | 2024-05-08T07:48:40Z | 2024-05-08T07:48:40Z | TrafficGPT: Towards Multi-Scale Traffic Analysis and Generation with
Spatial-Temporal Agent Framework | The precise prediction of multi-scale traffic is a ubiquitous challenge in the urbanization process for car owners, road administrators, and governments. In the case of complex road networks, current and past traffic information from both upstream and downstream roads is crucial, since various road networks carry different semantic information about traffic. Rationalizing the utilization of semantic information can enable short-term, long-term, and unseen road traffic prediction. As demand for multi-scale traffic analysis increases, on-demand interactions and visualizations are expected to be available for transportation participants. We have designed a multi-scale traffic generation system, namely TrafficGPT, using three AI agents to process multi-scale traffic data, conduct multi-scale traffic analysis, and present multi-scale visualization results. TrafficGPT consists of three essential AI agents: 1) a text-to-demand agent that employs Question & Answer AI to interact with users and extract prediction tasks from text; 2) a traffic prediction agent that leverages multi-scale traffic data to generate temporal features and similarity, and fuses them with limited spatial features and similarity, to achieve accurate prediction on three tasks; and 3) a suggestion and visualization agent that uses the prediction results to generate suggestions and visualizations, providing users with a comprehensive understanding of traffic conditions. Our TrafficGPT system focuses on addressing transportation participants' concerns about traffic prediction, and we conducted extensive experiments on five real-world road datasets to demonstrate its superior predictive and interactive performance. | [
"['Jinhui Ouyang' 'Yijie Zhu' 'Xiang Yuan' 'Di Wu']"
]
|
null | null | 2405.05987 | null | null | http://arxiv.org/pdf/2405.05987v2 | 2024-06-08T18:49:34Z | 2024-05-08T14:15:51Z | Physics-Enhanced Machine Learning: a position paper for dynamical
systems investigations | This position paper takes a broad look at Physics-Enhanced Machine Learning (PEML) -- also known as Scientific Machine Learning -- with particular focus on those PEML strategies developed to tackle dynamical systems' challenges. The need to go beyond Machine Learning (ML) strategies is driven by: (i) the limited volume of informative data; (ii) avoiding accurate-but-wrong predictions; (iii) dealing with uncertainties; and (iv) providing explainable and interpretable inferences. A general definition of PEML is provided by considering four physics and domain knowledge biases, and three broad groups of PEML approaches are discussed: physics-guided, physics-encoded and physics-informed. The advantages and challenges in developing PEML strategies for guiding high-consequence decision making in engineering applications involving complex dynamical systems are presented. | [
"['Alice Cicirello']"
]
|
null | null | 2405.05988 | null | null | http://arxiv.org/pdf/2405.05988v1 | 2024-05-08T21:12:33Z | 2024-05-08T21:12:33Z | CloudSense: A Model for Cloud Type Identification using Machine Learning
from Radar data | The knowledge of the type of precipitating cloud is crucial for radar-based quantitative estimates of precipitation. We propose a novel model called CloudSense, which uses machine learning to accurately identify the type of precipitating clouds over the complex terrain locations in the Western Ghats (WGs) of India. CloudSense uses vertical reflectivity profiles collected during July-August 2018 from an X-band radar to classify clouds into four categories, namely stratiform, mixed stratiform-convective, convective, and shallow clouds. The machine learning (ML) model used in CloudSense was trained using a dataset balanced by the Synthetic Minority Oversampling Technique (SMOTE), with features selected based on physical characteristics relevant to different cloud types. Among the various ML models evaluated, the Light Gradient Boosting Machine (LightGBM) demonstrated superior performance in classifying cloud types, with a balanced accuracy (BAC) of 0.8 and an F1-score of 0.82. CloudSense-generated results are also compared against conventional radar algorithms, and we find that CloudSense performs better. For the 200 samples tested, the radar algorithm achieved a BAC of 0.69 and an F1-score of 0.68, whereas CloudSense achieved a BAC and F1-score of 0.77. Our results show that an ML-based approach can provide more accurate cloud detection and classification, which would be useful for improving precipitation estimates over the complex terrain of the WGs. | [
"['Mehzooz Nizar' 'Jha K. Ambuj' 'Manmeet Singh' 'Vaisakh S. B'\n 'G. Pandithurai']"
]
|
null | null | 2405.05989 | null | null | http://arxiv.org/pdf/2405.05989v2 | 2024-05-14T00:39:43Z | 2024-05-09T00:08:21Z | Clustering-based Multitasking Deep Neural Network for Solar
Photovoltaics Power Generation Prediction | The increasing installation of photovoltaic (PV) cells leads to greater generation from renewable energy sources (RES), but results in increased uncertainty in energy scheduling. Predicting PV power generation is important for energy management and dispatch optimization in the smart grid. However, PV power generation data is often collected across different types of customers (e.g., residential, agricultural, industrial, and commercial) while the customer information is always de-identified. This often results in a forecasting model trained with all PV power generation data, allowing the predictor to learn various patterns through intra-model self-learning, instead of constructing a separate predictor for each customer type. In this paper, we propose a clustering-based multitasking deep neural network (CM-DNN) framework for PV power generation prediction. K-means is applied to cluster the data into different customer types. For each type, a deep neural network (DNN) is employed and trained until the accuracy cannot be improved. Subsequently, for a specified customer type (i.e., the target task), inter-model knowledge transfer is conducted to enhance its training accuracy. During this process, source task selection is designed to choose the optimal subset of tasks (excluding the target customer), and each selected source task uses a coefficient to determine the amount of DNN model knowledge (weights and biases) transferred to the aimed prediction task. The proposed CM-DNN is tested on a real-world PV power generation dataset, and its superiority is demonstrated by comparing its prediction performance with that of a single model trained on the dataset without clustering. | [
"['Hui Song' 'Zheng Miao' 'Ali Babalhavaeji' 'Saman Mehrnia' 'Mahdi Jalili'\n 'Xinghuo Yu']"
]
|
null | null | 2405.05990 | null | null | http://arxiv.org/pdf/2405.05990v2 | 2024-05-20T14:40:03Z | 2024-05-09T02:35:32Z | Special Characters Attack: Toward Scalable Training Data Extraction From
Large Language Models | Large language models (LLMs) have achieved remarkable performance on a wide range of tasks. However, recent studies have shown that LLMs can memorize training data and that simple repeated tokens can trick the model into leaking the data. In this paper, we take a step further and show that certain special characters or their combinations with English letters are stronger memory triggers, leading to more severe data leakage. The intuition is that, since LLMs are trained with massive data that contains a substantial amount of special characters (e.g. structural symbols {, } of JSON files, and @, # in emails and online posts), the model may memorize the co-occurrence between these special characters and the raw texts. This motivates us to propose a simple but effective Special Characters Attack (SCA) to induce training data leakage. Our experiments verify the high effectiveness of SCA against state-of-the-art LLMs: they can leak diverse training data, such as code corpus, web pages, and personally identifiable information, and sometimes generate non-stop outputs as a byproduct. We further show that the composition of the training data corpus can be revealed by inspecting the leaked data -- one crucial piece of information for pre-training high-performance LLMs. Our work can help understand the sensitivity of LLMs to special characters and identify potential areas for improvement. | [
"['Yang Bai' 'Ge Pei' 'Jindong Gu' 'Yong Yang' 'Xingjun Ma']"
]
|
null | null | 2405.05991 | null | null | http://arxiv.org/pdf/2405.05991v1 | 2024-05-09T02:35:46Z | 2024-05-09T02:35:46Z | Agent-oriented Joint Decision Support for Data Owners in Auction-based
Federated Learning | Auction-based Federated Learning (AFL) has attracted extensive research interest due to its ability to motivate data owners (DOs) to join FL through economic means. While many existing AFL methods focus on providing decision support to model users (MUs) and the AFL auctioneer, decision support for data owners remains open. To bridge this gap, we propose a first-of-its-kind agent-oriented joint Pricing, Acceptance and Sub-delegation decision support approach for data owners in AFL (PAS-AFL). By considering a DO's current reputation, pending FL tasks, willingness to train FL models, and its trust relationships with other DOs, it provides a systematic approach for a DO to make joint decisions on AFL bid acceptance, task sub-delegation and pricing based on Lyapunov optimization to maximize its utility. It is the first to enable each DO to take on multiple FL tasks simultaneously to earn higher income for DOs and enhance the throughput of FL tasks in the AFL ecosystem. Extensive experiments based on six benchmarking datasets demonstrate significant advantages of PAS-AFL compared to six alternative strategies, beating the best baseline by 28.77% and 2.64% on average in terms of utility and test accuracy of the resulting FL models, respectively. | [
"['Xiaoli Tang' 'Han Yu' 'Xiaoxiao Li']"
]
|
null | null | 2405.05993 | null | null | http://arxiv.org/pdf/2405.05993v1 | 2024-05-09T04:06:44Z | 2024-05-09T04:06:44Z | Precision Rehabilitation for Patients Post-Stroke based on Electronic
Health Records and Machine Learning | In this study, we utilized statistical analysis and machine learning methods to examine whether rehabilitation exercises can improve patients' post-stroke functional abilities, as well as to forecast the improvement in functional abilities. Our dataset comprises patients' rehabilitation exercises and demographic information recorded in unstructured electronic health records (EHRs) and free-text rehabilitation procedure notes. We collected data for 265 stroke patients from the University of Pittsburgh Medical Center. We employed a pre-existing natural language processing (NLP) algorithm to extract data on rehabilitation exercises and developed a rule-based NLP algorithm to extract Activity Measure for Post-Acute Care (AM-PAC) scores, covering basic mobility (BM) and applied cognitive (AC) domains, from procedure notes. Changes in AM-PAC scores were classified based on the minimal clinically important difference (MCID), and significance was assessed using Friedman and Wilcoxon tests. To identify impactful exercises, we used Chi-square tests, Fisher's exact tests, and logistic regression for odds ratios. Additionally, we developed five machine learning models, namely logistic regression (LR), AdaBoost (ADB), support vector machine (SVM), gradient boosting (GB), and random forest (RF), to predict outcomes in functional ability. Statistical analyses revealed significant associations between functional improvements and specific exercises. The RF model achieved the best performance in predicting functional outcomes. In this study, we identified three rehabilitation exercises that significantly contributed to patients' post-stroke functional ability improvement in the first two months. Additionally, the successful application of a machine learning model to predict patient-specific functional outcomes underscores the potential for precision rehabilitation. | [
"['Fengyi Gao' 'Xingyu Zhang' 'Sonish Sivarajkumar' 'Parker Denny'\n 'Bayan Aldhahwani' 'Shyam Visweswaran' 'Ryan Shi' 'William Hogan'\n 'Allyn Bove' 'Yanshan Wang']"
]
|
null | null | 2405.05998 | null | null | http://arxiv.org/pdf/2405.05998v2 | 2024-05-28T10:59:16Z | 2024-05-09T09:34:51Z | Whole Genome Transformer for Gene Interaction Effects in Microbiome
Habitat Specificity | Leveraging the vast genetic diversity within microbiomes offers unparalleled insights into complex phenotypes, yet the task of accurately predicting and understanding such traits from genomic data remains challenging. We propose a framework taking advantage of existing large models for gene vectorization to predict habitat specificity from entire microbial genome sequences. Based on our model, we develop attribution techniques to elucidate gene interaction effects that drive microbial adaptation to diverse environments. We train and validate our approach on a large dataset of high-quality microbiome genomes from different habitats. We not only demonstrate solid predictive performance, but also show how sequence-level information of entire genomes allows us to identify gene associations underlying complex phenotypes. Our attribution recovers known important interaction networks and proposes new candidates for experimental follow-up. | [
"['Zhufeng Li' 'Sandeep S Cranganore' 'Nicholas Youngblut' 'Niki Kilbertus']"
]
|
null | null | 2405.05999 | null | null | http://arxiv.org/pdf/2405.05999v1 | 2024-05-09T09:37:22Z | 2024-05-09T09:37:22Z | LLMPot: Automated LLM-based Industrial Protocol and Physical Process
Emulation for ICS Honeypots | Industrial Control Systems (ICS) are extensively used in critical infrastructures ensuring efficient, reliable, and continuous operations. However, their increasing connectivity and addition of advanced features make them vulnerable to cyber threats, potentially leading to severe disruptions in essential services. In this context, honeypots play a vital role by acting as decoy targets within ICS networks, or on the Internet, helping to detect, log, analyze, and develop mitigations for ICS-specific cyber threats. Deploying ICS honeypots, however, is challenging due to the necessity of accurately replicating industrial protocols and device characteristics, a crucial requirement for effectively mimicking the unique operational behavior of different industrial systems. Moreover, this challenge is compounded by the significant manual effort required in also mimicking the control logic the PLC would execute, in order to capture attacker traffic aiming to disrupt critical infrastructure operations. In this paper, we propose LLMPot, a novel approach for designing honeypots in ICS networks harnessing the potency of Large Language Models (LLMs). LLMPot aims to automate and optimize the creation of realistic honeypots with vendor-agnostic configurations, and for any control logic, aiming to eliminate the manual effort and specialized knowledge traditionally required in this domain. We conducted extensive experiments focusing on a wide array of parameters, demonstrating that our LLM-based approach can effectively create honeypot devices implementing different industrial protocols and diverse control logic. | [
"['Christoforos Vasilatos' 'Dunia J. Mahboobeh' 'Hithem Lamri'\n 'Manaar Alam' 'Michail Maniatakos']"
]
|
null | null | 2405.06001 | null | null | http://arxiv.org/pdf/2405.06001v1 | 2024-05-09T11:49:05Z | 2024-05-09T11:49:05Z | LLM-QBench: A Benchmark Towards the Best Practice for Post-training
Quantization of Large Language Models | Recent advancements in large language models (LLMs) are propelling us toward artificial general intelligence, thanks to their remarkable emergent abilities and reasoning capabilities. However, the substantial computational and memory requirements of LLMs limit their widespread adoption. Quantization, a key compression technique, offers a viable solution to mitigate these demands by compressing and accelerating LLMs, albeit with potential risks to model accuracy. Numerous studies have aimed to minimize the accuracy loss associated with quantization. However, the quantization configurations in these studies vary and may not be optimized for hardware compatibility. In this paper, we focus on identifying the most effective practices for quantizing LLMs, with the goal of balancing performance with computational efficiency. For a fair analysis, we develop a quantization toolkit LLMC, and design four crucial principles considering the inference efficiency, quantized accuracy, calibration cost, and modularization. By benchmarking on various models and datasets with over 500 experiments, three takeaways corresponding to calibration data, quantization algorithm, and quantization schemes are derived. Finally, a best-practice LLM PTQ pipeline is constructed. All the benchmark results and the toolkit can be found at https://github.com/ModelTC/llmc. | [
"['Ruihao Gong' 'Yang Yong' 'Shiqiao Gu' 'Yushi Huang' 'Yunchen Zhang'\n 'Xianglong Liu' 'Dacheng Tao']"
]
|
null | null | 2405.06003 | null | null | http://arxiv.org/pdf/2405.06003v1 | 2024-05-09T15:56:29Z | 2024-05-09T15:56:29Z | Binary Hypothesis Testing for Softmax Models and Leverage Score Models | Softmax distributions are widely used in machine learning, including Large Language Models (LLMs) where the attention unit uses softmax distributions. We abstract the attention unit as the softmax model, where given a vector input, the model produces an output drawn from the softmax distribution (which depends on the vector input). We consider the fundamental problem of binary hypothesis testing in the setting of softmax models. That is, given an unknown softmax model, which is known to be one of the two given softmax models, how many queries are needed to determine which one is the truth? We show that the sample complexity is asymptotically $O(\epsilon^{-2})$ where $\epsilon$ is a certain distance between the parameters of the models. Furthermore, we draw analogy between the softmax model and the leverage score model, an important tool for algorithm design in linear algebra and graph theory. The leverage score model, on a high level, is a model which, given vector input, produces an output drawn from a distribution dependent on the input. We obtain similar results for the binary hypothesis testing problem for leverage score models. | [
"['Yeqi Gao' 'Yuzhou Gu' 'Zhao Song']"
]
|
null | null | 2405.06004 | null | null | http://arxiv.org/pdf/2405.06004v1 | 2024-05-09T16:42:13Z | 2024-05-09T16:42:13Z | EWMoE: An effective model for global weather forecasting with
mixture-of-experts | Weather forecasting is a crucial task for meteorological research, with direct social and economic impacts. Recently, data-driven weather forecasting models based on deep learning have shown great potential, achieving superior performance compared with traditional numerical weather prediction methods. However, these models often require massive training data and computational resources. In this paper, we propose EWMoE, an effective model for accurate global weather forecasting, which requires significantly less training data and fewer computational resources. Our model incorporates three key components to enhance prediction accuracy: meteorology-specific embedding, a core Mixture-of-Experts (MoE) layer, and two specific loss functions. We conduct our evaluation on the ERA5 dataset using only two years of training data. Extensive experiments demonstrate that EWMoE outperforms current models such as FourCastNet and ClimaX at all forecast times, achieving competitive performance compared with the state-of-the-art Pangu-Weather model in evaluation metrics such as Anomaly Correlation Coefficient (ACC) and Root Mean Square Error (RMSE). Additionally, ablation studies indicate that applying the MoE architecture to weather forecasting offers significant advantages in improving accuracy and resource efficiency. | [
"['Lihao Gan' 'Xin Man' 'Chenghong Zhang' 'Jie Shao']"
]
|
null | null | 2405.06008 | null | null | http://arxiv.org/pdf/2405.06008v1 | 2024-05-09T18:00:00Z | 2024-05-09T18:00:00Z | Wilsonian Renormalization of Neural Network Gaussian Processes | Separating relevant and irrelevant information is key to any modeling process or scientific inquiry. Theoretical physics offers a powerful tool for achieving this in the form of the renormalization group (RG). Here we demonstrate a practical approach to performing Wilsonian RG in the context of Gaussian Process (GP) Regression. We systematically integrate out the unlearnable modes of the GP kernel, thereby obtaining an RG flow of the Gaussian Process in which the data plays the role of the energy scale. In simple cases, this results in a universal flow of the ridge parameter, which becomes input-dependent in the richer scenario in which non-Gaussianities are included. In addition to being analytically tractable, this approach goes beyond structural analogies between RG and neural networks by providing a natural connection between RG flow and learnable vs. unlearnable modes. Studying such flows may improve our understanding of feature learning in deep neural networks, and identify potential universality classes in these models. | [
"['Jessica N. Howard' 'Ro Jefferson' 'Anindita Maiti' 'Zohar Ringel']"
]
|
null | null | 2405.06034 | null | null | http://arxiv.org/pdf/2405.06034v1 | 2024-05-09T18:08:58Z | 2024-05-09T18:08:58Z | Bayesian Prediction-Powered Inference | Prediction-powered inference (PPI) is a method that improves statistical estimates based on limited human-labeled data. Specifically, PPI methods provide tighter confidence intervals by combining small amounts of human-labeled data with larger amounts of data labeled by a reasonably accurate, but potentially biased, automatic system. We propose a framework for PPI based on Bayesian inference that allows researchers to develop new task-appropriate PPI methods easily. Exploiting the ease with which we can design new metrics, we propose improved PPI methods for several important cases, such as autoraters that give discrete responses (e.g., prompted LLM "judges") and autoraters with scores that have a non-linear relationship to human scores. | [
"['R. Alex Hofer' 'Joshua Maynez' 'Bhuwan Dhingra' 'Adam Fisch'\n 'Amir Globerson' 'William W. Cohen']"
]
|
null | null | 2405.06038 | null | null | http://arxiv.org/pdf/2405.06038v1 | 2024-05-09T18:17:25Z | 2024-05-09T18:17:25Z | From Algorithm to Hardware: A Survey on Efficient and Safe Deployment of
Deep Neural Networks | Deep neural networks (DNNs) have been widely used in many artificial intelligence (AI) tasks. However, deploying them brings significant challenges due to the huge cost of memory, energy, and computation. To address these challenges, researchers have developed various model compression techniques such as model quantization and model pruning. Recently, there has been a surge in research of compression methods to achieve model efficiency while retaining the performance. Furthermore, more and more works focus on customizing the DNN hardware accelerators to better leverage the model compression techniques. In addition to efficiency, preserving security and privacy is critical for deploying DNNs. However, the vast and diverse body of related works can be overwhelming. This inspires us to conduct a comprehensive survey on recent research toward the goal of high-performance, cost-efficient, and safe deployment of DNNs. Our survey first covers the mainstream model compression techniques such as model quantization, model pruning, knowledge distillation, and optimizations of non-linear operations. We then introduce recent advances in designing hardware accelerators that can adapt to efficient model compression approaches. Additionally, we discuss how homomorphic encryption can be integrated to secure DNN deployment. Finally, we discuss several issues, such as hardware evaluation, generalization, and integration of various compression approaches. Overall, we aim to provide a big picture of efficient DNNs, from algorithm to hardware accelerators and security perspectives. | [
"['Xue Geng' 'Zhe Wang' 'Chunyun Chen' 'Qing Xu' 'Kaixin Xu' 'Chao Jin'\n 'Manas Gupta' 'Xulei Yang' 'Zhenghua Chen' 'Mohamed M. Sabry Aly'\n 'Jie Lin' 'Min Wu' 'Xiaoli Li']"
]
|
null | null | 2405.06049 | null | null | http://arxiv.org/pdf/2405.06049v1 | 2024-05-09T18:42:26Z | 2024-05-09T18:42:26Z | BB-Patch: BlackBox Adversarial Patch-Attack using Zeroth-Order
Optimization | Deep learning has become popular due to its vast applications in almost all domains. However, models trained using deep learning are prone to failure on adversarial samples and carry considerable risk in sensitive applications. Most of these adversarial attack strategies assume that the adversary has access to the training data, the model parameters, and the input during deployment, and hence focus on perturbing the pixel-level information present in the input image. Adversarial patches were introduced to the community, bringing out the vulnerability of deep learning models in a much more pragmatic manner, but there the attacker has white-box access to the model parameters. Recently, there have been attempts to develop these adversarial attacks using black-box techniques. However, certain assumptions, such as the availability of large training data, are not valid in real-life scenarios. In a real-life scenario, the attacker can only assume the type of model architecture used, from a select list of state-of-the-art architectures, while having access to only a subset of the input dataset. Hence, we propose a black-box adversarial attack strategy that produces adversarial patches which can be applied anywhere in the input image to perform an adversarial attack. | [
"['Satyadwyoom Kumar' 'Saurabh Gupta' 'Arun Balaji Buduru']"
]
|
null | null | 2405.06057 | null | null | http://arxiv.org/pdf/2405.06057v1 | 2024-05-09T19:02:00Z | 2024-05-09T19:02:00Z | UnSegGNet: Unsupervised Image Segmentation using Graph Neural Networks | Image segmentation, the process of partitioning an image into meaningful regions, plays a pivotal role in computer vision and medical imaging applications. Unsupervised segmentation, particularly in the absence of labeled data, remains a challenging task due to inter-class similarity and variations in intensity and resolution. In this study, we extract high-level features of the input image using a pretrained vision transformer. Subsequently, the proposed method leverages the underlying graph structures of the images, seeking to discover and delineate meaningful boundaries using graph neural networks and modularity-based optimization criteria without relying on pre-labeled training data. Experimental results on benchmark datasets demonstrate the effectiveness and versatility of the proposed approach, showcasing competitive performance compared to state-of-the-art unsupervised segmentation methods. This research contributes to the broader field of unsupervised medical imaging and computer vision by presenting an innovative methodology for image segmentation that aligns with real-world challenges. The proposed method holds promise for diverse applications, including medical imaging, remote sensing, and object recognition, where labeled data may be scarce or unavailable. The GitHub repository of the code is available at [https://github.com/ksgr5566/unseggnet] | [
"['Kovvuri Sai Gopal Reddy' 'Bodduluri Saran' 'A. Mudit Adityaja'\n 'Saurabh J. Shigwan' 'Nitin Kumar']"
]
|
null | null | 2405.06063 | null | null | http://arxiv.org/pdf/2405.06063v1 | 2024-05-09T19:15:33Z | 2024-05-09T19:15:33Z | A Minimalist Prompt for Zero-Shot Policy Learning | Transformer-based methods have exhibited significant generalization ability when prompted with target-domain demonstrations or example solutions during inference. Although demonstrations, as a way of task specification, can capture rich information that may be hard to specify by language, it remains unclear what information is extracted from the demonstrations to help generalization. Moreover, assuming access to demonstrations of an unseen task is impractical or unreasonable in many real-world scenarios, especially in robotics applications. These questions motivate us to explore what the minimally sufficient prompt could be to elicit the same level of generalization ability as the demonstrations. We study this problem in the contextual RL setting, which allows for quantitative measurement of generalization and is commonly adopted by meta-RL and multi-task RL benchmarks. In this setting, the training and test Markov Decision Processes (MDPs) only differ in certain properties, which we refer to as task parameters. We show that conditioning a decision transformer (DT) on these task parameters alone can enable zero-shot generalization on par with or better than its demonstration-conditioned counterpart. This suggests that task parameters are essential for generalization and that DT models try to recover them from the demonstration prompt. To extract the remaining generalizable information from the supervision, we introduce an additional learnable prompt, which is demonstrated to further boost zero-shot generalization across a range of robotic control, manipulation, and navigation benchmark tasks. | [
"['Meng Song' 'Xuezhi Wang' 'Tanay Biradar' 'Yao Qin' 'Manmohan Chandraker']"
]
|
null | null | 2405.06064 | null | null | http://arxiv.org/pdf/2405.06064v1 | 2024-05-09T19:17:47Z | 2024-05-09T19:17:47Z | LLMs for XAI: Future Directions for Explaining Explanations | In response to the demand for Explainable Artificial Intelligence (XAI), we investigate the use of Large Language Models (LLMs) to transform ML explanations into natural, human-readable narratives. Rather than directly explaining ML models using LLMs, we focus on refining explanations computed using existing XAI algorithms. We outline several research directions, including defining evaluation metrics, prompt design, comparing LLM models, exploring further training methods, and integrating external data. Initial experiments and a user study suggest that LLMs offer a promising way to enhance the interpretability and usability of XAI. | [
"['Alexandra Zytek' 'Sara Pidò' 'Kalyan Veeramachaneni']"
]
|
null | null | 2405.06065 | null | null | http://arxiv.org/pdf/2405.06065v2 | 2024-05-18T18:19:23Z | 2024-05-09T19:23:35Z | Driving down Poisson error can offset classification error in clinical
tasks | Medical machine learning algorithms are typically evaluated based on accuracy vs. a clinician-defined ground truth, a reasonable initial choice since trained clinicians are usually better classifiers than ML models. However, this metric does not fully capture the actual clinical task: it neglects the fact that humans, even with perfect accuracy, are subject to non-trivial error from the Poisson statistics of rare events, because clinical protocols often specify a relatively small sample size. For example, to quantitate malaria on a thin blood film a clinician examines only 2000 red blood cells (0.0004 uL), which can yield large Poisson variation in the actual number of parasites present, so that a perfect human's count can differ substantially from the true average load. In contrast, an ML system may be less accurate on an object level, but it may also have the option to examine more blood (e.g. 0.1 uL, or 250x). Then while its parasite identification error is higher, the Poisson variability of its estimate is lower due to larger sample size. To qualify for clinical deployment, an ML system's performance must match current standard of care, typically a very demanding target. To achieve this, it may be possible to offset the ML system's lower accuracy by increasing its sample size to reduce Poisson error, and thus attain the same net clinical performance as a perfectly accurate human limited by smaller sample size. In this paper, we analyse the mathematics of the relationship between Poisson error, classification error, and total error. This mathematical toolkit enables teams optimizing ML systems to leverage a relative strength (larger sample sizes) to offset a relative weakness (classification accuracy). We illustrate the methods with two concrete examples: diagnosis and quantitation of malaria on blood films. | [
"['Charles B. Delahunt' 'Courosh Mehanian' 'Matthew P. Horning']"
]
|
null | null | 2405.06067 | null | null | http://arxiv.org/pdf/2405.06067v2 | 2024-05-14T06:09:52Z | 2024-05-09T19:32:49Z | HMT: Hierarchical Memory Transformer for Long Context Language
Processing | Transformer-based large language models (LLM) have been widely used in language processing applications. However, most of them restrict the context window that permits the model to attend to every token in the inputs. Previous works in recurrent models can memorize past tokens to enable unlimited context and maintain effectiveness. However, they have "flat" memory architectures, which have limitations in selecting and filtering information. Since humans are good at learning and self-adjustment, we speculate that imitating brain memory hierarchy is beneficial for model memorization. We propose the Hierarchical Memory Transformer (HMT), a novel framework that enables and improves models' long-context processing ability by imitating human memorization behavior. Leveraging memory-augmented segment-level recurrence, we organize the memory hierarchy by preserving tokens from early input token segments, passing memory embeddings along the sequence, and recalling relevant information from history. Evaluating general language modeling (Wikitext-103, PG-19) and question-answering tasks (PubMedQA), we show that HMT steadily improves the long-context processing ability of context-constrained and long-context models. With an additional 0.5% - 2% of parameters, HMT can easily plug in and augment future LLMs to handle long context effectively. Our code is open-sourced on Github: https://github.com/OswaldHe/HMT-pytorch. | [
"['Zifan He' 'Zongyue Qin' 'Neha Prakriya' 'Yizhou Sun' 'Jason Cong']"
]
|
null | null | 2405.06068 | null | null | http://arxiv.org/pdf/2405.06068v1 | 2024-05-09T19:37:57Z | 2024-05-09T19:37:57Z | Deep Learning-Based Residual Useful Lifetime Prediction for Assets with
Uncertain Failure Modes | Industrial prognostics focuses on utilizing degradation signals to forecast and continually update the residual useful life of complex engineering systems. However, existing prognostic models for systems with multiple failure modes face several challenges in real-world applications, including overlapping degradation signals from multiple components, the presence of unlabeled historical data, and the similarity of signals across different failure modes. To tackle these issues, this research introduces two prognostic models that integrate the mixture (log)-location-scale distribution with deep learning. This integration facilitates the modeling of overlapping degradation signals, eliminates the need for explicit failure mode identification, and utilizes deep learning to capture complex nonlinear relationships between degradation signals and residual useful lifetimes. Numerical studies validate the superior performance of these proposed models compared to existing methods. | [
"['Yuqi Su' 'Xiaolei Fang']"
]
|
null | null | 2405.06073 | null | null | http://arxiv.org/pdf/2405.06073v1 | 2024-05-09T19:55:07Z | 2024-05-09T19:55:07Z | Hard Work Does Not Always Pay Off: Poisoning Attacks on Neural
Architecture Search | In this paper, we study the robustness of "data-centric" approaches to finding neural network architectures (known as neural architecture search) to data distribution shifts. To audit this robustness, we present a data poisoning attack that, when injected into the training data used for architecture search, can prevent the victim algorithm from finding an architecture with optimal accuracy. We first define the attack objective for crafting poisoning samples that can induce the victim to generate sub-optimal architectures. To this end, we weaponize existing search algorithms to generate adversarial architectures that serve as our objectives. We also present techniques that the attacker can use to significantly reduce the computational costs of crafting poisoning samples. In an extensive evaluation of our poisoning attack on a representative architecture search algorithm, we show the search algorithm's surprising robustness. Because our attack employs clean-label poisoning, we also evaluate its robustness against label noise. We find that random label-flipping is more effective in generating sub-optimal architectures than our clean-label attack. Our results suggest that care must be taken with the data this emerging approach uses, and future work is needed to develop robust algorithms. | [
"['Zachary Coalson' 'Huazheng Wang' 'Qingyun Wu' 'Sanghyun Hong']"
]
|
null | null | 2405.06080 | null | null | http://arxiv.org/pdf/2405.06080v1 | 2024-05-09T20:12:46Z | 2024-05-09T20:12:46Z | Scalable Learning of Segment-Level Traffic Congestion Functions | We propose and study a data-driven framework for identifying traffic congestion functions (numerical relationships between observations of macroscopic traffic variables) at global scale and segment-level granularity. In contrast to methods that estimate a separate set of parameters for each roadway, ours learns a single black-box function over all roadways in a metropolitan area. First, we pool traffic data from all segments into one dataset, combining static attributes with dynamic time-dependent features. Second, we train a feed-forward neural network on this dataset, which we can then use on any segment in the area. We evaluate how well our framework identifies congestion functions on observed segments and how it generalizes to unobserved segments and predicts segment attributes on a large dataset covering multiple cities worldwide. For identification error on observed segments, our single data-driven congestion function compares favorably to segment-specific model-based functions on highway roads, but has room to improve on arterial roads. For generalization, our approach shows strong performance across cities and road types: both on unobserved segments in the same city and on zero-shot transfer learning between cities. Finally, for predicting segment attributes, we find that our approach can approximate critical densities for individual segments using their static properties. | [
"['Shushman Choudhury' 'Abdul Rahman Kreidieh' 'Iveel Tsogsuren'\n 'Neha Arora' 'Carolina Osorio' 'Alexandre Bayen']"
]
|
null | null | 2405.06089 | null | null | http://arxiv.org/pdf/2405.06089v3 | 2024-06-25T20:28:29Z | 2024-05-09T20:30:10Z | Learning Low-dimensional Latent Dynamics from High-dimensional
Observations: Non-asymptotics and Lower Bounds | In this paper, we focus on learning a linear time-invariant (LTI) model with low-dimensional latent variables but high-dimensional observations. We provide an algorithm that recovers the high-dimensional features, i.e., the column space of the observer, embeds the data into low dimensions and learns the low-dimensional model parameters. Our algorithm enjoys a sample complexity guarantee of order $\tilde{\mathcal{O}}(n/\epsilon^2)$, where $n$ is the observation dimension. We further establish a fundamental lower bound indicating this complexity bound is optimal up to logarithmic factors and dimension-independent constants. We show that this inevitable linear factor of $n$ is due to the learning error of the observer's column space in the presence of high-dimensional noise. Extending our results, we consider a meta-learning problem inspired by various real-world applications, where the observer column space can be collectively learned from datasets of multiple LTI systems. An end-to-end algorithm is then proposed, facilitating learning LTI systems from a meta-dataset, which breaks the sample complexity lower bound in certain scenarios. | [
"['Yuyang Zhang' 'Shahriar Talebi' 'Na Li']"
]
|
null | null | 2405.06093 | null | null | http://arxiv.org/pdf/2405.06093v1 | 2024-05-09T20:45:58Z | 2024-05-09T20:45:58Z | Selective Fine-tuning on LLM-labeled Data May Reduce Reliance on Human
Annotation: A Case Study Using Schedule-of-Event Table Detection | Large Language Models (LLMs) have demonstrated their efficacy across a broad spectrum of tasks in healthcare applications. However, often LLMs need to be fine-tuned on task-specific expert annotated data to achieve optimal performance, which can be expensive and time-consuming. In this study, we fine-tune PaLM-2 with parameter-efficient fine-tuning (PEFT) using noisy labels obtained from gemini-pro 1.0 for the detection of Schedule-of-Event (SoE) tables, which specify the care plan in clinical trial protocols. We introduce a filtering mechanism to select high-confidence labels for this table classification task, thereby reducing the noise in the auto-generated labels. We show that fine-tuned PaLM-2 with those labels achieves performance that exceeds that of gemini-pro 1.0 and other LLMs. Furthermore, its performance is close to a PaLM-2 fine-tuned on labels obtained from non-expert annotators. Our results show that leveraging LLM-generated labels through powerful models like gemini-pro can potentially serve as a viable strategy for improving LLM performance through fine-tuning in specialized tasks, particularly in domains where expert annotations are scarce, expensive, or time-consuming to obtain. | [
"['Bhawesh Kumar' 'Jonathan Amar' 'Eric Yang' 'Nan Li' 'Yugang Jia']"
]
|
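The filtering step in the abstract above can be prototyped generically. The sketch below uses self-consistency voting — querying the labeling model several times and keeping only high-agreement examples; `label_fn` is a hypothetical callable wrapping the LLM, and this mechanism is an assumption, since the exact confidence score used in the paper is not specified here.

```python
from collections import Counter

def filter_by_self_consistency(examples, label_fn, k=5, threshold=0.8):
    # Query the (stochastic, e.g. temperature-sampled) labeler k times per
    # example; keep only examples whose majority label reaches `threshold`
    # agreement, discarding the noisiest auto-generated labels.
    kept = []
    for ex in examples:
        votes = Counter(label_fn(ex) for _ in range(k))
        label, count = votes.most_common(1)[0]
        if count / k >= threshold:
            kept.append((ex, label))
    return kept
```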
null | null | 2405.06107 | null | null | http://arxiv.org/pdf/2405.06107v1 | 2024-05-09T21:28:52Z | 2024-05-09T21:28:52Z | Transforming the Bootstrap: Using Transformers to Compute Scattering
Amplitudes in Planar N = 4 Super Yang-Mills Theory | We pursue the use of deep learning methods to improve state-of-the-art computations in theoretical high-energy physics. Planar N = 4 Super Yang-Mills theory is a close cousin to the theory that describes Higgs boson production at the Large Hadron Collider; its scattering amplitudes are large mathematical expressions containing integer coefficients. In this paper, we apply Transformers to predict these coefficients. The problem can be formulated in a language-like representation amenable to standard cross-entropy training objectives. We design two related experiments and show that the model achieves high accuracy (> 98%) on both tasks. Our work shows that Transformers can be applied successfully to problems in theoretical physics that require exact solutions. | [
"['Tianji Cai' 'Garrett W. Merz' 'François Charton' 'Niklas Nolte'\n 'Matthias Wilhelm' 'Kyle Cranmer' 'Lance J. Dixon']"
]
|
null | null | 2405.06119 | null | null | http://arxiv.org/pdf/2405.06119v1 | 2024-05-09T21:53:27Z | 2024-05-09T21:53:27Z | Gradient Flow Based Phase-Field Modeling Using Separable Neural Networks | The $L^2$ gradient flow of the Ginzburg-Landau free energy functional leads to the Allen-Cahn equation that is widely used for modeling phase separation. Machine learning methods for solving the Allen-Cahn equation in its strong form suffer from inaccuracies in collocation techniques, errors in computing higher-order spatial derivatives through automatic differentiation, and the large system size required by the space-time approach. To overcome these limitations, we propose a separable neural network-based approximation of the phase field in a minimizing movement scheme to solve the aforementioned gradient flow problem. At each time step, the separable neural network is used to approximate the phase field in space through a low-rank tensor decomposition, thereby accelerating the derivative calculations. The minimizing movement scheme naturally allows for the use of the Gauss quadrature technique to compute the functional. A `$\tanh$' transformation is applied to the neural network-predicted phase field to strictly bound the solutions within the values of the two phases. For this transformation, a theoretical guarantee for energy stability of the minimizing movement scheme is established. Our results suggest that bounding the solution through this transformation is the key to effectively modeling sharp interfaces with a separable neural network. The proposed method outperforms the state-of-the-art machine learning methods for phase separation problems and is an order of magnitude faster than the finite element method. | [
"['Revanth Mattey' 'Susanta Ghosh']"
]
|
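A minimal sketch of the separable, tanh-bounded ansatz from the abstract above, for a 2-D domain (rank and layer widths are illustrative choices, not the authors' architecture): each spatial coordinate gets its own small network, and the field is their rank-R contraction passed through tanh so it stays strictly inside the two phase values.

```python
import torch
import torch.nn as nn

class SeparableField(nn.Module):
    # phi(x, y) = tanh( sum_r f_r(x) * g_r(y) ): a low-rank separable
    # approximation whose outputs are strictly bounded in (-1, 1).
    def __init__(self, rank=16, width=64):
        super().__init__()
        def branch():
            return nn.Sequential(nn.Linear(1, width), nn.Tanh(),
                                 nn.Linear(width, rank))
        self.fx, self.fy = branch(), branch()

    def forward(self, x, y):
        # x: (nx, 1), y: (ny, 1) -> field values on the full (nx, ny) grid
        return torch.tanh(self.fx(x) @ self.fy(y).T)

grid = torch.linspace(0.0, 1.0, 128).unsqueeze(1)
phi = SeparableField()(grid, grid)    # (128, 128) bounded phase field
```

Because the two branches are evaluated once per axis, derivatives and quadrature on the full grid cost far less than with a monolithic network over (x, y) pairs, which is the point of the separable structure.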
null | null | 2405.06145 | null | null | http://arxiv.org/pdf/2405.06145v1 | 2024-05-09T23:43:57Z | 2024-05-09T23:43:57Z | Reddit-Impacts: A Named Entity Recognition Dataset for Analyzing
Clinical and Social Effects of Substance Use Derived from Social Media | Substance use disorders (SUDs) are a growing concern globally, necessitating enhanced understanding of the problem and its trends through data-driven research. Social media are unique and important sources of information about SUDs, particularly since the data in such sources are often generated by people with lived experiences. In this paper, we introduce Reddit-Impacts, a challenging Named Entity Recognition (NER) dataset curated from subreddits dedicated to discussions on prescription and illicit opioids, as well as medications for opioid use disorder. The dataset specifically concentrates on the lesser-studied, yet critically important, aspects of substance use--its clinical and social impacts. We collected data from chosen subreddits using the publicly available Application Programming Interface for Reddit. We manually annotated text spans representing clinical and social impacts reported by people who also reported personal nonmedical use of substances including but not limited to opioids, stimulants and benzodiazepines. Our objective is to create a resource that can enable the development of systems that can automatically detect clinical and social impacts of substance use from text-based social media data. The successful development of such systems may enable us to better understand how nonmedical use of substances affects individual health and societal dynamics, aiding the development of effective public health strategies. In addition to creating the annotated data set, we applied several machine learning models to establish baseline performances. Specifically, we experimented with transformer models like BERT, and RoBERTa, one few-shot learning model DANN by leveraging the full training dataset, and GPT-3.5 by using one-shot learning, for automatic NER of clinical and social impacts. The dataset has been made available through the 2024 SMM4H shared tasks. | [
"['Yao Ge' 'Sudeshna Das' \"Karen O'Connor\" 'Mohammed Ali Al-Garadi'\n 'Graciela Gonzalez-Hernandez' 'Abeed Sarker']"
]
|
null | null | 2405.06147 | null | null | http://arxiv.org/pdf/2405.06147v2 | 2024-06-02T02:48:05Z | 2024-05-10T00:06:02Z | State-Free Inference of State-Space Models: The Transfer Function
Approach | We approach designing a state-space model for deep learning applications through its dual representation, the transfer function, and uncover a highly efficient sequence parallel inference algorithm that is state-free: unlike other proposed algorithms, state-free inference does not incur any significant memory or computational cost with an increase in state size. We achieve this using properties of the proposed frequency domain transfer function parametrization, which enables direct computation of its corresponding convolutional kernel's spectrum via a single Fast Fourier Transform. Our experimental results across multiple sequence lengths and state sizes illustrate, on average, a 35% training speed improvement over S4 layers -- parametrized in the time domain -- on the Long Range Arena benchmark, while delivering state-of-the-art downstream performance among attention-free approaches. Moreover, we report improved perplexity in language modeling over a long convolutional Hyena baseline, by simply introducing our transfer function parametrization. Our code is available at https://github.com/ruke1ire/RTF. | [
"['Rom N. Parnichkun' 'Stefano Massaroli' 'Alessandro Moro'\n 'Jimmy T. H. Smith' 'Ramin Hasani' 'Mathias Lechner' 'Qi An'\n 'Christopher Ré' 'Hajime Asama' 'Stefano Ermon' 'Taiji Suzuki'\n 'Atsushi Yamashita' 'Michael Poli']"
]
|
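The central trick in the abstract above — reading off the convolution kernel's spectrum from a rational transfer function with a single FFT — can be sketched in a few lines (coefficient vectors `b`, `a` are assumed given; for an IIR system the inverse FFT returns a time-aliased kernel, a good approximation whenever the impulse response decays within the sequence length).

```python
import numpy as np

def kernel_from_transfer_function(b, a, L):
    # Evaluate H(z) = B(z^-1) / A(z^-1) at the L roots of unity: one FFT of
    # each coefficient vector yields the kernel spectrum with no state
    # recursion, so the cost is independent of the underlying state size.
    H = np.fft.fft(b, n=L) / np.fft.fft(a, n=L)
    k = np.fft.ifft(H).real          # length-L (time-aliased) kernel
    return k, H

# Example: H(z) = 1 / (1 - 0.9 z^-1) has impulse response 0.9**t.
k, _ = kernel_from_transfer_function([1.0], [1.0, -0.9], L=32)
```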
null | null | 2405.06148 | null | null | http://arxiv.org/pdf/2405.06148v1 | 2024-05-10T00:13:39Z | 2024-05-10T00:13:39Z | Detecting Moving Objects With Machine Learning | The scientific study of the Solar System's minor bodies ultimately starts with a search for those bodies. This chapter presents a review of the use of machine learning techniques to find moving objects, both natural and artificial, in astronomical imagery. After a short review of the classical non-machine learning techniques that are historically used, I review the relatively nascent machine learning literature, which can broadly be summarized into three categories: streak detection, detection of moving point sources in image sequences, and detection of moving sources in shift and stack searches. In most cases, convolutional neural networks are utilized, which is the obvious choice given the imagery nature of the inputs. In this chapter I present two example networks: a Residual Network I designed which is in use in various shift and stack searches, and a convolutional neural network that was designed for prediction of source brightnesses and their uncertainties in those same shift-stacks. In discussion of the literature and example networks, I discuss various pitfalls with the use of machine learning techniques, including a discussion on the important issue of overfitting. I discuss various pitfall associated with the use of machine learning techniques, and what I consider best practices to follow in the application of machine learning to a new problem, including methods for the creation of robust training sets, validation, and training to avoid overfitting. | [
"['Wesley C. Fraser']"
]
|
null | null | 2405.06161 | null | null | http://arxiv.org/pdf/2405.06161v2 | 2024-05-21T18:12:09Z | 2024-05-10T00:50:08Z | (A Partial Survey of) Decentralized, Cooperative Multi-Agent
Reinforcement Learning | Multi-agent reinforcement learning (MARL) has exploded in popularity in recent years. Many approaches have been developed but they can be divided into three main types: centralized training and execution (CTE), centralized training for decentralized execution (CTDE), and decentralized training and execution (DTE). Decentralized training and execution methods make the fewest assumptions and are often simple to implement. In fact, as I'll discuss, any single-agent RL method can be used for DTE by just letting each agent learn separately. Of course, there are pros and cons to such approaches as I discuss below. It is worth noting that DTE is required if no offline coordination is available. That is, if all agents must learn during online interactions without prior coordination, learning and execution must both be decentralized. DTE methods can be applied in cooperative, competitive, or mixed cases but this text will focus on the cooperative MARL case. In this text, I will first give a brief description of the cooperative MARL problem in the form of the Dec-POMDP. Then, I will discuss value-based DTE methods starting with independent Q-learning and its extensions and then discuss the extension to the deep case with DQN, the additional complications this causes, and methods that have been developed to (attempt to) address these issues. Next, I will discuss policy gradient DTE methods starting with independent REINFORCE (i.e., vanilla policy gradient), and then extending to the actor-critic case and deep variants (such as independent PPO). Finally, I will discuss some general topics related to DTE and future directions. | [
"['Christopher Amato']"
]
|
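As a concrete illustration of the DTE setting sketched in the abstract above — each agent simply runs single-agent RL on its own observation stream, treating the other agents as part of the environment — here is a minimal tabular independent Q-learner (a generic sketch, not code from the text):

```python
import numpy as np

class IndependentQLearner:
    # One instance per agent; no communication or shared parameters, so the
    # other agents appear as a non-stationary part of the environment.
    def __init__(self, n_obs, n_actions, alpha=0.1, gamma=0.99, eps=0.1):
        self.Q = np.zeros((n_obs, n_actions))
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def act(self, obs):
        if np.random.rand() < self.eps:                 # epsilon-greedy
            return np.random.randint(self.Q.shape[1])
        return int(self.Q[obs].argmax())

    def update(self, obs, action, reward, next_obs):
        target = reward + self.gamma * self.Q[next_obs].max()
        self.Q[obs, action] += self.alpha * (target - self.Q[obs, action])
```

The non-stationarity noted in the comment is exactly the complication the text attributes to extending such independent learners to the deep (DQN) case.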
null | null | 2405.06172 | null | null | http://arxiv.org/pdf/2405.06172v1 | 2024-05-10T01:30:25Z | 2024-05-10T01:30:25Z | Anomaly Detection in Graph Structured Data: A Survey | Real-world graphs are complex to process, which makes effective analysis such as anomaly detection challenging. However, recently, there have been several research efforts addressing the issues surrounding graph-based anomaly detection. In this paper, we provide a comprehensive overview of anomaly detection techniques on graph data. We also discuss the various application domains which use those anomaly detection techniques. We present a new taxonomy that categorizes the different state-of-the-art anomaly detection methods based on assumptions and techniques. Within each category, we discuss the fundamental research ideas that have been developed to improve anomaly detection. We further discuss the advantages and disadvantages of current anomaly detection techniques. Finally, we present potential future research directions in anomaly detection on graph-structured data. | [
"['Prabin B Lamichhane' 'William Eberle']"
]
|
null | null | 2405.06178 | null | null | http://arxiv.org/pdf/2405.06178v1 | 2024-05-10T01:45:09Z | 2024-05-10T01:45:09Z | ACTION: Augmentation and Computation Toolbox for Brain Network Analysis
with Functional MRI | Functional magnetic resonance imaging (fMRI) has been increasingly employed to investigate functional brain activity. Many fMRI-related software/toolboxes have been developed, providing specialized algorithms for fMRI analysis. However, existing toolboxes seldom consider fMRI data augmentation, which is quite useful, especially in studies with limited or imbalanced data. Moreover, current studies usually focus on analyzing fMRI using conventional machine learning models that rely on human-engineered fMRI features, without investigating deep learning models that can automatically learn data-driven fMRI representations. In this work, we develop an open-source toolbox, called Augmentation and Computation Toolbox for braIn netwOrk aNalysis (ACTION), offering comprehensive functions to streamline fMRI analysis. ACTION is a Python-based, cross-platform toolbox with user-friendly graphical interfaces. It enables automatic fMRI augmentation, covering blood-oxygen-level-dependent (BOLD) signal augmentation and brain network augmentation. Many popular methods for brain network construction and network feature extraction are included. In particular, it supports constructing deep learning models, which leverage large-scale auxiliary unlabeled data (3,800+ resting-state fMRI scans) for model pretraining to enhance model performance for downstream tasks. To facilitate multi-site fMRI studies, it is also equipped with several popular federated learning strategies. Furthermore, it enables users to design and test custom algorithms through scripting, greatly improving its utility and extensibility. We demonstrate the effectiveness and user-friendliness of ACTION on real fMRI data and present the experimental results. The software, along with its source code and manual, can be accessed online. | [
"['Yuqi Fang' 'Junhao Zhang' 'Linmin Wang' 'Qianqian Wang' 'Mingxia Liu']"
]
|
null | null | 2405.06192 | null | null | http://arxiv.org/pdf/2405.06192v1 | 2024-05-10T02:21:42Z | 2024-05-10T02:21:42Z | Contrastive Representation for Data Filtering in Cross-Domain Offline
Reinforcement Learning | Cross-domain offline reinforcement learning leverages source domain data with diverse transition dynamics to alleviate the data requirement for the target domain. However, simply merging the data of two domains leads to performance degradation due to the dynamics mismatch. Existing methods address this problem by measuring the dynamics gap via domain classifiers while relying on assumptions about the transferability of paired domains. In this paper, we propose a novel representation-based approach to measure the domain gap, where the representation is learned through a contrastive objective by sampling transitions from different domains. We show that such an objective recovers the mutual-information gap of transition functions in two domains without suffering from the unbounded issue of the dynamics gap in handling significantly different domains. Based on the representations, we introduce a data filtering algorithm that selectively shares transitions from the source domain according to the contrastive score functions. Empirical results on various tasks demonstrate that our method achieves superior performance, using only 10% of the target data to reach 89.2% of the performance that state-of-the-art methods achieve with the full target dataset. | [
"['Xiaoyu Wen' 'Chenjia Bai' 'Kang Xu' 'Xudong Yu' 'Yang Zhang'\n 'Xuelong Li' 'Zhen Wang']"
]
|
null | null | 2405.06196 | null | null | http://arxiv.org/pdf/2405.06196v2 | 2024-06-27T14:19:56Z | 2024-05-10T02:23:56Z | VLSM-Adapter: Finetuning Vision-Language Segmentation Efficiently with
Lightweight Blocks | Foundation Vision-Language Models (VLMs) trained using large-scale open-domain images and text pairs have recently been adapted to develop Vision-Language Segmentation Models (VLSMs) that allow providing text prompts during inference to guide image segmentation. If robust and powerful VLSMs can be built for medical images, it could aid medical professionals in many clinical tasks where they must spend substantial time delineating the target structure of interest. VLSMs for medical images resort to fine-tuning a base VLM or VLSM pretrained on open-domain natural image datasets, owing to the scarcity of annotated medical image datasets; this fine-tuning is resource-consuming and expensive as it usually requires updating all or a significant fraction of the pretrained parameters. Recently, lightweight blocks called adapters have been proposed in VLMs that keep the pretrained model frozen and only train adapters during fine-tuning, substantially reducing the computing resources required. We introduce a novel adapter, VLSM-Adapter, that can fine-tune pretrained vision-language segmentation models using transformer encoders. Our experiments in widely used CLIP-based segmentation models show that with only 3 million trainable parameters, the VLSM-Adapter outperforms state-of-the-art and is comparable to the upper bound end-to-end fine-tuning. The source code is available at: https://github.com/naamiinepal/vlsm-adapter. | [
"['Manish Dhakal' 'Rabin Adhikari' 'Safal Thapaliya' 'Bishesh Khanal']"
]
|
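Adapters of the kind this abstract builds on are small bottleneck modules added to a frozen backbone, so only a few parameters train. A minimal PyTorch sketch (bottleneck size illustrative; the actual VLSM-Adapter differs in where and how blocks attach to the transformer encoders):

```python
import torch.nn as nn

class Adapter(nn.Module):
    # Bottleneck adapter: down-project, nonlinearity, up-project, residual.
    # The host model's weights stay frozen; only these layers are trained.
    def __init__(self, dim, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        self.act = nn.GELU()

    def forward(self, x):
        return x + self.up(self.act(self.down(x)))
```

The residual connection means an adapter initialized near zero leaves the pretrained model's behavior intact at the start of fine-tuning, which is one reason such blocks train stably with few parameters.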
null | null | 2405.06206 | null | null | http://arxiv.org/pdf/2405.06206v1 | 2024-05-10T02:44:25Z | 2024-05-10T02:44:25Z | Concealing Backdoor Model Updates in Federated Learning by
Trigger-Optimized Data Poisoning | Federated Learning (FL) is a decentralized machine learning method that enables participants to collaboratively train a model without sharing their private data. Despite its privacy and scalability benefits, FL is susceptible to backdoor attacks, where adversaries poison the local training data of a subset of clients using a backdoor trigger, aiming to make the aggregated model produce malicious results when the same backdoor condition is met by an inference-time input. Existing backdoor attacks in FL suffer from common deficiencies: fixed trigger patterns and reliance on the assistance of model poisoning. State-of-the-art defenses based on Byzantine-robust aggregation exhibit a good defense performance on these attacks because of the significant divergence between malicious and benign model updates. To effectively conceal malicious model updates among benign ones, we propose DPOT, a backdoor attack strategy in FL that dynamically constructs backdoor objectives by optimizing a backdoor trigger, making backdoor data have minimal effect on model updates. We provide theoretical justifications for DPOT's attacking principle and display experimental results showing that DPOT, via only a data-poisoning attack, effectively undermines state-of-the-art defenses and outperforms existing backdoor attack techniques on various datasets. | [
"['Yujie Zhang' 'Neil Gong' 'Michael K. Reiter']"
]
|
null | null | 2405.06219 | null | null | http://arxiv.org/pdf/2405.06219v2 | 2024-05-13T14:39:11Z | 2024-05-10T03:06:24Z | SKVQ: Sliding-window Key and Value Cache Quantization for Large Language
Models | Large language models (LLMs) can now handle longer sequences of tokens, enabling complex tasks like book understanding and generating lengthy novels. However, the key-value (KV) cache required for LLMs consumes substantial memory as context length increases, becoming the bottleneck for deployment. In this paper, we present a strategy called SKVQ, which stands for sliding-window KV cache quantization, to address the issue of extremely low bitwidth KV cache quantization. To achieve this, SKVQ rearranges the channels of the KV cache in order to improve the similarity of channels in quantization groups, and applies clipped dynamic quantization at the group level. Additionally, SKVQ ensures that the most recent window tokens in the KV cache are preserved with high precision. This helps maintain the accuracy of a small but important portion of the KV cache. SKVQ achieves high compression ratios while maintaining accuracy. Our evaluation on LLMs demonstrates that SKVQ surpasses previous quantization approaches, allowing for quantization of the KV cache to 2-bit keys and 1.5-bit values with minimal loss of accuracy. With SKVQ, it is possible to process context lengths of up to 1M on an 80GB memory GPU for a 7b model and up to 7 times faster decoding. | [
"['Haojie Duanmu' 'Zhihang Yuan' 'Xiuhong Li' 'Jiangfei Duan'\n 'Xingcheng Zhang' 'Dahua Lin']"
]
|
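The sliding-window idea above — aggressive low-bit quantization for older KV entries while the most recent window stays at full precision — can be sketched as follows (group size and bit width are illustrative; SKVQ's channel reordering and clipped dynamic quantization are omitted):

```python
import torch

def window_quantize_kv(kv: torch.Tensor, window: int, n_bits: int = 2,
                       group: int = 64) -> torch.Tensor:
    # kv: (seq_len, dim). Keep the last `window` tokens exact; quantize the
    # rest with per-group asymmetric min-max quantization. Assumes
    # window < seq_len and that old elements divide evenly into groups.
    old, recent = kv[:-window], kv[-window:]
    x = old.reshape(-1, group)
    lo = x.min(dim=-1, keepdim=True).values
    hi = x.max(dim=-1, keepdim=True).values
    scale = (hi - lo).clamp(min=1e-6) / (2 ** n_bits - 1)
    q = ((x - lo) / scale).round()           # integer codes (stored low-bit)
    dequant = (q * scale + lo).reshape(old.shape)
    return torch.cat([dequant, recent], dim=0)
```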
null | null | 2405.06234 | null | null | http://arxiv.org/pdf/2405.06234v1 | 2024-05-10T04:00:50Z | 2024-05-10T04:00:50Z | TS3IM: Unveiling Structural Similarity in Time Series through Image
Similarity Assessment Insights | In the realm of time series analysis, accurately measuring similarity is crucial for applications such as forecasting, anomaly detection, and clustering. However, existing metrics often fail to capture the complex, multidimensional nature of time series data, limiting their effectiveness and application. This paper introduces the Structured Similarity Index Measure for Time Series (TS3IM), a novel approach inspired by the success of the Structural Similarity Index Measure (SSIM) in image analysis, tailored to address these limitations by assessing structural similarity in time series. TS3IM evaluates multiple dimensions of similarity (trend, variability, and structural integrity), offering a more nuanced and comprehensive measure. This metric represents a significant leap forward, providing a robust tool for analyzing temporal data and offering more accurate and comprehensive sequence analysis and decision support in fields such as monitoring power consumption, analyzing traffic flow, and adversarial recognition. Our extensive experimental results also show that compared with traditional methods that rely heavily on computational correlation, TS3IM is 1.87 times more similar to Dynamic Time Warping (DTW) in evaluation results and improves by more than 50% in adversarial recognition. | [
"['Yuhan Liu' 'Ke Tu']"
]
|
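For intuition, an SSIM-style decomposition adapted to 1-D series looks like the sketch below. This is an illustrative analogue of the idea, not the exact TS3IM formulation, whose component definitions are given in the paper.

```python
import numpy as np

def ssim_style_similarity(x, y, eps=1e-8):
    # Three SSIM-like terms for 1-D series: level (means), variability
    # (standard deviations), and structure (normalized covariance).
    mx, my = x.mean(), y.mean()
    sx, sy = x.std(), y.std()
    cov = ((x - mx) * (y - my)).mean()
    level = (2 * mx * my + eps) / (mx**2 + my**2 + eps)
    variability = (2 * sx * sy + eps) / (sx**2 + sy**2 + eps)
    structure = (cov + eps) / (sx * sy + eps)
    return level * variability * structure

t = np.linspace(0, 1, 200)
print(ssim_style_similarity(np.sin(2 * np.pi * t),
                            np.sin(2 * np.pi * t) + 0.1))  # close to 1
```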
null | null | 2405.06238 | null | null | http://arxiv.org/pdf/2405.06238v2 | 2024-05-28T01:48:32Z | 2024-05-10T04:13:07Z | A Novel Pseudo Nearest Neighbor Classification Method Using Local
Harmonic Mean Distance | In the realm of machine learning, the KNN classification algorithm is widely recognized for its simplicity and efficiency. However, its sensitivity to the K value poses challenges, especially with small sample sizes or outliers, impacting classification performance. This article introduces a novel KNN-based classifier called LMPHNN (Novel Pseudo Nearest Neighbor Classification Method Using Local Harmonic Mean Distance). LMPHNN leverages harmonic mean distance (HMD) together with local mean-based pseudo nearest neighbor (LMPNN) rules to improve classification performance. The classifier begins by identifying the k nearest neighbors for each class and generating distinct local vectors as prototypes. Pseudo nearest neighbors (PNNs) are then created based on the local mean for each class, determined by comparing the HMD of the sample with the initial k group. Classification is determined by calculating the Euclidean distance between the query sample and PNNs, based on the local mean of these categories. Extensive experiments on various real UCI datasets and combined datasets compare LMPHNN with seven KNN-based classifiers, using precision, recall, accuracy, and F1 as evaluation metrics. LMPHNN achieves an average precision of 97%, surpassing other methods by 14%. The average recall improves by 12%, with an average accuracy enhancement of 5%. Additionally, LMPHNN demonstrates a 13% higher average F1 value compared to other methods. In summary, LMPHNN outperforms other classifiers, showcasing lower sensitivity to small sample sizes. | [
"['Junzhuo Chen' 'Zhixin Lu' 'Shitong Kang']"
]
|
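The harmonic mean distance at the heart of this classifier is easy to state: it is dominated by the closest points among the k neighbors. A minimal sketch (the full LMPHNN pipeline with per-class local mean prototypes is more involved than this):

```python
import numpy as np

def harmonic_mean_distance(query: np.ndarray, neighbors: np.ndarray) -> float:
    # Harmonic mean of Euclidean distances from `query` to each row of
    # `neighbors`; small distances dominate, making the measure robust
    # to a few distant outliers among the k neighbors.
    d = np.linalg.norm(neighbors - query, axis=1)
    return len(d) / np.sum(1.0 / np.maximum(d, 1e-12))

def classify_by_hmd(query, X, y, k=5):
    # Assign the class whose k nearest members yield the smallest HMD.
    classes = np.unique(y)
    scores = []
    for c in classes:
        Xc = X[y == c]
        idx = np.argsort(np.linalg.norm(Xc - query, axis=1))[:k]
        scores.append(harmonic_mean_distance(query, Xc[idx]))
    return classes[int(np.argmin(scores))]
```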
null | null | 2405.06247 | null | null | http://arxiv.org/pdf/2405.06247v1 | 2024-05-10T05:09:59Z | 2024-05-10T05:09:59Z | Disttack: Graph Adversarial Attacks Toward Distributed GNN Training | Graph Neural Networks (GNNs) have emerged as potent models for graph learning. Distributing the training process across multiple computing nodes is the most promising solution to address the challenges of ever-growing real-world graphs. However, current adversarial attack methods on GNNs neglect the characteristics and applications of the distributed scenario, leading to suboptimal performance and inefficiency in attacking distributed GNN training. In this study, we introduce Disttack, the first framework of adversarial attacks for distributed GNN training that leverages the characteristics of frequent gradient updates in a distributed system. Specifically, Disttack corrupts distributed GNN training by injecting adversarial attacks into one single computing node. The attacked subgraphs are precisely perturbed to induce an abnormal gradient ascent in backpropagation, disrupting gradient synchronization between computing nodes and thus leading to a significant performance decline of the trained GNN. We evaluate Disttack on four large real-world graphs by attacking five widely adopted GNNs. Compared with the state-of-the-art attack method, experimental results demonstrate that Disttack amplifies the model accuracy degradation by 2.75$\times$ and achieves speedup by 17.33$\times$ on average while maintaining unnoticeability. | [
"['Yuxiang Zhang' 'Xin Liu' 'Meng Wu' 'Wei Yan' 'Mingyu Yan' 'Xiaochun Ye'\n 'Dongrui Fan']"
]
|
null | null | 2405.06263 | null | null | http://arxiv.org/pdf/2405.06263v2 | 2024-05-30T09:40:02Z | 2024-05-10T06:28:42Z | Learning Latent Dynamic Robust Representations for World Models | Visual Model-Based Reinforcement Learning (MBRL) promises to encapsulate an agent's knowledge about the underlying dynamics of the environment, enabling learning a world model as a useful planner. However, top MBRL agents such as Dreamer often struggle with visual pixel-based inputs in the presence of exogenous or irrelevant noise in the observation space, due to failure to capture task-specific features while filtering out irrelevant spatio-temporal details. To tackle this problem, we apply a spatio-temporal masking strategy, a bisimulation principle, combined with latent reconstruction, to capture endogenous task-specific aspects of the environment for world models, effectively eliminating non-essential information. Joint training of representations, dynamics, and policy often leads to instabilities. To further address this issue, we develop a Hybrid Recurrent State-Space Model (HRSSM) structure, enhancing state representation robustness for effective policy learning. Our empirical evaluation demonstrates significant performance improvements over existing methods in a range of visually complex control tasks such as Maniskill \cite{gu2023maniskill2} with exogenous distractors from the Matterport environment. Our code is available at https://github.com/bit1029public/HRSSM. | [
"['Ruixiang Sun' 'Hongyu Zang' 'Xin Li' 'Riashat Islam']"
]
|
null | null | 2405.06270 | null | null | http://arxiv.org/pdf/2405.06270v3 | 2024-06-03T16:23:28Z | 2024-05-10T06:52:44Z | XAI4LLM. Let Machine Learning Models and LLMs Collaborate for Enhanced
In-Context Learning in Healthcare | The integration of Large Language Models (LLMs) into healthcare diagnostics offers a promising avenue for clinical decision-making. This study outlines the development of a novel method for zero-shot/few-shot in-context learning (ICL) by integrating medical domain knowledge using a multi-layered structured prompt. We also explore the efficacy of two communication styles between the user and LLMs: the Numerical Conversational (NC) style, which processes data incrementally, and the Natural Language Single-Turn (NL-ST) style, which employs long narrative prompts. Our study systematically evaluates the diagnostic accuracy and risk factors, including gender bias and false negative rates, using a dataset of 920 patient records in various few-shot scenarios. Results indicate that traditional clinical machine learning (ML) models generally outperform LLMs in zero-shot and few-shot settings. However, the performance gap narrows significantly when employing few-shot examples alongside effective explainable AI (XAI) methods as sources of domain knowledge. Moreover, with sufficient time and an increased number of examples, the conversational style (NC) nearly matches the performance of ML models. Most notably, LLMs demonstrate comparable or superior cost-sensitive accuracy relative to ML models. This research confirms that, with appropriate domain knowledge and tailored communication strategies, LLMs can significantly enhance diagnostic processes. The findings highlight the importance of optimizing the number of training examples and communication styles to improve accuracy and reduce biases in LLM applications. | [
"['Fatemeh Nazary' 'Yashar Deldjoo' 'Tommaso Di Noia' 'Eugenio di Sciascio']"
]
|
null | null | 2405.06284 | null | null | http://arxiv.org/pdf/2405.06284v1 | 2024-05-10T07:34:36Z | 2024-05-10T07:34:36Z | Modality-agnostic Domain Generalizable Medical Image Segmentation by
Multi-Frequency in Multi-Scale Attention | Generalizability in deep neural networks plays a pivotal role in medical image segmentation. However, deep learning-based medical image analyses tend to overlook the importance of frequency variance, which is a critical element for achieving a model that is both modality-agnostic and domain-generalizable. Additionally, various models fail to account for the potential information loss that can arise from multi-task learning under deep supervision, a factor that can impair the model representation ability. To address these challenges, we propose a Modality-agnostic Domain Generalizable Network (MADGNet) for medical image segmentation, which comprises two key components: a Multi-Frequency in Multi-Scale Attention (MFMSA) block and an Ensemble Sub-Decoding Module (E-SDM). The MFMSA block refines the process of spatial feature extraction, particularly in capturing boundary features, by incorporating multi-frequency and multi-scale features, thereby offering informative cues for tissue outline and anatomical structures. Moreover, we propose E-SDM to mitigate information loss in multi-task learning with deep supervision, especially during substantial upsampling from low resolution. We evaluate the segmentation performance of MADGNet across six modalities and fifteen datasets. Through extensive experiments, we demonstrate that MADGNet consistently outperforms state-of-the-art models across various modalities, showcasing superior segmentation performance. This affirms MADGNet as a robust solution for medical image segmentation that excels in diverse imaging scenarios. Our MADGNet code is available in GitHub Link. | [
"['Ju-Hyeon Nam' 'Nur Suriza Syazwany' 'Su Jung Kim' 'Sang-Chul Lee']"
]
|
null | null | 2405.06286 | null | null | http://arxiv.org/pdf/2405.06286v1 | 2024-05-10T07:36:03Z | 2024-05-10T07:36:03Z | A Joint Approach Towards Data-Driven Virtual Testing for Automated
Driving: The AVEAS Project | With growing complexity and responsibility of automated driving functions in road traffic and growing scope of their operational design domains, there is increasing demand for covering significant parts of development, validation, and verification via virtual environments and simulation models. If, however, simulations are meant not only to augment real-world experiments, but to replace them, quantitative approaches are required that measure to what degree and under which preconditions simulation models adequately represent reality, and thus allow their usage for virtual testing of driving functions. Especially in research and development areas related to the safety impacts of the "open world", there is a significant shortage of real-world data to parametrize and/or validate simulations - especially with respect to the behavior of human traffic participants, whom automated vehicles will meet in mixed traffic. This paper presents the intermediate results of the German AVEAS research project (www.aveas.org) which aims at developing methods and metrics for the harmonized, systematic, and scalable acquisition of real-world data for virtual verification and validation of advanced driver assistance systems and automated driving, and establishing an online database following the FAIR principles. | [
"['Leon Eisemann' 'Mirjam Fehling-Kaschek' 'Silke Forkert'\n 'Andreas Forster' 'Henrik Gommel' 'Susanne Guenther' 'Stephan Hammer'\n 'David Hermann' 'Marvin Klemp' 'Benjamin Lickert' 'Florian Luettner'\n 'Robin Moss' 'Nicole Neis' 'Maria Pohle' 'Dominik Schreiber'\n 'Cathrina Sowa' 'Daniel Stadler' 'Janina Stompe' 'Michael Strobelt'\n 'David Unger' 'Jens Ziehn']"
]
|
null | null | 2405.06293 | null | null | http://arxiv.org/pdf/2405.06293v1 | 2024-05-10T07:51:26Z | 2024-05-10T07:51:26Z | Machine learning for reconstruction of polarity inversion lines from
solar filaments | Solar filaments are well-known tracers of polarity inversion lines that separate two opposite magnetic polarities on the solar photosphere. Because observations of filaments began long before the systematic observations of solar magnetic fields, historical filament catalogs can facilitate the reconstruction of magnetic polarity maps at times when direct magnetic observations were not yet available. In practice, this reconstruction is often ambiguous and typically performed manually. We propose an automatic approach based on a machine-learning model that generates a variety of magnetic polarity maps consistent with filament observations. To evaluate the model and discuss the results we use the catalog of solar filaments and polarity maps compiled by McIntosh. We realize that the process of manual compilation of polarity maps includes not only information on filaments, but also a large amount of prior information, which is difficult to formalize. In order to compensate for the lack of prior knowledge for the machine-learning model, we provide it with polarity information at several reference points. We demonstrate that this process, which can be considered as the user-guided reconstruction or super-resolution, leads to polarity maps that are reasonably close to hand-drawn ones, and additionally allows for uncertainty estimation. | [
"['V. Kisielius' 'E. Illarionov']"
]
|
null | null | 2405.06298 | null | null | http://arxiv.org/pdf/2405.06298v1 | 2024-05-10T08:02:20Z | 2024-05-10T08:02:20Z | PUMA: margin-based data pruning | Deep learning has been able to outperform humans in terms of classification accuracy in many tasks. However, to achieve robustness to adversarial perturbations, the best methodologies require to perform adversarial training on a much larger training set that has been typically augmented using generative models (e.g., diffusion models). Our main objective in this work, is to reduce these data requirements while achieving the same or better accuracy-robustness trade-offs. We focus on data pruning, where some training samples are removed based on the distance to the model classification boundary (i.e., margin). We find that the existing approaches that prune samples with low margin fail to increase robustness when we add a lot of synthetic data, and explain this situation with a perceptron learning task. Moreover, we find that pruning high margin samples for better accuracy increases the harmful impact of mislabeled perturbed data in adversarial training, hurting both robustness and accuracy. We thus propose PUMA, a new data pruning strategy that computes the margin using DeepFool, and prunes the training samples of highest margin without hurting performance by jointly adjusting the training attack norm on the samples of lowest margin. We show that PUMA can be used on top of the current state-of-the-art methodology in robustness, and it is able to significantly improve the model performance unlike the existing data pruning strategies. Not only does PUMA achieve similar robustness with less data, but it also significantly increases the model accuracy, improving the performance trade-off. | [
"['Javier Maroto' 'Pascal Frossard']"
]
|
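Given per-sample margins (assumed precomputed, e.g. as DeepFool perturbation norms), the pruning rule above can be sketched as below; the per-sample attack-budget adjustment is shown only as an illustrative linear schedule, not the paper's exact rule.

```python
import numpy as np

def puma_style_prune(margins: np.ndarray, prune_frac: float, eps_base: float):
    # Drop the `prune_frac` fraction of samples with the *largest* margin
    # (farthest from the decision boundary), then scale the adversarial-
    # training attack budget of the kept samples with their margin so the
    # lowest-margin samples receive the weakest attacks.
    order = np.argsort(margins)                        # ascending margin
    keep = order[: int(len(order) * (1 - prune_frac))]
    eps = eps_base * margins[keep] / margins[keep].max()
    return keep, eps
```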
null | null | 2405.06301 | null | null | http://arxiv.org/pdf/2405.06301v1 | 2024-05-10T08:09:53Z | 2024-05-10T08:09:53Z | Learning from String Sequences | The Universal Similarity Metric (USM) has been demonstrated to give practically useful measures of "similarity" between sequence data. Here we have used the USM as an alternative distance metric in a K-Nearest Neighbours (K-NN) learner to allow effective pattern recognition of variable length sequence data. We compare this USM approach with the commonly used string-to-word vector approach. Our experiments have used two data sets of divergent domains: (1) spam email filtering and (2) protein subcellular localization. Our results with this data reveal that the USM-based K-NN learner (1) gives predictions with higher classification accuracy than those output by techniques that use the string-to-word vector approach, and (2) can be used to generate reliable probability forecasts. | [
"['David Lindsay' 'Sian Lindsay']"
]
|
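In practice the USM is approximated by the Normalized Compression Distance (NCD) computed with an off-the-shelf compressor. A minimal sketch of the USM-based K-NN learner described above (zlib stands in for whichever compressor the authors used, which is an assumption):

```python
import zlib
import numpy as np

def ncd(x: bytes, y: bytes) -> float:
    # Normalized Compression Distance: a computable stand-in for the USM.
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

def knn_predict(query: bytes, train_seqs, train_labels, k=5):
    # Plain K-NN majority vote using NCD as the distance between raw
    # variable-length sequences; no feature vectors are ever built.
    d = np.array([ncd(query, s) for s in train_seqs])
    votes = [train_labels[i] for i in np.argsort(d)[:k]]
    return max(set(votes), key=votes.count)
```

Because the distance operates directly on raw byte sequences, no string-to-word vectorization step is needed, which is the contrast the abstract draws.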
null | null | 2405.06306 | null | null | http://arxiv.org/pdf/2405.06306v1 | 2024-05-10T08:31:04Z | 2024-05-10T08:31:04Z | A NLP Approach to "Review Bombing" in Metacritic PC Videogames User
Ratings | Many videogames suffer "review bombing" when rated by users: a large volume of unusually low scores that in many cases do not reflect the real quality of the product. By taking Metacritic's 50,000+ user score aggregations for PC games in English language, we use a Natural Language Processing (NLP) approach to try to understand the main words and concepts appearing in such cases, reaching a 0.88 accuracy on a validation set when distinguishing between just bad ratings and review bombings. By uncovering and analyzing the patterns driving this phenomenon, these results could be used to further mitigate these situations. | [
"['Javier Coronado-Blázquez']"
]
|
null | null | 2405.06312 | null | null | http://arxiv.org/pdf/2405.06312v1 | 2024-05-10T08:34:46Z | 2024-05-10T08:34:46Z | FedGCS: A Generative Framework for Efficient Client Selection in
Federated Learning via Gradient-based Optimization | Federated Learning faces significant challenges in statistical and system heterogeneity, along with high energy consumption, necessitating efficient client selection strategies. Traditional approaches, including heuristic and learning-based methods, fall short of addressing these complexities holistically. In response, we propose FedGCS, a novel generative client selection framework that innovatively recasts the client selection process as a generative task. Drawing inspiration from the methodologies used in large language models, FedGCS efficiently encodes abundant decision-making knowledge within a continuous representation space, enabling efficient gradient-based optimization to search for optimal client selection that will be finally output via generation. The framework comprises four steps: (1) automatic collection of diverse "selection-score" pair data using classical client selection methods; (2) training an encoder-evaluator-decoder framework on this data to construct a continuous representation space; (3) employing gradient-based optimization in this space for optimal client selection; (4) generating the final optimal client selection via using beam search for the well-trained decoder. FedGCS outperforms traditional methods by being more comprehensive, generalizable, and efficient, simultaneously optimizing for model performance, latency, and energy consumption. The effectiveness of FedGCS is proven through extensive experimental analyses. | [
"['Zhiyuan Ning' 'Chunlin Tian' 'Meng Xiao' 'Wei Fan' 'Pengyang Wang'\n 'Li Li' 'Pengfei Wang' 'Yuanchun Zhou']"
]
|
null | null | 2405.06330 | null | null | http://arxiv.org/pdf/2405.06330v2 | 2024-06-30T11:19:59Z | 2024-05-10T09:03:12Z | Interpretable Multi-task Learning with Shared Variable Embeddings | This paper proposes a general interpretable predictive system with shared information. The system is able to perform predictions in a multi-task setting where distinct tasks are not bound to have the same input/output structure. Embeddings of input and output variables in a common space are obtained, where the input embeddings are produced through attending to a set of shared embeddings, reused across tasks. All the embeddings are treated as model parameters and learned. Specific restrictions on the space of shared embeddings and the sparsity of the attention mechanism are considered. Experiments show that the introduction of shared embeddings does not deteriorate the results obtained from a vanilla variable embeddings method. We run a number of further ablations. Inducing sparsity in the attention mechanism leads to both an increase in accuracy and a significant decrease in the number of training steps required. Shared embeddings provide a measure of interpretability in terms of both a qualitative assessment and the ability to map specific shared embeddings to pre-defined concepts that are not tailored to the considered model. There seems to be a trade-off between accuracy and interpretability. The basic shared embeddings method favors interpretability, whereas the sparse attention method promotes accuracy. The results lead to the conclusion that variable embedding methods may be extended with shared information to provide increased interpretability and accuracy. | [
"['Maciej Żelaszczyk' 'Jacek Mańdziuk']"
]
|
null | null | 2405.06331 | null | null | http://arxiv.org/pdf/2405.06331v1 | 2024-05-10T09:03:27Z | 2024-05-10T09:03:27Z | LMD3: Language Model Data Density Dependence | We develop a methodology for analyzing language model task performance at the individual example level based on training data density estimation. Experiments with paraphrasing as a controlled intervention on finetuning data demonstrate that increasing the support in the training distribution for specific test queries results in a measurable increase in density, which is also a significant predictor of the performance increase caused by the intervention. Experiments with pretraining data demonstrate that we can explain a significant fraction of the variance in model perplexity via density measurements. We conclude that our framework can provide statistical evidence of the dependence of a target model's predictions on subsets of its training data, and can more generally be used to characterize the support (or lack thereof) in the training data for a given test task. | [
"['John Kirchenbauer' 'Garrett Honke' 'Gowthami Somepalli' 'Jonas Geiping'\n 'Daphne Ippolito' 'Katherine Lee' 'Tom Goldstein' 'David Andre']"
]
|
null | null | 2405.06361 | null | null | http://arxiv.org/pdf/2405.06361v1 | 2024-05-10T09:56:02Z | 2024-05-10T09:56:02Z | Certified $\ell_2$ Attribution Robustness via Uniformly Smoothed
Attributions | Model attribution is a popular tool to explain the rationales behind model predictions. However, recent work suggests that the attributions are vulnerable to minute perturbations, which can be added to input samples to fool the attributions while maintaining the prediction outputs. Although empirical studies have shown positive performance via adversarial training, an effective certified defense method is urgently needed to understand the robustness of attributions. In this work, we propose a uniform smoothing technique that augments the vanilla attributions with noise uniformly sampled from a certain space. It is proved that, for all perturbations within the attack region, the cosine similarity between the uniformly smoothed attribution of the perturbed sample and that of the unperturbed sample is guaranteed to be lower bounded. We also derive alternative formulations of the certification that are equivalent to the original one and provide the maximum size of perturbation or the minimum smoothing radius such that the attribution cannot be perturbed. We evaluate the proposed method on three datasets and show that the proposed method can effectively protect the attributions from attacks, regardless of the architecture of networks, training schemes and the size of the datasets. | [
"['Fan Wang' 'Adams Wai-Kin Kong']"
]
|
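The smoothing operation this certificate relies on is a simple Monte-Carlo average. In the sketch below, `attr_fn` is any base attribution method (e.g. a gradient saliency map), and the sampling region is assumed to be an L-infinity box for simplicity; the paper's certified sampling space may differ.

```python
import torch

def uniform_smoothed_attribution(attr_fn, x: torch.Tensor, radius: float,
                                 n_samples: int = 64) -> torch.Tensor:
    # Monte-Carlo estimate of the uniformly smoothed attribution: average
    # the base attribution over inputs perturbed by noise drawn uniformly
    # from [-radius, radius]^d around x.
    acc = torch.zeros_like(x)
    for _ in range(n_samples):
        noise = (torch.rand_like(x) * 2.0 - 1.0) * radius
        acc = acc + attr_fn(x + noise)
    return acc / n_samples
```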
null | null | 2405.06363 | null | null | http://arxiv.org/pdf/2405.06363v1 | 2024-05-10T09:58:47Z | 2024-05-10T09:58:47Z | Projection by Convolution: Optimal Sample Complexity for Reinforcement
Learning in Continuous-Space MDPs | We consider the problem of learning an $\varepsilon$-optimal policy in a general class of continuous-space Markov decision processes (MDPs) having smooth Bellman operators. Given access to a generative model, we achieve rate-optimal sample complexity by performing a simple, \emph{perturbed} version of least-squares value iteration with orthogonal trigonometric polynomials as features. Key to our solution is a novel projection technique based on ideas from harmonic analysis. Our $\widetilde{\mathcal{O}}(\epsilon^{-2-d/(\nu+1)})$ sample complexity, where $d$ is the dimension of the state-action space and $\nu$ the order of smoothness, recovers the state-of-the-art result of discretization approaches for the special case of Lipschitz MDPs $(\nu=0)$. At the same time, for $\nu\to\infty$, it recovers and greatly generalizes the $\mathcal{O}(\epsilon^{-2})$ rate of low-rank MDPs, which are more amenable to regression approaches. In this sense, our result bridges the gap between two popular but conflicting perspectives on continuous-space MDPs. | [
"['Davide Maran' 'Alberto Maria Metelli' 'Matteo Papini'\n 'Marcello Restelli']"
]
|
null | null | 2405.06368 | null | null | http://arxiv.org/pdf/2405.06368v2 | 2024-05-28T09:41:17Z | 2024-05-10T10:10:37Z | DP-DyLoRA: Fine-Tuning Transformer-Based Models On-Device under
Differentially Private Federated Learning using Dynamic Low-Rank Adaptation | Federated learning (FL) allows clients in an Internet of Things (IoT) system to collaboratively train a global model without sharing their local data with a server. However, clients' contributions to the server can still leak sensitive information. Differential privacy (DP) addresses such leakage by providing formal privacy guarantees, with mechanisms that add randomness to the clients' contributions. The randomness makes it infeasible to train large transformer-based models, common in modern IoT systems. In this work, we empirically evaluate the practicality of fine-tuning large-scale on-device transformer-based models with differential privacy in a federated learning system. We conduct comprehensive experiments on various system properties for tasks spanning a multitude of domains: speech recognition, computer vision (CV) and natural language understanding (NLU). Our results show that full fine-tuning under differentially private federated learning (DP-FL) generally leads to huge performance degradation which can be alleviated by reducing the dimensionality of contributions through parameter-efficient fine-tuning (PEFT). Our benchmarks of existing DP-PEFT methods show that DP-Low-Rank Adaptation (DP-LoRA) consistently outperforms other methods. An even more promising approach, DyLoRA, makes the low rank variable, but when naively combined with FL it would straightforwardly break differential privacy. We therefore propose an adaptation method that can be combined with differential privacy and call it DP-DyLoRA. Finally, we are able to reduce the accuracy degradation and word error rate (WER) increase due to DP to less than 2% and 7% respectively with 1 million clients and a stringent privacy budget of $\epsilon=2$. | [
"['Jie Xu' 'Karthikeyan Saravanan' 'Rogier van Dalen' 'Haaris Mehmood'\n 'David Tuckey' 'Mete Ozay']"
]
|
null | null | 2405.06394 | null | null | http://arxiv.org/pdf/2405.06394v2 | 2024-05-13T20:27:34Z | 2024-05-10T11:08:20Z | Memory Mosaics | Memory Mosaics are networks of associative memories working in concert to achieve a prediction task of interest. Like transformers, memory mosaics possess compositional capabilities and in-context learning capabilities. Unlike transformers, memory mosaics achieve these capabilities in comparatively transparent ways. We demonstrate these capabilities on toy examples and we also show that memory mosaics perform as well or better than transformers on medium-scale language modeling tasks. | [
"['Jianyu Zhang' 'Niklas Nolte' 'Ranajoy Sadhukhan' 'Beidi Chen'\n 'Léon Bottou']"
]
|
null | null | 2405.06399 | null | null | http://arxiv.org/pdf/2405.06399v1 | 2024-05-10T11:22:31Z | 2024-05-10T11:22:31Z | Program Synthesis using Inductive Logic Programming for the Abstraction
and Reasoning Corpus | The Abstraction and Reasoning Corpus (ARC) is a general artificial intelligence benchmark that is currently unsolvable by any Machine Learning method, including Large Language Models (LLMs). It demands strong generalization and reasoning capabilities which are known to be weaknesses of Neural Network based systems. In this work, we propose a Program Synthesis system that uses Inductive Logic Programming (ILP), a branch of Symbolic AI, to solve ARC. We have manually defined a simple Domain Specific Language (DSL) that corresponds to a small set of object-centric abstractions relevant to ARC. This is the Background Knowledge used by ILP to create Logic Programs that provide reasoning capabilities to our system. The full system is capable of generalizing to unseen tasks, since ILP can create Logic Program(s) from few examples (in the case of ARC, pairs of Input-Output grid examples for each task). These Logic Programs are able to generate Objects present in the Output grid, and the combination of these can form a complete program that transforms an Input grid into an Output grid. We randomly chose some tasks from ARC that don't require more than the small number of Object primitives we implemented, and show that, given only these, our system can solve tasks that each require different reasoning. | [
"['Filipe Marinho Rocha' 'Inês Dutra' 'Vítor Santos Costa']"
]
|
null | null | 2405.06409 | null | null | http://arxiv.org/pdf/2405.06409v1 | 2024-05-10T11:43:35Z | 2024-05-10T11:43:35Z | Visualizing Neural Network Imagination | In certain situations, neural networks will represent environment states in their hidden activations. Our goal is to visualize what environment states the networks are representing. We experiment with a recurrent neural network (RNN) architecture with a decoder network at the end. After training, we apply the decoder to the intermediate representations of the network to visualize what they represent. We define a quantitative interpretability metric and use it to demonstrate that hidden states can be highly interpretable on a simple task. We also develop autoencoder and adversarial techniques and show that they benefit interpretability. | [
"['Nevan Wichers' 'Victor Tao' 'Riccardo Volpato' 'Fazl Barez']"
]
|
null | null | 2405.06415 | null | null | http://arxiv.org/pdf/2405.06415v1 | 2024-05-10T11:55:27Z | 2024-05-10T11:55:27Z | Generalization analysis with deep ReLU networks for metric and
similarity learning | While considerable theoretical progress has been devoted to the study of metric and similarity learning, a clear picture of generalization is still missing. In this paper, we study the generalization performance of metric and similarity learning by leveraging the specific structure of the true metric (the target function). Specifically, by deriving the explicit form of the true metric for metric and similarity learning with the hinge loss, we construct a structured deep ReLU neural network as an approximation of the true metric, whose approximation ability relies on the network complexity. Here, the network complexity corresponds to the depth, the number of nonzero weights, and the number of computation units of the network. Considering the hypothesis space consisting of such structured deep ReLU networks, we develop excess generalization error bounds for the metric and similarity learning problem by carefully estimating the approximation error and the estimation error. An optimal excess risk rate is derived by choosing the proper capacity of the constructed hypothesis space. To the best of our knowledge, this is the first-ever-known generalization analysis providing excess generalization error bounds for metric and similarity learning. In addition, we investigate the properties of the true metric of metric and similarity learning with general losses. | [
"['Junyu Zhou' 'Puyu Wang' 'Ding-Xuan Zhou']"
]
|
null | null | 2405.06418 | null | null | http://arxiv.org/pdf/2405.06418v2 | 2024-06-03T14:27:59Z | 2024-05-10T12:03:53Z | PAC-Bayesian Generalization Bounds for Knowledge Graph Representation
Learning | While a number of knowledge graph representation learning (KGRL) methods have been proposed over the past decade, very few theoretical analyses have been conducted on them. In this paper, we present the first PAC-Bayesian generalization bounds for KGRL methods. To analyze a broad class of KGRL models, we propose a generic framework named ReED (Relation-aware Encoder-Decoder), which consists of a relation-aware message passing encoder and a triplet classification decoder. Our ReED framework can express at least 15 different existing KGRL models, including not only graph neural network-based models such as R-GCN and CompGCN but also shallow-architecture models such as RotatE and ANALOGY. Our generalization bounds for the ReED framework provide theoretical grounds for the commonly used tricks in KGRL, e.g., parameter-sharing and weight normalization schemes, and guide desirable design choices for practical KGRL methods. We empirically show that the critical factors in our generalization bounds can explain actual generalization errors on three real-world knowledge graphs. | [
"['Jaejun Lee' 'Minsung Hwang' 'Joyce Jiyoung Whang']"
]
|
null | null | 2405.06419 | null | null | http://arxiv.org/pdf/2405.06419v1 | 2024-05-10T12:10:22Z | 2024-05-10T12:10:22Z | Time Evidence Fusion Network: Multi-source View in Long-Term Time Series
Forecasting | In real-world scenarios, time series forecasting often demands timeliness, making research on model backbones a perennially hot topic. To meet these performance demands, we propose a novel backbone from the perspective of information fusion. Introducing the Basic Probability Assignment (BPA) Module and the Time Evidence Fusion Network (TEFN), based on evidence theory, allows us to achieve superior performance. Moreover, the perspective of multi-source information fusion effectively improves the accuracy of forecasting. Because the BPA is derived from fuzzy theory, TEFN also offers considerable interpretability. In experiments on real data, TEFN achieves partially state-of-the-art results, with low errors comparable to PatchTST and operating efficiency surpassing performance-oriented models such as DLinear. Meanwhile, TEFN shows high robustness, with small error fluctuations under random hyperparameter selection. TEFN is not a model that achieves the ultimate in any single aspect, but one that balances performance, accuracy, stability, and interpretability. | [
"['Tianxiang Zhan' 'Yuanpeng He' 'Zhen Li' 'Yong Deng']"
]
|
null | null | 2405.06424 | null | null | http://arxiv.org/pdf/2405.06424v2 | 2024-05-19T17:35:43Z | 2024-05-10T12:14:11Z | Improving Instruction Following in Language Models through Proxy-Based
Uncertainty Estimation | Assessing response quality to instructions in language models is vital but challenging due to the complexity of human language across different contexts. This complexity often results in ambiguous or inconsistent interpretations, making accurate assessment difficult. To address this issue, we propose a novel Uncertainty-aware Reward Model (URM) that introduces a robust uncertainty estimation for the quality of paired responses based on Bayesian approximation. Trained with preference datasets, our uncertainty-enabled proxy not only scores rewards for responses but also evaluates their inherent uncertainty. Empirical results demonstrate significant benefits of incorporating the proposed proxy into language model training. Our method boosts the instruction following capability of language models by refining data curation for training and improving policy optimization objectives, thereby surpassing existing methods by a large margin on benchmarks such as Vicuna and MT-bench. These findings highlight that our proposed approach substantially advances language model training and paves a new way of harnessing uncertainty within language models. | [
"['JoonHo Lee' 'Jae Oh Woo' 'Juree Seok' 'Parisa Hassanzadeh'\n 'Wooseok Jang' 'JuYoun Son' 'Sima Didari' 'Baruch Gutow' 'Heng Hao'\n 'Hankyu Moon' 'Wenjun Hu' 'Yeong-Dae Kwon' 'Taehee Lee' 'Seungjai Min']"
]
|
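The abstract above describes a reward model whose uncertainty comes from Bayesian approximation. One common such approximation is Monte Carlo dropout, sketched below in PyTorch; whether URM uses exactly this mechanism is an assumption here, and all dimensions are placeholders.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

class RewardHead(nn.Module):
    """Scores a response embedding; dropout stays on to approximate a posterior."""
    def __init__(self, dim=128, p=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, 256), nn.ReLU(), nn.Dropout(p), nn.Linear(256, 1)
        )

    def forward(self, z):
        return self.net(z).squeeze(-1)

head = RewardHead()
head.train()                    # keep dropout active at inference (MC dropout)
z = torch.randn(8, 128)         # stand-in embeddings of (prompt, response) pairs

with torch.no_grad():
    samples = torch.stack([head(z) for _ in range(32)])  # 32 stochastic passes
reward_mean, reward_std = samples.mean(0), samples.std(0)
print(reward_mean.shape, reward_std.shape)  # per-response score and uncertainty
```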
null | null | 2405.06425 | null | null | http://arxiv.org/pdf/2405.06425v1 | 2024-05-10T12:15:02Z | 2024-05-10T12:15:02Z | Koopman-Based Surrogate Modelling of Turbulent Rayleigh-Bénard
Convection | Several related works have introduced Koopman-based Machine Learning architectures as surrogate models for dynamical systems. These architectures aim to learn non-linear measurements (also known as observables) of the system's state that evolve by a linear operator and are, therefore, amenable to model-based linear control techniques. So far, mainly simple systems have been targeted, and Koopman architectures as reduced-order models for more complex dynamics have not been fully explored. Hence, we use a Koopman-inspired architecture called the Linear Recurrent Autoencoder Network (LRAN) for learning reduced-order dynamics in convection flows of a Rayleigh-Bénard Convection (RBC) system at different degrees of turbulence. The data is obtained from direct numerical simulations of the RBC system. A traditional fluid dynamics method, the Kernel Dynamic Mode Decomposition (KDMD), is used as a baseline for comparison with the LRAN. For both methods, we performed hyperparameter sweeps to identify optimal settings. We used a Normalized Sum of Square Error measure for the quantitative evaluation of the models, and we also studied the model predictions qualitatively. We obtained more accurate predictions with the LRAN than with KDMD in the most turbulent setting. We conjecture that this is due to the LRAN's flexibility in learning complicated observables from data, thereby serving as a viable surrogate model for the main structure of fluid dynamics in turbulent convection settings. In contrast, KDMD was more effective in lower turbulence settings due to the repetitiveness of the convection flow. The feasibility of Koopman-based surrogate models for turbulent fluid flows opens possibilities for efficient model-based control techniques useful in a variety of industrial settings. | [
"['Thorben Markmann' 'Michiel Straat' 'Barbara Hammer']"
]
|
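A minimal PyTorch sketch of the Koopman-autoencoder idea behind the LRAN: encode snapshots, force the latent dynamics to be linear via a learned operator, and decode back. Dimensions, losses, and data here are stand-ins, not the paper's RBC setup.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
state_dim, latent_dim = 64, 8

encoder = nn.Sequential(nn.Linear(state_dim, 128), nn.Tanh(), nn.Linear(128, latent_dim))
decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.Tanh(), nn.Linear(128, state_dim))
K = nn.Linear(latent_dim, latent_dim, bias=False)   # learned linear (Koopman) operator

opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters())
                       + list(K.parameters()), lr=1e-3)

x_t = torch.randn(256, state_dim)      # stand-in flow snapshots at time t
x_next = torch.randn(256, state_dim)   # ... and at time t+1

for _ in range(100):
    z_t, z_next = encoder(x_t), encoder(x_next)
    loss = (nn.functional.mse_loss(decoder(z_t), x_t)     # reconstruction
            + nn.functional.mse_loss(K(z_t), z_next))      # linear latent dynamics
    opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```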
null | null | 2405.06433 | null | null | http://arxiv.org/pdf/2405.06433v2 | 2024-05-22T06:09:54Z | 2024-05-10T12:25:06Z | Fair Mixed Effects Support Vector Machine | To ensure unbiased and ethical automated predictions, fairness must be a core principle in machine learning applications. Fairness in machine learning aims to mitigate biases present in the training data and model imperfections that could lead to discriminatory outcomes. This is achieved by preventing the model from making decisions based on sensitive characteristics like ethnicity or sexual orientation. A fundamental assumption in machine learning is the independence of observations. However, this assumption often does not hold true for data describing social phenomena, where data points are often clustered. Hence, if machine learning models do not account for the cluster correlations, the results may be biased. The bias is especially high in cases where the cluster assignment is correlated with the variable of interest. We present a fair mixed effects support vector machine algorithm that can handle both problems simultaneously. With a reproducible simulation study we demonstrate the impact of clustered data on the quality of fair machine learning predictions. | [
"['João Vitor Pamplona' 'Jan Pablo Burgard']"
]
|
null | null | 2405.06443 | null | null | http://arxiv.org/pdf/2405.06443v1 | 2024-05-10T12:48:57Z | 2024-05-10T12:48:57Z | Residual-based Attention Physics-informed Neural Networks for Efficient
Spatio-Temporal Lifetime Assessment of Transformers Operated in Renewable
Power Plants | Transformers are vital assets for the reliable and efficient operation of power and energy systems. They support the integration of renewables into the grid through improved grid stability and operation efficiency. Monitoring the health of transformers is essential to ensure grid reliability and efficiency. Thermal insulation ageing is a key transformer failure mode, which is generally tracked by monitoring the hotspot temperature (HST). However, HST measurement is complex and expensive, and the HST is often estimated from indirect measurements. Existing computationally-efficient HST models focus on space-agnostic thermal models, providing worst-case HST estimates. This article introduces an efficient spatio-temporal model for transformer winding temperature and ageing estimation, which leverages physics-based partial differential equations (PDEs) with data-driven Neural Networks (NN) in a Physics-Informed Neural Networks (PINNs) configuration to improve prediction accuracy and acquire spatio-temporal resolution. The computational efficiency of the PINN model is improved through the implementation of the Residual-Based Attention scheme, which accelerates the convergence of the PINN model. PINN-based oil temperature predictions are used to estimate spatio-temporal transformer winding temperature values, which are validated through PDE resolution models and fiber optic sensor measurements, respectively. Furthermore, the spatio-temporal transformer ageing model is inferred, aiding transformer health management decision-making and providing insights into localized thermal ageing phenomena in the transformer insulation. Results are validated with a distribution transformer operated on a floating photovoltaic power plant. | [
"['Ibai Ramirez' 'Joel Pino' 'David Pardo' 'Mikel Sanz' 'Luis del Rio'\n 'Alvaro Ortiz' 'Kateryna Morozovska' 'Jose I. Aizpurua']"
]
|
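The residual-based attention idea above can be sketched on a toy 1D Poisson problem: collocation-point weights accumulate according to the normalized PDE residual, so stubborn points receive more emphasis. The decay and update rates, the +1 baseline weight, and the toy PDE are assumptions for illustration, not the paper's transformer thermal model.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(),
                    nn.Linear(32, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

x = torch.linspace(0.01, 0.99, 128).reshape(-1, 1).requires_grad_(True)
f = -(torch.pi ** 2) * torch.sin(torch.pi * x)   # source term of u'' = f
w = torch.zeros(x.shape[0])                      # residual-based attention weights
gamma, eta = 0.99, 0.01                          # assumed decay / update rates

for _ in range(500):
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    res = d2u - f                                # PDE residual per collocation point
    with torch.no_grad():                        # RBA-style weight accumulation
        w = gamma * w + eta * res.abs().squeeze() / res.abs().max()
    bc = net(torch.tensor([[0.0], [1.0]])).pow(2).mean()   # boundary conditions u(0)=u(1)=0
    # +1.0 keeps a baseline weight so early training is not suppressed (a choice here).
    loss = ((w.detach().unsqueeze(1) + 1.0) * res ** 2).mean() + bc
    opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```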
null | null | 2405.06463 | null | null | http://arxiv.org/pdf/2405.06463v2 | 2024-05-13T16:46:34Z | 2024-05-10T13:15:42Z | MRSegmentator: Robust Multi-Modality Segmentation of 40 Classes in MRI
and CT Sequences | Purpose: To introduce a deep learning model capable of multi-organ segmentation in MRI scans, offering a solution to the current limitations in MRI analysis due to challenges in resolution, standardized intensity values, and variability in sequences. Materials and Methods: The model was trained on 1,200 manually annotated MRI scans from the UK Biobank, 221 in-house MRI scans and 1228 CT scans, leveraging cross-modality transfer learning from CT segmentation models. A human-in-the-loop annotation workflow was employed to efficiently create high-quality segmentations. The model's performance was evaluated on the NAKO and AMOS22 datasets, containing 600 and 60 MRI examinations, respectively. The Dice Similarity Coefficient (DSC) and Hausdorff Distance (HD) were used to assess segmentation accuracy. The model will be open sourced. Results: The model showcased high accuracy in segmenting well-defined organs, achieving DSC scores of 0.97 for the right and left lungs, and 0.95 for the heart. It also demonstrated robustness in organs like the liver (DSC: 0.96) and kidneys (DSC: 0.95 left, 0.95 right), which present more variability. However, segmentation of smaller and complex structures such as the portal and splenic veins (DSC: 0.54) and adrenal glands (DSC: 0.65 left, 0.61 right) revealed the need for further model optimization. Conclusion: The proposed model is a robust tool for accurate segmentation of 40 anatomical structures in MRI and CT images. By leveraging cross-modality learning and interactive annotation, the model achieves strong performance and generalizability across diverse datasets, making it a valuable resource for researchers and clinicians. It is open source and can be downloaded from https://github.com/hhaentze/MRSegmentator. | [
"['Hartmut Häntze' 'Lina Xu' 'Felix J. Dorfner' 'Leonhard Donle'\n 'Daniel Truhn' 'Hugo Aerts' 'Mathias Prokop' 'Bram van Ginneken'\n 'Alessa Hering' 'Lisa C. Adams' 'Keno K. Bressem']"
]
|
null | null | 2405.06464 | null | null | http://arxiv.org/pdf/2405.06464v3 | 2024-05-25T10:46:46Z | 2024-05-10T13:16:23Z | Single-seed generation of Brownian paths and integrals for adaptive and
high order SDE solvers | Despite the success of adaptive time-stepping in ODE simulation, it has so far seen few applications for Stochastic Differential Equations (SDEs). To simulate SDEs adaptively, methods such as the Virtual Brownian Tree (VBT) have been developed, which can generate Brownian motion (BM) non-chronologically. However, in most applications, knowing only the values of Brownian motion is not enough to achieve a high order of convergence; for that, we must compute time-integrals of BM such as $\int_s^t W_r \, dr$. With the aim of using high order SDE solvers adaptively, we extend the VBT to generate these integrals of BM in addition to the Brownian increments. A JAX-based implementation of our construction is included in the popular Diffrax library (https://github.com/patrick-kidger/diffrax). Since the entire Brownian path produced by VBT is uniquely determined by a single PRNG seed, previously generated samples need not be stored, which results in a constant memory footprint and enables experiment repeatability and strong error estimation. Based on binary search, the VBT's time complexity is logarithmic in the tolerance parameter $\varepsilon$. Unlike the original VBT algorithm, which was only precise at some dyadic times, we prove that our construction exactly matches the joint distribution of the Brownian motion and its time integrals at any query times, provided they are at least $\varepsilon$ apart. We present two applications of adaptive high order solvers enabled by our new VBT. Using adaptive solvers to simulate a high-volatility CIR model, we achieve more than twice the convergence order of constant stepping. We apply an adaptive third order underdamped or kinetic Langevin solver to an MCMC problem, where our approach outperforms the No U-Turn Sampler, while using only a tenth of its function evaluations. | [
"['Andraž Jelinčič' 'James Foster' 'Patrick Kidger']"
]
|
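The single-seed idea in the abstract above (Brownian values reproducible from a seed alone, queried by binary search) can be illustrated with a plain Brownian-bridge bisection in NumPy. This toy version only returns W(t), not the time-integrals that are the paper's contribution, and the per-node seeding scheme is an illustrative choice.

```python
import numpy as np

def brownian_at(t, T=1.0, seed=0, tol=2**-12):
    """Query W(t) without storing the path: bisect [0, T] with Brownian
    bridges, deriving each midpoint's randomness from a per-node seed."""
    s, ws = 0.0, 0.0
    u = T
    wu = np.random.default_rng((seed, 0)).normal(0.0, np.sqrt(T))  # W(T)
    node = 1
    while u - s > tol:
        m = 0.5 * (s + u)
        # Brownian bridge: W(m) | W(s), W(u) is Gaussian with these moments.
        mean = ws + (wu - ws) * (m - s) / (u - s)
        std = np.sqrt((m - s) * (u - m) / (u - s))
        wm = mean + std * np.random.default_rng((seed, node)).normal()
        if t <= m:
            u, wu, node = m, wm, 2 * node          # recurse into the left child
        else:
            s, ws, node = m, wm, 2 * node + 1      # recurse into the right child
    # Linear interpolation at the leaf; the RMS error is O(sqrt(tol)).
    return ws + (wu - ws) * (t - s) / (u - s)

print(brownian_at(0.3), brownian_at(0.3))  # identical: the path is fixed by the seed
print(brownian_at(0.7, seed=1))            # a different seed gives a different path
```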
null | null | 2405.06480 | null | null | http://arxiv.org/pdf/2405.06480v1 | 2024-05-10T13:57:13Z | 2024-05-10T13:57:13Z | Incentive-compatible Bandits: Importance Weighting No More | We study the problem of incentive-compatible online learning with bandit feedback. In this class of problems, the experts are self-interested agents who might misrepresent their preferences with the goal of being selected most often. The goal is to devise algorithms which are simultaneously incentive-compatible, that is the experts are incentivised to report their true preferences, and have no regret with respect to the preferences of the best fixed expert in hindsight. \citet{freeman2020no} propose an algorithm in the full information setting with optimal $O(\sqrt{T \log(K)})$ regret and $O(T^{2/3}(K\log(K))^{1/3})$ regret in the bandit setting. In this work we propose the first incentive-compatible algorithms that enjoy $O(\sqrt{KT})$ regret bounds. We further demonstrate how simple loss-biasing allows the algorithm proposed in Freeman et al. (2020) to enjoy $\tilde{O}(\sqrt{KT})$ regret. As a byproduct of our approach we obtain the first bandit algorithm with nearly optimal regret bounds in the adversarial setting which works entirely on the observed loss sequence without the need for importance-weighted estimators. Finally, we provide an incentive-compatible algorithm that enjoys asymptotically optimal best-of-both-worlds regret guarantees, i.e., logarithmic regret in the stochastic regime as well as worst-case $O(\sqrt{KT})$ regret. | [
"['Julian Zimmert' 'Teodor V. Marinov']"
]
|
null | null | 2405.06487 | null | null | http://arxiv.org/pdf/2405.06487v1 | 2024-05-10T14:07:58Z | 2024-05-10T14:07:58Z | Improving Deep Learning Model Calibration for Cardiac Applications using
Deterministic Uncertainty Networks and Uncertainty-aware Training | Improving calibration performance in deep learning (DL) classification models is important when planning the use of DL in a decision-support setting. In such a scenario, a confident wrong prediction could lead to a lack of trust and/or harm in a high-risk application. We evaluate the impact on accuracy and calibration of two types of approach that aim to improve DL classification model calibration: deterministic uncertainty methods (DUM) and uncertainty-aware training. Specifically, we test the performance of three DUMs and two uncertainty-aware training approaches as well as their combinations. To evaluate their utility, we use two realistic clinical applications from the field of cardiac imaging: artefact detection from phase contrast cardiac magnetic resonance (CMR) and disease diagnosis from the public ACDC CMR dataset. Our results indicate that both DUMs and uncertainty-aware training can improve both accuracy and calibration in both of our applications, with DUMs generally offering the best improvements. We also investigate the combination of the two approaches, resulting in a novel deterministic uncertainty-aware training approach. This provides further improvements for some combinations of DUMs and uncertainty-aware training approaches. | [
"['Tareen Dawood' 'Bram Ruijsink' 'Reza Razavi' 'Andrew P. King'\n 'Esther Puyol-Antón']"
]
|
null | null | 2405.06522 | null | null | http://arxiv.org/pdf/2405.06522v1 | 2024-05-10T15:06:53Z | 2024-05-10T15:06:53Z | Heterogeneous Graph Neural Networks with Loss-decrease-aware Curriculum
Learning | In recent years, heterogeneous graph neural networks (HGNNs) have achieved excellent performance in handling heterogeneous information networks (HINs). Curriculum learning is a machine learning strategy where training examples are presented to a model in a structured order, starting with easy examples and gradually increasing difficulty, aiming to improve learning efficiency and generalization. To better exploit the rich information in HINs, previous methods have started to explore the use of curriculum learning strategies to train HGNNs. Specifically, these works utilize the absolute value of the loss at each training epoch to evaluate the learning difficulty of each training sample. However, the relative loss, rather than the absolute value of the loss, reveals the learning difficulty. Therefore, we propose a novel loss-decrease-aware training schedule (LDTS). LDTS uses the trend of loss decrease between training epochs to better evaluate the difficulty of training samples, thereby enhancing the curriculum learning of HGNNs for downstream tasks. Additionally, we propose a sampling strategy to alleviate training imbalance issues. Our method further demonstrates the efficacy of curriculum learning in enhancing HGNNs' capabilities. We call our method Loss-decrease-aware Heterogeneous Graph Neural Networks (LDHGNN). The code is public at https://github.com/wangyili00/LDHGNN. | [
"['Yili Wang']"
]
|
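A small NumPy sketch of the loss-decrease-aware scheduling idea: difficulty is scored by how fast each sample's loss has been dropping, not by its absolute loss, and the training pool grows from the easiest samples outward. The pacing function and synthetic loss traces are assumptions, not the paper's schedule.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_epochs = 1000, 5

# Stand-in per-sample training losses recorded at each epoch.
losses = np.cumsum(rng.normal(-0.1, 0.3, size=(n_epochs, n_samples)), axis=0) + 2.0

# Difficulty from the *trend* of loss decrease (not the absolute loss):
# samples whose loss drops fastest are treated as the easiest.
decrease = losses[0] - losses[-1]
order = np.argsort(-decrease)            # easy (largest decrease) first

def pace(epoch, total, frac0=0.3):
    """Indices of the easiest fraction of samples exposed at a given epoch."""
    frac = frac0 + (1.0 - frac0) * epoch / max(total - 1, 1)
    return order[: int(frac * n_samples)]

for e in range(n_epochs):
    batch_pool = pace(e, n_epochs)
    print(f"epoch {e}: training on {len(batch_pool)} easiest samples")
```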
null | null | 2405.06535 | null | null | http://arxiv.org/pdf/2405.06535v1 | 2024-05-10T15:27:35Z | 2024-05-10T15:27:35Z | Controllable Image Generation With Composed Parallel Token Prediction | Compositional image generation requires models to generalise well in situations where two or more input concepts do not necessarily appear together in training (compositional generalisation). Despite recent progress in compositional image generation via composing continuous sampling processes such as diffusion and energy-based models, composing discrete generative processes has remained an open challenge, with the promise of providing improvements in efficiency, interpretability and simplicity. To this end, we propose a formulation for controllable conditional generation of images via composing the log-probability outputs of discrete generative models of the latent space. Our approach, when applied alongside VQ-VAE and VQ-GAN, achieves state-of-the-art generation accuracy in three distinct settings (FFHQ, Positional CLEVR and Relational CLEVR) while attaining competitive Fréchet Inception Distance (FID) scores. Our method attains an average generation accuracy of $80.71\%$ across the studied settings. Our method also outperforms the next-best approach (ranked by accuracy) in terms of FID in seven out of nine experiments, with an average FID of $24.23$ (an average improvement of $-9.58$). Furthermore, our method offers a $2.3\times$ to $12\times$ speedup over comparable continuous compositional methods on our hardware. We find that our method can generalise to combinations of input conditions that lie outside the training data (e.g. more objects per image) in addition to offering an interpretable dimension of controllability via concept weighting. We further demonstrate that our approach can be readily applied to an open pre-trained discrete text-to-image model without any fine-tuning, allowing for fine-grained control of text-to-image generation. | [
"['Jamie Stirling' 'Noura Al-Moubayed']"
]
|
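Composing log-probability outputs of discrete generative models, as in the abstract above, reduces at sampling time to a weighted sum of log-probabilities followed by renormalization. The sketch below shows this for a single token position with made-up distributions; the codebook size and concept weights are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = 512                                  # assumed codebook size of a VQ model

# Stand-in next-token log-probabilities from two concept-conditioned models.
logp_a = np.log(rng.dirichlet(np.ones(vocab)))
logp_b = np.log(rng.dirichlet(np.ones(vocab)))

w_a, w_b = 1.0, 1.5                          # concept weights (controllability knob)
composed = w_a * logp_a + w_b * logp_b       # compose in log space
composed -= np.logaddexp.reduce(composed)    # renormalize

token = rng.choice(vocab, p=np.exp(composed))
print(token)
```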
null | null | 2405.06545 | null | null | http://arxiv.org/pdf/2405.06545v1 | 2024-05-10T15:40:50Z | 2024-05-10T15:40:50Z | Mitigating Hallucinations in Large Language Models via
Self-Refinement-Enhanced Knowledge Retrieval | Large language models (LLMs) have demonstrated remarkable capabilities across various domains, although their susceptibility to hallucination poses significant challenges for their deployment in critical areas such as healthcare. To address this issue, retrieving relevant facts from knowledge graphs (KGs) is considered a promising method. Existing KG-augmented approaches tend to be resource-intensive, requiring multiple rounds of retrieval and verification for each factoid, which impedes their application in real-world scenarios. In this study, we propose Self-Refinement-Enhanced Knowledge Graph Retrieval (Re-KGR) to augment the factuality of LLMs' responses with less retrieval effort in the medical field. Our approach leverages the attribution of next-token predictive probability distributions across different tokens and model layers to identify tokens with a high potential for hallucination, reducing verification rounds by refining the knowledge triples associated with these tokens. Moreover, we rectify inaccurate content using retrieved knowledge in the post-processing stage, which improves the truthfulness of generated responses. Experimental results on a medical dataset demonstrate that our approach can enhance the factual capability of LLMs across various foundational models, as evidenced by the highest scores on truthfulness. | [
"['Mengjia Niu' 'Hao Li' 'Jie Shi' 'Hamed Haddadi' 'Fan Mo']"
]
|
null | null | 2405.06546 | null | null | http://arxiv.org/pdf/2405.06546v1 | 2024-05-10T15:43:17Z | 2024-05-10T15:43:17Z | Sharp analysis of out-of-distribution error for "importance-weighted"
estimators in the overparameterized regime | Overparameterized models that achieve zero training error are observed to generalize well on average, but degrade in performance when faced with data that is under-represented in the training sample. In this work, we study an overparameterized Gaussian mixture model imbued with a spurious feature, and sharply analyze the in-distribution and out-of-distribution test error of a cost-sensitive interpolating solution that incorporates "importance weights". Compared to recent work (Wang et al., 2021; Behnia et al., 2022), our analysis is sharp with matching upper and lower bounds, and significantly weakens required assumptions on data dimensionality. Our error characterizations also apply to any choice of importance weights and unveil a novel tradeoff between worst-case robustness to distribution shift and average accuracy as a function of the importance weight magnitude. | [
"['Kuo-Wei Lai' 'Vidya Muthukumar']"
]
|
null | null | 2405.06553 | null | null | http://arxiv.org/pdf/2405.06553v1 | 2024-05-10T15:54:55Z | 2024-05-10T15:54:55Z | Scalable Property Valuation Models via Graph-based Deep Learning | This paper aims to enrich the capabilities of existing deep learning-based automated valuation models through an efficient graph representation of peer dependencies, thus capturing intricate spatial relationships. In particular, we develop two novel graph neural network models that effectively identify sequences of neighboring houses with similar features, employing different message passing algorithms. The first strategy considers standard spatial graph convolutions, while the second one utilizes transformer graph convolutions. This approach confers scalability to the modeling process. The experimental evaluation is conducted using a proprietary dataset comprising approximately 200,000 houses located in Santiago, Chile. We show that employing tailored graph neural networks significantly improves the accuracy of house price prediction, especially when utilizing transformer convolutional message passing layers. | [
"['Enrique Riveros' 'Carla Vairetti' 'Christian Wegmann' 'Santiago Truffa'\n 'Sebastián Maldonado']"
]
|
null | null | 2405.06558 | null | null | http://arxiv.org/pdf/2405.06558v2 | 2024-06-05T09:39:06Z | 2024-05-10T16:00:29Z | Random matrix theory improved Fréchet mean of symmetric positive
definite matrices | In this study, we consider the realm of covariance matrices in machine learning, particularly focusing on computing Fréchet means on the manifold of symmetric positive definite matrices, commonly referred to as Karcher or geometric means. Such means are leveraged in numerous machine-learning tasks. Relying on advanced statistical tools, we introduce a random matrix theory-based method that estimates Fréchet means, which is particularly beneficial when dealing with low sample support and a high number of matrices to average. Our experimental evaluation, involving both synthetic and real-world EEG and hyperspectral datasets, shows that we largely outperform state-of-the-art methods. | [
"['Florent Bouchard' 'Ammar Mian' 'Malik Tiomoko' 'Guillaume Ginolhac'\n 'Frédéric Pascal']"
]
|
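For context, the classical (non-random-matrix-theory) way to compute the Fréchet/Karcher mean that the abstract above improves upon is a fixed-point iteration on the SPD manifold, sketched here with SciPy. The paper's RMT-based estimator is not reproduced; this is only the standard baseline, with arbitrary matrix sizes.

```python
import numpy as np
from scipy.linalg import sqrtm, logm, expm, inv

def frechet_mean_spd(mats, iters=20, step=1.0):
    """Fixed-point iteration for the Karcher (geometric) mean of SPD matrices."""
    X = np.mean(mats, axis=0)                        # start at the arithmetic mean
    for _ in range(iters):
        Xh = np.real(sqrtm(X))
        Xh_inv = inv(Xh)
        # Average of logarithms in the tangent space at the current iterate.
        T = np.mean([np.real(logm(Xh_inv @ C @ Xh_inv)) for C in mats], axis=0)
        X = Xh @ np.real(expm(step * T)) @ Xh
    return X

rng = np.random.default_rng(0)
d, n_mats, n_samples = 5, 30, 8                      # deliberately low sample support
mats = []
for _ in range(n_mats):
    A = rng.normal(size=(d, n_samples))
    mats.append(A @ A.T / n_samples + 0.1 * np.eye(d))   # well-conditioned SPD samples
mean = frechet_mean_spd(np.stack(mats))
print(np.allclose(mean, mean.T), np.linalg.eigvalsh(mean).min() > 0)
```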
null | null | 2405.06561 | null | null | http://arxiv.org/pdf/2405.06561v1 | 2024-05-10T16:02:41Z | 2024-05-10T16:02:41Z | Reservoir Computing Benchmarks: a review, a taxonomy, some best
practices | Reservoir Computing is an Unconventional Computation model that performs computation on a variety of substrates, such as RNNs or physical materials. The method takes a "black-box" approach, training only the outputs of the system it is built on. As such, evaluating the computational capacity of these systems can be challenging. We review and critique the evaluation methods used in the field of Reservoir Computing. We introduce a categorisation of benchmark tasks. We review multiple examples of benchmarks from the literature as applied to reservoir computing, and note their strengths and shortcomings. We suggest ways in which benchmarks and their uses may be improved to the benefit of the reservoir computing community. | [
"['Chester Wringe' 'Martin Trefzer' 'Susan Stepney']"
]
|
null | null | 2405.06569 | null | null | http://arxiv.org/pdf/2405.06569v1 | 2024-05-10T16:12:35Z | 2024-05-10T16:12:35Z | Efficient Federated Low Rank Matrix Completion | In this work, we develop and analyze a Gradient Descent (GD) based solution, called Alternating GD and Minimization (AltGDmin), for efficiently solving the low-rank matrix completion (LRMC) problem in a federated setting. LRMC involves recovering an $n \times q$ rank-$r$ matrix $X^\star$ from a subset of its entries when $r \ll \min(n,q)$. Our theoretical guarantees (iteration and sample complexity bounds) imply that AltGDmin is the most communication-efficient solution in a federated setting, is one of the fastest, and has the second best sample complexity among all iterative solutions to LRMC. In addition, we also prove two important corollaries. (a) We provide a guarantee for AltGDmin for solving the noisy LRMC problem. (b) We show how our lemmas can be used to provide an improved sample complexity guarantee for AltMin, which is the fastest centralized solution. | [
"['Ahmed Ali Abbasi' 'Namrata Vaswani']"
]
|
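A single-node NumPy sketch of the AltGDmin template described above: a closed-form least-squares minimization for B interleaved with a projected gradient step on U. The step-size rule and initialization are illustrative assumptions, and the federated communication pattern is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
n, q, r = 60, 80, 3
X_true = rng.normal(size=(n, r)) @ rng.normal(size=(r, q))
mask = rng.random((n, q)) < 0.4                      # set of observed entries

def solve_B(U):
    """Min step: each column of B has a closed-form least-squares solution
    restricted to that column's observed entries."""
    B = np.zeros((r, q))
    for j in range(q):
        rows = mask[:, j]
        B[:, j] = np.linalg.lstsq(U[rows], X_true[rows, j], rcond=None)[0]
    return B

# Spectral-style initialization: top-r left singular vectors of zero-filled data.
U = np.linalg.svd(np.where(mask, X_true, 0.0), full_matrices=False)[0][:, :r]

for _ in range(50):
    B = solve_B(U)
    R = np.where(mask, U @ B - X_true, 0.0)          # masked residual
    eta = 0.9 / (np.linalg.norm(B, 2) ** 2)          # Lipschitz-style step (assumption)
    U = U - eta * R @ B.T                            # GD step on U
    U = np.linalg.qr(U)[0]                           # re-orthonormalize

B = solve_B(U)
err = np.linalg.norm(U @ B - X_true) / np.linalg.norm(X_true)
print(f"relative recovery error: {err:.2e}")
```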
null | null | 2405.06575 | null | null | http://arxiv.org/pdf/2405.06575v1 | 2024-05-10T16:22:33Z | 2024-05-10T16:22:33Z | No-Regret is not enough! Bandits with General Constraints through
Adaptive Regret Minimization | In the bandits with knapsacks framework (BwK) the learner has $m$ resource-consumption (packing) constraints. We focus on the generalization of BwK in which the learner has a set of general long-term constraints. The goal of the learner is to maximize their cumulative reward, while at the same time achieving small cumulative constraint violations. In this scenario, there exist simple instances where conventional methods for BwK fail to yield sublinear violations of constraints. We show that it is possible to circumvent this issue by requiring the primal and dual algorithms to be weakly adaptive. Indeed, even in the absence of any information on the Slater parameter $\rho$ characterizing the problem, the interplay between weakly adaptive primal and dual regret minimizers yields a "self-bounding" property of dual variables. In particular, their norm remains suitably upper bounded across the entire time horizon even without explicit projection steps. By exploiting this property, we provide best-of-both-worlds guarantees for stochastic and adversarial inputs. In the first case, we show that the algorithm guarantees sublinear regret. In the latter case, we establish a tight competitive ratio of $\rho/(1+\rho)$. In both settings, constraint violations are guaranteed to be sublinear in time. Finally, these results allow us to obtain new results for the problem of contextual bandits with linear constraints, providing the first no-$\alpha$-regret guarantees for adversarial contexts. | [
"['Martino Bernasconi' 'Matteo Castiglioni' 'Andrea Celli']"
]
|
null | null | 2405.06582 | null | null | http://arxiv.org/pdf/2405.06582v3 | 2024-06-04T07:34:01Z | 2024-05-10T16:36:59Z | The Role of Learning Algorithms in Collective Action | Collective action in machine learning is the study of the control that a coordinated group can have over machine learning algorithms. While previous research has concentrated on assessing the impact of collectives against Bayes (sub-)optimal classifiers, this perspective is limited in that it does not account for the choice of learning algorithm. Since classifiers seldom behave like Bayes classifiers and are influenced by the choice of learning algorithms along with their inherent biases, in this work we initiate the study of how the choice of the learning algorithm plays a role in the success of a collective in practical settings. Specifically, we focus on distributionally robust optimization (DRO), popular for improving a worst group error, and on the ubiquitous stochastic gradient descent (SGD), due to its inductive bias for "simpler" functions. Our empirical results, supported by a theoretical foundation, show that the effective size and success of the collective are highly dependent on properties of the learning algorithm. This highlights the necessity of taking the learning algorithm into account when studying the impact of collective action in machine learning. | [
"['Omri Ben-Dov' 'Jake Fawkes' 'Samira Samadi' 'Amartya Sanyal']"
]
|
null | null | 2405.06590 | null | null | http://arxiv.org/pdf/2405.06590v1 | 2024-05-10T16:46:32Z | 2024-05-10T16:46:32Z | Decomposing weather forecasting into advection and convection with
neural networks | Operational weather forecasting models have advanced for decades on both the explicit numerical solvers and the empirical physical parameterization schemes. However, the high computational costs and uncertainties involved in these existing schemes motivate potential improvements through alternative machine learning methods. Previous works use a unified model to learn the dynamics and physics of the atmospheric model. Contrarily, we propose a simple yet effective machine learning model that learns the horizontal movement in the dynamical core and the vertical movement in the physical parameterization separately. By replacing the advection with a graph attention network and the convection with a multi-layer perceptron, our model provides a new and efficient perspective to simulate the transition of variables in atmospheric models. We also assess the model's performance over a 5-day iterative forecast. Under the same input variables and training methods, our model outperforms existing data-driven methods with a significantly reduced number of parameters at a resolution of 5.625°. Overall, this work aims to contribute to the ongoing efforts that leverage machine learning techniques for improving both the accuracy and efficiency of global weather forecasting. | [
"['Mengxuan Chen' 'Ziqi Yuan' 'Jinxiao Zhang' 'Runmin Dong' 'Haohuan Fu']"
]
|
null | null | 2405.06604 | null | null | http://arxiv.org/pdf/2405.06604v1 | 2024-05-10T17:11:31Z | 2024-05-10T17:11:31Z | Explaining Text Similarity in Transformer Models | As Transformers have become state-of-the-art models for natural language processing (NLP) tasks, the need to understand and explain their predictions is increasingly apparent. Especially in unsupervised applications, such as information retrieval tasks, similarity models built on top of foundation model representations have been widely applied. However, their inner prediction mechanisms have mostly remained opaque. Recent advances in explainable AI have made it possible to mitigate these limitations by leveraging improved explanations for Transformers through layer-wise relevance propagation (LRP). Using BiLRP, an extension developed for computing second-order explanations in bilinear similarity models, we investigate which feature interactions drive similarity in NLP models. We validate the resulting explanations and demonstrate their utility in three corpus-level use cases, analyzing grammatical interactions, multilingual semantics, and biomedical text retrieval. Our findings contribute to a deeper understanding of different semantic similarity tasks and models, highlighting how novel explainable AI methods enable in-depth analyses and corpus-level insights. | [
"['Alexandros Vasileiou' 'Oliver Eberle']"
]
|
null | null | 2405.06605 | null | null | http://arxiv.org/pdf/2405.06605v2 | 2024-06-03T20:38:35Z | 2024-05-10T17:12:48Z | Calo-VQ: Vector-Quantized Two-Stage Generative Model in Calorimeter
Simulation | We introduce a novel machine learning method developed for the fast simulation of calorimeter detector response, adapting the vector-quantized variational autoencoder (VQ-VAE). Our model adopts a two-stage generation strategy: initially compressing geometry-aware calorimeter data into a discrete latent space, followed by the application of a sequence model to learn and generate the latent tokens. Extensive experimentation on the Calo-challenge dataset underscores the efficiency of our approach, showcasing a remarkable factor-of-2000 improvement in generation speed compared with conventional methods. Remarkably, our model achieves the generation of calorimeter showers within milliseconds. Furthermore, comprehensive quantitative evaluations across various metrics are performed to validate the physics performance of the generated showers. | [
"['Qibin Liu' 'Chase Shimmin' 'Xiulong Liu' 'Eli Shlizerman' 'Shu Li'\n 'Shih-Chieh Hsu']"
]
|
null | null | 2405.06626 | null | null | http://arxiv.org/pdf/2405.06626v1 | 2024-05-10T17:40:02Z | 2024-05-10T17:40:02Z | Characterizing the Accuracy - Efficiency Trade-off of Low-rank
Decomposition in Language Models | Large language models (LLMs) have emerged and presented their general problem-solving capabilities with one model. However, the model size has increased dramatically with billions of parameters to enable such broad problem-solving capabilities. In addition, due to the dominance of matrix-matrix and matrix-vector multiplications in LLMs, the compute-to-model size ratio is significantly lower than that of CNNs. This shift pushes LLMs from a computation-bound regime to a memory-bound regime. Therefore, optimizing the memory footprint and traffic is an important optimization direction for LLMs today. Model compression methods such as quantization and parameter pruning have been actively explored for achieving the memory footprint and traffic optimization. However, the accuracy-efficiency trade-off of rank pruning for LLMs is not well-understood yet. Therefore, we characterize the accuracy-efficiency trade-off of a low-rank decomposition method, specifically Tucker decomposition, on recent language models, including an open-source LLM, Llama 2. We formalize the low-rank decomposition design space and show that the decomposition design space is enormous (e.g., O($2^{37}$) for Llama2-7B). To navigate such a vast design space, we formulate the design space and perform thorough case studies of accuracy-efficiency trade-offs using six widely used LLM benchmarks on BERT and Llama 2 models. Our results show that we can achieve a 9% model size reduction with minimal accuracy drops, which range from 4%p to 10%p, depending on the difficulty of the benchmark, without any retraining to recover accuracy after decomposition. The results show that low-rank decomposition can be a promising direction for LLM-based applications that require real-time service in scale (e.g., AI agent assist and real-time coding assistant), where the latency is as important as the model accuracy. | [
"['Chakshu Moar' 'Michael Pellauer' 'Hyoukjun Kwon']"
]
|
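The accuracy-efficiency trade-off discussed above is easy to probe on a single weight matrix. The sketch below uses truncated SVD, the rank-2-tensor special case of the Tucker decomposition studied in the paper, to replace one linear layer with two smaller ones and report parameter counts versus output error; the layer size and ranks are arbitrary choices, not the paper's configurations.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
layer = nn.Linear(1024, 1024, bias=True)   # stand-in for one projection in an LLM

def low_rank_factorize(linear, rank):
    """Replace W (out x in) with B @ A via truncated SVD."""
    W = linear.weight.data
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    A = torch.diag(S[:rank].sqrt()) @ Vh[:rank]       # rank x in
    B = U[:, :rank] @ torch.diag(S[:rank].sqrt())     # out x rank
    fc1 = nn.Linear(W.shape[1], rank, bias=False); fc1.weight.data = A
    fc2 = nn.Linear(rank, W.shape[0], bias=True); fc2.weight.data = B
    fc2.bias.data = linear.bias.data.clone()
    return nn.Sequential(fc1, fc2)

x = torch.randn(8, 1024)
full_params = sum(p.numel() for p in layer.parameters())
for rank in (64, 128, 256):
    approx = low_rank_factorize(layer, rank)
    rel_err = (approx(x) - layer(x)).norm() / layer(x).norm()
    params = sum(p.numel() for p in approx.parameters())
    print(f"rank={rank}: {params} vs {full_params} params, "
          f"relative output error {rel_err:.3f}")
```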
null | null | 2405.06627 | null | null | http://arxiv.org/pdf/2405.06627v3 | 2024-06-05T15:49:11Z | 2024-05-10T17:40:24Z | Conformal Validity Guarantees Exist for Any Data Distribution (and How
to Find Them) | As artificial intelligence (AI) / machine learning (ML) gain widespread adoption, practitioners are increasingly seeking means to quantify and control the risk these systems incur. This challenge is especially salient when such systems have autonomy to collect their own data, such as in black-box optimization and active learning, where their actions induce sequential feedback-loop shifts in the data distribution. Conformal prediction is a promising approach to uncertainty and risk quantification, but prior variants' validity guarantees have assumed some form of "quasi-exchangeability" on the data distribution, thereby excluding many types of sequential shifts. In this paper we prove that conformal prediction can theoretically be extended to \textit{any} joint data distribution, not just exchangeable or quasi-exchangeable ones. Although the most general case is exceedingly impractical to compute, for concrete practical applications we outline a procedure for deriving specific conformal algorithms for any data distribution, and we use this procedure to derive tractable algorithms for a series of AI/ML-agent-induced covariate shifts. We evaluate the proposed algorithms empirically on synthetic black-box optimization and active learning tasks. | [
"['Drew Prinster' 'Samuel Stanton' 'Anqi Liu' 'Suchi Saria']"
]
|
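For reference, here is the standard split conformal procedure in the exchangeable base case, the setting whose assumptions the paper above relaxes. The model, noise level, and miscoverage level are stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in regression model and i.i.d. data (the exchangeable base case).
def model(x):
    return 2.0 * x

x_cal, x_test = rng.uniform(0, 1, 500), rng.uniform(0, 1, 2000)
y_cal = 2.0 * x_cal + rng.normal(0, 0.3, 500)
y_test = 2.0 * x_test + rng.normal(0, 0.3, 2000)

alpha = 0.1
scores = np.abs(y_cal - model(x_cal))                # conformity scores
n = len(scores)
qhat = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

# Prediction set for each test point is [model(x) - qhat, model(x) + qhat].
covered = np.abs(y_test - model(x_test)) <= qhat
print(f"empirical coverage: {covered.mean():.3f} (target >= {1 - alpha})")
```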
null | null | 2405.06636 | null | null | http://arxiv.org/pdf/2405.06636v2 | 2024-05-22T11:01:22Z | 2024-05-10T17:53:05Z | Federated Document Visual Question Answering: A Pilot Study | An important handicap of document analysis research is that documents tend to be copyrighted or contain private information, which prohibits their open publication and the creation of centralised, large-scale document datasets. Instead, documents are scattered in private data silos, making extensive training over heterogeneous data a tedious task. In this work, we explore the use of a federated learning (FL) scheme as a way to train a shared model on decentralised private document data. We focus on the problem of Document VQA, a task particularly suited to this approach, as the type of reasoning capabilities required from the model can be quite different in diverse domains. Enabling training over heterogeneous document datasets can thus substantially enrich DocVQA models. We assemble existing DocVQA datasets from diverse domains to reflect the data heterogeneity in real-world applications. We explore the self-pretraining technique in this multi-modal setting, where the same data is used for both pretraining and finetuning, making it relevant for privacy preservation. We further propose combining self-pretraining with a Federated DocVQA training method using centralized adaptive optimization that outperforms the FedAvg baseline. With extensive experiments, we also present a multi-faceted analysis on training DocVQA models with FL, which provides insights for future research on this task. We show that our pretraining strategies can effectively learn and scale up under federated training with diverse DocVQA datasets and tuning hyperparameters is essential for practical document tasks under federation. | [
"['Khanh Nguyen' 'Dimosthenis Karatzas']"
]
|
null | null | 2405.06639 | null | null | http://arxiv.org/pdf/2405.06639v1 | 2024-05-10T17:59:04Z | 2024-05-10T17:59:04Z | Value Augmented Sampling for Language Model Alignment and
Personalization | Aligning Large Language Models (LLMs) to cater to different human preferences, learning new skills, and unlearning harmful behavior is an important problem. Search-based methods, such as Best-of-N or Monte-Carlo Tree Search, are performant, but impractical for LLM adaptation due to their high inference cost. On the other hand, using Reinforcement Learning (RL) for adaptation is computationally efficient, but performs worse due to the optimization challenges in co-training the value function and the policy. We present a new framework for reward optimization, Value Augmented Sampling (VAS), that can maximize different reward functions using data sampled from only the initial, frozen LLM. VAS solves for the optimal reward-maximizing policy without co-training the policy and the value function, making the optimization stable, outperforming established baselines, such as PPO and DPO, on standard benchmarks, and achieving comparable results to Best-of-128 with lower inference cost. Unlike existing RL methods that require changing the weights of the LLM, VAS does not require access to the weights of the pre-trained LLM. Thus, it can even adapt LLMs (e.g., ChatGPT), which are available only as APIs. In addition, our algorithm unlocks the new capability of composing several rewards and controlling the extent of each one during deployment time, paving the road ahead for the future of aligned, personalized LLMs. | [
"['Seungwook Han' 'Idan Shenfeld' 'Akash Srivastava' 'Yoon Kim'\n 'Pulkit Agrawal']"
]
|
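The sampling-time adjustment suggested by the abstract above can be sketched in a few lines: shift the frozen model's next-token logits by a value estimate scaled by a KL-regularization strength, i.e. sample from a distribution proportional to pi(a|s) * exp(V(s,a)/beta). The vocabulary size, value estimates, and beta below are placeholders, and this is a schematic reading of the method, not its exact implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = 1000

# Stand-in quantities: next-token logits from a *frozen* LLM and a separately
# trained value model's estimate of future reward for each candidate token.
base_logits = rng.normal(size=vocab)
values = rng.normal(size=vocab)          # hypothetical V(s, a) estimates

beta = 2.0                               # KL-regularization strength (assumption)
adjusted = base_logits + values / beta   # reward-augmented logits
probs = np.exp(adjusted - adjusted.max())
probs /= probs.sum()

token = rng.choice(vocab, p=probs)
print(token)
```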