categories | doi | id | year | venue | link | updated | published | title | abstract | authors
string | string | string | float64 | string | string | string | string | string | string | list
---|---|---|---|---|---|---|---|---|---|---|
null | null | 2405.09541 | null | null | http://arxiv.org/pdf/2405.09541v2 | 2024-06-27T07:46:23Z | 2024-05-15T17:55:05Z | Spectral complexity of deep neural networks | It is well-known that randomly initialized, push-forward, fully-connected neural networks weakly converge to isotropic Gaussian processes, in the limit where the width of all layers goes to infinity. In this paper, we propose to use the angular power spectrum of the limiting field to characterize the complexity of the network architecture. In particular, we define sequences of random variables associated with the angular power spectrum, and provide a full characterization of the network complexity in terms of the asymptotic distribution of these sequences as the depth diverges. On this basis, we classify neural networks as low-disorder, sparse, or high-disorder; we show how this classification highlights a number of distinct features for standard activation functions, and in particular, sparsity properties of ReLU networks. Our theoretical results are also validated by numerical simulations. | [
'Simmaco Di Lillo', 'Domenico Marinucci', 'Michele Salvi', 'Stefano Vigogna'
]
|
null | null | 2405.09542 | null | null | http://arxiv.org/pdf/2405.09542v1 | 2024-04-25T18:21:43Z | 2024-04-25T18:21:43Z | Hybrid Magnonic Reservoir Computing | Magnonic systems have been a major area of research interest due to their potential benefits in speed and lower power consumption compared to traditional computing. One area where they may offer an advantage is as physical reservoir computers in machine learning models. In this work, we build on an established design that uses an auto-oscillation ring as a reservoir computer by introducing a simple neural network midstream, and we introduce an additional design that uses a spin-wave guide with a scattering regime to process different types of inputs. We simulate these designs in the new micromagnetic simulation software Magnum.np and show that they perform comparably to or better than traditional dense neural networks on various real-world data sets. | [
'Cliff B. Abbott', 'Dmytro A. Bozhko'
]
|
null | null | 2405.09543 | null | null | http://arxiv.org/pdf/2405.09543v1 | 2024-04-26T08:16:54Z | 2024-04-26T08:16:54Z | Algorithmic Fairness: A Tolerance Perspective | Recent advancements in machine learning and deep learning have brought algorithmic fairness into sharp focus, illuminating concerns over discriminatory decision making that negatively impacts certain individuals or groups. These concerns have manifested in legal, ethical, and societal challenges, including the erosion of trust in intelligent systems. In response, this survey delves into the existing literature on algorithmic fairness, specifically highlighting its multifaceted social consequences. We introduce a novel taxonomy based on 'tolerance', a term we define as the degree to which variations in fairness outcomes are acceptable, providing a structured approach to understanding the subtleties of fairness within algorithmic decisions. Our systematic review covers diverse industries, revealing critical insights into the balance between algorithmic decision making and social equity. By synthesizing these insights, we outline a series of emerging challenges and propose strategic directions for future research and policy making, with the goal of advancing the field towards more equitable algorithmic systems. | [
'Renqiang Luo', 'Tao Tang', 'Feng Xia', 'Jiaying Liu', 'Chengpei Xu', 'Leo Yu Zhang', 'Wei Xiang', 'Chengqi Zhang'
]
|
null | null | 2405.09545 | null | null | http://arxiv.org/pdf/2405.09545v1 | 2024-04-27T05:47:38Z | 2024-04-27T05:47:38Z | Intrinsic Voltage Offsets in Memcapacitive Bio-Membranes Enable
High-Performance Physical Reservoir Computing | Reservoir computing is a brain-inspired machine learning framework for processing temporal data by mapping inputs into high-dimensional spaces. Physical reservoir computers (PRCs) leverage native fading memory and nonlinearity in physical substrates, including atomic switches, photonics, volatile memristors, and, recently, memcapacitors, to achieve efficient high-dimensional mapping. Traditional PRCs often consist of homogeneous device arrays, which rely on input encoding methods and large stochastic device-to-device variations for increased nonlinearity and high-dimensional mapping. These approaches incur high pre-processing costs and restrict real-time deployment. Here, we introduce a novel heterogeneous memcapacitor-based PRC that exploits internal voltage offsets to enable both monotonic and non-monotonic input-state correlations crucial for efficient high-dimensional transformations. We demonstrate our approach's efficacy by predicting a second-order nonlinear dynamical system with an extremely low prediction error (0.00018). Additionally, we predict a chaotic Hénon map, achieving a low normalized root mean square error (0.080). Unlike previous PRCs, such errors are achieved without input encoding methods, underscoring the power of distinct input-state correlations. Most importantly, we generalize our approach to other neuromorphic devices that lack inherent voltage offsets using externally applied offsets to realize various input-state correlations. Our approach and unprecedented performance are a major milestone towards high-performance full in-materia PRCs. | [
'Ahmed S. Mohamed', 'Anurag Dhungel', 'Md Sakib Hasan', 'Joseph S. Najem'
]
|
null | null | 2405.09557 | null | null | http://arxiv.org/pdf/2405.09557v2 | 2024-05-29T13:25:16Z | 2024-05-02T16:04:30Z | Machine Learning in Short-Reach Optical Systems: A Comprehensive Survey | In recent years, extensive research has explored the use of machine learning algorithms in various direct-detected and self-coherent short-reach communication applications. These applications encompass a wide range of tasks, including bandwidth request prediction, signal quality monitoring, fault detection, traffic prediction, and digital signal processing (DSP)-based equalization. As a versatile approach, machine learning can address stochastic phenomena in optical systems and networks where deterministic methods may fall short. However, when it comes to DSP equalization algorithms, the performance improvements are often marginal and the complexity prohibitively high, especially in cost-sensitive short-reach scenarios such as passive optical networks (PONs). Machine learning methods nevertheless excel at capturing temporal dependencies, handling irregular or nonlinear patterns effectively, and accommodating variable time intervals. In this extensive survey, we outline the application of machine learning techniques in short-reach communications, with particular emphasis on high-bandwidth PONs. Notably, we introduce a novel taxonomy for time-series methods employed in machine learning signal processing, providing a structured classification framework. Our taxonomy categorizes current time-series methods into four distinct groups: traditional methods, Fourier convolution-based methods, transformer-based models, and time-series convolutional networks. Finally, we highlight prospective research directions within this rapidly evolving field and outline specific solutions to mitigate the complexity of hardware implementations. By addressing these complexity concerns, we aim to pave the way for more practical and efficient deployment of machine learning approaches in short-reach optical communication systems. | [
'Chen Shao', 'Elias Giacoumidis', 'Syed Moktacim Billah', 'Shi Li', 'Jialei Li', 'Prashasti Sahu', 'Andre Richter', 'Tobias Kaefer', 'Michael Faerber'
]
|
null | null | 2405.09558 | null | null | http://arxiv.org/abs/2405.09558v1 | 2024-05-02T16:39:37Z | 2024-05-02T16:39:37Z | An EM Body Model for Device-Free Localization with Multiple Antenna
Receivers: A First Study | Device-Free Localization (DFL) employs passive radio techniques capable of detecting and locating people without requiring them to wear any electronic device. By exploiting the Integrated Sensing and Communication paradigm, DFL networks employ Radio Frequency (RF) nodes to measure the excess attenuation introduced by the subjects (i.e., human bodies) moving inside the monitored area, and to estimate their positions and movements. Physical, statistical, and ElectroMagnetic (EM) models have been proposed in the literature to estimate body positions from the RF signals collected by the nodes. These body models usually employ single-antenna processing for localization purposes. However, the availability of low-cost multi-antenna devices, such as those used for WLAN (Wireless Local Area Network) applications, and the timely development of array-based body models allow us to employ array-based processing techniques in DFL networks. By exploiting a suitable array-capable EM body model, this paper proposes an array-based framework to improve people sensing and localization. In particular, simulations are presented and discussed to compare the model results in both single- and multi-antenna scenarios. The proposed framework paves the way for a wider use of multi-antenna devices (e.g., those employed in current IEEE 802.11ac/ax and forthcoming IEEE 802.11be networks) and novel beamforming algorithms for DFL scenarios. | [
'Vittorio Rampa', 'Federica Fieramosca', 'Stefano Savazzi', "Michele D'Amico"
]
|
null | null | 2405.09559 | null | null | http://arxiv.org/pdf/2405.09559v1 | 2024-05-02T16:56:09Z | 2024-05-02T16:56:09Z | KID-PPG: Knowledge Informed Deep Learning for Extracting Heart Rate from
a Smartwatch | Accurate extraction of heart rate from photoplethysmography (PPG) signals remains challenging due to motion artifacts and signal degradation. Although deep learning methods trained as a data-driven inference problem offer promising solutions, they often underutilize existing knowledge from the medical and signal processing community. In this paper, we address three shortcomings of deep learning models: motion artifact removal, degradation assessment, and physiologically plausible analysis of the PPG signal. We propose KID-PPG, a knowledge-informed deep learning model that integrates expert knowledge through adaptive linear filtering, deep probabilistic inference, and data augmentation. We evaluate KID-PPG on the PPGDalia dataset, achieving an average mean absolute error of 2.85 beats per minute, surpassing existing reproducible methods. Our results demonstrate a significant performance improvement in heart rate tracking through the incorporation of prior knowledge into deep learning models. This approach shows promise in enhancing various biomedical applications by incorporating existing expert knowledge in deep learning models. | [
'Christodoulos Kechris', 'Jonathan Dan', 'Jose Miranda', 'David Atienza'
]
|
null | null | 2405.09560 | null | null | http://arxiv.org/pdf/2405.09560v1 | 2024-05-03T02:02:37Z | 2024-05-03T02:02:37Z | Using In-Service Train Vibration for Detecting Railway Maintenance Needs | The need for maintenance of railway track systems has been increasing. Traditional methods currently in use are either inaccurate, labor- and time-intensive, or unable to provide continuous monitoring of the system. As a result, in-service train vibrations have been shown to be a cheaper alternative for monitoring railway track systems. In this paper, a method is proposed to detect different maintenance needs of railway track systems from a single train pass. The publicly available DR-Train dataset was used. Results show that with a simple classifier such as the k-nearest neighbor (k-NN) algorithm, signal energy features of the acceleration data can achieve 76% accuracy on two types of maintenance needs, tamping and surfacing. The results show that the transverse direction detects maintenance needs more accurately, and that a triaxial accelerometer can give further information on maintenance needs. Furthermore, this paper demonstrates the use of multi-label classification to detect multiple types of maintenance needs simultaneously. The results show that multi-label classification performs only slightly worse than simple binary classification (72% accuracy), and that this simple method can easily be deployed in areas with a history of many maintenance issues. | [
'Irene Alisjahbana'
]
|
null | null | 2405.09561 | null | null | http://arxiv.org/pdf/2405.09561v1 | 2024-05-04T22:43:09Z | 2024-05-04T22:43:09Z | GAD: A Real-time Gait Anomaly Detection System with Online Adaptive
Learning | Gait anomaly detection is a task that involves detecting deviations from a person's normal gait pattern. These deviations can indicate health issues and medical conditions in the healthcare domain, or fraudulent impersonation and unauthorized identity access in the security domain. A number of gait anomaly detection approaches have been introduced, but many of them require offline data preprocessing, offline model learning, setting parameters, and so on, which might restrict their effectiveness and applicability in real-world scenarios. To address these issues, this paper introduces GAD, a real-time gait anomaly detection system. GAD focuses on detecting anomalies within an individual's three-dimensional accelerometer readings based on dimensionality reduction and Long Short-Term Memory (LSTM). Upon being launched, GAD begins collecting a gait segment from the user and training an anomaly detector to learn the user's walking pattern on the fly. If the subsequent model verification is successful, which involves validating the trained detector using the user's subsequent steps, the detector is employed to identify abnormalities in the user's subsequent gait readings at the user's request. The anomaly detector will be retained online to adapt to minor pattern changes and will undergo retraining as long as it cannot provide adequate prediction. We explored two methods for capturing users' gait segments: a personalized method tailored to each individual's step length, and a uniform method utilizing a fixed step length. Experimental results using an open-source gait dataset show that GAD achieves a higher detection accuracy ratio when combined with the personalized method. | [
'Ming-Chang Lee', 'Jia-Chun Lin', 'Sokratis Katsikas'
]
|
null | null | 2405.09562 | null | null | http://arxiv.org/pdf/2405.09562v1 | 2024-05-06T10:21:13Z | 2024-05-06T10:21:13Z | MEET: Mixture of Experts Extra Tree-Based sEMG Hand Gesture
Identification | Artificial intelligence (AI) has made significant advances in recent years and opened up new possibilities for applications in fields such as biomedicine, robotics, education, and industry. Among these, human hand gesture recognition has recently emerged as a research interest for robotic hand control using electromyography (EMG). Surface electromyography (sEMG) is the primary technique used in EMG; it is popular due to its non-invasive nature and captures gesture movements using signal acquisition devices placed on the surface of the forearm. These signals are pre-processed to extract significant handcrafted features through time- and frequency-domain analysis, which then act as input to machine learning (ML) models to identify hand gestures. However, handling multiple classes and biases is a major limitation that can affect the performance of an ML model. To address this issue, a new mixture of experts extra tree (MEET) model is proposed to identify hand gesture movements more accurately and effectively. The model combines individual ML models, referred to as experts, each focusing on a minimal two-class problem, and a fully trained model known as the gate is employed to weigh the outputs of the individual expert models. This amalgamation of the expert models with the gate model constitutes the MEET model. In this study, four subjects performing six hand gesture movements were considered, and identification was evaluated across eleven models, including the MEET classifier. Results show that the MEET classifier performed best among the algorithms and identified hand gesture movements accurately. | [
'Naveen Gehlot', 'Ashutosh Jena', 'Rajesh Kumar', 'Mahipal Bukya'
]
|
null | null | 2405.09563 | null | null | http://arxiv.org/pdf/2405.09563v1 | 2024-05-06T14:47:48Z | 2024-05-06T14:47:48Z | Stressor Type Matters! -- Exploring Factors Influencing Cross-Dataset
Generalizability of Physiological Stress Detection | Automatic stress detection using heart rate variability (HRV) features has gained significant traction as it utilizes unobtrusive wearable sensors measuring signals like electrocardiogram (ECG) or blood volume pulse (BVP). However, detecting stress through such physiological signals presents a considerable challenge owing to the variations in recorded signals influenced by factors, such as perceived stress intensity and measurement devices. Consequently, stress detection models developed on one dataset may perform poorly on unseen data collected under different conditions. To address this challenge, this study explores the generalizability of machine learning models trained on HRV features for binary stress detection. Our goal extends beyond evaluating generalization performance; we aim to identify the characteristics of datasets that have the most significant influence on generalizability. We leverage four publicly available stress datasets (WESAD, SWELL-KW, ForDigitStress, VerBIO) that vary in at least one of the characteristics such as stress elicitation techniques, stress intensity, and sensor devices. Employing a cross-dataset evaluation approach, we explore which of these characteristics strongly influence model generalizability. Our findings reveal a crucial factor affecting model generalizability: stressor type. Models achieved good performance across datasets when the type of stressor (e.g., social stress in our case) remains consistent. Factors like stress intensity or brand of the measurement device had minimal impact on cross-dataset performance. Based on our findings, we recommend matching the stressor type when deploying HRV-based stress models in new environments. To the best of our knowledge, this is the first study to systematically investigate factors influencing the cross-dataset applicability of HRV-based stress models. | [
'Pooja Prajod', 'Bhargavi Mahesh', 'Elisabeth André'
]
|
null | null | 2405.09564 | null | null | http://arxiv.org/pdf/2405.09564v1 | 2024-05-07T13:54:12Z | 2024-05-07T13:54:12Z | Detecting 5G Narrowband Jammers with CNN, k-nearest Neighbors, and
Support Vector Machines | 5G cellular networks are particularly vulnerable against narrowband jammers that target specific control sub-channels in the radio signal. One mitigation approach is to detect such jamming attacks with an online observation system, based on machine learning. We propose to detect jamming at the physical layer with a pre-trained machine learning model that performs binary classification. Based on data from an experimental 5G network, we study the performance of different classification models. A convolutional neural network will be compared to support vector machines and k-nearest neighbors, where the last two methods are combined with principal component analysis. The obtained results show substantial differences in terms of classification accuracy and computation time. | [
'Matteo Varotto', 'Florian Heinrichs', 'Timo Schuerg', 'Stefano Tomasin', 'Stefan Valentin'
]
|
null | null | 2405.09565 | null | null | http://arxiv.org/pdf/2405.09565v1 | 2024-05-07T14:02:34Z | 2024-05-07T14:02:34Z | One-Class Classification as GLRT for Jamming Detection in Private 5G
Networks | 5G mobile networks are vulnerable to jamming attacks that may jeopardize valuable applications such as industry automation. In this paper, we propose to analyze radio signals with a dedicated device to detect jamming attacks. We pursue a learning approach, with the detector being a CNN implementing a GLRT. To this end, the CNN is trained as a two-class classifier using two datasets: one of real legitimate signals and another generated artificially so that the resulting classifier implements the GLRT. The artificial dataset is generated mimicking different types of jamming signals. We evaluate the performance of this detector using experimental data obtained from a private 5G network and several jamming signals, showing the technique's effectiveness in detecting the attacks. | [
'Matteo Varotto', 'Stefan Valentin', 'Francesco Ardizzon', 'Samuele Marzotto', 'Stefano Tomasin'
]
|
null | null | 2405.09566 | null | null | http://arxiv.org/pdf/2405.09566v1 | 2024-05-08T11:25:12Z | 2024-05-08T11:25:12Z | Detection of Sleep Oxygen Desaturations from Electroencephalogram
Signals | In this work, we leverage machine learning techniques to identify potential biomarkers of oxygen desaturation during sleep exclusively from electroencephalogram (EEG) signals in pediatric patients with sleep apnea. A machine learning technique that can identify EEG signals from patients with sleep apnea, and that can also identify latent signatures in EEG segments from these subjects even when the segments do not coincide with oxygen desaturation events, would be a strong step towards a brain-based biomarker for sleep apnea and easier diagnosis of this disease. We leverage a large corpus of data and show that machine learning enables us to classify EEG signals as occurring or not occurring during oxygen desaturations with an average balanced accuracy of 66.8%. We furthermore investigate the ability of machine learning models to identify subjects who experience oxygen desaturations from EEG data that do not occur during oxygen desaturations. We conclude that there is a potential biomarker for oxygen desaturation in EEG data. | [
'Shashank Manjunath', 'Aarti Sathyanarayana'
]
|
null | null | 2405.09567 | null | null | http://arxiv.org/pdf/2405.09567v1 | 2024-05-08T19:59:16Z | 2024-05-08T19:59:16Z | ECG-SMART-NET: A Deep Learning Architecture for Precise ECG Diagnosis of
Occlusion Myocardial Infarction | In this paper, we describe ECG-SMART-NET for identification of occlusion myocardial infarction (OMI). OMI is a severe form of heart attack characterized by complete blockage of one or more coronary arteries requiring immediate referral for cardiac catheterization to restore blood flow to the heart. Two thirds of OMI cases are difficult to visually identify from a 12-lead electrocardiogram (ECG) and can be potentially fatal if not identified in a timely fashion. Previous works on this topic are scarce, and current state-of-the-art evidence suggests that both random forests with engineered features and convolutional neural networks (CNNs) are promising approaches to improve the ECG detection of OMI. While the ResNet architecture has been successfully adapted for use with ECG recordings, it is not ideally suited to capture informative temporal features within each lead and the spatial concordance or discordance across leads. We propose a clinically informed modification of the ResNet-18 architecture. The model first learns temporal features through temporal convolutional layers with 1xk kernels followed by a spatial convolutional layer, after the residual blocks, with 12x1 kernels to learn spatial features. The new ECG-SMART-NET was benchmarked against the original ResNet-18 and other state-of-the-art models on a multisite real-world clinical dataset that consists of 10,893 ECGs from 7,297 unique patients (rate of OMI = 6.5%). ECG-SMART-NET outperformed other models in the classification of OMI with a test AUC score of 0.889 +/- 0.027 and a test average precision score of 0.587 +/- 0.087. | [
'Nathan T. Riek', 'Murat Akcakaya', 'Zeineb Bouzid', 'Tanmay Gokhale', 'Stephanie Helman', 'Karina Kraevsky-Philips', 'Rui Qi Ji', 'Ervin Sejdic', 'Jessica K. Zègre-Hemsey', 'Christian Martin-Gill', 'Clifton W. Callaway', 'Samir Saba', 'Salah Al-Zaiti'
]
|
null | null | 2405.09568 | null | null | http://arxiv.org/abs/2405.09568v1 | 2024-05-08T21:36:49Z | 2024-05-08T21:36:49Z | Dynamic GNNs for Precise Seizure Detection and Classification from EEG
Data | Diagnosing epilepsy requires accurate seizure detection and classification, but traditional manual EEG signal analysis is resource-intensive. Meanwhile, automated algorithms often overlook EEG's geometric and semantic properties critical for interpreting brain activity. This paper introduces NeuroGNN, a dynamic Graph Neural Network (GNN) framework that captures the dynamic interplay between the EEG electrode locations and the semantics of their corresponding brain regions. The specific brain region where an electrode is placed critically shapes the nature of captured EEG signals. Each brain region governs distinct cognitive functions, emotions, and sensory processing, influencing both the semantic and spatial relationships within the EEG data. Understanding and modeling these intricate brain relationships are essential for accurate and meaningful insights into brain activity. This is precisely where the proposed NeuroGNN framework excels by dynamically constructing a graph that encapsulates these evolving spatial, temporal, semantic, and taxonomic correlations to improve precision in seizure detection and classification. Our extensive experiments with real-world data demonstrate that NeuroGNN significantly outperforms existing state-of-the-art models. | [
'Arash Hajisafi', 'Haowen Lin', 'Yao-Yi Chiang', 'Cyrus Shahabi'
]
|
null | null | 2405.09569 | null | null | http://arxiv.org/pdf/2405.09569v1 | 2024-05-09T14:45:02Z | 2024-05-09T14:45:02Z | GaitMotion: A Multitask Dataset for Pathological Gait Forecasting | Gait benchmarks empower numerous promising research fields, such as gait recognition and humanoid locomotion. Despite the growing focus on gait analysis, the research community is hindered by the limitations of the currently available databases, which mostly consist of videos or images with limited labeling. In this paper, we introduce GaitMotion, a multitask dataset leveraging wearable sensors to capture the real-time movement of patients with pathological gait. This dataset offers extensive ground-truth labeling for multiple tasks, including step/stride segmentation and step/stride length prediction, empowering researchers with a more holistic understanding of gait disturbances linked to neurological impairments. The wearable gait-analysis suit captures the gait cycle, pattern, and parameters for both normal and pathological subjects. These data may prove beneficial for healthcare products focused on patient progress monitoring and post-disease recovery, for forensic technologies aimed at person re-identification, and for biomechanics research to aid the development of humanoid robotics. Moreover, the analysis considers the drift in data distribution across individual subjects, which can be attributed to each participant's unique behavioral habits or potential displacement of the sensor. Stride-length variance for normal, Parkinson's, and stroke patients is compared to recognize pathological walking patterns. As a baseline and benchmark, we report stride-length prediction errors of 14.1, 13.3, and 12.2 centimeters for normal, Parkinson's, and stroke gaits, respectively. We also analyze the gait characteristics of normal and pathological gaits in terms of the gait cycle and gait parameters. | [
'Wenwen Zhang', 'Hao Zhang', 'Zenan Jiang', 'Jing Wang', 'Amir Servati', 'Peyman Servati'
]
|
null | null | 2405.09570 | null | null | http://arxiv.org/pdf/2405.09570v1 | 2024-05-10T03:12:17Z | 2024-05-10T03:12:17Z | FunnelNet: An End-to-End Deep Learning Framework to Monitor Digital
Heart Murmur in Real-Time | Objective: Heart murmurs are abnormal sounds caused by turbulent blood flow within the heart. Several diagnostic methods are available to detect heart murmurs and their severity, such as cardiac auscultation, echocardiography, phonocardiogram (PCG), etc. However, these methods have limitations, including extensive training and experience among healthcare providers, cost and accessibility of echocardiography, as well as noise interference and PCG data processing. This study aims to develop a novel end-to-end real-time heart murmur detection approach using traditional and depthwise separable convolutional networks. Methods: Continuous wavelet transform (CWT) was applied to extract meaningful features from the PCG data. The proposed network has three parts: the Squeeze net, the Bottleneck, and the Expansion net. The Squeeze net generates a compressed data representation, whereas the Bottleneck layer reduces computational complexity using a depthwise-separable convolutional network. The Expansion net is responsible for up-sampling the compressed data to a higher dimension, capturing tiny details of the representative data. Results: For evaluation, we used four publicly available datasets and achieved state-of-the-art performance in all datasets. Furthermore, we tested our proposed network on two resource-constrained devices: a Raspberry PI and an Android device, stripping it down into a tiny machine learning model (TinyML), achieving a maximum of 99.70%. Conclusion: The proposed model offers a deep learning framework for real-time accurate heart murmur detection within limited resources. Significance: It will significantly result in more accessible and practical medical services and reduced diagnosis time to assist medical professionals. The code is publicly available at TBA. | [
'Md Jobayer', 'Md. Mehedi Hasan Shawon', 'Md Rakibul Hasan', 'Shreya Ghosh', 'Tom Gedeon', 'Md Zakir Hossain'
]
|
null | null | 2405.09579 | null | null | http://arxiv.org/pdf/2405.09579v1 | 2024-05-14T18:09:43Z | 2024-05-14T18:09:43Z | Scalable Sparse Regression for Model Discovery: The Fast Lane to Insight | There exist endless examples of dynamical systems with vast available data and unsatisfying mathematical descriptions. Sparse regression applied to symbolic libraries has quickly emerged as a powerful tool for learning governing equations directly from data; these learned equations balance quantitative accuracy with qualitative simplicity and human interpretability. Here, I present a general-purpose, model-agnostic sparse regression algorithm that extends a recently proposed exhaustive search leveraging iterative Singular Value Decompositions (SVD). This accelerated scheme, Scalable Pruning for Rapid Identification of Null vecTors (SPRINT), uses bisection with analytic bounds to quickly identify optimal rank-1 modifications to null vectors. It is intended to maintain sensitivity to small coefficients and to remain computationally affordable for large symbolic libraries. A calculation that would take the age of the universe with an exhaustive search can be achieved in a day with SPRINT. | [
'Matthew Golden'
]
|
null | null | 2405.09580 | null | null | http://arxiv.org/pdf/2405.09580v1 | 2024-05-14T19:13:50Z | 2024-05-14T19:13:50Z | Error-margin Analysis for Hidden Neuron Activation Labels | Understanding how high-level concepts are represented within artificial neural networks is a fundamental challenge in the field of artificial intelligence. While the existing literature in explainable AI emphasizes the importance of labeling neurons with concepts to understand their functioning, it mostly focuses on identifying what stimulus activates a neuron; in most cases, this corresponds to the notion of recall in information retrieval. We argue that this is only the first part of a two-part job: it is imperative to also investigate neuron responses to other stimuli, i.e., their precision. We call this the neuron label error margin. | [
"['Abhilekha Dalal' 'Rushrukh Rayan' 'Pascal Hitzler']"
]
|
null | null | 2405.09584 | null | null | http://arxiv.org/pdf/2405.09584v2 | 2024-05-22T22:01:40Z | 2024-05-15T05:33:49Z | Restless Bandit Problem with Rewards Generated by a Linear Gaussian
Dynamical System | Decision-making under uncertainty is a fundamental problem encountered frequently and can be formulated as a stochastic multi-armed bandit problem. In the problem, the learner interacts with an environment by choosing an action at each round, where a round is an instance of an interaction. In response, the environment reveals a reward, which is sampled from a stochastic process, to the learner. The goal of the learner is to maximize cumulative reward. In this work, we assume that the rewards are the inner product of an action vector and a state vector generated by a linear Gaussian dynamical system. To predict the reward for each action, we propose a method that takes a linear combination of previously observed rewards for predicting each action's next reward. We show that, regardless of the sequence of previous actions chosen, the reward sampled for any previously chosen action can be used for predicting another action's future reward, i.e. the reward sampled for action 1 at round $t-1$ can be used for predicting the reward for action $2$ at round $t$. This is accomplished by designing a modified Kalman filter with a matrix representation that can be learned for reward prediction. Numerical evaluations are carried out on a set of linear Gaussian dynamical systems and are compared with 2 other well-known stochastic multi-armed bandit algorithms. | [
"['Jonathan Gornet' 'Bruno Sinopoli']"
]
|
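The reward-prediction idea in the restless-bandit record above can be illustrated with a standard scalar Kalman filter; the paper designs a modified, learned, matrix-form filter, so this sketch with hypothetical noise parameters only shows the basic predict/update cycle for tracking one action's latent reward:

```python
import numpy as np

def kalman_step(x, P, z, A=1.0, Q=0.01, H=1.0, R=0.1):
    """One predict/update cycle of a scalar Kalman filter.
    x, P: state estimate and its variance; z: observed reward.
    A, Q: state transition and process noise; H, R: observation model."""
    x_pred = A * x
    P_pred = A * P * A + Q
    K = P_pred * H / (H * P_pred * H + R)   # Kalman gain
    x_new = x_pred + K * (z - H * x_pred)
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new

# Track a constant latent reward of 1.0 from repeated observations.
x, P = 0.0, 1.0
for _ in range(100):
    x, P = kalman_step(x, P, z=1.0)
```

The estimate converges toward the observed reward while the posterior variance settles at a small steady-state value.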
null | null | 2405.09585 | null | null | http://arxiv.org/pdf/2405.09585v3 | 2024-06-24T09:56:35Z | 2024-05-15T07:31:06Z | An Embarrassingly Simple Approach to Enhance Transformer Performance in
Genomic Selection for Crop Breeding | Genomic selection (GS), as a critical crop breeding strategy, plays a key role in enhancing food production and addressing the global hunger crisis. The predominant approaches in GS currently revolve around employing statistical methods for prediction. However, statistical methods often come with two main limitations: strong statistical priors and linear assumptions. A recent trend is to capture the non-linear relationships between markers by deep learning. However, as crop datasets are commonly long sequences with limited samples, the robustness of deep learning models, especially Transformers, remains a challenge. In this work, to unleash the unexplored potential of attention mechanism for the task of interest, we propose a simple yet effective Transformer-based framework that enables end-to-end training of the whole sequence. Via experiments on rice3k and wheat3k datasets, we show that, with simple tricks such as k-mer tokenization and random masking, Transformer can achieve overall superior performance against seminal methods on GS tasks of interest. | [
"['Renqi Chen' 'Wenwei Han' 'Haohao Zhang' 'Haoyang Su' 'Zhefan Wang'\n 'Xiaolei Liu' 'Hao Jiang' 'Wanli Ouyang' 'Nanqing Dong']"
]
|
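The "simple tricks" named in the genomic-selection record above — k-mer tokenization and random masking — fit in a few lines; the mask token, mask rate, and stride here are illustrative assumptions, not the paper's settings:

```python
import random

def kmer_tokenize(seq, k=3, stride=3):
    """Split a genomic sequence into k-mers (stride=k gives non-overlapping
    tokens; stride=1 gives overlapping ones)."""
    return [seq[i:i + k] for i in range(0, len(seq) - k + 1, stride)]

def random_mask(tokens, rate=0.15, mask_token="[MASK]", seed=0):
    """Replace a random fraction of tokens with a mask token, BERT-style."""
    rng = random.Random(seed)
    return [mask_token if rng.random() < rate else t for t in tokens]

tokens = kmer_tokenize("ATGCGTAC", k=3, stride=3)   # -> ['ATG', 'CGT']
```

Masked tokens become prediction targets during pretraining, which is what lets a Transformer train end-to-end on long sequences with few samples.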
null | null | 2405.09588 | null | null | http://arxiv.org/pdf/2405.09588v1 | 2024-05-15T09:26:24Z | 2024-05-15T09:26:24Z | Training Deep Learning Models with Hybrid Datasets for Robust Automatic
Target Detection on real SAR images | In this work, we propose to tackle several challenges hindering the development of Automatic Target Detection (ATD) algorithms for ground targets in SAR images. To address the lack of representative training data, we propose a Deep Learning approach to train ATD models with synthetic target signatures produced with the MOCEM simulator. We define an incrustation pipeline to incorporate synthetic targets into real backgrounds. Using this hybrid dataset, we train ATD models specifically tailored to bridge the domain gap between synthetic and real data. Our approach notably relies on massive physics-based data augmentation techniques and Adversarial Training of two deep-learning detection architectures. We then test these models on several datasets, including (1) patchworks of real SAR images, (2) images with the incrustation of real targets in real backgrounds, and (3) images with the incrustation of synthetic background objects in real backgrounds. Results show that the produced hybrid datasets are exempt from image overlay bias. Our approach can reach up to 90% of Average Precision on real data while exclusively using synthetic targets for training. | [
"['Benjamin Camus' 'Théo Voillemin' 'Corentin Le Barbu'\n 'Jean-Christophe Louvigné' 'Carole Belloni' 'Emmanuel Vallée']"
]
|
null | null | 2405.09589 | null | null | http://arxiv.org/pdf/2405.09589v2 | 2024-05-20T06:30:06Z | 2024-05-15T10:16:25Z | Unveiling Hallucination in Text, Image, Video, and Audio Foundation
Models: A Comprehensive Survey | The rapid advancement of foundation models (FMs) across language, image, audio, and video domains has shown remarkable capabilities in diverse tasks. However, the proliferation of FMs brings forth a critical challenge: the potential to generate hallucinated outputs, particularly in high-stakes applications. The tendency of foundation models to produce hallucinated content arguably represents the biggest hindrance to their widespread adoption in real-world scenarios, especially in domains where reliability and accuracy are paramount. This survey paper presents a comprehensive overview of recent developments that aim to identify and mitigate the problem of hallucination in FMs, spanning text, image, video, and audio modalities. By synthesizing recent advancements in detecting and mitigating hallucination across various modalities, the paper aims to provide valuable insights for researchers, developers, and practitioners. Essentially, it establishes a clear framework encompassing definition, taxonomy, and detection strategies for addressing hallucination in multimodal foundation models, laying the foundation for future research in this pivotal area. | [
"['Pranab Sahoo' 'Prabhash Meharia' 'Akash Ghosh' 'Sriparna Saha'\n 'Vinija Jain' 'Aman Chadha']"
]
|
null | null | 2405.09591 | null | null | http://arxiv.org/pdf/2405.09591v2 | 2024-05-17T07:03:16Z | 2024-05-15T11:58:08Z | A Comprehensive Survey on Data Augmentation | Data augmentation is a series of techniques that generate high-quality artificial data by manipulating existing data samples. By leveraging data augmentation techniques, AI models can achieve significantly improved applicability in tasks involving scarce or imbalanced datasets, thereby substantially enhancing AI models' generalization capabilities. Existing literature surveys focus only on a single data modality and categorize these methods from modality-specific and operation-centric perspectives; this lacks a consistent summary of data augmentation methods across multiple modalities and limits comprehension of how existing data samples serve the data augmentation process. To bridge this gap, we propose a more enlightening taxonomy that encompasses data augmentation techniques for different common data modalities. Specifically, from a data-centric perspective, this survey proposes a modality-independent taxonomy by investigating how to take advantage of the intrinsic relationship between data samples, including single-wise, pair-wise, and population-wise sample data augmentation methods. Additionally, we categorize data augmentation methods across five data modalities through a unified inductive approach. | [
"['Zaitian Wang' 'Pengfei Wang' 'Kunpeng Liu' 'Pengyang Wang' 'Yanjie Fu'\n 'Chang-Tien Lu' 'Charu C. Aggarwal' 'Jian Pei' 'Yuanchun Zhou']"
]
|
null | null | 2405.09592 | null | null | http://arxiv.org/pdf/2405.09592v1 | 2024-05-15T12:07:43Z | 2024-05-15T12:07:43Z | A Survey of Generative Techniques for Spatial-Temporal Data Mining | This paper focuses on the integration of generative techniques into spatial-temporal data mining, considering the significant growth and diverse nature of spatial-temporal data. With the advancements in RNNs, CNNs, and other non-generative techniques, researchers have explored their application in capturing temporal and spatial dependencies within spatial-temporal data. However, the emergence of generative techniques such as LLMs, SSL, Seq2Seq and diffusion models has opened up new possibilities for enhancing spatial-temporal data mining further. The paper provides a comprehensive analysis of generative technique-based spatial-temporal methods and introduces a standardized framework specifically designed for the spatial-temporal data mining pipeline. By offering a detailed review and a novel taxonomy of spatial-temporal methodology utilizing generative techniques, the paper enables a deeper understanding of the various techniques employed in this field. Furthermore, the paper highlights promising future research directions, urging researchers to delve deeper into spatial-temporal data mining. It emphasizes the need to explore untapped opportunities and push the boundaries of knowledge to unlock new insights and improve the effectiveness and efficiency of spatial-temporal data mining. By integrating generative techniques and providing a standardized framework, the paper contributes to advancing the field and encourages researchers to explore the vast potential of generative techniques in spatial-temporal data mining. | [
"['Qianru Zhang' 'Haixin Wang' 'Cheng Long' 'Liangcai Su' 'Xingwei He'\n 'Jianlong Chang' 'Tailin Wu' 'Hongzhi Yin' 'Siu-Ming Yiu' 'Qi Tian'\n 'Christian S. Jensen']"
]
|
null | null | 2405.09594 | null | null | http://arxiv.org/pdf/2405.09594v1 | 2024-05-15T12:27:38Z | 2024-05-15T12:27:38Z | Learning Generalized Medical Image Representations through Image-Graph
Contrastive Pretraining | Medical image interpretation using deep learning has shown promise but often requires extensive expert-annotated datasets. To reduce this annotation burden, we develop an Image-Graph Contrastive Learning framework that pairs chest X-rays with structured report knowledge graphs automatically extracted from radiology notes. Our approach uniquely encodes the disconnected graph components via a relational graph convolution network and transformer attention. In experiments on the CheXpert dataset, this novel graph encoding strategy enabled the framework to outperform existing methods that use image-text contrastive learning in 1% linear evaluation and few-shot settings, while achieving comparable performance to radiologists. By exploiting unlabeled paired images and text, our framework demonstrates the potential of structured clinical insights to enhance contrastive learning for medical images. This work points toward reducing demands on medical experts for annotations, improving diagnostic precision, and advancing patient care through robust medical image understanding. | [
"['Sameer Khanna' 'Daniel Michael' 'Marinka Zitnik' 'Pranav Rajpurkar']"
]
|
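The image-text contrastive baselines that the image-graph framework above is compared against typically optimize a symmetric InfoNCE objective over paired embeddings, with matching pairs on the diagonal of the similarity matrix. A minimal numpy sketch (temperature and batch values are illustrative, not the paper's):

```python
import numpy as np

def log_softmax(x):
    x = x - x.max(axis=1, keepdims=True)
    return x - np.log(np.exp(x).sum(axis=1, keepdims=True))

def symmetric_info_nce(img_emb, graph_emb, temp=0.07):
    """Contrastive loss where the i-th image and i-th graph are positives."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    gra = graph_emb / np.linalg.norm(graph_emb, axis=1, keepdims=True)
    logits = img @ gra.T / temp
    i2g = -np.diag(log_softmax(logits)).mean()      # image -> graph direction
    g2i = -np.diag(log_softmax(logits.T)).mean()    # graph -> image direction
    return 0.5 * (i2g + g2i)

aligned = symmetric_info_nce(np.eye(4), np.eye(4))
shuffled = symmetric_info_nce(np.eye(4), np.roll(np.eye(4), 1, axis=0))
```

Perfectly aligned pairs drive the loss toward zero; mismatched pairings are heavily penalized.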
null | null | 2405.09596 | null | null | http://arxiv.org/pdf/2405.09596v1 | 2024-05-15T13:43:07Z | 2024-05-15T13:43:07Z | Enhancing Maritime Trajectory Forecasting via H3 Index and Causal
Language Modelling (CLM) | The prediction of ship trajectories is a growing field of study in artificial intelligence. Traditional methods rely on the use of LSTM, GRU networks, and even Transformer architectures for the prediction of spatio-temporal series. This study proposes a viable alternative for predicting these trajectories using only GNSS positions. It considers this spatio-temporal problem as a natural language processing problem. The latitude/longitude coordinates of AIS messages are transformed into cell identifiers using the H3 index. Thanks to the pseudo-octal representation, it becomes easier for language models to learn the spatial hierarchy of the H3 index. The method is compared with a classical Kalman filter, widely used in the maritime domain, and introduces the Fréchet distance as the main evaluation metric. We show that it is possible to predict ship trajectories quite precisely up to 8 hours with 30 minutes of context. We demonstrate that this alternative works well enough to predict trajectories worldwide. | [
"['Nicolas Drapier' 'Aladine Chetouani' 'Aurélien Chateigner']"
]
|
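The Fréchet distance used as the main evaluation metric in the trajectory record above has a standard discrete dynamic-programming form; this is a generic sketch, not the authors' implementation:

```python
import math

def discrete_frechet(p, q):
    """Discrete Fréchet distance between two polylines (lists of points)."""
    n, m = len(p), len(q)
    ca = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            d = math.dist(p[i], q[j])
            if i == 0 and j == 0:
                ca[i][j] = d
            elif i == 0:
                ca[i][j] = max(ca[0][j - 1], d)
            elif j == 0:
                ca[i][j] = max(ca[i - 1][0], d)
            else:
                ca[i][j] = max(min(ca[i - 1][j], ca[i - 1][j - 1],
                                   ca[i][j - 1]), d)
    return ca[-1][-1]
```

Two identical trajectories have distance 0; a trajectory and its copy shifted by one unit have distance exactly 1, since the optimal coupling walks both curves in lockstep.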
null | null | 2405.09597 | null | null | http://arxiv.org/pdf/2405.09597v1 | 2024-05-15T13:50:23Z | 2024-05-15T13:50:23Z | When AI Eats Itself: On the Caveats of Data Pollution in the Era of
Generative AI | Generative artificial intelligence (AI) technologies and large models are producing realistic outputs across various domains, such as images, text, speech, and music. Creating these advanced generative models requires significant resources, particularly large and high-quality datasets. To minimize training expenses, many algorithm developers use data created by the models themselves as a cost-effective training solution. However, not all synthetic data effectively improve model performance, necessitating a strategic balance in the use of real versus synthetic data to optimize outcomes. Currently, the previously well-controlled integration of real and synthetic data is becoming uncontrollable. The widespread and unregulated dissemination of synthetic data online leads to the contamination of datasets traditionally compiled through web scraping, now mixed with unlabeled synthetic data. This trend portends a future where generative AI systems may increasingly rely blindly on consuming self-generated data, raising concerns about model performance and ethical issues. What will happen if generative AI continuously consumes itself without discernment? What measures can we take to mitigate the potential adverse effects? There is a significant gap in the scientific literature regarding the impact of synthetic data use in generative AI, particularly in terms of the fusion of multimodal information. To address this research gap, this review investigates the consequences of integrating synthetic data blindly on training generative AI on both image and text modalities and explores strategies to mitigate these effects. The goal is to offer a comprehensive view of synthetic data's role, advocating for a balanced approach to its use and exploring practices that promote the sustainable development of generative AI technologies in the era of large models. | [
"['Xiaodan Xing' 'Fadong Shi' 'Jiahao Huang' 'Yinzhe Wu' 'Yang Nan'\n 'Sheng Zhang' 'Yingying Fang' 'Mike Roberts' 'Carola-Bibiane Schönlieb'\n 'Javier Del Ser' 'Guang Yang']"
]
|
null | null | 2405.09598 | null | null | http://arxiv.org/abs/2405.09598v1 | 2024-05-15T14:06:28Z | 2024-05-15T14:06:28Z | Properties that allow or prohibit transferability of adversarial attacks
among quantized networks | Deep Neural Networks (DNNs) are known to be vulnerable to adversarial examples. Further, these adversarial examples are found to be transferable from the source network in which they are crafted to a black-box target network. As the trend of using deep learning on embedded devices grows, it becomes relevant to study the transferability properties of adversarial examples among compressed networks. In this paper, we consider quantization as a network compression technique and evaluate the performance of transfer-based attacks when the source and target networks are quantized at different bitwidths. We explore how algorithm specific properties affect transferability by considering various adversarial example generation algorithms. Furthermore, we examine transferability in a more realistic scenario where the source and target networks may differ in bitwidth and other model-related properties like capacity and architecture. We find that although quantization reduces transferability, certain attack types demonstrate an ability to enhance it. Additionally, the average transferability of adversarial examples among quantized versions of a network can be used to estimate the transferability to quantized target networks with varying capacity and architecture. | [
"['Abhishek Shrestha' 'Jürgen Großmann']"
]
|
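A representative member of the "adversarial example generation algorithms" family studied in the transferability record above is the fast gradient sign method (FGSM). A minimal sketch on a hypothetical linear classifier, where the input gradient is available in closed form (the model, weights, and epsilon are illustrative assumptions):

```python
import numpy as np

def fgsm(x, grad_x, eps=0.1):
    """Fast gradient sign method: one step in the sign of the input gradient
    of the loss, bounded in the L-infinity norm by eps."""
    return x + eps * np.sign(grad_x)

# Hypothetical linear model: score = w @ x, label y in {-1, +1}.
# The margin y * (w @ x) shrinks fastest along -y * w, the input gradient
# of a margin-based loss.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.2, -0.1, 0.3])
y = 1.0
margin_before = y * (w @ x)
x_adv = fgsm(x, grad_x=-y * w, eps=0.1)
margin_after = y * (w @ x_adv)
```

For a linear model the margin drops by exactly eps times the L1 norm of w, which makes the attack's effect easy to verify.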
null | null | 2405.09600 | null | null | http://arxiv.org/pdf/2405.09600v1 | 2024-05-15T14:14:34Z | 2024-05-15T14:14:34Z | Aggregate Representation Measure for Predictive Model Reusability | In this paper, we propose a predictive quantifier to estimate the retraining cost of a trained model in distribution shifts. The proposed Aggregated Representation Measure (ARM) quantifies the change in the model's representation from the old to new data distribution. It provides, before actually retraining the model, a single concise index of resources - epochs, energy, and carbon emissions - required for the retraining. This enables reuse of a model with a much lower cost than training a new model from scratch. The experimental results indicate that ARM reasonably predicts retraining costs for varying noise intensities and enables comparisons among multiple model architectures to determine the most cost-effective and sustainable option. | [
"['Vishwesh Sangarya' 'Richard Bradford' 'Jung-Eun Kim']"
]
|
null | null | 2405.09602 | null | null | http://arxiv.org/pdf/2405.09602v1 | 2024-05-15T15:17:52Z | 2024-05-15T15:17:52Z | Improving Label Error Detection and Elimination with Uncertainty
Quantification | Identifying and handling label errors can significantly enhance the accuracy of supervised machine learning models. Recent approaches for identifying label errors demonstrate that a low self-confidence of models with respect to a certain label represents a good indicator of an erroneous label. However, the latest work has built on softmax probabilities to measure self-confidence. In this paper, we argue that -- as softmax probabilities do not reflect a model's predictive uncertainty accurately -- label error detection requires more sophisticated measures of model uncertainty. Therefore, we develop a range of novel, model-agnostic algorithms for Uncertainty Quantification-Based Label Error Detection (UQ-LED), which combine the techniques of confident learning (CL), Monte Carlo Dropout (MCD), model uncertainty measures (e.g., entropy), and ensemble learning to enhance label error detection. We comprehensively evaluate our algorithms on four image classification benchmark datasets in two stages. In the first stage, we demonstrate that our UQ-LED algorithms outperform state-of-the-art confident learning in identifying label errors. In the second stage, we show that removing all identified errors from the training data based on our approach results in higher accuracies than training on all available labeled data. Importantly, besides our contributions to the detection of label errors, we particularly propose a novel approach to generate realistic, class-dependent label errors synthetically. Overall, our study demonstrates that selectively cleaning datasets with UQ-LED algorithms leads to more accurate classifications than using larger, noisier datasets. | [
"['Johannes Jakubik' 'Michael Vössing' 'Manil Maskey' 'Christopher Wölfle'\n 'Gerhard Satzger']"
]
|
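The entropy-based uncertainty measure named in the UQ-LED record above can be sketched as follows: average class probabilities over T stochastic (MC Dropout) forward passes, then score each sample by the entropy of the mean distribution; high-entropy samples are candidates for label errors. The aggregation into the full UQ-LED ensemble is omitted:

```python
import numpy as np

def predictive_entropy(mc_probs):
    """mc_probs: array of shape (T, N, C) holding T MC-Dropout probability
    samples for N inputs over C classes; returns per-input entropy of the
    mean predictive distribution."""
    mean_p = mc_probs.mean(axis=0)
    return -(mean_p * np.log(mean_p + 1e-12)).sum(axis=1)

# Two inputs: one confidently classified, one maximally uncertain.
confident = np.tile([[0.98, 0.01, 0.01]], (10, 1, 1))
uncertain = np.tile([[1 / 3, 1 / 3, 1 / 3]], (10, 1, 1))
mc_probs = np.concatenate([confident, uncertain], axis=1)  # (10, 2, 3)
scores = predictive_entropy(mc_probs)
```

The uniform-prediction input scores the maximum possible entropy, log C, and would be flagged first.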
null | null | 2405.09605 | null | null | http://arxiv.org/pdf/2405.09605v1 | 2024-05-15T17:19:42Z | 2024-05-15T17:19:42Z | Elements of World Knowledge (EWOK): A cognition-inspired framework for
evaluating basic world knowledge in language models | The ability to build and leverage world models is essential for a general-purpose AI agent. Testing such capabilities is hard, in part because the building blocks of world models are ill-defined. We present Elements of World Knowledge (EWOK), a framework for evaluating world modeling in language models by testing their ability to use knowledge of a concept to match a target text with a plausible/implausible context. EWOK targets specific concepts from multiple knowledge domains known to be vital for world modeling in humans. Domains range from social interactions (help/hinder) to spatial relations (left/right). Both contexts and targets are minimal pairs. Objects, agents, and locations in the items can be flexibly filled in, enabling easy generation of multiple controlled datasets. We then introduce EWOK-CORE-1.0, a dataset of 4,374 items covering 11 world knowledge domains. We evaluate 20 open-weights large language models (1.3B--70B parameters) across a battery of evaluation paradigms along with a human norming study comprising 12,480 measurements. The overall performance of all tested models is worse than human performance, with results varying drastically across domains. These data highlight simple cases where even large models fail and present rich avenues for targeted research on LLM world modeling capabilities. | [
"['Anna A. Ivanova' 'Aalok Sathe' 'Benjamin Lipkin' 'Unnathi Kumar'\n 'Setayesh Radkani' 'Thomas H. Clark' 'Carina Kauf' 'Jennifer Hu'\n 'R. T. Pramod' 'Gabriel Grand' 'Vivian Paulun' 'Maria Ryskina'\n 'Ekin Akyürek' 'Ethan Wilcox' 'Nafisa Rashid' 'Leshem Choshen'\n 'Roger Levy' 'Evelina Fedorenko' 'Joshua Tenenbaum' 'Jacob Andreas']"
]
|
null | null | 2405.09637 | null | null | http://arxiv.org/pdf/2405.09637v2 | 2024-06-08T21:02:15Z | 2024-04-29T13:31:00Z | CLASSP: a Biologically-Inspired Approach to Continual Learning through
Adjustment Suppression and Sparsity Promotion | This paper introduces a new biologically-inspired training method named Continual Learning through Adjustment Suppression and Sparsity Promotion (CLASSP). CLASSP is based on two main principles observed in neuroscience, particularly in the context of synaptic transmission and Long-Term Potentiation (LTP). The first principle is a decay rate over the weight adjustment, which is implemented as a generalization of the AdaGrad optimization algorithm. This means that weights that have received many updates should have lower learning rates as they likely encode important information about previously seen data. However, this principle results in a diffuse distribution of updates throughout the model, as it promotes updates for weights that haven't been previously updated, while a sparse update distribution is preferred to leave weights unassigned for future tasks. Therefore, the second principle introduces a threshold on the loss gradient. This promotes sparse learning by updating a weight only if the loss gradient with respect to that weight is above a certain threshold, i.e. only updating weights with a significant impact on the current loss. Both principles reflect phenomena observed in LTP, where a threshold effect and a gradual saturation of potentiation have been observed. CLASSP is implemented in a Python/PyTorch class, making it applicable to any model. When compared with Elastic Weight Consolidation (EWC) using Computer Vision and sentiment analysis datasets, CLASSP demonstrates superior performance in terms of accuracy and memory footprint. | [
"['Oswaldo Ludwig']"
]
|
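The two CLASSP principles described above combine into a simple per-weight update rule; this is a hedged sketch following the textual description (gradient threshold for sparsity, AdaGrad-style accumulator for adjustment suppression), with illustrative hyperparameters rather than the paper's:

```python
import numpy as np

def classp_step(w, grad, accum, lr=0.1, tau=0.5, eps=1e-8):
    """One CLASSP-style update: threshold the gradient (sparsity promotion),
    then shrink surviving updates by an AdaGrad-style per-weight decay
    (adjustment suppression for frequently updated weights)."""
    mask = np.abs(grad) > tau                 # principle 2: gradient threshold
    accum = accum + mask * grad ** 2          # accumulate only applied updates
    w = w - lr * mask * grad / np.sqrt(accum + eps)
    return w, accum

w = np.array([1.0, 1.0])
accum = np.zeros(2)
w_new, accum_new = classp_step(w, grad=np.array([2.0, 0.1]), accum=accum)
```

Only the weight whose gradient exceeds the threshold moves; the other keeps both its value and a zero accumulator, leaving it free for future tasks.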
null | null | 2405.09638 | null | null | http://arxiv.org/abs/2405.09638v1 | 2024-04-29T14:54:37Z | 2024-04-29T14:54:37Z | HMAR: Hierarchical Masked Attention for Multi-Behaviour Recommendation | In the context of recommendation systems, addressing multi-behavioral user interactions has become vital for understanding the evolving user behavior. Recent models utilize techniques like graph neural networks and attention mechanisms for modeling diverse behaviors, but capturing sequential patterns in historical interactions remains challenging. To tackle this, we introduce Hierarchical Masked Attention for multi-behavior recommendation (HMAR). Specifically, our approach applies masked self-attention to items of the same behavior, followed by self-attention across all behaviors. Additionally, we propose historical behavior indicators to encode the historical frequency of each items behavior in the input sequence. Furthermore, the HMAR model operates in a multi-task setting, allowing it to learn item behaviors and their associated ranking scores concurrently. Extensive experimental results on four real-world datasets demonstrate that our proposed model outperforms state-of-the-art methods. Our code and datasets are available here (https://github.com/Shereen-Elsayed/HMAR). | [
"['Shereen Elsayed' 'Ahmed Rashed' 'Lars Schmidt-Thieme']"
]
|
null | null | 2405.09660 | null | null | http://arxiv.org/pdf/2405.09660v2 | 2024-06-10T07:32:12Z | 2024-05-15T19:03:08Z | Fast Two-Time-Scale Stochastic Gradient Method with Applications in
Reinforcement Learning | Two-time-scale optimization is a framework introduced in Zeng et al. (2024) that abstracts a range of policy evaluation and policy optimization problems in reinforcement learning (RL). Akin to bi-level optimization under a particular type of stochastic oracle, the two-time-scale optimization framework has an upper level objective whose gradient evaluation depends on the solution of a lower level problem, which is to find the root of a strongly monotone operator. In this work, we propose a new method for solving two-time-scale optimization that achieves significantly faster convergence than the prior art. The key idea of our approach is to leverage an averaging step to improve the estimates of the operators in both lower and upper levels before using them to update the decision variables. These additional averaging steps eliminate the direct coupling between the main variables, enabling the accelerated performance of our algorithm. We characterize the finite-time convergence rates of the proposed algorithm under various conditions of the underlying objective function, including strong convexity, convexity, the Polyak-Łojasiewicz condition, and general non-convexity. These rates significantly improve over the best-known complexity of the standard two-time-scale stochastic approximation algorithm. When applied to RL, we show how the proposed algorithm specializes to novel online sample-based methods that surpass or match the performance of the existing state of the art. Finally, we support our theoretical results with numerical simulations in RL. | [
"['Sihan Zeng' 'Thinh T. Doan']"
]
|
null | null | 2405.09673 | null | null | http://arxiv.org/pdf/2405.09673v1 | 2024-05-15T19:27:45Z | 2024-05-15T19:27:45Z | LoRA Learns Less and Forgets Less | Low-Rank Adaptation (LoRA) is a widely-used parameter-efficient finetuning method for large language models. LoRA saves memory by training only low rank perturbations to selected weight matrices. In this work, we compare the performance of LoRA and full finetuning on two target domains, programming and mathematics. We consider both the instruction finetuning ($\approx$100K prompt-response pairs) and continued pretraining ($\approx$10B unstructured tokens) data regimes. Our results show that, in most settings, LoRA substantially underperforms full finetuning. Nevertheless, LoRA exhibits a desirable form of regularization: it better maintains the base model's performance on tasks outside the target domain. We show that LoRA provides stronger regularization compared to common techniques such as weight decay and dropout; it also helps maintain more diverse generations. We show that full finetuning learns perturbations with a rank that is 10-100X greater than typical LoRA configurations, possibly explaining some of the reported gaps. We conclude by proposing best practices for finetuning with LoRA. | [
"['Dan Biderman' 'Jose Gonzalez Ortiz' 'Jacob Portes' 'Mansheej Paul'\n 'Philip Greengard' 'Connor Jennings' 'Daniel King' 'Sam Havens'\n 'Vitaliy Chiley' 'Jonathan Frankle' 'Cody Blakeney' 'John P. Cunningham']"
]
|
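The low-rank perturbation scheme in the LoRA record above is concrete: freeze W and train only the factors B and A of rank r, with B zero-initialized so finetuning starts exactly at the base model. A numpy sketch of the forward pass (rank, alpha, and shapes are illustrative):

```python
import numpy as np

class LoRALinear:
    """Frozen weight W plus a trainable low-rank perturbation (alpha/r) * B @ A."""
    def __init__(self, W, r=4, alpha=8, seed=0):
        rng = np.random.default_rng(seed)
        d_out, d_in = W.shape
        self.W = W                                   # frozen pretrained weight
        self.A = rng.normal(0.0, 0.01, (r, d_in))    # trainable, small init
        self.B = np.zeros((d_out, r))                # trainable, zero init
        self.scale = alpha / r

    def forward(self, x):
        # Equivalent to x @ (W + scale * B @ A).T without forming the sum.
        return x @ self.W.T + self.scale * (x @ self.A.T) @ self.B.T

W = np.arange(12.0).reshape(3, 4)
layer = LoRALinear(W)
x = np.ones((2, 4))
out = layer.forward(x)
```

Because B starts at zero, the initial output matches the frozen base layer exactly; any training signal then flows only through the 2 * r * d parameters of A and B.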
null | null | 2405.09688 | null | null | http://arxiv.org/pdf/2405.09688v2 | 2024-05-25T23:08:47Z | 2024-05-15T20:27:56Z | From Local to Global Order: A Theory of Neural Synaptic Balance | We develop a theory of neural synaptic balance and how it can emerge or be enforced in neural networks. For a given additive cost function $R$ (regularizer), a neuron is said to be in balance if the total cost of its input weights is equal to the total cost of its output weights. The basic example is provided by feedforward networks of ReLU units trained with $L_2$ regularizers, which exhibit balance after proper training. The theory explains this phenomenon and extends it in several directions. The first direction is the extension to bilinear and other activation functions. The second direction is the extension to more general regularizers, including all $L_p$ ($p>0$) regularizers. The third direction is the extension to non-layered architectures, recurrent architectures, convolutional architectures, as well as architectures with mixed activation functions. The theory is based on two local neuronal operations: scaling which is commutative, and balancing which is not commutative. Finally, and most importantly, given any initial set of weights, when local balancing operations are applied to each neuron in a stochastic manner, global order always emerges through the convergence of the stochastic balancing algorithm to the same unique set of balanced weights. The reason for this convergence is the existence of an underlying strictly convex optimization problem where the relevant variables are constrained to a linear, only architecture-dependent, manifold. The theory is corroborated through various simulations carried out on benchmark data sets. Scaling and balancing operations are entirely local and thus physically plausible in biological and neuromorphic networks. | [
"['Pierre Baldi' 'Alireza Rahmansetayesh']"
]
|
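For a ReLU unit the balancing operation described above has a closed form: by positive homogeneity, scaling incoming weights by lambda > 0 and outgoing weights by 1/lambda leaves the network function unchanged, and choosing lambda = (||w_out|| / ||w_in||)^(1/2) equalizes the two L2 costs. A sketch for a single hidden neuron (the tiny network is illustrative):

```python
import numpy as np

def balance_neuron(w_in, w_out):
    """Scale incoming weights by lam and outgoing weights by 1/lam so the
    L2 norms (hence L2 costs) of the two sides become equal; for a ReLU
    unit this leaves the computed function unchanged."""
    lam = np.sqrt(np.linalg.norm(w_out) / np.linalg.norm(w_in))
    return lam * w_in, w_out / lam

relu = lambda z: np.maximum(z, 0.0)

w_in = np.array([3.0, -1.0])       # incoming weights of one hidden ReLU
w_out = np.array([0.5])            # outgoing weight
x = np.array([1.0, 2.0])
y_before = w_out * relu(w_in @ x)

b_in, b_out = balance_neuron(w_in, w_out)
y_after = b_out * relu(b_in @ x)
```

The output is unchanged while both sides now carry equal L2 cost — the fixed point the stochastic balancing algorithm converges to.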
null | null | 2405.09689 | null | null | http://arxiv.org/pdf/2405.09689v1 | 2024-05-15T20:37:48Z | 2024-05-15T20:37:48Z | Generalized Holographic Reduced Representations | Deep learning has achieved remarkable success in recent years. Central to its success is its ability to learn representations that preserve task-relevant structure. However, massive energy, compute, and data costs are required to learn general representations. This paper explores Hyperdimensional Computing (HDC), a computationally and data-efficient brain-inspired alternative. HDC acts as a bridge between connectionist and symbolic approaches to artificial intelligence (AI), allowing explicit specification of representational structure as in symbolic approaches while retaining the flexibility of connectionist approaches. However, HDC's simplicity poses challenges for encoding complex compositional structures, especially in its binding operation. To address this, we propose Generalized Holographic Reduced Representations (GHRR), an extension of Fourier Holographic Reduced Representations (FHRR), a specific HDC implementation. GHRR introduces a flexible, non-commutative binding operation, enabling improved encoding of complex data structures while preserving HDC's desirable properties of robustness and transparency. In this work, we introduce the GHRR framework, prove its theoretical properties and its adherence to HDC properties, explore its kernel and binding characteristics, and perform empirical experiments showcasing its flexible non-commutativity, enhanced decoding accuracy for compositional structures, and improved memorization capacity compared to FHRR. | [
"['Calvin Yeung' 'Zhuowen Zou' 'Mohsen Imani']"
]
|
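In FHRR, the base implementation that GHRR extends, hypervectors are arrays of unit-modulus complex phasors; binding is elementwise multiplication (commutative — exactly the property GHRR relaxes) and unbinding multiplies by the conjugate. A sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_phasor(d):
    """Random FHRR hypervector: d unit-modulus complex phasors."""
    return np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, d))

bind = lambda a, b: a * b                          # commutative binding
unbind = lambda c, b: c * np.conj(b)               # exact inverse for phasors
sim = lambda a, b: (a * np.conj(b)).real.mean()    # similarity in [-1, 1]

a, b = random_phasor(1024), random_phasor(1024)
recovered = unbind(bind(a, b), b)
```

Unbinding recovers the bound vector exactly, while unrelated random hypervectors are quasi-orthogonal (similarity near zero) — the robustness property GHRR preserves while making the binding non-commutative.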
null | null | 2405.09707 | null | null | http://arxiv.org/pdf/2405.09707v1 | 2024-05-15T21:13:54Z | 2024-05-15T21:13:54Z | Point2SSM++: Self-Supervised Learning of Anatomical Shape Models from
Point Clouds | Correspondence-based statistical shape modeling (SSM) stands as a powerful technology for morphometric analysis in clinical research. SSM facilitates population-level characterization and quantification of anatomical shapes such as bones and organs, aiding in pathology and disease diagnostics and treatment planning. Despite its potential, SSM remains under-utilized in medical research due to the significant overhead associated with automatic construction methods, which demand complete, aligned shape surface representations. Additionally, optimization-based techniques rely on bias-inducing assumptions or templates and have prolonged inference times as the entire cohort is simultaneously optimized. To overcome these challenges, we introduce Point2SSM++, a principled, self-supervised deep learning approach that directly learns correspondence points from point cloud representations of anatomical shapes. Point2SSM++ is robust to misaligned and inconsistent input, providing SSM that accurately samples individual shape surfaces while effectively capturing population-level statistics. Additionally, we present principled extensions of Point2SSM++ to adapt it for dynamic spatiotemporal and multi-anatomy use cases, demonstrating the broad versatility of the Point2SSM++ framework. Through extensive validation across diverse anatomies, evaluation metrics, and clinically relevant downstream tasks, we demonstrate Point2SSM++'s superiority over existing state-of-the-art deep learning models and traditional approaches. Point2SSM++ substantially enhances the feasibility of SSM generation and significantly broadens its array of potential clinical applications. | [
"['Jadie Adams' 'Shireen Elhabian']"
]
|
null | null | 2405.09719 | null | null | http://arxiv.org/pdf/2405.09719v2 | 2024-05-25T16:08:23Z | 2024-05-15T22:28:23Z | Spectral Editing of Activations for Large Language Model Alignment | Large language models (LLMs) often exhibit undesirable behaviours, such as generating untruthful or biased content. Editing their internal representations has been shown to be effective in mitigating such behaviours on top of the existing alignment methods. We propose a novel inference-time editing method, namely spectral editing of activations (SEA), to project the input representations into directions with maximal covariance with the positive demonstrations (e.g., truthful) while minimising covariance with the negative demonstrations (e.g., hallucinated). We also extend our method to non-linear editing using feature functions. We run extensive experiments on benchmarks concerning truthfulness and bias with six open-source LLMs of different sizes and model families. The results demonstrate the superiority of SEA in effectiveness, generalisation to similar tasks, as well as computation and data efficiency. We also show that SEA editing only has a limited negative impact on other model capabilities. | [
"['Yifu Qiu' 'Zheng Zhao' 'Yftah Ziser' 'Anna Korhonen' 'Edoardo M. Ponti'\n 'Shay B. Cohen']"
]
|
null | null | 2405.09742 | null | null | http://arxiv.org/pdf/2405.09742v1 | 2024-05-16T00:52:03Z | 2024-05-16T00:52:03Z | Random Scaling and Momentum for Non-smooth Non-convex Optimization | Training neural networks requires optimizing a loss function that may be highly irregular, and in particular neither convex nor smooth. Popular training algorithms are based on stochastic gradient descent with momentum (SGDM), for which classical analysis applies only if the loss is either convex or smooth. We show that a very small modification to SGDM closes this gap: simply scale the update at each time point by an exponentially distributed random scalar. The resulting algorithm achieves optimal convergence guarantees. Intriguingly, this result is not derived by a specific analysis of SGDM: instead, it falls naturally out of a more general framework for converting online convex optimization algorithms to non-convex optimization algorithms. | [
"['Qinzi Zhang' 'Ashok Cutkosky']"
]
|
null | null | 2405.09747 | null | null | http://arxiv.org/pdf/2405.09747v1 | 2024-05-16T01:09:33Z | 2024-05-16T01:09:33Z | NIFTY Financial News Headlines Dataset | We introduce and make publicly available the NIFTY Financial News Headlines dataset, designed to facilitate and advance research in financial market forecasting using large language models (LLMs). This dataset comprises two distinct versions tailored for different modeling approaches: (i) NIFTY-LM, which targets supervised fine-tuning (SFT) of LLMs with an auto-regressive, causal language-modeling objective, and (ii) NIFTY-RL, formatted specifically for alignment methods (like reinforcement learning from human feedback (RLHF)) to align LLMs via rejection sampling and reward modeling. Each dataset version provides curated, high-quality data incorporating comprehensive metadata, market indices, and deduplicated financial news headlines systematically filtered and ranked to suit modern LLM frameworks. We also include experiments demonstrating some applications of the dataset in tasks like stock price movement and the role of LLM embeddings in information acquisition/richness. The NIFTY dataset along with utilities (like truncating prompt's context length systematically) are available on Hugging Face at https://huggingface.co/datasets/raeidsaqur/NIFTY. | [
"['Raeid Saqur' 'Ken Kato' 'Nicholas Vinden' 'Frank Rudzicz']"
]
|
null | null | 2405.09756 | null | null | http://arxiv.org/pdf/2405.09756v1 | 2024-05-16T01:45:55Z | 2024-05-16T01:45:55Z | An Autoencoder and Generative Adversarial Networks Approach for
Multi-Omics Data Imbalanced Class Handling and Classification | In the relentless effort to enhance medical diagnostics, the integration of state-of-the-art machine learning methodologies has emerged as a promising research area. In molecular biology, there has been an explosion of data generated from multi-omics sequencing. Modern sequencing equipment can provide a large number of complicated measurements per experiment. Therefore, traditional statistical methods face challenging tasks when dealing with such high-dimensional data. However, most of the information contained in these datasets is redundant or unrelated and can be effectively reduced to significantly fewer variables without losing much information. Dimensionality reduction techniques are mathematical procedures that allow for this reduction; they have largely been developed through the statistics and machine learning disciplines. The other challenge in medical datasets is having an imbalanced number of samples in the classes, which leads to biased results in machine learning models. This study focused on tackling these challenges with a neural network that incorporates an autoencoder to extract the latent space of the features, and Generative Adversarial Networks (GAN) to generate synthetic samples. The latent space is the reduced-dimensional space that captures the meaningful features of the original data. Our model starts with feature selection to select the discriminative features before feeding them to the neural network. Then, the model predicts the outcome of cancer for different datasets. The proposed model outperformed other existing models by scoring an accuracy of 95.09% on the bladder cancer dataset and 88.82% on the breast cancer dataset. | [
"['Ibrahim Al-Hurani' 'Abedalrhman Alkhateeb' 'Salama Ikki']"
]
|
null | null | 2405.09771 | null | null | http://arxiv.org/pdf/2405.09771v1 | 2024-05-16T02:22:09Z | 2024-05-16T02:22:09Z | Harmonizing Generalization and Personalization in Federated Prompt
Learning | Federated Prompt Learning (FPL) incorporates large pre-trained Vision-Language models (VLM) into federated learning through prompt tuning. The transferable representations and remarkable generalization capacity of VLM make them highly compatible with the integration of federated learning. Addressing data heterogeneity in federated learning requires personalization, but excessive focus on it across clients could compromise the model's ability to generalize effectively. To preserve the impressive generalization capability of VLM, it is crucial to strike a balance between personalization and generalization in FPL. To tackle this challenge, we propose Federated Prompt Learning with CLIP Generalization and low-rank Personalization (FedPGP), which employs pre-trained CLIP to provide knowledge-guidance on the global prompt for improved generalization and incorporates a low-rank adaptation term to personalize the global prompt. Further, FedPGP integrates a prompt-wise contrastive loss to achieve knowledge guidance and personalized adaptation simultaneously, enabling a harmonious balance between personalization and generalization in FPL. We conduct extensive experiments on various datasets to explore base-to-novel generalization in both category-level and domain-level scenarios with heterogeneous data, showing the superiority of FedPGP in balancing generalization and personalization. | [
"['Tianyu Cui' 'Hongxia Li' 'Jingya Wang' 'Ye Shi']"
]
|
null | null | 2405.09781 | null | null | http://arxiv.org/pdf/2405.09781v1 | 2024-05-16T03:00:41Z | 2024-05-16T03:00:41Z | An Independent Implementation of Quantum Machine Learning Algorithms in
Qiskit for Genomic Data | In this paper, we explore the power of Quantum Machine Learning as we extend, implement and evaluate algorithms like Quantum Support Vector Classifier (QSVC), Pegasos-QSVC, Variational Quantum Circuits (VQC), and Quantum Neural Networks (QNN) in Qiskit with diverse feature mapping techniques for genomic sequence classification. | [
"['Navneet Singh' 'Shiva Raj Pokhrel']"
]
|
null | null | 2405.09783 | null | null | http://arxiv.org/pdf/2405.09783v1 | 2024-05-16T03:04:10Z | 2024-05-16T03:04:10Z | LLM and Simulation as Bilevel Optimizers: A New Paradigm to Advance
Physical Scientific Discovery | Large Language Models (LLMs) have recently gained significant attention in scientific discovery for their extensive knowledge and advanced reasoning capabilities. However, they encounter challenges in effectively simulating observational feedback and grounding it with language to propel advancements in physical scientific discovery. Conversely, human scientists undertake scientific discovery by formulating hypotheses, conducting experiments, and revising theories through observational analysis. Inspired by this, we propose to enhance the knowledge-driven, abstract reasoning abilities of LLMs with the computational strength of simulations. We introduce Scientific Generative Agent (SGA), a bilevel optimization framework: LLMs act as knowledgeable and versatile thinkers, proposing scientific hypotheses and reasoning about discrete components, such as physics equations or molecule structures; meanwhile, simulations function as experimental platforms, providing observational feedback and optimizing via differentiability for continuous parts, such as physical parameters. We conduct extensive experiments to demonstrate our framework's efficacy in constitutive law discovery and molecular design, unveiling novel solutions that differ from conventional human expectations yet remain coherent upon analysis. | [
"['Pingchuan Ma' 'Tsun-Hsuan Wang' 'Minghao Guo' 'Zhiqing Sun'\n 'Joshua B. Tenenbaum' 'Daniela Rus' 'Chuang Gan' 'Wojciech Matusik']"
]
|
null | null | 2405.09784 | null | null | http://arxiv.org/pdf/2405.09784v3 | 2024-05-23T13:15:06Z | 2024-05-16T03:04:33Z | Online bipartite matching with imperfect advice | We study the problem of online unweighted bipartite matching with $n$ offline vertices and $n$ online vertices where one wishes to be competitive against the optimal offline algorithm. While the classic RANKING algorithm of Karp et al. [1990] provably attains competitive ratio of $1-1/e > 1/2$, we show that no learning-augmented method can be both 1-consistent and strictly better than $1/2$-robust under the adversarial arrival model. Meanwhile, under the random arrival model, we show how one can utilize methods from distribution testing to design an algorithm that takes in external advice about the online vertices and provably achieves competitive ratio interpolating between any ratio attainable by advice-free methods and the optimal ratio of 1, depending on the advice quality. | [
"['Davin Choo' 'Themis Gouleakis' 'Chun Kai Ling' 'Arnab Bhattacharyya']"
]
|
null | null | 2405.09786 | null | null | http://arxiv.org/pdf/2405.09786v3 | 2024-06-02T15:06:02Z | 2024-05-16T03:19:52Z | IBD-PSC: Input-level Backdoor Detection via Parameter-oriented Scaling
Consistency | Deep neural networks (DNNs) are vulnerable to backdoor attacks, where adversaries can maliciously trigger model misclassifications by implanting a hidden backdoor during model training. This paper proposes a simple yet effective input-level backdoor detection (dubbed IBD-PSC) as a `firewall' to filter out malicious testing images. Our method is motivated by an intriguing phenomenon, i.e., parameter-oriented scaling consistency (PSC), where the prediction confidences of poisoned samples are significantly more consistent than those of benign ones when amplifying model parameters. In particular, we provide theoretical analysis to safeguard the foundations of the PSC phenomenon. We also design an adaptive method to select BN layers to scale up for effective detection. Extensive experiments are conducted on benchmark datasets, verifying the effectiveness and efficiency of our IBD-PSC method and its resistance to adaptive attacks. Codes are available at \href{https://github.com/THUYimingLi/BackdoorBox}{BackdoorBox}. | [
"['Linshan Hou' 'Ruili Feng' 'Zhongyun Hua' 'Wei Luo' 'Leo Yu Zhang'\n 'Yiming Li']"
]
|
null | null | 2405.09787 | null | null | http://arxiv.org/pdf/2405.09787v1 | 2024-05-16T03:23:57Z | 2024-05-16T03:23:57Z | Analysis of the BraTS 2023 Intracranial Meningioma Segmentation
Challenge | We describe the design and results from the BraTS 2023 Intracranial Meningioma Segmentation Challenge. The BraTS Meningioma Challenge differed from prior BraTS Glioma challenges in that it focused on meningiomas, which are typically benign extra-axial tumors with diverse radiologic and anatomical presentation and a propensity for multiplicity. Nine participating teams each developed deep-learning automated segmentation models using image data from the largest multi-institutional systematically expert annotated multilabel multi-sequence meningioma MRI dataset to date, which included 1000 training set cases, 141 validation set cases, and 283 hidden test set cases. Each case included T2, T2/FLAIR, T1, and T1Gd brain MRI sequences with associated tumor compartment labels delineating enhancing tumor, non-enhancing tumor, and surrounding non-enhancing T2/FLAIR hyperintensity. Participant automated segmentation models were evaluated and ranked based on a scoring system evaluating lesion-wise metrics including dice similarity coefficient (DSC) and 95% Hausdorff Distance. The top ranked team had a lesion-wise median dice similarity coefficient (DSC) of 0.976, 0.976, and 0.964 for enhancing tumor, tumor core, and whole tumor, respectively and a corresponding average DSC of 0.899, 0.904, and 0.871, respectively. These results serve as state-of-the-art benchmarks for future pre-operative meningioma automated segmentation algorithms. Additionally, we found that 1286 of 1424 cases (90.3%) had at least 1 compartment voxel abutting the edge of the skull-stripped image edge, which requires further investigation into optimal pre-processing face anonymization steps. | [
"['Dominic LaBella' 'Ujjwal Baid' 'Omaditya Khanna' 'Shan McBurney-Lin'\n 'Ryan McLean' 'Pierre Nedelec' 'Arif Rashid' 'Nourel Hoda Tahon'\n 'Talissa Altes' 'Radhika Bhalerao' 'Yaseen Dhemesh' 'Devon Godfrey'\n 'Fathi Hilal' 'Scott Floyd' 'Anastasia Janas' 'Anahita Fathi Kazerooni'\n 'John Kirkpatrick' 'Collin Kent' 'Florian Kofler' 'Kevin Leu'\n 'Nazanin Maleki' 'Bjoern Menze' 'Maxence Pajot' 'Zachary J. Reitman'\n 'Jeffrey D. Rudie' 'Rachit Saluja' 'Yury Velichko' 'Chunhao Wang'\n 'Pranav Warman' 'Maruf Adewole' 'Jake Albrecht' 'Udunna Anazodo'\n 'Syed Muhammad Anwar' 'Timothy Bergquist' 'Sully Francis Chen'\n 'Verena Chung' 'Gian-Marco Conte' 'Farouk Dako' 'James Eddy' 'Ivan Ezhov'\n 'Nastaran Khalili' 'Juan Eugenio Iglesias' 'Zhifan Jiang'\n 'Elaine Johanson' 'Koen Van Leemput' 'Hongwei Bran Li'\n 'Marius George Linguraru' 'Xinyang Liu' 'Aria Mahtabfar' 'Zeke Meier'\n 'Ahmed W. Moawad' 'John Mongan' 'Marie Piraud'\n 'Russell Takeshi Shinohara' 'Walter F. Wiggins' 'Aly H. Abayazeed'\n 'Rachel Akinola' 'András Jakab' 'Michel Bilello'\n 'Maria Correia de Verdier' 'Priscila Crivellaro' 'Christos Davatzikos'\n 'Keyvan Farahani' 'John Freymann' 'Christopher Hess' 'Raymond Huang'\n 'Philipp Lohmann' 'Mana Moassefi' 'Matthew W. Pease' 'Phillipp Vollmuth'\n 'Nico Sollmann' 'David Diffley' 'Khanak K. Nandolia' 'Daniel I. Warren'\n 'Ali Hussain' 'Pascal Fehringer' 'Yulia Bronstein' 'Lisa Deptula'\n 'Evan G. Stein' 'Mahsa Taherzadeh' 'Eduardo Portela de Oliveira'\n 'Aoife Haughey' 'Marinos Kontzialis' 'Luca Saba' 'Benjamin Turner'\n 'Melanie M. T. Brüßeler' 'Shehbaz Ansari' 'Athanasios Gkampenis'\n 'David Maximilian Weiss' 'Aya Mansour' 'Islam H. Shawali'\n 'Nikolay Yordanov' 'Joel M. Stein' 'Roula Hourani'\n 'Mohammed Yahya Moshebah' 'Ahmed Magdy Abouelatta' 'Tanvir Rizvi'\n 'Klara Willms' 'Dann C. 
Martin' 'Abdullah Okar' \"Gennaro D'Anna\"\n 'Ahmed Taha' 'Yasaman Sharifi' 'Shahriar Faghani' 'Dominic Kite'\n 'Marco Pinho' 'Muhammad Ammar Haider' 'Alejandro Aristizabal'\n 'Alexandros Karargyris' 'Hasan Kassem' 'Sarthak Pati' 'Micah Sheller'\n 'Michelle Alonso-Basanta' 'Javier Villanueva-Meyer'\n 'Andreas M. Rauschecker' 'Ayman Nada' 'Mariam Aboian' 'Adam E. Flanders'\n 'Benedikt Wiestler' 'Spyridon Bakas' 'Evan Calabrese']"
]
|
null | null | 2405.09798 | null | null | http://arxiv.org/pdf/2405.09798v1 | 2024-05-16T04:02:43Z | 2024-05-16T04:02:43Z | Many-Shot In-Context Learning in Multimodal Foundation Models | Large language models are well-known to be effective at few-shot in-context learning (ICL). Recent advancements in multimodal foundation models have enabled unprecedentedly long context windows, presenting an opportunity to explore their capability to perform ICL with many more demonstrating examples. In this work, we evaluate the performance of multimodal foundation models scaling from few-shot to many-shot ICL. We benchmark GPT-4o and Gemini 1.5 Pro across 10 datasets spanning multiple domains (natural imagery, medical imagery, remote sensing, and molecular imagery) and tasks (multi-class, multi-label, and fine-grained classification). We observe that many-shot ICL, including up to almost 2,000 multimodal demonstrating examples, leads to substantial improvements compared to few-shot (<100 examples) ICL across all of the datasets. Further, Gemini 1.5 Pro performance continues to improve log-linearly up to the maximum number of tested examples on many datasets. Given the high inference costs associated with the long prompts required for many-shot ICL, we also explore the impact of batching multiple queries in a single API call. We show that batching up to 50 queries can lead to performance improvements under zero-shot and many-shot ICL, with substantial gains in the zero-shot setting on multiple datasets, while drastically reducing per-query cost and latency. Finally, we measure ICL data efficiency of the models, or the rate at which the models learn from more demonstrating examples. We find that while GPT-4o and Gemini 1.5 Pro achieve similar zero-shot performance across the datasets, Gemini 1.5 Pro exhibits higher ICL data efficiency than GPT-4o on most datasets. 
Our results suggest that many-shot ICL could enable users to efficiently adapt multimodal foundation models to new applications and domains. Our codebase is publicly available at https://github.com/stanfordmlgroup/ManyICL . | [
"['Yixing Jiang' 'Jeremy Irvin' 'Ji Hun Wang' 'Muhammad Ahmed Chaudhry'\n 'Jonathan H. Chen' 'Andrew Y. Ng']"
]
|
null | null | 2405.09800 | null | null | http://arxiv.org/pdf/2405.09800v1 | 2024-05-16T04:13:17Z | 2024-05-16T04:13:17Z | Manifold Integrated Gradients: Riemannian Geometry for Feature
Attribution | In this paper, we dive into the reliability concerns of Integrated Gradients (IG), a prevalent feature attribution method for black-box deep learning models. We particularly address two predominant challenges associated with IG: the generation of noisy feature visualizations for vision models and the vulnerability to adversarial attributional attacks. Our approach involves an adaptation of path-based feature attribution, aligning the path of attribution more closely to the intrinsic geometry of the data manifold. Our experiments utilise deep generative models applied to several real-world image datasets. They demonstrate that IG along the geodesics conforms to the curved geometry of the Riemannian data manifold, generating more perceptually intuitive explanations and, subsequently, substantially increasing robustness to targeted attributional attacks. | [
"['Eslam Zaher' 'Maciej Trzaskowski' 'Quan Nguyen' 'Fred Roosta']"
]
|
null | null | 2405.09802 | null | null | http://arxiv.org/pdf/2405.09802v2 | 2024-07-08T06:06:34Z | 2024-05-16T04:21:09Z | Analysis and Predictive Modeling of Solar Coronal Holes Using Computer
Vision and LSTM Networks | In the era of space exploration, coronal holes on the sun play a significant role due to their impact on satellites and aircraft through their open magnetic fields and increased solar wind emissions. This study employs computer vision techniques to detect coronal hole regions and estimate their sizes using imagery from the Solar Dynamics Observatory (SDO). Additionally, we utilize deep learning methods, specifically Long Short-Term Memory (LSTM) networks, to analyze trends in the area of coronal holes and predict their areas across various solar regions over a span of seven days. By examining time series data, we aim to identify patterns in coronal hole behavior and understand their potential effects on space weather. This research enhances our ability to anticipate and prepare for space weather events that could affect Earth's technological systems. | [
"['Juyoung Yun' 'Jungmin Shin']"
]
|
null | null | 2405.09806 | null | null | http://arxiv.org/pdf/2405.09806v2 | 2024-07-10T04:04:06Z | 2024-05-16T04:28:44Z | MediSyn: Text-Guided Diffusion Models for Broad Medical 2D and 3D Image
Synthesis | Diffusion models have recently gained significant traction due to their ability to generate high-fidelity and diverse images and videos conditioned on text prompts. In medicine, this application promises to address the critical challenge of data scarcity, a consequence of barriers in data sharing, stringent patient privacy regulations, and disparities in patient population and demographics. By generating realistic and varying medical 2D and 3D images, these models offer a rich, privacy-respecting resource for algorithmic training and research. To this end, we introduce MediSyn, a pair of instruction-tuned text-guided latent diffusion models with the ability to generate high-fidelity and diverse medical 2D and 3D images across specialties and modalities. Through established metrics, we show significant improvement in broad medical image and video synthesis guided by text prompts. | [
"['Joseph Cho' 'Cyril Zakka' 'Dhamanpreet Kaur' 'Rohan Shad'\n 'Ross Wightman' 'Akshay Chaudhari' 'William Hiesinger']"
]
|
null | null | 2405.09817 | null | null | http://arxiv.org/pdf/2405.09817v2 | 2024-05-17T05:39:52Z | 2024-05-16T05:20:47Z | Active Learning with Fully Bayesian Neural Networks for Discontinuous
and Nonstationary Data | Active learning optimizes the exploration of large parameter spaces by strategically selecting which experiments or simulations to conduct, thus reducing resource consumption and potentially accelerating scientific discovery. A key component of this approach is a probabilistic surrogate model, typically a Gaussian Process (GP), which approximates an unknown functional relationship between control parameters and a target property. However, conventional GPs often struggle when applied to systems with discontinuities and non-stationarities, prompting the exploration of alternative models. This limitation becomes particularly relevant in physical science problems, which are often characterized by abrupt transitions between different system states and rapid changes in physical property behavior. Fully Bayesian Neural Networks (FBNNs) serve as a promising substitute, treating all neural network weights probabilistically and leveraging advanced Markov Chain Monte Carlo techniques for direct sampling from the posterior distribution. This approach enables FBNNs to provide reliable predictive distributions, crucial for making informed decisions under uncertainty in the active learning setting. Although traditionally considered too computationally expensive for 'big data' applications, many physical sciences problems involve small amounts of data in relatively low-dimensional parameter spaces. Here, we assess the suitability and performance of FBNNs with the No-U-Turn Sampler for active learning tasks in the 'small data' regime, highlighting their potential to enhance predictive accuracy and reliability on test functions relevant to problems in physical sciences. | [
"['Maxim Ziatdinov']"
]
|
null | null | 2405.09819 | null | null | http://arxiv.org/pdf/2405.09819v1 | 2024-05-16T05:36:28Z | 2024-05-16T05:36:28Z | Automating the Training and Deployment of Models in MLOps by Integrating
Systems with Machine Learning | This article introduces the importance of machine learning in real-world applications and explores the rise of MLOps (Machine Learning Operations) and its importance for solving challenges such as model deployment and performance monitoring. By reviewing the evolution of MLOps and its relationship to traditional software development methods, the paper proposes ways to integrate the system into machine learning to solve the problems faced by existing MLOps and improve productivity. This paper focuses on the importance of automated model training, and on methods to ensure the transparency and repeatability of the training process through version control systems. In addition, the challenges of integrating machine learning components into traditional CI/CD pipelines are discussed, and solutions such as versioning environments and containerization are proposed. Finally, the paper emphasizes the importance of continuous monitoring and feedback loops after model deployment to maintain model performance and reliability. Using case studies and best practices from Netflix, the article presents key strategies and lessons learned for successful implementation of MLOps practices, providing valuable references for other organizations to build and optimize their own MLOps practices. | [
"['Penghao Liang' 'Bo Song' 'Xiaoan Zhan' 'Zhou Chen' 'Jiaqiang Yuan']"
]
|
null | null | 2405.09820 | null | null | http://arxiv.org/pdf/2405.09820v1 | 2024-05-16T05:37:06Z | 2024-05-16T05:37:06Z | Densely Distilling Cumulative Knowledge for Continual Learning | Continual learning, involving sequential training on diverse tasks, often faces catastrophic forgetting. While knowledge distillation-based approaches exhibit notable success in preventing forgetting, we pinpoint a limitation in their ability to distill the cumulative knowledge of all the previous tasks. To remedy this, we propose Dense Knowledge Distillation (DKD). DKD uses a task pool to track the model's capabilities. It partitions the output logits of the model into dense groups, each corresponding to a task in the task pool. It then distills all tasks' knowledge using all groups. However, since using all the groups can be computationally expensive, we also suggest random group selection in each optimization step. Moreover, we propose an adaptive weighting scheme, which balances the learning of new classes and the retention of old classes, based on the count and similarity of the classes. Our DKD outperforms recent state-of-the-art baselines across diverse benchmarks and scenarios. Empirical analysis underscores DKD's ability to enhance model stability, promote flatter minima for improved generalization, and remain robust across various memory budgets and task orders. Moreover, it seamlessly integrates with other CL methods to boost performance and proves versatile in offline scenarios like model compression. | [
"['Zenglin Shi' 'Pei Liu' 'Tong Su' 'Yunpeng Wu' 'Kuien Liu' 'Yu Song'\n 'Meng Wang']"
]
|
null | null | 2405.09821 | null | null | http://arxiv.org/abs/2405.09821v2 | 2024-07-15T05:03:01Z | 2024-05-16T05:37:50Z | Evaluating Algorithmic Bias in Models for Predicting Academic
Performance of Filipino Students | Algorithmic bias is a major issue in machine learning models in educational contexts. However, it has not yet been studied thoroughly in Asian learning contexts, and only limited work has considered algorithmic bias based on regional (sub-national) background. As a step towards addressing this gap, this paper examines the population of 5,986 students at a large university in the Philippines, investigating algorithmic bias based on students' regional background. The university used the Canvas learning management system (LMS) in its online courses across a broad range of domains. Over the period of three semesters, we collected 48.7 million log records of the students' activity in Canvas. We used these logs to train binary classification models that predict student grades from the LMS activity. The best-performing model reached AUC of 0.75 and weighted F1-score of 0.79. Subsequently, we examined the data for bias based on students' region. Evaluation using three metrics: AUC, weighted F1-score, and MADD showed consistent results across all demographic groups. Thus, no unfairness was observed against a particular student group in the grade predictions. | [
"['Valdemar Švábenský' 'Mélina Verger' 'Maria Mercedes T. Rodrigo'\n 'Clarence James G. Monterozo' 'Ryan S. Baker'\n 'Miguel Zenon Nicanor Lerias Saavedra' 'Sébastien Lallé'\n 'Atsushi Shimada']"
]
|
null | null | 2405.09827 | null | null | http://arxiv.org/pdf/2405.09827v1 | 2024-05-16T05:56:03Z | 2024-05-16T05:56:03Z | Parallel Backpropagation for Shared-Feature Visualization | High-level visual brain regions contain subareas in which neurons appear to respond more strongly to examples of a particular semantic category, like faces or bodies, rather than objects. However, recent work has shown that while this finding holds on average, some out-of-category stimuli also activate neurons in these regions. This may be due to visual features common among the preferred class also being present in other images. Here, we propose a deep-learning-based approach for visualizing these features. For each neuron, we identify relevant visual features driving its selectivity by modelling responses to images based on latent activations of a deep neural network. Given an out-of-category image which strongly activates the neuron, our method first identifies a reference image from the preferred category yielding a similar feature activation pattern. We then backpropagate latent activations of both images to the pixel level, while enhancing the identified shared dimensions and attenuating non-shared features. The procedure highlights image regions containing shared features driving responses of the model neuron. We apply the algorithm to novel recordings from body-selective regions in macaque IT cortex in order to understand why some images of objects excite these neurons. Visualizations reveal object parts which resemble parts of a macaque body, shedding light on neural preference of these objects. | [
"['Alexander Lappe' 'Anna Bognár' 'Ghazaleh Ghamkhari Nejad'\n 'Albert Mukovskiy' 'Lucas Martini' 'Martin A. Giese' 'Rufin Vogels']"
]
|
null | null | 2405.09831 | null | null | http://arxiv.org/pdf/2405.09831v5 | 2024-06-21T05:55:23Z | 2024-05-16T06:07:31Z | Nearly Minimax Optimal Regret for Multinomial Logistic Bandit | In this paper, we study the contextual multinomial logit (MNL) bandit problem in which a learning agent sequentially selects an assortment based on contextual information, and user feedback follows an MNL choice model. There has been a significant discrepancy between lower and upper regret bounds, particularly regarding the maximum assortment size $K$. Additionally, the variation in reward structures between these bounds complicates the quest for optimality. Under uniform rewards, where all items have the same expected reward, we establish a regret lower bound of $\Omega(d\sqrt{\smash[b]{T/K}})$ and propose a constant-time algorithm, OFU-MNL+, that achieves a matching upper bound of $\tilde{O}(d\sqrt{\smash[b]{T/K}})$. Under non-uniform rewards, we prove a lower bound of $\Omega(d\sqrt{T})$ and an upper bound of $\tilde{O}(d\sqrt{T})$, also achievable by OFU-MNL+. Our empirical studies support these theoretical findings. To the best of our knowledge, this is the first work in the contextual MNL bandit literature to prove minimax optimality -- for either uniform or non-uniform reward setting -- and to propose a computationally efficient algorithm that achieves this optimality up to logarithmic factors. | [
"['Joongkyu Lee' 'Min-hwan Oh']"
]
|
null | null | 2405.09838 | null | null | http://arxiv.org/pdf/2405.09838v1 | 2024-05-16T06:31:02Z | 2024-05-16T06:31:02Z | Unsupervised Work Behavior Pattern Extraction Based on Hierarchical
Probabilistic Model | Evolving consumer demands and market trends have led to businesses increasingly embracing a production approach that prioritizes flexibility and customization. Consequently, factory workers must engage in tasks that are more complex than before. Thus, productivity depends on each worker's skills in assembling products. Therefore, analyzing the behavior of a worker is crucial for work improvement. However, manual analysis is time-consuming and does not provide quick and accurate feedback. Machine learning methods have been attempted to automate the analyses; however, most of these methods need several labels for training. To this end, we extend the Gaussian process hidden semi-Markov model (GP-HSMM) to enable the rapid and automated analysis of worker behavior without pre-training. The model does not require labeled data and can automatically and accurately segment continuous motions into motion classes. The proposed model is a probabilistic model that hierarchically connects GP-HSMM and HSMM, enabling the extraction of behavioral patterns with different granularities. Furthermore, it mutually infers the parameters between the GP-HSMM and HSMM, resulting in accurate motion pattern extraction. We applied the proposed method to motion data in which workers assembled products at an actual production site. The accuracy of behavior pattern extraction was evaluated using normalized Levenshtein distance (NLD). The smaller the value of NLD, the more accurate the pattern extraction. The NLD of motion patterns captured by GP-HSMM and HSMM layers in our proposed method was 0.50 and 0.33, respectively, which are smaller than those of the baseline methods. | [
"['Issei Saito' 'Tomoaki Nakamura' 'Toshiyuki Hatta' 'Wataru Fujita'\n 'Shintaro Watanabe' 'Shotaro Miwa']"
]
|
null | null | 2405.09839 | null | null | http://arxiv.org/pdf/2405.09839v1 | 2024-05-16T06:35:42Z | 2024-05-16T06:35:42Z | Advances in Robust Federated Learning: Heterogeneity Considerations | In the field of heterogeneous federated learning (FL), the key challenge is to efficiently and collaboratively train models across multiple clients with different data distributions, model structures, task objectives, computational capabilities, and communication resources. This diversity leads to significant heterogeneity, which increases the complexity of model training. In this paper, we first outline the basic concepts of heterogeneous federated learning and summarize the research challenges in federated learning in terms of five aspects: data, model, task, device, and communication. In addition, we explore how existing state-of-the-art approaches cope with the heterogeneity of federated learning, and categorize and review these approaches at three different levels: data-level, model-level, and architecture-level. Subsequently, the paper extensively discusses privacy-preserving strategies in heterogeneous federated learning environments. Finally, the paper discusses current open issues and directions for future research, aiming to promote the further development of heterogeneous federated learning. | [
"['Chuan Chen' 'Tianchi Liao' 'Xiaojun Deng' 'Zihou Wu' 'Sheng Huang'\n 'Zibin Zheng']"
]
|
null | null | 2405.09841 | null | null | http://arxiv.org/pdf/2405.09841v1 | 2024-05-16T06:38:28Z | 2024-05-16T06:38:28Z | Simultaneous Identification of Sparse Structures and Communities in
Heterogeneous Graphical Models | Exploring and detecting community structures hold significant importance in genetics, social sciences, neuroscience, and finance. Especially in graphical models, community detection can encourage the exploration of sets of variables with group-like properties. In this paper, within the framework of Gaussian graphical models, we introduce a novel decomposition of the underlying graphical structure into a sparse part and low-rank diagonal blocks (non-overlapped communities). We illustrate the significance of this decomposition through two modeling perspectives and propose a three-stage estimation procedure with a fast and efficient algorithm for the identification of the sparse structure and communities. Also on the theoretical front, we establish conditions for local identifiability and extend the traditional irrepresentability condition to an adaptive form by constructing an effective norm, which ensures the consistency of model selection for the adaptive $\ell_1$ penalized estimator in the second stage. Moreover, we also provide the clustering error bound for the K-means procedure in the third stage. Extensive numerical experiments are conducted to demonstrate the superiority of the proposed method over existing approaches in estimating graph structures. Furthermore, we apply our method to the stock return data, revealing its capability to accurately identify non-overlapped community structures. | [
"['Dapeng Shi' 'Tiandong Wang' 'Zhiliang Ying']"
]
|
null | null | 2405.09858 | null | null | http://arxiv.org/pdf/2405.09858v2 | 2024-07-11T07:09:00Z | 2024-05-16T07:25:15Z | Towards Realistic Incremental Scenario in Class Incremental Semantic
Segmentation | This paper addresses the unrealistic aspect of the commonly adopted Continuous Incremental Semantic Segmentation (CISS) scenario, termed overlapped. We point out that overlapped allows the same image to reappear in future tasks with different pixel labels, which is far from practical incremental learning scenarios. Moreover, we identified that this flawed scenario may lead to biased results for two commonly used techniques in CISS, pseudo-labeling and exemplar memory, resulting in unintended advantages or disadvantages for certain techniques. To mitigate this, a practical scenario called partitioned is proposed, in which the dataset is first divided into distinct subsets representing each class, and then the subsets are assigned to each corresponding task. This efficiently addresses the issue above while meeting the requirement of CISS scenario, such as capturing the background shifts. Furthermore, we identify and address the code implementation issues related to retrieving data from the exemplar memory, which was ignored in previous works. Lastly, we introduce a simple yet competitive memory-based baseline, MiB-AugM, that handles background shifts of current tasks in the exemplar memory. This baseline achieves state-of-the-art results across multiple tasks involving learning numerous new classes. | [
"['Jihwan Kwak' 'Sungmin Cha' 'Taesup Moon']"
]
|
null | null | 2405.09866 | null | null | http://arxiv.org/pdf/2405.09866v1 | 2024-05-16T07:43:15Z | 2024-05-16T07:43:15Z | Rethinking Multi-User Semantic Communications with Deep Generative
Models | In recent years, novel communication strategies have emerged to face the challenges that the increased number of connected devices and the higher quality of transmitted information are posing. Among them, semantic communication has obtained promising results, especially when combined with state-of-the-art deep generative models, such as large language or diffusion models, able to regenerate content from extremely compressed semantic information. However, most of these approaches focus on single-user scenarios processing the received content at the receiver on top of conventional communication systems. In this paper, we propose to go beyond these methods by developing a novel generative semantic communication framework tailored for multi-user scenarios. This system assigns the channel to users knowing that the lost information can be filled in with a diffusion model at the receivers. Under this innovative perspective, OFDMA systems should not aim to transmit the largest part of information, but solely the bits the generative model needs to semantically regenerate the missing ones. The thorough experimental evaluation shows the capabilities of the novel diffusion model and the effectiveness of the proposed framework, leading towards a GenAI-based next generation of communications. | [
"['Eleonora Grassucci' 'Jinho Choi' 'Jihong Park' 'Riccardo F. Gramaccioni'\n 'Giordano Cicchetti' 'Danilo Comminiello']"
]
|
null | null | 2405.09878 | null | null | http://arxiv.org/pdf/2405.09878v2 | 2024-07-14T18:01:28Z | 2024-05-16T07:57:31Z | Hyperplane Arrangements and Fixed Points in Iterated PWL Neural Networks | We leverage the framework of hyperplane arrangements to analyze potential regions of (stable) fixed points. We provide an upper bound on the number of fixed points for multi-layer neural networks equipped with piecewise linear (PWL) activation functions with arbitrary many linear pieces. The theoretical optimality of the exponential growth in the number of layers of the latter bound is shown. Specifically, we also derive a sharper upper bound on the number of stable fixed points for one-hidden-layer networks with hard tanh activation. | [
"['Hans-Peter Beise']"
]
|
null | null | 2405.09886 | null | null | http://arxiv.org/pdf/2405.09886v1 | 2024-05-16T08:07:25Z | 2024-05-16T08:07:25Z | MTLComb: multi-task learning combining regression and classification
tasks for joint feature selection | Multi-task learning (MTL) is a learning paradigm that enables the simultaneous training of multiple communicating algorithms. Although MTL has been successfully applied to either regression or classification tasks alone, incorporating mixed types of tasks into a unified MTL framework remains challenging, primarily due to variations in the magnitudes of losses associated with different tasks. This challenge, particularly evident in MTL applications with joint feature selection, often results in biased selections. To overcome this obstacle, we propose a provable loss weighting scheme that analytically determines the optimal weights for balancing regression and classification tasks. This scheme significantly mitigates the otherwise biased feature selection. Building upon this scheme, we introduce MTLComb, an MTL algorithm and software package encompassing optimization procedures, training protocols, and hyperparameter estimation procedures. MTLComb is designed for learning shared predictors among tasks of mixed types. To showcase the efficacy of MTLComb, we conduct tests on both simulated data and biomedical studies pertaining to sepsis and schizophrenia. | [
"['Han Cao' 'Sivanesan Rajan' 'Bianka Hahn' 'Ersoy Kocak'\n 'Daniel Durstewitz' 'Emanuel Schwarz' 'Verena Schneider-Lindner']"
]
|
null | null | 2405.09892 | null | null | http://arxiv.org/pdf/2405.09892v1 | 2024-05-16T08:16:19Z | 2024-05-16T08:16:19Z | Balancing Similarity and Complementarity for Federated Learning | In mobile and IoT systems, Federated Learning (FL) is increasingly important for effectively using data while maintaining user privacy. One key challenge in FL is managing statistical heterogeneity, such as non-i.i.d. data, arising from numerous clients and diverse data sources. This requires strategic cooperation, often with clients having similar characteristics. However, we are interested in a fundamental question: does achieving optimal cooperation necessarily entail cooperating with the most similar clients? Typically, significant model performance improvements are often realized not by partnering with the most similar models, but through leveraging complementary data. Our theoretical and empirical analyses suggest that optimal cooperation is achieved by enhancing complementarity in feature distribution while restricting the disparity in the correlation between features and targets. Accordingly, we introduce a novel framework, \texttt{FedSaC}, which balances similarity and complementarity in FL cooperation. Our framework aims to approximate an optimal cooperation network for each client by optimizing a weighted sum of model similarity and feature complementarity. The strength of \texttt{FedSaC} lies in its adaptability to various levels of data heterogeneity and multimodal scenarios. Our comprehensive unimodal and multimodal experiments demonstrate that \texttt{FedSaC} markedly surpasses other state-of-the-art FL methods. | [
"['Kunda Yan' 'Sen Cui' 'Abudukelimu Wuerkaixi' 'Jingfeng Zhang' 'Bo Han'\n 'Gang Niu' 'Masashi Sugiyama' 'Changshui Zhang']"
]
|
null | null | 2405.09901 | null | null | http://arxiv.org/pdf/2405.09901v1 | 2024-05-16T08:48:23Z | 2024-05-16T08:48:23Z | Whole-Song Hierarchical Generation of Symbolic Music Using Cascaded
Diffusion Models | Recent deep music generation studies have put much emphasis on long-term generation with structures. However, we are yet to see high-quality, well-structured whole-song generation. In this paper, we make the first attempt to model a full music piece under the realization of compositional hierarchy. With a focus on symbolic representations of pop songs, we define a hierarchical language, in which each level of hierarchy focuses on the semantics and context dependency at a certain music scope. The high-level languages reveal whole-song form, phrase, and cadence, whereas the low-level languages focus on notes, chords, and their local patterns. A cascaded diffusion model is trained to model the hierarchical language, where each level is conditioned on its upper levels. Experiments and analysis show that our model is capable of generating full-piece music with recognizable global verse-chorus structure and cadences, and the music quality is higher than the baselines. Additionally, we show that the proposed model is controllable in a flexible way. By sampling from the interpretable hierarchical languages or adjusting pre-trained external representations, users can control the music flow via various features such as phrase harmonic structures, rhythmic patterns, and accompaniment texture. | [
"['Ziyu Wang' 'Lejun Min' 'Gus Xia']"
]
|
null | null | 2405.09903 | null | null | http://arxiv.org/pdf/2405.09903v1 | 2024-05-16T08:49:50Z | 2024-05-16T08:49:50Z | Federated Learning for Misbehaviour Detection with Variational
Autoencoders and Gaussian Mixture Models | Federated Learning (FL) has become an attractive approach to collaboratively train Machine Learning (ML) models while data sources' privacy is still preserved. However, most existing FL approaches are based on supervised techniques, which could require resource-intensive activities and human intervention to obtain labelled datasets. Furthermore, in the scope of cyberattack detection, such techniques are not able to identify previously unknown threats. In this direction, this work proposes a novel unsupervised FL approach for the identification of potential misbehavior in vehicular environments. We leverage the computing capabilities of public cloud services for model aggregation purposes, and also as a central repository of misbehavior events, enabling cross-vehicle learning and collective defense strategies. Our solution integrates the use of Gaussian Mixture Models (GMM) and Variational Autoencoders (VAE) on the VeReMi dataset in a federated environment, where each vehicle is intended to train only with its own data. Furthermore, we use Restricted Boltzmann Machines (RBM) for pre-training purposes, and Fedplus as the aggregation function to enhance the model's convergence. Our approach provides better performance (more than 80 percent) compared to recent proposals, which are usually based on supervised techniques and artificial divisions of the VeReMi dataset. | [
"['Enrique Mármol Campos' 'Aurora González Vidal'\n 'José Luis Hernández Ramos' 'Antonio Skarmeta']"
]
|
null | null | 2405.09909 | null | null | http://arxiv.org/pdf/2405.09909v1 | 2024-05-16T08:57:34Z | 2024-05-16T08:57:34Z | A Machine Learning Approach for Simultaneous Demapping of QAM and APSK
Constellations | As telecommunication systems evolve to meet increasing demands, integrating deep neural networks (DNNs) has shown promise in enhancing performance. However, the trade-off between accuracy and flexibility remains challenging when replacing traditional receivers with DNNs. This paper introduces a novel probabilistic framework that allows a single DNN demapper to demap multiple QAM and APSK constellations simultaneously. We also demonstrate that our framework allows exploiting hierarchical relationships in families of constellations. The consequence is that we need fewer neural network outputs to encode the same function without an increase in Bit Error Rate (BER). Our simulation results confirm that our approach approaches the optimal demodulation error bound under an Additive White Gaussian Noise (AWGN) channel for multiple constellations. Thereby, we address multiple important issues in making DNNs flexible enough for practical use as receivers. | [
"['Arwin Gansekoele' 'Alexios Balatsoukas-Stimming' 'Tom Brusse'\n 'Mark Hoogendoorn' 'Sandjai Bhulai' 'Rob van der Mei']"
]
|
null | null | 2405.09911 | null | null | http://arxiv.org/pdf/2405.09911v1 | 2024-05-16T08:59:20Z | 2024-05-16T08:59:20Z | Scaling convolutional neural networks achieves expert-level seizure
detection in neonatal EEG | Background: Neonatal seizures are a neurological emergency that require urgent treatment. They are hard to diagnose clinically and can go undetected if EEG monitoring is unavailable. EEG interpretation requires specialised expertise which is not widely available. Algorithms to detect EEG seizures can address this limitation but have yet to reach widespread clinical adoption. Methods: Retrospective EEG data from 332 neonates was used to develop and validate a seizure-detection model. The model was trained and tested with a development dataset ($n=202$) that was annotated with over 12k seizure events on a per-channel basis. This dataset was used to develop a convolutional neural network (CNN) using a modern architecture and training methods. The final model was then validated on two independent multi-reviewer datasets ($n=51$ and $n=79$). Results: Increasing dataset and model size improved model performance: Matthews correlation coefficient (MCC) and Pearson's correlation ($r$) increased by up to 50% with data scaling and up to 15% with model scaling. Over 50k hours of annotated single-channel EEG was used for training a model with 21 million parameters. State-of-the-art was achieved on an open-access dataset (MCC=0.764, $r=0.824$, and AUC=0.982). The CNN attains expert-level performance on both held-out validation sets, with no significant difference in inter-rater agreement among the experts and among experts and algorithm ($\Delta \kappa < -0.095$, $p>0.05$). Conclusion: With orders of magnitude increases in data and model scale we have produced a new state-of-the-art model for neonatal seizure detection. Expert-level equivalence on completely unseen data, a first in this field, provides a strong indication that the model is ready for further clinical validation. | [
"['Robert Hogan' 'Sean R. Mathieson' 'Aurel Luca' 'Soraia Ventura'\n 'Sean Griffin' 'Geraldine B. Boylan' \"John M. O'Toole\"]"
]
|
null | null | 2405.09927 | null | null | http://arxiv.org/pdf/2405.09927v1 | 2024-05-16T09:33:28Z | 2024-05-16T09:33:28Z | Moreau Envelope for Nonconvex Bi-Level Optimization: A Single-loop and
Hessian-free Solution Strategy | This work focuses on addressing two major challenges in the context of large-scale nonconvex Bi-Level Optimization (BLO) problems, which are increasingly applied in machine learning due to their ability to model nested structures. These challenges involve ensuring computational efficiency and providing theoretical guarantees. While recent advances in scalable BLO algorithms have primarily relied on lower-level convexity simplification, our work specifically tackles large-scale BLO problems involving nonconvexity in both the upper and lower levels. We simultaneously address computational and theoretical challenges by introducing an innovative single-loop gradient-based algorithm, utilizing the Moreau envelope-based reformulation, and providing non-asymptotic convergence analysis for general nonconvex BLO problems. Notably, our algorithm relies solely on first-order gradient information, enhancing its practicality and efficiency, especially for large-scale BLO learning tasks. We validate our approach's effectiveness through experiments on various synthetic problems, two typical hyper-parameter learning tasks, and a real-world neural architecture search application, collectively demonstrating its superior performance. | [
"['Risheng Liu' 'Zhu Liu' 'Wei Yao' 'Shangzhi Zeng' 'Jin Zhang']"
]
|
null | null | 2405.09960 | null | null | http://arxiv.org/pdf/2405.09960v1 | 2024-05-16T10:07:59Z | 2024-05-16T10:07:59Z | A Unified Deep Transfer Learning Model for Accurate IoT Localization in
Diverse Environments | Internet of Things (IoT) is an ever-evolving technological paradigm that is reshaping industries and societies globally. Real-time data collection, analysis, and decision-making facilitated by localization solutions form the foundation for location-based services, enabling them to support critical functions within diverse IoT ecosystems. However, most existing works on localization focus on a single environment, resulting in the development of multiple models to support multiple environments. In the context of smart cities, this raises costs and complexity due to the dynamicity of such environments. To address these challenges, this paper presents a unified indoor-outdoor localization solution that leverages transfer learning (TL) schemes to build a single deep learning model. The model accurately predicts the localization of IoT devices in diverse environments. The performance evaluation shows that by adopting an encoder-based TL scheme, we can improve the baseline model by about 17.18% in indoor environments and 9.79% in outdoor environments. | [
"['Abdullahi Isa Ahmed' 'Yaya Etiabi' 'Ali Waqar Azim' 'El Mehdi Amhoud']"
]
|
null | null | 2405.09972 | null | null | http://arxiv.org/pdf/2405.09972v1 | 2024-05-16T10:32:39Z | 2024-05-16T10:32:39Z | Predicting Solar Heat Production to Optimize Renewable Energy Usage | Utilizing solar energy to meet space heating and domestic hot water demand is very efficient (in terms of environmental footprint as well as cost), but in order to ensure that user demand is entirely covered throughout the year, it needs to be complemented with auxiliary heating systems, typically boilers and heat pumps. Naturally, the optimal control of such a system depends on an accurate prediction of solar thermal production. Experimental testing and physics-based numerical models are used to find a collector's performance curve - the mapping from solar radiation and other external conditions to heat production - but this curve changes over time once the collector is exposed to outdoor conditions. In order to deploy advanced control strategies in small domestic installations, we present an approach that uses machine learning to automatically construct and continuously adapt a model that predicts heat production. Our design is driven by the need to (a) construct and adapt models using supervision that can be extracted from low-cost instrumentation, avoiding extreme accuracy and reliability requirements; and (b) at inference time, use inputs that are typically provided in publicly available weather forecasts. Recent developments in attention-based machine learning, as well as careful adaptation of the training setup to the specifics of the task, have allowed us to design a machine learning-based solution that covers our requirements. We present positive empirical results for the predictive accuracy of our solution, and discuss the impact of these results on the end-to-end system. | [
"['Tatiana Boura' 'Natalia Koliou' 'George Meramveliotakis'\n 'Stasinos Konstantopoulos' 'George Kosmadakis']"
]
|
null | null | 2405.09983 | null | null | http://arxiv.org/pdf/2405.09983v2 | 2024-05-30T15:34:10Z | 2024-05-16T11:01:09Z | Zero-Shot Hierarchical Classification on the Common Procurement
Vocabulary Taxonomy | Classifying public tenders is a useful task for both companies that are invited to participate and for inspecting fraudulent activities. To facilitate the task for both participants and public administrations, the European Union presented a common taxonomy (Common Procurement Vocabulary, CPV) which is mandatory for tenders of certain importance; however, the contracts in which a CPV label is mandatory are the minority compared to all the Public Administrations' activities. Classifying over a real-world taxonomy introduces some difficulties that cannot be ignored. First of all, some fine-grained classes have an insufficient (if any) number of observations in the training set, while other classes are far more frequent (even thousands of times) than the average. To overcome those difficulties, we present a zero-shot approach, based on a pre-trained language model that relies only on label description and respects the label taxonomy. To train our proposed model, we used industrial data, which comes from contrattipubblici.org, a service by SpazioDati s.r.l. that collects public contracts stipulated in Italy in the last 25 years. Results show that the proposed model achieves better performance in classifying low-frequency classes compared to three different baselines, and is also able to predict never-seen classes. | [
"['Federico Moiraghi' 'Matteo Palmonari' 'Davide Allavena'\n 'Federico Morando']"
]
|
null | null | 2405.09993 | null | null | http://arxiv.org/pdf/2405.09993v1 | 2024-05-16T11:26:20Z | 2024-05-16T11:26:20Z | Learning BPS Spectra and the Gap Conjecture | We explore statistical properties of BPS q-series for 3d N=2 strongly coupled supersymmetric theories that correspond to a particular family of 3-manifolds Y. We discover that gaps between exponents in the q-series are statistically more significant at the beginning of the q-series compared to gaps that appear in higher powers of q. Our observations are obtained by calculating saliencies of q-series features used as input data for principal component analysis, which is a standard example of an explainable machine learning technique that allows for a direct calculation and a better analysis of feature saliencies. | [
"['Sergei Gukov' 'Rak-Kyeong Seong']"
]
|
null | null | 2405.09997 | null | null | http://arxiv.org/abs/2405.09997v1 | 2024-05-16T11:30:08Z | 2024-05-16T11:30:08Z | Generative Design through Quality-Diversity Data Synthesis and Language
Models | Two fundamental challenges face generative models in engineering applications: the acquisition of high-performing, diverse datasets, and the adherence to precise constraints in generated designs. We propose a novel approach combining optimization, constraint satisfaction, and language models to tackle these challenges in architectural design. Our method uses Quality-Diversity (QD) to generate a diverse, high-performing dataset. We then fine-tune a language model with this dataset to generate high-level designs. These designs are then refined into detailed, constraint-compliant layouts using the Wave Function Collapse algorithm. Our system demonstrates reliable adherence to textual guidance, enabling the generation of layouts with targeted architectural and performance features. Crucially, our results indicate that data synthesized through the evolutionary search of QD not only improves overall model performance but is essential for the model's ability to closely adhere to textual guidance. This improvement underscores the pivotal role evolutionary computation can play in creating the datasets key to training generative models for design. Web article at https://tilegpt.github.io | [
"['Adam Gaier' 'James Stoddart' 'Lorenzo Villaggi' 'Shyam Sudhakaran']"
]
|
null | null | 2405.09999 | null | null | http://arxiv.org/pdf/2405.09999v1 | 2024-05-16T11:33:49Z | 2024-05-16T11:33:49Z | Reward Centering | We show that discounted methods for solving continuing reinforcement learning problems can perform significantly better if they center their rewards by subtracting out the rewards' empirical average. The improvement is substantial at commonly used discount factors and increases further as the discount factor approaches one. In addition, we show that if a problem's rewards are shifted by a constant, then standard methods perform much worse, whereas methods with reward centering are unaffected. Estimating the average reward is straightforward in the on-policy setting; we propose a slightly more sophisticated method for the off-policy setting. Reward centering is a general idea, so we expect almost every reinforcement-learning algorithm to benefit by the addition of reward centering. | [
"['Abhishek Naik' 'Yi Wan' 'Manan Tomar' 'Richard S. Sutton']"
]
|
null | null | 2405.10004 | null | null | http://arxiv.org/abs/2405.10004v2 | 2024-06-18T11:58:39Z | 2024-05-16T11:44:35Z | ROCOv2: Radiology Objects in COntext Version 2, an Updated Multimodal
Image Dataset | Automated medical image analysis systems often require large amounts of training data with high quality labels, which are difficult and time-consuming to generate. This paper introduces Radiology Object in COntext version 2 (ROCOv2), a multimodal dataset consisting of radiological images and associated medical concepts and captions extracted from the PMC Open Access subset. It is an updated version of the ROCO dataset published in 2018, adding 35,705 new images that have appeared in PMC since 2018. It further provides manually curated concepts for imaging modalities with additional anatomical and directional concepts for X-rays. The dataset consists of 79,789 images and has been used, with minor modifications, in the concept detection and caption prediction tasks of ImageCLEFmedical Caption 2023. The dataset is suitable for training image annotation models based on image-caption pairs, or for multi-label image classification using Unified Medical Language System (UMLS) concepts provided with each image. In addition, it can serve for pre-training of medical domain models, and evaluation of deep learning models for multi-task learning. | [
"['Johannes Rückert' 'Louise Bloch' 'Raphael Brüngel'\n 'Ahmad Idrissi-Yaghir' 'Henning Schäfer' 'Cynthia S. Schmidt'\n 'Sven Koitka' 'Obioma Pelka' 'Asma Ben Abacha' 'Alba G. Seco de Herrera'\n 'Henning Müller' 'Peter A. Horn' 'Felix Nensa' 'Christoph M. Friedrich']"
]
|
null | null | 2405.10006 | null | null | http://arxiv.org/abs/2405.10006v1 | 2024-05-16T11:46:39Z | 2024-05-16T11:46:39Z | Machine Learning-Based Path Loss Modeling with Simplified Features | Propagation modeling is a crucial tool for successful wireless deployments and spectrum planning with the demand for high modeling accuracy continuing to grow. Recognizing that detailed knowledge of the physical environment (terrain and clutter) is essential, we propose a novel approach that uses environmental information for predictions. Instead of relying on complex, detail-intensive models, we explore the use of simplified scalar features involving the total obstruction depth along the direct path from transmitter to receiver. Obstacle depth offers a streamlined, yet surprisingly accurate, method for predicting wireless signal propagation, providing a practical solution for efficient and effective wireless network planning. | [
"['Jonathan Ethier' 'Mathieu Chateauvert']"
]
|
null | null | 2405.10020 | null | null | http://arxiv.org/pdf/2405.10020v2 | 2024-07-02T07:29:04Z | 2024-05-16T12:02:02Z | Natural Language Can Help Bridge the Sim2Real Gap | The main challenge in learning image-conditioned robotic policies is acquiring a visual representation conducive to low-level control. Due to the high dimensionality of the image space, learning a good visual representation requires a considerable amount of visual data. However, when learning in the real world, data is expensive. Sim2Real is a promising paradigm for overcoming data scarcity in the real-world target domain by using a simulator to collect large amounts of cheap data closely related to the target task. However, it is difficult to transfer an image-conditioned policy from sim to real when the domains are very visually dissimilar. To bridge the sim2real visual gap, we propose using natural language descriptions of images as a unifying signal across domains that captures the underlying task-relevant semantics. Our key insight is that if two image observations from different domains are labeled with similar language, the policy should predict similar action distributions for both images. We demonstrate that training the image encoder to predict the language description or the distance between descriptions of a sim or real image serves as a useful, data-efficient pretraining step that helps learn a domain-invariant image representation. We can then use this image encoder as the backbone of an IL policy trained simultaneously on a large amount of simulated and a handful of real demonstrations. Our approach outperforms widely used prior sim2real methods and strong vision-language pretraining baselines like CLIP and R3M by 25 to 40%. See additional videos and materials at https://robin-lab.cs.utexas.edu/lang4sim2real/. | [
"['Albert Yu' 'Adeline Foote' 'Raymond Mooney' 'Roberto Martín-Martín']"
]
|
null | null | 2405.10024 | null | null | http://arxiv.org/pdf/2405.10024v1 | 2024-05-16T12:04:55Z | 2024-05-16T12:04:55Z | $Δ\text{-}{\rm OPE}$: Off-Policy Estimation with Pairs of Policies | The off-policy paradigm casts recommendation as a counterfactual decision-making task, allowing practitioners to unbiasedly estimate online metrics using offline data. This leads to effective evaluation metrics, as well as learning procedures that directly optimise online success. Nevertheless, the high variance that comes with unbiasedness is typically the crux that complicates practical applications. An important insight is that the difference between policy values can often be estimated with significantly reduced variance, if said policies have positive covariance. This allows us to formulate a pairwise off-policy estimation task: $\Delta\text{-}{\rm OPE}$. $\Delta\text{-}{\rm OPE}$ subsumes the common use-case of estimating improvements of a learnt policy over a production policy, using data collected by a stochastic logging policy. We introduce $\Delta\text{-}{\rm OPE}$ methods based on the widely used Inverse Propensity Scoring estimator and its extensions. Moreover, we characterise a variance-optimal additive control variate that further enhances efficiency. Simulated, offline, and online experiments show that our methods significantly improve performance for both evaluation and learning tasks. | [
"['Olivier Jeunen' 'Aleksei Ustimenko']"
]
|
null | null | 2405.10025 | null | null | http://arxiv.org/pdf/2405.10025v1 | 2024-05-16T12:05:45Z | 2024-05-16T12:05:45Z | Listen Again and Choose the Right Answer: A New Paradigm for Automatic
Speech Recognition with Large Language Models | Recent advances in large language models (LLMs) have promoted generative error correction (GER) for automatic speech recognition (ASR), which aims to predict the ground-truth transcription from the decoded N-best hypotheses. Thanks to the strong language generation ability of LLMs and rich information in the N-best list, GER shows great effectiveness in enhancing ASR results. However, it still suffers from two limitations: 1) LLMs are unaware of the source speech during GER, which may lead to results that are grammatically correct but violate the source speech content, 2) N-best hypotheses usually only vary in a few tokens, making it redundant to send all of them for GER, which could confuse LLM about which tokens to focus on and thus lead to increased miscorrection. In this paper, we propose ClozeGER, a new paradigm for ASR generative error correction. First, we introduce a multimodal LLM (i.e., SpeechGPT) to receive source speech as extra input to improve the fidelity of correction output. Then, we reformat GER as a cloze test with logits calibration to remove the input information redundancy and simplify GER with clear instructions. Experiments show that ClozeGER achieves a new breakthrough over vanilla GER on 9 popular ASR datasets. | [
"['Yuchen Hu' 'Chen Chen' 'Chengwei Qin' 'Qiushi Zhu' 'Eng Siong Chng'\n 'Ruizhe Li']"
]
|
null | null | 2405.10027 | null | null | http://arxiv.org/pdf/2405.10027v2 | 2024-06-19T09:20:04Z | 2024-05-16T12:11:09Z | The Real Price of Bandit Information in Multiclass Classification | We revisit the classical problem of multiclass classification with bandit feedback (Kakade, Shalev-Shwartz and Tewari, 2008), where each input classifies to one of $K$ possible labels and feedback is restricted to whether the predicted label is correct or not. Our primary inquiry is with regard to the dependency on the number of labels $K$, and whether $T$-step regret bounds in this setting can be improved beyond the $\smash{\sqrt{KT}}$ dependence exhibited by existing algorithms. Our main contribution is in showing that the minimax regret of bandit multiclass is in fact more nuanced, and is of the form $\smash{\widetilde{\Theta}\left(\min\left\{|H| + \sqrt{T}, \sqrt{KT \log |H|}\right\}\right)}$, where $H$ is the underlying (finite) hypothesis class. In particular, we present a new bandit classification algorithm that guarantees regret $\smash{\widetilde{O}(|H|+\sqrt{T})}$, improving over classical algorithms for moderately-sized hypothesis classes, and give a matching lower bound establishing tightness of the upper bounds (up to log-factors) in all parameter regimes. | [
"['Liad Erez' 'Alon Cohen' 'Tomer Koren' 'Yishay Mansour' 'Shay Moran']"
]
|
null | null | 2405.10040 | null | null | http://arxiv.org/pdf/2405.10040v2 | 2024-07-08T11:20:42Z | 2024-05-16T12:22:41Z | SynthesizRR: Generating Diverse Datasets with Retrieval Augmentation | It is often desirable to distill the capabilities of large language models (LLMs) into smaller student models due to compute and memory constraints. One way to do this for classification tasks is via dataset synthesis, which can be accomplished by generating examples of each label from the LLM. Prior approaches to synthesis use few-shot prompting, which relies on the LLM's parametric knowledge to generate usable examples. However, this leads to issues of repetition, bias towards popular entities, and stylistic differences from human text. In this work, we propose Synthesize by Retrieval and Refinement (SynthesizRR), which uses retrieval augmentation to introduce variety into the dataset synthesis process: as retrieved passages vary, the LLM is seeded with different content to generate its examples. We empirically study the synthesis of six datasets, covering topic classification, sentiment analysis, tone detection, and humor, requiring complex synthesis strategies. We find that SynthesizRR greatly improves lexical and semantic diversity, similarity to human-written text, and distillation performance, when compared to 32-shot prompting and four prior approaches. We release our extensive codebase at https://github.com/amazon-science/synthesizrr | [
"['Abhishek Divekar' 'Greg Durrett']"
]
|
null | null | 2405.10054 | null | null | http://arxiv.org/pdf/2405.10054v3 | 2024-05-21T04:49:55Z | 2024-05-16T12:42:36Z | A finite-sample generalization bound for stable LPV systems | One of the main theoretical challenges in learning dynamical systems from data is providing upper bounds on the generalization error, that is, the difference between the expected prediction error and the empirical prediction error measured on some finite sample. In machine learning, a popular class of such bounds are the so-called Probably Approximately Correct (PAC) bounds. In this paper, we derive a PAC bound for stable continuous-time linear parameter-varying (LPV) systems. Our bound depends on the H2 norm of the chosen class of the LPV systems, but does not depend on the time interval for which the signals are considered. | [
"['Daniel Racz' 'Martin Gonzalez' 'Mihaly Petreczky' 'Andras Benczur'\n 'Balint Daroczy']"
]
|
null | null | 2405.10093 | null | null | http://arxiv.org/pdf/2405.10093v2 | 2024-05-22T15:06:45Z | 2024-05-16T13:44:56Z | LaT-PFN: A Joint Embedding Predictive Architecture for In-context
Time-series Forecasting | We introduce LatentTimePFN (LaT-PFN), a foundational Time Series model with a strong embedding space that enables zero-shot forecasting. To achieve this, we perform in-context learning in latent space utilizing a novel integration of the Prior-data Fitted Networks (PFN) and Joint Embedding Predictive Architecture (JEPA) frameworks. We leverage the JEPA framework to create a prediction-optimized latent representation of the underlying stochastic process that generates time series and combines it with contextual learning, using a PFN. Furthermore, we improve on preceding works by utilizing related time series as a context and introducing a normalized abstract time axis. This reduces training time and increases the versatility of the model by allowing any time granularity and forecast horizon. We show that this results in superior zero-shot predictions compared to established baselines. We also demonstrate our latent space produces informative embeddings of both individual time steps and fixed-length summaries of entire series. Finally, we observe the emergence of multi-step patch embeddings without explicit training, suggesting the model actively learns discrete tokens that encode local structures in the data, analogous to vision transformers. | [
"['Stijn Verdenius' 'Andrea Zerio' 'Roy L. M. Wang']"
]
|
null | null | 2405.10096 | null | null | http://arxiv.org/pdf/2405.10096v1 | 2024-05-16T13:50:46Z | 2024-05-16T13:50:46Z | The Effect of Quantization in Federated Learning: A Rényi Differential
Privacy Perspective | Federated Learning (FL) is an emerging paradigm that holds great promise for privacy-preserving machine learning using distributed data. To enhance privacy, FL can be combined with Differential Privacy (DP), which involves adding Gaussian noise to the model weights. However, FL faces a significant challenge in terms of large communication overhead when transmitting these model weights. To address this issue, quantization is commonly employed. Nevertheless, the presence of quantized Gaussian noise introduces complexities in understanding privacy protection. This research paper investigates the impact of quantization on privacy in FL systems. We examine the privacy guarantees of quantized Gaussian mechanisms using Rényi Differential Privacy (RDP). By deriving the privacy budget of quantized Gaussian mechanisms, we demonstrate that lower quantization bit levels provide improved privacy protection. To validate our theoretical findings, we employ Membership Inference Attacks (MIA), which gauge the accuracy of privacy leakage. The numerical results align with our theoretical analysis, confirming that quantization can indeed enhance privacy protection. This study not only enhances our understanding of the correlation between privacy and communication in FL but also underscores the advantages of quantization in preserving privacy. | [
"['Tianqu Kang' 'Lumin Liu' 'Hengtao He' 'Jun Zhang' 'S. H. Song'\n 'Khaled B. Letaief']"
]
|
null | null | 2405.10102 | null | null | http://arxiv.org/pdf/2405.10102v1 | 2024-05-16T13:55:53Z | 2024-05-16T13:55:53Z | A novel Reservoir Architecture for Periodic Time Series Prediction | This paper introduces a novel approach to predicting periodic time series using reservoir computing. The model is tailored to deliver precise forecasts of rhythms, a crucial aspect for tasks such as generating musical rhythm. Leveraging reservoir computing, our proposed method is ultimately oriented towards predicting human perception of rhythm. Our network accurately predicts rhythmic signals within the human frequency perception range. The model architecture incorporates primary and intermediate neurons tasked with capturing and transmitting rhythmic information. Two parameter matrices, denoted as $c$ and $k$, regulate the reservoir's overall dynamics. We propose a loss function to adapt $c$ post-training and introduce a dynamic selection (DS) mechanism that adjusts $k$ to focus on areas with outstanding contributions. Experimental results on a diverse test set showcase accurate predictions, further improved through real-time tuning of the reservoir via $c$ and $k$. Comparative assessments highlight its superior performance compared to conventional models. | [
"['Zhongju Yuan' 'Geraint Wiggins' 'Dick Botteldooren']"
]
|
null | null | 2405.10123 | null | null | http://arxiv.org/pdf/2405.10123v2 | 2024-05-28T18:27:41Z | 2024-05-16T14:22:49Z | Asynchronous Federated Stochastic Optimization for Heterogeneous
Objectives Under Arbitrary Delays | Federated learning (FL) was recently proposed to securely train models with data held over multiple locations ("clients") under the coordination of a central server. Two major challenges hindering the performance of FL algorithms are long training times caused by straggling clients, and a decline in model accuracy under non-iid local data distributions ("client drift"). In this work, we propose and analyze Asynchronous Exact Averaging (AREA), a new stochastic (sub)gradient algorithm that utilizes asynchronous communication to speed up convergence and enhance scalability, and employs client memory to correct the client drift caused by variations in client update frequencies. Moreover, AREA is, to the best of our knowledge, the first method that is guaranteed to converge under arbitrarily long delays, without the use of delay-adaptive stepsizes, and (i) for strongly convex, smooth functions, asymptotically converges to an error neighborhood whose size depends only on the variance of the stochastic gradients used with respect to the number of iterations, and (ii) for convex, non-smooth functions, matches the convergence rate of the centralized stochastic subgradient method up to a constant factor, which depends on the average of the individual client update frequencies instead of their minimum (or maximum). Our numerical results validate our theoretical analysis and indicate AREA outperforms state-of-the-art methods when local data are highly non-iid, especially as the number of clients grows. | [
"['Charikleia Iakovidou' 'Kibaek Kim']"
]
|
null | null | 2405.10126 | null | null | http://arxiv.org/abs/2405.10126v1 | 2024-05-16T14:24:44Z | 2024-05-16T14:24:44Z | Estimating a Function and Its Derivatives Under a Smoothness Condition | We consider the problem of estimating an unknown function f* and its partial derivatives from a noisy data set of n observations, where we make no assumptions about f* except that it is smooth in the sense that it has square integrable partial derivatives of order m. A natural candidate for the estimator of f* in such a case is the best fit to the data set that satisfies a certain smoothness condition. This estimator can be seen as a least squares estimator subject to an upper bound on some measure of smoothness. Another useful estimator is the one that minimizes the degree of smoothness subject to an upper bound on the average of squared errors. We prove that these two estimators are computable as solutions to quadratic programs, establish the consistency of these estimators and their partial derivatives, and study the convergence rate as n increases to infinity. The effectiveness of the estimators is illustrated numerically in a setting where the value of a stock option and its second derivative are estimated as functions of the underlying stock price. | [
"['Eunji Lim']"
]
|
null | null | 2405.10143 | null | null | http://arxiv.org/pdf/2405.10143v1 | 2024-05-16T14:35:50Z | 2024-05-16T14:35:50Z | Relational DNN Verification With Cross Executional Bound Refinement | We focus on verifying relational properties defined over deep neural networks (DNNs) such as robustness against universal adversarial perturbations (UAP), certified worst-case hamming distance for binary string classifications, etc. Precise verification of these properties requires reasoning about multiple executions of the same DNN. However, most of the existing works in DNN verification only handle properties defined over single executions and as a result, are imprecise for relational properties. Though few recent works for relational DNN verification, capture linear dependencies between the inputs of multiple executions, they do not leverage dependencies between the outputs of hidden layers producing imprecise results. We develop a scalable relational verifier RACoon that utilizes cross-execution dependencies at all layers of the DNN gaining substantial precision over SOTA baselines on a wide range of datasets, networks, and relational properties. | [
"['Debangshu Banerjee' 'Gagandeep Singh']"
]
|
null | null | 2405.10190 | null | null | http://arxiv.org/pdf/2405.10190v2 | 2024-05-23T08:19:28Z | 2024-05-15T17:32:31Z | Comparative Analysis of Predicting Subsequent Steps in Hénon Map | This paper explores the prediction of subsequent steps in the Hénon map using various machine learning techniques. The Hénon map, well known for its chaotic behaviour, finds applications in various fields including cryptography, image encryption, and pattern recognition. Machine learning methods, particularly deep learning, are increasingly essential for understanding and predicting chaotic phenomena. This study evaluates the performance of different machine learning models including Random Forest, Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM) networks, Support Vector Machines (SVM), and Feed Forward Neural Networks (FNN) in predicting the evolution of the Hénon map. Results indicate that LSTM networks demonstrate superior predictive accuracy, particularly in extreme event prediction. Furthermore, a comparison between LSTM and FNN models reveals the LSTM's advantage, especially for longer prediction horizons and larger datasets. This research underscores the significance of machine learning in elucidating chaotic dynamics and highlights the importance of model selection and dataset size in forecasting subsequent steps in chaotic systems. | [
"['Vismaya V S' 'Alok Hareendran' 'Bharath V Nair' 'Sishu Shankar Muni'\n 'Martin Lellep']"
]
|
null | null | 2405.10210 | null | null | http://arxiv.org/pdf/2405.10210v1 | 2024-05-16T16:00:35Z | 2024-05-16T16:00:35Z | GPT Store Mining and Analysis | As a pivotal extension of the renowned ChatGPT, the GPT Store serves as a dynamic marketplace for various Generative Pre-trained Transformer (GPT) models, shaping the frontier of conversational AI. This paper presents an in-depth measurement study of the GPT Store, with a focus on the categorization of GPTs by topic, factors influencing GPT popularity, and the potential security risks. Our investigation starts with assessing the categorization of GPTs in the GPT Store, analyzing how they are organized by topics, and evaluating the effectiveness of the classification system. We then examine the factors that affect the popularity of specific GPTs, looking into user preferences, algorithmic influences, and market trends. Finally, the study delves into the security risks of the GPT Store, identifying potential threats and evaluating the robustness of existing security measures. This study offers a detailed overview of the GPT Store's current state, shedding light on its operational dynamics and user interaction patterns. Our findings aim to enhance understanding of the GPT ecosystem, providing valuable insights for future research, development, and policy-making in generative AI. | [
"['Dongxun Su' 'Yanjie Zhao' 'Xinyi Hou' 'Shenao Wang' 'Haoyu Wang']"
]
|
null | null | 2405.10215 | null | null | http://arxiv.org/pdf/2405.10215v1 | 2024-05-16T16:05:21Z | 2024-05-16T16:05:21Z | SMLP: Symbolic Machine Learning Prover (User Manual) | SMLP: Symbolic Machine Learning Prover is an open source tool for exploration and optimization of systems represented by machine learning models. SMLP uses symbolic reasoning for ML model exploration and optimization under verification and stability constraints, based on SMT, constraint and NN solvers. In addition, its exploration methods are guided by probabilistic and statistical methods. SMLP is a general purpose tool that requires only data suitable for ML modelling in the csv format (usually samples of the system's input/output). SMLP has been applied at Intel for analyzing and optimizing hardware designs at the analog level. Currently SMLP supports NNs, polynomial and tree models, and uses SMT solvers for reasoning and optimization at the backend; integration of specialized NN solvers is in progress. | [
"['Franz Brauße' 'Zurab Khasidashvili' 'Konstantin Korovin']"
]
|
null | null | 2405.10216 | null | null | http://arxiv.org/pdf/2405.10216v1 | 2024-05-16T16:05:33Z | 2024-05-16T16:05:33Z | Low-Rank Adaptation of Time Series Foundational Models for Out-of-Domain
Modality Forecasting | Low-Rank Adaptation (LoRA) is a widely used technique for fine-tuning large pre-trained or foundational models across different modalities and tasks. However, its application to time series data, particularly within foundational models, remains underexplored. This paper examines the impact of LoRA on contemporary time series foundational models: Lag-Llama, MOIRAI, and Chronos. We demonstrate LoRA's fine-tuning potential for forecasting the vital signs of sepsis patients in intensive care units (ICUs), emphasizing the models' adaptability to previously unseen, out-of-domain modalities. Integrating LoRA aims to enhance forecasting performance while reducing inefficiencies associated with fine-tuning large models on limited domain-specific data. Our experiments show that LoRA fine-tuning of time series foundational models significantly improves forecasting, achieving results comparable to state-of-the-art models trained from scratch on similar modalities. We conduct comprehensive ablation studies to demonstrate the trade-offs between the number of tunable parameters and forecasting performance and assess the impact of varying LoRA matrix ranks on model performance. | [
"['Divij Gupta' 'Anubhav Bhatti' 'Suraj Parmar' 'Chen Dan' 'Yuwei Liu'\n 'Bingjie Shen' 'San Lee']"
]
|
null | null | 2405.10218 | null | null | http://arxiv.org/pdf/2405.10218v1 | 2024-05-16T16:08:49Z | 2024-05-16T16:08:49Z | ENADPool: The Edge-Node Attention-based Differentiable Pooling for Graph
Neural Networks | Graph Neural Networks (GNNs) are powerful tools for graph classification. One important operation for GNNs is the downsampling or pooling that can learn effective embeddings from the node representations. In this paper, we propose a new hierarchical pooling operation, namely the Edge-Node Attention-based Differentiable Pooling (ENADPool), for GNNs to learn effective graph representations. Unlike the classical hierarchical pooling operation that is based on the unclear node assignment and simply computes the averaged feature over the nodes of each cluster, the proposed ENADPool not only employs a hard clustering strategy to assign each node into an unique cluster, but also compress the node features as well as their edge connectivity strengths into the resulting hierarchical structure based on the attention mechanism after each pooling step. As a result, the proposed ENADPool simultaneously identifies the importance of different nodes within each separated cluster and edges between corresponding clusters, that significantly addresses the shortcomings of the uniform edge-node based structure information aggregation arising in the classical hierarchical pooling operation. Moreover, to mitigate the over-smoothing problem arising in existing GNNs, we propose a Multi-distance GNN (MD-GNN) model associated with the proposed ENADPool operation, allowing the nodes to actively and directly receive the feature information from neighbors at different random walk steps. Experiments demonstrate the effectiveness of the MD-GNN associated with the proposed ENADPool. | [
"['Zhehan Zhao' 'Lu Bai' 'Lixin Cui' 'Ming Li' 'Yue Wang' 'Lixiang Xu'\n 'Edwin R. Hancock']"
]
|
null | null | 2405.10221 | null | null | http://arxiv.org/pdf/2405.10221v2 | 2024-07-15T14:13:13Z | 2024-05-16T16:11:00Z | Scalarisation-based risk concepts for robust multi-objective
optimisation | Robust optimisation is a well-established framework for optimising functions in the presence of uncertainty. The inherent goal of this problem is to identify a collection of inputs whose outputs are both desirable for the decision maker, whilst also being robust to the underlying uncertainties in the problem. In this work, we study the multi-objective case of this problem. We identify that the majority of all robust multi-objective algorithms rely on two key operations: robustification and scalarisation. Robustification refers to the strategy that is used to account for the uncertainty in the problem. Scalarisation refers to the procedure that is used to encode the relative importance of each objective to a scalar-valued reward. As these operations are not necessarily commutative, the order that they are performed in has an impact on the resulting solutions that are identified and the final decisions that are made. The purpose of this work is to give a thorough exposition on the effects of these different orderings and in particular highlight when one should opt for one ordering over the other. As part of our analysis, we showcase how many existing risk concepts can be integrated into the specification and solution of a robust multi-objective optimisation problem. Besides this, we also demonstrate how one can principally define the notion of a robust Pareto front and a robust performance metric based on our "robustify and scalarise" methodology. To illustrate the efficacy of these new ideas, we present two insightful case studies which are based on real-world data sets. | [
"['Ben Tu' 'Nikolas Kantas' 'Robert M. Lee' 'Behrang Shafei']"
]
|
null | null | 2405.10229 | null | null | http://arxiv.org/pdf/2405.10229v1 | 2024-05-16T16:28:11Z | 2024-05-16T16:28:11Z | Random ReLU Neural Networks as Non-Gaussian Processes | We consider a large class of shallow neural networks with randomly initialized parameters and rectified linear unit activation functions. We prove that these random neural networks are well-defined non-Gaussian processes. As a by-product, we demonstrate that these networks are solutions to stochastic differential equations driven by impulsive white noise (combinations of random Dirac measures). These processes are parameterized by the law of the weights and biases as well as the density of activation thresholds in each bounded region of the input domain. We prove that these processes are isotropic and wide-sense self-similar with Hurst exponent $3/2$. We also derive a remarkably simple closed-form expression for their autocovariance function. Our results are fundamentally different from prior work in that we consider a non-asymptotic viewpoint: The number of neurons in each bounded region of the input domain (i.e., the width) is itself a random variable with a Poisson law with mean proportional to the density parameter. Finally, we show that, under suitable hypotheses, as the expected width tends to infinity, these processes can converge in law not only to Gaussian processes, but also to non-Gaussian processes depending on the law of the weights. Our asymptotic results provide a new take on several classical results (wide networks converge to Gaussian processes) as well as some new ones (wide networks can converge to non-Gaussian processes). | [
"['Rahul Parhi' 'Pakshal Bohra' 'Ayoub El Biari' 'Mehrsa Pourya'\n 'Michael Unser']"
]
|