categories: string
doi: string
id: string
year: float64
venue: string
link: string
updated: string
published: string
title: string
abstract: string
authors: list
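The records below follow this schema. As a convenience, here is a minimal sketch of loading and tidying such a dump with pandas; the file name "papers.parquet" is a hypothetical placeholder, and the authors parser assumes the list column may arrive as numpy-printed strings such as "['Ling Yue' 'Tianfan Fu']", which is how this export serialized it.

```python
import re

import pandas as pd

# Load the dump; "papers.parquet" is a hypothetical placeholder name.
df = pd.read_parquet("papers.parquet")

def parse_authors(cell):
    # The authors column may arrive as a one-element list holding a
    # numpy-printed string, e.g. ["['Ling Yue' 'Tianfan Fu']"];
    # pull the quoted names out into a plain Python list.
    raw = cell[0] if isinstance(cell, (list, tuple)) else str(cell)
    return re.findall(r"'([^']+)'", raw)

df["authors"] = df["authors"].apply(parse_authors)
print(df[["id", "title", "published"]].head())
```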
null
null
2404.14777
null
null
http://arxiv.org/pdf/2404.14777v1
2024-04-23T06:30:53Z
2024-04-23T06:30:53Z
CT-Agent: Clinical Trial Multi-Agent with Large Language Model-based Reasoning
Large Language Models (LLMs) and multi-agent systems have shown impressive capabilities in natural language tasks but face challenges in clinical trial applications, primarily due to limited access to external knowledge. Recognizing the potential of advanced clinical trial tools that aggregate and predict based on the latest medical data, we propose an integrated solution to enhance their accessibility and utility. We introduce the Clinical Agent System (CT-Agent), a clinical multi-agent system designed for clinical trial tasks, leveraging GPT-4, multi-agent architectures, and the LEAST-TO-MOST and ReAct reasoning techniques. This integration not only boosts LLM performance in clinical contexts but also introduces novel functionalities. Our system autonomously manages the entire clinical trial process, demonstrating significant efficiency improvements in our evaluations, which include both computational benchmarks and expert feedback.
[ "['Ling Yue' 'Tianfan Fu']" ]
null
null
2404.14786
null
null
http://arxiv.org/pdf/2404.14786v2
2024-05-26T13:08:00Z
2024-04-23T06:52:40Z
RealTCD: Temporal Causal Discovery from Interventional Data with Large Language Model
In the field of Artificial Intelligence for Information Technology Operations, causal discovery is pivotal for constructing operation and maintenance graphs, facilitating downstream industrial tasks such as root cause analysis. Temporal causal discovery, as an emerging method, aims to identify temporal causal relationships between variables directly from observations by utilizing interventional data. However, existing methods mainly focus on synthetic datasets with heavy reliance on intervention targets and ignore the textual information hidden in real-world systems, failing to conduct causal discovery in real industrial scenarios. To tackle this problem, in this paper we propose to investigate temporal causal discovery in industrial scenarios, which faces two critical challenges: 1) how to discover causal relationships without interventional targets, which are costly to obtain in practice, and 2) how to discover causal relations by leveraging the textual information in systems, which can be complex yet abundant in industrial contexts. To address these challenges, we propose the RealTCD framework, which is able to leverage domain knowledge to discover temporal causal relationships without interventional targets. Specifically, we first develop a score-based temporal causal discovery method capable of discovering causal relations for root cause analysis without relying on interventional targets, through strategic masking and regularization. Furthermore, by employing Large Language Models (LLMs) to handle texts and integrate domain knowledge, we introduce LLM-guided meta-initialization to extract meta-knowledge from the textual information hidden in systems to boost the quality of discovery. We conduct extensive experiments on simulation and real-world datasets to show the superiority of our proposed RealTCD framework over existing baselines in discovering temporal causal structures.
[ "['Peiwen Li' 'Xin Wang' 'Zeyang Zhang' 'Yuan Meng' 'Fang Shen' 'Yue Li'\n 'Jialong Wang' 'Yang Li' 'Wenweu Zhu']" ]
null
null
2404.14795
null
null
http://arxiv.org/pdf/2404.14795v3
2024-05-11T10:40:58Z
2024-04-23T07:19:20Z
Talk Too Much: Poisoning Large Language Models under Token Limit
Mainstream poisoning attacks on large language models (LLMs) typically set a fixed trigger in the input instance and specific responses for triggered queries. However, the fixed trigger setting (e.g., unusual words) may be easily detected by human inspection, limiting the effectiveness and practicality in real-world scenarios. To enhance the stealthiness of the trigger, we present a poisoning attack against LLMs that is triggered by a generation/output condition: a token limit, which is a strategy commonly adopted by users to reduce costs. The poisoned model performs normally for unrestricted output, but becomes harmful when output is token-limited. To achieve this objective, we introduce BrieFool, an efficient attack framework. It leverages the characteristics of generation limitation through efficient instruction sampling and poisoning data generation, thereby influencing the behavior of LLMs under the target conditions. Our experiments demonstrate that BrieFool is effective across safety domains and knowledge domains. For instance, with only 20 generated poisoning examples against GPT-3.5-turbo, BrieFool achieves a 100% Attack Success Rate (ASR) and a 9.28/10 average Harmfulness Score (HS) under token limitation conditions while maintaining benign performance otherwise.
[ "['Jiaming He' 'Wenbo Jiang' 'Guanyu Hou' 'Wenshu Fan' 'Rui Zhang'\n 'Hongwei Li']" ]
null
null
2404.14811
null
null
http://arxiv.org/pdf/2404.14811v1
2024-04-23T07:48:17Z
2024-04-23T07:48:17Z
FLARE: A New Federated Learning Framework with Adjustable Learning Rates over Resource-Constrained Wireless Networks
Wireless federated learning (WFL) suffers from heterogeneity prevailing in the data distributions, computing powers, and channel conditions of participating devices. This paper presents a new Federated Learning with Adjusted leaRning ratE (FLARE) framework to mitigate the impact of this heterogeneity. The key idea is to allow the participating devices to adjust their individual learning rates and local training iterations, adapting to their instantaneous computing powers. The convergence upper bound of FLARE is established rigorously under a general setting with non-convex models in the presence of non-i.i.d. datasets and imbalanced computing powers. By minimizing the upper bound, we further optimize the scheduling of FLARE to exploit the channel heterogeneity. A nested problem structure is revealed to facilitate iteratively allocating the bandwidth with binary search and selecting devices with a new greedy method. A linear problem structure is also identified, and a low-complexity linear programming scheduling policy is designed for when training models have large Lipschitz constants. Experiments demonstrate that FLARE consistently outperforms the baselines in test accuracy, and converges much faster with the proposed scheduling policy.
[ "['Bingnan Xiao' 'Jingjing Zhang' 'Wei Ni' 'Xin Wang']" ]
null
null
2404.14815
null
null
http://arxiv.org/pdf/2404.14815v2
2024-05-10T10:20:57Z
2024-04-23T08:01:30Z
Time-aware Heterogeneous Graph Transformer with Adaptive Attention Merging for Health Event Prediction
The widespread application of Electronic Health Records (EHR) data in the medical field has led to early successes in disease risk prediction using deep learning methods. These methods typically require extensive data for training due to their large parameter sets. However, existing works do not exploit the full potential of EHR data. A significant challenge arises from the infrequent occurrence of many medical codes within EHR data, limiting their clinical applicability. Current research often falls short in three critical areas: 1) incorporating disease domain knowledge; 2) heterogeneously learning disease representations with rich meanings; 3) capturing the temporal dynamics of disease progression. To overcome these limitations, we introduce a novel heterogeneous graph learning model designed to assimilate disease domain knowledge and elucidate the intricate relationships between drugs and diseases. This model innovatively incorporates temporal data into visit-level embeddings and leverages a time-aware transformer alongside an adaptive attention mechanism to produce patient representations. When evaluated on two healthcare datasets, our approach demonstrated notable enhancements in both prediction accuracy and interpretability over existing methodologies, signifying a substantial advancement towards personalized and proactive healthcare management.
[ "['Shibo Li' 'Hengliang Cheng' 'Weihua Li']" ]
null
null
2404.14829
null
null
http://arxiv.org/pdf/2404.14829v3
2024-04-28T12:08:26Z
2024-04-23T08:31:55Z
Revisiting Neural Networks for Continual Learning: An Architectural Perspective
Efforts to overcome catastrophic forgetting have primarily centered around developing more effective Continual Learning (CL) methods. In contrast, less attention has been devoted to analyzing the role of network architecture design (e.g., network depth, width, and components) in contributing to CL. This paper seeks to bridge the gap between network architecture design and CL, and to present a holistic study of the impact of network architectures on CL. This work considers architecture design at the network scaling level, i.e., width and depth, and also at the level of network components, i.e., skip connections, global pooling layers, and down-sampling. In both cases, we first derive insights by systematically exploring how architectural designs affect CL. Then, grounded in these insights, we craft a specialized search space for CL and further propose a simple yet effective ArchCraft method to steer toward CL-friendly architectures; concretely, the method recrafts AlexNet/ResNet into AlexAC/ResAC. Experimental validation across various CL settings and scenarios demonstrates that the improved architectures are parameter-efficient, achieving state-of-the-art CL performance while being 86%, 61%, and 97% more compact in terms of parameters than the naive CL architecture in Task IL and Class IL. Code is available at https://github.com/byyx666/ArchCraft.
[ "['Aojun Lu' 'Tao Feng' 'Hangjie Yuan' 'Xiaotian Song' 'Yanan Sun']" ]
null
null
2404.14836
null
null
http://arxiv.org/pdf/2404.14836v2
2024-04-24T08:53:32Z
2024-04-23T08:42:35Z
Probabilistic forecasting of power system imbalance using neural network-based ensembles
Keeping the balance between electricity generation and consumption is becoming increasingly challenging and costly, mainly due to the rising share of renewables, electric vehicles and heat pumps and electrification of industrial processes. Accurate imbalance forecasts, along with reliable uncertainty estimations, enable transmission system operators (TSOs) to dispatch appropriate reserve volumes, reducing balancing costs. Further, market parties can use these probabilistic forecasts to design strategies that exploit asset flexibility to help balance the grid, generating revenue with known risks. Despite its importance, literature regarding system imbalance (SI) forecasting is limited. Further, existing methods do not focus on situations with high imbalance magnitude, which are crucial to forecast accurately for both TSOs and market parties. Hence, we propose an ensemble of C-VSNs, which are our adaptation of variable selection networks (VSNs). Each minute, our model predicts the imbalance of the current and upcoming two quarter-hours, along with uncertainty estimations on these forecasts. We evaluate our approach by forecasting the imbalance of Belgium, where high imbalance magnitude is defined as $|\mathrm{SI}| > 500$ MW (occurs 1.3% of the time in Belgium). For high imbalance magnitude situations, our model outperforms the state-of-the-art by 23.4% (in terms of continuous ranked probability score (CRPS), which evaluates probabilistic forecasts), while also attaining a 6.5% improvement in overall CRPS. Similar improvements are achieved in terms of root-mean-squared error. Additionally, we developed a fine-tuning methodology to effectively include new inputs with limited history in our model. This work was performed in collaboration with Elia (the Belgian TSO) to further improve their imbalance forecasts, demonstrating the relevance of our work.
[ "['Jonas Van Gompel' 'Bert Claessens' 'Chris Develder']" ]
null
null
2404.14850
null
null
http://arxiv.org/pdf/2404.14850v1
2024-04-23T09:05:09Z
2024-04-23T09:05:09Z
Simple, Efficient and Scalable Structure-aware Adapter Boosts Protein Language Models
Fine-tuning pre-trained protein language models (PLMs) has emerged as a prominent strategy for enhancing downstream prediction tasks, often outperforming traditional supervised learning approaches. Parameter-Efficient Fine-Tuning, a powerful technique widely applied in natural language processing, could potentially enhance the performance of PLMs. However, the direct transfer to life science tasks is non-trivial due to the different training strategies and data forms. To address this gap, we introduce SES-Adapter, a simple, efficient, and scalable adapter method for enhancing the representation learning of PLMs. SES-Adapter incorporates PLM embeddings with structural sequence embeddings to create structure-aware representations. We show that the proposed method is compatible with different PLM architectures and across diverse tasks. Extensive evaluations are conducted on 2 types of folding structures with notable quality differences, 9 state-of-the-art baselines, and 9 benchmark datasets across distinct downstream tasks. Results show that compared to vanilla PLMs, SES-Adapter improves downstream task performance by a maximum of 11% and an average of 3%, while accelerating training speed by a maximum of 1034% and an average of 362%; the convergence rate is also improved approximately 2-fold. Moreover, positive optimization is observed even with low-quality predicted structures. The source code for SES-Adapter is available at https://github.com/tyang816/SES-Adapter.
[ "['Yang Tan' 'Mingchen Li' 'Bingxin Zhou' 'Bozitao Zhong' 'Lirong Zheng'\n 'Pan Tan' 'Ziyi Zhou' 'Huiqun Yu' 'Guisheng Fan' 'Liang Hong']" ]
null
null
2404.14855
null
null
http://arxiv.org/pdf/2404.14855v1
2024-04-23T09:20:55Z
2024-04-23T09:20:55Z
The Geometry of the Set of Equivalent Linear Neural Networks
We characterize the geometry and topology of the set of all weight vectors for which a linear neural network computes the same linear transformation $W$. This set of weight vectors is called the fiber of $W$ (under the matrix multiplication map), and it is embedded in the Euclidean weight space of all possible weight vectors. The fiber is an algebraic variety that is not necessarily a manifold. We describe a natural way to stratify the fiber--that is, to partition the algebraic variety into a finite set of manifolds of varying dimensions called strata. We call this set of strata the rank stratification. We derive the dimensions of these strata and the relationships by which they adjoin each other. Although the strata are disjoint, their closures are not. Our strata satisfy the frontier condition: if a stratum intersects the closure of another stratum, then the former stratum is a subset of the closure of the latter stratum. Each stratum is a manifold of class $C^\infty$ embedded in weight space, so it has a well-defined tangent space and normal space at every point (weight vector). We show how to determine the subspaces tangent to and normal to a specified stratum at a specified point on the stratum, and we construct elegant bases for those subspaces. To help achieve these goals, we first derive what we call a Fundamental Theorem of Linear Neural Networks, analogous to what Strang calls the Fundamental Theorem of Linear Algebra. We show how to decompose each layer of a linear neural network into a set of subspaces that show how information flows through the neural network. Each stratum of the fiber represents a different pattern by which information flows (or fails to flow) through the neural network. The topology of a stratum depends solely on this decomposition. So does its geometry, up to a linear transformation in weight space.
[ "['Jonathan Richard Shewchuk' 'Sagnik Bhattacharya']" ]
null
null
2404.14869
null
null
http://arxiv.org/pdf/2404.14869v2
2024-06-24T08:02:17Z
2024-04-23T09:51:24Z
EEGEncoder: Advancing BCI with Transformer-Based Motor Imagery Classification
Brain-computer interfaces (BCIs) harness electroencephalographic signals for direct neural control of devices, offering a significant benefit for individuals with motor impairments. Traditional machine learning methods for EEG-based motor imagery (MI) classification encounter challenges such as manual feature extraction and susceptibility to noise. This paper introduces EEGEncoder, a deep learning framework that employs modified transformers and temporal convolutional networks (TCNs) to surmount these limitations. We innovatively propose a fusion architecture, namely the Dual-Stream Temporal-Spatial Block (DSTS), to capture temporal and spatial features, improving the accuracy of the motor imagery classification task. Additionally, we use multiple parallel structures to enhance the performance of the model. When tested on the BCI Competition IV-2a dataset, our model outperforms current state-of-the-art techniques.
[ "['Wangdan Liao' 'Weidong Wang']" ]
null
null
2404.14873
null
null
http://arxiv.org/pdf/2404.14873v1
2024-04-23T10:01:43Z
2024-04-23T10:01:43Z
Estimating the Distribution of Parameters in Differential Equations with Repeated Cross-Sectional Data
Differential equations are pivotal in modeling and understanding the dynamics of various systems, offering insights into their future states through parameter estimation fitted to time series data. In fields such as economics, politics, and biology, the observation data points in the time series are often independently obtained (i.e., Repeated Cross-Sectional (RCS) data). With RCS data, we found that traditional methods for parameter estimation in differential equations, such as using mean values of time trajectories or Gaussian Process-based trajectory generation, have limitations in estimating the shape of parameter distributions, often leading to a significant loss of data information. To address this issue, we introduce a novel method, Estimation of Parameter Distribution (EPD), providing accurate distribution of parameters without loss of data information. EPD operates in three main steps: generating synthetic time trajectories by randomly selecting observed values at each time point, estimating parameters of a differential equation that minimize the discrepancy between these trajectories and the true solution of the equation, and selecting the parameters depending on the scale of discrepancy. We then evaluated the performance of EPD across several models, including exponential growth, logistic population models, and target cell-limited models with delayed virus production, demonstrating its superiority in capturing the shape of parameter distributions. Furthermore, we applied EPD to real-world datasets, capturing various shapes of parameter distributions rather than a normal distribution. These results effectively address the heterogeneity within systems, marking substantial progress in accurately modeling systems using RCS data.
[ "['Hyeontae Jo' 'Sung Woong Cho' 'Hyung Ju Hwang']" ]
null
null
2404.14875
null
null
http://arxiv.org/pdf/2404.14875v1
2024-04-23T10:02:22Z
2024-04-23T10:02:22Z
Regularized Gauss-Newton for Optimizing Overparameterized Neural Networks
The generalized Gauss-Newton (GGN) optimization method incorporates curvature estimates into its solution steps, and provides a good approximation to the Newton method for large-scale optimization problems. GGN has been found particularly interesting for practical training of deep neural networks, not only for its impressive convergence speed, but also for its close relation with neural tangent kernel regression, which is central to recent studies that aim to understand the optimization and generalization properties of neural networks. This work studies a GGN method for optimizing a two-layer neural network with explicit regularization. In particular, we consider a class of generalized self-concordant (GSC) functions that provide smooth approximations to commonly-used penalty terms in the objective function of the optimization problem. This approach provides an adaptive learning rate selection technique that requires little to no tuning for optimal performance. We study the convergence of the two-layer neural network, considered to be overparameterized, in the optimization loop of the resulting GGN method for a given scaling of the network parameters. Our numerical experiments highlight specific aspects of GSC regularization that help to improve generalization of the optimized neural network. The code to reproduce the experimental results is available at https://github.com/adeyemiadeoye/ggn-score-nn.
[ "['Adeyemi D. Adeoye' 'Philipp Christian Petersen' 'Alberto Bemporad']" ]
null
null
2404.14886
null
null
http://arxiv.org/pdf/2404.14886v1
2024-04-23T10:13:39Z
2024-04-23T10:13:39Z
GCEPNet: Graph Convolution-Enhanced Expectation Propagation for Massive MIMO Detection
Massive MIMO (multiple-input multiple-output) detection is an important topic in wireless communication and various machine learning based methods have been developed recently for this task. Expectation propagation (EP) and its variants are widely used for MIMO detection and have achieved the best performance. However, EP-based solvers fail to capture the correlation between unknown variables, leading to loss of information, and in addition, they are computationally expensive. In this paper, we show that the real-valued system can be modeled as spectral signal convolution on a graph, through which the correlation between unknown variables can be captured. Based on this analysis, we propose graph convolution-enhanced expectation propagation (GCEPNet), a graph convolution-enhanced EP detector. GCEPNet incorporates data-dependent attention scores into Chebyshev polynomials for powerful graph convolution with better generalization capacity. It enables a better estimation of the cavity distribution for EP and empirically achieves state-of-the-art (SOTA) MIMO detection performance with much faster inference speed. To our knowledge, we are the first to shed light on the connection between the system model and graph convolution, and the first to design data-dependent attention scores for graph convolution.
[ "['Qincheng Lu' 'Sitao Luan' 'Xiao-Wen Chang']" ]
null
null
2404.14901
null
null
http://arxiv.org/pdf/2404.14901v2
2024-05-21T12:53:30Z
2024-04-23T10:34:16Z
Beyond Code Generation: An Observational Study of ChatGPT Usage in Software Engineering Practice
Large Language Models (LLMs) are frequently discussed in academia and the general public as support tools for virtually any use case that relies on the production of text, including software engineering. Currently there is much debate, but little empirical evidence, regarding the practical usefulness of LLM-based tools such as ChatGPT for engineers in industry. We conduct an observational study of 24 professional software engineers who have been using ChatGPT over a period of one week in their jobs, and qualitatively analyse their dialogues with the chatbot as well as their overall experience (as captured by an exit survey). We find that, rather than expecting ChatGPT to generate ready-to-use software artifacts (e.g., code), practitioners more often use ChatGPT to receive guidance on how to solve their tasks or learn about a topic in more abstract terms. We also propose a theoretical framework for how (i) purpose of the interaction, (ii) internal factors (e.g., the user's personality), and (iii) external factors (e.g., company policy) together shape the experience (in terms of perceived usefulness and trust). We envision that our framework can be used by future research to further the academic discussion on LLM usage by software engineering practitioners, and to serve as a reference point for the design of future empirical LLM research in this domain.
[ "['Ranim Khojah' 'Mazen Mohamad' 'Philipp Leitner'\n 'Francisco Gomes de Oliveira Neto']" ]
null
null
2404.14906
null
null
http://arxiv.org/pdf/2404.14906v1
2024-04-23T10:42:24Z
2024-04-23T10:42:24Z
Driver Activity Classification Using Generalizable Representations from Vision-Language Models
Driver activity classification is crucial for ensuring road safety, with applications ranging from driver assistance systems to autonomous vehicle control transitions. In this paper, we present a novel approach leveraging generalizable representations from vision-language models for driver activity classification. Our method employs a Semantic Representation Late Fusion Neural Network (SRLF-Net) to process synchronized video frames from multiple perspectives. Each frame is encoded using a pretrained vision-language encoder, and the resulting embeddings are fused to generate class probability predictions. By leveraging contrastively-learned vision-language representations, our approach achieves robust performance across diverse driver activities. We evaluate our method on the Naturalistic Driving Action Recognition Dataset, demonstrating strong accuracy across many classes. Our results suggest that vision-language representations offer a promising avenue for driver monitoring systems, providing both accuracy and interpretability through natural language descriptors.
[ "['Ross Greer' 'Mathias Viborg Andersen' 'Andreas Møgelmose'\n 'Mohan Trivedi']" ]
null
null
2404.14909
null
null
http://arxiv.org/pdf/2404.14909v1
2024-04-23T10:51:31Z
2024-04-23T10:51:31Z
MultiSTOP: Solving Functional Equations with Reinforcement Learning
We develop MultiSTOP, a Reinforcement Learning framework for solving functional equations in physics. This new methodology produces actual numerical solutions instead of bounds on them. We extend the original BootSTOP algorithm by adding multiple constraints derived from domain-specific knowledge, even in integral form, to improve the accuracy of the solution. We investigate a particular equation in a one-dimensional Conformal Field Theory.
[ "['Alessandro Trenta' 'Davide Bacciu' 'Andrea Cossu' 'Pietro Ferrero']" ]
null
null
2404.14913
null
null
http://arxiv.org/pdf/2404.14913v1
2024-04-23T10:56:58Z
2024-04-23T10:56:58Z
Additive Margin in Contrastive Self-Supervised Frameworks to Learn Discriminative Speaker Representations
Self-Supervised Learning (SSL) frameworks have become the standard for learning robust class representations by benefiting from large unlabeled datasets. For Speaker Verification (SV), most SSL systems rely on contrastive-based loss functions. We explore different ways to improve the performance of these techniques by revisiting the NT-Xent contrastive loss. Our main contribution is the definition of the NT-Xent-AM loss and the study of the importance of Additive Margin (AM) in the SimCLR and MoCo SSL methods to further separate positive from negative pairs. Despite class collisions, we show that AM enhances the compactness of same-speaker embeddings and reduces the number of false negatives and false positives on SV. Additionally, we demonstrate the effectiveness of the symmetric contrastive loss, which provides more supervision for the SSL task. Implementing these two modifications to SimCLR improves performance and results in 7.85% EER on VoxCeleb1-O, outperforming other equivalent methods.
[ "['Theo Lepage' 'Reda Dehak']" ]
null
null
2404.14928
null
null
http://arxiv.org/pdf/2404.14928v2
2024-06-04T01:31:30Z
2024-04-23T11:13:39Z
Graph Machine Learning in the Era of Large Language Models (LLMs)
Graphs play an important role in representing complex relationships in various domains like social networks, knowledge graphs, and molecular discovery. With the advent of deep learning, Graph Neural Networks (GNNs) have emerged as a cornerstone in Graph Machine Learning (Graph ML), facilitating the representation and processing of graph structures. Recently, LLMs have demonstrated unprecedented capabilities in language tasks and are widely adopted in a variety of applications such as computer vision and recommender systems. This remarkable success has also attracted interest in applying LLMs to the graph domain. Increasing efforts have been made to explore the potential of LLMs in advancing Graph ML's generalization, transferability, and few-shot learning ability. Meanwhile, graphs, especially knowledge graphs, are rich in reliable factual knowledge, which can be utilized to enhance the reasoning capabilities of LLMs and potentially alleviate their limitations such as hallucinations and the lack of explainability. Given the rapid progress of this research direction, a systematic review summarizing the latest advancements for Graph ML in the era of LLMs is necessary to provide an in-depth understanding to researchers and practitioners. Therefore, in this survey, we first review the recent developments in Graph ML. We then explore how LLMs can be utilized to enhance the quality of graph features, alleviate the reliance on labeled data, and address challenges such as graph heterogeneity and out-of-distribution (OOD) generalization. Afterward, we delve into how graphs can enhance LLMs, highlighting their abilities to enhance LLM pre-training and inference. Furthermore, we investigate various applications and discuss the potential future directions in this promising field.
[ "['Wenqi Fan' 'Shijie Wang' 'Jiani Huang' 'Zhikai Chen' 'Yu Song'\n 'Wenzhuo Tang' 'Haitao Mao' 'Hui Liu' 'Xiaorui Liu' 'Dawei Yin' 'Qing Li']" ]
null
null
2404.14933
null
null
http://arxiv.org/pdf/2404.14933v1
2024-04-23T11:22:04Z
2024-04-23T11:22:04Z
Fin-Fed-OD: Federated Outlier Detection on Financial Tabular Data
Anomaly detection in real-world scenarios poses challenges due to dynamic and often unknown anomaly distributions, requiring robust methods that operate under an open-world assumption. This challenge is exacerbated in practical settings, where models are employed by private organizations, precluding data sharing due to privacy and competitive concerns. Despite potential benefits, the sharing of anomaly information across organizations is restricted. This paper addresses the question of enhancing outlier detection within individual organizations without compromising data confidentiality. We propose a novel method leveraging representation learning and federated learning techniques to improve the detection of unknown anomalies. Specifically, our approach utilizes latent representations obtained from client-owned autoencoders to refine the decision boundary of inliers. Notably, only model parameters are shared between organizations, preserving data privacy. The efficacy of our proposed method is evaluated on two standard financial tabular datasets and an image dataset for anomaly detection in a distributed setting. The results demonstrate a strong improvement in the classification of unknown outliers during the inference phase for each organization's model.
[ "['Dayananda Herurkar' 'Sebastian Palacio' 'Ahmed Anwar' 'Joern Hees'\n 'Andreas Dengel']" ]
null
null
2404.14941
null
null
http://arxiv.org/pdf/2404.14941v1
2024-04-23T11:35:35Z
2024-04-23T11:35:35Z
Delayed Bottlenecking: Alleviating Forgetting in Pre-trained Graph Neural Networks
Pre-training GNNs to extract transferable knowledge and apply it to downstream tasks has become the de facto standard of graph representation learning. Recent works have focused on designing self-supervised pre-training tasks to extract useful and universal transferable knowledge from large-scale unlabeled data. However, they have to face an inevitable question: traditional pre-training strategies that aim at extracting useful information about pre-training tasks may not extract all useful information about the downstream task. In this paper, we reexamine the pre-training process within traditional pre-training and fine-tuning frameworks from the perspective of the Information Bottleneck (IB) and confirm that the forgetting phenomenon in the pre-training phase may cause detrimental effects on downstream tasks. Therefore, we propose a novel Delayed Bottlenecking Pre-training (DBP) framework which maintains as much mutual information as possible between latent representations and training data during the pre-training phase by suppressing the compression operation, and delays the compression operation to the fine-tuning phase so that it can be guided by labeled fine-tuning data and downstream tasks. To achieve this, we design two information control objectives that can be directly optimized and further integrate them into the actual model design. Extensive experiments on both chemistry and biology domains demonstrate the effectiveness of DBP.
[ "['Zhe Zhao' 'Pengkun Wang' 'Xu Wang' 'Haibin Wen' 'Xiaolong Xie'\n 'Zhengyang Zhou' 'Qingfu Zhang' 'Yang Wang']" ]
null
null
2404.14942
null
null
http://arxiv.org/pdf/2404.14942v1
2024-04-23T11:36:36Z
2024-04-23T11:36:36Z
Manipulating Recommender Systems: A Survey of Poisoning Attacks and Countermeasures
Recommender systems have become an integral part of online services to help users locate specific information in a sea of data. However, existing studies show that some recommender systems are vulnerable to poisoning attacks, particularly those that involve learning schemes. A poisoning attack is where an adversary injects carefully crafted data into the process of training a model, with the goal of manipulating the system's final recommendations. Based on recent advancements in artificial intelligence, such attacks have gained importance. While numerous countermeasures to poisoning attacks have been developed, they have not yet been systematically linked to the properties of the attacks. Consequently, assessing the respective risks and potential success of mitigation strategies is difficult, if not impossible. This survey aims to fill this gap by primarily focusing on poisoning attacks and their countermeasures. This is in contrast to prior surveys that mainly focus on attacks and their detection methods. Through an exhaustive literature review, we provide a novel taxonomy for poisoning attacks, formalise their dimensions, and accordingly organise 30+ attacks described in the literature. Further, we review 40+ countermeasures to detect and/or prevent poisoning attacks, evaluating their effectiveness against specific types of attacks. This comprehensive survey should serve as a point of reference for protecting recommender systems against poisoning attacks. The article concludes with a discussion on open issues in the field and impactful directions for future research. A rich repository of resources associated with poisoning attacks is available at https://github.com/tamlhp/awesome-recsys-poisoning.
[ "['Thanh Toan Nguyen' 'Quoc Viet Hung Nguyen' 'Thanh Tam Nguyen'\n 'Thanh Trung Huynh' 'Thanh Thi Nguyen' 'Matthias Weidlich' 'Hongzhi Yin']" ]
null
null
2404.14943
null
null
http://arxiv.org/pdf/2404.14943v1
2024-04-23T11:40:30Z
2024-04-23T11:40:30Z
Does It Make Sense to Explain a Black Box With Another Black Box?
Although counterfactual explanations are a popular approach to explain ML black-box classifiers, they are less widespread in NLP. Most methods find those explanations by iteratively perturbing the target document until it is classified differently by the black box. We identify two main families of counterfactual explanation methods in the literature, namely, (a) \emph{transparent} methods that perturb the target by adding, removing, or replacing words, and (b) \emph{opaque} approaches that project the target document into a latent, non-interpretable space where the perturbation is carried out subsequently. This article offers a comparative study of the performance of these two families of methods on three classical NLP tasks. Our empirical evidence shows that opaque approaches can be an overkill for downstream applications such as fake news detection or sentiment analysis since they add an additional level of complexity with no significant performance gain. These observations motivate our discussion, which raises the question of whether it makes sense to explain a black box using another black box.
[ "['Julien Delaunay' 'Luis Galárraga' 'Christine Largouët']" ]
null
null
2404.14953
null
null
http://arxiv.org/pdf/2404.14953v1
2024-04-23T11:55:20Z
2024-04-23T11:55:20Z
Dynamic pricing with Bayesian updates from online reviews
When launching new products, firms face uncertainty about market reception. Online reviews provide valuable information not only to consumers but also to firms, allowing firms to adjust the product characteristics, including the selling price. In this paper, we consider a pricing model with online reviews in which the quality of the product is uncertain, and both the seller and the buyers Bayesianly update their beliefs to make purchasing and pricing decisions. We model the seller's pricing problem as a basic bandit problem and show a close connection with the celebrated Catalan numbers, allowing us to efficiently compute the overall future discounted reward of the seller. With this tool, we analyze and compare the optimal static and dynamic pricing strategies in terms of the probability of effectively learning the quality of the product.
[ "['José Correa' 'Mathieu Mari' 'Andrew Xia']" ]
null
null
2404.14961
null
null
http://arxiv.org/pdf/2404.14961v1
2024-04-23T12:06:40Z
2024-04-23T12:06:40Z
Cache-Aware Reinforcement Learning in Large-Scale Recommender Systems
Modern large-scale recommender systems are built upon computation-intensive infrastructure and usually suffer from a huge difference in traffic between peak and off-peak periods. In peak periods, it is challenging to perform real-time computation for each request due to the limited budget of computational resources. Recommendation with a cache is one solution to this problem, where a user-wise result cache is used to provide recommendations when the recommender system cannot afford a real-time computation. However, the cached recommendations are usually suboptimal compared to real-time computation, and it is challenging to determine the items in the cache for each user. In this paper, we provide a cache-aware reinforcement learning (CARL) method to jointly optimize the recommendation by real-time computation and by the cache. We formulate the problem as a Markov decision process with user states and a cache state, where the cache state represents whether the recommender system performs recommendations by real-time computation or by the cache. The computational load of the recommender system determines the cache state. We perform reinforcement learning based on such a model to improve user engagement over multiple requests. Moreover, we show that the cache introduces a challenge called critic dependency, which deteriorates the performance of reinforcement learning. To tackle this challenge, we propose an eigenfunction learning (EL) method to learn independent critics for CARL. Experiments show that CARL can significantly improve user engagement when considering the result cache. CARL has been fully launched in the Kwai app, serving over 100 million users.
[ "['Xiaoshuang Chen' 'Gengrui Zhang' 'Yao Wang' 'Yulin Wu' 'Shuo Su'\n 'Kaiqiao Zhan' 'Ben Wang']" ]
null
null
2404.14966
null
null
http://arxiv.org/pdf/2404.14966v1
2024-04-23T12:20:27Z
2024-04-23T12:20:27Z
Mamba3D: Enhancing Local Features for 3D Point Cloud Analysis via State Space Model
Existing Transformer-based models for point cloud analysis suffer from quadratic complexity, leading to compromised point cloud resolution and information loss. In contrast, the newly proposed Mamba model, based on state space models (SSM), outperforms Transformer in multiple areas with only linear complexity. However, the straightforward adoption of Mamba does not achieve satisfactory performance on point cloud tasks. In this work, we present Mamba3D, a state space model tailored for point cloud learning to enhance local feature extraction, achieving superior performance, high efficiency, and scalability potential. Specifically, we propose a simple yet effective Local Norm Pooling (LNP) block to extract local geometric features. Additionally, to obtain better global features, we introduce a bidirectional SSM (bi-SSM) with both a token forward SSM and a novel backward SSM that operates on the feature channel. Extensive experimental results show that Mamba3D surpasses Transformer-based counterparts and concurrent works in multiple tasks, with or without pre-training. Notably, Mamba3D achieves multiple SoTA results, including an overall accuracy of 92.6% (trained from scratch) on ScanObjectNN and 95.1% (with single-modal pre-training) on the ModelNet40 classification task, with only linear complexity.
[ "['Xu Han' 'Yuan Tang' 'Zhaoxuan Wang' 'Xianzhi Li']" ]
null
null
2404.14970
null
null
http://arxiv.org/pdf/2404.14970v1
2024-04-23T12:24:53Z
2024-04-23T12:24:53Z
Integrating Heterogeneous Gene Expression Data through Knowledge Graphs for Improving Diabetes Prediction
Diabetes is a worldwide health issue affecting millions of people. Machine learning methods have shown promising results in improving diabetes prediction, particularly through the analysis of diverse data types, namely gene expression data. While gene expression data can provide valuable insights, challenges arise from the fact that the sample sizes in expression datasets are usually limited, and the data from different datasets with different gene expressions cannot be easily combined. This work proposes a novel approach to address these challenges by integrating multiple gene expression datasets and domain-specific knowledge using knowledge graphs, a unique tool for biomedical data integration. KG embedding methods are then employed to generate vector representations, serving as inputs for a classifier. Experiments demonstrated the efficacy of our approach, revealing improvements in diabetes prediction when integrating multiple gene expression datasets and domain-specific knowledge about protein functions and interactions.
[ "['Rita T. Sousa' 'Heiko Paulheim']" ]
null
null
2404.14973
null
null
http://arxiv.org/pdf/2404.14973v1
2024-04-23T12:27:20Z
2024-04-23T12:27:20Z
Symbolic Integration Algorithm Selection with Machine Learning: LSTMs vs Tree LSTMs
Computer Algebra Systems (e.g. Maple) are used in research, education, and industrial settings. One of their key functionalities is symbolic integration, where there are many sub-algorithms to choose from that can affect the form of the output integral, and the runtime. Choosing the right sub-algorithm for a given problem is challenging: we hypothesise that Machine Learning can guide this sub-algorithm choice. A key consideration of our methodology is how to represent the mathematics to the ML model: we hypothesise that a representation which encodes the tree structure of mathematical expressions would be well suited. We trained both an LSTM and a TreeLSTM model for sub-algorithm prediction and compared them to Maple's existing approach. Our TreeLSTM performs much better than the LSTM, highlighting the benefit of using an informed representation of mathematical expressions. It is able to produce better outputs than Maple's current state-of-the-art meta-algorithm, giving a strong basis for further research.
[ "['Rashid Barket' 'Matthew England' 'Jürgen Gerhard']" ]
null
null
2404.14986
null
null
http://arxiv.org/pdf/2404.14986v1
2024-04-23T12:43:15Z
2024-04-23T12:43:15Z
$\texttt{MiniMol}$: A Parameter-Efficient Foundation Model for Molecular Learning
In biological tasks, data is rarely plentiful as it is generated from hard-to-gather measurements. Therefore, pre-training foundation models on large quantities of available data and then transferring to low-data downstream tasks is a promising direction. However, how to design effective foundation models for molecular learning remains an open question, with existing approaches typically focusing on models with large parameter capacities. In this work, we propose $\texttt{MiniMol}$, a foundational model for molecular learning with 10 million parameters. $\texttt{MiniMol}$ is pre-trained on a mix of roughly 3300 sparsely defined graph- and node-level tasks of both quantum and biological nature. The pre-training dataset includes approximately 6 million molecules and 500 million labels. To demonstrate the generalizability of $\texttt{MiniMol}$ across tasks, we evaluate it on downstream tasks from the Therapeutic Data Commons (TDC) ADMET group, showing significant improvements over the prior state-of-the-art foundation model across 17 tasks. $\texttt{MiniMol}$ will be a public and open-sourced model for future research.
[ "['Kerstin Kläser' 'Błażej Banaszewski' 'Samuel Maddrell-Mander'\n 'Callum McLean' 'Luis Müller' 'Ali Parviz' 'Shenyang Huang'\n 'Andrew Fitzgibbon']" ]
null
null
2404.14994
null
null
http://arxiv.org/pdf/2404.14994v3
2024-06-20T15:21:23Z
2024-04-23T12:51:37Z
Transformers Can Represent $n$-gram Language Models
Existing work has analyzed the representational capacity of the transformer architecture by means of formal models of computation. However, the focus so far has been on analyzing the architecture in terms of language \emph{acceptance}. We contend that this is an ill-suited problem in the study of \emph{language models} (LMs), which are definitionally \emph{probability distributions} over strings. In this paper, we focus on the relationship between transformer LMs and $n$-gram LMs, a simple and historically relevant class of language models. We show that transformer LMs using the hard or sparse attention mechanisms can exactly represent any $n$-gram LM, giving us a concrete lower bound on their probabilistic representational capacity. This provides a first step towards understanding the mechanisms that transformer LMs can use to represent probability distributions over strings.
[ "['Anej Svete' 'Ryan Cotterell']" ]
null
null
2404.14999
null
null
http://arxiv.org/pdf/2404.14999v1
2024-04-23T13:02:11Z
2024-04-23T13:02:11Z
A Unified Replay-based Continuous Learning Framework for Spatio-Temporal Prediction on Streaming Data
The widespread deployment of wireless and mobile devices results in a proliferation of spatio-temporal data that is used in applications, e.g., traffic prediction, human mobility mining, and air quality prediction, where spatio-temporal prediction is often essential to enable safety, predictability, or reliability. Many recent proposals that target deep learning for spatio-temporal prediction suffer from so-called catastrophic forgetting, where previously learned knowledge is entirely forgotten when new data arrives. Such proposals may experience deteriorating prediction performance when applied in settings where data streams into the system. To enable spatio-temporal prediction on streaming data, we propose a unified replay-based continuous learning framework. The framework includes a replay buffer of previously learned samples that are fused with training data using a spatio-temporal mixup mechanism in order to preserve historical knowledge effectively, thus avoiding catastrophic forgetting. To enable holistic representation preservation, the framework also integrates a general spatio-temporal autoencoder with a carefully designed spatio-temporal simple siamese (STSimSiam) network that aims to ensure prediction accuracy and avoid holistic feature loss by means of mutual information maximization. The framework further encompasses five spatio-temporal data augmentation methods to enhance the performance of STSimSiam. Extensive experiments on real data offer insight into the effectiveness of the proposed framework.
[ "['Hao Miao' 'Yan Zhao' 'Chenjuan Guo' 'Bin Yang' 'Kai Zheng'\n 'Feiteng Huang' 'Jiandong Xie' 'Christian S. Jensen']" ]
null
null
2404.15018
null
null
http://arxiv.org/pdf/2404.15018v1
2024-04-23T13:23:27Z
2024-04-23T13:23:27Z
Conformal Predictive Systems Under Covariate Shift
Conformal Predictive Systems (CPS) offer a versatile framework for constructing predictive distributions, allowing for calibrated inference and informative decision-making. However, their applicability has been limited to scenarios adhering to the Independent and Identically Distributed (IID) model assumption. This paper extends CPS to accommodate scenarios characterized by covariate shifts. We therefore propose Weighted CPS (WCPS), akin to Weighted Conformal Prediction (WCP), leveraging likelihood ratios between training and testing covariate distributions. This extension enables the construction of nonparametric predictive distributions capable of handling covariate shifts. We present theoretical underpinnings and conjectures regarding the validity and efficacy of WCPS and demonstrate its utility through empirical evaluations on both synthetic and real-world datasets. Our simulation experiments indicate that WCPS are probabilistically calibrated under covariate shift.
[ "['Jef Jonkers' 'Glenn Van Wallendael' 'Luc Duchateau' 'Sofie Van Hoecke']" ]
null
null
2404.15024
null
null
http://arxiv.org/pdf/2404.15024v1
2024-04-23T13:32:29Z
2024-04-23T13:32:29Z
A Learning Paradigm for Interpretable Gradients
This paper studies the interpretability of convolutional networks by means of saliency maps. Most approaches based on Class Activation Maps (CAM) combine information from fully connected layers and gradients through variants of backpropagation. However, it is well understood that gradients are noisy, and alternatives like guided backpropagation have been proposed to obtain better visualization at inference. In this work, we present a novel training approach to improve the quality of gradients for interpretability. In particular, we introduce a regularization loss such that the gradient with respect to the input image obtained by standard backpropagation is similar to the gradient obtained by guided backpropagation. We find that the resulting gradient is qualitatively less noisy and quantitatively improves the interpretability properties of different networks, using several interpretability methods.
[ "['Felipe Torres Figueroa' 'Hanwei Zhang' 'Ronan Sicre' 'Yannis Avrithis'\n 'Stephane Ayache']" ]
null
null
2404.15029
null
null
http://arxiv.org/pdf/2404.15029v1
2024-04-23T13:35:22Z
2024-04-23T13:35:22Z
Explainable LightGBM Approach for Predicting Myocardial Infarction Mortality
Myocardial infarction is a leading cause of mortality globally, and accurate risk prediction is crucial for improving patient outcomes. Machine learning techniques have shown promise in identifying high-risk patients and predicting outcomes. However, patient data often contain vast amounts of information and missing values, posing challenges for feature selection and imputation methods. In this article, we investigate the impact of the data preprocessing task and compare three ensemble boosted-tree methods to predict the risk of mortality in patients with myocardial infarction. Further, we use the Tree Shapley Additive Explanations method to identify relationships among all the features for the performed predictions, leveraging the entirety of the available data in the analysis. Notably, our approach achieved superior performance compared to other existing machine learning approaches, with an F1-score of 91.2% and an accuracy of 91.8% for LightGBM without data preprocessing.
[ "['Ana Letícia Garcez Vicente' 'Roseval Donisete Malaquias Junior'\n 'Roseli A. F. Romero']" ]
null
null
2404.15034
null
null
http://arxiv.org/pdf/2404.15034v1
2024-04-23T13:39:04Z
2024-04-23T13:39:04Z
Deep Multi-View Channel-Wise Spatio-Temporal Network for Traffic Flow Prediction
Accurately forecasting traffic flows is critically important to many real applications including public safety and intelligent transportation systems. The challenges of this problem include both the dynamic mobility patterns of the people and the complex spatial-temporal correlations of the urban traffic data. Meanwhile, most existing models ignore the diverse impacts of the various traffic observations (e.g. vehicle speed and road occupancy) on the traffic flow prediction, and different traffic observations can be considered as different channels of input features. We argue that the analysis in multiple-channel traffic observations might help to better address this problem. In this paper, we study the novel problem of multi-channel traffic flow prediction, and propose a deep Multi-View Channel-wise Spatio-Temporal Network (MVC-STNet) model to effectively address it. Specifically, we first construct the localized and globalized spatial graph where the multi-view fusion module is used to effectively extract the local and global spatial dependencies. Then LSTM is used to learn the temporal correlations. To effectively model the different impacts of various traffic observations on traffic flow prediction, a channel-wise graph convolutional network is also designed. Extensive experiments are conducted over the PEMS04 and PEMS08 datasets. The results demonstrate that the proposed MVC-STNet outperforms state-of-the-art methods by a large margin.
[ "['Hao Miao' 'Senzhang Wang' 'Meiyue Zhang' 'Diansheng Guo' 'Funing Sun'\n 'Fan Yang']" ]
null
null
2404.15045
null
null
http://arxiv.org/pdf/2404.15045v1
2024-04-23T13:47:09Z
2024-04-23T13:47:09Z
Multi-Head Mixture-of-Experts
Sparse Mixtures of Experts (SMoE) scales model capacity without significant increases in training and inference costs, but exhibits the following two issues: (1) Low expert activation, where only a small subset of experts are activated for optimization. (2) Lacking fine-grained analytical capabilities for multiple semantic concepts within individual tokens. We propose Multi-Head Mixture-of-Experts (MH-MoE), which employs a multi-head mechanism to split each token into multiple sub-tokens. These sub-tokens are then assigned to and processed by a diverse set of experts in parallel, and seamlessly reintegrated into the original token form. The multi-head mechanism enables the model to collectively attend to information from various representation spaces within different experts, while significantly enhancing expert activation, thus deepening context understanding and alleviating overfitting. Moreover, our MH-MoE is straightforward to implement and decoupled from other SMoE optimization methods, making it easy to integrate with other SMoE models for enhanced performance. Extensive experimental results across three tasks: English-focused language modeling, multi-lingual language modeling, and masked multi-modality modeling, demonstrate the effectiveness of MH-MoE.
[ "['Xun Wu' 'Shaohan Huang' 'Wenhui Wang' 'Furu Wei']" ]
null
null
2404.15065
null
null
http://arxiv.org/pdf/2404.15065v1
2024-04-23T14:12:48Z
2024-04-23T14:12:48Z
Formal Verification of Graph Convolutional Networks with Uncertain Node Features and Uncertain Graph Structure
Graph neural networks are becoming increasingly popular in the field of machine learning due to their unique ability to process data structured in graphs. They have also been applied in safety-critical environments where perturbations inherently occur. However, these perturbations require us to formally verify neural networks before their deployment in safety-critical environments as neural networks are prone to adversarial attacks. While there exists research on the formal verification of neural networks, there is no work verifying the robustness of generic graph convolutional network architectures with uncertainty in the node features and in the graph structure over multiple message-passing steps. This work addresses this research gap by explicitly preserving the non-convex dependencies of all elements in the underlying computations through reachability analysis with (matrix) polynomial zonotopes. We demonstrate our approach on three popular benchmark datasets.
[ "['Tobias Ladner' 'Michael Eichelbeck' 'Matthias Althoff']" ]
null
null
2404.15081
null
null
http://arxiv.org/pdf/2404.15081v2
2024-06-14T14:26:38Z
2024-04-23T14:31:15Z
Perturbing Attention Gives You More Bang for the Buck: Subtle Imaging Perturbations That Efficiently Fool Customized Diffusion Models
Diffusion models (DMs) usher in a new era of generative modeling and offer more opportunities for efficiently generating high-quality and realistic data samples. However, their widespread use has also brought forth new challenges in model security, which motivates the creation of more effective adversarial attackers on DMs to understand their vulnerability. We propose CAAT, a simple but generic and efficient approach that does not require costly training to effectively fool latent diffusion models (LDMs). The approach is based on the observation that cross-attention layers exhibit higher sensitivity to gradient change, allowing for leveraging subtle perturbations on published images to significantly corrupt the generated images. We show that a subtle perturbation on an image can significantly impact the cross-attention layers, thus changing the mapping between text and image during the fine-tuning of customized diffusion models. Extensive experiments demonstrate that CAAT is compatible with diverse diffusion models and outperforms baseline attack methods in a more effective (more noise) and efficient (twice as fast as Anti-DreamBooth and Mist) manner.
[ "['Jingyao Xu' 'Yuetong Lu' 'Yandong Li' 'Siyang Lu' 'Dongdong Wang'\n 'Xiang Wei']" ]
null
null
2404.15084
null
null
http://arxiv.org/pdf/2404.15084v1
2024-04-23T14:34:16Z
2024-04-23T14:34:16Z
Hyperparameter Optimization Can Even be Harmful in Off-Policy Learning and How to Deal with It
There has been growing interest in off-policy evaluation in domains such as recommender systems and personalized medicine. We have so far seen significant progress in developing estimators aimed at accurately estimating the effectiveness of counterfactual policies based on biased logged data. However, in many cases those estimators are used not only to evaluate the value of decision-making policies but also to search for the best hyperparameters from a large candidate space. This work explores the latter hyperparameter optimization (HPO) task for off-policy learning. We empirically show that naively applying an unbiased estimator of the generalization performance as a surrogate objective in HPO can cause an unexpected failure, merely pursuing hyperparameters whose generalization performance is greatly overestimated. We then propose simple and computationally efficient corrections to the typical HPO procedure that deal with these issues simultaneously. Empirical investigations demonstrate the effectiveness of our proposed HPO algorithm in situations where the typical procedure fails severely.
[ "['Yuta Saito' 'Masahiro Nomura']" ]
null
null
2404.15095
null
null
http://arxiv.org/abs/2404.15095v1
2024-04-23T14:49:55Z
2024-04-23T14:49:55Z
Using ARIMA to Predict the Expansion of Subscriber Data Consumption
This study discusses how insights retrieved from subscriber data can impact decision-making in telecommunications, focusing on predictive modeling using machine learning techniques such as the ARIMA model. The study explores time series forecasting to predict subscriber usage trends, evaluating the ARIMA model's performance using various metrics. It also compares ARIMA with Convolutional Neural Network (CNN) models, highlighting ARIMA's superiority in accuracy and execution speed. The study suggests future directions for research, including exploring additional forecasting models and considering other factors affecting subscriber data usage.
[ "['Mike Wa Nkongolo']" ]
null
null
2404.15096
null
null
http://arxiv.org/pdf/2404.15096v2
2024-04-30T02:32:42Z
2024-04-23T14:52:09Z
Impedance Matching: Enabling an RL-Based Running Jump in a Quadruped Robot
Replicating the remarkable athleticism seen in animals has long been a challenge in robotics control. Although Reinforcement Learning (RL) has demonstrated significant progress in dynamic legged locomotion control, the substantial sim-to-real gap often hinders the real-world demonstration of truly dynamic movements. We propose a new framework to mitigate this gap through frequency-domain analysis-based impedance matching between simulated and real robots. Our framework offers a structured guideline for parameter selection and the range for dynamics randomization in simulation, thus facilitating a safe sim-to-real transfer. The learned policy using our framework enabled jumps across distances of 55 cm and heights of 38 cm. The results are, to the best of our knowledge, among the highest and longest running jumps demonstrated by an RL-based control policy in a real quadruped robot. Note that the achieved jumping height is approximately 85% of that obtained from a state-of-the-art trajectory optimization method, which can be seen as the physical limit for the given robot hardware. In addition, our control policy accomplished stable walking at speeds up to 2 m/s in the forward and backward directions, and 1 m/s in the sideways direction.
[ "['Neil Guan' 'Shangqun Yu' 'Shifan Zhu' 'Donghyun Kim']" ]
null
null
2404.15098
null
null
http://arxiv.org/pdf/2404.15098v1
2024-04-23T14:52:14Z
2024-04-23T14:52:14Z
Uncertainty Quantification of Data-Driven Output Predictors in the Output Error Setting
We revisit the problem of predicting the output of an LTI system directly using offline input-output data (and without the use of a parametric model) in the behavioral setting. Existing works calculate the output predictions by projecting the recent samples of the input and output signals onto the column span of a Hankel matrix consisting of the offline input-output data. However, if the offline data is corrupted by noise, the output prediction is no longer exact. While some prior works propose mitigating noisy data through low-rank matrix approximation heuristics, such as truncated singular value decomposition, the ensuing prediction accuracy remains unquantified. This paper fills these gaps by introducing two upper bounds on the prediction error under the condition that the noise is sufficiently small relative to the offline data's magnitude. The first bound pertains to prediction using the raw offline data directly, while the second applies to the case of the low-rank approximation heuristic. Notably, the bounds do not require ground truth about the system output, relying solely on noisy measurements with a known noise level and system order. Extensive numerical simulations show that both bounds decrease monotonically (and linearly) as a function of the noise level. Furthermore, our results demonstrate that applying the de-noising heuristic in the output error setup does not generally lead to better prediction accuracy compared to using the raw data directly, nor to a smaller upper bound on the prediction error. However, it allows for a more general upper bound, as the first upper bound requires a specific condition on the partitioning of the Hankel matrix.
[ "['Farzan Kaviani' 'Ivan Markovsky' 'Hamid R. Ossareh']" ]
null
null
2404.15109
null
null
http://arxiv.org/pdf/2404.15109v1
2024-04-23T15:03:37Z
2024-04-23T15:03:37Z
Compete and Compose: Learning Independent Mechanisms for Modular World Models
We present COmpetitive Mechanisms for Efficient Transfer (COMET), a modular world model which leverages reusable, independent mechanisms across different environments. COMET is trained on multiple environments with varying dynamics via a two-step process: competition and composition. This enables the model to recognise and learn transferable mechanisms. Specifically, in the competition phase, COMET is trained with a winner-takes-all gradient allocation, encouraging the emergence of independent mechanisms. These are then re-used in the composition phase, where COMET learns to re-compose learnt mechanisms in ways that capture the dynamics of intervened environments. In so doing, COMET explicitly reuses prior knowledge, enabling efficient and interpretable adaptation. We evaluate COMET on environments with image-based observations. In contrast to competitive baselines, we demonstrate that COMET captures recognisable mechanisms without supervision. Moreover, we show that COMET is able to adapt to new environments with varying numbers of objects with improved sample efficiency compared to more conventional finetuning approaches.
[ "['Anson Lei' 'Frederik Nolte' 'Bernhard Schölkopf' 'Ingmar Posner']" ]
null
null
2404.15146
null
null
http://arxiv.org/pdf/2404.15146v2
2024-07-01T14:43:11Z
2024-04-23T15:49:37Z
Rethinking LLM Memorization through the Lens of Adversarial Compression
Large language models (LLMs) trained on web-scale datasets raise substantial concerns regarding permissible data usage. One major question is whether these models "memorize" all their training data or whether they integrate many data sources in some way more akin to how a human would learn and synthesize information. The answer hinges, to a large degree, on how we define memorization. In this work, we propose the Adversarial Compression Ratio (ACR) as a metric for assessing memorization in LLMs. A given string from the training data is considered memorized if it can be elicited by a prompt (much) shorter than the string itself; in other words, if these strings can be "compressed" with the model by computing adversarial prompts of fewer tokens. The ACR overcomes the limitations of existing notions of memorization by (i) offering an adversarial view of measuring memorization, especially for monitoring unlearning and compliance; and (ii) allowing for the flexibility to measure memorization for arbitrary strings at reasonably low compute cost. Our definition serves as a practical tool for determining when model owners may be violating terms around data usage, providing a potential legal tool and a critical lens through which to address such scenarios.
[ "['Avi Schwarzschild' 'Zhili Feng' 'Pratyush Maini' 'Zachary C. Lipton'\n 'J. Zico Kolter']" ]
null
null
2404.15149
null
null
http://arxiv.org/pdf/2404.15149v1
2024-04-23T15:52:52Z
2024-04-23T15:52:52Z
Bias patterns in the application of LLMs for clinical decision support: A comprehensive study
Large Language Models (LLMs) have emerged as powerful candidates to inform clinical decision-making processes. While these models play an increasingly prominent role in shaping the digital landscape, two growing concerns emerge in healthcare applications: 1) to what extent do LLMs exhibit social bias based on patients' protected attributes (like race), and 2) how do design choices (like architecture design and prompting strategies) influence the observed biases? To answer these questions rigorously, we evaluated eight popular LLMs across three question-answering (QA) datasets using clinical vignettes (patient descriptions) standardized for bias evaluations. We employ red-teaming strategies to analyze how demographics affect LLM outputs, comparing both general-purpose and clinically-trained models. Our extensive experiments reveal various disparities (some significant) across protected groups. We also observe several counter-intuitive patterns, such as larger models not necessarily being less biased and models fine-tuned on medical data not necessarily being better than general-purpose models. Furthermore, our study demonstrates the impact of prompt design on bias patterns and shows that specific phrasing can influence bias patterns, and that reflection-type approaches (like Chain of Thought) can reduce biased outcomes effectively. Consistent with prior studies, we call for additional evaluation, scrutiny, and enhancement of LLMs used in clinical decision support applications.
[ "['Raphael Poulain' 'Hamed Fayyaz' 'Rahmatollah Beheshti']" ]
null
null
2404.15155
null
null
http://arxiv.org/pdf/2404.15155v1
2024-04-22T06:30:05Z
2024-04-22T06:30:05Z
Adaptive Collaboration Strategy for LLMs in Medical Decision Making
Foundation models have become invaluable in advancing the medical field. Despite their promise, the strategic deployment of LLMs for effective utility in complex medical tasks remains an open question. Our novel framework, Medical Decision-making Agents (MDAgents), aims to address this gap by automatically assigning an effective collaboration structure for LLMs. The assigned solo or group collaboration structure is tailored to the complexity of the medical task at hand, emulating real-world medical decision-making processes. We evaluate our framework and baseline methods with state-of-the-art LLMs across a suite of challenging medical benchmarks: MedQA, MedMCQA, PubMedQA, DDXPlus, PMC-VQA, Path-VQA, and MedVidQA, achieving the best performance in 5 out of 7 benchmarks that require an understanding of multi-modal medical reasoning. Ablation studies reveal that MDAgents excels in adapting the number of collaborating agents to optimize efficiency and accuracy, showcasing its robustness in diverse scenarios. We also explore the dynamics of group consensus, offering insights into how collaborative agents could behave in complex clinical team dynamics. Our code can be found at https://github.com/mitmedialab/MDAgents.
[ "['Yubin Kim' 'Chanwoo Park' 'Hyewon Jeong' 'Yik Siu Chan' 'Xuhai Xu'\n 'Daniel McDuff' 'Cynthia Breazeal' 'Hae Won Park']" ]
null
null
2404.15168
null
null
http://arxiv.org/pdf/2404.15168v1
2024-04-18T10:17:20Z
2024-04-18T10:17:20Z
Artificial Neural Networks to Recognize Speakers Division from Continuous Bengali Speech
Voice-based applications are ruling the era of automation because speech carries many factors that reveal a speaker's information as well as the speech content. Modern Automatic Speech Recognition (ASR) is a blessing in the field of Human-Computer Interaction (HCI), enabling efficient communication between humans and devices using Artificial Intelligence technology. Speech is one of the easiest mediums of communication because it carries many identifying features that differ across speakers. Nowadays it is possible to determine speakers and their identity from their speech through speaker recognition. In this paper, we present a method that provides a speaker's geographical identity in a certain region using continuous Bengali speech. We consider eight different divisions of Bangladesh as the geographical regions. We applied Mel Frequency Cepstral Coefficient (MFCC) and Delta features with an Artificial Neural Network to classify a speaker's division. We performed preprocessing tasks such as noise reduction and 8-10 second segmentation of the raw audio before feature extraction. We used our own dataset of more than 45 hours of audio data from 633 individual male and female speakers. We recorded the highest accuracy of 85.44%.
[ "['Hasmot Ali' 'Md. Fahad Hossain' 'Md. Mehedi Hasan' 'Sheikh Abujar'\n 'Sheak Rashed Haider Noori']" ]
null
null
2404.15176
null
null
http://arxiv.org/abs/2404.15176v1
2024-04-23T16:15:39Z
2024-04-23T16:15:39Z
Voice Passing : a Non-Binary Voice Gender Prediction System for evaluating Transgender voice transition
This paper presents software that describes voices using a continuous Voice Femininity Percentage (VFP). This system is intended for transgender speakers during their voice transition and for the voice therapists supporting them in this process. A corpus of 41 French cis- and transgender speakers was recorded. A perceptual evaluation allowed 57 participants to estimate the VFP for each voice. Binary gender classification models were trained on external gender-balanced data and used on overlapping windows to obtain average gender prediction estimates, which were calibrated to predict the VFP and obtained higher accuracy than $F_0$- or vocal tract length-based models. The speaking style of the training data and the DNN architecture were shown to impact VFP estimation. The accuracy of the models was affected by speakers' age. This highlights the importance of style, age, and the conception of gender as binary or not, in building adequate statistical representations of cultural concepts.
[ "['David Doukhan' 'Simon Devauchelle' 'Lucile Girard-Monneron'\n 'Mía Chávez Ruz' 'V. Chaddouk' 'Isabelle Wagner' 'Albert Rilliard']" ]
null
null
2404.15182
null
null
http://arxiv.org/pdf/2404.15182v1
2024-04-12T00:36:43Z
2024-04-12T00:36:43Z
FLoRA: Enhancing Vision-Language Models with Parameter-Efficient Federated Learning
In the rapidly evolving field of artificial intelligence, multimodal models, e.g., integrating vision and language into visual-language models (VLMs), have become pivotal for many applications, ranging from image captioning to multimodal search engines. Among these models, the Contrastive Language-Image Pre-training (CLIP) model has demonstrated remarkable performance in understanding and generating nuanced relationships between text and images. However, the conventional training of such models often requires centralized aggregation of vast datasets, posing significant privacy and data governance challenges. To address these concerns, this paper proposes a novel approach that leverages Federated Learning and parameter-efficient adapters, i.e., Low-Rank Adaptation (LoRA), to train VLMs. This methodology preserves data privacy by training models across decentralized data sources and ensures model adaptability and efficiency through LoRA's parameter-efficient fine-tuning. Our approach accelerates training time by up to 34.72 times and requires 2.47 times less memory usage than full fine-tuning.
[ "['Duy Phuong Nguyen' 'J. Pablo Munoz' 'Ali Jannesari']" ]
null
null
2404.15193
null
null
http://arxiv.org/abs/2404.15193v2
2024-05-17T09:21:03Z
2024-04-06T14:04:14Z
Structurally Flexible Neural Networks: Evolving the Building Blocks for General Agents
Artificial neural networks used for reinforcement learning are structurally rigid, meaning that each optimized parameter of the network is tied to its specific placement in the network structure. It also means that a network only works with pre-defined and fixed input and output sizes. This is a consequence of the number of optimized parameters being directly dependent on the structure of the network. Structural rigidity limits the ability to optimize parameters of policies across multiple environments that do not share input and output spaces. Here, we evolve a set of neurons and plastic synapses, each represented by a gated recurrent unit (GRU). During optimization, the parameters of these fundamental units of a neural network are optimized in different random structural configurations. Earlier work has shown that parameter sharing between units is important for making structurally flexible neurons. We show that it is possible to optimize a set of distinct neuron and synapse types, allowing for a mitigation of the symmetry dilemma. We demonstrate this by optimizing a single set of neurons and synapses to solve multiple reinforcement learning control tasks simultaneously.
[ "['Joachim Winther Pedersen' 'Erwan Plantec' 'Eleni Nisioti'\n 'Milton Montero' 'Sebastian Risi']" ]
null
null
2404.15197
null
null
http://arxiv.org/pdf/2404.15197v1
2024-04-05T21:12:25Z
2024-04-05T21:12:25Z
Multi-Task Learning as enabler for General-Purpose AI-native RAN
The realization of the data-driven AI-native architecture envisioned for 6G and beyond networks can eventually lead to multiple machine learning (ML) workloads distributed at the network edges driving downstream tasks like secondary carrier prediction, positioning, and channel prediction. The independent life-cycle management of multiple such edge-distributed workloads sharing a resource-constrained compute node, e.g., a base station (BS), is a challenge that will scale with denser deployments. This study explores the effectiveness of multi-task learning (MTL) approaches in facilitating a general-purpose AI-native Radio Access Network (RAN). The investigation focuses on four RAN tasks: (i) secondary carrier prediction, (ii) user location prediction, (iii) indoor link classification, and (iv) line-of-sight link classification. We validate the performance using realistic simulations considering multi-faceted design aspects of MTL, including model architecture, loss and gradient balancing strategies, distributed learning topology, data sparsity, and task groupings. The quantification and insights from simulations reveal that, for the four RAN tasks considered: (i) adopting a customized gate control-based expert architecture with uncertainty-based weighting makes MTL perform either best among all methods or on par with single task learning (STL); (ii) the LoS classification task helps other tasks in the MTL setting but its own performance is degraded; (iii) for sparse training data, training a single global MTL model is helpful, but MTL performance is on par with STL; (iv) an optimal set of group pairings exists for each task; and (v) partial federation is much better than full model federation in the MTL setting.
[ "['Hasan Farooq' 'Julien Forgeat' 'Shruti Bothe' 'Kristijonas Cyras'\n 'Md Moin']" ]
null
null
2404.15198
null
null
http://arxiv.org/pdf/2404.15198v1
2024-04-05T16:52:55Z
2024-04-05T16:52:55Z
Lossless and Near-Lossless Compression for Foundation Models
With the growth of model sizes and the scale of their deployment, their sheer size burdens the infrastructure, requiring more network bandwidth and more storage to accommodate them. While there is a vast literature about reducing model sizes, we investigate a more traditional type of compression, one that compresses the model to a smaller form and is coupled with a decompression algorithm that returns it to its original size, namely lossless compression. Somewhat surprisingly, we show that such lossless compression can gain significant network and storage reduction on popular models, at times reducing the model size by over 50%. We investigate the source of model compressibility, introduce compression variants tailored for models, and categorize models into compressibility groups. We also introduce a tunable lossy compression technique that can further reduce size even on the less compressible models with little to no effect on the model accuracy. We estimate that these methods could save over an ExaByte per month of network traffic downloaded from a large model hub like HuggingFace.
[ "['Moshik Hershcovitch' 'Leshem Choshen' 'Andrew Wood' 'Ilias Enmouri'\n 'Peter Chin' 'Swaminathan Sundararaman' 'Danny Harnik']" ]
null
null
2404.15199
null
null
http://arxiv.org/pdf/2404.15199v2
2024-05-23T05:12:00Z
2024-04-23T16:35:14Z
Reinforcement Learning with Adaptive Control Regularization for Safe Control of Critical Systems
Reinforcement Learning (RL) is a powerful method for controlling dynamic systems, but its learning mechanism can lead to unpredictable actions that undermine the safety of critical systems. Here, we propose RL with Adaptive Control Regularization (RL-ACR), an algorithm that enables safe RL exploration by combining the RL policy with a policy regularizer that hard-codes safety constraints. We perform policy combination via a "focus network," which determines the appropriate combination depending on the state -- relying more on the safe policy regularizer for less-exploited states while allowing unbiased convergence for well-exploited states. In a series of critical control applications, we demonstrate that RL-ACR ensures safety during training while achieving the performance standards of model-free RL approaches that disregard safety.
[ "['Haozhe Tian' 'Homayoun Hamedmoghadam' 'Robert Shorten' 'Pietro Ferraro']" ]
null
null
2404.15201
null
null
http://arxiv.org/pdf/2404.15201v3
2024-05-22T12:45:42Z
2024-04-23T16:35:59Z
CORE-BEHRT: A Carefully Optimized and Rigorously Evaluated BEHRT
BERT-based models for Electronic Health Records (EHR) have surged in popularity following the release of BEHRT and Med-BERT. Subsequent models have largely built on these foundations despite the fundamental design choices of these pioneering models remaining underexplored. To address this issue, we introduce CORE-BEHRT, a Carefully Optimized and Rigorously Evaluated BEHRT. Through incremental optimization, we isolate the sources of improvement for key design choices, giving us insights into the effect of data representation and individual technical components on performance. Evaluating this across a set of generic tasks (death, pain treatment, and general infection), we showed that improving data representation can increase the average downstream performance from 0.785 to 0.797 AUROC, primarily when including medication and timestamps. Improving the architecture and training protocol on top of this increased average downstream performance to 0.801 AUROC. We then demonstrated the consistency of our optimization through a rigorous evaluation across 25 diverse clinical prediction tasks. We observed significant performance increases in 17 out of 25 tasks and improvements in 24 tasks, highlighting the generalizability of our findings. Our findings provide a strong foundation for future work and aim to increase the trustworthiness of BERT-based EHR models.
[ "['Mikkel Odgaard' 'Kiril Vadimovic Klein' 'Sanne Møller Thysen'\n 'Espen Jimenez-Solem' 'Martin Sillesen' 'Mads Nielsen']" ]
null
null
2404.15204
null
null
http://arxiv.org/pdf/2404.15204v1
2024-04-15T10:35:50Z
2024-04-15T10:35:50Z
Towards a high-performance AI compiler with upstream MLIR
This work proposes a compilation flow using open-source compiler passes to build a framework to achieve ninja performance from a generic linear algebra high-level abstraction. We demonstrate this flow with a proof-of-concept MLIR project that uses input IR in Linalg-on-Tensor from TensorFlow and PyTorch, performs cache-level optimizations and lowering to micro-kernels for efficient vectorization, achieving over 90% of the performance of ninja-written equivalent programs. The contributions of this work include: (1) Packing primitives on the tensor dialect and passes for cache-aware distribution of tensors (single and multi-core) and type-aware instructions (VNNI, BFDOT, BFMMLA), including propagation of shapes across the entire function; (2) A linear algebra pipeline, including tile, fuse and bufferization strategies to get model-level IR into hardware friendly tile calls; (3) A mechanism for micro-kernel lowering to an open source library that supports various CPUs.
[ "['Renato Golin' 'Lorenzo Chelini' 'Adam Siemieniuk' 'Kavitha Madhu'\n 'Niranjan Hasabnis' 'Hans Pabst' 'Evangelos Georganas'\n 'Alexander Heinecke']" ]
null
null
2404.15207
null
null
http://arxiv.org/abs/2404.15207v1
2024-04-07T23:03:23Z
2024-04-07T23:03:23Z
Simulation-Free Determination of Microstructure Representative Volume Element Size via Fisher Scores
A representative volume element (RVE) is a reasonably small unit of microstructure that can be simulated to obtain the same effective properties as the entire microstructure sample. Finite element (FE) simulation of RVEs, as opposed to much larger samples, saves computational expense, especially in multiscale modeling. Therefore, it is desirable to have a framework that determines RVE size prior to FE simulations. Existing methods select the RVE size based on when the FE-simulated properties of samples of increasing size converge with insignificant statistical variations, with the drawback that many samples must be simulated. We propose a simulation-free alternative that determines RVE size based only on a micrograph. The approach utilizes a machine learning model trained to implicitly characterize the stochastic nature of the input micrograph. The underlying rationale is to view RVE size as the smallest moving window size for which the stochastic nature of the microstructure within the window is stationary as the window moves across a large micrograph. For this purpose, we adapt a recently developed Fisher score-based framework for microstructure nonstationarity monitoring. Because the resulting RVE size is based solely on the micrograph and does not involve any FE simulation of specific properties, it constitutes an RVE for any property of interest that solely depends on the microstructure characteristics. Through numerical experiments of simple and complex microstructures, we validate our approach and show that our selected RVE sizes are consistent with when the chosen FE-simulated properties converge.
[ "['Wei Liu' 'Satyajit Mojumder' 'Wing Kam Liu' 'Wei Chen' 'Daniel W. Apley']" ]
null
null
2404.15209
null
null
http://arxiv.org/pdf/2404.15209v1
2024-04-01T02:20:09Z
2024-04-01T02:20:09Z
Data-Driven Knowledge Transfer in Batch $Q^*$ Learning
In data-driven decision-making in marketing, healthcare, and education, it is desirable to utilize a large amount of data from existing ventures to navigate high-dimensional feature spaces and address data scarcity in new ventures. We explore knowledge transfer in dynamic decision-making by concentrating on batch stationary environments and formally defining task discrepancies through the lens of Markov decision processes (MDPs). We propose a framework of Transferred Fitted $Q$-Iteration algorithm with general function approximation, enabling the direct estimation of the optimal action-state function $Q^*$ using both target and source data. We establish the relationship between statistical performance and MDP task discrepancy under sieve approximation, shedding light on the impact of source and target sample sizes and task discrepancy on the effectiveness of knowledge transfer. We show that the final learning error of the $Q^*$ function is significantly improved from the single task rate both theoretically and empirically.
[ "['Elynn Chen' 'Xi Chen' 'Wenbo Jing']" ]
null
null
2404.15211
null
null
http://arxiv.org/abs/2404.15211v2
2024-06-04T04:34:24Z
2024-03-29T04:54:22Z
LACS: Learning-Augmented Algorithms for Carbon-Aware Resource Scaling with Uncertain Demand
Motivated by an imperative to reduce the carbon emissions of cloud data centers, this paper studies the online carbon-aware resource scaling problem with unknown job lengths (OCSU) and applies it to carbon-aware resource scaling for executing computing workloads. The task is to dynamically scale resources (e.g., the number of servers) assigned to a job of unknown length such that it is completed before a deadline, with the objective of reducing the carbon emissions of executing the workload. The total carbon emissions of executing a job originate from the emissions of running the job and excess carbon emitted while switching between different scales (e.g., due to checkpoint and resume). Prior work on carbon-aware resource scaling has assumed accurate job length information, while other approaches have ignored switching losses and require carbon intensity forecasts. These assumptions prohibit the practical deployment of prior work for online carbon-aware execution of scalable computing workloads. We propose LACS, a theoretically robust learning-augmented algorithm that solves OCSU. To achieve improved practical average-case performance, LACS integrates machine-learned predictions of job length. To achieve solid theoretical performance, LACS extends the recent theoretical advances on online conversion with switching costs to handle a scenario where the job length is unknown. Our experimental evaluations demonstrate that, on average, the carbon footprint of LACS lies within 1.2% of the online baseline that assumes perfect job length information and within 16% of the offline baseline that, in addition to the job length, also requires accurate carbon intensity forecasts. Furthermore, LACS achieves a 32% reduction in carbon footprint compared to the deadline-aware carbon-agnostic execution of the job.
[ "['Roozbeh Bostandoost' 'Adam Lechowicz' 'Walid A. Hanafy' 'Noman Bashir'\n 'Prashant Shenoy' 'Mohammad Hajiesmaili']" ]
null
null
2404.15213
null
null
http://arxiv.org/pdf/2404.15213v1
2024-03-28T10:15:10Z
2024-03-28T10:15:10Z
Automatic Classification of Subjective Time Perception Using Multi-modal Physiological Data of Air Traffic Controllers
One indicator of well-being can be a person's subjective time perception. In our project ChronoPilot, we aim to develop a device that modulates human subjective time perception. In this study, we present a method to automatically assess the subjective time perception of air traffic controllers, a group often faced with demanding conditions, using their physiological data and eleven state-of-the-art machine learning classifiers. The physiological data consist of photoplethysmogram, electrodermal activity, and temperature data. We find that the support vector classifier works best, with an accuracy of 79%, and that electrodermal activity provides the most descriptive biomarker. These findings are an important step towards closing the feedback loop of our ChronoPilot device to automatically modulate the user's subjective time perception. This technological advancement may promise improvements in task management, stress reduction, and overall productivity in high-stakes professions.
[ "['Till Aust' 'Eirini Balta' 'Argiro Vatakis' 'Heiko Hamann']" ]
null
null
2404.15217
null
null
http://arxiv.org/pdf/2404.15217v1
2024-03-24T21:34:36Z
2024-03-24T21:34:36Z
Towards Large-Scale Training of Pathology Foundation Models
Driven by the recent advances in deep learning methods and, in particular, by the development of modern self-supervised learning algorithms, increased interest and efforts have been devoted to building foundation models (FMs) for medical images. In this work, we present our scalable training pipeline for large pathology imaging data, and a comprehensive analysis of various hyperparameter choices and training techniques for building pathology FMs. We release and make publicly available the first batch of our pathology FMs (https://github.com/kaiko-ai/towards_large_pathology_fms) trained on open-access TCGA whole slide images, a commonly used collection of pathology images. The experimental evaluation shows that our models reach state-of-the-art performance on various patch-level downstream tasks, ranging from breast cancer subtyping to colorectal nuclear segmentation. Finally, to unify the evaluation approaches used in the field and to simplify future comparisons of different FMs, we present an open-source framework (https://github.com/kaiko-ai/eva) designed for the consistent evaluation of pathology FMs across various downstream tasks.
[ "['kaiko. ai' 'Nanne Aben' 'Edwin D. de Jong' 'Ioannis Gatopoulos'\n 'Nicolas Känzig' 'Mikhail Karasikov' 'Axel Lagré' 'Roman Moser'\n 'Joost van Doorn' 'Fei Tang']" ]
null
null
2404.15224
null
null
http://arxiv.org/pdf/2404.15224v1
2024-04-23T16:54:31Z
2024-04-23T16:54:31Z
Deep Models for Multi-View 3D Object Recognition: A Review
Human decision-making often relies on visual information from multiple perspectives or views. In contrast, machine learning-based object recognition utilizes information from a single image of the object. However, the information conveyed by a single image may not be sufficient for accurate decision-making, particularly in complex recognition problems. The utilization of multi-view 3D representations for object recognition has thus far demonstrated the most promising results for achieving state-of-the-art performance. This review paper comprehensively covers recent progress in multi-view 3D object recognition methods for 3D classification and retrieval tasks. Specifically, we focus on deep learning-based and transformer-based techniques, as they are widely utilized and have achieved state-of-the-art performance. We provide detailed information about existing deep learning-based and transformer-based multi-view 3D object recognition models, including the most commonly used 3D datasets, camera configurations and number of views, view selection strategies, pre-trained CNN architectures, fusion strategies, and recognition performance on 3D classification and 3D retrieval tasks. Additionally, we examine various computer vision applications that use multi-view classification. Finally, we highlight key findings and future directions for developing multi-view 3D object recognition methods to provide readers with a comprehensive understanding of the field.
[ "['Mona Alzahrani' 'Muhammad Usman' 'Salma Kammoun' 'Saeed Anwar'\n 'Tarek Helmy']" ]
null
null
2404.15225
null
null
http://arxiv.org/pdf/2404.15225v1
2024-04-23T16:54:56Z
2024-04-23T16:54:56Z
PHLP: Sole Persistent Homology for Link Prediction -- Interpretable Feature Extraction
Link prediction (LP), inferring the connectivity between nodes, is a significant research area in graph data, where a link represents essential information on relationships between nodes. Although graph neural network (GNN)-based models have achieved high performance in LP, understanding why they perform well is challenging because most comprise complex neural networks. We employ persistent homology (PH), a topological data analysis method that helps analyze the topological information of graphs, to explain the reasons for the high performance. We propose a novel method that employs PH for LP (PHLP) focusing on how the presence or absence of target links influences the overall topology. The PHLP utilizes the angle hop subgraph and new node labeling called degree double radius node labeling (Degree DRNL), distinguishing the information of graphs better than DRNL. Using only a classifier, PHLP performs similarly to state-of-the-art (SOTA) models on most benchmark datasets. Incorporating the outputs calculated using PHLP into the existing GNN-based SOTA models improves performance across all benchmark datasets. To the best of our knowledge, PHLP is the first method of applying PH to LP without GNNs. The proposed approach, employing PH while not relying on neural networks, enables the identification of crucial factors for improving performance.
[ "['Junwon You' 'Eunwoo Heo' 'Jae-Hun Jung']" ]
null
null
2404.15242
null
null
http://arxiv.org/pdf/2404.15242v1
2024-04-23T17:25:35Z
2024-04-23T17:25:35Z
A Hybrid Kernel-Free Boundary Integral Method with Operator Learning for Solving Parametric Partial Differential Equations In Complex Domains
The Kernel-Free Boundary Integral (KFBI) method presents an iterative solution to boundary integral equations arising from elliptic partial differential equations (PDEs). This method effectively addresses elliptic PDEs on irregular domains, including the modified Helmholtz, Stokes, and elasticity equations. The rapid evolution of neural networks and deep learning has invigorated the exploration of numerical PDEs, and increasing interest is observed in deep learning approaches that seamlessly integrate mathematical principles for investigating numerical PDEs. We propose a hybrid KFBI method, integrating the foundational principles of the KFBI method with the capabilities of deep learning. This approach, within the framework of the boundary integral method, designs a network to approximate the solution operator for the corresponding integral equations by mapping the parameters, inhomogeneous terms, and boundary information of PDEs to the boundary density functions, which can be regarded as the solution of the integral equations. The models are trained using data generated by the Cartesian grid-based KFBI algorithm and exhibit robust generalization capabilities, accurately predicting density functions across diverse boundary conditions and parameters within the same class of equations. Experimental results demonstrate that the trained model can directly infer the boundary density function with satisfactory precision, obviating the need for iterative steps in solving boundary integral equations. Furthermore, applying the inference results of the model as initial values for iterations is also reasonable; this approach can retain the inherent second-order accuracy of the KFBI method while accelerating the traditional KFBI approach by reducing the number of iterations by about 50%.
[ "['Shuo Ling' 'Liwei Tan' 'Wenjun Ying']" ]
null
null
2404.15243
null
null
http://arxiv.org/pdf/2404.15243v1
2024-03-10T09:56:02Z
2024-03-10T09:56:02Z
UCINet0: A Machine Learning based Receiver for 5G NR PUCCH Format 0
Accurate decoding of Uplink Control Information (UCI) on the Physical Uplink Control Channel (PUCCH) is essential for enabling 5G wireless links. This paper explores an AI/ML-based receiver design for PUCCH Format 0. Format 0 signaling encodes the UCI content within the phase of a known base waveform and even supports multiplexing of up to 12 users within the same time-frequency resources. Our first-of-a-kind neural network classifier, which we term UCINet0, is capable of predicting when no user is transmitting on the PUCCH, as well as decoding the UCI content of any number of multiplexed users, up to 12. Inference results with both simulated and hardware-captured field datasets show that the UCINet0 model outperforms conventional DFT-based decoders across all SNR ranges.
[ "['Anil Kumar Yerrapragada' 'Jeeva Keshav Sattianarayanin'\n 'Radha Krishna Ganti']" ]
null
null
2404.15244
null
null
http://arxiv.org/pdf/2404.15244v1
2024-04-23T17:26:34Z
2024-04-23T17:26:34Z
Efficient Transformer Encoders for Mask2Former-style models
Vision transformer based models bring significant improvements for image segmentation tasks. Although these architectures offer powerful capabilities irrespective of specific segmentation tasks, their use of computational resources can be taxing on deployed devices. One way to overcome this challenge is by adapting the computation level to the specific needs of the input image rather than the current one-size-fits-all approach. To this end, we introduce ECO-M2F, or EffiCient TransfOrmer Encoders for Mask2Former-style models. Noting that the encoder module of M2F-style models incurs high resource-intensive computations, ECO-M2F provides a strategy to self-select the number of hidden layers in the encoder, conditioned on the input image. To enable this self-selection ability for providing a balance between performance and computational efficiency, we present a three-step recipe. The first step is to train the parent architecture to enable early exiting from the encoder. The second step is to create a derived dataset of the ideal number of encoder layers required for each training example. The third step is to use the aforementioned derived dataset to train a gating network that predicts the number of encoder layers to be used, conditioned on the input image. Additionally, to change the computational-accuracy tradeoff, only steps two and three need to be repeated, which significantly reduces retraining time. Experiments on the public datasets show that the proposed approach reduces expected encoder computational cost while maintaining performance, adapts to various user compute resources, is flexible in architecture configurations, and can be extended beyond the segmentation task to object detection.
[ "['Manyi Yao' 'Abhishek Aich' 'Yumin Suh' 'Amit Roy-Chowdhury'\n 'Christian Shelton' 'Manmohan Chandraker']" ]
null
null
2404.15245
null
null
http://arxiv.org/pdf/2404.15245v2
2024-07-04T02:15:38Z
2024-04-23T17:26:59Z
Mining Invariance from Nonlinear Multi-Environment Data: Binary Classification
Making predictions in an unseen environment given data from multiple training environments is a challenging task. We approach this problem from an invariance perspective, focusing on binary classification to shed light on general nonlinear data generation mechanisms. We identify a unique form of invariance that exists solely in a binary setting that allows us to train models invariant over environments. We provide sufficient conditions for such invariance and show it is robust even when environmental conditions vary greatly. Our formulation admits a causal interpretation, allowing us to compare it with various frameworks. Finally, we propose a heuristic prediction method and conduct experiments using real and synthetic datasets.
[ "['Austin Goddard' 'Kang Du' 'Yu Xiang']" ]
null
null
2404.15247
null
null
http://arxiv.org/pdf/2404.15247v2
2024-06-06T18:18:21Z
2024-04-23T17:32:24Z
XFT: Unlocking the Power of Code Instruction Tuning by Simply Merging Upcycled Mixture-of-Experts
We introduce XFT, a simple yet powerful training scheme, by simply merging upcycled Mixture-of-Experts (MoE) to unleash the performance limit of instruction-tuned code Large Language Models (LLMs). While vanilla sparse upcycling fails to improve instruction tuning, XFT introduces a shared expert mechanism with a novel routing weight normalization strategy into sparse upcycling, which significantly boosts instruction tuning. After fine-tuning the upcycled MoE model, XFT introduces a learnable model merging mechanism to compile the upcycled MoE model back to a dense model, achieving upcycled MoE-level performance with only dense-model compute. By applying XFT to a 1.3B model, we create a new state-of-the-art tiny code LLM (<3B) with 67.1 and 64.6 pass@1 on HumanEval and HumanEval+ respectively. With the same data and model architecture, XFT improves supervised fine-tuning (SFT) by 13% on HumanEval+, along with consistent improvements from 2% to 13% on MBPP+, MultiPL-E, and DS-1000, demonstrating its generalizability. XFT is fully orthogonal to existing techniques such as Evol-Instruct and OSS-Instruct, opening a new dimension for improving code instruction tuning. Codes are available at https://github.com/ise-uiuc/xft.
[ "['Yifeng Ding' 'Jiawei Liu' 'Yuxiang Wei' 'Terry Yue Zhuo'\n 'Lingming Zhang']" ]
null
null
2404.15255
null
null
http://arxiv.org/pdf/2404.15255v1
2024-04-23T17:42:29Z
2024-04-23T17:42:29Z
How to use and interpret activation patching
Activation patching is a popular mechanistic interpretability technique, but has many subtleties regarding how it is applied and how one may interpret the results. We provide a summary of advice and best practices, based on our experience using this technique in practice. We include an overview of the different ways to apply activation patching and a discussion on how to interpret the results. We focus on what evidence patching experiments provide about circuits, and on the choice of metric and associated pitfalls.
[ "['Stefan Heimersheim' 'Neel Nanda']" ]
null
null
2404.15258
null
null
http://arxiv.org/pdf/2404.15258v1
2024-04-23T17:45:53Z
2024-04-23T17:45:53Z
Score matching for sub-Riemannian bridge sampling
Simulation of conditioned diffusion processes is an essential tool in inference for stochastic processes, data imputation, generative modelling, and geometric statistics. Whilst simulating diffusion bridge processes is already difficult on Euclidean spaces, when considering diffusion processes on Riemannian manifolds the geometry brings in further complications. In even higher generality, advancing from Riemannian to sub-Riemannian geometries introduces hypoellipticity, and the possibility of finding appropriate explicit approximations for the score of the diffusion process is removed. We handle these challenges and construct a method for bridge simulation on sub-Riemannian manifolds by demonstrating how recent progress in machine learning can be modified to allow for training of score approximators on sub-Riemannian manifolds. Since gradients depend on the horizontal distribution, we generalise the usual notion of denoising loss to work with non-holonomic frames using a stochastic Taylor expansion, and we demonstrate the resulting scheme both explicitly on the Heisenberg group and more generally using adapted coordinates. We perform numerical experiments exemplifying samples from the bridge process on the Heisenberg group and the concentration of this process for small time.
[ "['Erlend Grong' 'Karen Habermann' 'Stefan Sommer']" ]
null
null
2404.15261
null
null
http://arxiv.org/pdf/2404.15261v2
2024-04-26T17:25:56Z
2024-04-23T17:50:52Z
All You Need is Resistance: On the Equivalence of Effective Resistance and Certain Optimal Transport Problems on Graphs
The fields of effective resistance and optimal transport on graphs are filled with rich connections to combinatorics, geometry, machine learning, and beyond. In this article we put forth a bold claim: that the two fields should be understood as one and the same, up to a choice of $p$. We make this claim precise by introducing the parameterized family of $p$-Beckmann distances for probability measures on graphs and relate them sharply to certain Wasserstein distances. Then, we break open a suite of results including explicit connections to optimal stopping times and random walks on graphs, graph Sobolev spaces, and a Benamou-Brenier type formula for $2$-Beckmann distance. We further explore empirical implications in the world of unsupervised learning for graph data and propose further study of the usage of these metrics where Wasserstein distance may produce computational bottlenecks.
[ "['Sawyer Robertson' 'Zhengchao Wan' 'Alexander Cloninger']" ]
null
null
2404.15269
null
null
http://arxiv.org/pdf/2404.15269v2
2024-06-09T21:45:09Z
2024-04-23T17:57:47Z
Aligning LLM Agents by Learning Latent Preference from User Edits
We study interactive learning of LLM-based language agents based on user edits made to the agent's output. In a typical setting such as writing assistants, the user interacts with a language agent to generate a response given a context, and may optionally edit the agent response to personalize it based on their latent preference, in addition to improving the correctness. The edit feedback is naturally generated, making it a suitable candidate for improving the agent's alignment with the user's preference, and for reducing the cost of user edits over time. We propose a learning framework, PRELUDE that infers a description of the user's latent preference based on historic edit data. The inferred user preference descriptions are used to define prompts for generating responses in the future. This avoids fine-tuning the agent, which is costly, challenging to scale with the number of users, and may even degrade its performance on other tasks. Furthermore, learning descriptive preference improves interpretability, allowing the user to view and modify the learned preference. However, user preference can be complex, subtle, and vary based on context, making it challenging to learn. To address this, we propose a simple yet effective algorithm named CIPHER that leverages the LLM to infer the user preference for a given context based on user edits. In the future, CIPHER retrieves inferred preferences from the k-closest contexts in the history, and forms an aggregate preference for response generation. We introduce two interactive environments -- summarization and email writing, and use a GPT-4 simulated user for evaluation. On both tasks, CIPHER outperforms several baselines by achieving the lowest edit distance cost while only having a small overhead in LLM query cost. Our analysis reports that user preferences learned by CIPHER show significant similarity to the ground truth latent preferences.
[ "['Ge Gao' 'Alexey Taymanov' 'Eduardo Salinas' 'Paul Mineiro'\n 'Dipendra Misra']" ]
null
null
2404.15273
null
null
http://arxiv.org/pdf/2404.15273v1
2024-04-23T17:59:09Z
2024-04-23T17:59:09Z
Estimation Network Design framework for efficient distributed optimization
Distributed decision problems feature a group of agents that can only communicate over a peer-to-peer network, without a central memory. In applications such as network control and data ranking, each agent is only affected by a small portion of the decision vector: this sparsity is typically ignored in distributed algorithms, although it could be leveraged to improve efficiency and scalability. To address this issue, our recent paper introduces Estimation Network Design (END), a graph-theoretical language for the analysis and design of distributed iterations. END algorithms can be tuned to exploit the sparsity of specific problem instances, reducing communication overhead and minimizing redundancy, yet without requiring case-by-case convergence analysis. In this paper, we showcase the flexibility of END in the context of distributed optimization. In particular, we study the sparsity-aware version of many established methods, including ADMM, AugDGM and Push-Sum DGD. Simulations on an estimation problem in sensor networks demonstrate that END algorithms can boost convergence speed and greatly reduce communication and memory costs.
[ "['Mattia Bianchi' 'Sergio Grammatico']" ]
null
null
2404.15274
null
null
http://arxiv.org/pdf/2404.15274v2
2024-07-02T03:31:16Z
2024-04-23T17:59:12Z
Metric-guided Image Reconstruction Bounds via Conformal Prediction
Recent advancements in machine learning have led to the development of novel medical imaging systems and algorithms that address ill-posed problems. Assessing their trustworthiness and understanding how to deploy them safely at test time remains an important and open problem. In this work, we propose using conformal prediction to compute valid and distribution-free bounds on downstream metrics given reconstructions generated by one algorithm, and to retrieve upper/lower bounds and inlier/outlier reconstructions according to the adjusted bounds. Our work offers 1) test-time image reconstruction evaluation without ground truth, 2) downstream performance guarantees, 3) meaningful upper/lower bound reconstructions, and 4) meaningful statistical inlier/outlier reconstructions. We demonstrate our method on post-mastectomy radiotherapy planning using 3D breast CT reconstructions, and show 1) that metric-guided bounds have valid coverage for downstream metrics while conventional pixel-wise bounds do not, and 2) anatomical differences of upper/lower bounds between metric-guided and pixel-wise methods. Our work paves the way for more meaningful and trustworthy test-time evaluation of medical image reconstructions. Code available at https://github.com/matthewyccheung/conformal-metric
[ "['Matt Y Cheung' 'Tucker J Netherton' 'Laurence E Court'\n 'Ashok Veeraraghavan' 'Guha Balakrishnan']" ]
null
null
2404.15276
null
null
http://arxiv.org/pdf/2404.15276v1
2024-04-23T17:59:59Z
2024-04-23T17:59:59Z
SMPLer: Taming Transformers for Monocular 3D Human Shape and Pose Estimation
Existing Transformers for monocular 3D human shape and pose estimation typically have a quadratic computation and memory complexity with respect to the feature length, which hinders the exploitation of fine-grained information in high-resolution features that is beneficial for accurate reconstruction. In this work, we propose an SMPL-based Transformer framework (SMPLer) to address this issue. SMPLer incorporates two key ingredients: a decoupled attention operation and an SMPL-based target representation, which allow effective utilization of high-resolution features in the Transformer. In addition, based on these two designs, we also introduce several novel modules including a multi-scale attention and a joint-aware attention to further boost the reconstruction performance. Extensive experiments demonstrate the effectiveness of SMPLer against existing 3D human shape and pose estimation methods both quantitatively and qualitatively. Notably, the proposed algorithm achieves an MPJPE of 45.2 mm on the Human3.6M dataset, improving upon Mesh Graphormer by more than 10% with fewer than one-third of the parameters. Code and pretrained models are available at https://github.com/xuxy09/SMPLer.
[ "['Xiangyu Xu' 'Lijuan Liu' 'Shuicheng Yan']" ]
null
null
2404.15289
null
null
http://arxiv.org/abs/2404.15289v2
2024-05-20T08:54:19Z
2024-03-20T15:04:21Z
EEGDiR: Electroencephalogram denoising network for temporal information storage and global modeling through Retentive Network
Electroencephalogram (EEG) signals play a pivotal role in clinical medicine, brain research, and neurological disease studies. However, susceptibility to various physiological and environmental artifacts introduces noise in recorded EEG data, impeding accurate analysis of underlying brain activity. Denoising techniques are crucial to mitigate this challenge. Recent advancements in deep learning-based approaches exhibit substantial potential for enhancing the signal-to-noise ratio of EEG data compared to traditional methods. In the realm of large-scale language models (LLMs), the Retentive Network (Retnet) infrastructure, prevalent in some models, demonstrates robust feature extraction and global modeling capabilities. Recognizing the temporal similarities between EEG signals and natural language, we introduce the Retnet from natural language processing to EEG denoising. This integration presents a novel approach to EEG denoising, opening avenues for a profound understanding of brain activities and accurate diagnosis of neurological diseases. Nonetheless, direct application of Retnet to EEG denoising is unfeasible due to the one-dimensional nature of EEG signals, while natural language processing deals with two-dimensional data. To facilitate Retnet application to EEG denoising, we propose the signal embedding method, transforming one-dimensional EEG signals into two dimensions for use as network inputs. Experimental results validate the substantial improvement in denoising effectiveness achieved by the proposed method.
[ "['Bin Wang' 'Fei Deng' 'Peifan Jiang']" ]
null
null
2404.15294
null
null
http://arxiv.org/pdf/2404.15294v1
2024-03-25T16:23:43Z
2024-03-25T16:23:43Z
Multimodal Physical Fitness Monitoring (PFM) Framework Based on TimeMAE-PFM in Wearable Scenarios
Physical function monitoring (PFM) plays a crucial role in healthcare especially for the elderly. Traditional assessment methods such as the Short Physical Performance Battery (SPPB) have failed to capture the full dynamic characteristics of physical function. Wearable sensors such as smart wristbands offer a promising solution to this issue. However, challenges exist, such as the computational complexity of machine learning methods and inadequate information capture. This paper proposes a multi-modal PFM framework based on an improved TimeMAE, which compresses time-series data into a low-dimensional latent space and integrates a self-enhanced attention module. This framework achieves effective monitoring of physical health, providing a solution for real-time and personalized assessment. The method is validated using the NHATS dataset, and the results demonstrate an accuracy of 70.6% and an AUC of 82.20%, surpassing other state-of-the-art time-series classification models.
[ "['Junjie Zhang' 'Zheming Zhang' 'Huachen Xiang' 'Yangquan Tan'\n 'Linnan Huo' 'Fengyi Wang']" ]
null
null
2404.15296
null
null
http://arxiv.org/pdf/2404.15296v1
2024-03-26T15:16:01Z
2024-03-26T15:16:01Z
Maximum Discrepancy Generative Regularization and Non-Negative Matrix Factorization for Single Channel Source Separation
The idea of adversarial learning of regularization functionals has recently been introduced in the wider context of inverse problems. The intuition behind this method is the realization that it is not only necessary to learn the basic features that make up a class of signals one wants to represent, but also, or even more so, which features to avoid in the representation. In this paper, we apply this approach to the training of generative models, leading to what we call Maximum Discrepancy Generative Regularization. In particular, we apply this to the problem of source separation by means of Non-negative Matrix Factorization (NMF) and present a new method for the adversarial training of NMF bases. We show in numerical experiments, both for image and audio separation, that this leads to a clear improvement of the reconstructed signals, in particular in the case where little or no strong supervision data is available.
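As a point of reference for the separation step, here is a small NumPy sketch of NMF-based single-channel separation with fixed, pre-trained bases; the paper's contribution is the adversarial training of those bases, which this sketch does not include.

```python
import numpy as np

def nmf_activations(V, W, n_iter=200, eps=1e-9):
    """Solve V ≈ W @ H for H >= 0 with the bases W held fixed
    (Euclidean multiplicative updates)."""
    rng = np.random.default_rng(0)
    H = rng.random((W.shape[1], V.shape[1]))
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
    return H

# With per-source bases W1, W2 (trained adversarially in the paper):
# W = np.hstack([W1, W2]); H = nmf_activations(V_mix, W)
# V1_hat = W1 @ H[:W1.shape[1]]   # estimate of source 1's spectrogram
```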
[ "['Martin Ludvigsen' 'Markus Grasmair']" ]
null
null
2404.15297
null
null
http://arxiv.org/pdf/2404.15297v2
2024-04-28T07:58:27Z
2024-03-26T15:30:56Z
Multi-stream Transmission for Directional Modulation Network via Distributed Multi-UAV-aided Multi-active-IRS
Active intelligent reflecting surface (IRS) is a revolutionary technique for future 6G networks. Conventional far-field single-IRS-aided directional modulation (DM) networks have only one (no direct path) or two (with a direct path) degrees of freedom (DoFs). This means that only one or two streams can be transmitted simultaneously from base station to user, which seriously limits the rate gain achieved by the IRS. How can more than two DoFs be created for DM? In this paper, a single large-scale IRS is divided into multiple small IRSs, and a novel multi-IRS-aided multi-stream DM network is proposed to achieve point-to-point multi-stream transmission by creating $K$ ($\geq 3$) DoFs, where the multiple small IRSs are placed distributively via multiple unmanned aerial vehicles (UAVs). Null-space projection, zero-forcing (ZF), and phase alignment are adopted to design the transmit beamforming vector, receive beamforming vector, and phase shift matrix (PSM), respectively, in a scheme called NSP-ZF-PA. Here, the $K$ PSMs and their corresponding beamforming vectors are independently optimized. The weighted minimum mean-square error (WMMSE) algorithm is applied in an alternating iteration over the optimization variables by introducing a power constraint on the IRS, named WMMSE-PC, where the majorization-minimization (MM) algorithm is used to solve for the total PSM. To achieve lower computational complexity, a maximum-trace method, called Max-TR-SVD, is proposed by optimizing the PSMs of all IRSs. Numerical simulation results have shown that the proposed NSP-ZF-PA performs much better than Max-TR-SVD in terms of rate. In particular, the rate of NSP-ZF-PA with sixteen small IRSs is about five times that of NSP-ZF-PA with all small IRSs combined into a single large IRS. Thus, a dramatic rate enhancement may be achieved by multiple distributed IRSs.
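To illustrate why null-space projection yields interference-free streams, here is a toy NumPy sketch that steers each stream's transmit vector into the null space of the other streams' channels; the dimensions and random channels are illustrative, and the IRS phase-shift matrices of the actual system are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
K, M = 3, 8                                        # streams, transmit antennas
H = rng.normal(size=(K, M)) + 1j * rng.normal(size=(K, M))   # per-stream channels

def nullspace_beamformer(H, k):
    H_others = np.delete(H, k, axis=0)
    _, _, Vh = np.linalg.svd(H_others)             # full SVD of the other channels
    null_basis = Vh[H_others.shape[0]:].conj().T   # basis of their null space
    v = null_basis @ (null_basis.conj().T @ H[k].conj())  # project stream k in
    return v / np.linalg.norm(v)

V = np.stack([nullspace_beamformer(H, k) for k in range(K)], axis=1)
print(np.round(np.abs(H @ V), 6))                  # ~diagonal: no cross-stream leakage
```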
[ "['Ke Yang' 'Rongen Dong' 'Wei Gao' 'Feng Shu' 'Weiping Shi' 'Yan Wang'\n 'Xuehui Wang' 'Jiangzhou Wang']" ]
null
null
2404.15305
null
null
http://arxiv.org/pdf/2404.15305v1
2024-03-29T08:48:07Z
2024-03-29T08:48:07Z
ADAPT^2: Adapting Pre-Trained Sensing Models to End-Users via Self-Supervision Replay
Self-supervised learning has emerged as a method for utilizing massive unlabeled data for pre-training models, providing an effective feature extractor for various mobile sensing applications. However, when deployed to end-users, these models encounter significant domain shifts attributed to user diversity. We investigate the performance degradation that occurs when self-supervised models are fine-tuned in heterogeneous domains. To address the issue, we propose ADAPT^2, a few-shot domain adaptation framework for personalizing self-supervised models. ADAPT^2 proposes self-supervised meta-learning for initial model pre-training, followed by user-side model adaptation that replays the self-supervision with user-specific data. This allows models to adjust their pre-trained representations to the user with only a few samples. Evaluation with four benchmarks demonstrates that ADAPT^2 outperforms existing baselines by an average F1-score of 8.8%p. Our on-device computational overhead analysis on a commodity off-the-shelf (COTS) smartphone shows that ADAPT^2 completes adaptation within an unobtrusive latency (under three minutes) with only 9.54% memory consumption, demonstrating the computational efficiency of the proposed method.
[ "['Hyungjun Yoon' 'Jaehyun Kwak' 'Biniyam Aschalew Tolera' 'Gaole Dai'\n 'Mo Li' 'Taesik Gong' 'Kimin Lee' 'Sung-Ju Lee']" ]
null
null
2404.15308
null
null
http://arxiv.org/pdf/2404.15308v1
2024-03-29T23:22:30Z
2024-03-29T23:22:30Z
Label-Efficient Sleep Staging Using Transformers Pre-trained with Position Prediction
Sleep staging is a clinically important task for diagnosing various sleep disorders but remains challenging to deploy at scale because it is both labor-intensive and time-consuming. Supervised deep learning-based approaches can automate sleep staging but at the expense of large labeled datasets, which can be infeasible to procure for various settings, e.g., uncommon sleep disorders. While self-supervised learning (SSL) can mitigate this need, recent studies on SSL for sleep staging have shown that performance gains saturate after training with labeled data from only tens of subjects, and hence are unable to match peak performance attained with larger datasets. We hypothesize that the rapid saturation stems from applying a sub-optimal pretraining scheme that pretrains only a portion of the architecture, i.e., the feature encoder, but not the temporal encoder; therefore, we propose adopting an architecture that seamlessly couples the feature and temporal encoders and a suitable pretraining scheme that pretrains the entire model. On a sample sleep staging dataset, we find that the proposed scheme offers performance gains that do not saturate with the amount of labeled training data (e.g., 3-5% improvement in balanced sleep staging accuracy across low- to high-labeled data settings), reducing the amount of labeled training data needed for high performance (e.g., by 800 subjects). Based on our findings, we recommend adopting this SSL paradigm for subsequent work on SSL for sleep staging.
[ "['Sayeri Lala' 'Hanlin Goh' 'Christopher Sandino']" ]
null
null
2404.15309
null
null
http://arxiv.org/pdf/2404.15309v1
2024-04-01T08:16:15Z
2024-04-01T08:16:15Z
Sparse Bayesian Correntropy Learning for Robust Muscle Activity Reconstruction from Noisy Brain Recordings
Sparse Bayesian learning has promoted many effective frameworks for brain activity decoding, especially for the reconstruction of muscle activity. However, existing sparse Bayesian learning mainly employs a Gaussian distribution as the error assumption in the reconstruction task, which does not necessarily hold in real-world applications. On the other hand, brain recordings are known to be highly noisy and to contain many non-Gaussian noises, which can lead to significant performance degradation for sparse Bayesian learning methods. The goal of this paper is to propose a new robust implementation of sparse Bayesian learning, so that robustness and sparseness can be realized simultaneously. Motivated by the great robustness of the maximum correntropy criterion (MCC), we propose an integration of MCC into the sparse Bayesian learning regime. To be specific, we derive the explicit error assumption inherent in the MCC and then leverage it for the likelihood function. Meanwhile, we use the automatic relevance determination (ARD) technique for the sparse prior distribution. To fully evaluate the proposed method, a synthetic dataset and a real-world muscle activity reconstruction task with two different brain modalities were employed. Experimental results showed that our proposed sparse Bayesian correntropy learning framework significantly improves robustness in noisy regression tasks. The proposed method achieves higher correlation coefficients and lower root mean squared errors in the real-world muscle activity reconstruction tasks. Sparse Bayesian correntropy learning thus provides a powerful tool for neural decoding that can promote the development of brain-computer interfaces.
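For context, the sample correntropy with a Gaussian kernel — the quantity that MCC maximizes — is usually written as below; this is the textbook formulation, with $\sigma$ the kernel bandwidth and $e_i$ the reconstruction error, not necessarily the paper's exact notation:

$$
\hat{V}_\sigma = \frac{1}{N} \sum_{i=1}^{N} \exp\!\left( -\frac{e_i^2}{2\sigma^2} \right), \qquad e_i = y_i - \mathbf{w}^\top \mathbf{x}_i .
$$

Each term is bounded in $(0, 1]$, so large non-Gaussian outliers are exponentially down-weighted rather than squared, which is the source of MCC's robustness.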
[ "['Yuanhao Li' 'Badong Chen' 'Natsue Yoshimura' 'Yasuharu Koike'\n 'Okito Yamashita']" ]
null
null
2404.15310
null
null
http://arxiv.org/abs/2404.15310v1
2024-04-01T16:58:09Z
2024-04-01T16:58:09Z
Automated Assessment of Encouragement and Warmth in Classrooms Leveraging Multimodal Emotional Features and ChatGPT
Classroom observation protocols standardize the assessment of teaching effectiveness and facilitate comprehension of classroom interactions. Whereas these protocols offer teachers specific feedback on their teaching practices, the manual coding by human raters is resource-intensive and often unreliable. This has sparked interest in developing AI-driven, cost-effective methods for automating such holistic coding. Our work explores a multimodal approach to automatically estimating encouragement and warmth in classrooms, a key component of the Global Teaching Insights (GTI) study's observation protocol. To this end, we employed facial and speech emotion recognition with sentiment analysis to extract interpretable features from video, audio, and transcript data. The prediction task involved both classification and regression methods. Additionally, in light of recent large language models' remarkable text annotation capabilities, we evaluated ChatGPT's zero-shot performance on this scoring task based on transcripts. We demonstrated our approach on the GTI dataset, comprising 367 16-minute video segments from 92 authentic lesson recordings. The inferences of GPT-4 and the best-trained model yielded correlations of r = .341 and r = .441 with human ratings, respectively. Combining estimates from both models through averaging, an ensemble approach achieved a correlation of r = .513, comparable to human inter-rater reliability. Our model explanation analysis indicated that text sentiment features were the primary contributors to the trained model's decisions. Moreover, GPT-4 could deliver logical and concrete reasoning as potential teacher guidelines. Our findings provide insights into using advanced, multimodal techniques for automated classroom observation, aiming to foster teacher training through frequent and valuable feedback.
[ "['Ruikun Hou' 'Tim Fütterer' 'Babette Bühler' 'Efe Bozkir' 'Peter Gerjets'\n 'Ulrich Trautwein' 'Enkelejda Kasneci']" ]
null
null
2404.15311
null
null
http://arxiv.org/pdf/2404.15311v1
2024-04-02T17:01:51Z
2024-04-02T17:01:51Z
Fusing Pretrained ViTs with TCNet for Enhanced EEG Regression
The task of Electroencephalogram (EEG) analysis is paramount to the development of Brain-Computer Interfaces (BCIs). However, reaching the goal of developing robust, useful BCIs depends heavily on the speed and accuracy with which BCIs can understand neural dynamics. In response to that goal, this paper details the integration of pre-trained Vision Transformers (ViTs) with Temporal Convolutional Networks (TCNet) to enhance the precision of EEG regression. The core of this approach lies in harnessing the sequential data processing strengths of ViTs along with the superior feature extraction capabilities of TCNet to significantly improve EEG analysis accuracy. In addition, we analyze how to construct optimal patches for the attention mechanism, balancing speed and accuracy tradeoffs. Our results showcase a substantial improvement in regression accuracy, as evidenced by the reduction of Root Mean Square Error (RMSE) from 55.4 to 51.8 on EEGEyeNet's Absolute Position Task, outperforming existing state-of-the-art models. Without sacrificing performance, we increase the speed of this model by an order of magnitude (up to 4.32x faster). This breakthrough not only sets a new benchmark in EEG regression analysis but also opens new avenues for future research into integrating transformer architectures with specialized feature extraction methods for diverse EEG datasets.
[ "['Eric Modesitt' 'Haicheng Yin' 'Williams Huang Wang' 'Brian Lu']" ]
null
null
2404.15314
null
null
http://arxiv.org/abs/2404.15314v1
2024-04-02T23:20:57Z
2024-04-02T23:20:57Z
Detection of direct path component absence in NLOS UWB channel
In this paper a novel NLOS (Non-Line-of-Sight) identification technique is proposed. In comparison to other methods described in the literature, it distinguishes the situation in which a delayed direct-path component is still available from the one in which it is totally blocked and the introduced biases are much higher and harder to mitigate. In the method, NLOS identification is performed using a Support Vector Machine (SVM) algorithm based on various signal features. The paper includes a description of the method and the results of the performed experiment.
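A minimal sketch of the classification step — an SVM over channel features — might look as follows; the feature set (e.g., first-path power, RMS delay spread, kurtosis of the channel impulse response) and the three-class labeling are assumptions for illustration, with stand-in random data so the snippet runs.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))                  # stand-in channel features
y = rng.integers(0, 3, size=300)               # LOS / NLOS-delayed / NLOS-blocked

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
clf.fit(X_tr, y_tr)
print(clf.score(X_te, y_te))                   # held-out identification accuracy
```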
[ "['Marcin Kolakowski' 'Jozef Modelski']" ]
null
null
2404.15317
null
null
http://arxiv.org/pdf/2404.15317v1
2024-04-03T11:37:01Z
2024-04-03T11:37:01Z
Concept-Guided LLM Agents for Human-AI Safety Codesign
Generative AI is increasingly important in software engineering, including safety engineering, which ensures that software does not cause harm to people. This also leads to high quality requirements for generative AI. Therefore, the simplistic use of Large Language Models (LLMs) alone will not meet these quality demands. It is crucial to develop more advanced and sophisticated approaches that can effectively address the complexities and safety concerns of software systems. Ultimately, humans must understand and take responsibility for the suggestions provided by generative AI to ensure system safety. To this end, we present an efficient, hybrid strategy to leverage LLMs for safety analysis and Human-AI codesign. In particular, we develop a customized LLM agent that uses elements of prompt engineering, heuristic reasoning, and retrieval-augmented generation to solve tasks associated with predefined safety concepts, in interaction with a system model graph. The reasoning is guided by a cascade of micro-decisions that help preserve structured information. We further suggest a graph verbalization which acts as an intermediate representation of the system model to facilitate LLM-graph interactions. Selected pairs of prompts and responses relevant for safety analytics illustrate our method for the use case of a simplified automated driving system.
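To show what graph verbalization could look like in practice, here is a toy rendering of a system-model graph into sentences that can be pasted into an LLM prompt; the node types and relations are hypothetical, not the paper's schema.

```python
def verbalize(nodes, edges):
    """Render a node/edge graph as plain sentences for an LLM prompt."""
    lines = [f"Component '{n}' is of type {t}." for n, t in nodes.items()]
    lines += [f"'{a}' {rel} '{b}'." for a, rel, b in edges]
    return "\n".join(lines)

nodes = {"camera": "sensor", "planner": "software", "brake": "actuator"}
edges = [("camera", "sends detections to", "planner"),
         ("planner", "commands", "brake")]
print(verbalize(nodes, edges))   # prepend to the safety-analysis question
```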
[ "['Florian Geissler' 'Karsten Roscher' 'Mario Trapp']" ]
null
null
2404.15319
null
null
http://arxiv.org/pdf/2404.15319v1
2024-04-03T15:18:50Z
2024-04-03T15:18:50Z
The largest EEG-based BCI reproducibility study for open science: the MOABB benchmark
Objective. This study conducts an extensive Brain-computer interface (BCI) reproducibility analysis on open electroencephalography datasets, aiming to assess existing solutions and establish open and reproducible benchmarks for effective comparison within the field. The need for such a benchmark lies in the rapid industrial progress that has given rise to undisclosed proprietary solutions. Furthermore, the scientific literature is dense, often featuring challenging-to-reproduce evaluations, making comparisons between existing approaches arduous. Approach. Within an open framework, 30 machine learning pipelines (separated into raw signal: 11, Riemannian: 13, deep learning: 6) are meticulously re-implemented and evaluated across 36 publicly available datasets, including motor imagery (14), P300 (15), and SSVEP (7). The analysis incorporates statistical meta-analysis techniques for results assessment, encompassing execution time and environmental impact considerations. Main results. The study yields principled and robust results applicable to various BCI paradigms, emphasizing motor imagery, P300, and SSVEP. Notably, Riemannian approaches utilizing spatial covariance matrices exhibit superior performance, underscoring the necessity for significant data volumes to achieve competitive outcomes with deep learning techniques. The comprehensive results are openly accessible, paving the way for future research to further enhance reproducibility in the BCI domain. Significance. The significance of this study lies in its contribution to establishing a rigorous and transparent benchmark for BCI research, offering insights into optimal methodologies and highlighting the importance of reproducibility in driving advancements within the field.
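For readers new to the framework, a single pipeline evaluation in MOABB looks roughly like the sketch below; the class and module names follow the MOABB and pyRiemann documentation as recalled here and may differ across library versions, so treat this as illustrative rather than canonical.

```python
from moabb.datasets import BNCI2014001
from moabb.evaluations import WithinSessionEvaluation
from moabb.paradigms import MotorImagery
from pyriemann.estimation import Covariances
from pyriemann.tangentspace import TangentSpace
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# One Riemannian pipeline: spatial covariances -> tangent space -> classifier
pipeline = make_pipeline(Covariances("oas"), TangentSpace(), LogisticRegression())
evaluation = WithinSessionEvaluation(paradigm=MotorImagery(),
                                     datasets=[BNCI2014001()])
results = evaluation.process({"TS+LR": pipeline})   # pandas DataFrame of scores
```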
[ "['Sylvain Chevallier' 'Igor Carrara' 'Bruno Aristimunha'\n 'Pierre Guetschel' 'Sara Sedlar' 'Bruna Lopes' 'Sebastien Velut'\n 'Salim Khazem' 'Thomas Moreau']" ]
null
null
2404.15323
null
null
http://arxiv.org/abs/2404.15323v1
2024-04-05T11:49:48Z
2024-04-05T11:49:48Z
Transportation mode recognition based on low-rate acceleration and location signals with an attention-based multiple-instance learning network
Transportation mode recognition (TMR) is a critical component of human activity recognition (HAR) that focuses on understanding and identifying how people move within transportation systems. It is commonly based on leveraging inertial, location, or both types of signals, captured by modern smartphone devices. Each type has benefits (such as increased effectiveness) and drawbacks (such as increased battery consumption) depending on the transportation mode (TM). Combining the two types is challenging as they exhibit significant differences such as very different sampling rates. This paper focuses on the TMR task and proposes an approach for combining the two types of signals in an effective and robust classifier. Our network includes two sub-networks for processing acceleration and location signals separately, using different window sizes for each signal. The two sub-networks are designed to also embed the two types of signals into the same space so that we can then apply an attention-based multiple-instance learning classifier to recognize TM. We use very low sampling rates for both signal types to reduce battery consumption. We evaluate the proposed methodology on a publicly available dataset and compare against other well known algorithms.
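The attention-based multiple-instance pooling at the heart of the classifier can be sketched in a few lines of PyTorch; the embedding size, class count, and the simple (non-gated) attention are illustrative assumptions.

```python
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    """Pool instance embeddings into a bag prediction via learned attention."""
    def __init__(self, dim: int = 64, n_classes: int = 8):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, 1))
        self.head = nn.Linear(dim, n_classes)

    def forward(self, inst):                        # inst: (batch, n_instances, dim)
        w = torch.softmax(self.attn(inst), dim=1)   # attention weights over instances
        bag = (w * inst).sum(dim=1)                 # weighted bag embedding
        return self.head(bag)                       # transportation-mode logits

logits = AttentionMIL()(torch.randn(4, 10, 64))
```

Here the instances would be the acceleration and location embeddings produced by the two sub-networks, already mapped into the same space.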
[ "['Christos Siargkas' 'Vasileios Papapanagiotou' 'Anastasios Delopoulos']" ]
null
null
2404.15328
null
null
http://arxiv.org/pdf/2404.15328v1
2024-04-06T21:11:41Z
2024-04-06T21:11:41Z
Time topological analysis of EEG using signature theory
Anomaly detection in multivariate signals is a task of paramount importance in many disciplines (epidemiology, finance, cognitive sciences and neurosciences, oncology, etc.). In this perspective, Topological Data Analysis (TDA) offers a battery of "shape" invariants that can be exploited for the implementation of an effective detection scheme. Our contribution consists of extending the constructions presented in \cite{chretienleveraging} on the construction of simplicial complexes from the signatures of signals and their predictive capacities, rather than using a generic distance as in \cite{petri2014homological}. Signature theory is a new theme in Machine Learning (arXiv:1603.03788) stemming from recent work on the notion of rough paths developed by Terry Lyons and his team \cite{lyons2002system}, based on the formalism introduced by Chen \cite{chen1957integration}. We explore in particular the detection of changes in topology, based on tracking the evolution of homological persistence and the Betti numbers associated with the complex introduced in \cite{chretienleveraging}. We apply our tools to the analysis of brain signals such as EEG to detect precursor phenomena of epileptic seizures.
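For readers unfamiliar with signatures, the first two signature levels of a piecewise-linear path have a closed form; the NumPy sketch below computes them exactly for sampled data (the simplicial-complex construction built on top of these terms is not shown).

```python
import numpy as np

def signature_levels_1_2(path):
    """First two signature levels of a piecewise-linear path given as a (T, d) array."""
    dX = np.diff(path, axis=0)               # segment increments
    S1 = dX.sum(axis=0)                      # level 1: total increment
    X0 = path[:-1] - path[0]                 # path value at the start of each segment
    # level 2: iterated integrals  S2[i, j] = integral of (X^i - X^i_0) dX^j
    S2 = X0.T @ dX + 0.5 * dX.T @ dX         # exact for linear pieces
    return S1, S2

S1, S2 = signature_levels_1_2(np.cumsum(np.random.randn(200, 3), axis=0))
levy_area = 0.5 * (S2 - S2.T)                # antisymmetric part of level 2
```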
[ "['Stéphane Chrétien' 'Ben Gao' 'Astrid Thebault-Guiochon' 'Rémi Vaucher']" ]
null
null
2404.15330
null
null
http://arxiv.org/abs/2404.15330v1
2024-04-07T20:50:50Z
2024-04-07T20:50:50Z
Anchor Pair Selection in TDOA Positioning Systems by Door Transition Error Minimization
This paper presents an adaptive anchor pair selection algorithm for UWB (ultra-wideband) TDOA-based (Time Difference of Arrival) indoor positioning systems. The method assumes dividing the system operation area into zones. The most favorable anchor pairs are selected by minimizing the positioning errors in the doorways leading to these zones, where users' possible locations are limited to small, narrow areas. The sets are determined separately for entering and leaving each zone to take the user's body shadowing into account. The determined anchor pairs are then used to calculate TDOA values and localize the user moving around the apartment with an Extended Kalman Filter-based algorithm. The method was tested experimentally in a furnished apartment. The results have shown that adaptive selection of the anchor pairs increases the user's localization accuracy. The median trajectory error was about 0.32 m.
[ "['Marcin Kolakowski' 'Jozef Modelski']" ]
null
null
2404.15332
null
null
http://arxiv.org/pdf/2404.15332v1
2024-04-08T11:19:28Z
2024-04-08T11:19:28Z
Clinical translation of machine learning algorithms for seizure detection in scalp electroencephalography: a systematic review
Machine learning algorithms for seizure detection have shown great diagnostic potential, with recent reported accuracies reaching 100%. However, few published algorithms have fully addressed the requirements for successful clinical translation. For example, the properties of training data may critically limit the generalisability of algorithms, algorithms may be sensitive to variability across EEG acquisition hardware, and run-time processing costs may render them unfeasible for real-time clinical use cases. Here, we systematically review machine learning seizure detection algorithms with a focus on clinical translatability, assessed by criteria including generalisability, run-time costs, explainability, and clinically-relevant performance metrics. For non-specialists, we provide domain-specific knowledge necessary to contextualise model development and evaluation. Our critical evaluation of machine learning algorithms with respect to their potential real-world effectiveness can help accelerate clinical translation and identify gaps in the current seizure detection literature.
[ "['Nina Moutonnet' 'Steven White' 'Benjamin P Campbell' 'Danilo Mandic'\n 'Gregory Scott']" ]
null
null
2404.15333
null
null
http://arxiv.org/pdf/2404.15333v1
2024-04-08T13:01:59Z
2024-04-08T13:01:59Z
EB-GAME: A Game-Changer in ECG Heartbeat Anomaly Detection
Cardiologists use electrocardiograms (ECG) for the detection of arrhythmias. However, continuous monitoring of ECG signals to detect cardiac abnormalities requires significant time and human resources. As a result, several deep learning studies have been conducted in advance for the automatic detection of arrhythmia. These models show relatively high performance in supervised learning, but are not applicable in cases with few training examples. This is because abnormal ECG data is scarce compared to normal data in most real-world clinical settings. Therefore, in this study, GAN-based anomaly detection, i.e., unsupervised learning, was employed to address the issue of data imbalance. This paper focuses on detecting abnormal signals in electrocardiograms (ECGs) using only labels from normal signals as training data. Inspired by self-supervised vision transformers, which learn by dividing images into patches, and masked auto-encoders, known for their effectiveness in patch reconstruction and solving information redundancy, we introduce the ECG Heartbeat Anomaly Detection model, EB-GAME. EB-GAME was trained and validated on the MIT-BIH Arrhythmia Dataset, where it achieved state-of-the-art performance on this benchmark.
[ "['JuneYoung Park' 'Da Young Kim' 'Yunsoo Kim' 'Jisu Yoo' 'Tae Joon Kim']" ]
null
null
2404.15335
null
null
http://arxiv.org/pdf/2404.15335v1
2024-04-09T15:19:13Z
2024-04-09T15:19:13Z
Integrative Deep Learning Framework for Parkinson's Disease Early Detection using Gait Cycle Data Measured by Wearable Sensors: A CNN-GRU-GNN Approach
Efficient early diagnosis is paramount in addressing the complexities of Parkinson's disease because timely intervention can substantially mitigate symptom progression and improve patient outcomes. In this paper, we present a pioneering deep learning architecture tailored for the binary classification of subjects, utilizing gait cycle datasets to facilitate early detection of Parkinson's disease. Our model harnesses the power of 1D-Convolutional Neural Networks (CNN), Gated Recurrent Units (GRU), and Graph Neural Network (GNN) layers, synergistically capturing temporal dynamics and spatial relationships within the data. In this work, 16 wearable sensors located at the end of subjects' shoes for measuring the vertical Ground Reaction Force (vGRF) are considered as the vertices of a graph, their adjacencies are modelled as edges of this graph, and finally, the measured data of each sensor is considered as the feature vector of its corresponding vertex. Therefore, The GNN layers can extract the relations among these sensors by learning proper representations. Regarding the dynamic nature of these measurements, GRU and CNN are used to analyze them spatially and temporally and map them to an embedding space. Remarkably, our proposed model achieves exceptional performance metrics, boasting accuracy, precision, recall, and F1 score values of 99.51%, 99.57%, 99.71%, and 99.64%, respectively.
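A compact skeleton of the described CNN-GRU-GNN pipeline in plain PyTorch might look as follows; the graph step is a simple normalized-adjacency convolution standing in for a full GNN layer, and all sizes are illustrative rather than the paper's.

```python
import torch
import torch.nn as nn

class GaitNet(nn.Module):
    def __init__(self, n_sensors: int = 16, hid: int = 32):
        super().__init__()
        self.cnn = nn.Conv1d(1, hid, kernel_size=7, padding=3)  # per-sensor temporal features
        self.gru = nn.GRU(hid, hid, batch_first=True)
        self.mix = nn.Linear(hid, hid)                          # graph-convolution weight
        self.cls = nn.Linear(n_sensors * hid, 2)                # PD vs. control

    def forward(self, x, A_hat):      # x: (B, n_sensors, T) vGRF; A_hat: (S, S) adjacency
        B, S, T = x.shape
        h = self.cnn(x.reshape(B * S, 1, T)).transpose(1, 2)    # (B*S, T, hid)
        _, h = self.gru(h)                                      # last hidden state
        h = h.squeeze(0).reshape(B, S, -1)                      # per-sensor embeddings
        h = torch.relu(A_hat @ self.mix(h))                     # one message-passing step
        return self.cls(h.reshape(B, -1))

out = GaitNet()(torch.randn(4, 16, 100), torch.eye(16))         # -> torch.Size([4, 2])
```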
[ "['Alireza Rashnu' 'Armin Salimi-Badr']" ]
null
null
2404.15337
null
null
http://arxiv.org/pdf/2404.15337v1
2024-04-10T02:48:13Z
2024-04-10T02:48:13Z
RSSI Estimation for Constrained Indoor Wireless Networks using ANN
In the expanding field of the Internet of Things (IoT), wireless channel estimation is a significant challenge. This is specifically true for low-power IoT (LP-IoT) communication, where efficiency and accuracy are extremely important. This research establishes two distinct LP-IoT wireless channel estimation models using Artificial Neural Networks (ANN): a Feature-based ANN model and a Sequence-based ANN model. Both models have been constructed to enhance LP-IoT communication by lowering the estimation error in the LP-IoT wireless channel. The Feature-based model aims to capture complex patterns of measured Received Signal Strength Indicator (RSSI) data using environmental characteristics. The Sequence-based approach utilises predetermined categorisation techniques to estimate the RSSI sequence of specifically selected environment characteristics. The findings demonstrate that our suggested approaches attain remarkable precision in channel estimation, with an improvement in MSE of $88.29\%$ for the Feature-based model and $97.46\%$ for the Sequence-based model over existing research. Additionally, the comparative analysis of these techniques with traditional and other Deep Learning (DL)-based techniques also highlights the superior performance of our developed models and their potential in real-world IoT applications.
[ "['Samrah Arif' 'M. Arif Khan' 'Sabih Ur Rehman']" ]
null
null
2404.15341
null
null
http://arxiv.org/pdf/2404.15341v1
2024-04-11T03:17:05Z
2024-04-11T03:17:05Z
Classifier-guided neural blind deconvolution: a physics-informed denoising module for bearing fault diagnosis under heavy noise
Blind deconvolution (BD) has been demonstrated as an efficacious approach for extracting bearing fault-specific features from vibration signals under strong background noise. Despite BD's desirable feature in adaptability and mathematical interpretability, a significant challenge persists: How to effectively integrate BD with fault-diagnosing classifiers? This issue arises because the traditional BD method is solely designed for feature extraction with its own optimizer and objective function. When BD is combined with downstream deep learning classifiers, the different learning objectives will be in conflict. To address this problem, this paper introduces classifier-guided BD (ClassBD) for joint learning of BD-based feature extraction and deep learning-based fault classification. Firstly, we present a time and frequency neural BD that employs neural networks to implement conventional BD, thereby facilitating the seamless integration of BD and the deep learning classifier for co-optimization of model parameters. Subsequently, we develop a unified framework to use a deep learning classifier to guide the learning of BD filters. In addition, we devise a physics-informed loss function composed of kurtosis, $l_2/l_4$ norm, and a cross-entropy loss to jointly optimize the BD filters and deep learning classifier. Consequently, the fault labels provide useful information to direct BD to extract features that distinguish classes amidst strong noise. To the best of our knowledge, this is the first work of its kind in which BD is successfully integrated with a deep learning classifier for bearing fault diagnosis. Experimental results from three datasets demonstrate that ClassBD outperforms other state-of-the-art methods under noisy conditions.
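The flavor of the physics-informed loss can be sketched directly: a cross-entropy term plus kurtosis and $l_2/l_4$ terms computed on the BD-filtered signal. The signs and weights below are illustrative assumptions (kurtosis is rewarded because impulsive fault signatures are heavy-tailed; a smaller $l_2/l_4$ ratio is rewarded because it indicates sparsity), not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def physics_informed_loss(filtered, logits, labels, a=1.0, b=0.1):
    x = filtered - filtered.mean(dim=-1, keepdim=True)   # centered filtered signal
    kurt = (x.pow(4).mean(-1) /
            x.pow(2).mean(-1).pow(2).clamp_min(1e-12)).mean()        # impulsiveness
    l2_l4 = (x.pow(2).sum(-1).sqrt() /
             x.pow(4).sum(-1).pow(0.25).clamp_min(1e-12)).mean()     # anti-sparsity
    ce = F.cross_entropy(logits, labels)                 # classifier guidance
    return ce - a * kurt + b * l2_l4
```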
[ "['Jing-Xiao Liao' 'Chao He' 'Jipu Li' 'Jinwei Sun' 'Shiping Zhang'\n 'Xiaoge Zhang']" ]
null
null
2404.15342
null
null
http://arxiv.org/pdf/2404.15342v1
2024-04-11T03:47:58Z
2024-04-11T03:47:58Z
WaveSleepNet: An Interpretable Network for Expert-like Sleep Staging
Although deep learning algorithms have proven their efficiency in automatic sleep staging, widespread skepticism about their "black-box" nature has limited their clinical acceptance. In this study, we propose WaveSleepNet, an interpretable neural network for sleep staging that reasons in a similar way to sleep experts. In this network, we utilize the latent space representations generated during training to identify characteristic wave prototypes corresponding to different sleep stages. The feature representation of an input signal is segmented into patches within the latent space, each of which is compared against the learned wave prototypes. The proximity between these patches and the wave prototypes is quantified through scores indicating the prototypes' presence and relative proportion within the signal. The scores serve as the decision-making criteria for final sleep staging. During training, an ensemble of loss functions is employed to promote the prototypes' diversity and robustness. Furthermore, the learned wave prototypes are visualized by analysing occlusion sensitivity. The efficacy of WaveSleepNet is validated across three public datasets, achieving sleep staging performance on par with state-of-the-art models when several WaveSleepNets are combined into a larger network. A detailed case study examined the decision-making process of WaveSleepNet, which aligns closely with the American Academy of Sleep Medicine (AASM) manual guidelines. Another case study systematically explained the reasons behind misidentifications for each sleep stage. WaveSleepNet's transparent process provides specialists with direct access to the physiological significance of its criteria, allowing for future adaptation or enrichment by sleep experts.
[ "['Yan Pei' 'Wei Luo']" ]
null
null
2404.15343
null
null
http://arxiv.org/pdf/2404.15343v1
2024-04-11T06:08:23Z
2024-04-11T06:08:23Z
Edge-Efficient Deep Learning Models for Automatic Modulation Classification: A Performance Analysis
The recent advancement in deep learning (DL) for automatic modulation classification (AMC) of wireless signals has encouraged numerous possible applications on resource-constrained edge devices. However, developing optimized DL models suitable for edge applications of wireless communications is yet to be studied in depth. In this work, we perform a thorough investigation of optimized convolutional neural networks (CNNs) developed for AMC using the three most commonly used model optimization techniques: a) pruning, b) quantization, and c) knowledge distillation. Furthermore, we have proposed optimized models with the combinations of these techniques to fuse the complementary optimization benefits. The performances of all the proposed methods are evaluated in terms of sparsity, storage compression for network parameters, and the effect on classification accuracy with a reduction in parameters. The experimental results show that the proposed individual and combined optimization techniques are highly effective for developing models with significantly less complexity while maintaining or even improving classification performance compared to the benchmark CNNs.
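Two of the named techniques are easy to demonstrate in PyTorch: magnitude pruning via `torch.nn.utils.prune` and response-based knowledge distillation. The pruning amount, temperature, and loss mix below are illustrative choices, not the paper's tuned settings.

```python
import torch
import torch.nn.functional as F
import torch.nn.utils.prune as prune

def prune_conv_layers(model, amount=0.5):
    """L1 unstructured pruning of every Conv2d weight tensor."""
    for m in model.modules():
        if isinstance(m, torch.nn.Conv2d):
            prune.l1_unstructured(m, name="weight", amount=amount)

def distill_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Soft-target KD loss mixed with the usual hard-label cross-entropy."""
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * T * T
    return alpha * soft + (1 - alpha) * F.cross_entropy(student_logits, labels)
```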
[ "['Nayan Moni Baishya' 'B. R. Manoj' 'Prabin K. Bora']" ]
null
null
2404.15344
null
null
http://arxiv.org/pdf/2404.15344v1
2024-04-11T06:15:01Z
2024-04-11T06:15:01Z
Adversarial Robustness of Distilled and Pruned Deep Learning-based Wireless Classifiers
Data-driven deep learning (DL) techniques developed for automatic modulation classification (AMC) of wireless signals are vulnerable to adversarial attacks. This poses a severe security threat to the DL-based wireless systems, specifically for edge applications of AMC. In this work, we address the joint problem of developing optimized DL models that are also robust against adversarial attacks. This enables efficient and reliable deployment of DL-based AMC on edge devices. We first propose two optimized models using knowledge distillation and network pruning, followed by a computationally efficient adversarial training process to improve the robustness. Experimental results on five white-box attacks show that the proposed optimized and adversarially trained models can achieve better robustness than the standard (unoptimized) model. The two optimized models also achieve higher accuracy on clean (unattacked) samples, which is essential for the reliability of DL-based solutions at edge applications.
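A minimal adversarial-training step, using FGSM as the inner attack, is sketched below; the perturbation budget and the 50/50 clean/adversarial mix are illustrative, and the paper's five white-box attacks are not limited to FGSM.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.01):
    """One-step fast gradient sign attack on a batch."""
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

def adv_train_step(model, optimizer, x, y, eps=0.01):
    x_adv = fgsm(model, x, y, eps)      # craft the perturbed batch first
    optimizer.zero_grad()               # discard gradients left by the attack
    loss = 0.5 * F.cross_entropy(model(x), y) \
         + 0.5 * F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```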
[ "['Nayan Moni Baishya' 'B. R. Manoj']" ]
null
null
2404.15346
null
null
http://arxiv.org/abs/2404.15346v1
2024-04-12T08:11:07Z
2024-04-12T08:11:07Z
A Novel Micro-Doppler Coherence Loss for Deep Learning Radar Applications
Deep learning techniques are subject to increasing adoption for a wide range of micro-Doppler applications, where predictions need to be made based on time-frequency signal representations. Most, if not all, of the reported applications focus on translating an existing deep learning framework to this new domain with no adjustment made to the objective function. This practice results in a missed opportunity to encourage the model to prioritize features that are particularly relevant for micro-Doppler applications. Thus the paper introduces a micro-Doppler coherence loss, minimized when the normalized power of micro-Doppler oscillatory components between input and output is matched. The experiments conducted on real data show that the application of the introduced loss results in models more resilient to noise.
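One way to realize such a loss is to compare the normalized spectral power of input and output along the slow-time axis; the sketch below is a plausible reading of the idea, not the exact published formulation.

```python
import torch

def micro_doppler_coherence_loss(x, y, eps=1e-8):
    """L1 distance between normalized oscillatory power spectra of x and y
    along the last (slow-time) axis."""
    Px = torch.fft.rfft(x, dim=-1).abs().pow(2)
    Py = torch.fft.rfft(y, dim=-1).abs().pow(2)
    Px = Px / (Px.sum(dim=-1, keepdim=True) + eps)   # normalize out absolute power
    Py = Py / (Py.sum(dim=-1, keepdim=True) + eps)
    return (Px - Py).abs().mean()
```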
[ "['Mikolaj Czerkawski' 'Christos Ilioudis' 'Carmine Clemente'\n 'Craig Michie' 'Ivan Andonovic' 'Christos Tachtatzis']" ]
null
null
2404.15347
null
null
http://arxiv.org/abs/2404.15347v1
2024-04-13T19:56:15Z
2024-04-13T19:56:15Z
Advanced Neural Network Architecture for Enhanced Multi-Lead ECG Arrhythmia Detection through Optimized Feature Extraction
Cardiovascular diseases are a pervasive global health concern, contributing significantly to morbidity and mortality rates worldwide. Among these conditions, arrhythmia, characterized by irregular heart rhythms, presents formidable diagnostic challenges. This study introduces an innovative approach utilizing deep learning techniques, specifically Convolutional Neural Networks (CNNs), to address the complexities of arrhythmia classification. Leveraging multi-lead Electrocardiogram (ECG) data, our CNN model, comprising six layers with a residual block, demonstrates promising outcomes in identifying five distinct heartbeat types: Left Bundle Branch Block (LBBB), Right Bundle Branch Block (RBBB), Atrial Premature Contraction (APC), Premature Ventricular Contraction (PVC), and Normal Beat. Through rigorous experimentation, we highlight the transformative potential of our methodology in enhancing diagnostic accuracy for cardiovascular arrhythmias. Arrhythmia diagnosis remains a critical challenge in cardiovascular care, often relying on manual interpretation of ECG signals, which can be time-consuming and prone to subjectivity. To address these limitations, we propose a novel approach that leverages deep learning algorithms to automate arrhythmia classification. By employing advanced CNN architectures and multi-lead ECG data, our methodology offers a robust solution for precise and efficient arrhythmia detection. Through comprehensive evaluation, we demonstrate the effectiveness of our approach in facilitating more accurate clinical decision-making, thereby improving patient outcomes in managing cardiovascular arrhythmias.
[ "['Bhavith Chandra Challagundla']" ]
null
null
2404.15349
null
null
http://arxiv.org/pdf/2404.15349v1
2024-04-14T18:43:16Z
2024-04-14T18:43:16Z
A Survey on Multimodal Wearable Sensor-based Human Action Recognition
The combination of increased life expectancy and falling birth rates is resulting in an aging population. Wearable Sensor-based Human Activity Recognition (WSHAR) emerges as a promising assistive technology to support the daily lives of older individuals, unlocking vast potential for human-centric applications. However, recent surveys in WSHAR have been limited, focusing either solely on deep learning approaches or on a single sensor modality. In real life, humans interact with the world in a multi-sensory way, where diverse information sources are intricately processed and interpreted to accomplish a complex and unified sensing system. To give machines similar intelligence, multimodal machine learning, which merges data from various sources, has become a popular research area with recent advancements. In this study, we present a comprehensive survey from a novel perspective on how to leverage multimodal learning in the WSHAR domain for newcomers and researchers. We begin by presenting the recent sensor modalities as well as deep learning approaches in HAR. Subsequently, we explore the techniques used in present multimodal systems for WSHAR. This includes inter-multimodal systems, which utilize sensor modalities from both visual and non-visual systems, and intra-multimodal systems, which simply take modalities from non-visual systems. After that, we focus on current multimodal learning approaches that have been applied to solve some of the challenges existing in WSHAR. Specifically, we make extra efforts to connect the existing multimodal literature from other domains, such as computer vision and natural language processing, with the current WSHAR area. Finally, we identify the corresponding challenges and potential research directions in the current WSHAR area for further improvement.
[ "['Jianyuan Ni' 'Hao Tang' 'Syed Tousiful Haque' 'Yan Yan' 'Anne H. H. Ngu']" ]
null
null
2404.15350
null
null
http://arxiv.org/pdf/2404.15350v1
2024-04-14T22:36:53Z
2024-04-14T22:36:53Z
Evaluating Fast Adaptability of Neural Networks for Brain-Computer Interface
Electroencephalography (EEG) classification is a versatile and portable technique for building non-invasive Brain-computer Interfaces (BCI). However, the classifiers that decode cognitive states from EEG brain data perform poorly when tested on newer domains, such as tasks or individuals absent during model training. Researchers have recently used complex strategies like Model-agnostic meta-learning (MAML) for domain adaptation. Nevertheless, there is a need for an evaluation strategy to assess the fast adaptability of models, as this characteristic is essential for real-life BCI applications requiring quick calibration. We used motor movement and motor imagery signals as input to a Convolutional Neural Network (CNN)-based classifier for the experiments. Datasets with EEG signals typically have fewer examples and higher time resolution. Even though batch-normalization is preferred for CNNs, we empirically show that layer-normalization can improve the adaptability of CNN-based EEG classifiers with no more than ten fine-tuning steps. In summary, the present work (i) proposes a simple strategy to evaluate fast adaptability, and (ii) empirically demonstrates fast adaptability across individuals as well as across tasks with simple transfer learning as compared to the MAML approach.
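The fast-adaptability evaluation itself reduces to a short fine-tuning loop: clone the pretrained classifier and take at most ten gradient steps on a handful of examples from the new subject or task. The ten-step budget follows the abstract; the optimizer and learning rate below are assumptions.

```python
import copy
import torch
import torch.nn.functional as F

def fast_adapt(pretrained, x_few, y_few, steps=10, lr=1e-3):
    """Few-step fine-tuning on user-specific data; returns the adapted model."""
    model = copy.deepcopy(pretrained)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        F.cross_entropy(model(x_few), y_few).backward()
        opt.step()
    return model   # evaluate on held-out data from the new subject/task
```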
[ "['Anupam Sharma' 'Krishna Miyapuram']" ]