categories: string
doi: string
id: string
year: float64
venue: string
link: string
updated: string
published: string
title: string
abstract: string
authors: list
null
null
2402.09063
null
null
http://arxiv.org/pdf/2402.09063v1
2024-02-14T10:20:03Z
2024-02-14T10:20:03Z
Soft Prompt Threats: Attacking Safety Alignment and Unlearning in Open-Source LLMs through the Embedding Space
Current research in adversarial robustness of LLMs focuses on discrete input manipulations in the natural language space, which can be directly transferred to closed-source models. However, this approach neglects the steady progression of open-source models. As open-source models advance in capability, ensuring their safety also becomes increasingly imperative. Yet, attacks tailored to open-source LLMs that exploit full model access remain largely unexplored. We address this research gap and propose the embedding space attack, which directly attacks the continuous embedding representation of input tokens. We find that embedding space attacks circumvent model alignments and trigger harmful behaviors more efficiently than discrete attacks or model fine-tuning. Furthermore, we present a novel threat model in the context of unlearning and show that embedding space attacks can extract supposedly deleted information from unlearned LLMs across multiple datasets and models. Our findings highlight embedding space attacks as an important threat model in open-source LLMs. Trigger Warning: the appendix contains LLM-generated text with violence and harassment.
[ "['Leo Schwinn' 'David Dobre' 'Sophie Xhonneux' 'Gauthier Gidel'\n 'Stephan Gunnemann']" ]
null
null
2402.09066
null
null
http://arxiv.org/pdf/2402.09066v1
2024-02-14T10:24:04Z
2024-02-14T10:24:04Z
Solid Waste Detection in Remote Sensing Images: A Survey
The detection and characterization of illegal solid waste disposal sites are essential for environmental protection, particularly for mitigating pollution and health hazards. Improperly managed landfills contaminate soil and groundwater via rainwater infiltration, posing threats to both animals and humans. Traditional landfill identification approaches, such as on-site inspections, are time-consuming and expensive. Remote sensing is a cost-effective solution for the identification and monitoring of solid waste disposal sites that enables broad coverage and repeated acquisitions over time. Earth Observation (EO) satellites, equipped with an array of sensors and imaging capabilities, have been providing high-resolution data for several decades. Researchers proposed specialized techniques that leverage remote sensing imagery to perform a range of tasks such as waste site detection, dumping site monitoring, and assessment of suitable locations for new landfills. This review aims to provide a detailed illustration of the most relevant proposals for the detection and monitoring of solid waste sites by describing and comparing the approaches, the implemented techniques, and the employed data. Furthermore, since the data sources are of the utmost importance for developing an effective solid waste detection model, a comprehensive overview of the satellites and publicly available data sets is presented. Finally, this paper identifies the open issues in the state-of-the-art and discusses the relevant research directions for reducing the costs and improving the effectiveness of novel solid waste detection methods.
[ "['Piero Fraternali' 'Luca Morandini' 'Sergio Luis Herrera González']" ]
null
null
2402.09075
null
null
http://arxiv.org/pdf/2402.09075v2
2024-04-01T02:09:32Z
2024-02-14T10:35:26Z
Steady-State Error Compensation for Reinforcement Learning with Quadratic Rewards
The selection of a reward function in Reinforcement Learning (RL) has garnered significant attention because of its impact on system performance. Issues of significant steady-state errors often manifest when quadratic reward functions are employed. Although absolute-value-type reward functions alleviate this problem, they tend to induce substantial fluctuations in specific system states, leading to abrupt changes. In response to this challenge, this study proposes an approach that introduces an integral term. By integrating this integral term into quadratic-type reward functions, the RL algorithm is adeptly tuned, augmenting the system's consideration of reward history, and consequently alleviates concerns related to steady-state errors. Through experiments and performance evaluations on the Adaptive Cruise Control (ACC) and lane change models, we validate that the proposed method effectively diminishes steady-state errors and does not cause significant spikes in some system states.
[ "['Liyao Wang' 'Zishun Zheng' 'Yuan Lin']" ]
null
null
2402.09077
null
null
http://arxiv.org/pdf/2402.09077v1
2024-02-14T10:40:09Z
2024-02-14T10:40:09Z
DisGNet: A Distance Graph Neural Network for Forward Kinematics Learning of Gough-Stewart Platform
In this paper, we propose a graph neural network, DisGNet, for learning the graph distance matrix to address the forward kinematics problem of the Gough-Stewart platform. DisGNet employs the k-FWL algorithm for message-passing, providing high expressiveness with a small parameter count, making it suitable for practical deployment. Additionally, we introduce the GPU-friendly Newton-Raphson method, an efficient parallelized optimization method executed on the GPU to refine DisGNet's output poses, achieving ultra-high-precision poses. This novel two-stage approach delivers ultra-high-precision output while meeting real-time requirements. Our results indicate that, on our dataset, DisGNet achieves errors below 1 mm and 1 deg for 79.8% and 98.2% of samples, respectively. Executed on a GPU, our two-stage method meets real-time computation requirements. Code is released at https://github.com/FLAMEZZ5201/DisGNet.
[ "['Huizhi Zhu' 'Wenxia Xu' 'Jian Huang' 'Jiaxin Li']" ]
null
null
2402.09078
null
null
http://arxiv.org/pdf/2402.09078v1
2024-02-14T10:44:03Z
2024-02-14T10:44:03Z
Exploiting Estimation Bias in Deep Double Q-Learning for Actor-Critic Methods
This paper introduces innovative methods in Reinforcement Learning (RL), focusing on addressing and exploiting estimation biases in Actor-Critic methods for continuous control tasks, using Deep Double Q-Learning. We propose two novel algorithms: Expectile Delayed Deep Deterministic Policy Gradient (ExpD3) and Bias Exploiting - Twin Delayed Deep Deterministic Policy Gradient (BE-TD3). ExpD3 aims to reduce overestimation bias with a single $Q$ estimate, offering a balance between computational efficiency and performance, while BE-TD3 is designed to dynamically select the most advantageous estimation bias during training. Our extensive experiments across various continuous control tasks demonstrate the effectiveness of our approaches. We show that these algorithms can either match or surpass existing methods like TD3, particularly in environments where estimation biases significantly impact learning. The results underline the importance of bias exploitation in improving policy learning in RL.
[ "['Alberto Sinigaglia' 'Niccolò Turcato' 'Alberto Dalla Libera'\n 'Ruggero Carli' 'Gian Antonio Susto']" ]
null
null
2402.09081
null
null
http://arxiv.org/pdf/2402.09081v1
2024-02-14T10:48:00Z
2024-02-14T10:48:00Z
Low-Rank Extragradient Methods for Scalable Semidefinite Optimization
We consider several classes of highly important semidefinite optimization problems that involve both a convex objective function (smooth or nonsmooth) and additional linear or nonlinear smooth and convex constraints, which are ubiquitous in statistics, machine learning, combinatorial optimization, and other domains. We focus on high-dimensional and plausible settings in which the problem admits a low-rank solution which also satisfies a low-rank complementarity condition. We provide several theoretical results proving that, under these circumstances, the well-known Extragradient method, when initialized in the proximity of an optimal primal-dual solution, converges to a solution of the constrained optimization problem with its standard convergence rates guarantees, using only low-rank singular value decompositions (SVD) to project onto the positive semidefinite cone, as opposed to computationally-prohibitive full-rank SVDs required in worst-case. Our approach is supported by numerical experiments conducted with a dataset of Max-Cut instances.
[ "['Dan Garber. Atara Kaplan']" ]
null
null
2402.09082
null
null
http://arxiv.org/pdf/2402.09082v1
2024-02-14T10:52:39Z
2024-02-14T10:52:39Z
Detection Latencies of Anomaly Detectors: An Overlooked Perspective?
The ever-evolving landscape of attacks, coupled with the growing complexity of ICT systems, makes crafting anomaly-based intrusion detectors (ID) and error detectors (ED) a difficult task: they must accurately detect attacks, and they should promptly perform detections. Although improving and comparing the detection capability is the focus of most research works, the timeliness of the detection is less considered and often insufficiently evaluated or discussed. In this paper, we argue the relevance of measuring the temporal latency of attacks and errors, and we propose an evaluation approach for detectors to ensure a pragmatic trade-off between correct and in-time detection. Briefly, the approach relates the false positive rate with the temporal latency of attacks and errors, and this ultimately leads to guidelines for configuring a detector. We apply our approach by evaluating different ED and ID solutions in two industrial cases: i) an embedded railway on-board system that optimizes public mobility, and ii) an edge device for the Industrial Internet of Things. Our results show that considering latency in addition to traditional metrics like the false positive rate, precision, and coverage gives an additional fundamental perspective on the actual performance of the detector and should be considered when assessing and configuring anomaly detectors.
[ "['Tommaso Puccetti' 'Andrea Ceccarelli']" ]
null
null
2402.09084
null
null
http://arxiv.org/pdf/2402.09084v1
2024-02-14T10:57:29Z
2024-02-14T10:57:29Z
Sobolev Training for Operator Learning
This study investigates the impact of Sobolev Training on operator learning frameworks for improving model performance. Our research reveals that integrating derivative information into the loss function enhances the training process, and we propose a novel framework to approximate derivatives on irregular meshes in operator learning. Our findings are supported by both experimental evidence and theoretical analysis. This demonstrates the effectiveness of Sobolev Training in approximating the solution operators between infinite-dimensional spaces.
[ "['Namkyeong Cho' 'Junseung Ryu' 'Hyung Ju Hwang']" ]
null
null
2402.09092
null
null
http://arxiv.org/pdf/2402.09092v1
2024-02-14T11:13:33Z
2024-02-14T11:13:33Z
Three Decades of Activations: A Comprehensive Survey of 400 Activation Functions for Neural Networks
Neural networks have proven to be a highly effective tool for solving complex problems in many areas of life. Recently, their importance and practical usability have further been reinforced with the advent of deep learning. One of the important conditions for the success of neural networks is the choice of an appropriate activation function introducing non-linearity into the model. Many types of these functions have been proposed in the literature in the past, but there is no single comprehensive source containing their exhaustive overview. The absence of this overview, even in our experience, leads to redundancy and the unintentional rediscovery of already existing activation functions. To bridge this gap, our paper presents an extensive survey involving 400 activation functions, which is several times larger in scale than previous surveys. Our comprehensive compilation also references these surveys; however, its main goal is to provide the most comprehensive overview and systematization of previously published activation functions with links to their original sources. The secondary aim is to update the current understanding of this family of functions.
[ "['Vladimír Kunc' 'Jiří Kléma']" ]
null
null
2402.09095
null
null
http://arxiv.org/pdf/2402.09095v1
2024-02-14T11:16:50Z
2024-02-14T11:16:50Z
FedSiKD: Clients Similarity and Knowledge Distillation: Addressing Non-i.i.d. and Constraints in Federated Learning
In recent years, federated learning (FL) has emerged as a promising technique for training machine learning models in a decentralized manner while also preserving data privacy. The non-independent and identically distributed (non-i.i.d.) nature of client data, coupled with constraints on client or edge devices, presents significant challenges in FL. Furthermore, learning across a high number of communication rounds can be risky and potentially unsafe for model exploitation. Traditional FL approaches may suffer from these challenges. Therefore, we introduce FedSiKD, which incorporates knowledge distillation (KD) within a similarity-based federated learning framework. As clients join the system, they securely share relevant statistics about their data distribution, promoting intra-cluster homogeneity. This enhances optimization efficiency and accelerates the learning process, effectively transferring knowledge between teacher and student models and addressing device constraints. FedSiKD outperforms state-of-the-art algorithms by achieving higher accuracy, exceeding them by 25% and 18% for highly skewed data at $\alpha = \{0.1, 0.5\}$ on the HAR and MNIST datasets, respectively. Its faster convergence is illustrated by a 17% and 20% increase in accuracy within the first five rounds on the HAR and MNIST datasets, respectively, highlighting its early-stage learning proficiency. Code is publicly available and hosted on GitHub (https://github.com/SimuEnv/FedSiKD)
[ "['Yousef Alsenani' 'Rahul Mishra' 'Khaled R. Ahmed' 'Atta Ur Rahman']" ]
null
null
2402.09105
null
null
http://arxiv.org/pdf/2402.09105v1
2024-02-14T11:26:30Z
2024-02-14T11:26:30Z
Scheduling for On-Board Federated Learning with Satellite Clusters
Mega-constellations of small satellites have evolved into a source of massive amounts of valuable data. To manage this data efficiently, on-board federated learning (FL) enables satellites to train a machine learning (ML) model collaboratively without having to share the raw data. This paper introduces a scheme for scheduling on-board FL for constellations connected with intra-orbit inter-satellite links. The proposed scheme utilizes the predictable visibility pattern between satellites and the ground station (GS), both at the individual satellite level and cumulatively within the entire orbit, to mitigate intermittent connectivity and make the best use of the available time. To this end, two distinct schedulers are employed: one for coordinating the FL procedures among orbits, and the other for controlling those within each orbit. These two schedulers cooperatively determine the appropriate time to perform global updates in the GS and then allocate a suitable duration for local training to the satellites within each orbit, proportional to the usable time until the next global update. This scheme leads to improved test accuracy within a shorter time.
[ "['Nasrin Razmi' 'Bho Matthiesen' 'Armin Dekorsy' 'Petar Popovski']" ]
null
null
2402.09109
null
null
http://arxiv.org/pdf/2402.09109v1
2024-02-14T11:47:19Z
2024-02-14T11:47:19Z
Stochastic Spiking Attention: Accelerating Attention with Stochastic Computing in Spiking Networks
Spiking Neural Networks (SNNs) have been recently integrated into Transformer architectures due to their potential to reduce computational demands and to improve power efficiency. Yet, the implementation of the attention mechanism using spiking signals on general-purpose computing platforms remains inefficient. In this paper, we propose a novel framework leveraging stochastic computing (SC) to effectively execute the dot-product attention for SNN-based Transformers. We demonstrate that our approach can achieve high classification accuracy ($83.53\%$) on CIFAR-10 within 10 time steps, which is comparable to the performance of a baseline artificial neural network implementation ($83.66\%$). We estimate that the proposed SC approach can lead to over $6.3\times$ reduction in computing energy and $1.7\times$ reduction in memory access costs for a digital CMOS-based ASIC design. We experimentally validate our stochastic attention block design through an FPGA implementation, which is shown to achieve $48\times$ lower latency as compared to a GPU implementation, while consuming $15\times$ less power.
[ "['Zihang Song' 'Prabodh Katti' 'Osvaldo Simeone' 'Bipin Rajendran']" ]
null
null
2402.09113
null
null
http://arxiv.org/pdf/2402.09113v1
2024-02-14T11:55:50Z
2024-02-14T11:55:50Z
Measuring Exploration in Reinforcement Learning via Optimal Transport in Policy Space
Exploration is the key ingredient of reinforcement learning (RL) that determines the speed and success of learning. Here, we quantify and compare the amount of exploration and learning accomplished by a Reinforcement Learning (RL) algorithm. Specifically, we propose a novel measure, named Exploration Index, that quantifies the relative effort of knowledge transfer (transferability) by an RL algorithm in comparison to supervised learning (SL) that transforms the initial data distribution of RL to the corresponding final data distribution. The comparison is established by formulating learning in RL as a sequence of SL tasks, and using optimal transport based metrics to compare the total path traversed by the RL and SL algorithms in the data distribution space. We perform extensive empirical analysis on various environments and with multiple algorithms to demonstrate that the exploration index yields insights about the exploration behaviour of any RL algorithm, and also allows us to compare the exploratory behaviours of different RL algorithms.
[ "['Reabetswe M. Nkhumise' 'Debabrota Basu' 'Tony J. Prescott'\n 'Aditya Gilra']" ]
null
null
2402.09122
null
null
http://arxiv.org/pdf/2402.09122v1
2024-02-14T12:18:23Z
2024-02-14T12:18:23Z
Mixed-Output Gaussian Process Latent Variable Models
This work develops a Bayesian non-parametric approach to signal separation where the signals may vary according to latent variables. Our key contribution is to augment Gaussian Process Latent Variable Models (GPLVMs) to incorporate the case where each data point comprises the weighted sum of a known number of pure component signals, observed across several input locations. Our framework allows the use of a range of priors for the weights of each observation. This flexibility enables us to represent use cases including sum-to-one constraints for estimating fractional makeup, and binary weights for classification. Our contributions are particularly relevant to spectroscopy, where changing conditions may cause the underlying pure component signals to vary from sample to sample. To demonstrate the applicability to both spectroscopy and other domains, we consider several applications: a near-infrared spectroscopy data set with varying temperatures, a simulated data set for identifying flow configuration through a pipe, and a data set for determining the type of rock from its reflectance.
[ "['James Odgers' 'Chrysoula Kappatou' 'Ruth Misener' 'Sarah Filippi']" ]
null
null
2402.09126
null
null
http://arxiv.org/pdf/2402.09126v2
2024-04-23T16:59:46Z
2024-02-14T12:24:21Z
MPIrigen: MPI Code Generation through Domain-Specific Language Models
The imperative need to scale computation across numerous nodes highlights the significance of efficient parallel computing, particularly in the realm of Message Passing Interface (MPI) integration. The challenging parallel programming task of generating MPI-based parallel programs has remained unexplored. This study first investigates the performance of state-of-the-art language models in generating MPI-based parallel programs. Findings reveal that widely used models such as GPT-3.5 and PolyCoder (specialized multi-lingual code models) exhibit notable performance degradation when generating MPI-based programs compared to general-purpose programs. In contrast, domain-specific models such as MonoCoder, which are pretrained on the MPI-related programming languages C and C++, outperform larger models. Subsequently, we introduce a dedicated downstream task of MPI-based program generation by fine-tuning MonoCoder on HPCorpusMPI. We call the resulting model MPIrigen. We propose an innovative preprocessing step that performs completion only after observing the whole code, thus enabling better completion with a wider context. A comparative analysis against GPT-3.5 zero-shot performance, using a novel HPC-oriented evaluation method, demonstrates that MPIrigen excels in generating accurate MPI functions, reaching up to 0.8 accuracy in location and function predictions and more than 0.9 accuracy for argument predictions. The success of this tailored solution underscores the importance of domain-specific fine-tuning in optimizing language models for parallel computing code generation, paving the way for a new generation of automatic parallelization tools. The sources of this work are available at our GitHub MPIrigen repository: https://github.com/Scientific-Computing-Lab-NRCN/MPI-rigen
[ "['Nadav Schneider' 'Niranjan Hasabnis' 'Vy A. Vo' 'Tal Kadosh'\n 'Neva Krien' 'Mihai Capotă' 'Guy Tamir' 'Ted Willke' 'Nesreen Ahmed'\n 'Yuval Pinter' 'Timothy Mattson' 'Gal Oren']" ]
null
null
2402.09132
null
null
http://arxiv.org/pdf/2402.09132v4
2024-07-08T12:10:58Z
2024-02-14T12:28:38Z
Exploring the Adversarial Capabilities of Large Language Models
The proliferation of large language models (LLMs) has sparked widespread and general interest due to their strong language generation capabilities, offering great potential for both industry and research. While previous research delved into the security and privacy issues of LLMs, the extent to which these models can exhibit adversarial behavior remains largely unexplored. Addressing this gap, we investigate whether common publicly available LLMs have inherent capabilities to perturb text samples to fool safety measures, so-called adversarial examples or attacks. More specifically, we investigate whether LLMs are inherently able to craft adversarial examples out of benign samples to fool existing safety rails. Our experiments, which focus on hate speech detection, reveal that LLMs succeed in finding adversarial perturbations, effectively undermining hate speech detection systems. Our findings carry significant implications for (semi-)autonomous systems relying on LLMs, highlighting potential challenges in their interaction with existing systems and safety measures.
[ "['Lukas Struppek' 'Minh Hieu Le' 'Dominik Hintersdorf' 'Kristian Kersting']" ]
null
null
2402.09135
null
null
http://arxiv.org/pdf/2402.09135v1
2024-02-14T12:34:38Z
2024-02-14T12:34:38Z
Unconventional Computing based on Four Wave Mixing in Highly Nonlinear Waveguides
In this work we numerically analyze a photonic unconventional accelerator based on the four-wave mixing effect in highly nonlinear waveguides. The proposed scheme can act as a fully analogue system for nonlinear signal processing directly in the optical domain. By exploiting the rich Kerr-induced nonlinearities, multiple nonlinear transformations of an input signal can be generated and used for solving complex nonlinear tasks. We first evaluate the performance of our scheme in the Santa-Fe chaotic time-series prediction. The true power of this processor is revealed in the all-optical nonlinearity compensation in an optical communication scenario where we provide results superior to those offered by strong machine learning algorithms with reduced power consumption and computational complexity. Finally, we showcase how the FWM module can be used as a reconfigurable nonlinear activation module being capable of reproducing characteristic functions such as sigmoid or rectified linear unit.
[ "['Kostas Sozos' 'Stavros Deligiannidis' 'Charis Mesaritakis'\n 'Adonis Bogris']" ]
null
null
2402.09142
null
null
http://arxiv.org/pdf/2402.09142v2
2024-07-05T09:42:01Z
2024-02-14T12:48:17Z
When Representations Align: Universality in Representation Learning Dynamics
Deep neural networks come in many sizes and architectures. The choice of architecture, in conjunction with the dataset and learning algorithm, is commonly understood to affect the learned neural representations. Yet, recent results have shown that different architectures learn representations with striking qualitative similarities. Here we derive an effective theory of representation learning under the assumption that the encoding map from input to hidden representation and the decoding map from representation to output are arbitrary smooth functions. This theory schematizes representation learning dynamics in the regime of complex, large architectures, where hidden representations are not strongly constrained by the parametrization. We show through experiments that the effective theory describes aspects of representation learning dynamics across a range of deep networks with different activation functions and architectures, and exhibits phenomena similar to the "rich" and "lazy" regime. While many network behaviors depend quantitatively on architecture, our findings point to certain behaviors that are widely conserved once models are sufficiently flexible.
[ "['Loek van Rossem' 'Andrew M. Saxe']" ]
null
null
2402.09146
null
null
http://arxiv.org/pdf/2402.09146v3
2024-05-19T18:32:15Z
2024-02-14T12:55:28Z
ResQuNNs: Towards Enabling Deep Learning in Quantum Convolution Neural Networks
In this paper, we present a novel framework for enhancing the performance of Quanvolutional Neural Networks (QuNNs) by introducing trainable quanvolutional layers and addressing the critical challenges associated with them. Traditional quanvolutional layers, although beneficial for feature extraction, have largely been static, offering limited adaptability. Unlike state-of-the-art, our research overcomes this limitation by enabling training within these layers, significantly increasing the flexibility and potential of QuNNs. However, the introduction of multiple trainable quanvolutional layers induces complexities in gradient-based optimization, primarily due to the difficulty in accessing gradients across these layers. To resolve this, we propose a novel architecture, Residual Quanvolutional Neural Networks (ResQuNNs), leveraging the concept of residual learning, which facilitates the flow of gradients by adding skip connections between layers. By inserting residual blocks between quanvolutional layers, we ensure enhanced gradient access throughout the network, leading to improved training performance. Moreover, we provide empirical evidence on the strategic placement of these residual blocks within QuNNs. Through extensive experimentation, we identify an efficient configuration of residual blocks, which enables gradients across all the layers in the network that eventually results in efficient training. Our findings suggest that the precise location of residual blocks plays a crucial role in maximizing the performance gains in QuNNs. Our results mark a substantial step forward in the evolution of quantum deep learning, offering new avenues for both theoretical development and practical quantum computing applications.
[ "['Muhammad Kashif' 'Muhammad Shafique']" ]
null
null
2402.09151
null
null
http://arxiv.org/pdf/2402.09151v2
2024-06-12T16:03:31Z
2024-02-14T13:08:25Z
Chinese MentalBERT: Domain-Adaptive Pre-training on Social Media for Chinese Mental Health Text Analysis
In the current environment, psychological issues are prevalent and widespread, with social media serving as a key outlet for individuals to share their feelings. This results in the generation of vast quantities of data daily, where negative emotions have the potential to precipitate crisis situations. There is a recognized need for models capable of efficient analysis. While pre-trained language models have demonstrated their effectiveness broadly, there's a noticeable gap in pre-trained models tailored for specialized domains like psychology. To address this, we have collected a huge dataset from Chinese social media platforms and enriched it with publicly available datasets to create a comprehensive database encompassing 3.36 million text entries. To enhance the model's applicability to psychological text analysis, we integrated psychological lexicons into the pre-training masking mechanism. Building on an existing Chinese language model, we performed adaptive training to develop a model specialized for the psychological domain. We evaluated our model's performance across six public datasets, where it demonstrated improvements compared to eight other models. Additionally, in the qualitative comparison experiment, our model provided psychologically relevant predictions given the masked sentences. Due to concerns regarding data privacy, the dataset will not be made publicly available. However, we have made the pre-trained models and codes publicly accessible to the community via: https://github.com/zwzzzQAQ/Chinese-MentalBERT.
[ "['Wei Zhai' 'Hongzhi Qi' 'Qing Zhao' 'Jianqiang Li' 'Ziqi Wang' 'Han Wang'\n 'Bing Xiang Yang' 'Guanghui Fu']" ]
null
null
2402.09152
null
null
http://arxiv.org/pdf/2402.09152v2
2024-06-23T15:12:02Z
2024-02-14T13:08:26Z
Improved Regret for Bandit Convex Optimization with Delayed Feedback
We investigate bandit convex optimization (BCO) with delayed feedback, where only the loss value of the action is revealed under an arbitrary delay. Let $n, T, \bar{d}$ denote the dimensionality, time horizon, and average delay, respectively. Previous studies have achieved an $O(\sqrt{n}T^{3/4}+(n\bar{d})^{1/3}T^{2/3})$ regret bound for this problem, whose delay-independent part matches the regret of the classical non-delayed bandit gradient descent algorithm. However, there is a large gap between its delay-dependent part, i.e., $O((n\bar{d})^{1/3}T^{2/3})$, and an existing $\Omega(\sqrt{\bar{d}T})$ lower bound. In this paper, we illustrate that this gap can be filled in the worst case, where $\bar{d}$ is very close to the maximum delay $d$. Specifically, we first develop a novel algorithm, and prove that it enjoys a regret bound of $O(\sqrt{n}T^{3/4}+\sqrt{dT})$ in general. Compared with the previous result, our regret bound is better for $d=O((n\bar{d})^{2/3}T^{1/3})$, and the delay-dependent part is tight in the worst case. The primary idea is to decouple the joint effect of the delays and the bandit feedback on the regret by carefully incorporating the delayed bandit feedback with a blocking update mechanism. Furthermore, we show that the proposed algorithm can improve the regret bound to $O((nT)^{2/3}\log^{1/3}T+d\log T)$ for strongly convex functions. Finally, if the action sets are unconstrained, we demonstrate that it can be simply extended to achieve an $O(n\sqrt{T\log T}+d\log T)$ regret bound for strongly convex and smooth functions.
[ "['Yuanyu Wan' 'Chang Yao' 'Mingli Song' 'Lijun Zhang']" ]
null
null
2402.09154
null
null
http://arxiv.org/pdf/2402.09154v1
2024-02-14T13:13:26Z
2024-02-14T13:13:26Z
Attacking Large Language Models with Projected Gradient Descent
Current LLM alignment methods are readily broken through specifically crafted adversarial prompts. While crafting adversarial prompts using discrete optimization is highly effective, such attacks typically use more than 100,000 LLM calls. This high computational cost makes them unsuitable for, e.g., quantitative analyses and adversarial training. To remedy this, we revisit Projected Gradient Descent (PGD) on the continuously relaxed input prompt. Although previous attempts with ordinary gradient-based attacks largely failed, we show that carefully controlling the error introduced by the continuous relaxation tremendously boosts their efficacy. Our PGD for LLMs is up to one order of magnitude faster than state-of-the-art discrete optimization to achieve the same devastating attack results.
[ "['Simon Geisler' 'Tom Wollschläger' 'M. H. I. Abdalla'\n 'Johannes Gasteiger' 'Stephan Günnemann']" ]
null
null
2402.09164
null
null
http://arxiv.org/pdf/2402.09164v2
2024-02-29T03:29:41Z
2024-02-14T13:30:02Z
Less is More: Fewer Interpretable Region via Submodular Subset Selection
Image attribution algorithms aim to identify important regions that are highly relevant to model decisions. Although existing attribution solutions can effectively assign importance to target elements, they still face the following challenges: 1) existing attribution methods generate inaccurate small regions thus misleading the direction of correct attribution, and 2) the model cannot produce good attribution results for samples with wrong predictions. To address the above challenges, this paper re-models the above image attribution problem as a submodular subset selection problem, aiming to enhance model interpretability using fewer regions. To address the lack of attention to local regions, we construct a novel submodular function to discover more accurate small interpretation regions. To enhance the attribution effect for all samples, we also impose four different constraints on the selection of sub-regions, i.e., confidence, effectiveness, consistency, and collaboration scores, to assess the importance of various subsets. Moreover, our theoretical analysis substantiates that the proposed function is in fact submodular. Extensive experiments show that the proposed method outperforms SOTA methods on two face datasets (Celeb-A and VGG-Face2) and one fine-grained dataset (CUB-200-2011). For correctly predicted samples, the proposed method improves the Deletion and Insertion scores with an average of 4.9% and 2.5% gain relative to HSIC-Attribution. For incorrectly predicted samples, our method achieves gains of 81.0% and 18.4% compared to the HSIC-Attribution algorithm in the average highest confidence and Insertion score respectively. The code is released at https://github.com/RuoyuChen10/SMDL-Attribution.
[ "['Ruoyu Chen' 'Hua Zhang' 'Siyuan Liang' 'Jingzhi Li' 'Xiaochun Cao']" ]
null
null
2402.09165
null
null
http://arxiv.org/pdf/2402.09165v1
2024-02-14T13:31:53Z
2024-02-14T13:31:53Z
Unifying Invariance and Spuriousity for Graph Out-of-Distribution via Probability of Necessity and Sufficiency
Graph Out-of-Distribution (OOD), requiring that models trained on biased data generalize to unseen test data, has a massive range of real-world applications. One of the most mainstream methods is to extract the invariant subgraph by aligning the original and augmented data with the help of environment augmentation. However, these solutions might lead to the loss or redundancy of the semantic subgraph and further result in suboptimal generalization. To address this challenge, we propose a unified framework that exploits the Probability of Necessity and Sufficiency to extract the Invariant Substructure (PNSIS). Beyond that, this framework further leverages the spurious subgraph to boost the generalization performance in an ensemble manner to enhance robustness on noisy data. Specifically, we first consider the data generation process for graph data. Under mild conditions, we show that the invariant subgraph can be extracted by minimizing an upper bound, which is built on the theoretical advance of the probability of necessity and sufficiency. To further bridge the theory and algorithm, we devise the PNSIS model, which involves an invariant subgraph extractor for invariant graph learning as well as invariant and spurious subgraph classifiers for generalization enhancement. Experimental results demonstrate that our PNSIS model outperforms state-of-the-art techniques on several graph OOD benchmarks, highlighting its effectiveness in real-world scenarios.
[ "['Xuexin Chen' 'Ruichu Cai' 'Kaitao Zheng' 'Zhifan Jiang'\n 'Zhengting Huang' 'Zhifeng Hao' 'Zijian Li']" ]
null
null
2402.09166
null
null
http://arxiv.org/pdf/2402.09166v1
2024-02-14T13:32:23Z
2024-02-14T13:32:23Z
Deinterleaving of Discrete Renewal Process Mixtures with Application to Electronic Support Measures
In this paper, we propose a new deinterleaving method for mixtures of discrete renewal Markov chains. This method relies on the maximization of a penalized likelihood score. It exploits all available information about both the sequence of the different symbols and their arrival times. A theoretical analysis is carried out to prove that minimizing this score allows to recover the true partition of symbols in the large sample limit, under mild conditions on the component processes. This theoretical analysis is then validated by experiments on synthetic data. Finally, the method is applied to deinterleave pulse trains received from different emitters in a RESM (Radar Electronic Support Measurements) context and we show that the proposed method competes favorably with state-of-the-art methods on simulated warfare datasets.
[ "['Jean Pinsolle' 'Olivier Goudet' 'Cyrille Enderli' 'Sylvain Lamprier'\n 'Jin-Kao Hao']" ]
null
null
2402.09167
null
null
http://arxiv.org/pdf/2402.09167v1
2024-02-14T13:36:20Z
2024-02-14T13:36:20Z
Evolving Restricted Boltzmann Machine-Kohonen Network for Online Clustering
A novel online clustering algorithm is presented where an Evolving Restricted Boltzmann Machine (ERBM) is embedded with a Kohonen Network called ERBM-KNet. The proposed ERBM-KNet efficiently handles streaming data in a single-pass mode using the ERBM, employing a bias-variance strategy for neuron growing and pruning, as well as online clustering based on a cluster update strategy for cluster prediction and cluster center update using KNet. Initially, ERBM evolves its architecture while processing unlabeled image data, effectively disentangling the data distribution in the latent space. Subsequently, the KNet utilizes the feature extracted from ERBM to predict the number of clusters and updates the cluster centers. By overcoming the common challenges associated with clustering algorithms, such as prior initialization of the number of clusters and subpar clustering accuracy, the proposed ERBM-KNet offers significant improvements. Extensive experimental evaluations on four benchmarks and one industry dataset demonstrate the superiority of ERBM-KNet compared to state-of-the-art approaches.
[ "['J. Senthilnath' 'Adithya Bhattiprolu' 'Ankur Singh' 'Bangjian Zhou'\n 'Min Wu' 'Jón Atli Benediktsson' 'Xiaoli Li']" ]
null
null
2402.09173
null
null
http://arxiv.org/pdf/2402.09173v2
2024-06-23T13:52:49Z
2024-02-14T13:44:16Z
Nearly Optimal Regret for Decentralized Online Convex Optimization
We investigate decentralized online convex optimization (D-OCO), in which a set of local learners are required to minimize a sequence of global loss functions using only local computations and communications. Previous studies have established $O(n^{5/4}\rho^{-1/2}\sqrt{T})$ and ${O}(n^{3/2}\rho^{-1}\log T)$ regret bounds for convex and strongly convex functions respectively, where $n$ is the number of local learners, $\rho<1$ is the spectral gap of the communication matrix, and $T$ is the time horizon. However, there exist large gaps from the existing lower bounds, i.e., $\Omega(n\sqrt{T})$ for convex functions and $\Omega(n)$ for strongly convex functions. To fill these gaps, in this paper, we first develop novel D-OCO algorithms that can respectively reduce the regret bounds for convex and strongly convex functions to $\tilde{O}(n\rho^{-1/4}\sqrt{T})$ and $\tilde{O}(n\rho^{-1/2}\log T)$. The primary technique is to design an online accelerated gossip strategy that enjoys a faster average consensus among local learners. Furthermore, by carefully exploiting the spectral properties of a specific network topology, we enhance the lower bounds for convex and strongly convex functions to $\Omega(n\rho^{-1/4}\sqrt{T})$ and $\Omega(n\rho^{-1/2})$, respectively. These lower bounds suggest that our algorithms are nearly optimal in terms of $T$, $n$, and $\rho$.
[ "['Yuanyu Wan' 'Tong Wei' 'Mingli Song' 'Lijun Zhang']" ]
null
null
2402.09177
null
null
http://arxiv.org/pdf/2402.09177v1
2024-02-14T13:45:19Z
2024-02-14T13:45:19Z
Leveraging the Context through Multi-Round Interactions for Jailbreaking Attacks
Large Language Models (LLMs) are susceptible to Jailbreaking attacks, which aim to extract harmful information by subtly modifying the attack query. As defense mechanisms evolve, directly obtaining harmful information becomes increasingly challenging for Jailbreaking attacks. In this work, inspired by human practices of indirect context to elicit harmful information, we focus on a new attack form called Contextual Interaction Attack. The idea relies on the autoregressive nature of the generation process in LLMs. We contend that the prior context--the information preceding the attack query--plays a pivotal role in enabling potent Jailbreaking attacks. Specifically, we propose an approach that leverages preliminary question-answer pairs to interact with the LLM. By doing so, we guide the responses of the model toward revealing the 'desired' harmful information. We conduct experiments on four different LLMs and demonstrate the efficacy of this attack, which is black-box and can also transfer across LLMs. We believe this can lead to further developments and understanding of the context vector in LLMs.
[ "['Yixin Cheng' 'Markos Georgopoulos' 'Volkan Cevher'\n 'Grigorios G. Chrysos']" ]
null
null
2402.09179
null
null
http://arxiv.org/pdf/2402.09179v3
2024-05-28T11:36:00Z
2024-02-14T13:47:35Z
Instruction Backdoor Attacks Against Customized LLMs
The increasing demand for customized Large Language Models (LLMs) has led to the development of solutions like GPTs. These solutions facilitate tailored LLM creation via natural language prompts without coding. However, the trustworthiness of third-party custom versions of LLMs remains an essential concern. In this paper, we propose the first instruction backdoor attacks against applications integrated with untrusted customized LLMs (e.g., GPTs). Specifically, these attacks embed the backdoor into the custom version of LLMs by designing prompts with backdoor instructions, outputting the attacker's desired result when inputs contain the pre-defined triggers. Our attack includes 3 levels of attacks: word-level, syntax-level, and semantic-level, which adopt different types of triggers with progressive stealthiness. We stress that our attacks do not require fine-tuning or any modification to the backend LLMs, adhering strictly to GPTs development guidelines. We conduct extensive experiments on 6 prominent LLMs and 5 benchmark text classification datasets. The results show that our instruction backdoor attacks achieve the desired attack performance without compromising utility. Additionally, we propose two defense strategies and demonstrate their effectiveness in reducing such attacks. Our findings highlight the vulnerability and the potential risks of LLM customization such as GPTs.
[ "['Rui Zhang' 'Hongwei Li' 'Rui Wen' 'Wenbo Jiang' 'Yuan Zhang'\n 'Michael Backes' 'Yun Shen' 'Yang Zhang']" ]
null
null
2402.09197
null
null
http://arxiv.org/abs/2402.09197v1
2024-02-14T14:27:52Z
2024-02-14T14:27:52Z
Implementing local-explainability in Gradient Boosting Trees: Feature Contribution
Gradient Boosted Decision Trees (GBDT) is a powerful additive model based on tree ensembles. Its nature makes GBDT a black-box model even though there are multiple explainable artificial intelligence (XAI) models obtaining information by reinterpreting the model globally and locally. Each tree of the ensemble is a transparent model itself, but the final outcome is the result of a sum of these trees and is not easy to clarify. In this paper, a feature contribution method for GBDT is developed. The proposed method takes advantage of the GBDT architecture to calculate the contribution of each feature using the residue of each node. This algorithm makes it possible to calculate the sequence of node decisions behind a given prediction. Theoretical proofs and multiple experiments have been carried out to demonstrate the performance of our method, which is not only a local explainability model for the GBDT algorithm but also a unique option that reflects the GBDT's internal behavior. The proposal is aligned with the contribution of features having an impact on some artificial intelligence problems, such as the ethical analysis of Artificial Intelligence (AI), and complies with new European laws such as the General Data Protection Regulation (GDPR) concerning the right to explanation and non-discrimination.
[ "['Ángel Delgado-Panadero' 'Beatriz Hernández-Lorca'\n 'María Teresa García-Ordás' 'José Alberto Benítez-Andrades']" ]
null
null
2402.09199
null
null
http://arxiv.org/pdf/2402.09199v1
2024-02-14T14:32:16Z
2024-02-14T14:32:16Z
Ten Words Only Still Help: Improving Black-Box AI-Generated Text Detection via Proxy-Guided Efficient Re-Sampling
With the rapidly increasing application of large language models (LLMs), their abuse has caused many undesirable societal problems such as fake news, academic dishonesty, and information pollution. This makes AI-generated text (AIGT) detection of great importance. Among existing methods, white-box methods are generally superior to black-box methods in terms of performance and generalizability, but they require access to LLMs' internal states and are not applicable to black-box settings. In this paper, we propose to estimate word generation probabilities as pseudo white-box features via multiple re-sampling to help improve AIGT detection under the black-box setting. Specifically, we design POGER, a proxy-guided efficient re-sampling method, which selects a small subset of representative words (e.g., 10 words) for performing multiple re-sampling in black-box AIGT detection. Experiments on datasets containing texts from humans and seven LLMs show that POGER outperforms all baselines in macro F1 under black-box, partial white-box, and out-of-distribution settings and maintains lower re-sampling costs than its existing counterparts.
[ "['Yuhui Shi' 'Qiang Sheng' 'Juan Cao' 'Hao Mi' 'Beizhe Hu' 'Danding Wang']" ]
null
null
2402.09201
null
null
http://arxiv.org/pdf/2402.09201v2
2024-04-04T11:22:20Z
2024-02-14T14:33:39Z
Better-than-KL PAC-Bayes Bounds
Let $f(\theta, X_1), \dots, f(\theta, X_n)$ be a sequence of random elements, where $f$ is a fixed scalar function, $X_1, \dots, X_n$ are independent random variables (data), and $\theta$ is a random parameter distributed according to some data-dependent posterior distribution $P_n$. In this paper, we consider the problem of proving concentration inequalities to estimate the mean of the sequence. An example of such a problem is the estimation of the generalization error of some predictor trained by a stochastic algorithm, such as a neural network, where $f$ is a loss function. Classically, this problem is approached through a PAC-Bayes analysis where, in addition to the posterior, we choose a prior distribution which captures our belief about the inductive bias of the learning problem. Then, the key quantity in PAC-Bayes concentration bounds is a divergence that captures the complexity of the learning problem, where the de facto standard choice is the KL divergence. However, the tightness of this choice has rarely been questioned. In this paper, we challenge the tightness of the KL-divergence-based bounds by showing that it is possible to achieve a strictly tighter bound. In particular, we demonstrate new high-probability PAC-Bayes bounds with a novel and better-than-KL divergence that is inspired by Zhang et al. (2022). Our proof is inspired by recent advances in regret analysis of gambling algorithms, and its use to derive concentration inequalities. Our result is first-of-its-kind in that existing PAC-Bayes bounds with non-KL divergences are not known to be strictly better than KL. Thus, we believe our work marks the first step towards identifying optimal rates of PAC-Bayes bounds.
[ "['Ilja Kuzborskij' 'Kwang-Sung Jun' 'Yulian Wu' 'Kyoungseok Jang'\n 'Francesco Orabona']" ]
null
null
2402.09226
null
null
http://arxiv.org/pdf/2402.09226v2
2024-06-20T18:53:09Z
2024-02-14T15:10:37Z
Directional Convergence Near Small Initializations and Saddles in Two-Homogeneous Neural Networks
This paper examines gradient flow dynamics of two-homogeneous neural networks for small initializations, where all weights are initialized near the origin. For both square and logistic losses, it is shown that for sufficiently small initializations, the gradient flow dynamics spend sufficient time in the neighborhood of the origin to allow the weights of the neural network to approximately converge in direction to the Karush-Kuhn-Tucker (KKT) points of a neural correlation function that quantifies the correlation between the output of the neural network and corresponding labels in the training data set. For square loss, it has been observed that neural networks undergo saddle-to-saddle dynamics when initialized close to the origin. Motivated by this, this paper also shows a similar directional convergence among weights of small magnitude in the neighborhood of certain saddle points.
[ "['Akshay Kumar' 'Jarvis Haupt']" ]
null
null
2402.09230
null
null
http://arxiv.org/abs/2402.09230v1
2024-02-14T15:17:37Z
2024-02-14T15:17:37Z
Context Composing for Full Line Code Completion
Code Completion is one of the most used Integrated Development Environment (IDE) features, which affects the everyday life of a software developer. Modern code completion approaches moved from the composition of several static analysis-based contributors to pipelines that involve neural networks. This change allows the proposal of longer code suggestions while maintaining the relatively short time spent on generation itself. At JetBrains, we put a lot of effort into perfecting the code completion workflow so it can be both helpful and non-distracting for a programmer. We managed to ship the Full Line Code Completion feature to PyCharm Pro IDE and proved its usefulness in A/B testing on hundreds of real Python users. The paper describes our approach to context composing for the Transformer model that is a core of the feature's implementation. In addition to that, we share our next steps to improve the feature and emphasize the importance of several research aspects in the area.
[ "['Anton Semenkin' 'Yaroslav Sokolov' 'Evgeniia Vu']" ]
null
null
2402.09234
null
null
http://arxiv.org/pdf/2402.09234v2
2024-02-15T08:10:45Z
2024-02-14T15:22:59Z
Multi-Hierarchical Surrogate Learning for Structural Dynamical Crash Simulations Using Graph Convolutional Neural Networks
Crash simulations play an essential role in improving vehicle safety, design optimization, and injury risk estimation. Unfortunately, numerical solutions of such problems using state-of-the-art high-fidelity models require significant computational effort. Conventional data-driven surrogate modeling approaches create low-dimensional embeddings for evolving the dynamics in order to circumvent this computational effort. Most approaches directly operate on high-resolution data obtained from numerical discretization, which is both costly and complicated for mapping the flow of information over large spatial distances. Furthermore, working with a fixed resolution prevents the adaptation of surrogate models to environments with variable computing capacities, different visualization resolutions, and different accuracy requirements. We thus propose a multi-hierarchical framework for structurally creating a series of surrogate models for a kart frame, which is a good proxy for industrial-relevant crash simulations, at different levels of resolution. For multiscale phenomena, macroscale features are captured on a coarse surrogate, whereas microscale effects are resolved by finer ones. The learned behavior of the individual surrogates is passed from coarse to finer levels through transfer learning. In detail, we perform a mesh simplification on the kart model to obtain multi-resolution representations of it. We then train a graph-convolutional neural network-based surrogate that learns parameter-dependent low-dimensional latent dynamics on the coarsest representation. Subsequently, another, similarly structured surrogate is trained on the residual of the first surrogate using a finer resolution. This step can be repeated multiple times. By doing so, we construct multiple surrogates for the same system with varying hardware requirements and increasing accuracy.
[ "['Jonas Kneifl' 'Jörg Fehr' 'Steven L. Brunton' 'J. Nathan Kutz']" ]
null
null
2402.09236
null
null
http://arxiv.org/pdf/2402.09236v1
2024-02-14T15:23:59Z
2024-02-14T15:23:59Z
Learning Interpretable Concepts: Unifying Causal Representation Learning and Foundation Models
To build intelligent machine learning systems, there are two broad approaches. One approach is to build inherently interpretable models, as endeavored by the growing field of causal representation learning. The other approach is to build highly-performant foundation models and then invest efforts into understanding how they work. In this work, we relate these two approaches and study how to learn human-interpretable concepts from data. Weaving together ideas from both fields, we formally define a notion of concepts and show that they can be provably recovered from diverse data. Experiments on synthetic data and large language models show the utility of our unified approach.
[ "['Goutham Rajendran' 'Simon Buchholz' 'Bryon Aragam' 'Bernhard Schölkopf'\n 'Pradeep Ravikumar']" ]
null
null
2402.09239
null
null
http://arxiv.org/abs/2402.09239v1
2024-02-14T15:27:53Z
2024-02-14T15:27:53Z
Robust Training of Temporal GNNs using Nearest Neighbours based Hard Negatives
Temporal graph neural networks (TGNNs) have exhibited state-of-the-art performance in future-link prediction tasks. Training of these TGNNs is driven by a uniform-random-sampling-based unsupervised loss. During training, in the context of a positive example, the loss is computed over uninformative negatives, which introduces redundancy and sub-optimal performance. In this paper, we propose modified unsupervised learning of TGNNs, by replacing the uniform negative sampling with importance-based negative sampling. We theoretically motivate and define the dynamically computed distribution for sampling negative examples. Finally, using empirical evaluations over three real-world datasets, we show that TGNNs trained using a loss based on the proposed negative sampling provide consistently superior performance.
[ "['Shubham Gupta' 'Srikanta Bedathur']" ]
null
null
2402.09240
null
null
http://arxiv.org/pdf/2402.09240v1
2024-02-14T15:28:42Z
2024-02-14T15:28:42Z
Switch EMA: A Free Lunch for Better Flatness and Sharpness
Exponential Moving Average (EMA) is a widely used weight averaging (WA) regularization to learn flat optima for better generalization without extra cost in deep neural network (DNN) optimization. Despite achieving better flatness, existing WA methods might fall into worse final performances or require extra test-time computations. This work unveils the full potential of EMA with a single line of modification, i.e., switching the EMA parameters to the original model after each epoch, dubbed Switch EMA (SEMA). From both theoretical and empirical aspects, we demonstrate that SEMA can help DNNs reach generalization optima that achieve a better trade-off between flatness and sharpness. To verify the effectiveness of SEMA, we conduct comparison experiments with discriminative, generative, and regression tasks on vision and language datasets, including image classification, self-supervised learning, object detection and segmentation, image generation, video prediction, attribute regression, and language modeling. Comprehensive results with popular optimizers and networks show that SEMA is a free lunch for DNN training by improving performances and boosting convergence speeds.
[ "['Siyuan Li' 'Zicheng Liu' 'Juanxi Tian' 'Ge Wang' 'Zedong Wang'\n 'Weiyang Jin' 'Di Wu' 'Cheng Tan' 'Tao Lin' 'Yang Liu' 'Baigui Sun'\n 'Stan Z. Li']" ]
null
null
2402.09245
null
null
http://arxiv.org/pdf/2402.09245v1
2024-02-14T15:34:28Z
2024-02-14T15:34:28Z
Overview of the L3DAS23 Challenge on Audio-Visual Extended Reality
The primary goal of the L3DAS23 Signal Processing Grand Challenge at ICASSP 2023 is to promote and support collaborative research on machine learning for 3D audio signal processing, with a specific emphasis on 3D speech enhancement and 3D Sound Event Localization and Detection in Extended Reality applications. As part of our latest competition, we provide a brand-new dataset, which maintains the same general characteristics of the L3DAS21 and L3DAS22 datasets, but with first-order Ambisonics recordings from multiple reverberant simulated environments. Moreover, we start exploring an audio-visual scenario by providing images of these environments, as perceived by the different microphone positions and orientations. We also propose updated baseline models for both tasks that can now support audio-image couples as input and a supporting API to replicate our results. Finally, we present the results of the participants. Further details about the challenge are available at https://www.l3das.com/icassp2023.
[ "['Christian Marinoni' 'Riccardo Fosco Gramaccioni' 'Changan Chen'\n 'Aurelio Uncini' 'Danilo Comminiello']" ]
null
null
2402.09247
null
null
http://arxiv.org/pdf/2402.09247v1
2024-02-14T15:35:53Z
2024-02-14T15:35:53Z
Momentum Approximation in Asynchronous Private Federated Learning
Asynchronous protocols have been shown to improve the scalability of federated learning (FL) with a massive number of clients. Meanwhile, momentum-based methods can achieve the best model quality in synchronous FL. However, naively applying momentum in asynchronous FL algorithms leads to slower convergence and degraded model performance. It remains unclear how to effectively combine these two techniques to achieve a win-win. In this paper, we find that asynchrony introduces implicit bias to momentum updates. In order to address this problem, we propose momentum approximation, which minimizes the bias by finding an optimal weighted average of all historical model updates. Momentum approximation is compatible with secure aggregation as well as differential privacy, and can be easily integrated into production FL systems with minor communication and storage cost. We empirically demonstrate that on benchmark FL datasets, momentum approximation can achieve a $1.15\textrm{--}4\times$ speedup in convergence compared to existing asynchronous FL optimizers with momentum.
[ "['Tao Yu' 'Congzheng Song' 'Jianyu Wang' 'Mona Chitnis']" ]
null
null
2402.09249
null
null
http://arxiv.org/pdf/2402.09249v1
2024-02-14T15:37:58Z
2024-02-14T15:37:58Z
Exploring the Relationship: Transformative Adaptive Activation Functions in Comparison to Other Activation Functions
Neural networks are the state-of-the-art approach for many tasks and the activation function is one of the main building blocks that allow such performance. Recently, a novel transformative adaptive activation function (TAAF) allowing for any vertical and horizontal translation and scaling was proposed. This work sets the TAAF into the context of other activation functions. It shows that the TAAFs generalize over 50 existing activation functions and utilize similar concepts as over 70 other activation functions, underscoring the versatility of TAAFs. This comprehensive exploration positions TAAFs as a promising and adaptable addition to neural networks.
[ "['Vladimír Kunc']" ]
null
null
2402.09264
null
null
http://arxiv.org/pdf/2402.09264v3
2024-03-12T23:25:10Z
2024-02-14T15:51:28Z
UR2M: Uncertainty and Resource-Aware Event Detection on Microcontrollers
Traditional machine learning techniques are prone to generating inaccurate predictions when confronted with shifts in the distribution of data between the training and testing phases. This vulnerability can lead to severe consequences, especially in applications such as mobile healthcare. Uncertainty estimation has the potential to mitigate this issue by assessing the reliability of a model's output. However, existing uncertainty estimation techniques often require substantial computational resources and memory, making them impractical for implementation on microcontrollers (MCUs). This limitation hinders the feasibility of many important on-device wearable event detection (WED) applications, such as heart attack detection. In this paper, we present UR2M, a novel Uncertainty and Resource-aware event detection framework for MCUs. Specifically, we (i) develop an uncertainty-aware WED based on evidential theory for accurate event detection and reliable uncertainty estimation; (ii) introduce a cascade ML framework to achieve efficient model inference via early exits, by sharing shallower model layers among different event models; (iii) optimize the deployment of the model and MCU library for system efficiency. We conducted extensive experiments and compared UR2M to traditional uncertainty baselines using three wearable datasets. Our results demonstrate that UR2M achieves up to 864% faster inference speed, 857% energy-saving for uncertainty estimation, 55% memory saving on two popular MCUs, and a 22% improvement in uncertainty quantification performance. UR2M can be deployed on a wide range of MCUs, significantly expanding real-time and reliable WED applications.
[ "['Hong Jia' 'Young D. Kwon' 'Dong Ma' 'Nhat Pham' 'Lorena Qendro' 'Tam Vu'\n 'Cecilia Mascolo']" ]
null
null
2402.09268
null
null
http://arxiv.org/pdf/2402.09268v1
2024-02-14T15:54:55Z
2024-02-14T15:54:55Z
Transformers, parallel computation, and logarithmic depth
We show that a constant number of self-attention layers can efficiently simulate, and be simulated by, a constant number of communication rounds of Massively Parallel Computation. As a consequence, we show that logarithmic depth is sufficient for transformers to solve basic computational tasks that cannot be efficiently solved by several other neural sequence models and sub-quadratic transformer approximations. We thus establish parallelism as a key distinguishing property of transformers.
[ "['Clayton Sanford' 'Daniel Hsu' 'Matus Telgarsky']" ]
null
null
2402.09271
null
null
http://arxiv.org/abs/2402.09271v1
2024-02-14T15:59:22Z
2024-02-14T15:59:22Z
Hybrid Machine Learning techniques in the management of harmful algal blooms impact
Harmful algal blooms (HABs) are episodes of high concentrations of algae that are potentially toxic for human consumption. Mollusc farming can be affected by HABs because, as filter feeders, molluscs can accumulate high concentrations of marine biotoxins in their tissues. To avoid risks to human health, harvesting is prohibited when toxicity is detected. At present, the closure of production areas is based on expert knowledge, and the existence of a predictive model would help when conditions are complex and sampling is not possible. Although the concentration of toxin in meat is the method most commonly used by experts in the control of shellfish production areas, it is rarely used as a target by automatic prediction models. This is largely due to the irregularity of the data arising from the established sampling programs. As an alternative, the activity status of production areas has been proposed as a target variable, based on whether mollusc meat has a toxicity level below or above the legal limit. This new option is the most similar to the actual functioning of the control of shellfish production areas. For this purpose, we compare hybrid machine learning models, namely Neural-Network-Adding Bootstrap (BAGNET) and Discriminative Nearest Neighbor Classification (SVM-KNN), for estimating the state of production areas. The study has been carried out in several estuaries with different levels of complexity in the episodes of algal blooms to demonstrate the generalization capacity of the models in bloom detection. As a result, we observe that, with an average recall value of 93.41% and without dropping below 90% in any of the estuaries, BAGNET outperforms the other models in terms of both results and robustness.
[ "['Andres Molares-Ulloa' 'Daniel Rivero' 'Jesus Gil Ruiz'\n 'Enrique Fernandez-Blanco' 'Luis de-la-Fuente-Valentín']" ]
null
null
2402.09281
null
null
http://arxiv.org/pdf/2402.09281v1
2024-02-14T16:10:42Z
2024-02-14T16:10:42Z
Synergistic eigenanalysis of covariance and Hessian matrices for enhanced binary classification
Covariance and Hessian matrices have been analyzed separately in the literature for classification problems. However, integrating these matrices has the potential to enhance their combined power in improving classification performance. We present a novel approach that combines the eigenanalysis of a covariance matrix evaluated on a training set with a Hessian matrix evaluated on a deep learning model to achieve optimal class separability in binary classification tasks. Our approach is substantiated by formal proofs that establish its capability to maximize between-class mean distance and minimize within-class variances. By projecting data into the combined space of the most relevant eigendirections from both matrices, we achieve optimal class separability as per the linear discriminant analysis (LDA) criteria. Empirical validation across neural and health datasets consistently supports our theoretical framework and demonstrates that our method outperforms established methods. Our method stands out by addressing both LDA criteria, unlike PCA and the Hessian method, which predominantly emphasize one criterion each. This comprehensive approach captures intricate patterns and relationships, enhancing classification performance. Furthermore, through the utilization of both LDA criteria, our method outperforms LDA itself by leveraging higher-dimensional feature spaces, in accordance with Cover's theorem, which favors linear separability in higher dimensions. Our method also surpasses kernel-based methods and manifold learning techniques in performance. Additionally, our approach sheds light on complex DNN decision-making, rendering them comprehensible within a 2D space.
[ "['Agus Hartoyo' 'Jan Argasiński' 'Aleksandra Trenk' 'Kinga Przybylska'\n 'Anna Błasiak' 'Alessandro Crimi']" ]
null
null
2402.09283
null
null
http://arxiv.org/pdf/2402.09283v3
2024-03-27T13:55:14Z
2024-02-14T16:14:03Z
Attacks, Defenses and Evaluations for LLM Conversation Safety: A Survey
Large Language Models (LLMs) are now commonplace in conversation applications. However, their risks of misuse for generating harmful responses have raised serious societal concerns and spurred recent research on LLM conversation safety. Therefore, in this survey, we provide a comprehensive overview of recent studies, covering three critical aspects of LLM conversation safety: attacks, defenses, and evaluations. Our goal is to provide a structured summary that enhances understanding of LLM conversation safety and encourages further investigation into this important subject. For easy reference, we have categorized all the studies mentioned in this survey according to our taxonomy, available at: https://github.com/niconi19/LLM-conversation-safety.
[ "['Zhichen Dong' 'Zhanhui Zhou' 'Chao Yang' 'Jing Shao' 'Yu Qiao']" ]
null
null
2402.09286
null
null
http://arxiv.org/abs/2402.09286v1
2024-02-14T16:19:09Z
2024-02-14T16:19:09Z
Nutrition Facts, Drug Facts, and Model Facts: Putting AI Ethics into Practice in Gun Violence Research
Objective: Firearm injury research necessitates using data from often-exploited vulnerable populations of Black and Brown Americans. In order to minimize distrust, this study provides a framework for establishing AI trust and transparency with the general population. Methods: We propose a Model Facts template that is easily extendable and decomposes accuracy and demographics into standardized and minimally complex values. This framework allows general users to assess the validity and biases of a model without diving into technical model documentation. Examples: We apply the Model Facts template on two previously published models, a violence risk identification model and a suicide risk prediction model. We demonstrate the ease of accessing the appropriate information when the data is structured appropriately. Discussion: The Model Facts template is limited in its current form to human based data and biases. Like nutrition facts, it also will require some educational resources for users to grasp its full utility. Human computer interaction experiments should be conducted to ensure that the interaction between user interface and model interface is as desired. Conclusion: The Model Facts label is the first framework dedicated to establishing trust with end users and general population consumers. Implementation of Model Facts into firearm injury research will provide public health practitioners and those impacted by firearm injury greater faith in the tools the research provides.
[ "['Jessica Zhu' 'Dr. Michel Cukier' 'Dr. Joseph Richardson Jr']" ]
null
null
2402.09288
null
null
http://arxiv.org/pdf/2402.09288v5
2024-07-09T08:59:44Z
2024-02-14T16:21:47Z
EcoVal: An Efficient Data Valuation Framework for Machine Learning
Quantifying the value of data within a machine learning workflow can play a pivotal role in making more strategic decisions in machine learning initiatives. The existing Shapley value based frameworks for data valuation in machine learning are computationally expensive as they require a considerable amount of repeated training of the model to obtain the Shapley value. In this paper, we introduce an efficient data valuation framework, EcoVal, to estimate the value of data for machine learning models in a fast and practical manner. Instead of directly working with individual data samples, we determine the value of a cluster of similar data points. This value is further propagated amongst all the member cluster points. We show that the overall value of the data can be determined by estimating the intrinsic and extrinsic value of each data point. This is enabled by formulating the performance of a model as a \textit{production function}, a concept which is popularly used to estimate the amount of output based on factors like labor and capital in a traditional free economic market. We provide a formal proof of our valuation technique and elucidate the principles and mechanisms that enable its accelerated performance. We demonstrate the real-world applicability of our method by showcasing its effectiveness for both in-distribution and out-of-sample data. This work addresses one of the core challenges of efficient data valuation at scale in machine learning models. The code is available at \underline{https://github.com/respai-lab/ecoval}.
[ "['Ayush K Tarun' 'Vikram S Chundawat' 'Murari Mandal' 'Hong Ming Tan'\n 'Bowei Chen' 'Mohan Kankanhalli']" ]
null
null
2402.09290
null
null
http://arxiv.org/pdf/2402.09290v1
2024-02-14T16:23:23Z
2024-02-14T16:23:23Z
Learning Interpretable Policies in Hindsight-Observable POMDPs through Partially Supervised Reinforcement Learning
Deep reinforcement learning has demonstrated remarkable achievements across diverse domains such as video games, robotic control, autonomous driving, and drug discovery. Common methodologies in partially-observable domains largely lean on end-to-end learning from high-dimensional observations, such as images, without explicitly reasoning about true state. We suggest an alternative direction, introducing the Partially Supervised Reinforcement Learning (PSRL) framework. At the heart of PSRL is the fusion of both supervised and unsupervised learning. The approach leverages a state estimator to distill supervised semantic state information from high-dimensional observations which are often fully observable at training time. This yields more interpretable policies that compose state predictions with control. In parallel, it captures an unsupervised latent representation. These two representations, the semantic state and the latent state, are then fused and utilized as inputs to a policy network. This juxtaposition offers practitioners a flexible and dynamic spectrum: from emphasizing supervised state information to integrating richer, latent insights. Extensive experimental results indicate that by merging these dual representations, PSRL offers a potent balance, enhancing model interpretability while preserving, and often significantly outperforming, the performance benchmarks set by traditional methods in terms of reward and convergence speed.
[ "['Michael Lanier' 'Ying Xu' 'Nathan Jacobs' 'Chongjie Zhang'\n 'Yevgeniy Vorobeychik']" ]
null
null
2402.09299
null
null
http://arxiv.org/pdf/2402.09299v1
2024-02-14T16:41:35Z
2024-02-14T16:41:35Z
Trained Without My Consent: Detecting Code Inclusion In Language Models Trained on Code
Code auditing ensures that the developed code adheres to standards, regulations, and copyright protection by verifying that it does not contain code from protected sources. The recent advent of Large Language Models (LLMs) as coding assistants in the software development process poses new challenges for code auditing. The dataset for training these models is mainly collected from publicly available sources. This raises the issue of intellectual property infringement as developers' codes are already included in the dataset. Therefore, auditing code developed using LLMs is challenging, as it is difficult to reliably assert if an LLM used during development has been trained on specific copyrighted codes, given that we do not have access to the training datasets of these models. Given the non-disclosure of the training datasets, traditional approaches such as code clone detection are insufficient for asserting copyright infringement. To address this challenge, we propose a new approach, TraWiC, a model-agnostic and interpretable method based on membership inference for detecting code inclusion in an LLM's training dataset. We extract syntactic and semantic identifiers unique to each program to train a classifier for detecting code inclusion. In our experiments, we observe that TraWiC is capable of detecting 83.87% of codes that were used to train an LLM. In comparison, the prevalent clone detection tool NiCad is only capable of detecting 47.64%. In addition to its remarkable performance, TraWiC has low resource overhead in contrast to pair-wise clone detection that is conducted during the auditing process of tools like CodeWhisperer reference tracker, across thousands of code snippets.
[ "['Vahid Majdinasab' 'Amin Nikanjam' 'Foutse Khomh']" ]
null
null
2402.09303
null
null
http://arxiv.org/pdf/2402.09303v3
2024-07-12T12:47:19Z
2024-02-14T16:47:20Z
Comparing supervised learning dynamics: Deep neural networks match human data efficiency but show a generalisation lag
Recent research has seen many behavioral comparisons between humans and deep neural networks (DNNs) in the domain of image classification. Often, comparison studies focus on the end-result of the learning process by measuring and comparing the similarities in the representations of object categories once they have been formed. However, the process of how these representations emerge -- that is, the behavioral changes and intermediate stages observed during the acquisition -- is less often directly and empirically compared. Here we report a detailed investigation of the learning dynamics in human observers and various classic and state-of-the-art DNNs. We develop a constrained supervised learning environment to align learning-relevant conditions such as starting point, input modality, available input data and the feedback provided. Across the whole learning process we evaluate and compare how well learned representations can be generalized to previously unseen test data. Comparisons across the entire learning process indicate that DNNs demonstrate a level of data efficiency comparable to human learners, challenging some prevailing assumptions in the field. However, our results also reveal representational differences: while DNNs' learning is characterized by a pronounced generalisation lag, humans appear to immediately acquire generalizable representations without a preliminary phase of learning training set-specific information that is only later transferred to novel data.
[ "['Lukas S. Huber' 'Fred W. Mast' 'Felix A. Wichmann']" ]
null
null
2402.09305
null
null
http://arxiv.org/pdf/2402.09305v1
2024-02-14T16:49:13Z
2024-02-14T16:49:13Z
Embracing the black box: Heading towards foundation models for causal discovery from time series data
Causal discovery from time series data encompasses many existing solutions, including those based on deep learning techniques. However, these methods typically do not endorse one of the most prevalent paradigms in deep learning: end-to-end learning. To address this gap, we explore what we call Causal Pretraining, a methodology that aims to learn a direct mapping from multivariate time series to the underlying causal graphs in a supervised manner. Our empirical findings suggest that causal discovery in a supervised manner is possible, assuming that the training and test time series samples share most of their dynamics. More importantly, we found evidence that the performance of Causal Pretraining can increase with data and model size, even if the additional data do not share the same dynamics. Further, we provide examples showing that causal discovery on real-world data with causally pretrained neural networks is possible within limits. We argue that this hints at the possibility of a foundation model for causal discovery.
[ "['Gideon Stein' 'Maha Shadaydeh' 'Joachim Denzler']" ]
null
null
2402.09316
null
null
http://arxiv.org/pdf/2402.09316v1
2024-02-14T17:11:52Z
2024-02-14T17:11:52Z
Only My Model On My Data: A Privacy Preserving Approach Protecting one Model and Deceiving Unauthorized Black-Box Models
Deep neural networks are extensively applied to real-world tasks, such as face recognition and medical image classification, where privacy and data protection are critical. Image data, if not protected, can be exploited to infer personal or contextual information. Existing privacy preservation methods, like encryption, generate perturbed images that are unrecognizable even to humans. Adversarial attack approaches prohibit automated inference even for authorized stakeholders, limiting practical incentives for commercial and widespread adoption. This pioneering study tackles an unexplored practical privacy preservation use case by generating human-perceivable images that maintain accurate inference by an authorized model while evading other unauthorized black-box models of similar or dissimilar objectives, and addresses the previous research gaps. The datasets employed are ImageNet for image classification, CelebA-HQ for identity classification, and AffectNet for emotion classification. Our results show that the generated images can successfully maintain the accuracy of a protected model and degrade the average accuracy of the unauthorized black-box models to 11.97%, 6.63%, and 55.51% on the ImageNet, CelebA-HQ, and AffectNet datasets, respectively.
[ "['Weiheng Chai' 'Brian Testa' 'Huantao Ren' 'Asif Salekin'\n 'Senem Velipasalar']" ]
null
null
2402.09326
null
null
http://arxiv.org/pdf/2402.09326v1
2024-02-14T17:17:05Z
2024-02-14T17:17:05Z
Stability and Multigroup Fairness in Ranking with Uncertain Predictions
Rankings are ubiquitous across many applications, from search engines to hiring committees. In practice, many rankings are derived from the output of predictors. However, when predictors trained for classification tasks have intrinsic uncertainty, it is not obvious how this uncertainty should be represented in the derived rankings. Our work considers ranking functions: maps from individual predictions for a classification task to distributions over rankings. We focus on two aspects of ranking functions: stability to perturbations in predictions and fairness towards both individuals and subgroups. Not only is stability an important requirement for its own sake, but -- as we show -- it composes harmoniously with individual fairness in the sense of Dwork et al. (2012). While deterministic ranking functions cannot be stable aside from trivial scenarios, we show that the recently proposed uncertainty aware (UA) ranking functions of Singh et al. (2021) are stable. Our main result is that UA rankings also achieve multigroup fairness through successful composition with multiaccurate or multicalibrated predictors. Our work demonstrates that UA rankings naturally interpolate between group and individual level fairness guarantees, while simultaneously satisfying stability guarantees important whenever machine-learned predictions are used.
[ "['Siddartha Devic' 'Aleksandra Korolova' 'David Kempe' 'Vatsal Sharan']" ]
null
null
2402.09327
null
null
http://arxiv.org/pdf/2402.09327v1
2024-02-14T17:17:30Z
2024-02-14T17:17:30Z
Information Complexity of Stochastic Convex Optimization: Applications to Generalization and Memorization
In this work, we investigate the interplay between memorization and learning in the context of \emph{stochastic convex optimization} (SCO). We define memorization via the information a learning algorithm reveals about its training data points. We then quantify this information using the framework of conditional mutual information (CMI) proposed by Steinke and Zakynthinou (2020). Our main result is a precise characterization of the tradeoff between the accuracy of a learning algorithm and its CMI, answering an open question posed by Livni (2023). We show that, in the $L^2$ Lipschitz-bounded setting and under strong convexity, every learner with an excess error $\varepsilon$ has CMI bounded below by $\Omega(1/\varepsilon^2)$ and $\Omega(1/\varepsilon)$, respectively. We further demonstrate the essential role of memorization in learning problems in SCO by designing an adversary capable of accurately identifying a significant fraction of the training samples in specific SCO problems. Finally, we enumerate several implications of our results, such as a limitation of generalization bounds based on CMI and the incompressibility of samples in SCO problems.
[ "['Idan Attias' 'Gintare Karolina Dziugaite' 'Mahdi Haghifam' 'Roi Livni'\n 'Daniel M. Roy']" ]
null
null
2402.09328
null
null
http://arxiv.org/pdf/2402.09328v1
2024-02-14T17:18:03Z
2024-02-14T17:18:03Z
Connecting Algorithmic Fairness to Quality Dimensions in Machine Learning in Official Statistics and Survey Production
National Statistical Organizations (NSOs) increasingly draw on Machine Learning (ML) to improve the timeliness and cost-effectiveness of their products. When introducing ML solutions, NSOs must ensure that high standards with respect to robustness, reproducibility, and accuracy are upheld as codified, e.g., in the Quality Framework for Statistical Algorithms (QF4SA; Yung et al. 2022). At the same time, a growing body of research focuses on fairness as a pre-condition of a safe deployment of ML to prevent disparate social impacts in practice. However, fairness has not yet been explicitly discussed as a quality aspect in the context of the application of ML at NSOs. We employ Yung et al. (2022)'s QF4SA quality framework and present a mapping of its quality dimensions to algorithmic fairness. We thereby extend the QF4SA framework in several ways: we argue for fairness as its own quality dimension, we investigate the interaction of fairness with other dimensions, and we explicitly address data, both on its own and its interaction with applied methodology. In parallel with empirical illustrations, we show how our mapping can contribute to methodology in the domains of official statistics, algorithmic fairness, and trustworthy machine learning.
[ "['Patrick Oliver Schenk' 'Christoph Kern']" ]
null
null
2402.09330
null
null
http://arxiv.org/pdf/2402.09330v2
2024-05-03T09:01:17Z
2024-02-14T17:22:03Z
3D-based RNA function prediction tools in rnaglib
Understanding the connection between complex structural features of RNA and biological function is a fundamental challenge in evolutionary studies and in RNA design. However, building datasets of RNA 3D structures and making appropriate modeling choices remains time-consuming and lacks standardization. In this chapter, we describe the use of rnaglib to train supervised and unsupervised machine learning-based function prediction models on datasets of RNA 3D structures.
[ "['Carlos Oliver' 'Vincent Mallet' 'Jérôme Waldispühl']" ]
null
null
2402.09345
null
null
http://arxiv.org/pdf/2402.09345v4
2024-05-23T06:39:53Z
2024-02-14T17:49:07Z
InfoRM: Mitigating Reward Hacking in RLHF via Information-Theoretic Reward Modeling
Despite the success of reinforcement learning from human feedback (RLHF) in aligning language models with human values, reward hacking, also termed reward overoptimization, remains a critical challenge. This issue primarily arises from reward misgeneralization, where reward models (RMs) compute reward using spurious features that are irrelevant to human preferences. In this work, we tackle this problem from an information-theoretic perspective and propose a framework for reward modeling, namely InfoRM, by introducing a variational information bottleneck objective to filter out irrelevant information. Notably, we further identify a correlation between overoptimization and outliers in the IB latent space of InfoRM, establishing it as a promising tool for detecting reward overoptimization. Inspired by this finding, we propose the Cluster Separation Index (CSI), which quantifies deviations in the IB latent space, as an indicator of reward overoptimization to facilitate the development of online mitigation strategies. Extensive experiments on a wide range of settings and RM scales (70M, 440M, 1.4B, and 7B) demonstrate the effectiveness of InfoRM. Further analyses reveal that InfoRM's overoptimization detection mechanism is not only effective but also robust across a broad range of datasets, signifying a notable advancement in the field of RLHF. The code will be released upon acceptance.
[ "['Yuchun Miao' 'Sen Zhang' 'Liang Ding' 'Rong Bao' 'Lefei Zhang'\n 'Dacheng Tao']" ]
null
null
2402.09358
null
null
http://arxiv.org/pdf/2402.09358v1
2024-02-14T18:02:24Z
2024-02-14T18:02:24Z
Integrating ChatGPT into Secure Hospital Networks: A Case Study on Improving Radiology Report Analysis
This study demonstrates the first in-hospital adaptation of a cloud-based AI, similar to ChatGPT, into a secure model for analyzing radiology reports, prioritizing patient data privacy. By employing a unique sentence-level knowledge distillation method through contrastive learning, we achieve over 95% accuracy in detecting anomalies. The model also accurately flags uncertainties in its predictions, enhancing its reliability and interpretability for physicians with certainty indicators. These advancements represent significant progress in developing secure and efficient AI tools for healthcare, suggesting a promising future for in-hospital AI applications with minimal supervision.
[ "['Kyungsu Kim' 'Junhyun Park' 'Saul Langarica' 'Adham Mahmoud Alkhadrawi'\n 'Synho Do']" ]
null
null
2402.09360
null
null
http://arxiv.org/pdf/2402.09360v1
2024-02-14T18:04:36Z
2024-02-14T18:04:36Z
HiRE: High Recall Approximate Top-$k$ Estimation for Efficient LLM Inference
Autoregressive decoding with generative Large Language Models (LLMs) on accelerators (GPUs/TPUs) is often memory-bound, where most of the time is spent on transferring model parameters from high bandwidth memory (HBM) to cache. On the other hand, recent works show that LLMs can maintain quality with significant sparsity/redundancy in the feedforward (FFN) layers by appropriately training the model to operate on a top-$k$ fraction of rows/columns (where $k \approx 0.05$), thereby suggesting a way to reduce the transfer of model parameters, and hence latency. However, exploiting this sparsity for improving latency is hindered by the fact that identifying top rows/columns is data-dependent and is usually performed using full matrix operations, severely limiting potential gains. To address these issues, we introduce HiRE (High Recall Approximate Top-k Estimation). HiRE comprises two novel components: (i) a compression scheme to cheaply predict top-$k$ rows/columns with high recall, followed by full computation restricted to the predicted subset, and (ii) DA-TOP-$k$: an efficient multi-device approximate top-$k$ operator. We demonstrate that on a one billion parameter model, HiRE applied to both the softmax as well as the feedforward layers achieves almost matching pretraining and downstream accuracy, and speeds up inference latency by $1.47\times$ on a single TPUv5e device.
[ "['Yashas Samaga B L' 'Varun Yerram' 'Chong You' 'Srinadh Bhojanapalli'\n 'Sanjiv Kumar' 'Prateek Jain' 'Praneeth Netrapalli']" ]
null
null
2402.09370
null
null
http://arxiv.org/pdf/2402.09370v2
2024-06-18T01:00:58Z
2024-02-14T18:17:45Z
Pseudorandom Error-Correcting Codes
We construct pseudorandom error-correcting codes (or simply pseudorandom codes), which are error-correcting codes with the property that any polynomial number of codewords are pseudorandom to any computationally-bounded adversary. Efficient decoding of corrupted codewords is possible with the help of a decoding key. We build pseudorandom codes that are robust to substitution and deletion errors, where pseudorandomness rests on standard cryptographic assumptions. Specifically, pseudorandomness is based on either $2^{O(\sqrt{n})}$-hardness of LPN, or polynomial hardness of LPN and the planted XOR problem at low density. As our primary application of pseudorandom codes, we present an undetectable watermarking scheme for outputs of language models that is robust to cropping and a constant rate of random substitutions and deletions. The watermark is undetectable in the sense that any number of samples of watermarked text are computationally indistinguishable from text output by the original model. This is the first undetectable watermarking scheme that can tolerate a constant rate of errors. Our second application is to steganography, where a secret message is hidden in innocent-looking content. We present a constant-rate stateless steganography scheme with robustness to a constant rate of substitutions. Ours is the first stateless steganography scheme with provable steganographic security and any robustness to errors.
[ "['Miranda Christ' 'Sam Gunn']" ]
null
null
2402.09371
null
null
http://arxiv.org/pdf/2402.09371v1
2024-02-14T18:18:29Z
2024-02-14T18:18:29Z
Transformers Can Achieve Length Generalization But Not Robustly
Length generalization, defined as the ability to extrapolate from shorter training sequences to longer test ones, is a significant challenge for language models. This issue persists even with large-scale Transformers handling relatively straightforward tasks. In this paper, we test the Transformer's length generalization ability using the task of adding two integers. We show that the success of length generalization is intricately linked to the data format and the type of position encoding. Using the right combination of data format and position encodings, we show for the first time that standard Transformers can extrapolate to a sequence length that is 2.5x the input length. Nevertheless, unlike in-distribution generalization, length generalization remains fragile, significantly influenced by factors like random weight initialization and training data order, leading to large variances across different random seeds.
[ "['Yongchao Zhou' 'Uri Alon' 'Xinyun Chen' 'Xuezhi Wang' 'Rishabh Agarwal'\n 'Denny Zhou']" ]
null
null
2402.09373
null
null
http://arxiv.org/pdf/2402.09373v2
2024-07-11T23:43:18Z
2024-02-14T18:20:44Z
Loss Shaping Constraints for Long-Term Time Series Forecasting
Several applications in time series forecasting require predicting multiple steps ahead. Despite the vast amount of literature on the topic, both classical and recent deep-learning-based approaches have mostly focused on minimising performance averaged over the predicted window. We observe that this can lead to disparate distributions of errors across forecasting steps, especially for recent transformer architectures trained on popular forecasting benchmarks. That is, optimising performance on average can lead to undesirably large errors at specific time-steps. In this work, we present a Constrained Learning approach for long-term time series forecasting that aims to find the best model in terms of average performance that respects a user-defined upper bound on the loss at each time-step. We call our approach loss shaping constraints because it imposes constraints on the loss at each time step, and we leverage recent duality results to show that despite its non-convexity, the resulting problem has a bounded duality gap. We propose a practical Primal-Dual algorithm to tackle it, and demonstrate that the proposed approach exhibits competitive average performance in time series forecasting benchmarks, while shaping the distribution of errors across the predicted window.
[ "['Ignacio Hounie' 'Javier Porras-Valenzuela' 'Alejandro Ribeiro']" ]
null
null
2402.09381
null
null
http://arxiv.org/pdf/2402.09381v1
2024-02-14T18:26:58Z
2024-02-14T18:26:58Z
GraSSRep: Graph-Based Self-Supervised Learning for Repeat Detection in Metagenomic Assembly
Repetitive DNA (repeats) poses significant challenges for accurate and efficient genome assembly and sequence alignment. This is particularly true for metagenomic data, where genome dynamics such as horizontal gene transfer, gene duplication, and gene loss/gain complicate accurate genome assembly from metagenomic communities. Detecting repeats is a crucial first step in overcoming these challenges. To address this issue, we propose GraSSRep, a novel approach that leverages the assembly graph's structure through graph neural networks (GNNs) within a self-supervised learning framework to classify DNA sequences into repetitive and non-repetitive categories. Specifically, we frame this problem as a node classification task within a metagenomic assembly graph. In a self-supervised fashion, we rely on a high-precision (but low-recall) heuristic to generate pseudo-labels for a small proportion of the nodes. We then use those pseudo-labels to train a GNN embedding and a random forest classifier to propagate the labels to the remaining nodes. In this way, GraSSRep combines sequencing features with pre-defined and learned graph features to achieve state-of-the-art performance in repeat detection. We evaluate our method using simulated and synthetic metagenomic datasets. The results on the simulated data highlight our GraSSRep's robustness to repeat attributes, demonstrating its effectiveness in handling the complexity of repeated sequences. Additionally, our experiments with synthetic metagenomic datasets reveal that incorporating the graph structure and the GNN enhances our detection performance. Finally, in comparative analyses, GraSSRep outperforms existing repeat detection tools with respect to precision and recall.
[ "['Ali Azizpour' 'Advait Balaji' 'Todd J. Treangen' 'Santiago Segarra']" ]
null
null
2402.09387
null
null
http://arxiv.org/pdf/2402.09387v1
2024-02-14T18:37:40Z
2024-02-14T18:37:40Z
Active Disruption Avoidance and Trajectory Design for Tokamak Ramp-downs with Neural Differential Equations and Reinforcement Learning
The tokamak offers a promising path to fusion energy, but plasma disruptions pose a major economic risk, motivating considerable advances in disruption avoidance. This work develops a reinforcement learning approach to this problem by training a policy to safely ramp down the plasma current while avoiding limits on a number of quantities correlated with disruptions. The policy training environment is a hybrid physics and machine learning model trained on simulations of the SPARC primary reference discharge (PRD) ramp-down, an upcoming burning plasma scenario which we use as a testbed. To address physics uncertainty and model inaccuracies, the simulation environment is massively parallelized on GPU with randomized physics parameters during policy training. The trained policy is then successfully transferred to a higher fidelity simulator where it successfully ramps down the plasma while avoiding user-specified disruptive limits. We also address the crucial issue of safety criticality by demonstrating that a constraint-conditioned policy can be used as a trajectory design assistant to design a library of feed-forward trajectories to handle different physics conditions and user settings. As a library of trajectories is more interpretable and verifiable offline, we argue such an approach is a promising path for leveraging the capabilities of reinforcement learning in the safety-critical context of burning plasma tokamaks. Finally, we demonstrate how the training environment can be a useful platform for other feed-forward optimization approaches by using an evolutionary algorithm to perform optimization of feed-forward trajectories that are robust to physics uncertainty.
[ "['Allen M. Wang' 'Oswin So' 'Charles Dawson' 'Darren T. Garnier'\n 'Cristina Rea' 'Chuchu Fan']" ]
null
null
2402.09398
null
null
http://arxiv.org/pdf/2402.09398v2
2024-06-12T06:08:58Z
2024-02-14T18:54:56Z
Get More with LESS: Synthesizing Recurrence with KV Cache Compression for Efficient LLM Inference
Many computational factors limit broader deployment of large language models. In this paper, we focus on a memory bottleneck imposed by the key-value (KV) cache, a computational shortcut that requires storing previous KV pairs during decoding. While existing KV cache methods approach this problem by pruning or evicting large swaths of relatively less important KV pairs to dramatically reduce the memory footprint of the cache, they can have limited success in tasks that require recollecting a majority of previous tokens. To alleviate this issue, we propose LESS, a simple integration of a (nearly free) constant sized cache with eviction-based cache methods, such that all tokens can be queried at later decoding steps. Its ability to retain information throughout time shows merit on a variety of tasks where we demonstrate LESS can help reduce the performance gap from caching everything, sometimes even matching it, all while being efficient. Relevant code can be found at https://github.com/hdong920/LESS.
[ "['Harry Dong' 'Xinyu Yang' 'Zhenyu Zhang' 'Zhangyang Wang' 'Yuejie Chi'\n 'Beidi Chen']" ]
null
null
2402.09401
null
null
http://arxiv.org/pdf/2402.09401v1
2024-02-14T18:58:40Z
2024-02-14T18:58:40Z
Reinforcement Learning from Human Feedback with Active Queries
Aligning large language models (LLMs) with human preferences plays a key role in building modern generative models and can be achieved by reinforcement learning from human feedback (RLHF). Despite their superior performance, current RLHF approaches often require a large amount of human-labelled preference data, which is expensive to collect. In this paper, inspired by the success of active learning, we address this problem by proposing query-efficient RLHF methods. We first formalize the alignment problem as a contextual dueling bandit problem and design an active-query-based proximal policy optimization (APPO) algorithm with an $\tilde{O}(d^2/\Delta)$ regret bound and an $\tilde{O}(d^2/\Delta^2)$ query complexity, where $d$ is the dimension of the feature space and $\Delta$ is the sub-optimality gap over all the contexts. We then propose ADPO, a practical version of our algorithm based on direct preference optimization (DPO), and apply it to fine-tuning LLMs. Our experiments show that ADPO, while making only about half the number of human preference queries, matches the performance of the state-of-the-art DPO method.
[ "['Kaixuan Ji' 'Jiafan He' 'Quanquan Gu']" ]
null
null
2402.09404
null
null
http://arxiv.org/pdf/2402.09404v1
2024-02-14T18:59:33Z
2024-02-14T18:59:33Z
AQA-Bench: An Interactive Benchmark for Evaluating LLMs' Sequential Reasoning Ability
This paper introduces AQA-Bench, a novel benchmark to assess the sequential reasoning capabilities of large language models (LLMs) in algorithmic contexts, such as depth-first search (DFS). The key feature of our evaluation benchmark lies in its interactive evaluation protocol: for example, in DFS, the availability of each node's connected edge is contingent upon the model's traversal to that node, thereby necessitating the LLM's ability to effectively remember visited nodes and strategize subsequent moves. We comprehensively build AQA-Bench with three different algorithms, namely binary search, depth-first search, and breadth-first search, and use it to evaluate the sequential reasoning ability of 12 different LLMs. Our investigations reveal several interesting findings: (1) Closed-source models like GPT-4 and Gemini generally show strong sequential reasoning ability, significantly outperforming open-source LLMs. (2) Naively providing interactive examples may inadvertently hurt few-shot performance. (3) A very limited number of predecessor steps following the optimal policy can substantially boost small models' performance. (4) The scaling correlation between performance and model size is not always significant, sometimes even showcasing an inverse trend. We hope our study can catalyze future work on advancing the understanding and enhancement of LLMs' capabilities in sequential reasoning. The code is available at https://github.com/UCSC-VLAA/AQA-Bench.
[ "['Siwei Yang' 'Bingchen Zhao' 'Cihang Xie']" ]
null
null
2402.09416
null
null
http://arxiv.org/pdf/2402.09416v1
2024-01-12T18:38:14Z
2024-01-12T18:38:14Z
Deep Manifold Transformation for Protein Representation Learning
Protein representation learning is critical in various tasks in biology, such as drug design and protein structure or function prediction, which has primarily benefited from protein language models and graph neural networks. These models can capture intrinsic patterns from protein sequences and structures through masking and task-related losses. However, the learned protein representations are usually not well optimized, leading to performance degradation due to limited data, difficulty adapting to new tasks, etc. To address this, we propose a new \underline{d}eep \underline{m}anifold \underline{t}ransformation approach for universal \underline{p}rotein \underline{r}epresentation \underline{l}earning (DMTPRL). It employs manifold learning strategies to improve the quality and adaptability of the learned embeddings. Specifically, we apply a novel manifold learning loss during training based on the graph inter-node similarity. Our proposed DMTPRL method outperforms state-of-the-art baselines on diverse downstream tasks across popular datasets. This validates our approach for learning universal and robust protein representations. We promise to release the code after acceptance.
[ "['Bozhen Hu' 'Zelin Zang' 'Cheng Tan' 'Stan Z. Li']" ]
null
null
2402.09419
null
null
http://arxiv.org/pdf/2402.09419v1
2024-01-19T08:34:12Z
2024-01-19T08:34:12Z
Multidimensional Gabor-Like Filters Derived from Gaussian Functions on Logarithmic Frequency Axes
A novel wavelet-like function is presented that makes it convenient to create filter banks given mainly two parameters that influence the focus area and the filter count. This is accomplished by computing the inverse Fourier transform of Gaussian functions on logarithmic frequency axes in the frequency domain. The resulting filters are similar to Gabor filters and represent oriented brief signal oscillations of different sizes. The wavelet-like function can be thought of as a generalized Log-Gabor filter that is multidimensional, always uses Gaussian functions on logarithmic frequency axes, and innately includes low-pass filters from Gaussian functions located at the frequency domain origin.
[ "['Dherik Devakumar' 'Ole Christian Eidheim']" ]
null
null
2402.09421
null
null
http://arxiv.org/pdf/2402.09421v1
2024-01-19T16:05:13Z
2024-01-19T16:05:13Z
EEG Based Generative Depression Discriminator
Depression is a very common but serious mood disorder. In this paper, we built a generative detection network (GDN) in accordance with three physiological laws. Our aim is for the neural network to learn the relevant brain activity from the EEG signal and, at the same time, to regenerate the target electrode signal based on that brain activity. We trained two generators: the first learns the characteristics of depressed brain activity, and the second learns the characteristics of the control group's brain activity. In the test, a segment of EEG signal was put into the two generators separately; if the relationship between the EEG signal and brain activity conforms to the characteristics of a certain category, then the signal generated by the generator of the corresponding category is more consistent with the original signal. Thus it is possible to determine the category corresponding to a given segment of EEG signal. We obtained an accuracy of 92.30% on the MODMA dataset and 86.73% on the HUSM dataset. Moreover, this model is able to output explainable information, which can be used to help the user discover possible misjudgments of the network. Our code will be released.
[ "['Ziming Mao' 'Hao wu' 'Yongxi Tan' 'Yuhe Jin']" ]
null
null
2402.09424
null
null
http://arxiv.org/pdf/2402.09424v1
2024-01-21T19:23:56Z
2024-01-21T19:23:56Z
Epilepsy Seizure Detection and Prediction using an Approximate Spiking Convolutional Transformer
Epilepsy is a common disease of the nervous system. Timely prediction of seizures and intervention treatment can significantly reduce accidental injury to patients and protect their life and health. This paper presents a neuromorphic Spiking Convolutional Transformer, named Spiking Conformer, to detect and predict epileptic seizure segments from scalp long-term electroencephalogram (EEG) recordings. We report evaluation results from the Spiking Conformer model using the Boston Children's Hospital-MIT (CHB-MIT) EEG dataset. By leveraging spike-based addition operations, the Spiking Conformer significantly reduces the classification computational cost compared to the non-spiking model. Additionally, we introduce an approximate spiking neuron layer to further reduce spike-triggered neuron updates by nearly 38% without sacrificing accuracy. Using raw EEG data as input, the proposed Spiking Conformer achieved an average sensitivity of 94.9% and a specificity of 99.3% for the seizure detection task, and 96.8% and 89.5%, respectively, for the seizure prediction task, while needing >10x fewer operations compared to the non-spiking equivalent model.
[ "['Qinyu Chen' 'Congyi Sun' 'Chang Gao' 'Shih-Chii Liu']" ]
null
null
2402.09426
null
null
http://arxiv.org/pdf/2402.09426v1
2024-01-23T23:42:55Z
2024-01-23T23:42:55Z
Graph Koopman Autoencoder for Predictive Covert Communication Against UAV Surveillance
Low Probability of Detection (LPD) communication aims to obscure the very presence of radio frequency (RF) signals, going beyond just hiding the content of the communication. However, the use of Unmanned Aerial Vehicles (UAVs) introduces a challenge, as UAVs can detect RF signals from the ground by hovering over specific areas of interest. With the growing utilization of UAVs in modern surveillance, there is a crucial need for a thorough understanding of their unknown nonlinear dynamic trajectories to effectively implement LPD communication. Unfortunately, this critical information is often not readily available, posing a significant hurdle in LPD communication. To address this issue, we consider a case-study for enabling terrestrial LPD communication in the presence of multiple UAVs that are engaged in surveillance. We introduce a novel framework that combines graph neural networks (GNN) with Koopman theory to predict the trajectories of multiple fixed-wing UAVs over an extended prediction horizon. Using the predicted UAV locations, we enable LPD communication in a terrestrial ad-hoc network by controlling nodes' transmit powers to keep the received power at UAVs' predicted locations minimized. Our extensive simulations validate the efficacy of the proposed framework in accurately predicting the trajectories of multiple UAVs, thereby effectively establishing LPD communication.
[ "['Sivaram Krishnan' 'Jihong Park' 'Gregory Sherman' 'Benjamin Campbell'\n 'Jinho Choi']" ]
null
null
2402.09427
null
null
http://arxiv.org/pdf/2402.09427v1
2024-01-24T05:28:29Z
2024-01-24T05:28:29Z
DoorINet: A Deep-Learning Inertial Framework for Door-Mounted IoT Applications
Many Internet of Things applications utilize low-cost micro-electro-mechanical inertial sensors. A common task is orientation estimation. To tackle such a task, attitude and heading reference system algorithms are applied. These algorithms rely on the gyroscope readings, use the accelerometer readings to update the attitude angles, and utilize magnetometer measurements to update the heading angle. In indoor environments, magnetometers suffer from interference that degrades their performance. This mainly influences applications focused on estimating the heading angle, such as finding the heading angle of a closet or fridge door. To circumvent such situations, we propose DoorINet, an end-to-end deep-learning framework to calculate the heading angle from door-mounted, low-cost inertial sensors without using magnetometers. To evaluate our approach, we record a unique dataset containing 391 minutes of accelerometer and gyroscope measurements and corresponding ground-truth heading angle. We show that our proposed approach outperforms commonly used model-based approaches and data-driven methods.
[ "['Aleksei Zakharchenko' 'Sharon Farber' 'Itzik Klein']" ]
null
null
2402.09432
null
null
http://arxiv.org/abs/2402.09432v1
2024-01-24T22:28:14Z
2024-01-24T22:28:14Z
An Enhanced Analysis of Traffic Intelligence in Smart Cities Using Sustainable Deep Radial Function
Smart cities have revolutionized urban living by incorporating sophisticated technologies to optimize various aspects of urban infrastructure, such as transportation systems. Effective traffic management is a crucial component of smart cities, as it has a direct impact on the quality of life of residents and tourists. Utilizing deep radial basis function (RBF) networks, this paper describes a novel strategy for enhancing traffic intelligence in smart cities. Traditional methods of traffic analysis frequently rely on simplistic models that are incapable of capturing the intricate patterns and dynamics of urban traffic systems. Deep learning techniques, such as deep RBF networks, have the potential to extract valuable insights from traffic data and enable more precise predictions and decisions. In this paper, we propose an RBF-based method for enhancing smart city traffic intelligence. Deep RBF networks combine the adaptability and generalization capabilities of deep learning with the discriminative capability of radial basis functions. The proposed method can effectively learn intricate relationships and nonlinear patterns in traffic data by leveraging the hierarchical structure of deep neural networks. The deep RBF model can learn to predict traffic conditions, identify congestion patterns, and make informed recommendations for optimizing traffic management strategies by incorporating these rich and diverse data. To evaluate the efficacy of our proposed method, extensive experiments and comparisons with real-world traffic datasets from a smart city environment were conducted. In terms of prediction accuracy and efficiency, the results demonstrate that the deep RBF-based approach outperforms conventional traffic analysis methods. Smart city traffic intelligence is enhanced by the model's capacity to capture nonlinear relationships and manage large-scale datasets.
[ "['Ayad Ghany Ismaeel' 'S. J. Jereesha Mary' 'C. Anitha'\n 'Jaganathan Logeshwaran' 'Sarmad Nozad Mahmood' 'Sameer Alani'\n 'Akram H. Shather']" ]
null
null
2402.09433
null
null
http://arxiv.org/pdf/2402.09433v1
2024-01-26T03:23:09Z
2024-01-26T03:23:09Z
Electrical Behavior Association Mining for Household ShortTerm Energy Consumption Forecasting
Accurate household short-term energy consumption forecasting (STECF) is crucial for home energy management, but it is technically challenging due to the highly random behaviors of individual residential users. To improve the accuracy of STECF on a day-ahead scale, this paper proposes a novel STECF methodology that leverages association mining in electrical behaviors. First, a probabilistic association quantification and discovery method is proposed to model pairwise behavior associations and generate associated clusters. Then, a convolutional neural network-gated recurrent unit (CNN-GRU) based forecasting model is used to exploit the temporal correlation and enhance accuracy. The testing results demonstrate that this methodology yields a significant enhancement in STECF accuracy.
[ "['Heyang Yu' 'Yuxi Sun' 'Yintao Liu' 'Guangchao Geng' 'Quanyuan Jiang']" ]
null
null
2402.09434
null
null
http://arxiv.org/pdf/2402.09434v1
2024-01-26T06:08:49Z
2024-01-26T06:08:49Z
Disentangling Imperfect: A Wavelet-Infused Multilevel Heterogeneous Network for Human Activity Recognition in Flawed Wearable Sensor Data
The popularity and diffusion of wearable devices provide new opportunities for sensor-based human activity recognition that leverages deep learning-based algorithms. Although impressive advances have been made, two major challenges remain. First, sensor data is often incomplete or noisy due to sensor placement and other issues as well as data transmission failure, calling for imputation of missing values, which also introduces noise. Second, human activity has multi-scale characteristics. Thus, different groups of people and even the same person may behave differently under different circumstances. To address these challenges, we propose a multilevel heterogeneous neural network, called MHNN, for sensor data analysis. We utilize multilevel discrete wavelet decomposition to extract multi-resolution features from sensor data. This enables distinguishing signals with different frequencies, thereby suppressing noise. As the components resulting from the decomposition are heterogeneous, we equip the proposed model with heterogeneous feature extractors that enable the learning of multi-scale features. Due to the complementarity of these features, we also include a cross aggregation module for enhancing their interactions. An experimental study using seven publicly available datasets offers evidence that MHNN can outperform other cutting-edge models and offers evidence of robustness to missing values and noise. An ablation study confirms the importance of each module.
[ "['Mengna Liu' 'Dong Xiang' 'Xu Cheng' 'Xiufeng Liu' 'Dalin Zhang'\n 'Shengyong Chen' 'Christian S. Jensen']" ]
null
null
2402.09438
null
null
http://arxiv.org/pdf/2402.09438v1
2024-01-27T23:05:51Z
2024-01-27T23:05:51Z
Subject-Independent Deep Architecture for EEG-based Motor Imagery Classification
Motor imagery (MI) classification based on electroencephalogram (EEG) is a widely used technique in non-invasive brain-computer interface (BCI) systems. Since EEG recordings suffer from heterogeneity across subjects and labeled data insufficiency, designing a classifier that performs MI classification independently of the subject with limited labeled samples would be desirable. To overcome these limitations, we propose a novel subject-independent semi-supervised deep architecture (SSDA). The proposed SSDA consists of two parts: an unsupervised and a supervised element. The training set contains both labeled and unlabeled data samples from multiple subjects. First, the unsupervised part, known as the columnar spatiotemporal auto-encoder (CST-AE), extracts latent features from all the training samples by maximizing the similarity between the original and reconstructed data. A dimensional scaling approach is employed to reduce the dimensionality of the representations while preserving their discriminability. Second, a supervised part learns a classifier based on the labeled training samples using the latent features acquired in the unsupervised part. Moreover, we employ center loss in the supervised part to minimize the embedding space distance of each point in a class to its center. The model optimizes both parts of the network in an end-to-end fashion. The performance of the proposed SSDA is evaluated on test subjects who were not seen by the model during the training phase. To assess the performance, we use two benchmark EEG-based MI task datasets. The results demonstrate that SSDA outperforms state-of-the-art methods and that a small number of labeled training samples can be sufficient for strong classification performance.
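The center loss mentioned above pulls each embedding toward a learnable per-class centre. A minimal PyTorch sketch of that term follows; the feature dimension, the four motor-imagery classes, and the 0.1 weighting are assumptions, and the CST-AE and full classifier it would accompany are omitted.

```python
# Minimal sketch of a center loss term combined with cross-entropy (assumed sizes).
import torch
import torch.nn as nn

class CenterLoss(nn.Module):
    def __init__(self, num_classes: int, feat_dim: int):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, features: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # Mean squared distance of every sample to the centre of its own class.
        return ((features - self.centers[labels]) ** 2).sum(dim=1).mean()

center_loss = CenterLoss(num_classes=4, feat_dim=32)   # e.g. 4 MI classes
classifier = nn.Linear(32, 4)

feats = torch.randn(16, 32, requires_grad=True)        # stand-in latent features
labels = torch.randint(0, 4, (16,))
logits = classifier(feats)
loss = nn.functional.cross_entropy(logits, labels) + 0.1 * center_loss(feats, labels)
loss.backward()                                        # gradients flow to centres too
```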
[ "['Shadi Sartipi' 'Mujdat Cetin']" ]
null
null
2402.09439
null
null
http://arxiv.org/abs/2402.09439v2
2024-04-08T02:41:54Z
2024-01-29T14:14:39Z
Deep-Learning-Based Channel Estimation for IRS-Assisted ISAC System
Integrated sensing and communication (ISAC) and intelligent reflecting surface (IRS) are viewed as promising technologies for future generations of wireless networks. This paper investigates the channel estimation problem in an IRS-assisted ISAC system. A deep-learning framework is proposed to estimate the sensing and communication (S&C) channels in such a system. Considering different propagation environments of the S&C channels, two deep neural network (DNN) architectures are designed to realize this framework. The first DNN is devised at the ISAC base station for estimating the sensing channel, while the second DNN architecture is assigned to each downlink user equipment to estimate its communication channel. Moreover, the input-output pairs to train the DNNs are carefully designed. Simulation results show the superiority of the proposed estimation approach compared to the benchmark scheme under various signal-to-noise ratio conditions and system parameters.
[ "['Yu Liu' 'Ibrahim Al-Nahhal' 'Octavia A. Dobre' 'Fanggang Wang']" ]
null
null
2402.09440
null
null
http://arxiv.org/abs/2402.09440v2
2024-04-08T02:32:01Z
2024-01-29T14:15:11Z
Extreme Learning Machine-based Channel Estimation in IRS-Assisted Multi-User ISAC System
Multi-user integrated sensing and communication (ISAC) assisted by intelligent reflecting surface (IRS) has been recently investigated to provide high spectral and energy efficiency transmission. This paper proposes, for the first time, a practical channel estimation approach for an IRS-assisted multi-user ISAC system. The estimation problem in such a system is challenging since the sensing and communication (SAC) signals interfere with each other, and the passive IRS lacks signal processing ability. A two-stage approach is proposed to decompose the overall estimation problem into sub-problems, successively covering the estimation of the direct and reflected channels. Based on this scheme, the ISAC base station (BS) estimates all the SAC channels associated with the target and uplink users, while each downlink user estimates the downlink communication channels individually. Considering the low-cost requirements of the ISAC BS and downlink users, the proposed two-stage approach is realized by an efficient neural network (NN) framework that contains two different extreme learning machine (ELM) structures to estimate the above SAC channels. Moreover, two types of input-output pairs to train the ELMs are carefully devised, which impact the estimation accuracy and computational complexity under different system parameters. Simulation results reveal a substantial performance improvement achieved by the proposed ELM-based approach over the least-squares and NN-based benchmarks, with reduced training complexity and faster training speed.
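Extreme learning machines fix random hidden weights and solve only the output weights in closed form, which is what makes them cheap to train. The NumPy sketch below shows that core recipe on synthetic data; the input and output sizes standing in for pilot observations and channel coefficients are assumptions, not the paper's ELM structures.

```python
# Minimal sketch of an extreme learning machine regressor (assumed dimensions).
import numpy as np

rng = np.random.default_rng(0)

def train_elm(X, Y, n_hidden=128):
    W = rng.standard_normal((X.shape[1], n_hidden))   # fixed random input weights
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                            # hidden activations
    beta = np.linalg.pinv(H) @ Y                      # least-squares output weights
    return W, b, beta

def predict_elm(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

X = rng.standard_normal((1000, 16))                   # e.g. real/imag parts of pilots
Y = X @ rng.standard_normal((16, 8)) + 0.01 * rng.standard_normal((1000, 8))
W, b, beta = train_elm(X, Y)
print(np.mean((predict_elm(X, W, b, beta) - Y) ** 2)) # training MSE
```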
[ "['Yu Liu' 'Ibrahim Al-Nahhal' 'Octavia A. Dobre' 'Fanggang Wang'\n 'Hyundong Shin']" ]
null
null
2402.09441
null
null
http://arxiv.org/abs/2402.09441v2
2024-04-08T02:03:43Z
2024-01-29T14:15:48Z
Deep-Learning Channel Estimation for IRS-Assisted Integrated Sensing and Communication System
Integrated sensing and communication (ISAC) and intelligent reflecting surface (IRS) are envisioned as revolutionary technologies to enhance spectral and energy efficiencies for the next generations of wireless systems. For the first time, this paper focuses on the channel estimation problem in an IRS-assisted ISAC system. This problem is challenging due to the lack of signal processing capacity in the passive IRS, as well as the presence of mutual interference between sensing and communication (SAC) signals in ISAC systems. A three-stage approach is proposed to decouple the estimation problem into sub-problems, including the estimation of the direct SAC channels in the first stage, the reflected communication channel in the second stage, and the reflected sensing channel in the third stage. The proposed three-stage approach is based on a deep-learning framework, which involves two different convolutional neural network (CNN) architectures to estimate the channels at the full-duplex ISAC base station. Furthermore, two types of input-output pairs to train the CNNs are carefully designed, which affect the estimation performance under various signal-to-noise ratio conditions and system parameters. Simulation results validate the superiority of the proposed estimation approach compared to the least-squares baseline scheme, and its computational complexity is also analyzed.
[ "['Yu Liu' 'Ibrahim Al-Nahhal' 'Octavia A. Dobre' 'Fanggang Wang']" ]
null
null
2402.09443
null
null
http://arxiv.org/pdf/2402.09443v1
2024-01-30T17:32:02Z
2024-01-30T17:32:02Z
Review of algorithms for predicting fatigue using EEG
Fatigue detection is of paramount importance in enhancing safety, productivity, and well-being across diverse domains, including transportation, healthcare, and industry. This scientific paper presents a comprehensive investigation into the application of machine learning algorithms for the detection of physiological fatigue using Electroencephalogram (EEG) signals. The primary objective of this study was to assess the efficacy of various algorithms in predicting an individual's level of fatigue based on EEG data.
[ "['Ildar Rakhmatulin']" ]
null
null
2402.09445
null
null
http://arxiv.org/abs/2402.09445v2
2024-06-03T12:42:50Z
2024-01-31T16:53:50Z
iMove: Exploring Bio-impedance Sensing for Fitness Activity Recognition
Automatic and precise fitness activity recognition can be beneficial in areas ranging from promoting a healthy lifestyle to personalized preventative healthcare. While IMUs are currently the prominent fitness tracking modality, through iMove, we show that bio-impedance can help improve IMU-based fitness tracking through sensor fusion and contrastive learning. To evaluate our methods, we conducted an experiment including six upper body fitness activities performed by ten subjects over five days to collect synchronized data from bio-impedance across two wrists and an IMU on the left wrist. The contrastive learning framework uses the two modalities to train a better IMU-only classification model, where bio-impedance is only required at the training phase, by which the average Macro F1 score with the input of a single IMU was improved by 3.22%, reaching 84.71%, compared to 81.49% for the IMU baseline model. We have also shown how bio-impedance can improve human activity recognition (HAR) directly through sensor fusion, reaching an average Macro F1 score of 89.57% (two modalities required for both training and inference), even though bio-impedance alone has an average Macro F1 score of 75.36%, which is outperformed by the IMU alone. In addition, similar results were obtained in an extended study on lower body fitness activity classification, demonstrating the generalisability of our approach. Our findings underscore the potential of sensor fusion and contrastive learning as valuable tools for advancing fitness activity recognition, with bio-impedance playing a pivotal role in augmenting the capabilities of IMU-based systems.
[ "['Mengxi Liu' 'Vitor Fortes Rey' 'Yu Zhang' 'Lala Shakti Swarup Ray'\n 'Bo Zhou' 'Paul Lukowicz']" ]
null
null
2402.09447
null
null
http://arxiv.org/pdf/2402.09447v1
2024-01-31T23:13:38Z
2024-01-31T23:13:38Z
Wavelet Analysis of Noninvasive EEG Signals Discriminates Complex and Natural Grasp Types
This research aims to decode hand grasps from Electroencephalograms (EEGs) for dexterous neuroprosthetic development and Brain-Computer Interface (BCI) applications, especially for patients with motor disorders. In particular, it focuses on distinguishing two complex natural grasp types, power and precision, in addition to a neutral no-movement condition, using a new EEG-based BCI platform and wavelet signal processing. Wavelet analysis involved generating time-frequency and topographic maps from wavelet power coefficients. Then, by using machine learning techniques with novel wavelet features, we achieved high average accuracies: 85.16% for multiclass, 95.37% for No-Movement vs Power, 95.40% for No-Movement vs Precision, and 88.07% for Power vs Precision, demonstrating the effectiveness of these features in EEG-based grasp differentiation. In contrast to previous studies, a critical part of our study was permutation feature importance analysis, which highlighted key features for grasp classification. It revealed that the most crucial brain activities during grasping occur in the motor cortex, within the alpha and beta frequency bands. These insights demonstrate the potential of wavelet features in real-time neuroprosthetic technology and BCI applications.
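Permutation feature importance, the analysis highlighted above, can be reproduced generically with scikit-learn; the sketch below applies it to a synthetic stand-in for wavelet features. The random-forest classifier and the fabricated feature matrix are assumptions, not the study's pipeline.

```python
# Minimal sketch of permutation feature importance on synthetic "wavelet" features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 20))                 # e.g. band-power wavelet features
y = (X[:, 3] + 0.5 * X[:, 7] > 0).astype(int)      # grasp vs. no-movement stand-in

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# Shuffle one feature at a time on held-out data and record the accuracy drop.
result = permutation_importance(clf, X_te, y_te, n_repeats=20, random_state=0)
top = np.argsort(result.importances_mean)[::-1][:5]
print("most informative feature indices:", top)
```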
[ "['Ali Rabiee' 'Sima Ghafoori' 'Anna Cetera' 'Reza Abiri']" ]
null
null
2402.09448
null
null
http://arxiv.org/pdf/2402.09448v1
2024-01-31T23:35:44Z
2024-01-31T23:35:44Z
A Comparative Study of Conventional and Tripolar EEG for High-Performance Reach-to-Grasp BCI Systems
This study aims to enhance BCI applications for individuals with motor impairments by comparing the effectiveness of tripolar EEG (tEEG) with conventional EEG. The focus is on interpreting and decoding various grasping movements, such as power grasp and precision grasp. The goal is to determine which EEG technology is more effective in processing and translating grasp-related neural signals. The approach involved experimenting on ten healthy participants who performed two distinct grasp movements: power grasp and precision grasp, with a no-movement condition serving as the baseline. Our research presents a thorough comparison between EEG and tEEG in decoding grasping movements. This comparison spans several key parameters, including signal-to-noise ratio (SNR), spatial resolution via functional connectivity, ERPs, and wavelet time-frequency analysis. Additionally, our study involved extracting and analyzing statistical features from the wavelet coefficients, and both binary and multiclass classification methods were employed. Four machine learning algorithms were used to evaluate the decoding accuracies. Our results indicated that tEEG demonstrated superior performance over conventional EEG in various aspects. This included a higher signal-to-noise ratio, enhanced spatial resolution, and more informative data in ERPs and wavelet time-frequency analysis. The use of tEEG led to notable improvements in decoding accuracy for differentiating movement types. Specifically, tEEG achieved around 90% accuracy in binary and 75.97% in multiclass classification. These results are markedly better than those from standard EEG, which recorded a maximum of 77.85% and 61.27% in similar tasks, respectively. These findings highlight the superior effectiveness of tEEG over EEG in decoding grasp types and its competitive or superior performance in complex classifications compared with existing research.
[ "['Ali Rabiee' 'Sima Ghafoori' 'Anna Cetera' 'Walter Besio' 'Reza Abiri']" ]
null
null
2402.09450
null
null
http://arxiv.org/pdf/2402.09450v3
2024-03-19T16:17:00Z
2024-02-02T10:04:13Z
Guiding Masked Representation Learning to Capture Spatio-Temporal Relationship of Electrocardiogram
Electrocardiograms (ECG) are widely employed as a diagnostic tool for monitoring electrical signals originating from the heart. Recent machine learning research efforts have focused on screening for various diseases using ECG signals. However, developing such screening applications is challenging because labeled ECG data are limited. Achieving general representation through self-supervised learning (SSL) is a well-known approach to overcome the scarcity of labeled data; however, a naive application of SSL to ECG data, without considering the spatio-temporal relationships inherent in ECG signals, may yield suboptimal results. In this paper, we introduce ST-MEM (Spatio-Temporal Masked Electrocardiogram Modeling), designed to learn spatio-temporal features by reconstructing masked 12-lead ECG data. ST-MEM outperforms other SSL baseline methods in various experimental settings for arrhythmia classification tasks. Moreover, we demonstrate that ST-MEM is adaptable to various lead combinations. Through quantitative and qualitative analysis, we show a spatio-temporal relationship within ECG data. Our code is available at https://github.com/bakqui/ST-MEM.
[ "['Yeongyeon Na' 'Minje Park' 'Yunwon Tae' 'Sunghoon Joo']" ]
null
null
2402.09452
null
null
http://arxiv.org/pdf/2402.09452v1
2024-02-03T08:58:53Z
2024-02-03T08:58:53Z
Data Distribution Dynamics in Real-World WiFi-Based Patient Activity Monitoring for Home Healthcare
This paper examines the application of WiFi signals for real-world monitoring of daily activities in home healthcare scenarios. While the state-of-the-art of WiFi-based activity recognition is promising in lab environments, challenges arise in real-world settings due to environmental, subject, and system configuration variables, affecting accuracy and adaptability. The research involved deploying systems in various settings and analyzing data shifts. It aims to guide realistic development of robust, context-aware WiFi sensing systems for elderly care. The findings suggest a shift in WiFi-based activity sensing, bridging the gap between academic research and practical applications, enhancing life quality through technology.
[ "['Mahathir Monjur' 'Jia Liu' 'Jingye Xu' 'Yuntong Zhang' 'Xiaomeng Wang'\n 'Chengdong Li' 'Hyejin Park' 'Wei Wang' 'Karl Shieh' 'Sirajum Munir'\n 'Jing Wang' 'Lixin Song' 'Shahriar Nirjon']" ]
null
null
2402.09453
null
null
http://arxiv.org/pdf/2402.09453v1
2024-02-05T03:57:30Z
2024-02-05T03:57:30Z
Improving EEG Signal Classification Accuracy Using Wasserstein Generative Adversarial Networks
Electroencephalography (EEG) plays a vital role in recording brain activities and is integral to the development of brain-computer interface (BCI) technologies. However, the limited availability and high variability of EEG signals present substantial challenges in creating reliable BCIs. To address this issue, we propose a practical solution drawing on the latest developments in deep learning and Wasserstein Generative Adversarial Network (WGAN). The WGAN was trained on the BCI2000 dataset, consisting of around 1500 EEG recordings and 64 channels from 45 individuals. The generated EEG signals were evaluated via three classifiers yielding improved average accuracies. The quality of generated signals measured using Frechet Inception Distance (FID) yielded scores of 1.345 and 11.565 for eyes-open and closed respectively. Even without a spectral or spatial loss term, our WGAN model was able to emulate the spectral and spatial properties of the EEG training data. The WGAN-generated data mirrored the dominant alpha activity during closed-eye resting and high delta waves in the training data in its topographic map and power spectral density (PSD) plot. Our research testifies to the potential of WGANs in addressing the limited EEG data issue for BCI development by enhancing a small dataset to improve classifier generalizability.
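A WGAN trains a critic whose loss approximates the negative Wasserstein distance between real and generated batches. The PyTorch sketch below shows the classic objective with weight clipping on toy data; the tiny MLPs, the flattened EEG window size, and the clipping variant (rather than a gradient penalty) are assumptions, not the paper's generator.

```python
# Minimal sketch of the WGAN objective with weight clipping (assumed architectures).
import torch
import torch.nn as nn

latent_dim, signal_dim = 64, 640                        # e.g. a flattened EEG window
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, signal_dim))
D = nn.Sequential(nn.Linear(signal_dim, 256), nn.ReLU(), nn.Linear(256, 1))
opt_g = torch.optim.RMSprop(G.parameters(), lr=5e-5)
opt_d = torch.optim.RMSprop(D.parameters(), lr=5e-5)

real = torch.randn(32, signal_dim)                      # stand-in for real EEG batches
for _ in range(5):                                      # critic updates per generator step
    fake = G(torch.randn(32, latent_dim)).detach()
    loss_d = D(fake).mean() - D(real).mean()            # approximate -Wasserstein distance
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    for p in D.parameters():                            # clipping keeps the critic ~Lipschitz
        p.data.clamp_(-0.01, 0.01)

loss_g = -D(G(torch.randn(32, latent_dim))).mean()      # generator maximises critic score
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```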
[ "['Joshua Park' 'Priyanshu Mahey' 'Ore Adeniyi']" ]
null
null
2402.09456
null
null
http://arxiv.org/pdf/2402.09456v2
2024-02-25T04:48:38Z
2024-02-07T06:10:47Z
Optimistic Thompson Sampling for No-Regret Learning in Unknown Games
This work tackles the complexities of multi-player scenarios in \emph{unknown games}, where the primary challenge lies in navigating the uncertainty of the environment through bandit feedback alongside strategic decision-making. We introduce Thompson Sampling (TS)-based algorithms that exploit the information of opponents' actions and reward structures, leading to a substantial reduction in experimental budgets -- achieving over tenfold improvements compared to conventional approaches. Notably, our algorithms demonstrate that, given specific reward structures, the regret bound depends logarithmically on the total action space, significantly alleviating the curse of multi-player. Furthermore, we unveil the \emph{Optimism-then-NoRegret} (OTN) framework, a pioneering methodology that seamlessly incorporates our advancements with established algorithms, showcasing its utility in practical scenarios such as traffic routing and radar sensing in the real world.
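For readers unfamiliar with Thompson Sampling, the NumPy sketch below shows the basic posterior-sampling loop on a Bernoulli bandit; the paper's game-theoretic algorithms, which additionally exploit opponents' actions and reward structure, are substantially richer than this toy.

```python
# Minimal sketch of Thompson Sampling on a 3-armed Bernoulli bandit (toy example).
import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([0.2, 0.5, 0.7])        # hypothetical arm payoffs
alpha = np.ones(3)                            # Beta posterior parameters per arm
beta = np.ones(3)

for _ in range(2000):
    theta = rng.beta(alpha, beta)             # one posterior sample per arm
    arm = int(np.argmax(theta))               # play the arm that looks best this round
    reward = rng.random() < true_means[arm]
    alpha[arm] += reward
    beta[arm] += 1 - reward

print("posterior means:", alpha / (alpha + beta))
```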
[ "['Yingru Li' 'Liangqi Liu' 'Wenqiang Pu' 'Hao Liang' 'Zhi-Quan Luo']" ]
null
null
2402.09459
null
null
http://arxiv.org/abs/2402.09459v1
2024-02-04T19:08:34Z
2024-02-04T19:08:34Z
Custom IMU-Based Wearable System for Robust 2.4 GHz Wireless Human Body Parts Orientation Tracking and 3D Movement Visualization on an Avatar
Recent studies confirm the applicability of Inertial Measurement Unit (IMU)-based systems for human motion analysis. Notwithstanding, high-end IMU-based commercial solutions are yet too expensive and complex to democratize their use among a wide range of potential users. Less featured entry-level commercial solutions are being introduced in the market, trying to fill this gap, but still present some limitations that need to be overcome. At the same time, there is a growing number of scientific papers using not commercial, but custom do-it-yourself IMU-based systems in medical and sports applications. Even though these solutions can help to popularize the use of this technology, they have more limited features and the description on how to design and build them from scratch is yet too scarce in the literature. The aim of this work is two-fold: (1) Proving the feasibility of building an affordable custom solution aimed at simultaneous multiple body parts orientation tracking; while providing a detailed bottom-up description of the required hardware, tools, and mathematical operations to estimate and represent 3D movement in real-time. (2) Showing how the introduction of a custom 2.4 GHz communication protocol including a channel hopping strategy can address some of the current communication limitations of entry-level commercial solutions. The proposed system can be used for wireless real-time human body parts orientation tracking with up to 10 custom sensors, at least at 50 Hz. In addition, it provides a more reliable motion data acquisition in Bluetooth and Wi-Fi crowded environments, where the use of entry-level commercial solutions might be unfeasible. This system can be used as a groundwork for developing affordable human motion analysis solutions that do not require an accurate kinematic analysis.
[ "['Javier González-Alonso' 'David Oviedo-Pastor' 'Héctor J. Aguado'\n 'Francisco J. Díaz-Pernas' 'David González-Ortega'\n 'Mario Martínez-Zarzuela']" ]
null
null
2402.09460
null
null
http://arxiv.org/abs/2402.09460v1
2024-02-08T06:14:12Z
2024-02-08T06:14:12Z
Unsupervised learning based end-to-end delayless generative fixed-filter active noise control
Delayless noise control is achieved by our earlier generative fixed-filter active noise control (GFANC) framework through efficient coordination between the co-processor and real-time controller. However, the one-dimensional convolutional neural network (1D CNN) in the co-processor requires initial training using labelled noise datasets. Labelling noise data can be resource-intensive and may introduce some biases. In this paper, we propose an unsupervised-GFANC approach to simplify the 1D CNN training process and enhance its practicality. During training, the co-processor and real-time controller are integrated into an end-to-end differentiable ANC system. This enables us to use the accumulated squared error signal as the loss for training the 1D CNN. With this unsupervised learning paradigm, the unsupervised-GFANC method not only omits the labelling process but also exhibits better noise reduction performance compared to the supervised GFANC method in real noise experiments.
[ "['Zhengding Luo' 'Dongyuan Shi' 'Xiaoyi Shen' 'Woon-Seng Gan']" ]
null
null
2402.09461
null
null
http://arxiv.org/pdf/2402.09461v1
2024-02-08T06:36:29Z
2024-02-08T06:36:29Z
A Novel Approach to WaveNet Architecture for RF Signal Separation with Learnable Dilation and Data Augmentation
In this paper, we address the intricate issue of RF signal separation by presenting a novel adaptation of the WaveNet architecture that introduces learnable dilation parameters, significantly enhancing signal separation in dense RF spectrums. Our focused architectural refinements and innovative data augmentation strategies have markedly improved the model's ability to discern complex signal sources. This paper details our comprehensive methodology, including the refined model architecture, data preparation techniques, and the strategic training strategy that have been pivotal to our success. The efficacy of our approach is evidenced by the substantial improvements recorded: a 58.82% increase in SINR at a BER of $10^{-3}$ for OFDM-QPSK with EMI Signal 1, surpassing traditional benchmarks. Notably, our model achieved first place in the challenge \cite{datadrivenrf2024}, demonstrating its superior performance and establishing a new standard for machine learning applications within the RF communications domain.
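The backbone being adapted is a stack of dilated 1-D convolutions. The PyTorch sketch below shows a conventional WaveNet-style stack with fixed, exponentially growing dilations; making the dilation itself learnable, the paper's contribution, goes beyond torch's integer `dilation` argument and is not reproduced here. Channel counts and depth are assumptions.

```python
# Minimal sketch of a WaveNet-style dilated convolution stack (fixed dilations only).
import torch
import torch.nn as nn

class DilatedStack(nn.Module):
    def __init__(self, channels: int = 32, n_layers: int = 6):
        super().__init__()
        self.layers = nn.ModuleList([
            nn.Conv1d(channels, channels, kernel_size=3,
                      dilation=2 ** i, padding=2 ** i)   # padding keeps the length unchanged
            for i in range(n_layers)
        ])

    def forward(self, x):
        for conv in self.layers:
            x = x + torch.tanh(conv(x))                  # simple residual update
        return x

x = torch.randn(4, 32, 4096)        # 4 RF snippets, 32 channels, 4096 samples
print(DilatedStack()(x).shape)      # torch.Size([4, 32, 4096])
```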
[ "['Yu Tian' 'Ahmed Alhammadi' 'Abdullah Quran' 'Abubakar Sani Ali']" ]
null
null
2402.09464
null
null
http://arxiv.org/abs/2402.09464v1
2024-02-08T19:55:07Z
2024-02-08T19:55:07Z
Different Algorithms (Might) Uncover Different Patterns: A Brain-Age Prediction Case Study
Machine learning is a rapidly evolving field with a wide range of applications, including biological signal analysis, where novel algorithms often improve the state-of-the-art. However, robustness to algorithmic variability, measured by whether different algorithms consistently uncover similar findings, is seldom explored. In this paper we investigate whether established hypotheses in brain-age prediction from EEG research validate across algorithms. First, we surveyed the literature and identified various features known to be informative for brain-age prediction. We employed diverse feature extraction techniques, processing steps, and models, and utilized the interpretative power of SHapley Additive exPlanations (SHAP) values to align our findings with the existing research in the field. Few of our models achieved state-of-the-art performance on the specific dataset we utilized. Moreover, our analysis demonstrated that while most models do uncover similar patterns in the EEG signals, some variability could still be observed. Finally, a few prominent findings could only be validated using specific models. We conclude by suggesting remedies to the potential implications of this lack of robustness to model variability.
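SHAP attributions of the kind used in the study can be computed with the shap package; the sketch below does so for a synthetic brain-age style regressor. The gradient-boosting model and the fabricated "EEG features" are assumptions, not the paper's models or data.

```python
# Minimal sketch of SHAP attributions for a brain-age style regressor (synthetic data).
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.standard_normal((400, 12))                 # e.g. per-band spectral features
age = 40 + 5 * X[:, 0] - 3 * X[:, 5] + rng.standard_normal(400)

model = GradientBoostingRegressor(random_state=0).fit(X, age)
explainer = shap.TreeExplainer(model)              # exact SHAP values for tree ensembles
shap_values = explainer.shap_values(X[:50])        # shape: (50 samples, 12 features)
print(np.abs(shap_values).mean(axis=0))            # mean |SHAP| per feature
```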
[ "['Tobias Ettling' 'Sari Saba-Sadiya' 'Gemma Roig']" ]
null
null
2402.09467
null
null
http://arxiv.org/pdf/2402.09467v1
2024-02-11T21:25:14Z
2024-02-11T21:25:14Z
Optimal Thresholding Linear Bandit
We study a novel pure exploration problem: the $\epsilon$-Thresholding Bandit Problem (TBP) with fixed confidence in stochastic linear bandits. We prove a lower bound for the sample complexity and extend an algorithm designed for Best Arm Identification in the linear case to TBP that is asymptotically optimal.
[ "['Eduardo Ochoa Rivera' 'Ambuj Tewari']" ]
null
null
2402.09469
null
null
http://arxiv.org/pdf/2402.09469v2
2024-05-24T07:28:24Z
2024-02-12T05:52:06Z
Fourier Circuits in Neural Networks: Unlocking the Potential of Large Language Models in Mathematical Reasoning and Modular Arithmetic
In the evolving landscape of machine learning, a pivotal challenge lies in deciphering the internal representations harnessed by neural networks and Transformers. Building on recent progress toward comprehending how networks execute distinct target functions, our study embarks on an exploration of the underlying reasons behind networks adopting specific computational strategies. We direct our focus to the complex algebraic learning task of modular addition involving $k$ inputs. Our research presents a thorough analytical characterization of the features learned by stylized one-hidden layer neural networks and one-layer Transformers in addressing this task. A cornerstone of our theoretical framework is the elucidation of how the principle of margin maximization shapes the features adopted by one-hidden layer neural networks. Let $p$ denote the modulus, $D_p$ denote the dataset of modular arithmetic with $k$ inputs, and $m$ denote the network width. We demonstrate that, with a neuron count of $m \geq 2^{2k-2} \cdot (p-1)$, these networks attain a maximum $L_{2,k+1}$-margin on the dataset $D_p$. Furthermore, we establish that each hidden-layer neuron aligns with a specific Fourier spectrum, integral to solving modular addition problems. By correlating our findings with the empirical observations of similar studies, we contribute to a deeper comprehension of the intrinsic computational mechanisms of neural networks. Furthermore, we observe similar computational mechanisms in the attention matrix of the one-layer Transformer. This research stands as a significant stride in unraveling their operation complexities, particularly in the realm of complex algebraic tasks.
[ "['Jiuxiang Gu' 'Chenyang Li' 'Yingyu Liang' 'Zhenmei Shi' 'Zhao Song'\n 'Tianyi Zhou']" ]
null
null
2402.09470
null
null
http://arxiv.org/pdf/2402.09470v2
2024-06-06T17:39:53Z
2024-02-12T08:16:10Z
Rolling Diffusion Models
Diffusion models have recently been increasingly applied to temporal data such as video, fluid mechanics simulations, or climate data. These methods generally treat subsequent frames equally regarding the amount of noise in the diffusion process. This paper explores Rolling Diffusion: a new approach that uses a sliding window denoising process. It ensures that the diffusion process progressively corrupts through time by assigning more noise to frames that appear later in a sequence, reflecting greater uncertainty about the future as the generation process unfolds. Empirically, we show that when the temporal dynamics are complex, Rolling Diffusion is superior to standard diffusion. In particular, this result is demonstrated in a video prediction task using the Kinetics-600 video dataset and in a chaotic fluid dynamics forecasting experiment.
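The core idea, assigning more noise to later frames inside a sliding window, can be illustrated in a few lines. The NumPy sketch below uses a simple linear ramp of noise levels as an assumed schedule; the paper's actual rolling schedule, parameterisation, and sampler are not reproduced.

```python
# Minimal sketch of per-frame noise levels in a sliding window (assumed linear ramp).
import numpy as np

def window_noise_levels(window: int) -> np.ndarray:
    """Frame k in the window gets noise level (k+1)/window, so later frames are noisier."""
    return (np.arange(window) + 1) / window

rng = np.random.default_rng(0)
frames = rng.standard_normal((8, 16, 16))          # 8 video frames, 16x16 "pixels"
sigma = window_noise_levels(window=8)

# Variance-preserving corruption: the last frame is almost pure noise.
noisy = (np.sqrt(1 - sigma[:, None, None] ** 2) * frames
         + sigma[:, None, None] * rng.standard_normal(frames.shape))
print(np.round(sigma, 2))                          # [0.12 0.25 ... 1.0]
```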
[ "['David Ruhe' 'Jonathan Heek' 'Tim Salimans' 'Emiel Hoogeboom']" ]
null
null
2402.09471
null
null
http://arxiv.org/pdf/2402.09471v1
2024-02-12T08:38:53Z
2024-02-12T08:38:53Z
Machine Learning for Stochastic Parametrisation
Atmospheric models used for weather and climate prediction are traditionally formulated in a deterministic manner. In other words, given a particular state of the resolved scale variables, the most likely forcing from the sub-grid scale processes is estimated and used to predict the evolution of the large-scale flow. However, the lack of scale-separation in the atmosphere means that this approach is a large source of error in forecasts. Over recent years, an alternative paradigm has developed: the use of stochastic techniques to characterise uncertainty in small-scale processes. These techniques are now widely used across weather, sub-seasonal, seasonal, and climate timescales. In parallel, recent years have also seen significant progress in replacing parametrisation schemes using machine learning (ML). This has the potential to both speed up and improve our numerical models. However, the focus to date has largely been on deterministic approaches. In this position paper, we bring together these two key developments, and discuss the potential for data-driven approaches for stochastic parametrisation. We highlight early studies in this area, and draw attention to the novel challenges that remain.
[ "['Hannah M. Christensen' 'Salah Kouhen' 'Greta Miller' 'Raghul Parthipan']" ]
null
null
2402.09473
null
null
http://arxiv.org/pdf/2402.09473v1
2024-02-12T10:03:31Z
2024-02-12T10:03:31Z
One-for-many Counterfactual Explanations by Column Generation
In this paper, we consider the problem of generating a set of counterfactual explanations for a group of instances, with the one-for-many allocation rule, where one explanation is allocated to a subgroup of the instances. For the first time, we solve the problem of minimizing the number of explanations needed to explain all the instances, while considering sparsity by limiting the number of features allowed to be changed collectively in each explanation. A novel column generation framework is developed to efficiently search for the explanations. Our framework can be applied to any black-box classifier, like neural networks. Compared with a simple adaptation of a mixed-integer programming formulation from the literature, the column generation framework dominates in terms of scalability, computational performance and quality of the solutions.
[ "['Andrea Lodi' 'Jasone Ramírez-Ayerbe']" ]
null
null
2402.09474
null
null
http://arxiv.org/pdf/2402.09474v2
2024-04-28T20:05:45Z
2024-02-12T11:04:08Z
Deciphering Heartbeat Signatures: A Vision Transformer Approach to Explainable Atrial Fibrillation Detection from ECG Signals
Remote patient monitoring based on wearable single-lead electrocardiogram (ECG) devices has significant potential for enabling the early detection of heart disease, especially in combination with artificial intelligence (AI) approaches for automated heart disease detection. There have been prior studies applying AI approaches based on deep learning for heart disease detection. However, these models are yet to be widely accepted as a reliable aid for clinical diagnostics, in part due to the current black-box perception surrounding many AI algorithms. In particular, there is a need to identify the key features of the ECG signal that contribute toward making an accurate diagnosis, thereby enhancing the interpretability of the model. In the present study, we develop a vision transformer approach to identify atrial fibrillation based on single-lead ECG data. A residual network (ResNet) approach is also developed for comparison with the vision transformer approach. These models are applied to the Chapman-Shaoxing dataset to classify atrial fibrillation, as well as another common arrhythmia, sinus bradycardia, and normal sinus rhythm heartbeats. The models enable the identification of the key regions of the heartbeat that determine the resulting classification, and highlight the importance of P-waves and T-waves, as well as heartbeat duration and signal amplitude, in distinguishing normal sinus rhythm from atrial fibrillation and sinus bradycardia.
[ "['Aruna Mohan' 'Danne Elbers' 'Or Zilbershot' 'Fatemeh Afghah'\n 'David Vorchheimer']" ]
null
null
2402.09477
null
null
http://arxiv.org/pdf/2402.09477v1
2024-02-12T22:56:07Z
2024-02-12T22:56:07Z
PANORAMIA: Privacy Auditing of Machine Learning Models without Retraining
We introduce a privacy auditing scheme for ML models that relies on membership inference attacks using generated data as "non-members". This scheme, which we call PANORAMIA, quantifies the privacy leakage for large-scale ML models without control of the training process or model re-training and only requires access to a subset of the training data. To demonstrate its applicability, we evaluate our auditing scheme across multiple ML domains, ranging from image and tabular data classification to large-scale language models.
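Membership inference, the primitive this audit builds on, can be illustrated with a simple confidence-based test. In the sketch below (NumPy + scikit-learn) the target model's confidence is scored for member and non-member points; the synthetic data, the logistic target model, and the AUC summary are assumptions, and PANORAMIA's generated non-members and audit statistic are not shown.

```python
# Minimal sketch of a confidence-based membership inference test (toy setup).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 10))
y = (X[:, 0] > 0).astype(int)
members, nonmembers = X[:1000], X[1000:]            # first half is "member" training data
target = LogisticRegression().fit(members, y[:1000])

def confidence(model, data):
    # Confidence in the predicted class, used as the membership score.
    return model.predict_proba(data).max(axis=1)

scores = np.concatenate([confidence(target, members), confidence(target, nonmembers)])
labels = np.concatenate([np.ones(1000), np.zeros(1000)])
# On this simple, well-generalising toy model the AUC stays near 0.5 (little leakage);
# memorising models on real data push it higher.
print("membership AUC:", round(roc_auc_score(labels, scores), 3))
```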
[ "['Mishaal Kazmi' 'Hadrien Lautraite' 'Alireza Akbari' 'Mauricio Soroco'\n 'Qiaoyue Tang' 'Tao Wang' 'Sébastien Gambs' 'Mathias Lécuyer']" ]