categories: string
doi: string
id: string
year: float64
venue: string
link: string
updated: string
published: string
title: string
abstract: string
authors: list
null
null
2403.15309
null
null
http://arxiv.org/pdf/2403.15309v1
2024-03-22T15:59:24Z
2024-03-22T15:59:24Z
Controlled Training Data Generation with Diffusion Models
In this work, we present a method to control a text-to-image generative model to produce training data specifically "useful" for supervised learning. Unlike previous works that employ an open-loop approach and pre-define prompts to generate new data using either a language model or human expertise, we develop an automated closed-loop system which involves two feedback mechanisms. The first mechanism uses feedback from a given supervised model and finds adversarial prompts that result in image generations that maximize the model loss. While these adversarial prompts result in diverse data informed by the model, they are not informed of the target distribution, which can be inefficient. Therefore, we introduce the second feedback mechanism that guides the generation process towards a certain target distribution. We call the method combining these two mechanisms Guided Adversarial Prompts. We perform our evaluations on different tasks, datasets and architectures, with different types of distribution shifts (spuriously correlated data, unseen domains) and demonstrate the efficiency of the proposed feedback mechanisms compared to open-loop approaches.
[ "['Teresa Yeo' 'Andrei Atanov' 'Harold Benoit' 'Aleksandr Alekseev'\n 'Ruchira Ray' 'Pooya Esmaeil Akhoondi' 'Amir Zamir']" ]
null
null
2403.15312
null
null
http://arxiv.org/pdf/2403.15312v1
2024-03-22T16:04:26Z
2024-03-22T16:04:26Z
A Wasserstein perspective of Vanilla GANs
The empirical success of Generative Adversarial Networks (GANs) caused an increasing interest in theoretical research. The statistical literature is mainly focused on Wasserstein GANs and generalizations thereof, which especially allow for good dimension reduction properties. Statistical results for Vanilla GANs, the original optimization problem, are still rather limited and require assumptions such as smooth activation functions and equal dimensions of the latent space and the ambient space. To bridge this gap, we draw a connection from Vanilla GANs to the Wasserstein distance. By doing so, existing results for Wasserstein GANs can be extended to Vanilla GANs. In particular, we obtain an oracle inequality for Vanilla GANs in Wasserstein distance. The assumptions of this oracle inequality are designed to be satisfied by network architectures commonly used in practice, such as feedforward ReLU networks. By providing a quantitative result for the approximation of a Lipschitz function by a feedforward ReLU network with bounded Hölder norm, we derive a rate of convergence for Vanilla GANs as well as Wasserstein GANs as estimators of the unknown probability distribution.
[ "['Lea Kunkel' 'Mathias Trabs']" ]
null
null
2403.15316
null
null
http://arxiv.org/pdf/2403.15316v2
2024-06-17T17:25:42Z
2024-03-22T16:10:38Z
Ultrasound Imaging based on the Variance of a Diffusion Restoration Model
Despite today's prevalence of ultrasound imaging in medicine, ultrasound signal-to-noise ratio is still affected by several sources of noise and artefacts. Moreover, enhancing ultrasound image quality involves balancing concurrent factors like contrast, resolution, and speckle preservation. Recently, there has been progress in both model-based and learning-based approaches addressing the problem of ultrasound image reconstruction. Bringing together the best of both worlds, we propose a hybrid reconstruction method combining an ultrasound linear direct model with a learning-based prior coming from a generative Denoising Diffusion model. More specifically, we rely on the unsupervised fine-tuning of a pre-trained Denoising Diffusion Restoration Model (DDRM). Given the nature of multiplicative noise inherent to ultrasound, this paper proposes an empirical model to characterize the stochasticity of diffusion reconstruction of ultrasound images, and shows the value of its variance as an echogenicity map estimator. We conduct experiments on synthetic, in-vitro, and in-vivo data, demonstrating the efficacy of our variance imaging approach in achieving high-quality image reconstructions from single plane-wave acquisitions and in comparison to state-of-the-art methods. The code is available at: https://github.com/Yuxin-Zhang-Jasmine/DRUSvar
[ "['Yuxin Zhang' 'Clément Huneau' 'Jérôme Idier' 'Diana Mateus']" ]
null
null
2403.15360
null
null
http://arxiv.org/pdf/2403.15360v2
2024-04-24T18:19:21Z
2024-03-22T17:22:56Z
SiMBA: Simplified Mamba-Based Architecture for Vision and Multivariate Time series
Transformers have widely adopted attention networks for sequence mixing and MLPs for channel mixing, playing a pivotal role in achieving breakthroughs across domains. However, recent literature highlights issues with attention networks, including low inductive bias and quadratic complexity concerning input sequence length. State Space Models (SSMs) like S4 and others (Hippo, Global Convolutions, liquid S4, LRU, Mega, and Mamba) have emerged to address the above issues and help handle longer sequence lengths. Mamba, while being the state-of-the-art SSM, has a stability issue when scaled to large networks for computer vision datasets. We propose SiMBA, a new architecture that introduces Einstein FFT (EinFFT) for channel modeling by specific eigenvalue computations and uses the Mamba block for sequence modeling. Extensive performance studies across image and time-series benchmarks demonstrate that SiMBA outperforms existing SSMs, bridging the performance gap with state-of-the-art transformers. Notably, SiMBA establishes itself as the new state-of-the-art SSM on ImageNet and transfer learning benchmarks such as Stanford Cars and Flowers, as well as task learning benchmarks and seven time-series benchmark datasets. The project page is available at https://github.com/badripatro/Simba.
[ "['Badri N. Patro' 'Vijay S. Agneeswaran']" ]
null
null
2403.15361
null
null
http://arxiv.org/pdf/2403.15361v1
2024-03-22T17:23:37Z
2024-03-22T17:23:37Z
Learning Topological Representations for Deep Image Understanding
In many scenarios, especially biomedical applications, the correct delineation of complex fine-scaled structures such as neurons, tissues, and vessels is critical for downstream analysis. Despite the strong predictive power of deep learning methods, they do not provide a satisfactory representation of these structures, thus creating significant barriers in scalable annotation and downstream analysis. In this dissertation, we tackle such challenges by proposing novel representations of these topological structures in a deep learning framework. We leverage the mathematical tools from topological data analysis, i.e., persistent homology and discrete Morse theory, to develop principled methods for better segmentation and uncertainty estimation, which will become powerful tools for scalable annotation.
[ "['Xiaoling Hu']" ]
null
null
2403.15363
null
null
http://arxiv.org/pdf/2403.15363v1
2024-03-22T17:31:21Z
2024-03-22T17:31:21Z
Cascading Blackout Severity Prediction with Statistically-Augmented Graph Neural Networks
Higher variability in grid conditions, resulting from growing renewable penetration and increased incidence of extreme weather events, has increased the difficulty of screening for scenarios that may lead to catastrophic cascading failures. Traditional power-flow-based tools for assessing cascading blackout risk are too slow to properly explore the space of possible failures and load/generation patterns. We add to the growing literature of faster graph-neural-network (GNN)-based techniques, developing two novel techniques for the estimation of blackout magnitude from initial grid conditions. First, we propose several methods for employing an initial classification step to filter out safe "non-blackout" scenarios prior to magnitude estimation. Second, using insights from the statistical properties of cascading blackouts, we propose a method for facilitating non-local message passing in our GNN models. We validate these two approaches on a large simulated dataset, and show the potential of both to increase blackout size estimation performance.
[ "['Joe Gorka' 'Tim Hsu' 'Wenting Li' 'Yury Maximov' 'Line Roald']" ]
null
null
2403.15365
null
null
http://arxiv.org/pdf/2403.15365v2
2024-03-25T03:06:08Z
2024-03-22T17:33:11Z
A Transfer Attack to Image Watermarks
Watermarking has been widely deployed by industry to detect AI-generated images. The robustness of such watermark-based detectors against evasion attacks in the white-box and black-box settings is well understood in the literature. However, their robustness in the no-box setting is much less understood. In particular, multiple studies claimed that image watermarks are robust in such a setting. In this work, we propose a new transfer evasion attack against image watermarks in the no-box setting. Our transfer attack adds a perturbation to a watermarked image to evade multiple surrogate watermarking models trained by the attacker, and the perturbed watermarked image also evades the target watermarking model. Our major contribution is to show, both theoretically and empirically, that watermark-based AI-generated image detectors are not robust to evasion attacks even if the attacker has access to neither the watermarking model nor the detection API.
[ "['Yuepeng Hu' 'Zhengyuan Jiang' 'Moyang Guo' 'Neil Gong']" ]
null
null
2403.15370
null
null
http://arxiv.org/pdf/2403.15370v1
2024-03-22T17:49:11Z
2024-03-22T17:49:11Z
Augmented Reality based Simulated Data (ARSim) with multi-view consistency for AV perception networks
Detecting a diverse range of objects under various driving scenarios is essential for the effectiveness of autonomous driving systems. However, the real-world data collected often lacks the necessary diversity presenting a long-tail distribution. Although synthetic data has been utilized to overcome this issue by generating virtual scenes, it faces hurdles such as a significant domain gap and the substantial efforts required from 3D artists to create realistic environments. To overcome these challenges, we present ARSim, a fully automated, comprehensive, modular framework designed to enhance real multi-view image data with 3D synthetic objects of interest. The proposed method integrates domain adaptation and randomization strategies to address covariate shift between real and simulated data by inferring essential domain attributes from real data and employing simulation-based randomization for other attributes. We construct a simplified virtual scene using real data and strategically place 3D synthetic assets within it. Illumination is achieved by estimating light distribution from multiple images capturing the surroundings of the vehicle. Camera parameters from real data are employed to render synthetic assets in each frame. The resulting augmented multi-view consistent dataset is used to train a multi-camera perception network for autonomous vehicles. Experimental results on various AV perception tasks demonstrate the superior performance of networks trained on the augmented dataset.
[ "['Aqeel Anwar' 'Tae Eun Choe' 'Zian Wang' 'Sanja Fidler' 'Minwoo Park']" ]
null
null
2403.15371
null
null
http://arxiv.org/pdf/2403.15371v2
2024-07-12T14:52:49Z
2024-03-22T17:50:43Z
Can large language models explore in-context?
We investigate the extent to which contemporary Large Language Models (LLMs) can engage in exploration, a core capability in reinforcement learning and decision making. We focus on native performance of existing LLMs, without training interventions. We deploy LLMs as agents in simple multi-armed bandit environments, specifying the environment description and interaction history entirely in-context, i.e., within the LLM prompt. We experiment with GPT-3.5, GPT-4, and Llama2, using a variety of prompt designs, and find that the models do not robustly engage in exploration without substantial interventions: i) Across all of our experiments, only one configuration resulted in satisfactory exploratory behavior: GPT-4 with chain-of-thought reasoning and an externally summarized interaction history, presented as sufficient statistics; ii) All other configurations did not result in robust exploratory behavior, including those with chain-of-thought reasoning but unsummarized history. Although these findings can be interpreted positively, they suggest that external summarization -- which may not be possible in more complex settings -- is important for obtaining desirable behavior from LLM agents. We conclude that non-trivial algorithmic interventions, such as fine-tuning or dataset curation, may be required to empower LLM-based decision making agents in complex settings.
[ "['Akshay Krishnamurthy' 'Keegan Harris' 'Dylan J. Foster' 'Cyril Zhang'\n 'Aleksandrs Slivkins']" ]
null
null
2403.15385
null
null
http://arxiv.org/pdf/2403.15385v1
2024-03-22T17:59:37Z
2024-03-22T17:59:37Z
LATTE3D: Large-scale Amortized Text-To-Enhanced3D Synthesis
Recent text-to-3D generation approaches produce impressive 3D results but require time-consuming optimization that can take up to an hour per prompt. Amortized methods like ATT3D optimize multiple prompts simultaneously to improve efficiency, enabling fast text-to-3D synthesis. However, they cannot capture high-frequency geometry and texture details and struggle to scale to large prompt sets, so they generalize poorly. We introduce LATTE3D, addressing these limitations to achieve fast, high-quality generation on a significantly larger prompt set. Key to our method is 1) building a scalable architecture and 2) leveraging 3D data during optimization through 3D-aware diffusion priors, shape regularization, and model initialization to achieve robustness to diverse and complex training prompts. LATTE3D amortizes both neural field and textured surface generation to produce highly detailed textured meshes in a single forward pass. LATTE3D generates 3D objects in 400ms, and can be further enhanced with fast test-time optimization.
[ "['Kevin Xie' 'Jonathan Lorraine' 'Tianshi Cao' 'Jun Gao' 'James Lucas'\n 'Antonio Torralba' 'Sanja Fidler' 'Xiaohui Zeng']" ]
null
null
2403.15389
null
null
http://arxiv.org/pdf/2403.15389v1
2024-03-22T17:59:58Z
2024-03-22T17:59:58Z
DiffusionMTL: Learning Multi-Task Denoising Diffusion Model from Partially Annotated Data
Recently, there has been an increased interest in the practical problem of learning multiple dense scene understanding tasks from partially annotated data, where each training sample is only labeled for a subset of the tasks. The missing task labels in training lead to low-quality and noisy predictions, as can be observed from state-of-the-art methods. To tackle this issue, we reformulate partially-labeled multi-task dense prediction as a pixel-level denoising problem, and propose a novel multi-task denoising diffusion framework coined DiffusionMTL. It designs a joint diffusion and denoising paradigm to model a potential noisy distribution in the task prediction or feature maps and generate rectified outputs for different tasks. To exploit multi-task consistency in denoising, we further introduce a Multi-Task Conditioning strategy, which can implicitly utilize the complementary nature of the tasks to help learn the unlabeled tasks, leading to an improvement in the denoising performance of the different tasks. Extensive quantitative and qualitative experiments demonstrate that the proposed multi-task denoising diffusion model can significantly improve multi-task prediction maps, and outperform the state-of-the-art methods on three challenging multi-task benchmarks, under two different partial-labeling evaluation settings. The code is available at https://prismformore.github.io/diffusionmtl/.
[ "['Hanrong Ye' 'Dan Xu']" ]
null
null
2403.15393
null
null
http://arxiv.org/pdf/2403.15393v1
2024-02-09T22:12:20Z
2024-02-09T22:12:20Z
Detection of Opioid Users from Reddit Posts via an Attention-based Bidirectional Recurrent Neural Network
The opioid epidemic, referring to the growing number of hospitalizations and deaths caused by opioid overdose and addiction, has become a severe health problem in the United States. Many strategies have been developed by the federal and local governments and health communities to combat this crisis. Among them, improving our understanding of the epidemic through better health surveillance is one of the top priorities. In addition to direct testing, machine learning approaches may also allow us to detect opioid users by analyzing data from social media because many opioid users may choose not to take the tests but may share their experiences on social media anonymously. In this paper, we take advantage of recent advances in machine learning and collect and analyze user posts from Reddit, a popular social network, with the goal of identifying opioid users. Posts from more than 1,000 users who have posted on three subreddits over a period of one month have been collected. In addition to the ones that contain keywords such as opioid, opiate, or heroin, we have also collected posts that contain slang words for opioids such as black or chocolate. We apply an attention-based bidirectional long short-term memory (LSTM) model to identify opioid users. Experimental results show that the approach significantly outperforms competitive algorithms in terms of F1-score. Furthermore, the model allows us to extract the most informative words, such as opiate, opioid, and black, from posts via the attention layer, which provides more insight into how the machine learning algorithm works in distinguishing drug users from non-drug users.
[ "['Yuchen Wang' 'Zhengyu Fang' 'Wei Du' 'Shuai Xu' 'Rong Xu' 'Jing Li']" ]
null
null
2403.15394
null
null
http://arxiv.org/pdf/2403.15394v1
2024-02-15T14:56:00Z
2024-02-15T14:56:00Z
"Model Cards for Model Reporting" in 2024: Reclassifying Category of Ethical Considerations in Terms of Trustworthiness and Risk Management
In 2019, the paper entitled "Model Cards for Model Reporting" introduced a new tool for documenting model performance and encouraged the practice of transparent reporting for a defined list of categories. One of the categories detailed in that paper is ethical considerations, which includes the subcategories of data, human life, mitigations, risks and harms, and use cases. We propose to reclassify this category in the original model card due to the recent maturing of the field known as trustworthy AI, a term which analyzes whether the algorithmic properties of the model indicate that the AI system is deserving of trust from its stakeholders. In our examination of trustworthy AI, we highlight three respected organizations - the European Commission's High-Level Expert Group on AI, the OECD, and the U.S.-based NIST - that have written guidelines on various aspects of trustworthy AI. These recent publications converge on numerous characteristics of the term, including accountability, explainability, fairness, privacy, reliability, robustness, safety, security, and transparency, while recognizing that the implementation of trustworthy AI varies by context. Our reclassification of the original model-card category known as ethical considerations involves a two-step process: 1) adding a new category known as trustworthiness, where the subcategories will be derived from the discussion of trustworthy AI in our paper, and 2) maintaining the subcategories of ethical considerations under a renamed category known as risk environment and risk management, a title which we believe better captures today's understanding of the essence of these topics. We hope that this reclassification will further the goals of the original paper and continue to prompt those releasing trained models to accompany these models with documentation that will assist in the evaluation of their algorithmic properties.
[ "['DeBrae Kennedy-Mayo' 'Jake Gord']" ]
null
null
2403.15399
null
null
http://arxiv.org/pdf/2403.15399v1
2024-02-18T07:35:01Z
2024-02-18T07:35:01Z
ChatGPT in Linear Algebra: Strides Forward, Steps to Go
As soon as a new technology emerges, the education community explores its affordances and the possibilities to apply it in education. In this paper, we analyze sessions with ChatGPT around topics in basic Linear Algebra. We reflect on the progress made by ChatGPT over the past year in our area of interest, emphasising the vast improvement in grappling with Linear Algebra problems. In particular, we address the question of whether this software can be a teaching assistant or even somehow replace the human teacher. As of the time this paper is written, the answer is generally negative. For the small part where the answer can be positive, some reflections about an original instrumental genesis are given. Communication with the software gives the impression of talking to a human, and sometimes the question is whether the software understands the question or not. Therefore, the reader's attention is drawn to the fact that ChatGPT works on a statistical basis and not according to reflection and understanding.
[ "['Eli Bagno' 'Thierry Dana-Picard' 'Shulamit Reches']" ]
null
null
2403.15408
null
null
http://arxiv.org/abs/2403.15408v1
2024-03-01T01:16:27Z
2024-03-01T01:16:27Z
Multi-modal Heart Failure Risk Estimation based on Short ECG and Sampled Long-Term HRV
Cardiovascular diseases, including Heart Failure (HF), remain a leading global cause of mortality, often evading early detection. In this context, accessible and effective risk assessment is indispensable. Traditional approaches rely on resource-intensive diagnostic tests, typically administered after the onset of symptoms. The widespread availability of electrocardiogram (ECG) technology and the power of Machine Learning are emerging as viable alternatives within smart healthcare. In this paper, we propose several multi-modal approaches that combine 30-second ECG recordings and approximate long-term Heart Rate Variability (HRV) data to estimate the risk of HF hospitalization. We introduce two survival models: an XGBoost model with Accelerated Failure Time (AFT) incorporating comprehensive ECG features and a ResNet model that learns from the raw ECG. We extend these with our novel long-term HRVs extracted from the combination of ultra-short-term beat-to-beat measurements taken over the day. To capture their temporal dynamics, we propose a survival model comprising ResNet and Transformer architectures (TFM-ResNet). Our experiments demonstrate high model performance for HF risk assessment with a concordance index of 0.8537 compared to 14 survival models and competitive discrimination power on various external ECG datasets. After transferability tests with Apple Watch data, our approach implemented in the myHeartScore App offers cost-effective and highly accessible HF risk assessment, contributing to its prevention and management.
[ "['Sergio González' 'Abel Ko-Chun Yi' 'Wan-Ting Hsieh' 'Wei-Chao Chen'\n 'Chun-Li Wang' 'Victor Chien-Chia Wu' 'Shang-Hung Chang']" ]
null
null
2403.15409
null
null
http://arxiv.org/pdf/2403.15409v1
2024-03-02T12:09:16Z
2024-03-02T12:09:16Z
Coupled generator decomposition for fusion of electro- and magnetoencephalography data
Data fusion modeling can identify common features across diverse data sources while accounting for source-specific variability. Here we introduce the concept of a coupled generator decomposition and demonstrate how it generalizes sparse principal component analysis (SPCA) for data fusion. Leveraging data from a multisubject, multimodal (electro- and magnetoencephalography (EEG and MEG)) neuroimaging experiment, we demonstrate the efficacy of the framework in identifying common features in response to face perception stimuli, while accommodating modality- and subject-specific variability. Through split-half cross-validation of EEG/MEG trials, we investigate the optimal model order and regularization strengths for models of varying complexity, comparing these to a group-level model assuming shared brain responses to stimuli. Our findings reveal altered ~170 ms fusiform face area activation for scrambled faces, as opposed to real faces, particularly evident in the multimodal, multisubject model. Model parameters were inferred using stochastic optimization in PyTorch, demonstrating comparable performance to conventional quadratic programming inference for SPCA but with considerably faster execution. We provide an easily accessible toolbox for coupled generator decomposition that includes data fusion for SPCA, archetypal analysis and directional archetypal analysis. Overall, our approach offers a promising new avenue for data fusion.
[ "['Anders Stevnhoved Olsen' 'Jesper Duemose Nielsen' 'Morten Mørup']" ]
null
null
2403.15415
null
null
http://arxiv.org/pdf/2403.15415v2
2024-06-27T11:35:11Z
2024-03-07T16:17:33Z
Physics-informed and Unsupervised Riemannian Domain Adaptation for Machine Learning on Heterogeneous EEG Datasets
Combining electroencephalogram (EEG) datasets for supervised machine learning (ML) is challenging due to session, subject, and device variability. ML algorithms typically require identical features at train and test time, complicating analysis due to varying sensor numbers and positions across datasets. Simple channel selection discards valuable data, leading to poorer performance, especially with datasets sharing few channels. To address this, we propose an unsupervised approach leveraging EEG signal physics. We map EEG channels to fixed positions using field interpolation, facilitating source-free domain adaptation. Leveraging Riemannian geometry classification pipelines and transfer learning steps, our method demonstrates robust performance in brain-computer interface (BCI) tasks and potential biomarker applications. Comparative analysis against a statistical-based approach known as Dimensionality Transcending, a signal-based imputation called ComImp, source-dependent methods, as well as common channel selection and spherical spline interpolation, was conducted with leave-one-dataset-out validation on six public BCI datasets for a right-hand/left-hand classification task. Numerical experiments show that in the presence of few shared channels in train and test, the field interpolation consistently outperforms other methods, demonstrating enhanced classification performance across all datasets. When more channels are shared, field interpolation was found to be competitive with other methods and faster to compute than source-dependent methods.
[ "['Apolline Mellot' 'Antoine Collas' 'Sylvain Chevallier' 'Denis Engemann'\n 'Alexandre Gramfort']" ]
null
null
2403.15416
null
null
http://arxiv.org/pdf/2403.15416v1
2024-03-08T07:47:27Z
2024-03-08T07:47:27Z
Fuzzy hyperparameters update in a second order optimization
This research presents a hybrid approach to accelerate convergence in a second-order optimization. An online finite difference approximation of the diagonal Hessian matrix is introduced, along with fuzzy inferencing of several hyperparameters. Competitive results have been achieved.
[ "['Abdelaziz Bensadok' 'Muhammad Zeeshan Babar']" ]
null
null
2403.15417
null
null
http://arxiv.org/pdf/2403.15417v2
2024-04-05T18:17:08Z
2024-03-08T21:33:03Z
Enhancing Automatic Modulation Recognition for IoT Applications Using Transformers
Automatic modulation recognition (AMR) is vital for accurately identifying modulation types within incoming signals, a critical task for optimizing operations within edge devices in IoT ecosystems. This paper presents an innovative approach that leverages Transformer networks, initially designed for natural language processing, to address the challenges of efficient AMR. Our transformer network architecture is designed with the mindset of real-time edge computing on IoT devices. Four tokenization techniques are proposed and explored for creating proper embeddings of RF signals, specifically focusing on overcoming the limitations related to the model size often encountered in IoT scenarios. Extensive experiments reveal that our proposed method outperformed advanced deep learning techniques, achieving the highest recognition accuracy. Notably, our model achieves an accuracy of 65.75 on the RML2016 and 65.80 on the CSPB.ML.2018+ dataset.
[ "['Narges Rashvand' 'Kenneth Witham' 'Gabriel Maldonado' 'Vinit Katariya'\n 'Nishanth Marer Prabhu' 'Gunar Schirner' 'Hamed Tabkhi']" ]
null
null
2403.15419
null
null
http://arxiv.org/pdf/2403.15419v1
2024-03-10T11:28:33Z
2024-03-10T11:28:33Z
Attention is all you need for boosting graph convolutional neural network
Graph Convolutional Neural Networks (GCNs) possess strong capabilities for processing graph data in non-grid domains. They can capture the topological structure and node features of graphs and integrate them into nodes' final representations. GCNs have been extensively studied in various fields, such as recommendation systems, social networks, and protein molecular structures. With the increasing application of graph neural networks, research has focused on improving their performance while compressing their size. In this work, a plug-in module named Graph Knowledge Enhancement and Distillation Module (GKEDM) is proposed. GKEDM can enhance node representations and improve the performance of GCNs by extracting and aggregating graph information via a multi-head attention mechanism. Furthermore, GKEDM can serve as an auxiliary transferor for knowledge distillation. With a specially designed attention distillation method, GKEDM can distill the knowledge of large teacher models into high-performance and compact student models. Experiments on multiple datasets demonstrate that GKEDM can significantly improve the performance of various GCNs with minimal overhead. Furthermore, it can efficiently transfer distilled knowledge from large teacher networks to small student networks via attention distillation.
[ "['Yinwei Wu']" ]
null
null
2403.15421
null
null
http://arxiv.org/pdf/2403.15421v1
2024-03-12T19:34:18Z
2024-03-12T19:34:18Z
Agile gesture recognition for low-power applications: customisation for generalisation
Automated hand gesture recognition has long been a focal point in the AI community. Traditionally, research in this field has predominantly focused on scenarios with access to a continuous flow of hand images. This focus has been driven by the widespread use of cameras and the abundant availability of image data. However, there is an increasing demand for gesture recognition technologies that operate on low-power sensor devices. This is due to rising concerns about data leakage and end-user privacy, as well as the limited battery capacity and computing power of low-cost devices. Moreover, the challenge of data collection for individually designed hardware also hinders the generalisation of a gesture recognition model. In this study, we unveil a novel methodology for pattern recognition systems using adaptive and agile error correction, designed to enhance the performance of legacy gesture recognition models on devices with limited battery capacity and computing power. This system comprises a compact Support Vector Machine as the base model for live gesture recognition. Additionally, it features an adaptive agile error corrector that employs few-shot learning within the feature space induced by high-dimensional kernel mappings. The error corrector can be customised for each user, allowing for dynamic adjustments to the gesture prediction based on their movement patterns while maintaining the agile performance of its base model on a low-cost and low-power micro-controller. This proposed system is distinguished by its compact size, rapid processing speed, and low power consumption, making it ideal for a wide range of embedded systems.
[ "['Ying Liu' 'Liucheng Guo' 'Valeri A. Makarovc' 'Alexander Gorbana'\n 'Evgeny Mirkesa' 'Ivan Y. Tyukin']" ]
null
null
2403.15422
null
null
http://arxiv.org/pdf/2403.15422v1
2024-03-12T22:22:14Z
2024-03-12T22:22:14Z
Machine Learning Techniques for Sensor-based Human Activity Recognition with Data Heterogeneity -- A Review
Sensor-based Human Activity Recognition (HAR) is crucial in ubiquitous computing, analysing behaviours through multi-dimensional observations. Despite research progress, HAR confronts challenges, particularly in data distribution assumptions. Most studies often assume uniform data distributions across datasets, contrasting with the varied nature of practical sensor data in human activities. Addressing data heterogeneity issues can improve performance, reduce computational costs, and aid in developing personalized, adaptive models with less annotated data. This review investigates how machine learning addresses data heterogeneity in HAR, by categorizing data heterogeneity types, applying corresponding suitable machine learning methods, summarizing available datasets, and discussing future challenges.
[ "['Xiaozhou Ye' 'Kouichi Sakurai' 'Nirmal Nair' 'Kevin I-Kai Wang']" ]
null
null
2403.15423
null
null
http://arxiv.org/pdf/2403.15423v1
2024-03-12T22:33:56Z
2024-03-12T22:33:56Z
Cross-user activity recognition via temporal relation optimal transport
Current research on human activity recognition (HAR) mainly assumes that training and testing data are drawn from the same distribution to achieve a generalised model, which means all the data are considered to be independent and identically distributed (i.i.d.). In many real-world applications, this assumption does not hold, and collected training and target testing datasets have non-uniform distribution, such as in the case of cross-user HAR. Domain adaptation is a promising approach for cross-user HAR tasks. Existing domain adaptation works are based on the assumption that samples in each domain are i.i.d. and do not consider the knowledge of temporal relation hidden in time series data for aligning data distribution. This strong i.i.d. assumption may not be suitable for time series-related domain adaptation methods because the samples formed by time series segmentation and feature extraction techniques are only coarse approximations to the i.i.d. assumption in each domain. In this paper, we propose the temporal relation optimal transport (TROT) method to utilise temporal relations and relax the i.i.d. assumption for the samples in each domain for accurate and efficient knowledge transfer. We obtain the temporal relation representation and implement temporal relation alignment of activities via the Hidden Markov model (HMM) and optimal transport (OT) techniques. Besides, a new regularisation term that preserves temporal relation order information for an improved optimal transport mapping is proposed to enhance the domain adaptation performance. Comprehensive experiments are conducted on three public activity recognition datasets (i.e. OPPT, PAMAP2 and DSADS), demonstrating that TROT outperforms other state-of-the-art methods.
[ "['Xiaozhou Ye' 'Kevin I-Kai Wang']" ]
null
null
2403.15424
null
null
http://arxiv.org/pdf/2403.15424v1
2024-03-12T22:38:09Z
2024-03-12T22:38:09Z
Cross-user activity recognition using deep domain adaptation with temporal relation information
Human Activity Recognition (HAR) is a cornerstone of ubiquitous computing, with promising applications in diverse fields such as health monitoring and ambient assisted living. Despite significant advancements, sensor-based HAR methods often operate under the assumption that training and testing data have identical distributions. However, in many real-world scenarios, particularly in sensor-based HAR, this assumption is invalidated by out-of-distribution (o.o.d.) challenges, including differences from heterogeneous sensors, change over time, and individual behavioural variability. This paper centres on the latter, exploring the cross-user HAR problem where behavioural variability across individuals results in differing data distributions. To address this challenge, we introduce the Deep Temporal State Domain Adaptation (DTSDA) model, an innovative approach tailored for time series domain adaptation in cross-user HAR. Contrary to the common assumption of sample independence in existing domain adaptation approaches, DTSDA recognizes and harnesses the inherent temporal relations in the data. Therefore, we introduce 'Temporal State', a concept that defines the different sub-activities within an activity, consistent across different users. We ensure these sub-activities follow a logical time sequence through a 'Temporal Consistency' property and propose the 'Pseudo Temporal State Labeling' method to identify the user-invariant temporal relations. Moreover, the design principle of DTSDA integrates adversarial learning for better domain adaptation. Comprehensive evaluations on three HAR datasets demonstrate DTSDA's superior performance in cross-user HAR applications by bridging individual behavioural variability using temporal relations across sub-activities.
[ "['Xiaozhou Ye' 'Waleed H. Abdulla' 'Nirmal Nair' 'Kevin I-Kai Wang']" ]
null
null
2403.15426
null
null
http://arxiv.org/pdf/2403.15426v1
2024-03-13T05:38:39Z
2024-03-13T05:38:39Z
A Three-Phases SFT Hybrid Model Integrated Strong Prior Module and Data Overlap Estimation in the Education Context
In this paper, we propose an end-to-end prior-based three-phases supervised fine-tuned model, which proves more competitive than the traditional fine-tuning method. More specifically, our model realizes the structural disassembly and incremental guided output of educational knowledge. To this end, we robustify data classification of three types via a sampler and overlap estimation neural network, and inject the preprocessed datasets into the pre-trained model in three batches for LoRA fine-tuning. Then, we design a prior module that couples the system prompt, vector databases, and abstract syntax tree task segmentation. Finally, the compression method and regularization constraint are applied to the prior-based fine-tuned model, followed by a text filter at the output end to obtain incremental guided results. Our model represents the first research effort to truly embody the tutor role with the features of abundant educational knowledge, step-by-step incremental guided outputs and non-disclosure of answers. Extensive experiments report that our model also achieves state-of-the-art code abilities compared to open-source models, reaching an impressive 75.10% on the HumanEval (pass@1) benchmark. Additionally, our model maintains strong conversational capabilities, with the 13B quantized version achieving scores of 56.34, 50.60, and 45.27 respectively on the MMLU, C-Eval, and AGIEval (5-shot) dialogue evaluation benchmarks.
[ "['Zhangquan Chen' 'Chunjiang Liu' 'Haobin Duan']" ]
null
null
2403.15431
null
null
http://arxiv.org/pdf/2403.15431v1
2024-03-14T09:49:00Z
2024-03-14T09:49:00Z
Transferring BCI models from calibration to control: Observing shifts in EEG features
Public Motor Imagery-based brain-computer interface (BCI) datasets are being used to develop increasingly good classifiers. However, they usually follow discrete paradigms where participants perform Motor Imagery at regularly timed intervals. It is often unclear what changes may happen in the EEG patterns when users attempt to perform a control task with such a BCI. This may lead to generalisation errors. We demonstrate a new paradigm containing a standard calibration session and a novel BCI control session based on EMG. This allows us to observe similarities in sensorimotor rhythms, and observe the additional preparation effects introduced by the control paradigm. In the Movement Related Cortical Potentials we found large differences between the calibration and control sessions. We demonstrate a CSP-based Machine Learning model trained on the calibration data that can make surprisingly good predictions on the BCI-controlled driving data.
[ "['Ivo Pascal de Jong' 'Lüke Luna van den Wittenboer'\n 'Matias Valdenegro-Toro' 'Andreea Ioana Sburlea']" ]
null
null
2403.15432
null
null
http://arxiv.org/pdf/2403.15432v1
2024-03-14T15:43:48Z
2024-03-14T15:43:48Z
BRIEDGE: EEG-Adaptive Edge AI for Multi-Brain to Multi-Robot Interaction
Recent advances in EEG-based BCI technologies have revealed the potential of brain-to-robot collaboration through the integration of sensing, computing, communication, and control. In this paper, we present BRIEDGE as an end-to-end system for multi-brain to multi-robot interaction through an EEG-adaptive neural network and an encoding-decoding communication framework, as illustrated in Fig.1. As depicted, the edge mobile server or edge portable server will collect EEG data from the users and utilize the EEG-adaptive neural network to identify the users' intentions. The encoding-decoding communication framework then encodes the EEG-based semantic information and decodes it into commands in the process of data transmission. To better extract the joint features of heterogeneous EEG data as well as enhance classification accuracy, BRIEDGE introduces an informer-based ProbSparse self-attention mechanism. Meanwhile, parallel and secure transmissions for multi-user multi-task scenarios under physical channels are addressed by dynamic autoencoder and autodecoder communications. From mobile computing and edge AI perspectives, model compression schemes composed of pruning, weight sharing, and quantization are also used to deploy lightweight EEG-adaptive models running on both transmitter and receiver sides. Based on the effectiveness of these components, a code map representing various commands enables multiple users to control multiple intelligent agents concurrently. Our experiments in comparison with state-of-the-art works show that BRIEDGE achieves the best classification accuracy of heterogeneous EEG data, and more stable performance under noisy environments.
[ "['Jinhui Ouyang' 'Mingzhu Wu' 'Xinglin Li' 'Hanhui Deng' 'Di Wu']" ]
null
null
2403.15433
null
null
http://arxiv.org/pdf/2403.15433v1
2024-03-15T02:30:00Z
2024-03-15T02:30:00Z
HyPer-EP: Meta-Learning Hybrid Personalized Models for Cardiac Electrophysiology
Personalized virtual heart models have demonstrated increasing potential for clinical use, although the estimation of their parameters given patient-specific data remains a challenge. Traditional physics-based modeling approaches are computationally costly and often neglect the inherent structural errors in these models due to model simplifications and assumptions. Modern deep learning approaches, on the other hand, rely heavily on data supervision and lack interpretability. In this paper, we present a novel hybrid modeling framework to describe a personalized cardiac digital twin as a combination of a physics-based known expression augmented by neural network modeling of its unknown gap to reality. We then present a novel meta-learning framework to enable the separate identification of both the physics-based and neural components in the hybrid model. We demonstrate the feasibility and generality of this hybrid modeling framework with two examples of instantiations and their proof-of-concept in synthetic experiments.
[ "['Xiajun Jiang' 'Sumeet Vadhavkar' 'Yubo Ye' 'Maryam Toloubidokhti'\n 'Ryan Missel' 'Linwei Wang']" ]
null
null
2403.15438
null
null
http://arxiv.org/pdf/2403.15438v1
2024-03-15T22:22:10Z
2024-03-15T22:22:10Z
Unsupervised Adaptive Deep Learning Method For BCI Motor Imagery Decoding
In the context of Brain-Computer Interfaces, we propose an adaptive method that reaches offline performance level while being usable online without requiring supervision. Interestingly, our method does not require retraining the model, as it consists in using a frozen efficient deep learning backbone while continuously realigning data, both at input and latent spaces, based on streaming observations. We demonstrate its efficiency for Motor Imagery brain decoding from electroencephalography data, considering challenging cross-subject scenarios. For reproducibility, we share the code of our experiments.
[ "['Yassine El Ouahidi' 'Giulia Lioi' 'Nicolas Farrugia' 'Bastien Pasdeloup'\n 'Vincent Gripon']" ]
null
null
2403.15439
null
null
http://arxiv.org/pdf/2403.15439v1
2024-03-16T14:35:03Z
2024-03-16T14:35:03Z
Federated Learning based on Pruning and Recovery
A novel federated learning training framework for heterogeneous environments is presented, taking into account the diverse network speeds of clients in realistic settings. This framework integrates asynchronous learning algorithms and pruning techniques, effectively addressing the inefficiencies of traditional federated learning algorithms in scenarios involving heterogeneous devices, as well as tackling the staleness issue and inadequate training of certain clients in asynchronous algorithms. Through the incremental restoration of model size during training, the framework expedites model training while preserving model accuracy. Furthermore, enhancements to the federated learning aggregation process are introduced, incorporating a buffering mechanism to enable asynchronous federated learning to operate akin to synchronous learning. Additionally, optimizations in the process of the server transmitting the global model to clients reduce communication overhead. Our experiments across various datasets demonstrate that: (i) significant reductions in training time and improvements in convergence accuracy are achieved compared to conventional asynchronous FL and HeteroFL; (ii) the advantages of our approach are more pronounced in scenarios with heterogeneous clients and non-IID client data.
[ "['Chengjie Ma']" ]
null
null
2403.15441
null
null
http://arxiv.org/pdf/2403.15441v1
2024-03-17T08:40:06Z
2024-03-17T08:40:06Z
Unified Generative Modeling of 3D Molecules via Bayesian Flow Networks
Advanced generative models (e.g., diffusion models) derived from simplified continuity assumptions of data distribution, though showing promising progress, have been difficult to apply directly to geometry generation applications due to the multi-modality and noise-sensitive nature of molecule geometry. This work introduces Geometric Bayesian Flow Networks (GeoBFN), which naturally fits molecule geometry by modeling diverse modalities in the differentiable parameter space of distributions. GeoBFN maintains the SE(3)-invariant density modeling property by incorporating equivariant inter-dependency modeling on parameters of distributions and unifying the probabilistic modeling of different modalities. Through optimized training and sampling techniques, we demonstrate that GeoBFN achieves state-of-the-art performance on multiple 3D molecule generation benchmarks in terms of generation quality (90.87% molecule stability in QM9 and 85.6% atom stability in GEOM-DRUG). GeoBFN can also conduct sampling with any number of steps to reach an optimal trade-off between efficiency and quality (e.g., 20-times speedup without sacrificing performance).
[ "['Yuxuan Song' 'Jingjing Gong' 'Yanru Qu' 'Hao Zhou' 'Mingyue Zheng'\n 'Jingjing Liu' 'Wei-Ying Ma']" ]
null
null
2403.15443
null
null
http://arxiv.org/pdf/2403.15443v2
2024-04-01T16:37:08Z
2024-03-17T16:12:50Z
Introducing an ensemble method for the early detection of Alzheimer's disease through the analysis of PET scan images
Alzheimer's disease is a progressive neurodegenerative disorder that primarily affects cognitive functions such as memory, thinking, and behavior. In this disease, there is a critical phase, mild cognitive impairment (MCI), which is important to diagnose early, since some patients with progressive MCI will develop the disease. This study delves into the challenging task of classifying Alzheimer's disease into four distinct groups: control normal (CN), progressive mild cognitive impairment (pMCI), stable mild cognitive impairment (sMCI), and Alzheimer's disease (AD). This classification is based on a thorough examination of PET scan images obtained from the ADNI dataset, which provides a thorough understanding of the disease's progression. Several deep-learning and traditional machine-learning models have been used to detect Alzheimer's disease. In this paper, three deep-learning models, namely VGG16, AlexNet, and a custom convolutional neural network (CNN), with 8-fold cross-validation have been used for classification. Finally, an ensemble technique is used to improve the overall result of these models. The results show that using deep-learning models to distinguish between MCI patients gives an overall average accuracy of 93.13% and an AUC of 94.4%.
[ "['Arezoo Borji' 'Taha-Hossein Hejazi' 'Abbas Seifi']" ]
null
null
2403.15444
null
null
http://arxiv.org/pdf/2403.15444v1
2024-03-17T22:31:14Z
2024-03-17T22:31:14Z
A Survey of IMU Based Cross-Modal Transfer Learning in Human Activity Recognition
Despite living in a multi-sensory world, most AI models are limited to textual and visual understanding of human motion and behavior. In fact, full situational awareness of human motion could best be understood through a combination of sensors. In this survey we investigate how knowledge can be transferred and utilized amongst modalities for Human Activity/Action Recognition (HAR), i.e. cross-modality transfer learning. We motivate the importance and potential of IMU data and its applicability in cross-modality learning as well as the importance of studying the HAR problem. We categorize HAR related tasks by time and abstractness and then compare various types of multimodal HAR datasets. We also distinguish and expound on many related but inconsistently used terms in the literature, such as transfer learning, domain adaptation, representation learning, sensor fusion, and multimodal learning, and describe how cross-modal learning fits with all these concepts. We then review the literature in IMU-based cross-modal transfer for HAR. The two main approaches for cross-modal transfer are instance-based transfer, where instances of one modality are mapped to another (e.g. knowledge is transferred in the input space), or feature-based transfer, where the model relates the modalities in an intermediate latent space (e.g. knowledge is transferred in the feature space). Finally, we discuss future research directions and applications in cross-modal HAR.
[ "['Abhi Kamboj' 'Minh Do']" ]
null
null
2403.15445
null
null
http://arxiv.org/pdf/2403.15445v1
2024-03-18T00:01:10Z
2024-03-18T00:01:10Z
Decoding Multilingual Topic Dynamics and Trend Identification through ARIMA Time Series Analysis on Social Networks: A Novel Data Translation Framework Enhanced by LDA/HDP Models
In this study, the authors present a novel methodology adept at decoding multilingual topic dynamics and identifying communication trends during crises. We focus on dialogues within Tunisian social networks during the Coronavirus Pandemic and other notable themes like sports and politics. We start by aggregating a varied multilingual corpus of comments relevant to these subjects. This dataset undergoes rigorous refinement during data preprocessing. We then introduce our No-English-to-English Machine Translation approach to handle linguistic differences. Empirical tests of this method showed high accuracy and F1 scores, highlighting its suitability for linguistically coherent tasks. Delving deeper, advanced modeling techniques, specifically LDA and HDP models are employed to extract pertinent topics from the translated content. This leads to applying ARIMA time series analysis to decode evolving topic trends. Applying our method to a multilingual Tunisian dataset, we effectively identified key topics mirroring public sentiment. Such insights prove vital for organizations and governments striving to understand public perspectives during crises. Compared to standard approaches, our model outperforms, as confirmed by metrics like Coherence Score, U-mass, and Topic Coherence. Additionally, an in-depth assessment of the identified topics revealed notable thematic shifts in discussions, with our trends identification indicating impressive accuracy, backed by RMSE-based analysis.
[ "['Samawel Jaballi' 'Azer Mahjoubi' 'Manar Joundy Hazar' 'Salah Zrigui'\n 'Henri Nicolas' 'Mounir Zrigui']" ]
null
null
2403.15448
null
null
http://arxiv.org/pdf/2403.15448v1
2024-03-18T03:01:53Z
2024-03-18T03:01:53Z
What is Wrong with End-to-End Learning for Phase Retrieval?
For nonlinear inverse problems that are prevalent in imaging science, symmetries in the forward model are common. When data-driven deep learning approaches are used to solve such problems, these intrinsic symmetries can cause substantial learning difficulties. In this paper, we explain how such difficulties arise and, more importantly, how to overcome them by preprocessing the training set before any learning, i.e., symmetry breaking. We take far-field phase retrieval (FFPR), which is central to many areas of scientific imaging, as an example and show that symmetry breaking can substantially improve data-driven learning. We also formulate the mathematical principle of symmetry breaking.
[ "['Wenjie Zhang' 'Yuxiang Wan' 'Zhong Zhuang' 'Ju Sun']" ]
null
null
2403.15455
null
null
http://arxiv.org/pdf/2403.15455v1
2024-03-18T23:41:52Z
2024-03-18T23:41:52Z
Improving Sampling Methods for Fine-tuning SentenceBERT in Text Streams
The proliferation of textual data on the Internet presents a unique opportunity for institutions and companies to monitor public opinion about their services and products. Given the rapid generation of such data, the text stream mining setting, which handles sequentially arriving, potentially infinite text streams, is often more suitable than traditional batch learning. While pre-trained language models are commonly employed for their high-quality text vectorization capabilities in streaming contexts, they face challenges adapting to concept drift - the phenomenon where the data distribution changes over time, adversely affecting model performance. Addressing the issue of concept drift, this study explores the efficacy of seven text sampling methods designed to selectively fine-tune language models, thereby mitigating performance degradation. We precisely assess the impact of these methods on fine-tuning the SBERT model using four different loss functions. Our evaluation, focused on Macro F1-score and elapsed time, employs two text stream datasets and an incremental SVM classifier to benchmark performance. Our findings indicate that Softmax loss and Batch All Triplets loss are particularly effective for text stream classification, demonstrating that larger sample sizes generally correlate with improved macro F1-scores. Notably, our proposed WordPieceToken ratio sampling method significantly enhances performance with the identified loss functions, surpassing baseline results.
[ "['Cristiano Mesquita Garcia' 'Alessandro Lameiras Koerich'\n 'Alceu de Souza Britto Jr' 'Jean Paul Barddal']" ]
null
null
2403.15458
null
null
http://arxiv.org/abs/2403.15458v1
2024-03-19T11:36:53Z
2024-03-19T11:36:53Z
Fine-Tuning Pre-trained Language Models to Detect In-Game Trash Talks
Common problems in playing online mobile and computer games are related to toxic behavior and abusive communication among players. Based on different reports and studies, this study also discusses the impact of online hate speech and toxicity on players' in-game performance and overall well-being. This study investigates the capability of pre-trained language models to classify or detect trash talk or toxic in-game messages. The study employs and evaluates the performance of pre-trained BERT and GPT language models in detecting toxicity within in-game chats. Using publicly available APIs, in-game chat data from DOTA 2 game matches were collected, processed, reviewed, and labeled as non-toxic, mild (toxicity), and toxic. The study collected around two thousand in-game chats to train and test BERT (Base-uncased), BERT (Large-uncased), and GPT-3 models. Based on the three models' state-of-the-art performance, this study concludes that pre-trained language models have promising potential for addressing online hate speech and insulting in-game trash talk.
[ "['Daniel Fesalbon' 'Arvin De La Cruz' 'Marvin Mallari' 'Nelson Rodelas']" ]
null
null
2403.15462
null
null
http://arxiv.org/pdf/2403.15462v1
2024-03-19T14:20:46Z
2024-03-19T14:20:46Z
FUELVISION: A Multimodal Data Fusion and Multimodel Ensemble Algorithm for Wildfire Fuels Mapping
Accurate assessment of fuel conditions is a prerequisite for fire ignition and behavior prediction, and risk management. The method proposed herein leverages diverse data sources including Landsat-8 optical imagery, Sentinel-1 (C-band) Synthetic Aperture Radar (SAR) imagery, PALSAR (L-band) SAR imagery, and terrain features to capture comprehensive information about fuel types and distributions. An ensemble model was trained to predict landscape-scale fuels such as the 'Scott and Burgan 40' using the as-received Forest Inventory and Analysis (FIA) field survey plot data obtained from the USDA Forest Service. However, this basic approach yielded relatively poor results due to the inadequate amount of training data. Pseudo-labeled and fully synthetic datasets were developed using generative AI approaches to address the limitations of ground truth data availability. These synthetic datasets were used for augmenting the FIA data from California to enhance the robustness and coverage of model training. The use of an ensemble of methods including deep learning neural networks, decision trees, and gradient boosting offered a fuel mapping accuracy of nearly 80%. Through extensive experimentation and evaluation, the effectiveness of the proposed approach was validated for regions of the 2021 Dixie and Caldor fires. Comparative analyses against high-resolution data from the National Agriculture Imagery Program (NAIP) and timber harvest maps affirmed the robustness and reliability of the proposed approach, which is capable of near-real-time fuel mapping.
[ "['Riyaaz Uddien Shaik' 'Mohamad Alipour' 'Eric Rowell' 'Bharathan Balaji'\n 'Adam Watts' 'Ertugrul Taciroglu']" ]
null
null
2403.15463
null
null
http://arxiv.org/pdf/2403.15463v1
2024-03-19T15:53:57Z
2024-03-19T15:53:57Z
Unveiling the Anomalies in an Ever-Changing World: A Benchmark for Pixel-Level Anomaly Detection in Continual Learning
Anomaly Detection is a relevant problem in numerous real-world applications, especially when dealing with images. However, little attention has been paid to the issue of changes over time in the input data distribution, which may cause a significant decrease in performance. In this study, we investigate the problem of Pixel-Level Anomaly Detection in the Continual Learning setting, where new data arrives over time and the goal is to perform well on new and old data. We implement several state-of-the-art techniques to solve the Anomaly Detection problem in the classic setting and adapt them to work in the Continual Learning setting. To validate the approaches, we use a real-world dataset of images with pixel-based anomalies to provide a reliable benchmark and serve as a foundation for further advancements in the field. We provide a comprehensive analysis, discussing which Anomaly Detection methods and which families of approaches seem more suitable for the Continual Learning setting.
[ "['Nikola Bugarin' 'Jovana Bugaric' 'Manuel Barusco' 'Davide Dalle Pezze'\n 'Gian Antonio Susto']" ]
null
null
2403.15464
null
null
http://arxiv.org/pdf/2403.15464v1
2024-03-19T18:10:13Z
2024-03-19T18:10:13Z
LLMs-based Few-Shot Disease Predictions using EHR: A Novel Approach Combining Predictive Agent Reasoning and Critical Agent Instruction
Electronic health records (EHRs) contain valuable patient data for health-related prediction tasks, such as disease prediction. Traditional approaches rely on supervised learning methods that require large labeled datasets, which can be expensive and challenging to obtain. In this study, we investigate the feasibility of applying Large Language Models (LLMs) to convert structured patient visit data (e.g., diagnoses, labs, prescriptions) into natural language narratives. We evaluate the zero-shot and few-shot performance of LLMs using various EHR-prediction-oriented prompting strategies. Furthermore, we propose a novel approach that utilizes LLM agents with different roles: a predictor agent that makes predictions and generates reasoning processes and a critic agent that analyzes incorrect predictions and provides guidance for improving the reasoning of the predictor agent. Our results demonstrate that with the proposed approach, LLMs can achieve decent few-shot performance compared to traditional supervised learning methods in EHR-based disease predictions, suggesting its potential for health-oriented applications.
[ "['Hejie Cui' 'Zhuocheng Shen' 'Jieyu Zhang' 'Hui Shao' 'Lianhui Qin'\n 'Joyce C. Ho' 'Carl Yang']" ]
null
null
2403.15465
null
null
http://arxiv.org/pdf/2403.15465v1
2024-03-19T19:58:46Z
2024-03-19T19:58:46Z
Most Likely Sequence Generation for $n$-Grams, Transformers, HMMs, and Markov Chains, by Using Rollout Algorithms
In this paper we consider a transformer with an $n$-gram structure, such as the one underlying ChatGPT. The transformer provides next word probabilities, which can be used to generate word sequences. We consider methods for computing word sequences that are highly likely, based on these probabilities. Computing the optimal (i.e., most likely) word sequence starting with a given initial state is an intractable problem, so we propose methods to compute highly likely sequences of $N$ words in time that is a low order polynomial in $N$ and in the vocabulary size of the $n$-gram. These methods are based on the rollout approach from approximate dynamic programming, a form of single policy iteration, which can improve the performance of any given heuristic policy. In our case we use a greedy heuristic that generates as next word one that has the highest probability. We show with analysis, examples, and computational experimentation that our methods are capable of generating highly likely sequences with a modest increase in computation over the greedy heuristic. While our analysis and experiments are focused on Markov chains of the type arising in transformer and ChatGPT-like models, our methods apply to general finite-state Markov chains, and related inference applications of Hidden Markov Models (HMM), where Viterbi decoding is used extensively.
[ "['Yuchao Li' 'Dimitri Bertsekas']" ]
null
null
2403.15466
null
null
http://arxiv.org/pdf/2403.15466v1
2024-03-20T03:42:15Z
2024-03-20T03:42:15Z
Using Super-Resolution Imaging for Recognition of Low-Resolution Blurred License Plates: A Comparative Study of Real-ESRGAN, A-ESRGAN, and StarSRGAN
With the robust development of technology, license plate recognition technology can now be properly applied in various scenarios, such as road monitoring, tracking of stolen vehicles, detection at parking lot entrances and exits, and so on. However, the precondition for these applications to function normally is that the license plate must be 'clear' enough to be recognized by the system with the correct license plate number. If the license plate becomes blurred due to some external factors, then the accuracy of recognition will be greatly reduced. Although there are many road surveillance cameras in Taiwan, the quality of most cameras is not good, often leading to the inability to recognize license plate numbers due to low photo resolution. Therefore, this study focuses on using super-resolution technology to process blurred license plates. This study will mainly fine-tune three super-resolution models: Real-ESRGAN, A-ESRGAN, and StarSRGAN, and compare their effectiveness in enhancing the resolution of license plate photos and enabling accurate license plate recognition. By comparing different super-resolution models, it is hoped to find the most suitable model for this task, providing valuable references for future researchers.
[ "['Ching-Hsiang Wang']" ]
null
null
2403.15469
null
null
http://arxiv.org/pdf/2403.15469v1
2024-03-20T08:52:40Z
2024-03-20T08:52:40Z
Isometric Neural Machine Translation using Phoneme Count Ratio Reward-based Reinforcement Learning
Traditional Automatic Video Dubbing (AVD) pipeline consists of three key modules, namely, Automatic Speech Recognition (ASR), Neural Machine Translation (NMT), and Text-to-Speech (TTS). Within AVD pipelines, isometric-NMT algorithms are employed to regulate the length of the synthesized output text. This is done to guarantee synchronization with respect to the alignment of video and audio subsequent to the dubbing process. Previous approaches have focused on aligning the number of characters and words in the source and target language texts of Machine Translation models. However, our approach aims to align the number of phonemes instead, as they are closely associated with speech duration. In this paper, we present the development of an isometric NMT system using Reinforcement Learning (RL), with a focus on optimizing the alignment of phoneme counts in the source and target language sentence pairs. To evaluate our models, we propose the Phoneme Count Compliance (PCC) score, which is a measure of length compliance. Our approach demonstrates a substantial improvement of approximately 36% in the PCC score compared to the state-of-the-art models when applied to English-Hindi language pairs. Moreover, we propose a student-teacher architecture within the framework of our RL approach to maintain a trade-off between the phoneme count and translation quality.
[ "['Shivam Ratnakant Mhaskar' 'Nirmesh J. Shah' 'Mohammadi Zaki'\n 'Ashishkumar P. Gudmalwar' 'Pankaj Wasnik' 'Rajiv Ratn Shah']" ]
null
null
2403.15474
null
null
http://arxiv.org/pdf/2403.15474v1
2024-03-20T16:25:49Z
2024-03-20T16:25:49Z
EC-IoU: Orienting Safety for Object Detectors via Ego-Centric Intersection-over-Union
This paper presents safety-oriented object detection via a novel Ego-Centric Intersection-over-Union (EC-IoU) measure, addressing practical concerns when applying state-of-the-art learning-based perception models in safety-critical domains such as autonomous driving. Concretely, we propose a weighting mechanism to refine the widely used IoU measure, allowing it to assign a higher score to a prediction that covers closer points of a ground-truth object from the ego agent's perspective. The proposed EC-IoU measure can be used in typical evaluation processes to select object detectors with higher safety-related performance for downstream tasks. It can also be integrated into common loss functions for model fine-tuning. While geared towards safety, our experiment with the KITTI dataset demonstrates the performance of a model trained on EC-IoU can be better than that of a variant trained on IoU in terms of mean Average Precision as well.
[ "['Brian Hsuan-Cheng Liao' 'Chih-Hong Cheng' 'Hasan Esen' 'Alois Knoll']" ]
null
null
2403.15476
null
null
http://arxiv.org/pdf/2403.15476v2
2024-06-09T21:54:18Z
2024-03-20T17:29:58Z
Learning to Infer Generative Template Programs for Visual Concepts
People grasp flexible visual concepts from a few examples. We explore a neurosymbolic system that learns how to infer programs that capture visual concepts in a domain-general fashion. We introduce Template Programs: programmatic expressions from a domain-specific language that specify structural and parametric patterns common to an input concept. Our framework supports multiple concept-related tasks, including few-shot generation and co-segmentation through parsing. We develop a learning paradigm that allows us to train networks that infer Template Programs directly from visual datasets that contain concept groupings. We run experiments across multiple visual domains: 2D layouts, Omniglot characters, and 3D shapes. We find that our method outperforms task-specific alternatives, and performs competitively against domain-specific approaches for the limited domains where they exist.
[ "['R. Kenny Jones' 'Siddhartha Chaudhuri' 'Daniel Ritchie']" ]
null
null
2403.15480
null
null
http://arxiv.org/pdf/2403.15480v1
2024-03-21T03:11:53Z
2024-03-21T03:11:53Z
SpikeGraphormer: A High-Performance Graph Transformer with Spiking Graph Attention
Recently, Graph Transformers have emerged as a promising solution to alleviate the inherent limitations of Graph Neural Networks (GNNs) and enhance graph representation performance. Unfortunately, Graph Transformers are computationally expensive due to the quadratic complexity inherent in self-attention when applied over large-scale graphs, especially for node tasks. In contrast, spiking neural networks (SNNs), with event-driven and binary spikes properties, can perform energy-efficient computation. In this work, we propose a novel insight into integrating SNNs with Graph Transformers and design a Spiking Graph Attention (SGA) module. The matrix multiplication is replaced by sparse addition and mask operations. The linear complexity enables all-pair node interactions on large-scale graphs with limited GPU memory. To our knowledge, our work is the first attempt to introduce SNNs into Graph Transformers. Furthermore, we design SpikeGraphormer, a Dual-branch architecture, combining a sparse GNN branch with our SGA-driven Graph Transformer branch, which can simultaneously perform all-pair node interactions and capture local neighborhoods. SpikeGraphormer consistently outperforms existing state-of-the-art approaches across various datasets and makes substantial improvements in training time, inference time, and GPU memory cost (10 ~ 20x lower than vanilla self-attention). It also performs well in cross-domain applications (image and text classification). We release our code at https://github.com/PHD-lanyu/SpikeGraphormer.
[ "['Yundong Sun' 'Dongjie Zhu' 'Yansong Wang' 'Zhaoshuo Tian' 'Ning Cao'\n \"Gregory O'Hared\"]" ]
null
null
2403.15482
null
null
http://arxiv.org/pdf/2403.15482v1
2024-03-21T04:23:56Z
2024-03-21T04:23:56Z
Multi-Level Feedback Generation with Large Language Models for Empowering Novice Peer Counselors
Realistic practice and tailored feedback are key processes for training peer counselors with clinical skills. However, existing mechanisms of providing feedback largely rely on human supervision. Peer counselors often lack mechanisms to receive detailed feedback from experienced mentors, making it difficult for them to support the large number of people with mental health issues who use peer counseling. Our work aims to leverage large language models to provide contextualized and multi-level feedback to empower peer counselors, especially novices, at scale. To achieve this, we co-design with a group of senior psychotherapy supervisors to develop a multi-level feedback taxonomy, and then construct a publicly available dataset with comprehensive feedback annotations of 400 emotional support conversations. We further design a self-improvement method on top of large language models to enhance the automatic generation of feedback. Via qualitative and quantitative evaluation with domain experts, we demonstrate that our method minimizes the risk of potentially harmful and low-quality feedback generation which is desirable in such high-stakes scenarios.
[ "['Alicja Chaszczewicz' 'Raj Sanjay Shah' 'Ryan Louie' 'Bruce A Arnow'\n 'Robert Kraut' 'Diyi Yang']" ]
null
null
2403.15483
null
null
http://arxiv.org/pdf/2403.15483v1
2024-03-21T06:42:35Z
2024-03-21T06:42:35Z
Rolling bearing fault diagnosis method based on generative adversarial enhanced multi-scale convolutional neural network model
To address the problems that current convolutional neural networks cannot effectively capture the correlation features between the time-domain signals of rolling bearings, and that model accuracy is limited by the number and quality of samples, a rolling bearing fault diagnosis method based on a generative-adversarial-enhanced multi-scale convolutional neural network model is proposed. First, the Gram angular field coding technique is used to encode the time-domain signal of the rolling bearing and generate a feature map that retains the complete information of the vibration signal. The resulting data are then divided into a training set, a validation set, and a test set. The training set is fed into a gradient-penalty Wasserstein generative adversarial network, which produces new samples with features similar to the training samples and thereby expands the original training set. Next, multi-scale convolution is used to extract fault features from the expanded training set, and the feature maps are instance-normalized to overcome differences in feature distribution. Finally, an attention mechanism adaptively weights the normalized features and extracts deep features, and fault diagnosis is completed by a softmax classifier. Experimental results show that, compared with a ResNet baseline, the proposed method has better generalization and anti-noise performance.
[ "['Maoxuan Zhou' 'Wei Kang' 'Kun He']" ]
null
null
2403.15484
null
null
http://arxiv.org/pdf/2403.15484v1
2024-03-21T06:56:07Z
2024-03-21T06:56:07Z
RakutenAI-7B: Extending Large Language Models for Japanese
We introduce RakutenAI-7B, a suite of Japanese-oriented large language models that achieve the best performance on the Japanese LM Harness benchmarks among the open 7B models. Along with the foundation model, we release instruction- and chat-tuned models, RakutenAI-7B-instruct and RakutenAI-7B-chat respectively, under the Apache 2.0 license.
[ "['Rakuten Group' 'Aaron Levine' 'Connie Huang' 'Chenguang Wang'\n 'Eduardo Batista' 'Ewa Szymanska' 'Hongyi Ding' 'Hou Wei Chou'\n 'Jean-François Pessiot' 'Johanes Effendi' 'Justin Chiu'\n 'Kai Torben Ohlhus' 'Karan Chopra' 'Keiji Shinzato' 'Koji Murakami'\n 'Lee Xiong' 'Lei Chen' 'Maki Kubota' 'Maksim Tkachenko' 'Miroku Lee'\n 'Naoki Takahashi' 'Prathyusha Jwalapuram' 'Ryutaro Tatsushima'\n 'Saurabh Jain' 'Sunil Kumar Yadav' 'Ting Cai' 'Wei-Te Chen' 'Yandi Xia'\n 'Yuki Nakayama' 'Yutaka Higashiyama']" ]
null
null
2403.15485
null
null
http://arxiv.org/pdf/2403.15485v1
2024-03-21T07:45:58Z
2024-03-21T07:45:58Z
MOGAM: A Multimodal Object-oriented Graph Attention Model for Depression Detection
Early detection plays a crucial role in the treatment of depression. Therefore, numerous studies have focused on social media platforms, where individuals express their emotions, aiming to achieve early detection of depression. However, the majority of existing approaches often rely on specific features, leading to limited scalability across different types of social media datasets, such as text, images, or videos. To overcome this limitation, we introduce a Multimodal Object-Oriented Graph Attention Model (MOGAM), which can be applied to diverse types of data, offering a more scalable and versatile solution. Furthermore, to ensure that our model can capture authentic symptoms of depression, we only include vlogs from users with a clinical diagnosis. To leverage the diverse features of vlogs, we adopt a multimodal approach and collect additional metadata such as the title, description, and duration of the vlogs. To effectively aggregate these multimodal features, we employed a cross-attention mechanism. MOGAM achieved an accuracy of 0.871 and an F1-score of 0.888. Moreover, to validate the scalability of MOGAM, we evaluated its performance with a benchmark dataset and achieved comparable results with prior studies (0.61 F1-score). In conclusion, we believe that the proposed model, MOGAM, is an effective solution for detecting depression in social media, offering potential benefits in the early detection and treatment of this mental health condition.
[ "['Junyeop Cha' 'Seoyun Kim' 'Dongjae Kim' 'Eunil Park']" ]
null
null
2403.15489
null
null
http://arxiv.org/pdf/2403.15489v1
2024-03-21T13:38:59Z
2024-03-21T13:38:59Z
EEG decoding with conditional identification information
Decoding EEG signals is crucial for unraveling the human brain and advancing brain-computer interfaces. Traditional machine learning algorithms have been hindered by the high noise levels and inherent inter-person variations in EEG signals. Recent advances in deep neural networks (DNNs) have shown promise, owing to their advanced nonlinear modeling capabilities. However, DNNs still face challenges in decoding EEG samples from unseen individuals. To address this, this paper introduces a novel approach that incorporates the conditional identification information of each individual into the neural network, thereby enhancing model representations through the synergistic interaction of EEG and personal traits. We test our model on the WithMe dataset and demonstrate that the inclusion of these identifiers substantially boosts accuracy for both subjects in the training set and unseen subjects. This enhancement suggests promising potential for improving EEG interpretability and the understanding of relevant identification features.
[ "['Pengfei Sun' 'Jorg De Winne' 'Paul Devos' 'Dick Botteldooren']" ]
null
null
2403.15494
null
null
http://arxiv.org/pdf/2403.15494v1
2024-03-21T17:36:53Z
2024-03-21T17:36:53Z
Multiple and Gyro-Free Inertial Datasets
An inertial navigation system (INS) utilizes three orthogonal accelerometers and gyroscopes to determine platform position, velocity, and orientation. There are countless applications for INS, including robotics, autonomous platforms, and the internet of things. Recent research explores the integration of data-driven methods with INS, highlighting significant innovations, improving accuracy and efficiency. Despite the growing interest in this field and the availability of INS datasets, no datasets are available for gyro-free INS (GFINS) and multiple inertial measurement unit (MIMU) architectures. To fill this gap and to stimulate further research in this field, we designed and recorded GFINS and MIMU datasets using 54 inertial sensors grouped in nine inertial measurement units. These sensors can be used to define and evaluate different types of MIMU and GFINS architectures. The inertial sensors were arranged in three different sensor configurations and mounted on a mobile robot and a passenger car. In total, the dataset contains 35 hours of inertial data and corresponding ground truth trajectories. The data and code are freely accessible through our GitHub repository.
[ "['Zeev Yampolsky' 'Yair Stolero' 'Nitzan Pri-Hadash' 'Dan Solodar'\n 'Shira Massas' 'Itai Savin' 'Itzik Klein']" ]
null
null
2403.15497
null
null
http://arxiv.org/pdf/2403.15497v1
2024-03-21T18:31:47Z
2024-03-21T18:31:47Z
On the Detection of Anomalous or Out-Of-Distribution Data in Vision Models Using Statistical Techniques
Out-of-distribution data and anomalous inputs are vulnerabilities of machine learning systems today, often causing systems to make incorrect predictions. The diverse range of data on which these models are used makes detecting atypical inputs a difficult and important task. We assess a tool, Benford's law, as a method used to quantify the difference between real and corrupted inputs. We believe that in many settings, it could function as a filter for anomalous data points and for signalling out-of-distribution data. We hope to open a discussion on these applications and further areas where this technique is underexplored.
[ "[\"Laura O'Mahony\" \"David JP O'Sullivan\" 'Nikola S. Nikolov']" ]
null
null
2403.15498
null
null
http://arxiv.org/pdf/2403.15498v2
2024-07-14T20:23:19Z
2024-03-21T18:53:23Z
Emergent World Models and Latent Variable Estimation in Chess-Playing Language Models
Language models have shown unprecedented capabilities, sparking debate over the source of their performance. Is it merely the outcome of learning syntactic patterns and surface level statistics, or do they extract semantics and a world model from the text? Prior work by Li et al. investigated this by training a GPT model on synthetic, randomly generated Othello games and found that the model learned an internal representation of the board state. We extend this work into the more complex domain of chess, training on real games and investigating our model's internal representations using linear probes and contrastive activations. The model is given no a priori knowledge of the game and is solely trained on next character prediction, yet we find evidence of internal representations of board state. We validate these internal representations by using them to make interventions on the model's activations and edit its internal board state. Unlike Li et al's prior synthetic dataset approach, our analysis finds that the model also learns to estimate latent variables like player skill to better predict the next character. We derive a player skill vector and add it to the model, improving the model's win rate by up to 2.6 times.
[ "['Adam Karvonen']" ]
null
null
2403.15499
null
null
http://arxiv.org/pdf/2403.15499v1
2024-03-21T18:55:05Z
2024-03-21T18:55:05Z
A Causal Analysis of CO2 Reduction Strategies in Electricity Markets Through Machine Learning-Driven Metalearners
This study employs the Causal Machine Learning (CausalML) statistical method to analyze the influence of electricity pricing policies on carbon dioxide (CO2) levels in the household sector. Investigating the causality between potential outcomes and treatment effects, where changes in pricing policies are the treatment, our analysis challenges the conventional wisdom surrounding incentive-based electricity pricing. The study's findings suggest that adopting such policies may inadvertently increase CO2 intensity. Additionally, we integrate a machine learning-based meta-algorithm, reflecting a contemporary statistical approach, to enhance the depth of our causal analysis. The study conducts a comparative analysis of learners X, T, S, and R to ascertain the optimal methods based on the defined question's specified goals and contextual nuances. This research contributes valuable insights to the ongoing dialogue on sustainable development practices, emphasizing the importance of considering unintended consequences in policy formulation.
[ "['Iman Emtiazi Naeini' 'Zahra Saberi' 'Khadijeh Hassanzadeh']" ]
null
null
2403.15500
null
null
http://arxiv.org/pdf/2403.15500v1
2024-03-21T21:27:43Z
2024-03-21T21:27:43Z
Gene Regulatory Network Inference in the Presence of Dropouts: a Causal View
Gene regulatory network inference (GRNI) is a challenging problem, particularly owing to the presence of zeros in single-cell RNA sequencing data: some are biological zeros representing no gene expression, while some others are technical zeros arising from the sequencing procedure (aka dropouts), which may bias GRNI by distorting the joint distribution of the measured gene expressions. Existing approaches typically handle dropout error via imputation, which may introduce spurious relations as the true joint distribution is generally unidentifiable. To tackle this issue, we introduce a causal graphical model to characterize the dropout mechanism, namely, Causal Dropout Model. We provide a simple yet effective theoretical result: interestingly, the conditional independence (CI) relations in the data with dropouts, after deleting the samples with zero values (regardless if technical or not) for the conditioned variables, are asymptotically identical to the CI relations in the original data without dropouts. This particular test-wise deletion procedure, in which we perform CI tests on the samples without zeros for the conditioned variables, can be seamlessly integrated with existing structure learning approaches including constraint-based and greedy score-based methods, thus giving rise to a principled framework for GRNI in the presence of dropouts. We further show that the causal dropout model can be validated from data, and many existing statistical models to handle dropouts fit into our model as specific parametric instances. Empirical evaluation on synthetic, curated, and real-world experimental transcriptomic data comprehensively demonstrate the efficacy of our method.
[ "['Haoyue Dai' 'Ignavier Ng' 'Gongxu Luo' 'Peter Spirtes' 'Petar Stojanov'\n 'Kun Zhang']" ]
null
null
2403.15502
null
null
http://arxiv.org/pdf/2403.15502v2
2024-06-15T18:02:51Z
2024-03-21T22:33:16Z
Sequential Decision-Making for Inline Text Autocomplete
Autocomplete suggestions are fundamental to modern text entry systems, with applications in domains such as messaging and email composition. Typically, autocomplete suggestions are generated from a language model with a confidence threshold. However, this threshold does not directly take into account the cognitive load imposed on the user by surfacing suggestions, such as the effort to switch contexts from typing to reading the suggestion, and the time to decide whether to accept the suggestion. In this paper, we study the problem of improving inline autocomplete suggestions in text entry systems via a sequential decision-making formulation, and use reinforcement learning to learn suggestion policies through repeated interactions with a target user over time. This formulation allows us to factor cognitive load into the objective of training an autocomplete model, through a reward function based on text entry speed. We acquired theoretical and experimental evidence that, under certain objectives, the sequential decision-making formulation of the autocomplete problem provides a better suggestion policy than myopic single-step reasoning. However, aligning these objectives with real users requires further exploration. In particular, we hypothesize that the objectives under which sequential decision-making can improve autocomplete systems are not tailored solely to text entry speed, but more broadly to metrics such as user satisfaction and convenience.
[ "['Rohan Chitnis' 'Shentao Yang' 'Alborz Geramifard']" ]
null
null
2403.15509
null
null
http://arxiv.org/pdf/2403.15509v1
2024-03-22T03:39:40Z
2024-03-22T03:39:40Z
Twin Auto-Encoder Model for Learning Separable Representation in Cyberattack Detection
Representation Learning (RL) plays a pivotal role in the success of many problems including cyberattack detection. Most RL methods for cyberattack detection are based on the latent vector of Auto-Encoder (AE) models. An AE transforms raw data into a new latent representation that better exposes the underlying characteristics of the input data, making it very useful for identifying cyberattacks. However, due to the heterogeneity and sophistication of cyberattacks, the representations of AEs are often entangled/mixed, making downstream attack detection difficult. To tackle this problem, we propose a novel model called Twin Auto-Encoder (TAE). TAE deterministically transforms the latent representation into a more distinguishable representation, namely the separable representation, and then reconstructs the separable representation at the output. The output of TAE, called the reconstruction representation, is input to downstream models to detect cyberattacks. We extensively evaluate the effectiveness of TAE using a wide range of benchmarking datasets. Experimental results show the superior accuracy of TAE over state-of-the-art RL models and well-known machine learning algorithms. Moreover, TAE also outperforms state-of-the-art models on some sophisticated and challenging attacks. We then investigate various characteristics of TAE to further demonstrate its superiority.
[ "['Phai Vu Dinh' 'Quang Uy Nguyen' 'Thai Hoang Dinh' 'Diep N. Nguyen'\n 'Bao Son Pham' 'Eryk Dutkiewicz']" ]
null
null
2403.15510
null
null
http://arxiv.org/pdf/2403.15510v1
2024-03-22T03:41:57Z
2024-03-22T03:41:57Z
Privacy-Preserving End-to-End Spoken Language Understanding
Spoken language understanding (SLU), one of the key enabling technologies for human-computer interaction in IoT devices, provides an easy-to-use user interface. Human speech can contain a lot of user-sensitive information, such as gender, identity, and sensitive content. New types of security and privacy breaches have thus emerged. Users do not want to expose their personal sensitive information to malicious attacks by untrusted third parties. Thus, the SLU system needs to ensure that a potential malicious attacker cannot deduce the sensitive attributes of the users, while it should avoid greatly compromising the SLU accuracy. To address the above challenge, this paper proposes a novel SLU multi-task privacy-preserving model to prevent both the speech recognition (ASR) and identity recognition (IR) attacks. The model uses the hidden layer separation technique so that SLU information is distributed only in a specific portion of the hidden layer, and the other two types of information are removed to obtain a privacy-secure hidden layer. In order to achieve good balance between efficiency and privacy, we introduce a new mechanism of model pre-training, namely joint adversarial training, to further enhance the user privacy. Experiments over two SLU datasets show that the proposed method can reduce the accuracy of both the ASR and IR attacks close to that of a random guess, while leaving the SLU performance largely unaffected.
[ "['Yinggui Wang' 'Wei Huang' 'Le Yang']" ]
null
null
2403.15511
null
null
http://arxiv.org/pdf/2403.15511v1
2024-03-22T03:54:04Z
2024-03-22T03:54:04Z
Multiple-Input Auto-Encoder Guided Feature Selection for IoT Intrusion Detection Systems
While intrusion detection systems (IDSs) benefit from the diversity and generalization of IoT data features, the data diversity (e.g., the heterogeneity and high dimensions of data) also makes it difficult to train effective machine learning models in IoT IDSs. This also leads to potentially redundant/noisy features that may decrease the accuracy of the detection engine in IDSs. This paper first introduces a novel neural network architecture called Multiple-Input Auto-Encoder (MIAE). MIAE consists of multiple sub-encoders that can process inputs from different sources with different characteristics. The MIAE model is trained in an unsupervised learning mode to transform the heterogeneous inputs into a lower-dimensional representation, which helps classifiers distinguish between normal behaviour and different types of attacks. To distil and retain more relevant features but remove less important/redundant ones during the training process, we further design and embed a feature selection layer right after the representation layer of MIAE, resulting in a new model called MIAEFS. This layer learns the importance of features in the representation vector, facilitating the selection of informative features from the representation vector. The results on three IDS datasets, i.e., NSLKDD, UNSW-NB15, and IDS2017, show the superior performance of MIAE and MIAEFS compared to other methods, e.g., conventional classifiers, dimensionality reduction models, unsupervised representation learning methods with different input dimensions, and unsupervised feature selection models. Moreover, MIAE and MIAEFS combined with the Random Forest (RF) classifier achieve an accuracy of 96.5% in detecting sophisticated attacks, e.g., Slowloris. The average running time for detecting an attack sample using RF with the representations of MIAE and MIAEFS is approximately 1.7E-6 seconds, whilst the model size is lower than 1 MB.
[ "['Phai Vu Dinh' 'Diep N. Nguyen' 'Dinh Thai Hoang' 'Quang Uy Nguyen'\n 'Eryk Dutkiewicz' 'Son Pham Bao']" ]
null
null
2403.15512
null
null
http://arxiv.org/pdf/2403.15512v1
2024-03-22T05:18:08Z
2024-03-22T05:18:08Z
Enhancing Effectiveness and Robustness in a Low-Resource Regime via Decision-Boundary-aware Data Augmentation
Efforts to leverage deep learning models in low-resource regimes have led to numerous augmentation studies. However, the direct application of methods such as mixup and cutout to text data, is limited due to their discrete characteristics. While methods using pretrained language models have exhibited efficiency, they require additional considerations for robustness. Inspired by recent studies on decision boundaries, this paper proposes a decision-boundary-aware data augmentation strategy to enhance robustness using pretrained language models. The proposed technique first focuses on shifting the latent features closer to the decision boundary, followed by reconstruction to generate an ambiguous version with a soft label. Additionally, mid-K sampling is suggested to enhance the diversity of the generated sentences. This paper demonstrates the performance of the proposed augmentation strategy compared to other methods through extensive experiments. Furthermore, the ablation study reveals the effect of soft labels and mid-K sampling and the extensibility of the method with curriculum data augmentation.
[ "['Kyohoon Jin' 'Junho Lee' 'Juhwan Choi' 'Sangmin Song' 'Youngbin Kim']" ]
null
null
2403.15517
null
null
http://arxiv.org/pdf/2403.15517v1
2024-03-22T11:14:30Z
2024-03-22T11:14:30Z
Improving Forward Compatibility in Class Incremental Learning by Increasing Representation Rank and Feature Richness
Class Incremental Learning (CIL) constitutes a pivotal subfield within continual learning, aimed at enabling models to progressively learn new classification tasks while retaining knowledge obtained from prior tasks. Although previous studies have predominantly focused on backward compatible approaches to mitigate catastrophic forgetting, recent investigations have introduced forward compatible methods to enhance performance on novel tasks and complement existing backward compatible methods. In this study, we introduce an effective-Rank based Feature Richness enhancement (RFR) method, designed for improving forward compatibility. Specifically, this method increases the effective rank of representations during the base session, thereby facilitating the incorporation of more informative features pertinent to unseen novel tasks. Consequently, RFR achieves dual objectives in backward and forward compatibility: minimizing feature extractor modifications and enhancing novel task performance, respectively. To validate the efficacy of our approach, we establish a theoretical connection between effective rank and the Shannon entropy of representations. Subsequently, we conduct comprehensive experiments by integrating RFR into eleven well-known CIL methods. Our results demonstrate the effectiveness of our approach in enhancing novel-task performance while mitigating catastrophic forgetting. Furthermore, our method notably improves the average incremental accuracy across all eleven cases examined.
[ "['Jaeill Kim' 'Wonseok Lee' 'Moonjung Eo' 'Wonjong Rhee']" ]
null
null
2403.15520
null
null
http://arxiv.org/pdf/2403.15520v1
2024-03-22T12:22:44Z
2024-03-22T12:22:44Z
GTC: GNN-Transformer Co-contrastive Learning for Self-supervised Heterogeneous Graph Representation
Graph Neural Networks (GNNs) have emerged as the most powerful weapon for various graph tasks due to the message-passing mechanism's great local information aggregation ability. However, over-smoothing has always hindered GNNs from going deeper and capturing multi-hop neighbors. Unlike GNNs, Transformers can model global information and multi-hop interactions via multi-head self-attention and a proper Transformer structure can show more immunity to the over-smoothing problem. So, can we propose a novel framework to combine GNN and Transformer, integrating both GNN's local information aggregation and Transformer's global information modeling ability to eliminate the over-smoothing problem? To realize this, this paper proposes a collaborative learning scheme for GNN-Transformer and constructs GTC architecture. GTC leverages the GNN and Transformer branch to encode node information from different views respectively, and establishes contrastive learning tasks based on the encoded cross-view information to realize self-supervised heterogeneous graph representation. For the Transformer branch, we propose Metapath-aware Hop2Token and CG-Hetphormer, which can cooperate with GNN to attentively encode neighborhood information from different levels. As far as we know, this is the first attempt in the field of graph representation learning to utilize both GNN and Transformer to collaboratively capture different view information and conduct cross-view contrastive learning. The experiments on real datasets show that GTC exhibits superior performance compared with state-of-the-art methods. Codes can be available at https://github.com/PHD-lanyu/GTC.
[ "['Yundong Sun' 'Dongjie Zhu' 'Yansong Wang' 'Zhaoshuo Tian']" ]
null
null
2403.15521
null
null
http://arxiv.org/pdf/2403.15521v2
2024-05-17T14:48:53Z
2024-03-22T13:20:33Z
Exploring new territory: Calibration-free decoding for c-VEP BCI
This study explores two zero-training methods aimed at enhancing the usability of brain-computer interfaces (BCIs) by eliminating the need for a calibration session. We introduce a novel method rooted in the event-related potential (ERP) domain, unsupervised mean maximization (UMM), to the fast code-modulated visual evoked potential (c-VEP) stimulus protocol. We compare UMM to the state-of-the-art c-VEP zero-training method that uses canonical correlation analysis (CCA). The comparison includes instantaneous classification and classification with cumulative learning from previously classified trials for both CCA and UMM. Our study shows the effectiveness of both methods in navigating the complexities of a c-VEP dataset, highlighting their differences and distinct strengths. This research not only provides insights into the practical implementation of calibration-free BCI methods but also paves the way for further exploration and refinement. Ultimately, the fusion of CCA and UMM holds promise for enhancing the accessibility and usability of BCI systems across various application domains and a multitude of stimulus protocols.
[ "['J. Thielen' 'J. Sosulski' 'M. Tangermann']" ]
null
null
2403.15523
null
null
http://arxiv.org/pdf/2403.15523v2
2024-05-17T14:44:24Z
2024-03-22T13:35:34Z
Towards auditory attention decoding with noise-tagging: A pilot study
Auditory attention decoding (AAD) aims to extract from brain activity the attended speaker amidst candidate speakers, offering promising applications for neuro-steered hearing devices and brain-computer interfacing. This pilot study makes a first step towards AAD using the noise-tagging stimulus protocol, which evokes reliable code-modulated evoked potentials, but is minimally explored in the auditory modality. Participants were sequentially presented with two Dutch speech stimuli that were amplitude-modulated with a unique binary pseudo-random noise-code, effectively tagging these with additional decodable information. We compared the decoding of unmodulated audio against audio modulated with various modulation depths, and a conventional AAD method against a standard method to decode noise-codes. Our pilot study revealed higher performances for the conventional method with 70 to 100 percent modulation depths compared to unmodulated audio. The noise-code decoder did not further improve these results. These fundamental insights highlight the potential of integrating noise-codes in speech to enhance auditory speaker detection when multiple speakers are presented simultaneously.
[ "['H. A. Scheppink' 'S. Ahmadi' 'P. Desain' 'M. Tangermann' 'J. Thielen']" ]
null
null
2403.15524
null
null
http://arxiv.org/pdf/2403.15524v1
2024-03-22T14:13:11Z
2024-03-22T14:13:11Z
PPA-Game: Characterizing and Learning Competitive Dynamics Among Online Content Creators
We introduce the Proportional Payoff Allocation Game (PPA-Game) to model how agents, akin to content creators on platforms like YouTube and TikTok, compete for divisible resources and consumers' attention. Payoffs are allocated to agents based on heterogeneous weights, reflecting the diversity in content quality among creators. Our analysis reveals that although a pure Nash equilibrium (PNE) is not guaranteed in every scenario, it is commonly observed, with its absence being rare in our simulations. Beyond analyzing static payoffs, we further discuss the agents' online learning about resource payoffs by integrating a multi-player multi-armed bandit framework. We propose an online algorithm facilitating each agent's maximization of cumulative payoffs over $T$ rounds. Theoretically, we establish that the regret of any agent is bounded by $O(\log^{1 + \eta} T)$ for any $\eta > 0$. Empirical results further validate the effectiveness of our approach.
[ "['Renzhe Xu' 'Haotian Wang' 'Xingxuan Zhang' 'Bo Li' 'Peng Cui']" ]
null
null
2403.15525
null
null
http://arxiv.org/pdf/2403.15525v1
2024-03-22T14:15:28Z
2024-03-22T14:15:28Z
Latent Neural Cellular Automata for Resource-Efficient Image Restoration
Neural cellular automata represent an evolution of the traditional cellular automata model, enhanced by the integration of a deep learning-based transition function. This shift from a manual to a data-driven approach significantly increases the adaptability of these models, enabling their application in diverse domains, including content generation and artificial life. However, their widespread application has been hampered by significant computational requirements. In this work, we introduce the Latent Neural Cellular Automata (LNCA) model, a novel architecture designed to address the resource limitations of neural cellular automata. Our approach shifts the computation from the conventional input space to a specially designed latent space, relying on a pre-trained autoencoder. We apply our model in the context of image restoration, which aims to reconstruct high-quality images from their degraded versions. This modification not only reduces the model's resource consumption but also maintains a flexible framework suitable for various applications. Our model achieves a significant reduction in computational requirements while maintaining high reconstruction fidelity. This increase in efficiency allows for inputs up to 16 times larger than current state-of-the-art neural cellular automata models, using the same resources.
[ "['Andrea Menta' 'Alberto Archetti' 'Matteo Matteucci']" ]
null
null
2403.15527
null
null
http://arxiv.org/pdf/2403.15527v2
2024-05-02T16:38:17Z
2024-03-22T15:40:06Z
Conformal online model aggregation
Conformal prediction equips machine learning models with a reasonable notion of uncertainty quantification without making strong distributional assumptions. It wraps around any black-box prediction model and converts point predictions into set predictions that have a predefined marginal coverage guarantee. However, conformal prediction only works if we fix the underlying machine learning model in advance. A relatively unaddressed issue in conformal prediction is that of model selection and/or aggregation: for a given problem, which of the plethora of prediction methods (random forests, neural nets, regularized linear models, etc.) should we conformalize? This paper proposes a new approach towards conformal model aggregation in online settings that is based on combining the prediction sets from several algorithms by voting, where weights on the models are adapted over time based on past performance.
[ "['Matteo Gasparin' 'Aaditya Ramdas']" ]
null
null
2403.15560
null
null
http://arxiv.org/pdf/2403.15560v1
2024-03-22T18:31:24Z
2024-03-22T18:31:24Z
A2DMN: Anatomy-Aware Dilated Multiscale Network for Breast Ultrasound Semantic Segmentation
In recent years, convolutional neural networks for semantic segmentation of breast ultrasound (BUS) images have shown great success; however, two major challenges still exist. 1) Most current approaches inherently lack the ability to utilize tissue anatomy, resulting in misclassified image regions. 2) They struggle to produce accurate boundaries due to the repeated down-sampling operations. To address these issues, we propose a novel breast anatomy-aware network for capturing fine image details and a new smoothness term that encodes breast anatomy. It incorporates context information across multiple spatial scales to generate more accurate semantic boundaries. Extensive experiments are conducted to compare the proposed method and eight state-of-the-art approaches using a BUS dataset with 325 images. The results demonstrate the proposed method significantly improves the segmentation of the muscle, mammary, and tumor classes and produces more accurate fine details of tissue boundaries.
[ "['Kyle Lucke' 'Aleksandar Vakanski' 'Min Xian']" ]
null
null
2403.15567
null
null
http://arxiv.org/pdf/2403.15567v1
2024-03-22T18:43:46Z
2024-03-22T18:43:46Z
Do not trust what you trust: Miscalibration in Semi-supervised Learning
State-of-the-art semi-supervised learning (SSL) approaches rely on highly confident predictions to serve as pseudo-labels that guide the training on unlabeled samples. An inherent drawback of this strategy stems from the quality of the uncertainty estimates, as pseudo-labels are filtered only based on their degree of uncertainty, regardless of the correctness of their predictions. Thus, assessing and enhancing the uncertainty of network predictions is of paramount importance in the pseudo-labeling process. In this work, we empirically demonstrate that SSL methods based on pseudo-labels are significantly miscalibrated, and formally demonstrate the minimization of the min-entropy, a lower bound of the Shannon entropy, as a potential cause for miscalibration. To alleviate this issue, we integrate a simple penalty term, which enforces the logit distances of the predictions on unlabeled samples to remain low, preventing the network predictions to become overconfident. Comprehensive experiments on a variety of SSL image classification benchmarks demonstrate that the proposed solution systematically improves the calibration performance of relevant SSL models, while also enhancing their discriminative power, being an appealing addition to tackle SSL tasks.
[ "['Shambhavi Mishra' 'Balamurali Murugesan' 'Ismail Ben Ayed'\n 'Marco Pedersoli' 'Jose Dolz']" ]
null
null
2403.15576
null
null
http://arxiv.org/pdf/2403.15576v1
2024-03-22T19:04:02Z
2024-03-22T19:04:02Z
Data-centric Prediction Explanation via Kernelized Stein Discrepancy
Existing example-based prediction explanation methods often bridge test and training data points through the model's parameters or latent representations. While these methods offer clues to the causes of model predictions, they often exhibit innate shortcomings, such as incurring significant computational overhead or producing coarse-grained explanations. This paper presents a Highly-precise and Data-centric Explanation (HD-Explain), a straightforward prediction explanation method exploiting properties of Kernelized Stein Discrepancy (KSD). Specifically, the KSD uniquely defines a parameterized kernel function for a trained model that encodes model-dependent data correlation. By leveraging the kernel function, one can identify training samples that provide the best predictive support to a test point efficiently. We conducted thorough analyses and experiments across multiple classification domains, where we show that HD-Explain outperforms existing methods from various aspects, including 1) preciseness (fine-grained explanation), 2) consistency, and 3) computation efficiency, leading to a surprisingly simple, effective, and robust prediction explanation solution.
[ "['Mahtab Sarvmaili' 'Hassan Sajjad' 'Ga Wu']" ]
null
null
2403.15593
null
null
http://arxiv.org/pdf/2403.15593v2
2024-05-16T20:51:13Z
2024-03-22T19:41:26Z
FairerCLIP: Debiasing CLIP's Zero-Shot Predictions using Functions in RKHSs
Large pre-trained vision-language models such as CLIP provide compact and general-purpose representations of text and images that are demonstrably effective across multiple downstream zero-shot prediction tasks. However, owing to the nature of their training process, these models have the potential to 1) propagate or amplify societal biases in the training data and 2) learn to rely on spurious features. This paper proposes FairerCLIP, a general approach for making zero-shot predictions of CLIP more fair and robust to spurious correlations. We formulate the problem of jointly debiasing CLIP's image and text representations in reproducing kernel Hilbert spaces (RKHSs), which affords multiple benefits: 1) Flexibility: Unlike existing approaches, which are specialized to either learn with or without ground-truth labels, FairerCLIP is adaptable to learning in both scenarios. 2) Ease of Optimization: FairerCLIP lends itself to an iterative optimization involving closed-form solvers, which leads to $4\times$-$10\times$ faster training than the existing methods. 3) Sample Efficiency: Under sample-limited conditions, FairerCLIP significantly outperforms baselines when they fail entirely. And, 4) Performance: Empirically, FairerCLIP achieves appreciable accuracy gains on benchmark fairness and spurious correlation datasets over their respective baselines.
[ "['Sepehr Dehdashtian' 'Lan Wang' 'Vishnu Naresh Boddeti']" ]
null
null
2403.15594
null
null
http://arxiv.org/pdf/2403.15594v1
2024-03-22T19:53:21Z
2024-03-22T19:53:21Z
Analyzing Male Domestic Violence through Exploratory Data Analysis and Explainable Machine Learning Insights
Domestic violence, which is often perceived as a gendered issue among female victims, has gained increasing attention in recent years. Despite this focus, male victims of domestic abuse remain primarily overlooked, particularly in Bangladesh. Our study represents a pioneering exploration of the underexplored realm of male domestic violence (MDV) within the Bangladeshi context, shedding light on its prevalence, patterns, and underlying factors. Existing literature predominantly emphasizes female victimization in domestic violence scenarios, leading to an absence of research on male victims. We collected data from the major cities of Bangladesh and conducted exploratory data analysis to understand the underlying dynamics. We implemented 11 traditional machine learning models with default and optimized hyperparameters, 2 deep learning, and 4 ensemble models. Despite various approaches, CatBoost has emerged as the top performer due to its native support for categorical features, efficient handling of missing values, and robust regularization techniques, achieving 76% accuracy. In contrast, other models achieved accuracy rates in the range of 58-75%. The eXplainable AI techniques, SHAP and LIME, were employed to gain insights into the decision-making of black-box machine learning models. By shedding light on this topic and identifying factors associated with domestic abuse, the study contributes to identifying groups of people vulnerable to MDV, raising awareness, and informing policies and interventions aimed at reducing MDV. Our findings challenge the prevailing notion that domestic abuse primarily affects women, thus emphasizing the need for tailored interventions and support systems for male victims. ML techniques enhance the analysis and understanding of the data, providing valuable insights for developing effective strategies to combat this pressing social issue.
[ "['Md Abrar Jahin' 'Saleh Akram Naife' 'Fatema Tuj Johora Lima'\n 'M. F. Mridha' 'Jungpil Shin']" ]
null
null
2403.15598
null
null
http://arxiv.org/pdf/2403.15598v1
2024-03-22T20:01:53Z
2024-03-22T20:01:53Z
An ensemble of data-driven weather prediction models for operational sub-seasonal forecasting
We present an operations-ready multi-model ensemble weather forecasting system which uses hybrid data-driven weather prediction models coupled with the European Centre for Medium-range Weather Forecasts (ECMWF) ocean model to predict global weather at 1-degree resolution for 4 weeks of lead time. For predictions of 2-meter temperature, our ensemble on average outperforms the raw ECMWF extended-range ensemble by 4-17%, depending on the lead time. However, after applying statistical bias corrections, the ECMWF ensemble is about 3% better at 4 weeks. For other surface parameters, our ensemble is also within a few percentage points of ECMWF's ensemble. We demonstrate that it is possible to achieve near-state-of-the-art subseasonal-to-seasonal forecasts using a multi-model ensembling approach with data-driven weather prediction models.
[ "['Jonathan A. Weyn' 'Divya Kumar' 'Jeremy Berman' 'Najeeb Kazmi'\n 'Sylwester Klocek' 'Pete Luferenko' 'Kit Thambiratnam']" ]
null
null
2403.15605
null
null
http://arxiv.org/pdf/2403.15605v1
2024-03-22T20:22:08Z
2024-03-22T20:22:08Z
Efficiently Assemble Normalization Layers and Regularization for Federated Domain Generalization
Domain shift is a formidable issue in Machine Learning that causes a model to suffer from performance degradation when tested on unseen domains. Federated Domain Generalization (FedDG) attempts to train a global model using collaborative clients in a privacy-preserving manner that can generalize well to unseen clients possibly with domain shift. However, most existing FedDG methods either cause additional privacy risks of data leakage or induce significant costs in client communication and computation, which are major concerns in the Federated Learning paradigm. To circumvent these challenges, here we introduce a novel architectural method for FedDG, namely gPerXAN, which relies on a normalization scheme working with a guiding regularizer. In particular, we carefully design Personalized eXplicitly Assembled Normalization to enforce client models selectively filtering domain-specific features that are biased towards local data while retaining discrimination of those features. Then, we incorporate a simple yet effective regularizer to guide these models in directly capturing domain-invariant representations that the global model's classifier can leverage. Extensive experimental results on two benchmark datasets, i.e., PACS and Office-Home, and a real-world medical dataset, Camelyon17, indicate that our proposed method outperforms other existing methods in addressing this particular problem.
[ "['Khiem Le' 'Long Ho' 'Cuong Do' 'Danh Le-Phuoc' 'Kok-Seng Wong']" ]
null
null
2403.15638
null
null
http://arxiv.org/pdf/2403.15638v3
2024-04-26T20:24:11Z
2024-03-22T22:27:44Z
Differentially Private Next-Token Prediction of Large Language Models
Ensuring the privacy of Large Language Models (LLMs) is becoming increasingly important. The most widely adopted technique to accomplish this is DP-SGD, which trains a model to guarantee Differential Privacy (DP). However, DP-SGD overestimates an adversary's capabilities in having white box access to the model and, as a result, causes longer training times and larger memory usage than SGD. On the other hand, commercial LLM deployments are predominantly cloud-based; hence, adversarial access to LLMs is black-box. Motivated by these observations, we present Private Mixing of Ensemble Distributions (PMixED): a private prediction protocol for next-token prediction that utilizes the inherent stochasticity of next-token sampling and a public model to achieve Differential Privacy. We formalize this by introducing RD-mollifiers, which project each model's output distribution from an ensemble of fine-tuned LLMs onto a set around a public LLM's output distribution, then average the projected distributions and sample from it. Unlike DP-SGD which needs to consider the model architecture during training, PMixED is model agnostic, which makes PMixED a very appealing solution for current deployments. Our results show that PMixED achieves a stronger privacy guarantee than sample-level privacy and outperforms DP-SGD for privacy $\epsilon = 8$ on large-scale datasets. Thus, PMixED offers a practical alternative to DP training methods for achieving strong generative utility without compromising privacy.
[ "['James Flemings' 'Meisam Razaviyayn' 'Murali Annavaram']" ]
null
null
2403.15652
null
null
http://arxiv.org/pdf/2403.15652v1
2024-03-22T23:56:40Z
2024-03-22T23:56:40Z
Parametric Encoding with Attention and Convolution Mitigate Spectral Bias of Neural Partial Differential Equation Solvers
Deep neural networks (DNNs) are increasingly used to solve partial differential equations (PDEs) that naturally arise while modeling a wide range of systems and physical phenomena. However, the accuracy of such DNNs decreases as the PDE complexity increases and they also suffer from spectral bias as they tend to learn the low-frequency solution characteristics. To address these issues, we introduce Parametric Grid Convolutional Attention Networks (PGCANs) that can solve PDE systems without leveraging any labeled data in the domain. The main idea of PGCAN is to parameterize the input space with a grid-based encoder whose parameters are connected to the output via a DNN decoder that leverages attention to prioritize feature training. Our encoder provides a localized learning ability and uses convolution layers to avoid overfitting and improve information propagation rate from the boundaries to the interior of the domain. We test the performance of PGCAN on a wide range of PDE systems and show that it effectively addresses spectral bias and provides more accurate solutions compared to competing methods.
[ "['Mehdi Shishehbor' 'Shirin Hosseinmardi' 'Ramin Bostanabad']" ]
null
null
2403.15654
null
null
http://arxiv.org/pdf/2403.15654v1
2024-03-23T00:01:34Z
2024-03-23T00:01:34Z
The Effectiveness of Local Updates for Decentralized Learning under Data Heterogeneity
We revisit two fundamental decentralized optimization methods, Decentralized Gradient Tracking (DGT) and Decentralized Gradient Descent (DGD), with multiple local updates. We consider two settings and demonstrate that incorporating $K > 1$ local update steps can reduce communication complexity. Specifically, for $\mu$-strongly convex and $L$-smooth loss functions, we prove that local DGT achieves communication complexity $\tilde{\mathcal{O}}\Big(\frac{L}{\mu K} + \frac{\delta}{\mu (1 - \rho)} + \frac{\rho}{(1 - \rho)^2} \cdot \frac{L + \delta}{\mu}\Big)$, where $\rho$ measures the network connectivity and $\delta$ measures the second-order heterogeneity of the local losses. Our result reveals the tradeoff between communication and computation and shows that increasing $K$ can effectively reduce communication costs when the data heterogeneity is low and the network is well-connected. We then consider the over-parameterized regime, where the local losses share the same minimizers, and prove that employing local updates in DGD, even without gradient correction, yields a similar reduction in communication complexity as DGT. Numerical experiments validate our theoretical results.
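For intuition, here is a minimal numerical sketch of decentralized gradient descent with K local steps per communication round. The function name `local_dgd`, the doubly-stochastic mixing matrix, and the toy quadratic losses are illustrative assumptions, not the paper's code.

```python
import numpy as np

def local_dgd(x0, grads, mixing_W, num_rounds=50, K=5, lr=0.1):
    """Minimal sketch of decentralized gradient descent with K local steps.

    x0:       (n, d) array, one iterate per node
    grads:    list of n callables, grads[i](x) returns node i's local gradient
    mixing_W: (n, n) doubly-stochastic gossip matrix encoding the network
    Each node takes K local gradient steps between communication rounds;
    communication happens once per round via the mixing matrix.
    """
    x = x0.copy()
    n = x.shape[0]
    for _ in range(num_rounds):
        for _ in range(K):                       # local computation
            x = x - lr * np.stack([grads[i](x[i]) for i in range(n)])
        x = mixing_W @ x                         # one round of communication
    return x

# Toy example: two nodes with heterogeneous quadratics f_i(x) = 0.5 * (x - b_i)^2.
b = [np.array([1.0]), np.array([-1.0])]
grads = [lambda z, bi=bi: z - bi for bi in b]
W = np.array([[0.5, 0.5], [0.5, 0.5]])
x = local_dgd(np.zeros((2, 1)), grads, W, num_rounds=100, K=3)  # -> near the average minimizer 0
```

The sketch illustrates the tradeoff the abstract describes: with low heterogeneity (the two $b_i$ close together) one can increase K and communicate less often while still converging near the global minimizer.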
[ "['Tongle Wu' 'Ying Sun']" ]
null
null
2403.15681
null
null
http://arxiv.org/pdf/2403.15681v1
2024-03-23T02:13:22Z
2024-03-23T02:13:22Z
Differentiable Information Bottleneck for Deterministic Multi-view Clustering
In recent years, the information bottleneck (IB) principle has provided an information-theoretic framework for deep multi-view clustering (MVC) by compressing multi-view observations while preserving the relevant information of multiple views. Although existing IB-based deep MVC methods have achieved huge success, they rely on variational approximation and distribution assumptions to estimate the lower bound of mutual information, which is a notoriously hard and impractical problem in high-dimensional multi-view spaces. In this work, we propose a new differentiable information bottleneck (DIB) method, which provides a deterministic and analytical MVC solution by fitting the mutual information without the necessity of variational approximation. Specifically, we first propose to directly fit the mutual information of high-dimensional spaces by leveraging a normalized kernel Gram matrix, which does not require any auxiliary neural estimator to estimate the lower bound of mutual information. Then, based on the new mutual information measurement, a deterministic multi-view neural network with analytical gradients is explicitly trained to parameterize the IB principle, which derives a deterministic compression of input variables from different views. Finally, a triplet consistency discovery mechanism is devised, which is capable of mining the feature consistency, cluster consistency and joint consistency based on the deterministic and compact representations. Extensive experimental results show the superiority of our DIB method on 6 benchmarks compared with 13 state-of-the-art baselines.
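The normalized-Gram-matrix idea resembles the well-known matrix-based Renyi entropy estimator; the sketch below computes such an estimate with an RBF kernel. The bandwidth sigma, the choice alpha = 2, and the Hadamard-product joint Gram matrix are assumptions made for illustration and are not the authors' exact formulation.

```python
import numpy as np

def gram(x, sigma=1.0):
    """RBF Gram matrix of the rows of x, normalized so its trace is 1."""
    sq = np.sum(x**2, axis=1, keepdims=True)
    d2 = sq + sq.T - 2.0 * x @ x.T
    K = np.exp(-d2 / (2.0 * sigma**2))
    return K / np.trace(K)

def renyi_entropy(K, alpha=2.0):
    """Matrix-based Renyi alpha-entropy from the Gram matrix eigenvalues."""
    ev = np.clip(np.linalg.eigvalsh(K), 0.0, None)
    return np.log2(np.sum(ev**alpha)) / (1.0 - alpha)

def mutual_information(x, y, sigma=1.0):
    """I(X;Y) ~ H(X) + H(Y) - H(X,Y), joint Gram taken as a Hadamard product."""
    Kx, Ky = gram(x, sigma), gram(y, sigma)
    Kxy = Kx * Ky
    Kxy /= np.trace(Kxy)
    return renyi_entropy(Kx) + renyi_entropy(Ky) - renyi_entropy(Kxy)
```

Because every step is a differentiable matrix operation on the data batch, such an estimate can in principle be backpropagated through, which is the property the abstract exploits.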
[ "['Xiaoqiang Yan' 'Zhixiang Jin' 'Fengshou Han' 'Yangdong Ye']" ]
null
null
2403.15690
null
null
http://arxiv.org/pdf/2403.15690v1
2024-03-23T02:44:20Z
2024-03-23T02:44:20Z
EAGLE: A Domain Generalization Framework for AI-generated Text Detection
With the advancement in capabilities of Large Language Models (LLMs), one major step in the responsible and safe use of such LLMs is to be able to detect text generated by these models. While supervised AI-generated text detectors perform well on text generated by older LLMs, with the frequent release of new LLMs, building supervised detectors for identifying text from such new models would require new labeled training data, which is infeasible in practice. In this work, we tackle this problem and propose a domain generalization framework for the detection of AI-generated text from unseen target generators. Our proposed framework, EAGLE, leverages the labeled data that is available so far from older language models and learns features invariant across these generators, in order to detect text generated by an unknown target generator. EAGLE learns such domain-invariant features by combining the representational power of self-supervised contrastive learning with domain adversarial training. Through our experiments, we demonstrate that EAGLE achieves impressive performance in detecting text generated by unseen target generators, including recent state-of-the-art ones such as GPT-4 and Claude, reaching detection scores within 4.7% of a fully supervised detector.
[ "['Amrita Bhattacharjee' 'Raha Moraffah' 'Joshua Garland' 'Huan Liu']" ]
null
null
2403.15694
null
null
http://arxiv.org/pdf/2403.15694v1
2024-03-23T03:06:19Z
2024-03-23T03:06:19Z
Group Benefits Instances Selection for Data Purification
Manually annotating datasets for training deep models is very labor-intensive and time-consuming. To avoid this burden, directly leveraging web images to construct training data becomes a natural choice. Nevertheless, the presence of label noise in web data usually degrades the model performance. Existing methods for combating label noise are typically designed and tested on synthetic noisy datasets. However, they often fail to achieve satisfactory results on real-world noisy datasets. To this end, we propose a method named GRIP to alleviate the noisy label problem for both synthetic and real-world datasets. Specifically, GRIP utilizes a group regularization strategy that estimates class soft labels to improve noise robustness. Soft label supervision reduces overfitting on noisy labels and learns inter-class similarities to benefit classification. Furthermore, an instance purification operation globally identifies noisy labels by measuring the difference between each training sample and its class soft label. Through operations at both the group and instance levels, our approach integrates the advantages of noise-robust and noise-cleaning methods and remarkably alleviates the performance degradation caused by noisy labels. Comprehensive experimental results on synthetic and real-world datasets demonstrate the superiority of GRIP over existing state-of-the-art methods.
[ "['Zhenhuang Cai' 'Chuanyi Zhang' 'Dan Huang' 'Yuanbo Chen' 'Xiuyun Guan'\n 'Yazhou Yao']" ]
null
null
2403.15706
null
null
http://arxiv.org/pdf/2403.15706v2
2024-04-13T12:06:35Z
2024-03-23T03:56:31Z
G-ACIL: Analytic Learning for Exemplar-Free Generalized Class Incremental Learning
Class incremental learning (CIL) trains a network on sequential tasks with separated categories but suffers from catastrophic forgetting, where models quickly lose previously learned knowledge when acquiring new tasks. Generalized CIL (GCIL) aims to address the CIL problem in a more real-world scenario, where incoming data have mixed data categories and unknown sample size distribution, leading to intensified forgetting. Existing attempts for GCIL either have poor performance or invade data privacy by saving historical exemplars. To address this, in this paper, we propose an exemplar-free generalized analytic class incremental learning (G-ACIL). G-ACIL adopts analytic learning (a gradient-free training technique) and delivers an analytical solution (i.e., closed-form) to the GCIL scenario. This solution is derived by decomposing the incoming data into exposed and unexposed classes, allowing an equivalence between incremental learning and its joint training, i.e., the weight-invariant property. Such an equivalence is theoretically validated through matrix analysis tools, and hence contributes to the interpretability of GCIL. It is also empirically evidenced by experiments on various datasets and settings of GCIL. The results show that G-ACIL exhibits leading performance with high robustness compared with existing competitive GCIL methods. Code will be released at \url{https://github.com/ZHUANGHP/Analytic-continual-learning}.
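A closed-form, exemplar-free update in this spirit can be sketched with recursive ridge regression: each incoming batch updates the classifier weights analytically via the Woodbury identity, with no gradients and no stored exemplars. The class name `AnalyticClassifier`, the ridge parameter `gamma`, and the fixed number of classes are simplifying assumptions, not the released G-ACIL code.

```python
import numpy as np

class AnalyticClassifier:
    """Ridge-regression classifier updated in closed form per incoming batch.

    A hypothetical sketch of the gradient-free, exemplar-free flavour of
    analytic class-incremental learning: old data is never revisited; only the
    running inverse-regularized-covariance matrix R and the weights W are kept.
    """

    def __init__(self, feat_dim, num_classes, gamma=1.0):
        self.R = np.eye(feat_dim) / gamma       # running (X^T X + gamma I)^{-1}
        self.W = np.zeros((feat_dim, num_classes))

    def partial_fit(self, X, Y):
        """Absorb one batch (features X, one-hot labels Y) via the Woodbury identity."""
        n = X.shape[0]
        K = np.linalg.inv(np.eye(n) + X @ self.R @ X.T)
        self.R -= self.R @ X.T @ K @ X @ self.R
        self.W += self.R @ X.T @ (Y - X @ self.W)
        return self

    def predict(self, X):
        return np.argmax(X @ self.W, axis=1)
```

Because the recursion reproduces the joint ridge solution over all batches seen so far, the incremental result matches joint training exactly, which is the weight-invariant property the abstract refers to (here only for the fixed-class, linear-head case).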
[ "['Huiping Zhuang' 'Yizhu Chen' 'Di Fang' 'Run He' 'Kai Tong' 'Hongxin Wei'\n 'Ziqian Zeng' 'Cen Chen']" ]
null
null
2403.15707
null
null
http://arxiv.org/pdf/2403.15707v1
2024-03-23T03:57:28Z
2024-03-23T03:57:28Z
Role of Locality and Weight Sharing in Image-Based Tasks: A Sample Complexity Separation between CNNs, LCNs, and FCNs
Vision tasks are characterized by the properties of locality and translation invariance. The superior performance of convolutional neural networks (CNNs) on these tasks is widely attributed to the inductive bias of locality and weight sharing baked into their architecture. Existing attempts to quantify the statistical benefits of these biases in CNNs over locally connected convolutional neural networks (LCNs) and fully connected neural networks (FCNs) fall into one of the following categories: either they disregard the optimizer and only provide uniform convergence upper bounds with no separating lower bounds, or they consider simplistic tasks that do not truly mirror the locality and translation invariance as found in real-world vision tasks. To address these deficiencies, we introduce the Dynamic Signal Distribution (DSD) classification task that models an image as consisting of $k$ patches, each of dimension $d$, and the label is determined by a $d$-sparse signal vector that can freely appear in any one of the $k$ patches. On this task, for any orthogonally equivariant algorithm like gradient descent, we prove that CNNs require $\tilde{O}(k+d)$ samples, whereas LCNs require $\Omega(kd)$ samples, establishing the statistical advantages of weight sharing in translation invariant tasks. Furthermore, LCNs need $\tilde{O}(k(k+d))$ samples, compared to $\Omega(k^2 d)$ samples for FCNs, showcasing the benefits of locality in local tasks. Additionally, we develop information theoretic tools for analyzing randomized algorithms, which may be of interest for statistical research.
[ "['Aakash Lahoti' 'Stefani Karp' 'Ezra Winston' 'Aarti Singh' 'Yuanzhi Li']" ]
null
null
2403.15711
null
null
http://arxiv.org/pdf/2403.15711v1
2024-03-23T04:13:55Z
2024-03-23T04:13:55Z
Identifiable Latent Neural Causal Models
Causal representation learning seeks to uncover latent, high-level causal representations from low-level observed data. It is particularly good at predictions under unseen distribution shifts, because these shifts can generally be interpreted as consequences of interventions. Hence, leveraging seen distribution shifts becomes a natural strategy to help identify causal representations, which in turn benefits predictions where distributions are previously unseen. Determining the types (or conditions) of such distribution shifts that do contribute to the identifiability of causal representations is critical. This work establishes a sufficient and necessary condition characterizing the types of distribution shifts for identifiability in the context of latent additive noise models. Furthermore, we present partial identifiability results when only a portion of distribution shifts meets the condition. In addition, we extend our findings to latent post-nonlinear causal models. We translate our findings into a practical algorithm, allowing for the acquisition of reliable latent causal representations. Our algorithm, guided by our underlying theory, has demonstrated outstanding performance across a diverse range of synthetic and real-world datasets. The empirical observations align closely with the theoretical findings, affirming the robustness and effectiveness of our approach.
[ "['Yuhang Liu' 'Zhen Zhang' 'Dong Gong' 'Mingming Gong' 'Biwei Huang'\n 'Anton van den Hengel' 'Kun Zhang' 'Javen Qinfeng Shi']" ]
null
null
2403.15717
null
null
http://arxiv.org/pdf/2403.15717v1
2024-03-23T04:44:55Z
2024-03-23T04:44:55Z
Ev-Edge: Efficient Execution of Event-based Vision Algorithms on Commodity Edge Platforms
Event cameras have emerged as a promising sensing modality for autonomous navigation systems, owing to their high temporal resolution, high dynamic range and negligible motion blur. To process the asynchronous temporal event streams from such sensors, recent research has shown that a mix of Artificial Neural Networks (ANNs), Spiking Neural Networks (SNNs) as well as hybrid SNN-ANN algorithms are necessary to achieve high accuracies across a range of perception tasks. However, we observe that executing such workloads on commodity edge platforms which feature heterogeneous processing elements such as CPUs, GPUs and neural accelerators results in inferior performance. This is due to the mismatch between the irregular nature of event streams and diverse characteristics of algorithms on the one hand and the underlying hardware platform on the other. We propose Ev-Edge, a framework that contains three key optimizations to boost the performance of event-based vision systems on edge platforms: (1) an Event2Sparse Frame converter directly transforms raw event streams into sparse frames, enabling the use of sparse libraries with minimal encoding overheads; (2) a Dynamic Sparse Frame Aggregator merges sparse frames at runtime by trading off the temporal granularity of events and computational demand, thereby improving hardware utilization; (3) a Network Mapper maps concurrently executing tasks to different processing elements while also selecting layer precision by considering both compute and communication overheads. On several state-of-the-art networks for a range of autonomous navigation tasks, Ev-Edge achieves 1.28x-2.05x improvements in latency and 1.23x-2.15x in energy over an all-GPU implementation on the NVIDIA Jetson Xavier AGX platform for single-task execution scenarios. Ev-Edge also achieves 1.43x-1.81x latency improvements over round-robin scheduling methods in multi-task execution scenarios.
[ "['Shrihari Sridharan' 'Surya Selvam' 'Kaushik Roy' 'Anand Raghunathan']" ]
null
null
2403.15726
null
null
http://arxiv.org/pdf/2403.15726v1
2024-03-23T05:26:36Z
2024-03-23T05:26:36Z
Convection-Diffusion Equation: A Theoretically Certified Framework for Neural Networks
In this paper, we study partial differential equation models of neural networks. A neural network can be viewed as a map from a simple base model to a complicated function. Based on a solid analysis, we show that this map can be formulated by a convection-diffusion equation. This theoretically certified framework gives a mathematical foundation and a deeper understanding of neural networks. Moreover, based on the convection-diffusion equation model, we design a novel network structure that incorporates a diffusion mechanism into the network architecture. Extensive experiments on both benchmark datasets and real-world applications validate the performance of the proposed model.
[ "['Tangjun Wang' 'Chenglong Bao' 'Zuoqiang Shi']" ]
null
null
2403.15734
null
null
http://arxiv.org/pdf/2403.15734v1
2024-03-23T06:01:45Z
2024-03-23T06:01:45Z
Space Group Informed Transformer for Crystalline Materials Generation
We introduce CrystalFormer, a transformer-based autoregressive model specifically designed for space group-controlled generation of crystalline materials. The space group symmetry significantly simplifies the crystal space, which is crucial for data- and compute-efficient generative modeling of crystalline materials. Leveraging the prominent discrete and sequential nature of the Wyckoff positions, CrystalFormer learns to generate crystals by directly predicting the species and locations of symmetry-inequivalent atoms in the unit cell. Our results demonstrate that CrystalFormer matches state-of-the-art performance on standard benchmarks in terms of the validity, novelty, and stability of the generated crystalline materials. Our analysis also shows that CrystalFormer ingests sensible solid-state chemistry information from data for generative modeling. CrystalFormer unifies symmetry-based structure search and generative pre-training in the realm of crystalline materials. The simplicity, generality, and flexibility of CrystalFormer position it as a promising architecture to be the foundational model of the entire crystalline materials space, heralding a new era in materials modeling and discovery.
[ "['Zhendong Cao' 'Xiaoshan Luo' 'Jian Lv' 'Lei Wang']" ]
null
null
2403.15740
null
null
http://arxiv.org/pdf/2403.15740v1
2024-03-23T06:36:32Z
2024-03-23T06:36:32Z
Ghost Sentence: A Tool for Everyday Users to Copyright Data from Large Language Models
Web user data plays a central role in the ecosystem of pre-trained large language models (LLMs) and their fine-tuned variants. Billions of documents are crawled from the web and fed to LLMs. How can \textit{\textbf{everyday web users}} confirm if LLMs misuse their data without permission? In this work, we suggest that users repeatedly insert personal passphrases into their documents, enabling LLMs to memorize them. These concealed passphrases in user documents are referred to as \textit{ghost sentences}; once users identify them in the generated content of LLMs, they can be sure that their data has been used for training. To explore the effectiveness and usage of this copyrighting tool, we define the \textit{user training data identification} task with ghost sentences. Multiple datasets from various sources at different scales are created and tested with LLMs of different sizes. For evaluation, we introduce a last-$k$-words verification manner along with two metrics: document and user identification accuracy. In the specific case of instruction tuning of a 3B LLaMA model, 11 out of 16 users with ghost sentences identify their data within the generated content. These 16 users contribute 383 examples to $\sim$1.8M training documents. For continued pre-training of a 1.1B TinyLlama model, 61 out of 64 users with ghost sentences identify their data within the LLM output. These 64 users contribute 1156 examples to $\sim$10M training documents.
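The last-k-words check can be pictured with a short sketch. The helper names and the toy passphrase below are hypothetical, and a real evaluation would operate on tokenized model outputs rather than raw strings.

```python
def last_k_words_hit(generated_text, passphrase, k=5):
    """Sketch of a last-k-words verification (hypothetical helper).

    Returns True if the final k words of the user's ghost sentence appear,
    in order, anywhere in the text produced by the language model.
    """
    tail = " ".join(passphrase.lower().split()[-k:])
    return tail in " ".join(generated_text.lower().split())

def identify_users(generations, user_passphrases, k=5):
    """Map each user id to whether any generation reveals their ghost sentence."""
    return {
        user: any(last_k_words_hit(g, phrase, k) for g in generations)
        for user, phrase in user_passphrases.items()
    }

# Toy usage
users = {"alice": "my secret marker is violet meridian apple cascade nine"}
outputs = ["... is violet meridian apple cascade nine, as noted earlier ..."]
print(identify_users(outputs, users, k=5))   # {'alice': True}
```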
[ "['Shuai Zhao' 'Linchao Zhu' 'Ruijie Quan' 'Yi Yang']" ]
null
null
2403.15744
null
null
http://arxiv.org/pdf/2403.15744v3
2024-04-15T17:10:46Z
2024-03-23T07:16:23Z
On the Fragility of Active Learners
Active learning (AL) techniques aim to maximally utilize a labeling budget by iteratively selecting instances that are most likely to improve prediction accuracy. However, their benefit compared to random sampling has not been consistent across various setups, e.g., different datasets and classifiers. In this empirical study, we examine how a combination of different factors might obscure any gains from an AL technique. Focusing on text classification, we rigorously evaluate AL techniques over around 1000 experiments that vary with respect to the dataset, batch size, text representation and classifier. We show that AL is only effective in a narrow set of circumstances. We also address the problem of using metrics that are better aligned with real-world expectations. The impact of this study is in its insights for a practitioner: (a) the choice of text representation and classifier is as important as that of an AL technique, (b) the choice of the right metric is critical in assessment of the latter, and, finally, (c) reported AL results must be holistically interpreted, accounting for variables other than just the query strategy.
[ "['Abhishek Ghose' 'Emma Thuong Nguyen']" ]
null
null
2403.15749
null
null
http://arxiv.org/pdf/2403.15749v2
2024-04-02T18:11:09Z
2024-03-23T07:34:18Z
Horoballs and the subgradient method
To explore convex optimization on Hadamard spaces, we consider an iteration in the style of a subgradient algorithm. Traditionally, such methods assume that the underlying spaces are manifolds and that the objectives are geodesically convex: the methods are described using tangent spaces and exponential maps. By contrast, our iteration applies in a general Hadamard space, is framed in the underlying space itself, and relies instead on horospherical convexity of the objective level sets. For this restricted class of objectives, we prove a complexity result of the usual form. Notably, the complexity does not depend on a lower bound on the space curvature. We illustrate our subgradient algorithm on the minimal enclosing ball problem in Hadamard spaces.
[ "['Adrian S. Lewis' 'Genaro Lopez-Acedo' 'Adriana Nicolae']" ]
null
null
2403.15757
null
null
http://arxiv.org/pdf/2403.15757v1
2024-03-23T08:03:50Z
2024-03-23T08:03:50Z
User-Side Realization
Users are dissatisfied with services. Since a service is not tailor-made for each user, it is natural for dissatisfaction to arise. The problem is that even if users are dissatisfied, they often do not have the means to resolve their dissatisfaction. The user cannot alter the source code of the service, nor can they force the service provider to change it. The user has no choice but to remain dissatisfied or quit the service. User-side realization offers proactive solutions to this problem by providing general algorithms to deal with common problems on the user's side. These algorithms run on the user's side and solve the problems without requiring the service provider to change the service itself.
[ "['Ryoma Sato']" ]
null
null
2403.15766
null
null
http://arxiv.org/pdf/2403.15766v1
2024-03-23T08:40:38Z
2024-03-23T08:40:38Z
BEND: Bagging Deep Learning Training Based on Efficient Neural Network Diffusion
Bagging has achieved great success in the field of machine learning by integrating multiple base classifiers to build a single strong classifier and reduce model variance. The performance improvement of bagging mainly relies on the number and diversity of base classifiers. However, training traditional deep learning models individually is expensive, and it is difficult to train multiple models with low similarity on a restricted dataset. Recently, diffusion models, which have been tremendously successful in the fields of imaging and vision, have been found to be effective in generating neural network model weights and biases with diversity. We propose a Bagging deep learning training algorithm based on Efficient Neural network Diffusion (BEND). The originality of BEND comes from the first use of a neural network diffusion model to efficiently build base classifiers for bagging. Our approach is simple but effective: we first use multiple trained model weights and biases as inputs to train an autoencoder and a latent diffusion model, realizing a diffusion model from noise to valid neural network parameters. Subsequently, we generate several base classifiers using the trained diffusion model. Finally, we integrate these base classifiers for various inference tasks using the Bagging method. Experiments on multiple models and datasets show that our proposed BEND algorithm can consistently outperform the mean and median accuracies of both the original trained model and the diffused model. At the same time, new models diffused using the diffusion model have higher diversity and lower cost than multiple models trained using traditional methods. The BEND approach successfully introduces diffusion models into the new deep learning training domain and provides a new paradigm for future deep learning training and inference.
[ "['Jia Wei' 'Xingjun Zhang' 'Witold Pedrycz']" ]
null
null
2403.15769
null
null
http://arxiv.org/pdf/2403.15769v3
2024-06-10T13:09:53Z
2024-03-23T08:54:03Z
FusionINN: Decomposable Image Fusion for Brain Tumor Monitoring
Image fusion typically employs non-invertible neural networks to merge multiple source images into a single fused image. However, for clinical experts, solely relying on fused images may be insufficient for making diagnostic decisions, as the fusion mechanism blends features from source images, thereby making it difficult to interpret the underlying tumor pathology. We introduce FusionINN, a novel decomposable image fusion framework, capable of efficiently generating fused images and also decomposing them back to the source images. FusionINN is designed to be bijective by including a latent image alongside the fused image, while ensuring minimal transfer of information from the source images to the latent representation. To the best of our knowledge, we are the first to investigate the decomposability of fused images, which is particularly crucial for life-sensitive applications such as medical image fusion compared to other tasks like multi-focus or multi-exposure image fusion. Our extensive experimentation validates FusionINN over existing discriminative and generative fusion methods, both subjectively and objectively. Moreover, compared to a recent denoising diffusion-based fusion model, our approach offers faster and qualitatively better fusion results.
[ "['Nishant Kumar' 'Ziyan Tao' 'Jaikirat Singh' 'Yang Li' 'Peiwen Sun'\n 'Binghui Zhao' 'Stefan Gumhold']" ]
null
null
2403.15778
null
null
http://arxiv.org/pdf/2403.15778v1
2024-03-23T09:24:29Z
2024-03-23T09:24:29Z
Supervised Learning via Ensembles of Diverse Functional Representations: the Functional Voting Classifier
Many conventional statistical and machine learning methods face challenges when applied directly to high dimensional temporal observations. In recent decades, Functional Data Analysis (FDA) has gained widespread popularity as a framework for modeling and analyzing data that are, by their nature, functions in the domain of time. Although supervised classification has been extensively explored in recent decades within the FDA literature, ensemble learning of functional classifiers has only recently emerged as a topic of significant interest. Thus, the latter subject presents unexplored facets and challenges from various statistical perspectives. The focal point of this paper lies in the realm of ensemble learning for functional data and aims to show how different functional data representations can be used to train ensemble members and how base model predictions can be combined through majority voting. The so-called Functional Voting Classifier (FVC) is proposed to demonstrate how different functional representations leading to augmented diversity can increase predictive accuracy. Many real-world datasets from several domains are used to display that the FVC can significantly enhance performance compared to individual models. The framework presented provides a foundation for voting ensembles with functional data and can stimulate a highly encouraging line of research in the FDA context.
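A minimal sketch of voting across functional representations is given below. The three views (raw evaluations, first differences, and truncated Fourier magnitudes) and the random-forest base learners are illustrative assumptions, not choices prescribed by the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def functional_voting_fit_predict(X_curves, y, X_test):
    """Toy sketch of a functional voting ensemble (not the paper's implementation).

    Three views of the same discretized curves each train a base classifier;
    test predictions are combined by majority vote. Assumes integer class
    labels >= 0 and curves sampled on a common grid (rows of X_curves).
    """
    views = [
        lambda X: X,                                       # raw curve evaluations
        lambda X: np.diff(X, axis=1),                      # crude derivative view
        lambda X: np.abs(np.fft.rfft(X, axis=1))[:, :10],  # frequency-domain view
    ]
    votes = []
    for rep in views:
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(rep(X_curves), y)
        votes.append(clf.predict(rep(X_test)))
    votes = np.stack(votes).astype(int)                    # (n_views, n_test)
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
```

The point of the sketch is only that diversity comes from the representations rather than from resampling the data, which is the design choice the abstract emphasizes.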
[ "['Donato Riccio' 'Fabrizio Maturo' 'Elvira Romano']" ]
null
null
2403.15780
null
null
http://arxiv.org/pdf/2403.15780v1
2024-03-23T09:32:23Z
2024-03-23T09:32:23Z
A Fairness-Oriented Reinforcement Learning Approach for the Operation and Control of Shared Micromobility Services
As Machine Learning systems become increasingly popular across diverse application domains, including those with direct human implications, the imperative of equity and algorithmic fairness has risen to prominence in the Artificial Intelligence community. On the other hand, in the context of Shared Micromobility Systems, the exploration of fairness-oriented approaches remains limited. Addressing this gap, we introduce a pioneering investigation into the balance between performance optimization and algorithmic fairness in the operation and control of Shared Micromobility Services. Our study leverages the Q-Learning algorithm in Reinforcement Learning, benefiting from its convergence guarantees to ensure the robustness of our proposed approach. Notably, our methodology stands out for its ability to achieve equitable outcomes, as measured by the Gini index, across different station categories--central, peripheral, and remote. Through strategic rebalancing of vehicle distribution, our approach aims to maximize operator performance while simultaneously upholding fairness principles for users. In addition to theoretical insights, we substantiate our findings with a case study or simulation based on synthetic data, validating the efficacy of our approach. This paper underscores the critical importance of fairness considerations in shaping control strategies for Shared Micromobility Services, offering a pragmatic framework for enhancing equity in urban transportation systems.
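As a small aside, the Gini index used as the fairness measure above can be computed directly from per-station service levels; the sketch below assumes non-negative values and is not tied to the paper's simulator.

```python
import numpy as np

def gini_index(values):
    """Gini index of a non-negative distribution (0 = perfectly even, 1 = maximally uneven).

    In the setting above, the entries could be per-station service levels of
    central, peripheral, and remote stations.
    """
    v = np.sort(np.asarray(values, dtype=float))
    n = v.size
    if v.sum() == 0:
        return 0.0
    cum = np.cumsum(v)
    return float((n + 1 - 2 * np.sum(cum) / cum[-1]) / n)

print(gini_index([10, 10, 10]))   # 0.0 -> perfectly even allocation
print(gini_index([30, 5, 1]))     # closer to 1 -> uneven across station types
```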
[ "['Luca Vittorio Piron' 'Matteo Cederle' 'Marina Ceccon'\n 'Federico Chiariotti' 'Alessandro Fabris' 'Marco Fabris'\n 'Gian Antonio Susto']" ]
null
null
2403.15790
null
null
http://arxiv.org/pdf/2403.15790v1
2024-03-23T10:37:22Z
2024-03-23T10:37:22Z
Boarding for ISS: Imbalanced Self-Supervised: Discovery of a Scaled Autoencoder for Mixed Tabular Datasets
The field of imbalanced self-supervised learning, especially in the context of tabular data, has not been extensively studied. Existing research has predominantly focused on image datasets. This paper aims to fill this gap by examining the specific challenges posed by data imbalance in self-supervised learning in the domain of tabular data, with a primary focus on autoencoders. Autoencoders are widely employed for learning and constructing a new representation of a dataset, particularly for dimensionality reduction. They are also often used for generative model learning, as seen in variational autoencoders. When dealing with mixed tabular data, qualitative variables are often encoded using a one-hot encoder with a standard loss function (MSE or Cross Entropy). In this paper, we analyze the drawbacks of this approach, especially when categorical variables are imbalanced. We propose a novel metric to balance learning: a Multi-Supervised Balanced MSE. This approach reduces the reconstruction error by balancing the influence of variables. Finally, we empirically demonstrate that this new metric, compared to the standard MSE: i) outperforms when the dataset is imbalanced, especially when the learning process is insufficient, and ii) provides similar results in the opposite case.
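One plausible rendering of a reconstruction loss that balances the influence of imbalanced one-hot variables is sketched below. The inverse-frequency weighting, the function name `balanced_one_hot_mse`, and the slice-based layout of categorical blocks are assumptions for illustration, not the paper's exact Multi-Supervised Balanced MSE.

```python
import torch

def balanced_one_hot_mse(recon, target, category_slices):
    """Hedged sketch of a class-balanced reconstruction loss for mixed tabular data.

    recon, target:    (batch, dim) tensors; each categorical variable occupies a
                      column slice listed in category_slices, the rest is numeric.
    Each category level's squared error is reweighted by the inverse of its
    empirical frequency in the batch, so rare levels are not drowned out.
    """
    loss = 0.0
    cat_cols = set()
    for sl in category_slices:
        cat_cols.update(range(sl.start, sl.stop))
        freq = target[:, sl].mean(dim=0).clamp(min=1e-6)   # per-level frequency
        weights = 1.0 / freq
        weights = weights / weights.sum()
        loss = loss + ((recon[:, sl] - target[:, sl]) ** 2 * weights).sum(dim=1).mean()
    num_cols = [c for c in range(recon.shape[1]) if c not in cat_cols]
    if num_cols:                                           # plain MSE on numeric columns
        loss = loss + ((recon[:, num_cols] - target[:, num_cols]) ** 2).mean()
    return loss
```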
[ "['Samuel Stocksieker' 'Denys Pommeret' 'Arthur Charpentier']" ]
null
null
2403.15796
null
null
http://arxiv.org/pdf/2403.15796v2
2024-03-30T09:55:12Z
2024-03-23T11:03:31Z
Understanding Emergent Abilities of Language Models from the Loss Perspective
Recent studies have put into question the belief that emergent abilities in language models are exclusive to large models. This skepticism arises from two observations: 1) smaller models can also exhibit high performance on emergent abilities and 2) there is doubt about the discontinuous metrics used to measure these abilities. In this paper, we propose to study emergent abilities through the lens of pre-training loss, instead of model size or training compute. We demonstrate that models with the same pre-training loss, but different model and data sizes, yield the same performance on various downstream tasks. We also discover that a model exhibits emergent abilities on certain tasks -- regardless of the continuity of metrics -- when its pre-training loss falls below a specific threshold. Before reaching this threshold, its performance remains at the level of random guessing. This inspires us to redefine emergent abilities as those that manifest in models with lower pre-training losses, highlighting that these abilities cannot be predicted by merely extrapolating the performance trends of models with higher pre-training losses.
[ "['Zhengxiao Du' 'Aohan Zeng' 'Yuxiao Dong' 'Jie Tang']" ]
null
null
2403.15807
null
null
http://arxiv.org/pdf/2403.15807v1
2024-03-23T11:34:17Z
2024-03-23T11:34:17Z
Efficient Data Access Paths for Mixed Vector-Relational Search
The rapid growth of machine learning capabilities and the adoption of data processing methods using vector embeddings sparked a great interest in creating systems for vector data management. While the predominant approach of vector data management is to use specialized index structures for fast search over the entirety of the vector embeddings, once combined with other (meta)data, the search queries can also become selective on relational attributes - typical for analytical queries. As using vector indexes differs from traditional relational data access, we revisit and analyze alternative access paths for efficient mixed vector-relational search. We first evaluate the accurate but exhaustive scan-based search and propose hardware optimizations and alternative tensor-based formulation and batching to offset the cost. We outline the complex access-path design space, primarily driven by relational selectivity, and the decisions to consider when selecting an exhaustive scan-based search against an approximate index-based approach. Since the vector index primarily avoids expensive computation across the entire dataset, contrary to the common relational knowledge, it is better to scan at lower selectivity and probe at higher, with a cross-point between the two approaches dictated by data dimensionality and the number of concurrent search queries.
[ "['Viktor Sanca' 'Anastasia Ailamaki']" ]
null
null
2403.15824
null
null
http://arxiv.org/pdf/2403.15824v1
2024-03-23T12:33:12Z
2024-03-23T12:33:12Z
Carbon Intensity-Aware Adaptive Inference of DNNs
DNN inference, known for its significant energy consumption and the resulting high carbon footprint, can be made more sustainable by adapting model size and accuracy to the varying carbon intensity throughout the day. Our heuristic algorithm uses larger, high-accuracy models during low-intensity periods and smaller, lower-accuracy ones during high-intensity periods. We also introduce a metric, carbon-emission efficiency, which quantitatively measures the efficacy of adaptive model selection in terms of carbon footprint. Our evaluation showed that the proposed approach could improve the carbon-emission efficiency of vision recognition services by up to 80%.
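The adaptive selection heuristic can be pictured as a simple threshold ladder over grid carbon intensity; the thresholds and model names below are made up for illustration and are not taken from the paper.

```python
def select_model(carbon_intensity_gco2_per_kwh, models):
    """Hypothetical sketch of carbon-aware model selection.

    models: list of (max_intensity_threshold, model_name) sorted by threshold;
    the largest, most accurate model is used while grid carbon intensity stays
    below its threshold, and progressively smaller models take over as it rises.
    """
    for threshold, name in models:
        if carbon_intensity_gco2_per_kwh <= threshold:
            return name
    return models[-1][1]          # fall back to the smallest model

ladder = [(150, "resnet152"), (300, "resnet50"), (float("inf"), "mobilenet_v3")]
print(select_model(120, ladder))   # 'resnet152' during a low-intensity window
print(select_model(420, ladder))   # 'mobilenet_v3' when the grid is dirty
```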
[ "['Jiwan Jung']" ]
null
null
2403.15826
null
null
http://arxiv.org/pdf/2403.15826v1
2024-03-23T12:53:51Z
2024-03-23T12:53:51Z
Scaling Learning based Policy Optimization for Temporal Tasks via Dropout
This paper introduces a model-based approach for training feedback controllers for an autonomous agent operating in a highly nonlinear environment. We desire the trained policy to ensure that the agent satisfies specific task objectives, expressed in discrete-time Signal Temporal Logic (DT-STL). One advantage of reformulating a task via formal frameworks, like DT-STL, is that it permits quantitative satisfaction semantics. In other words, given a trajectory and a DT-STL formula, we can compute the robustness, which can be interpreted as an approximate signed distance between the trajectory and the set of trajectories satisfying the formula. We utilize feedback controllers, and we assume a feedforward neural network for learning these feedback controllers. We show how this learning problem is similar to training recurrent neural networks (RNNs), where the number of recurrent units is proportional to the temporal horizon of the agent's task objectives. This poses a challenge: RNNs are susceptible to vanishing and exploding gradients, and na\"{i}ve gradient descent-based strategies to solve long-horizon task objectives thus suffer from the same problems. To tackle this challenge, we introduce a novel gradient approximation algorithm based on the idea of dropout or gradient sampling. We show that the existing smooth semantics for robustness are inefficient regarding gradient computation when the specification becomes complex. To address this challenge, we propose a new smooth semantics for DT-STL that under-approximates the robustness value and scales well for backpropagation over a complex specification. We show that our control synthesis methodology can help stochastic gradient descent converge with fewer numerical issues, enabling scalable backpropagation over long time horizons and trajectories over high-dimensional state spaces.
[ "['Navid Hashemi' 'Bardh Hoxha' 'Danil Prokhorov' 'Georgios Fainekos'\n 'Jyotirmoy Deshmukh']" ]