categories: string
doi: string
id: string
year: float64
venue: string
link: string
updated: string
published: string
title: string
abstract: string
authors: list
null
null
2403.18731
null
null
http://arxiv.org/pdf/2403.18731v1
2024-03-27T16:21:24Z
2024-03-27T16:21:24Z
Enhancing Manufacturing Quality Prediction Models through the Integration of Explainability Methods
This research presents a method that utilizes explainability techniques to amplify the performance of machine learning (ML) models in forecasting the quality of milling processes, as demonstrated in this paper through a manufacturing use case. The methodology entails the initial training of ML models, followed by a fine-tuning phase where irrelevant features identified through explainability methods are eliminated. This procedural refinement results in performance enhancements, paving the way for potential reductions in manufacturing costs and a better understanding of the trained ML models. This study highlights the usefulness of explainability techniques in both explaining and optimizing predictive models in the manufacturing realm.
[ "['Dennis Gross' 'Helge Spieker' 'Arnaud Gotlieb' 'Ricardo Knoblauch']" ]
null
null
2403.18735
null
null
http://arxiv.org/pdf/2403.18735v1
2024-03-27T16:24:26Z
2024-03-27T16:24:26Z
Nonlinear model reduction for operator learning
Operator learning provides methods to approximate mappings between infinite-dimensional function spaces. Deep operator networks (DeepONets) are a notable architecture in this field. Recently, an extension of DeepONet based on model reduction and neural networks, proper orthogonal decomposition (POD)-DeepONet, has been able to outperform other architectures in terms of accuracy for several benchmark tests. We extend this idea towards nonlinear model order reduction by proposing an efficient framework that combines neural networks with kernel principal component analysis (KPCA) for operator learning. Our results demonstrate the superior performance of KPCA-DeepONet over POD-DeepONet.
[ "['Hamidreza Eivazi' 'Stefan Wittek' 'Andreas Rausch']" ]
null
null
2403.18739
null
null
http://arxiv.org/pdf/2403.18739v1
2024-03-27T16:32:32Z
2024-03-27T16:32:32Z
Usage-Specific Survival Modeling Based on Operational Data and Neural Networks
Accurate predictions of when a component will fail are crucial when planning maintenance, and by modeling the distribution of these failure times, survival models have been shown to be particularly useful in this context. The presented methodology is based on conventional neural network-based survival models that are trained using data that is continuously gathered and stored at specific times, called snapshots. An important property of this type of training data is that it can contain more than one snapshot from a specific individual, which means that standard maximum likelihood training cannot be applied directly since the data is not independent. However, the paper shows that if the data is in a specific format where all snapshot times are the same for all individuals, called homogeneously sampled, maximum likelihood training can be applied and produces desirable results. In many cases the data is not homogeneously sampled, and in this case it is proposed to resample the data to make it homogeneously sampled. How densely the dataset is sampled turns out to be an important parameter; the sampling should be dense enough to produce good results, but this also increases the size of the dataset, which makes training slow. To reduce the number of samples needed during training, the paper also proposes a technique that, instead of resampling the dataset once before training starts, randomly resamples the dataset at the start of each epoch. The proposed methodology is evaluated on both a simulated dataset and an experimental dataset of starter battery failures. The results show that if the data is homogeneously sampled, the methodology works as intended and produces accurate survival models. The results also show that randomly resampling the dataset on each epoch is an effective way to reduce the size of the training data.
[ "['Olov Holmer' 'Mattias Krysander' 'Erik Frisk']" ]
null
null
2403.18742
null
null
http://arxiv.org/pdf/2403.18742v4
2024-04-16T16:38:37Z
2024-03-27T16:39:28Z
Understanding the Learning Dynamics of Alignment with Human Feedback
Aligning large language models (LLMs) with human intentions has become a critical task for safely deploying models in real-world systems. While existing alignment approaches have seen empirical success, theoretically understanding how these methods affect model behavior remains an open question. Our work provides an initial attempt to theoretically analyze the learning dynamics of human preference alignment. We formally show how the distribution of preference datasets influences the rate of model updates and provide rigorous guarantees on the training accuracy. Our theory also reveals an intricate phenomenon where the optimization is prone to prioritizing certain behaviors with higher preference distinguishability. We empirically validate our findings on contemporary LLMs and alignment tasks, reinforcing our theoretical insights and shedding light on considerations for future alignment approaches. Disclaimer: This paper contains potentially offensive text; reader discretion is advised.
[ "['Shawn Im' 'Yixuan Li']" ]
null
null
2403.18756
null
null
http://arxiv.org/pdf/2403.18756v1
2024-03-27T16:56:14Z
2024-03-27T16:56:14Z
Detection of subclinical atherosclerosis by image-based deep learning on chest x-ray
Aims. To develop a deep-learning based system for recognition of subclinical atherosclerosis on a plain frontal chest x-ray. Methods and Results. A deep-learning algorithm to predict coronary artery calcium (CAC) score (the AI-CAC model) was developed on 460 chest x-rays (80% training cohort, 20% internal validation cohort) of primary prevention patients (58.4% male, median age 63 [51-74] years) with available paired chest x-ray and chest computed tomography (CT) indicated for any clinical reason and performed within 3 months. The CAC score calculated on chest CT was used as ground truth. The model was validated on a temporally-independent cohort of 90 patients from the same institution (external validation). The diagnostic accuracy of the AI-CAC model assessed by the area under the curve (AUC) was the primary outcome. Overall, median AI-CAC score was 35 (0-388) and 28.9% of patients had no AI-CAC. AUC of the AI-CAC model to identify a CAC>0 was 0.90 in the internal validation cohort and 0.77 in the external validation cohort. Sensitivity was consistently above 92% in both cohorts. In the overall cohort (n=540), among patients with AI-CAC=0, a single ASCVD event occurred, after 4.3 years. Patients with AI-CAC>0 had significantly higher Kaplan-Meier estimates for ASCVD events (13.5% vs. 3.4%, log-rank=0.013). Conclusion. The AI-CAC model seems to accurately detect subclinical atherosclerosis on chest x-ray with elevated sensitivity, and to predict ASCVD events with elevated negative predictive value. Adoption of the AI-CAC model to refine CV risk stratification or as an opportunistic screening tool requires prospective evaluation.
[ "['Guglielmo Gallone' 'Francesco Iodice' 'Alberto Presta' 'Davide Tore'\n 'Ovidio de Filippo' 'Michele Visciano' 'Carlo Alberto Barbano'\n 'Alessandro Serafini' 'Paola Gorrini' 'Alessandro Bruno'\n 'Walter Grosso Marra' 'James Hughes' 'Mario Iannaccone' 'Paolo Fonio'\n 'Attilio Fiandrotti' 'Alessandro Depaoli' 'Marco Grangetto'\n 'Gaetano Maria de Ferrari' \"Fabrizio D'Ascenzo\"]" ]
null
null
2403.18765
null
null
http://arxiv.org/pdf/2403.18765v1
2024-03-27T17:03:31Z
2024-03-27T17:03:31Z
CaT: Constraints as Terminations for Legged Locomotion Reinforcement Learning
Deep Reinforcement Learning (RL) has demonstrated impressive results in solving complex robotic tasks such as quadruped locomotion. Yet, current solvers fail to produce efficient policies respecting hard constraints. In this work, we advocate for integrating constraints into robot learning and present Constraints as Terminations (CaT), a novel constrained RL algorithm. Departing from classical constrained RL formulations, we reformulate constraints through stochastic terminations during policy learning: any violation of a constraint triggers a probability of terminating potential future rewards the RL agent could attain. We propose an algorithmic approach to this formulation, by minimally modifying widely used off-the-shelf RL algorithms in robot learning (such as Proximal Policy Optimization). Our approach leads to excellent constraint adherence without introducing undue complexity and computational overhead, thus mitigating barriers to broader adoption. Through empirical evaluation on the real quadruped robot Solo crossing challenging obstacles, we demonstrate that CaT provides a compelling solution for incorporating constraints into RL frameworks. Videos and code are available at https://constraints-as-terminations.github.io.
[ "['Elliot Chane-Sane' 'Pierre-Alexandre Leziart' 'Thomas Flayols'\n 'Olivier Stasse' 'Philippe Souères' 'Nicolas Mansard']" ]
null
null
2403.18766
null
null
http://arxiv.org/pdf/2403.18766v1
2024-03-27T17:05:03Z
2024-03-27T17:05:03Z
Superior Parallel Big Data Clustering through Competitive Stochastic Sample Size Optimization in Big-means
This paper introduces a novel K-means clustering algorithm, an advancement on the conventional Big-means methodology. The proposed method efficiently integrates parallel processing, stochastic sampling, and competitive optimization to create a scalable variant designed for big data applications. It addresses scalability and computation time challenges typically faced with traditional techniques. The algorithm adjusts sample sizes dynamically for each worker during execution, optimizing performance. Data from these sample sizes are continually analyzed, facilitating the identification of the most efficient configuration. By incorporating a competitive element among workers using different sample sizes, efficiency within the Big-means algorithm is further stimulated. In essence, the algorithm balances computational time and clustering quality by employing a stochastic, competitive sampling strategy in a parallel computing setting.
[ "['Rustam Mussabayev' 'Ravil Mussabayev']" ]
null
null
2403.18774
null
null
http://arxiv.org/pdf/2403.18774v1
2024-01-23T22:00:49Z
2024-01-23T22:00:49Z
RAW: A Robust and Agile Plug-and-Play Watermark Framework for AI-Generated Images with Provable Guarantees
Safeguarding intellectual property and preventing potential misuse of AI-generated images are of paramount importance. This paper introduces a robust and agile plug-and-play watermark detection framework, dubbed RAW. As a departure from traditional encoder-decoder methods, which incorporate fixed binary codes as watermarks within latent representations, our approach introduces learnable watermarks directly into the original image data. Subsequently, we employ a classifier that is jointly trained with the watermark to detect the presence of the watermark. The proposed framework is compatible with various generative architectures and supports on-the-fly watermark injection after training. By incorporating state-of-the-art smoothing techniques, we show that the framework provides provable guarantees regarding the false positive rate for misclassifying a watermarked image, even in the presence of certain adversarial attacks targeting watermark removal. Experiments on a diverse range of images generated by state-of-the-art diffusion models reveal substantial performance enhancements compared to existing approaches. For instance, our method demonstrates a notable increase in AUROC, from 0.48 to 0.82, when compared to state-of-the-art approaches in detecting watermarked images under adversarial attacks, while maintaining image quality, as indicated by closely aligned FID and CLIP scores.
[ "['Xun Xian' 'Ganghua Wang' 'Xuan Bi' 'Jayanth Srinivasa' 'Ashish Kundu'\n 'Mingyi Hong' 'Jie Ding']" ]
null
null
2403.18775
null
null
http://arxiv.org/pdf/2403.18775v1
2024-03-27T17:23:39Z
2024-03-27T17:23:39Z
ImageNet-D: Benchmarking Neural Network Robustness on Diffusion Synthetic Object
We establish rigorous benchmarks for visual perception robustness. Synthetic images such as ImageNet-C, ImageNet-9, and Stylized ImageNet provide specific types of evaluation over synthetic corruptions, backgrounds, and textures, yet those robustness benchmarks are restricted to specified variations and have low synthetic quality. In this work, we introduce generative models as a data source for synthesizing hard images that benchmark deep models' robustness. Leveraging diffusion models, we are able to generate images with more diversified backgrounds, textures, and materials than any prior work; we term this benchmark ImageNet-D. Experimental results show that ImageNet-D results in a significant accuracy drop for a range of vision models, from the standard ResNet visual classifier to the latest foundation models like CLIP and MiniGPT-4, reducing their accuracy by up to 60%. Our work suggests that diffusion models can be an effective source to test vision models. The code and dataset are available at https://github.com/chenshuang-zhang/imagenet_d.
[ "['Chenshuang Zhang' 'Fei Pan' 'Junmo Kim' 'In So Kweon' 'Chengzhi Mao']" ]
null
null
2403.18802
null
null
http://arxiv.org/pdf/2403.18802v3
2024-04-03T20:54:11Z
2024-03-27T17:48:55Z
Long-form factuality in large language models
Large language models (LLMs) often generate content that contains factual errors when responding to fact-seeking prompts on open-ended topics. To benchmark a model's long-form factuality in open domains, we first use GPT-4 to generate LongFact, a prompt set comprising thousands of questions spanning 38 topics. We then propose that LLM agents can be used as automated evaluators for long-form factuality through a method which we call Search-Augmented Factuality Evaluator (SAFE). SAFE utilizes an LLM to break down a long-form response into a set of individual facts and to evaluate the accuracy of each fact using a multi-step reasoning process comprising sending search queries to Google Search and determining whether a fact is supported by the search results. Furthermore, we propose extending F1 score as an aggregated metric for long-form factuality. To do so, we balance the percentage of supported facts in a response (precision) with the percentage of provided facts relative to a hyperparameter representing a user's preferred response length (recall). Empirically, we demonstrate that LLM agents can outperform crowdsourced human annotators - on a set of ~16k individual facts, SAFE agrees with crowdsourced human annotators 72% of the time, and on a random subset of 100 disagreement cases, SAFE wins 76% of the time. At the same time, SAFE is more than 20 times cheaper than human annotators. We also benchmark thirteen language models on LongFact across four model families (Gemini, GPT, Claude, and PaLM-2), finding that larger language models generally achieve better long-form factuality. LongFact, SAFE, and all experimental code are available at https://github.com/google-deepmind/long-form-factuality.
[ "['Jerry Wei' 'Chengrun Yang' 'Xinying Song' 'Yifeng Lu' 'Nathan Hu'\n 'Jie Huang' 'Dustin Tran' 'Daiyi Peng' 'Ruibo Liu' 'Da Huang' 'Cosmo Du'\n 'Quoc V. Le']" ]
null
null
2403.18807
null
null
http://arxiv.org/pdf/2403.18807v4
2024-04-17T14:59:51Z
2024-03-27T17:53:30Z
ECoDepth: Effective Conditioning of Diffusion Models for Monocular Depth Estimation
In the absence of parallax cues, a learning-based single image depth estimation (SIDE) model relies heavily on shading and contextual cues in the image. While this simplicity is attractive, it is necessary to train such models on large and varied datasets, which are difficult to capture. It has been shown that using embeddings from pre-trained foundational models, such as CLIP, improves zero-shot transfer in several applications. Taking inspiration from this, in our paper we explore the use of global image priors generated from a pre-trained ViT model to provide more detailed contextual information. We argue that the embedding vector from a ViT model, pre-trained on a large dataset, captures more relevant information for SIDE than the usual route of generating pseudo image captions followed by CLIP-based text embeddings. Based on this idea, we propose a new SIDE model using a diffusion backbone which is conditioned on ViT embeddings. Our proposed design establishes a new state-of-the-art (SOTA) for SIDE on the NYUv2 dataset, achieving an Abs Rel error of 0.059 (14% improvement) compared to 0.069 by the current SOTA (VPD), and on the KITTI dataset, achieving a Sq Rel error of 0.139 (2% improvement) compared to 0.142 by the current SOTA (GEDepth). For zero-shot transfer with a model trained on NYUv2, we report a mean relative improvement of (20%, 23%, 81%, 25%) over NeWCRFs on the (Sun-RGBD, iBims1, DIODE, HyperSim) datasets, compared to (16%, 18%, 45%, 9%) by ZoeDepth. The project page is available at https://ecodepth-iitd.github.io
[ "['Suraj Patni' 'Aradhye Agarwal' 'Chetan Arora']" ]
null
null
2403.18810
null
null
http://arxiv.org/pdf/2403.18810v1
2024-02-08T21:56:43Z
2024-02-08T21:56:43Z
LightningNet: Distributed Graph-based Cellular Network Performance Forecasting for the Edge
The cellular network plays a pivotal role in providing Internet access, since it is the only global-scale infrastructure with ubiquitous mobility support. To manage and maintain large-scale networks, mobile network operators require timely information, or even accurate performance forecasts. In this paper, we propose LightningNet, a lightweight and distributed graph-based framework for forecasting cellular network performance, which can capture the spatio-temporal dependencies that arise in network traffic. LightningNet achieves a steady performance increase over state-of-the-art forecasting techniques, while maintaining a similar resource usage profile. Our architecture also excels in that it is specifically designed to support IoT and edge devices, putting us a further step ahead of the current state-of-the-art, as indicated by our performance experiments with NVIDIA Jetson.
[ "['Konstantinos Zacharopoulos' 'Georgios Koutroumpas' 'Ioannis Arapakis'\n 'Konstantinos Georgopoulos' 'Javad Khangosstar' 'Sotiris Ioannidis']" ]
null
null
2403.18822
null
null
http://arxiv.org/pdf/2403.18822v1
2023-12-09T07:53:25Z
2023-12-09T07:53:25Z
Enhancing Financial Data Visualization for Investment Decision-Making
Navigating the intricate landscape of financial markets requires adept forecasting of stock price movements. This paper delves into the potential of Long Short-Term Memory (LSTM) networks for predicting stock dynamics, with a focus on discerning nuanced rise and fall patterns. Leveraging a dataset from the New York Stock Exchange (NYSE), the study incorporates multiple features to enhance LSTM's capacity in capturing complex patterns. Visualization of key attributes, such as opening, closing, low, and high prices, aids in unraveling subtle distinctions crucial for comprehensive market understanding. The meticulously crafted LSTM input structure, inspired by established guidelines, incorporates both price and volume attributes over a 25-day time step, enabling the model to capture temporal intricacies. A comprehensive methodology, including hyperparameter tuning with Grid Search, Early Stopping, and Callback mechanisms, leads to a remarkable 53% improvement in predictive accuracy. The study concludes with insights into model robustness, contributions to financial forecasting literature, and a roadmap for real-time stock market prediction. The amalgamation of LSTM networks, strategic hyperparameter tuning, and informed feature selection presents a potent framework for advancing the accuracy of stock price predictions, contributing substantively to financial time series forecasting discourse.
[ "['Nisarg Patel' 'Harmit Shah' 'Kishan Mewada']" ]
null
null
2403.18827
null
null
http://arxiv.org/pdf/2403.18827v1
2024-01-25T05:48:50Z
2024-01-25T05:48:50Z
Bridging Generative Networks with the Common Model of Cognition
This article presents a theoretical framework for adapting the Common Model of Cognition to large generative network models within the field of artificial intelligence. This can be accomplished by restructuring modules within the Common Model into shadow production systems that are peripheral to a central production system, which handles higher-level reasoning based on the shadow productions' output. Implementing this novel structure within the Common Model allows for a seamless connection between cognitive architectures and generative neural networks.
[ "['Robert L. West' 'Spencer Eckler' 'Brendan Conway-Smith' 'Nico Turcas'\n 'Eilene Tomkins-Flanagan' 'Mary Alexandria Kelly']" ]
null
null
2403.18833
null
null
http://arxiv.org/abs/2403.18833v1
2024-02-07T21:15:53Z
2024-02-07T21:15:53Z
A New Method for Sensorless Estimation of the Speed and Position in Brushed DC Motors Using Support Vector Machines
Currently, for many applications, it is necessary to know the speed and position of motors. This can be achieved using mechanical sensors coupled to the motor shaft or using sensorless techniques. The sensorless techniques in brushed dc motors can be classified into two types: 1) techniques based on the dynamic brushed dc motor model and 2) techniques based on the ripple component of the current. This paper presents a new method, based on the ripple component, for speed and position estimation in brushed dc motors, using support vector machines. The proposed method only measures the current and detects the pulses in this signal. The motor speed is estimated by using the inverse distance between the detected pulses, and the position is estimated by counting all detected pulses. The ability to detect ghost pulses and to discard false pulses is the main advantage of this method over other sensorless methods. The performed tests on two fractional horsepower brushed dc motors indicate that the method works correctly in a wide range of speeds and situations, in which the speed is constant or varies dynamically.
[ "['Ernesto Vazquez-Sanchez' 'Jaime Gomez-Gil' 'Jose-Carlos Gamazo-Real'\n 'Jose Fernando Diez-Higuera']" ]
null
null
2403.18839
null
null
http://arxiv.org/pdf/2403.18839v1
2024-02-23T12:59:49Z
2024-02-23T12:59:49Z
Long Short-Term Memory Pattern Recognition in Currency Trading
This study delves into the analysis of financial markets through the lens of Wyckoff Phases, a framework devised by Richard D. Wyckoff in the early 20th century. Focusing on the accumulation pattern within the Wyckoff framework, the research explores the phases of trading range and secondary test, elucidating their significance in understanding market dynamics and identifying potential trading opportunities. By dissecting the intricacies of these phases, the study sheds light on the creation of liquidity through market structure, offering insights into how traders can leverage this knowledge to anticipate price movements and make informed decisions. The effective detection and analysis of Wyckoff patterns necessitate robust computational models capable of processing complex market data, with spatial data best analyzed using Convolutional Neural Networks (CNNs) and temporal data through Long Short-Term Memory (LSTM) models. The creation of training data involves the generation of swing points, representing significant market movements, and filler points, introducing noise and enhancing model generalization. Activation functions, such as the sigmoid function, play a crucial role in determining the output behavior of neural network models. The results of the study demonstrate the remarkable efficacy of deep learning models in detecting Wyckoff patterns within financial data, underscoring their potential for enhancing pattern recognition and analysis in financial markets. In conclusion, the study highlights the transformative potential of AI-driven approaches in financial analysis and trading strategies, with the integration of AI technologies shaping the future of trading and investment practices.
[ "['Jai Pal']" ]
null
null
2403.18840
null
null
http://arxiv.org/pdf/2403.18840v1
2024-02-28T03:45:55Z
2024-02-28T03:45:55Z
Feynman Diagrams as Computational Graphs
We propose a computational graph representation of high-order Feynman diagrams in Quantum Field Theory (QFT), applicable to any combination of spatial, temporal, momentum, and frequency domains. Utilizing the Dyson-Schwinger and parquet equations, our approach effectively organizes these diagrams into a fractal structure of tensor operations, significantly reducing computational redundancy. This approach not only streamlines the evaluation of complex diagrams but also facilitates an efficient implementation of the field-theoretic renormalization scheme, crucial for enhancing perturbative QFT calculations. Key to this advancement is the integration of Taylor-mode automatic differentiation, a key technique employed in machine learning packages to compute higher-order derivatives efficiently on computational graphs. To operationalize these concepts, we develop a Feynman diagram compiler that optimizes diagrams for various computational platforms, utilizing machine learning frameworks. Demonstrating this methodology's effectiveness, we apply it to the three-dimensional uniform electron gas problem, achieving unprecedented accuracy in calculating the quasiparticle effective mass at metal density. Our work demonstrates the synergy between QFT and machine learning, establishing a new avenue for applying AI techniques to complex quantum many-body problems.
[ "['Pengcheng Hou' 'Tao Wang' 'Daniel Cerkoney' 'Xiansheng Cai' 'Zhiyi Li'\n 'Youjin Deng' 'Lei Wang' 'Kun Chen']" ]
null
null
2403.18843
null
null
http://arxiv.org/pdf/2403.18843v1
2024-03-04T00:30:24Z
2024-03-04T00:30:24Z
JEP-KD: Joint-Embedding Predictive Architecture Based Knowledge Distillation for Visual Speech Recognition
Visual Speech Recognition (VSR) tasks are generally recognized to have a lower theoretical performance ceiling than Automatic Speech Recognition (ASR), owing to the inherent limitations of conveying semantic information visually. To mitigate this challenge, this paper introduces an advanced knowledge distillation approach using a Joint-Embedding Predictive Architecture (JEPA), named JEP-KD, designed to more effectively utilize audio features during model training. Central to JEP-KD is the inclusion of a generative network within the embedding layer, which enhances the video encoder's capacity for semantic feature extraction and brings it into closer alignment with the audio features from a pre-trained ASR model's encoder. This approach aims to progressively reduce the performance gap between VSR and ASR. Moreover, a comprehensive multimodal, multistage training regimen for the JEP-KD framework is established, bolstering the robustness and efficacy of the training process. Experiment results demonstrate that JEP-KD significantly improves the performance of VSR models and demonstrates versatility across different VSR platforms, indicating its potential for broader application within other multimodal tasks.
[ "['Chang Sun' 'Hong Yang' 'Bo Qin']" ]
null
null
2403.18846
null
null
http://arxiv.org/pdf/2403.18846v1
2024-03-08T04:08:40Z
2024-03-08T04:08:40Z
The Blind Normalized Stein Variational Gradient Descent-Based Detection for Intelligent Massive Random Access
The lack of an efficient preamble detection algorithm remains a challenge for solving preamble collision problems in intelligent massive random access (RA) in practical communication scenarios. To solve this problem, we present a novel early preamble detection scheme based on a maximum likelihood estimation (MLE) model at the first step of the grant-based RA procedure. A novel blind normalized Stein variational gradient descent (SVGD)-based detector is proposed to obtain an approximate solution to the MLE model. First, by exploring the relationship between the Hadamard transform and wavelet transform, a new modified Hadamard transform (MHT) is developed to separate high-frequencies from important components using the second-order derivative filter. Next, to eliminate noise and mitigate the vanishing gradients problem in the SVGD-based detectors, the block MHT layer is designed based on the MHT, scaling layer, soft-thresholding layer, inverse MHT and sparsity penalty. Then, the blind normalized SVGD algorithm is derived to perform preamble detection without prior knowledge of noise power and the number of active devices. The experimental results show the proposed block MHT layer outperforms other transform-based methods in terms of computation costs and denoising performance. Furthermore, with the assistance of the block MHT layer, the proposed blind normalized SVGD algorithm achieves a higher preamble detection accuracy and throughput than other state-of-the-art detection methods.
[ "['Xin Zhu' 'Ahmet Enis Cetin']" ]
null
null
2403.18853
null
null
http://arxiv.org/pdf/2403.18853v1
2024-03-18T10:13:24Z
2024-03-18T10:13:24Z
Spatio-seasonal risk assessment of upward lightning at tall objects using meteorological reanalysis data
This study investigates lightning at tall objects and evaluates the risk of upward lightning (UL) over the eastern Alps and its surrounding areas. While uncommon, UL poses a threat, especially to wind turbines, as the long-duration current of UL can cause significant damage. Current risk assessment methods overlook the impact of meteorological conditions, potentially underestimating UL risks. Therefore, this study employs random forests, a machine learning technique, to analyze the relationship between UL measured at Gaisberg Tower (Austria) and 35 larger-scale meteorological variables. Of these, the larger-scale upward velocity, wind speed and direction at 10 meters, and cloud physics variables contribute the most information. The random forests predict the risk of UL across the study area at a 1 km^2 resolution. Strong near-surface winds combined with upward deflection by elevated terrain increase UL risk. The diurnal cycle of the UL risk as well as high-risk areas shift seasonally. They are concentrated north/northeast of the Alps in winter due to prevailing northerly winds, and expand southward, impacting northern Italy, in the transitional and summer months. The model performs best in winter, with the highest predicted UL risk coinciding with observed peaks in measured lightning at tall objects. The highest concentration is north of the Alps, where most wind turbines are located, leading to an increase in overall lightning activity. Comprehensive meteorological information is essential for UL risk assessment, as lightning densities are a poor indicator of lightning at tall objects.
[ "['Isabell Stucke' 'Deborah Morgenstern' 'Georg J. Mayr' 'Thorsten Simon'\n 'Achim Zeileis' 'Gerhard Diendorfer' 'Wolfgang Schulz' 'Hannes Pichler']" ]
null
null
2403.18855
null
null
http://arxiv.org/pdf/2403.18855v1
2024-03-18T20:47:38Z
2024-03-18T20:47:38Z
Directed Criteria Citation Recommendation and Ranking Through Link Prediction
We explore link prediction as a proxy for automatically surfacing documents from existing literature that might be topically or contextually relevant to a new document. Our model uses transformer-based graph embeddings to encode the meaning of each document, presented as a node within a citation network. We show that the semantic representations that our model generates can outperform other content-based methods in recommendation and ranking tasks. This provides a holistic approach to exploring citation graphs in domains where it is critical that these documents properly cite each other, so as to minimize the possibility of any inconsistencies.
[ "['William Watson' 'Lawrence Yong']" ]
null
null
2403.18864
null
null
http://arxiv.org/pdf/2403.18864v1
2024-03-24T14:23:35Z
2024-03-24T14:23:35Z
Interpretable Machine Learning for Weather and Climate Prediction: A Survey
Advanced machine learning models have recently achieved high predictive accuracy for weather and climate prediction. However, these complex models often lack inherent transparency and interpretability, acting as "black boxes" that impede user trust and hinder further model improvements. As such, interpretable machine learning techniques have become crucial in enhancing the credibility and utility of weather and climate modeling. In this survey, we review current interpretable machine learning approaches applied to meteorological predictions. We categorize methods into two major paradigms: 1) Post-hoc interpretability techniques that explain pre-trained models, such as perturbation-based, game theory based, and gradient-based attribution methods. 2) Designing inherently interpretable models from scratch using architectures like tree ensembles and explainable neural networks. We summarize how each technique provides insights into the predictions, uncovering novel meteorological relationships captured by machine learning. Lastly, we discuss research challenges around achieving deeper mechanistic interpretations aligned with physical principles, developing standardized evaluation benchmarks, integrating interpretability into iterative model development workflows, and providing explainability for large foundation models.
[ "['Ruyi Yang' 'Jingyu Hu' 'Zihao Li' 'Jianli Mu' 'Tingzhao Yu'\n 'Jiangjiang Xia' 'Xuhong Li' 'Aritra Dasgupta' 'Haoyi Xiong']" ]
null
null
2403.18866
null
null
http://arxiv.org/pdf/2403.18866v1
2024-03-25T14:50:01Z
2024-03-25T14:50:01Z
Graph Bayesian Optimization for Multiplex Influence Maximization
Influence maximization (IM) is the problem of identifying a limited number of initial influential users within a social network to maximize the number of influenced users. However, previous research has mostly focused on individual information propagation, neglecting the simultaneous and interactive dissemination of multiple information items. In reality, when users encounter a piece of information, such as a smartphone product, they often associate it with related products in their minds, such as earphones or computers from the same brand. Additionally, information platforms frequently recommend related content to users, amplifying this cascading effect and leading to multiplex influence diffusion. This paper first formulates the Multiplex Influence Maximization (Multi-IM) problem using multiplex diffusion models with an information association mechanism. In this problem, the seed set is a combination of influential users and information. To effectively manage the combinatorial complexity, we propose Graph Bayesian Optimization for Multi-IM (GBIM). The multiplex diffusion process is thoroughly investigated using a highly effective global kernelized attention message-passing module. This module, in conjunction with Bayesian linear regression (BLR), produces a scalable surrogate model. A data acquisition module incorporating the exploration-exploitation trade-off is developed to optimize the seed set further. Extensive experiments on synthetic and real-world datasets have proven our proposed framework effective. The code is available at https://github.com/zirui-yuan/GBIM.
[ "['Zirui Yuan' 'Minglai Shao' 'Zhiqian Chen']" ]
null
null
2403.18870
null
null
http://arxiv.org/pdf/2403.18870v1
2024-03-26T11:23:08Z
2024-03-26T11:23:08Z
SugarcaneNet2024: An Optimized Weighted Average Ensemble Approach of LASSO Regularized Pre-trained Models for Sugarcane Disease Classification
Sugarcane, a key crop for the world's sugar industry, is prone to several diseases that have a substantial negative influence on both its yield and quality. To effectively manage and implement preventative initiatives, diseases must be detected promptly and accurately. In this study, we present a unique model called sugarcaneNet2024 that outperforms previous methods for automatically and quickly detecting sugarcane disease through leaf image processing. Our proposed model consolidates an optimized weighted average ensemble of seven customized and LASSO-regularized pre-trained models, particularly InceptionV3, InceptionResNetV2, DenseNet201, DenseNet169, Xception, and ResNet152V2. Initially, we added three more dense layers with 0.0001 LASSO regularization, three 30% dropout layers, and three batch normalizations with renorm enabled at the bottom of these pre-trained models to improve the performance. The accuracy of sugarcane leaf disease classification was greatly increased by this addition. Following this, several comparative studies between the average ensemble and individual models were carried out, indicating that the ensemble technique performed better. The average ensemble of all modified pre-trained models produced outstanding outcomes: 100%, 99%, 99%, and 99.45% for F1 score, precision, recall, and accuracy, respectively. Performance was further enhanced by the implementation of an optimized weighted average ensemble technique incorporated with grid search. This optimized sugarcaneNet2024 model performed the best for detecting sugarcane diseases, having achieved accuracy, precision, recall, and F1 score of 99.67%, 100%, 100%, and 100%, respectively.
[ "['Md. Simul Hasan Talukder' 'Sharmin Akter' 'Abdullah Hafez Nur']" ]
null
null
2403.18871
null
null
http://arxiv.org/abs/2403.18871v1
2024-03-26T11:40:06Z
2024-03-26T11:40:06Z
Clinical Domain Knowledge-Derived Template Improves Post Hoc AI Explanations in Pneumothorax Classification
Background: Pneumothorax is an acute thoracic disease caused by abnormal air collection between the lungs and chest wall. To address the opaqueness often associated with deep learning (DL) models, explainable artificial intelligence (XAI) methods have been introduced to outline regions related to pneumothorax diagnoses made by DL models. However, these explanations sometimes diverge from actual lesion areas, highlighting the need for further improvement. Method: We propose a template-guided approach to incorporate the clinical knowledge of pneumothorax into model explanations generated by XAI methods, thereby enhancing the quality of these explanations. Utilizing one lesion delineation created by radiologists, our approach first generates a template that represents potential areas of pneumothorax occurrence. This template is then superimposed on model explanations to filter out extraneous explanations that fall outside the template's boundaries. To validate its efficacy, we carried out a comparative analysis of three XAI methods with and without our template guidance when explaining two DL models in two real-world datasets. Results: The proposed approach consistently improved baseline XAI methods across twelve benchmark scenarios built on three XAI methods, two DL models, and two datasets. The average incremental percentages, calculated by the performance improvements over the baseline performance, were 97.8% in Intersection over Union (IoU) and 94.1% in Dice Similarity Coefficient (DSC) when comparing model explanations and ground-truth lesion areas. Conclusions: In the context of pneumothorax diagnoses, we proposed a template-guided approach for improving AI explanations. We anticipate that our template guidance will forge a fresh approach to elucidating AI models by integrating clinical domain expertise.
[ "['Han Yuan' 'Chuan Hong' 'Pengtao Jiang' 'Gangming Zhao'\n 'Nguyen Tuan Anh Tran' 'Xinxing Xu' 'Yet Yen Yan' 'Nan Liu']" ]
null
null
2403.18872
null
null
http://arxiv.org/pdf/2403.18872v1
2024-03-26T12:51:02Z
2024-03-26T12:51:02Z
Targeted Visualization of the Backbone of Encoder LLMs
Attention based Large Language Models (LLMs) are the state-of-the-art in natural language processing (NLP). The two most common architectures are encoders such as BERT, and decoders like the GPT models. Despite the success of encoder models, on which we focus in this work, they also bear several risks, including issues with bias or their susceptibility to adversarial attacks, signifying the necessity for explainable AI to detect such issues. While various local explainability methods exist that focus on the prediction of single inputs, global methods based on dimensionality reduction for classification inspection, which have emerged in other domains and go further than just using t-SNE in the embedding space, are not widespread in NLP. To reduce this gap, we investigate the application of DeepView, a method for visualizing a part of the decision function together with a data set in two dimensions, to the NLP domain. While in previous work DeepView has been used to inspect deep image classification models, we demonstrate how to apply it to BERT-based NLP classifiers and investigate its usability in this domain, including settings with adversarially perturbed input samples and pre-trained, fine-tuned, and multi-task models.
[ "['Isaac Roberts' 'Alexander Schulz' 'Luca Hermes' 'Barbara Hammer']" ]
null
null
2403.18873
null
null
http://arxiv.org/pdf/2403.18873v1
2024-03-26T14:42:46Z
2024-03-26T14:42:46Z
Predicting risk of cardiovascular disease using retinal OCT imaging
We investigated the potential of optical coherence tomography (OCT) as an additional imaging technique to predict future cardiovascular disease (CVD). We utilised a self-supervised deep learning approach based on Variational Autoencoders (VAE) to learn low-dimensional representations of high-dimensional 3D OCT images and to capture distinct characteristics of different retinal layers within the OCT image. A Random Forest (RF) classifier was subsequently trained using the learned latent features and participant demographic and clinical data, to differentiate between patients at risk of CVD events (MI or stroke) and non-CVD cases. Our predictive model, trained on multimodal data, was assessed based on its ability to correctly identify individuals likely to suffer from a CVD event (MI or stroke) within a 5-year interval after image acquisition. Our self-supervised VAE feature selection and multimodal Random Forest classifier differentiate between patients at risk of future CVD events and the control group with an AUC of 0.75, outperforming the clinically established QRISK3 score (AUC = 0.597). The choroidal layer visible in OCT images was identified as an important predictor of future CVD events using a novel approach to model explainability. Retinal OCT imaging provides a cost-effective and non-invasive alternative to predict the risk of cardiovascular disease and is readily accessible in optometry practices and hospitals.
[ "['Cynthia Maldonado-Garcia' 'Rodrigo Bonazzola' 'Enzo Ferrante'\n 'Thomas H Julian' 'Panagiotis I Sergouniotis' 'Nishant Ravikumara'\n 'Alejandro F Frangi']" ]
null
null
2403.18878
null
null
http://arxiv.org/pdf/2403.18878v1
2024-03-27T10:46:24Z
2024-03-27T10:46:24Z
AIC-UNet: Anatomy-informed Cascaded UNet for Robust Multi-Organ Segmentation
Imposing key anatomical features, such as the number of organs, their shapes, sizes, and relative positions, is crucial for building a robust multi-organ segmentation model. Current attempts to incorporate anatomical features include broadening the effective receptive field (ERF) size with resource- and data-intensive modules such as self-attention, or introducing organ-specific topology regularizers, which may not scale to multi-organ segmentation problems where inter-organ relations also play a huge role. We introduce a new approach to impose anatomical constraints on any existing encoder-decoder segmentation model by conditioning model prediction with a learnable anatomy prior. More specifically, given an abdominal scan, a part of the encoder spatially warps a learnable prior to align with the given input scan using thin plate spline (TPS) grid interpolation. The warped prior is then integrated during the decoding phase to guide the model towards more anatomy-informed predictions. Code is available at https://anonymous.4open.science/r/AIC-UNet-7048.
[ "['Young Seok Jeon' 'Hongfei Yang' 'Huazhu Fu' 'Mengling Feng']" ]
null
null
2403.18886
null
null
http://arxiv.org/pdf/2403.18886v2
2024-06-09T12:24:03Z
2024-03-27T17:59:21Z
Self-Expansion of Pre-trained Models with Mixture of Adapters for Continual Learning
Continual learning (CL) aims to continually accumulate knowledge from a non-stationary data stream without catastrophic forgetting of learned knowledge, requiring a balance between stability and adaptability. Relying on the generalizable representation in pre-trained models (PTMs), PTM-based CL methods perform effective continual adaptation on downstream tasks by adding learnable adapters or prompts upon the frozen PTMs. However, many existing PTM-based CL methods use restricted adaptation on a fixed set of these modules to avoid forgetting, suffering from limited CL ability. Periodically adding task-specific modules results in a linear model growth rate and impaired knowledge reuse. We propose Self-Expansion of pre-trained models with Modularized Adaptation (SEMA), a novel approach to enhance the control of the stability-plasticity balance in PTM-based CL. SEMA automatically decides to reuse or add adapter modules on demand in CL, depending on whether a significant distribution shift that cannot be handled is detected at different representation levels. We design a modular adapter consisting of a functional adapter and a representation descriptor. The representation descriptors are trained as a distribution shift indicator and used to trigger self-expansion signals. To better compose the adapters, an expandable weighting router is learned jointly for the mixture of adapter outputs. SEMA enables better knowledge reuse and a sub-linear expansion rate. Extensive experiments demonstrate the effectiveness of the proposed self-expansion method, achieving state-of-the-art performance compared to PTM-based CL methods without memory rehearsal.
[ "['Huiyi Wang' 'Haodong Lu' 'Lina Yao' 'Dong Gong']" ]
null
null
2403.18910
null
null
http://arxiv.org/pdf/2403.18910v2
2024-06-11T18:00:00Z
2024-03-27T18:02:49Z
A Geometric Explanation of the Likelihood OOD Detection Paradox
Likelihood-based deep generative models (DGMs) commonly exhibit a puzzling behaviour: when trained on a relatively complex dataset, they assign higher likelihood values to out-of-distribution (OOD) data from simpler sources. Adding to the mystery, OOD samples are never generated by these DGMs despite having higher likelihoods. This two-pronged paradox has yet to be conclusively explained, making likelihood-based OOD detection unreliable. Our primary observation is that high-likelihood regions will not be generated if they contain minimal probability mass. We demonstrate how this seeming contradiction of large densities yet low probability mass can occur around data confined to low-dimensional manifolds. We also show that this scenario can be identified through local intrinsic dimension (LID) estimation, and propose a method for OOD detection which pairs the likelihoods and LID estimates obtained from a pre-trained DGM. Our method can be applied to normalizing flows and score-based diffusion models, and obtains results which match or surpass state-of-the-art OOD detection benchmarks using the same DGM backbones. Our code is available at https://github.com/layer6ai-labs/dgm_ood_detection.
[ "['Hamidreza Kamkari' 'Brendan Leigh Ross' 'Jesse C. Cresswell'\n 'Anthony L. Caterini' 'Rahul G. Krishnan' 'Gabriel Loaiza-Ganem']" ]
null
null
2403.18915
null
null
http://arxiv.org/pdf/2403.18915v1
2024-03-27T18:08:14Z
2024-03-27T18:08:14Z
PLOT-TAL -- Prompt Learning with Optimal Transport for Few-Shot Temporal Action Localization
This paper introduces a novel approach to temporal action localization (TAL) in few-shot learning. Our work addresses the inherent limitations of conventional single-prompt learning methods that often lead to overfitting due to the inability to generalize across varying contexts in real-world videos. Recognizing the diversity of camera views, backgrounds, and objects in videos, we propose a multi-prompt learning framework enhanced with optimal transport. This design allows the model to learn a set of diverse prompts for each action, capturing general characteristics more effectively and distributing the representation to mitigate the risk of overfitting. Furthermore, by employing optimal transport theory, we efficiently align these prompts with action features, optimizing for a comprehensive representation that adapts to the multifaceted nature of video data. Our experiments demonstrate significant improvements in action localization accuracy and robustness in few-shot settings on the standard challenging datasets of THUMOS-14 and EpicKitchens100, highlighting the efficacy of our multi-prompt optimal transport approach in overcoming the challenges of conventional few-shot TAL methods.
[ "['Edward Fish' 'Jon Weinbren' 'Andrew Gilbert']" ]
null
null
2403.18921
null
null
http://arxiv.org/pdf/2403.18921v1
2024-03-27T18:12:24Z
2024-03-27T18:12:24Z
SMOF: Streaming Modern CNNs on FPGAs with Smart Off-Chip Eviction
Convolutional Neural Networks (CNNs) have demonstrated their effectiveness in numerous vision tasks. However, their high processing requirements necessitate efficient hardware acceleration to meet the application's performance targets. In the space of FPGAs, streaming-based dataflow architectures are often adopted by users, as significant performance gains can be achieved through layer-wise pipelining and reduced off-chip memory access by retaining data on-chip. However, modern topologies, such as the UNet, YOLO, and X3D models, utilise long skip connections, requiring significant on-chip storage and thus limiting the performance achieved by such system architectures. The paper addresses the above limitation by introducing weight and activation eviction mechanisms to off-chip memory along the computational pipeline, taking into account the available compute and memory resources. The proposed mechanism is incorporated into an existing toolflow, expanding the design space by utilising off-chip memory as a buffer. This enables the mapping of such modern CNNs to devices with limited on-chip memory, under the streaming architecture design approach. SMOF has demonstrated the capacity to deliver competitive and, in some cases, state-of-the-art performance across a spectrum of computer vision tasks, achieving up to a 10.65x throughput improvement compared to previous works.
[ "['Petros Toupas' 'Zhewen Yu' 'Christos-Savvas Bouganis'\n 'Dimitrios Tzovaras']" ]
null
null
2403.18923
null
null
http://arxiv.org/pdf/2403.18923v1
2024-02-15T20:27:33Z
2024-02-15T20:27:33Z
Nature-Guided Cognitive Evolution for Predicting Dissolved Oxygen Concentrations in North Temperate Lakes
Predicting dissolved oxygen (DO) concentrations in north temperate lakes requires a comprehensive study of phenological patterns across various ecosystems, which highlights the significance of selecting phenological features and feature interactions. Process-based models are limited by partial process knowledge or oversimplified feature representations, while machine learning models face challenges in efficiently selecting relevant feature interactions for different lake types and tasks, especially under the infrequent nature of DO data collection. In this paper, we propose a Nature-Guided Cognitive Evolution (NGCE) strategy, which represents a multi-level fusion of adaptive learning with natural processes. Specifically, we utilize metabolic process-based models to generate simulated DO labels. Using these simulated labels, we implement a multi-population cognitive evolutionary search, where models, mirroring natural organisms, adaptively evolve to select relevant feature interactions within populations for different lake types and tasks. These models are not only capable of undergoing crossover and mutation mechanisms within intra-populations but also, albeit infrequently, engage in inter-population crossover. The second stage involves refining these models by retraining them with real observed labels. We have tested the performance of our NGCE strategy in predicting daily DO concentrations across a wide range of lakes in the Midwest, USA. These lakes, varying in size, depth, and trophic status, represent a broad spectrum of north temperate lakes. Our findings demonstrate that NGCE not only produces accurate predictions with few observed labels but also, through gene maps of models, reveals sophisticated phenological patterns of different lakes.
[ "['Runlong Yu' 'Robert Ladwig' 'Xiang Xu' 'Peijun Zhu' 'Paul C. Hanson'\n 'Yiqun Xie' 'Xiaowei Jia']" ]
null
null
2403.18926
null
null
http://arxiv.org/pdf/2403.18926v2
2024-05-24T10:14:55Z
2024-02-27T08:18:02Z
XMoE: Sparse Models with Fine-grained and Adaptive Expert Selection
Sparse models, including sparse Mixture-of-Experts (MoE) models, have emerged as an effective approach for scaling Transformer models. However, they often suffer from computational inefficiency since a significant number of parameters are unnecessarily involved in computations via multiplying values by zero or low activation values. To address this issue, we present XMoE, a novel MoE designed to enhance both the efficacy and efficiency of sparse MoE models. XMoE leverages small experts and a threshold-based router to enable tokens to selectively engage only essential parameters. Our extensive experiments on language modeling and machine translation tasks demonstrate that XMoE can enhance model performance while decreasing the computation load at MoE layers by over 50% without sacrificing performance. Furthermore, we present the versatility of XMoE by applying it to dense models, enabling sparse computation during inference. We provide a comprehensive analysis and make our code available at https://github.com/ysngki/XMoE.
[ "['Yuanhang Yang' 'Shiyi Qi' 'Wenchao Gu' 'Chaozheng Wang' 'Cuiyun Gao'\n 'Zenglin Xu']" ]
null
null
2403.18929
null
null
http://arxiv.org/pdf/2403.18929v1
2024-02-16T18:05:09Z
2024-02-16T18:05:09Z
A Review of Neuroscience-Inspired Machine Learning
One major criticism of deep learning centers around the biological implausibility of the credit assignment schema used for learning -- backpropagation of errors. This implausibility translates into practical limitations, spanning scientific fields, including incompatibility with hardware and non-differentiable implementations, thus leading to expensive energy requirements. In contrast, biologically plausible credit assignment is compatible with practically any learning condition and is energy-efficient. As a result, it accommodates hardware and scientific modeling, e.g. learning with physical systems and non-differentiable behavior. Furthermore, it can lead to the development of real-time, adaptive neuromorphic processing systems. In addressing this problem, an interdisciplinary branch of artificial intelligence research that lies at the intersection of neuroscience, cognitive science, and machine learning has emerged. In this paper, we survey several vital algorithms that model bio-plausible rules of credit assignment in artificial neural networks, discussing the solutions they provide for different scientific fields as well as their advantages on CPUs, GPUs, and novel implementations of neuromorphic hardware. We conclude by discussing the future challenges that will need to be addressed in order to make such algorithms more useful in practical applications.
[ "['Alexander Ororbia' 'Ankur Mali' 'Adam Kohan' 'Beren Millidge'\n 'Tommaso Salvatori']" ]
null
null
2403.18930
null
null
http://arxiv.org/pdf/2403.18930v1
2024-02-17T16:36:01Z
2024-02-17T16:36:01Z
Optimizing Wireless Networks with Deep Unfolding: Comparative Study on Two Deep Unfolding Mechanisms
In this work, we conduct a comparative study of two deep unfolding mechanisms to efficiently perform power control in next generation wireless networks. The power control problem is formulated as energy efficiency over multiple interference links, and the problem is nonconvex. We employ a fractional programming transformation to design two solutions for the problem: the first is a numerical solution, while the second is a closed-form solution. Based on the first solution, we design a semi-unfolded deep learning model in which we combine the domain knowledge of wireless communications with recent advances in data-driven deep learning. Moreover, building on the closed-form solution, a fully unfolded deep learning model is designed in which we fully leverage the expressive closed-form power control solution together with deep learning advances. In the simulation results, we compare the performance of the proposed deep learning models and the iterative solutions in terms of accuracy and inference speed to show their suitability for real-time application in next generation networks.
[ "['Abuzar B. M. Adam' 'Mohammed A. M. Elhassan' 'Elhadj Moustapha Diallo']" ]
null
null
2403.18946
null
null
http://arxiv.org/pdf/2403.18946v1
2024-02-20T23:59:45Z
2024-02-20T23:59:45Z
Random Aggregate Beamforming for Over-the-Air Federated Learning in Large-Scale Networks
At present, there is a trend to deploy ubiquitous artificial intelligence (AI) applications at the edge of the network. As a promising framework that enables secure edge intelligence, federated learning (FL) has received widespread attention, and over-the-air computing (AirComp) has been integrated to further improve the communication efficiency. In this paper, we consider a joint device selection and aggregate beamforming design with the objectives of minimizing the aggregate error and maximizing the number of selected devices. This yields a combinatorial problem, which is difficult to solve especially in large-scale networks. To tackle the problem in a cost-effective manner, we propose a random aggregate beamforming-based scheme, which generates the aggregator beamforming vector via random sampling rather than optimization. The implementation of the proposed scheme does not require channel estimation. We additionally use asymptotic analysis to study the obtained aggregate error and the number of selected devices when the number of devices becomes large. Furthermore, a refined method that runs with multiple randomizations is also proposed for performance improvement. Extensive simulation results are presented to demonstrate the effectiveness of the proposed random aggregate beamforming-based scheme as well as the refined method.
[ "['Chunmei Xu' 'Shengheng Liu' 'Yongming Huang' 'Bjorn Ottersten'\n 'Dusit Niyato']" ]
null
null
2403.18947
null
null
http://arxiv.org/pdf/2403.18947v2
2024-06-05T13:07:17Z
2024-02-21T15:17:20Z
Self-Supervised Interpretable End-to-End Learning via Latent Functional Modularity
We introduce MoNet, a novel functionally modular network for self-supervised and interpretable end-to-end learning. By leveraging its functional modularity with a latent-guided contrastive loss function, MoNet efficiently learns task-specific decision-making processes in latent space without requiring task-level supervision. Moreover, our method incorporates an online, post-hoc explainability approach that enhances the interpretability of end-to-end inferences without compromising sensorimotor control performance. In real-world indoor environments, MoNet demonstrates effective visual autonomous navigation, outperforming baseline models by 7% to 28% in task specificity analysis. We further explore the interpretability of our network through post-hoc analysis of perceptual saliency maps and latent decision vectors. This provides valuable insights into the incorporation of explainable artificial intelligence into robotic learning, encompassing both perceptual and behavioral perspectives. Supplementary materials are available at https://sites.google.com/view/monet-lgc.
[ "['Hyunki Seong' 'David Hyunchul Shim']" ]
null
null
2403.18953
null
null
http://arxiv.org/abs/2403.18953v2
2024-06-05T21:49:03Z
2024-03-04T17:35:17Z
Hybridizing Traditional and Next-Generation Reservoir Computing to Accurately and Efficiently Forecast Dynamical Systems
Reservoir computers (RCs) are powerful machine learning architectures for time series prediction. Recently, next generation reservoir computers (NGRCs) have been introduced, offering distinct advantages over RCs, such as reduced computational expense and lower training data requirements. However, NGRCs have their own practical difficulties, including sensitivity to sampling time and the type of nonlinearities in the data. Here, we introduce a hybrid RC-NGRC approach for time series forecasting of dynamical systems. We show that our hybrid approach can produce accurate short-term predictions and capture the long-term statistics of chaotic dynamical systems in situations where the RC and NGRC components alone are insufficient, e.g., due to constraints from limited computational resources, sub-optimal hyperparameters, or sparsely sampled training data. Under these conditions, we show for multiple model chaotic systems that the hybrid RC-NGRC method with a small reservoir can achieve prediction performance approaching that of a traditional RC with a much larger reservoir, illustrating that the hybrid approach can offer significant gains in computational efficiency over traditional RCs while simultaneously addressing some of the limitations of NGRCs. Our results suggest that the hybrid RC-NGRC approach may be particularly beneficial in cases where computational efficiency is a high priority and an NGRC alone is not adequate.
[ "['Ravi Chepuri' 'Dael Amzalag' 'Thomas Antonsen Jr.' 'Michelle Girvan']" ]
null
null
2403.18955
null
null
http://arxiv.org/pdf/2403.18955v1
2024-03-03T13:49:49Z
2024-03-03T13:49:49Z
Structurally Prune Anything: Any Architecture, Any Framework, Any Time
Neural network pruning serves as a critical technique for enhancing the efficiency of deep learning models. Unlike unstructured pruning, which only sets specific parameters to zero, structured pruning eliminates entire channels, thus yielding direct computational and storage benefits. However, the diverse patterns for coupling parameters, such as residual connections and group convolutions, the diverse deep learning frameworks, and the various time stages at which pruning can be performed make existing pruning methods less adaptable to different architectures, frameworks, and pruning criteria. To address this, we introduce Structurally Prune Anything (SPA), a versatile structured pruning framework that can prune neural networks with any architecture, from any framework, and at any stage of training. SPA leverages a standardized computational graph and ONNX representation to prune diverse neural network architectures without the need for manual intervention. SPA employs a group-level importance estimation method, which groups dependent computational operators, estimates their importance, and prunes unimportant coupled channels. This enables the transfer of various existing pruning criteria into a structured group style. As a result, SPA supports pruning at any time, either before training, after training with fine-tuning, or after training without fine-tuning. In the context of the latter, we introduce Optimal Brain SPA (OBSPA), an algorithm that achieves state-of-the-art pruning results needing neither fine-tuning nor calibration data. In extensive experiments, SPA shows pruning performance competitive with the state of the art across various architectures, from popular frameworks, at different pruning times.
[ "['Xun Wang' 'John Rachwan' 'Stephan Günnemann' 'Bertrand Charpentier']" ]
null
null
2403.18957
null
null
http://arxiv.org/pdf/2403.18957v1
2024-03-27T19:02:13Z
2024-03-27T19:02:13Z
Moderating Illicit Online Image Promotion for Unsafe User-Generated Content Games Using Large Vision-Language Models
Online user-generated content games (UGCGs) are increasingly popular among children and adolescents for social interaction and more creative online entertainment. However, they pose a heightened risk of exposure to explicit content, raising growing concerns for the online safety of children and adolescents. Despite these concerns, few studies have addressed the issue of illicit image-based promotions of unsafe UGCGs on social media, which can inadvertently attract young users. This challenge arises from the difficulty of obtaining comprehensive training data for UGCG images and the unique nature of these images, which differ from traditional unsafe content. In this work, we take the first step towards studying the threat of illicit promotions of unsafe UGCGs. We collect a real-world dataset comprising 2,924 images that display diverse sexually explicit and violent content used to promote UGCGs by their game creators. Our in-depth studies reveal a new understanding of this problem and the urgent need for automatically flagging illicit UGCG promotions. We additionally create a cutting-edge system, UGCG-Guard, designed to aid social media platforms in effectively identifying images used for illicit UGCG promotions. This system leverages recently introduced large vision-language models (VLMs) and employs a novel conditional prompting strategy for zero-shot domain adaptation, along with chain-of-thought (CoT) reasoning for contextual identification. UGCG-Guard achieves outstanding results, with an accuracy rate of 94% in detecting these images used for the illicit promotion of such games in real-world scenarios.
[ "['Keyan Guo' 'Ayush Utkarsh' 'Wenbo Ding' 'Isabelle Ondracek'\n 'Ziming Zhao' 'Guo Freeman' 'Nishant Vishwamitra' 'Hongxin Hu']" ]
null
null
2403.18965
null
null
http://arxiv.org/pdf/2403.18965v1
2024-03-27T19:30:06Z
2024-03-27T19:30:06Z
LORD: Large Models based Opposite Reward Design for Autonomous Driving
Reinforcement learning (RL) based autonomous driving has emerged as a promising alternative to data-driven imitation learning approaches. However, crafting effective reward functions for RL poses challenges due to the complexity of defining and quantifying good driving behaviors across diverse scenarios. Recently, large pretrained models have gained significant attention as zero-shot reward models for tasks specified with desired linguistic goals. However, the desired linguistic goals for autonomous driving, such as "drive safely", are ambiguous and incomprehensible to pretrained models. On the other hand, undesired linguistic goals like "collision" are more concrete and tractable. In this work, we introduce LORD, a novel large-model-based opposite reward design that works through undesired linguistic goals to enable the efficient use of large pretrained models as zero-shot reward models. Through extensive experiments, our proposed framework shows its efficiency in leveraging the power of large pretrained models for achieving safe and enhanced autonomous driving. Moreover, the proposed approach shows improved generalization capabilities as it outperforms counterpart methods across diverse and challenging driving scenarios.
[ "['Xin Ye' 'Feng Tao' 'Abhirup Mallik' 'Burhaneddin Yaman' 'Liu Ren']" ]
null
null
2403.18969
null
null
http://arxiv.org/pdf/2403.18969v2
2024-05-28T02:34:26Z
2024-03-27T19:35:41Z
A Survey on Large Language Models from Concept to Implementation
Recent advancements in Large Language Models (LLMs), particularly those built on Transformer architectures, have significantly broadened the scope of natural language processing (NLP) applications, transcending their initial use in chatbot technology. This paper investigates the multifaceted applications of these models, with an emphasis on the GPT series. This exploration focuses on the transformative impact of artificial intelligence (AI) driven tools in revolutionizing traditional tasks like coding and problem-solving, while also paving new paths in research and development across diverse industries. From code interpretation and image captioning to facilitating the construction of interactive systems and advancing computational domains, Transformer models exemplify a synergy of deep learning, data analysis, and neural network design. This survey provides an in-depth look at the latest research in Transformer models, highlighting their versatility and the potential they hold for transforming diverse application sectors, thereby offering readers a comprehensive understanding of the current and future landscape of Transformer-based LLMs in practical applications.
[ "['Chen Wang' 'Jin Zhao' 'Jiaqi Gong']" ]
null
null
2403.18978
null
null
http://arxiv.org/pdf/2403.18978v1
2024-03-27T19:52:55Z
2024-03-27T19:52:55Z
TextCraftor: Your Text Encoder Can be Image Quality Controller
Diffusion-based text-to-image generative models, e.g., Stable Diffusion, have revolutionized the field of content generation, enabling significant advancements in areas like image editing and video synthesis. Despite their formidable capabilities, these models are not without their limitations. It is still challenging to synthesize an image that aligns well with the input text, and multiple runs with carefully crafted prompts are required to achieve satisfactory results. To mitigate these limitations, numerous studies have endeavored to fine-tune the pre-trained diffusion models, i.e., UNet, utilizing various technologies. Yet, amidst these efforts, a pivotal question of text-to-image diffusion model training has remained largely unexplored: Is it possible and feasible to fine-tune the text encoder to improve the performance of text-to-image diffusion models? Our findings reveal that, instead of replacing the CLIP text encoder used in Stable Diffusion with other large language models, we can enhance it through our proposed fine-tuning approach, TextCraftor, leading to substantial improvements in quantitative benchmarks and human assessments. Interestingly, our technique also empowers controllable image generation through the interpolation of different text encoders fine-tuned with various rewards. We also demonstrate that TextCraftor is orthogonal to UNet finetuning, and can be combined to further improve generative quality.
[ "['Yanyu Li' 'Xian Liu' 'Anil Kag' 'Ju Hu' 'Yerlan Idelbayev'\n 'Dhritiman Sagar' 'Yanzhi Wang' 'Sergey Tulyakov' 'Jian Ren']" ]
null
null
2403.18985
null
null
http://arxiv.org/abs/2403.18985v2
2024-04-22T14:49:36Z
2024-03-27T20:07:39Z
Robustness and Visual Explanation for Black Box Image, Video, and ECG Signal Classification with Reinforcement Learning
We present a generic Reinforcement Learning (RL) framework optimized for crafting adversarial attacks on different model types, spanning ECG signal analysis (1D), image classification (2D), and video classification (3D). The framework focuses on identifying sensitive regions and inducing misclassifications with minimal distortions and various distortion types. The novel RL method outperforms state-of-the-art methods for all three applications, demonstrating its efficiency. Our RL approach produces superior localization masks, enhancing interpretability for image classification and ECG analysis models. For applications such as ECG analysis, our platform highlights critical ECG segments for clinicians while ensuring resilience against prevalent distortions. This comprehensive tool aims to bolster both resilience with adversarial training and transparency across varied applications and data types.
[ "['Soumyendu Sarkar' 'Ashwin Ramesh Babu' 'Sajad Mousavi' 'Vineet Gundecha'\n 'Avisek Naug' 'Sahand Ghorbanpour']" ]
null
null
2403.18994
null
null
http://arxiv.org/pdf/2403.18994v1
2024-03-27T20:27:31Z
2024-03-27T20:27:31Z
Causal-StoNet: Causal Inference for High-Dimensional Complex Data
With the advancement of data science, the collection of increasingly complex datasets has become commonplace. In such datasets, the data dimension can be extremely high, and the underlying data generation process can be unknown and highly nonlinear. As a result, the task of making causal inference with high-dimensional complex data has become a fundamental problem in many disciplines, such as medicine, econometrics, and social science. However, the existing methods for causal inference are frequently developed under the assumption that the data dimension is low or that the underlying data generation process is linear or approximately linear. To address these challenges, this paper proposes a novel causal inference approach for dealing with high-dimensional complex data. The proposed approach is based on deep learning techniques, including sparse deep learning theory and stochastic neural networks, that have been developed in recent literature. By using these techniques, the proposed approach can address both the high dimensionality and unknown data generation process in a coherent way. Furthermore, the proposed approach can also be used when missing values are present in the datasets. Extensive numerical studies indicate that the proposed approach outperforms existing ones.
[ "['Yaxin Fang' 'Faming Liang']" ]
null
null
2403.18998
null
null
http://arxiv.org/pdf/2403.18998v3
2024-04-12T10:09:16Z
2024-03-27T20:38:04Z
Few-Shot Cross-System Anomaly Trace Classification for Microservice-based systems
Microservice-based systems (MSS) may experience failures in various fault categories due to their complex and dynamic nature. To effectively handle failures, AIOps tools utilize trace-based anomaly detection and root cause analysis. In this paper, we propose a novel framework for few-shot abnormal trace classification for MSS. Our framework comprises two main components: (1) Multi-Head Attention Autoencoder for constructing system-specific trace representations, which enables (2) Transformer Encoder-based Model-Agnostic Meta-Learning to perform effective and efficient few-shot learning for abnormal trace classification. The proposed framework is evaluated on two representative MSS, Trainticket and OnlineBoutique, with open datasets. The results show that our framework can adapt the learned knowledge to classify new, unseen abnormal traces of novel fault categories both within the same system it was initially trained on and even in a different MSS. Within the same MSS, our framework achieves an average accuracy of 93.26% and 85.2% across 50 meta-testing tasks for Trainticket and OnlineBoutique, respectively, when provided with 10 instances for each task. In a cross-system context, our framework achieves an average accuracy of 92.19% and 84.77% for the same meta-testing tasks of the respective system, also with 10 instances provided for each task. Our work demonstrates the applicability of achieving few-shot abnormal trace classification for MSS and shows how it can enable cross-system adaptability. This opens an avenue for building more generalized AIOps tools that require less system-specific data labeling for anomaly detection and root cause analysis.
[ "['Yuqing Wang' 'Mika V. Mäntylä' 'Serge Demeyer' 'Mutlu Beyazit'\n 'Joanna Kisaakye' 'Jesse Nyyssölä']" ]
null
null
2403.19009
null
null
http://arxiv.org/pdf/2403.19009v1
2024-03-27T21:02:15Z
2024-03-27T21:02:15Z
Towards Sustainable SecureML: Quantifying Carbon Footprint of Adversarial Machine Learning
The widespread adoption of machine learning (ML) across various industries has raised sustainability concerns due to its substantial energy usage and carbon emissions. This issue becomes more pressing in adversarial ML, which focuses on enhancing model security against different network-based attacks. Implementing defenses in ML systems often necessitates additional computational resources and network security measures, exacerbating their environmental impacts. In this paper, we pioneer the first investigation into adversarial ML's carbon footprint, providing empirical evidence connecting greater model robustness to higher emissions. Addressing the critical need to quantify this trade-off, we introduce the Robustness Carbon Trade-off Index (RCTI). This novel metric, inspired by economic elasticity principles, captures the sensitivity of carbon emissions to changes in adversarial robustness. We demonstrate the RCTI through an experiment involving evasion attacks, analyzing the interplay between robustness against attacks, performance, and carbon emissions.
[ "['Syed Mhamudul Hasan' 'Abdur R. Shahid' 'Ahmed Imteaj']" ]
null
null
2403.19011
null
null
http://arxiv.org/pdf/2403.19011v2
2024-04-24T15:06:26Z
2024-03-27T21:06:26Z
Sequential Inference of Hospitalization Electronic Health Records Using Probabilistic Models
In the dynamic hospital setting, decision support can be a valuable tool for improving patient outcomes. Data-driven inference of future outcomes is challenging in this dynamic setting, where long sequences such as laboratory tests and medications are updated frequently. This is due in part to heterogeneity of data types and mixed-sequence types contained in variable length sequences. In this work we design a probabilistic unsupervised model for multiple arbitrary-length sequences contained in hospitalization Electronic Health Record (EHR) data. The model uses a latent variable structure and captures complex relationships between medications, diagnoses, laboratory tests, and neurological assessments. It can be trained on original data, without requiring any lossy transformations or time binning. Inference algorithms are derived that use partial data to infer properties of the complete sequences, including their length and presence of specific values. We train this model on data from subjects receiving medical care in the Kaiser Permanente Northern California integrated healthcare delivery system. The results are evaluated against held-out data for predicting the length of sequences and the presence of the Intensive Care Unit (ICU) in hospitalization bed sequences. Our method outperforms a baseline approach, showing that in these experiments the trained model captures information in the sequences that is informative of their future values.
[ "['Alan D. Kaplan' 'Priyadip Ray' 'John D. Greene' 'Vincent X. Liu']" ]
null
null
2403.19014
null
null
http://arxiv.org/abs/2403.19014v1
2024-03-27T21:14:17Z
2024-03-27T21:14:17Z
Thelxinoë: Recognizing Human Emotions Using Pupillometry and Machine Learning
In this study, we present a method for emotion recognition in Virtual Reality (VR) using pupillometry. We analyze pupil diameter responses to both visual and auditory stimuli via a VR headset and focus on extracting key features in the time-domain, frequency-domain, and time-frequency domain from VR generated data. Our approach utilizes feature selection to identify the most impactful features using Maximum Relevance Minimum Redundancy (mRMR). By applying a Gradient Boosting model, an ensemble learning technique using stacked decision trees, we achieve an accuracy of 98.8% with feature engineering, compared to 84.9% without it. This research contributes significantly to the Thelxinoë framework, aiming to enhance VR experiences by integrating multiple sensor data for realistic and emotionally resonant touch interactions. Our findings open new avenues for developing more immersive and interactive VR environments, paving the way for future advancements in virtual touch technology.
[ "['Darlene Barker' 'Haim Levkowitz']" ]
null
null
2403.19021
null
null
http://arxiv.org/pdf/2403.19021v2
2024-05-17T04:05:18Z
2024-03-27T21:22:37Z
IDGenRec: LLM-RecSys Alignment with Textual ID Learning
Generative recommendation based on Large Language Models (LLMs) has transformed the traditional ranking-based recommendation style into a text-to-text generation paradigm. However, in contrast to standard NLP tasks that inherently operate on human vocabulary, current research in generative recommendations struggles to effectively encode recommendation items within the text-to-text framework using concise yet meaningful ID representations. To better align LLMs with recommendation needs, we propose IDGen, representing each item as a unique, concise, semantically rich, platform-agnostic textual ID using human language tokens. This is achieved by training a textual ID generator alongside the LLM-based recommender, enabling seamless integration of personalized recommendations into natural language generation. Notably, as user history is expressed in natural language and decoupled from the original dataset, our approach suggests the potential for a foundational generative recommendation model. Experiments show that our framework consistently surpasses existing models in sequential recommendation under the standard experimental setting. We then explore the possibility of training a foundation recommendation model with the proposed method on data collected from 19 different datasets and test its recommendation performance on 6 unseen datasets across different platforms under a completely zero-shot setting. The results show that the zero-shot performance of the pre-trained foundation model is comparable to or even better than some traditional recommendation models based on supervised training, showing the potential of the IDGen paradigm serving as the foundation model for generative recommendation. Code and data are open-sourced at https://github.com/agiresearch/IDGenRec.
[ "['Juntao Tan' 'Shuyuan Xu' 'Wenyue Hua' 'Yingqiang Ge' 'Zelong Li'\n 'Yongfeng Zhang']" ]
null
null
2403.19024
null
null
http://arxiv.org/pdf/2403.19024v2
2024-05-08T05:41:07Z
2024-03-27T21:31:46Z
Exploiting Symmetry in Dynamics for Model-Based Reinforcement Learning with Asymmetric Rewards
Recent work in reinforcement learning has leveraged symmetries in the model to improve sample efficiency in training a policy. A commonly used simplifying assumption is that the dynamics and reward both exhibit the same symmetry. However, in many real-world environments, the dynamical model exhibits symmetry independent of the reward model: the reward may not satisfy the same symmetries as the dynamics. In this paper, we investigate scenarios where only the dynamics are assumed to exhibit symmetry, extending the scope of problems in reinforcement learning and learning in control theory where symmetry techniques can be applied. We use Cartan's moving frame method to introduce a technique for learning dynamics which, by construction, exhibit specified symmetries. We demonstrate through numerical experiments that the proposed method learns a more accurate dynamical model.
[ "['Yasin Sonmez' 'Neelay Junnarkar' 'Murat Arcak']" ]
null
null
2403.19031
null
null
http://arxiv.org/pdf/2403.19031v1
2024-03-27T22:05:10Z
2024-03-27T22:05:10Z
Evaluating Large Language Models for Health-Related Text Classification Tasks with Public Social Media Data
Large language models (LLMs) have demonstrated remarkable success in NLP tasks. However, there is a paucity of studies that attempt to evaluate their performances on social media-based health-related natural language processing tasks, which have traditionally been difficult to achieve high scores in. We benchmarked one supervised classic machine learning model based on Support Vector Machines (SVMs), three supervised pretrained language models (PLMs) based on RoBERTa, BERTweet, and SocBERT, and two LLM-based classifiers (GPT-3.5 and GPT-4), across 6 text classification tasks. We developed three approaches for leveraging LLMs for text classification: employing LLMs as zero-shot classifiers, using LLMs as annotators to annotate training data for supervised classifiers, and utilizing LLMs with few-shot examples for augmentation of manually annotated data. Our comprehensive experiments demonstrate that employing data augmentation using LLMs (GPT-4) with relatively small human-annotated data to train lightweight supervised classification models achieves superior results compared to training with human-annotated data alone. Supervised learners also outperform GPT-4 and GPT-3.5 in zero-shot settings. By leveraging this data augmentation strategy, we can harness the power of LLMs to develop smaller, more effective domain-specific NLP models. LLM-annotated data without human guidance for training lightweight supervised classification models is an ineffective strategy. However, the LLM, as a zero-shot classifier, shows promise in excluding false negatives and potentially reducing the human effort required for data annotation. Future investigations are imperative to explore optimal training data sizes and the optimal amounts of augmented data.
[ "['Yuting Guo' 'Anthony Ovadje' 'Mohammed Ali Al-Garadi' 'Abeed Sarker']" ]
null
null
2403.19040
null
null
http://arxiv.org/pdf/2403.19040v1
2024-03-27T22:26:50Z
2024-03-27T22:26:50Z
Visualizing High-Dimensional Temporal Data Using Direction-Aware t-SNE
Many real-world data sets contain a temporal component or involve transitions from state to state. For exploratory data analysis, we can represent these high-dimensional data sets in two-dimensional maps, using embeddings of the data objects under exploration and representing their temporal relationships with directed edges. Most existing dimensionality reduction techniques, such as t-SNE and UMAP, do not take into account the temporal or relational nature of the data when constructing the embeddings, resulting in temporally cluttered visualizations that obscure potentially interesting patterns. To address this problem, we propose two complementary, direction-aware loss terms in the optimization function of t-SNE that emphasize the temporal aspects of the data, guiding the optimization and the resulting embedding to reveal temporal patterns that might otherwise go unnoticed. The Directional Coherence Loss (DCL) encourages nearby arrows connecting two adjacent time series points to point in the same direction, while the Edge Length Loss (ELL) penalizes arrows - which effectively represent time gaps in the visualized embedding - based on their length. Both loss terms are differentiable and can be easily incorporated into existing dimensionality reduction techniques. By promoting local directionality of the directed edges, our procedure produces more temporally meaningful and less cluttered visualizations. We demonstrate the effectiveness of our approach on a toy dataset and two real-world datasets.
[ "['Pavlin G. Poličar' 'Blaž Zupan']" ]
null
null
2403.19050
null
null
http://arxiv.org/pdf/2403.19050v3
2024-06-19T19:53:26Z
2024-03-27T23:10:33Z
Detecting Generative Parroting through Overfitting Masked Autoencoders
The advent of generative AI models has revolutionized digital content creation, yet it introduces challenges in maintaining copyright integrity due to generative parroting, where models mimic their training data too closely. Our research presents a novel approach to tackle this issue by employing an overfitted Masked Autoencoder (MAE) to detect such parroted samples effectively. We establish a detection threshold based on the mean loss across the training dataset, allowing for the precise identification of parroted content in modified datasets. Preliminary evaluations demonstrate promising results, suggesting our method's potential to ensure ethical use and enhance the legal compliance of generative models.
[ "['Saeid Asgari Taghanaki' 'Joseph Lambourne']" ]
null
null
2403.19057
null
null
http://arxiv.org/pdf/2403.19057v1
2024-03-27T23:49:22Z
2024-03-27T23:49:22Z
Equity in Healthcare: Analyzing Disparities in Machine Learning Predictions of Diabetic Patient Readmissions
This study investigates how machine learning (ML) models can predict hospital readmissions for diabetic patients fairly and accurately across different demographics (age, gender, race). We compared models like Deep Learning, Generalized Linear Models, Gradient Boosting Machines (GBM), and Naive Bayes. GBM stood out with an F1-score of 84.3% and accuracy of 82.2%, accurately predicting readmissions across demographics. A fairness analysis was conducted across all the models. GBM minimized disparities in predictions, achieving balanced results across genders and races. It showed low False Discovery Rates (FDR) (6-7%) and False Positive Rates (FPR) (5%) for both genders. Additionally, FDRs remained low for racial groups, such as African Americans (8%) and Asians (7%). Similarly, FPRs were consistent across age groups (4%) for both patients under 40 and those above 40, indicating its precision and ability to reduce bias. These findings emphasize the importance of choosing ML models carefully to ensure both accuracy and fairness for all patients. By showcasing the effectiveness of various models with fairness metrics, this study promotes personalized medicine and the need for fair ML algorithms in healthcare. This can ultimately reduce disparities and improve outcomes for diabetic patients of all backgrounds.
[ "['Zainab Al-Zanbouri' 'Gauri Sharma' 'Shaina Raza']" ]
null
null
2403.19060
null
null
http://arxiv.org/pdf/2403.19060v2
2024-03-29T00:25:03Z
2024-03-27T23:55:02Z
Towards Human-Centered Construction Robotics: An RL-Driven Companion Robot For Contextually Assisting Carpentry Workers
In the dynamic construction industry, traditional robotic integration has primarily focused on automating specific tasks, often overlooking the complexity and variability of human aspects in construction workflows. This paper introduces a human-centered approach with a "work companion rover" designed to assist construction workers within their existing practices, aiming to enhance safety and workflow fluency while respecting construction labor's skilled nature. We conduct an in-depth study on deploying a robotic system in carpentry formwork, showcasing a prototype that emphasizes mobility, safety, and comfortable worker-robot collaboration in dynamic environments through a contextual Reinforcement Learning (RL)-driven modular framework. Our research advances robotic applications in construction, advocating for collaborative models where adaptive robots support rather than replace humans, underscoring the potential for an interactive and collaborative human-robot workforce.
[ "['Yuning Wu' 'Jiaying Wei' 'Jean Oh' 'Daniel Cardoso Llach']" ]
null
null
2403.19076
null
null
http://arxiv.org/abs/2403.19076v2
2024-03-29T21:33:39Z
2024-03-28T00:34:56Z
Tiny Machine Learning: Progress and Futures
Tiny Machine Learning (TinyML) is a new frontier of machine learning. By squeezing deep learning models into billions of IoT devices and microcontrollers (MCUs), we expand the scope of AI applications and enable ubiquitous intelligence. However, TinyML is challenging due to hardware constraints: the tiny memory resource makes it difficult to hold deep learning models designed for cloud and mobile platforms. There is also limited compiler and inference engine support for bare-metal devices. Therefore, we need to co-design the algorithm and system stack to enable TinyML. In this review, we will first discuss the definition, challenges, and applications of TinyML. We then survey the recent progress in TinyML and deep learning on MCUs. Next, we will introduce MCUNet, showing how we can achieve ImageNet-scale AI applications on IoT devices with system-algorithm co-design. We will further extend the solution from inference to training and introduce tiny on-device training techniques. Finally, we present future directions in this area. Today's large model might be tomorrow's tiny model. The scope of TinyML should evolve and adapt over time.
[ "['Ji Lin' 'Ligeng Zhu' 'Wei-Ming Chen' 'Wei-Chen Wang' 'Song Han']" ]
null
null
2403.19082
null
null
http://arxiv.org/pdf/2403.19082v1
2024-03-28T01:14:25Z
2024-03-28T01:14:25Z
Enhancing Conformal Prediction Using E-Test Statistics
Conformal Prediction (CP) serves as a robust framework that quantifies uncertainty in predictions made by Machine Learning (ML) models. Unlike traditional point predictors, CP generates statistically valid prediction regions, also known as prediction intervals, based on the assumption of data exchangeability. Typically, the construction of conformal predictions hinges on p-values. This paper, however, ventures down an alternative path, harnessing the power of e-test statistics to augment the efficacy of conformal predictions by introducing a BB-predictor (bounded-from-below predictor).
[ "['A. A. Balinsky' 'A. D. Balinsky']" ]
null
null
2403.19083
null
null
http://arxiv.org/pdf/2403.19083v1
2024-03-28T01:27:10Z
2024-03-28T01:27:10Z
Improving Cancer Imaging Diagnosis with Bayesian Networks and Deep Learning: A Bayesian Deep Learning Approach
With recent advancements in the development of artificial intelligence applications using theories and algorithms in machine learning, many accurate models can be created to train and predict on given datasets. With the realization of the importance of imaging interpretation in cancer diagnosis, this article aims to investigate the theory behind Deep Learning and Bayesian Network prediction models. Based on the advantages and drawbacks of each model, different approaches will be used to construct a Bayesian Deep Learning Model, combining the strengths while minimizing the weaknesses. Finally, the applications and accuracy of the resulting Bayesian Deep Learning approach in the health industry in classifying images will be analyzed.
[ "['Pei Xi' 'Lin']" ]
null
null
2403.19099
null
null
http://arxiv.org/pdf/2403.19099v1
2024-03-28T02:25:12Z
2024-03-28T02:25:12Z
Optimizing Quantum Convolutional Neural Network Architectures for Arbitrary Data Dimension
Quantum convolutional neural networks (QCNNs) represent a promising approach in quantum machine learning, paving new directions for both quantum and classical data analysis. This approach is particularly attractive due to the absence of the barren plateau problem, a fundamental challenge in training quantum neural networks (QNNs), and its feasibility. However, a limitation arises when applying QCNNs to classical data. The network architecture is most natural when the number of input qubits is a power of two, as this number is reduced by a factor of two in each pooling layer. The number of input qubits determines the dimensions (i.e. the number of features) of the input data that can be processed, restricting the applicability of QCNN algorithms to real-world data. To address this issue, we propose a QCNN architecture capable of handling arbitrary input data dimensions while optimizing the allocation of quantum resources such as ancillary qubits and quantum gates. This optimization is not only important for minimizing computational resources, but also essential in noisy intermediate-scale quantum (NISQ) computing, as the size of the quantum circuits that can be executed reliably is limited. Through numerical simulations, we benchmarked the classification performance of various QCNN architectures when handling arbitrary input data dimensions on the MNIST and Breast Cancer datasets. The results validate that the proposed QCNN architecture achieves excellent classification performance while utilizing a minimal resource overhead, providing an optimal solution when reliable quantum computation is constrained by noise and imperfections.
[ "['Changwon Lee' 'Israel F. Araujo' 'Dongha Kim' 'Junghan Lee'\n 'Siheon Park' 'Ju-Young Ryu' 'Daniel K. Park']" ]
null
null
2403.19103
null
null
http://arxiv.org/pdf/2403.19103v1
2024-03-28T02:35:53Z
2024-03-28T02:35:53Z
Automated Black-box Prompt Engineering for Personalized Text-to-Image Generation
Prompt engineering is effective for controlling the output of text-to-image (T2I) generative models, but it is also laborious due to the need for manually crafted prompts. This challenge has spurred the development of algorithms for automated prompt generation. However, these methods often struggle with transferability across T2I models, require white-box access to the underlying model, and produce non-intuitive prompts. In this work, we introduce PRISM, an algorithm that automatically identifies human-interpretable and transferable prompts that can effectively generate desired concepts given only black-box access to T2I models. Inspired by large language model (LLM) jailbreaking, PRISM leverages the in-context learning ability of LLMs to iteratively refine the candidate prompts distribution for given reference images. Our experiments demonstrate the versatility and effectiveness of PRISM in generating accurate prompts for objects, styles and images across multiple T2I models, including Stable Diffusion, DALL-E, and Midjourney.
[ "['Yutong He' 'Alexander Robey' 'Naoki Murata' 'Yiding Jiang'\n 'Joshua Williams' 'George J. Pappas' 'Hamed Hassani' 'Yuki Mitsufuji'\n 'Ruslan Salakhutdinov' 'J. Zico Kolter']" ]
null
null
2403.19107
null
null
http://arxiv.org/pdf/2403.19107v1
2024-03-28T02:51:33Z
2024-03-28T02:51:33Z
Synthetic Medical Imaging Generation with Generative Adversarial Networks For Plain Radiographs
In medical imaging, access to data is commonly limited due to patient privacy restrictions and the issue that it can be difficult to acquire enough data in the case of rare diseases [1]. The purpose of this investigation was to develop a reusable open-source synthetic image generation pipeline, the GAN Image Synthesis Tool (GIST), that is easy to use as well as easy to deploy. The pipeline helps to improve and standardize AI algorithms in the digital health space by generating high quality synthetic image data that is not linked to specific patients. Its image generation capabilities include the ability to generate imaging of pathologies or injuries with low incidence rates. This improvement of digital health AI algorithms could improve diagnostic accuracy, aid in patient care, decrease medicolegal claims, and ultimately decrease the overall cost of healthcare. The pipeline builds on existing Generative Adversarial Networks (GANs) algorithms, and preprocessing and evaluation steps were included for completeness. For this work, we focused on ensuring the pipeline supports radiography, with a focus on synthetic knee and elbow x-ray images. In designing the pipeline, we evaluated the performance of current GAN architectures, studying the performance on available x-ray data. We show that the pipeline is capable of generating high quality and clinically relevant images based on a lay person's evaluation and the Fréchet Inception Distance (FID) metric.
[ "['John R. McNulty' 'Lee Kho' 'Alexandria L. Case' 'Charlie Fornaca'\n 'Drew Johnston' 'David Slater' 'Joshua M. Abzug' 'Sybil A. Russell']" ]
null
null
2403.19114
null
null
http://arxiv.org/pdf/2403.19114v1
2024-03-28T03:10:39Z
2024-03-28T03:10:39Z
Top Leaderboard Ranking = Top Coding Proficiency, Always? EvoEval: Evolving Coding Benchmarks via LLM
LLMs have become the go-to choice for code generation tasks, with an exponential increase in the training, development, and usage of LLMs specifically for code generation. To evaluate the ability of LLMs on code, both academic and industry practitioners rely on popular handcrafted benchmarks. However, prior benchmarks contain only a very limited set of problems, both in quantity and variety. Further, due to popularity and age, many benchmarks are prone to data leakage where example solutions can be readily found on the web and thus potentially in training data. Such limitations inevitably lead us to inquire: Is the leaderboard performance on existing benchmarks reliable and comprehensive enough to measure the program synthesis ability of LLMs? To address this, we introduce EvoEval -- a program synthesis benchmark suite created by evolving existing benchmarks into different targeted domains for a comprehensive evaluation of LLM coding abilities. Our study on 51 LLMs shows that compared to the high performance obtained on standard benchmarks like HumanEval, there is a significant drop in performance (on average 39.4%) when using EvoEval. Additionally, the decrease in performance can range from 19.6% to 47.7%, leading to drastic ranking changes amongst LLMs and showing potential overfitting of existing benchmarks. Furthermore, we showcase various insights, including the brittleness of instruction-following models when encountering rewording or subtle changes as well as the importance of learning problem composition and decomposition. EvoEval not only provides comprehensive benchmarks, but can be used to further evolve arbitrary problems to keep up with advances and the ever-changing landscape of LLMs for code. We have open-sourced our benchmarks, tools, and complete LLM generations at https://github.com/evo-eval/evoeval
[ "['Chunqiu Steven Xia' 'Yinlin Deng' 'Lingming Zhang']" ]
null
null
2403.19143
null
null
http://arxiv.org/pdf/2403.19143v1
2024-03-28T04:35:27Z
2024-03-28T04:35:27Z
Tiny Graph Neural Networks for Radio Resource Management
The surge in demand for efficient radio resource management has necessitated the development of sophisticated yet compact neural network architectures. In this paper, we introduce a novel approach to Graph Neural Networks (GNNs) tailored for radio resource management by presenting a new architecture: the Low Rank Message Passing Graph Neural Network (LR-MPGNN). The cornerstone of LR-MPGNN is the implementation of a low-rank approximation technique that substitutes the conventional linear layers with their low-rank counterparts. This innovative design significantly reduces the model size and the number of parameters. We evaluate the performance of the proposed LR-MPGNN model based on several key metrics: model size, number of parameters, weighted sum rate of the communication system, and the distribution of eigenvalues of weight matrices. Our extensive evaluations demonstrate that the LR-MPGNN model achieves a sixtyfold decrease in model size, and the number of model parameters can be reduced by up to 98%. Performance-wise, the LR-MPGNN demonstrates robustness with a marginal 2% reduction in the best-case scenario in the normalized weighted sum rate compared to the original MPGNN model. Additionally, the distribution of eigenvalues of the weight matrices in the LR-MPGNN model is more uniform and spans a wider range, suggesting a strategic redistribution of weights.
[ "['Ahmad Ghasemi' 'Hossein Pishro-Nik']" ]
null
null
2403.19149
null
null
http://arxiv.org/pdf/2403.19149v1
2024-03-28T05:07:41Z
2024-03-28T05:07:41Z
Topological Cycle Graph Attention Network for Brain Functional Connectivity
In this study, we introduce a novel Topological Cycle Graph Attention Network (CycGAT), designed to delineate a functional backbone within the brain functional graph, i.e., the key pathways essential for signal transmission, from non-essential, redundant connections that form cycles around this core structure. We first introduce a cycle incidence matrix that establishes an independent cycle basis within a graph, mapping its relationship with edges. We propose a cycle graph convolution that leverages a cycle adjacency matrix, derived from the cycle incidence matrix, to specifically filter edge signals in a domain of cycles. Additionally, we strengthen the representation power of the cycle graph convolution by adding an attention mechanism, which is further augmented by the introduction of edge positional encodings in cycles, to enhance the topological awareness of CycGAT. We demonstrate CycGAT's localization through simulation and its efficacy on an ABCD study's fMRI data (n=8765), comparing it with baseline models. CycGAT outperforms these models, identifying a functional backbone with significantly fewer cycles, crucial for understanding neural circuits related to general intelligence. Our code will be released once accepted.
[ "['Jinghan Huang' 'Nanguang Chen' 'Anqi Qiu']" ]
null
null
2403.19150
null
null
http://arxiv.org/pdf/2403.19150v1
2024-03-28T05:08:25Z
2024-03-28T05:08:25Z
Towards Understanding Dual BN In Hybrid Adversarial Training
There is a growing concern about applying batch normalization (BN) in adversarial training (AT), especially when the model is trained on both adversarial samples and clean samples (termed Hybrid-AT). With the assumption that adversarial and clean samples are from two different domains, a common practice in prior works is to adopt Dual BN, where BN_adv and BN_clean are used for the adversarial and clean branches, respectively. A popular belief for motivating Dual BN is that estimating normalization statistics of this mixture distribution is challenging and thus disentangling it for normalization achieves stronger robustness. In contrast to this belief, we reveal that disentangling statistics plays a lesser role than disentangling affine parameters in model training. This finding aligns with prior work (Rebuffi et al., 2023), and we build upon their research for further investigations. We demonstrate that the domain gap between adversarial and clean samples is not very large, which is counter-intuitive considering the significant influence of adversarial perturbation on the model accuracy. We further propose a two-task hypothesis which serves as the empirical foundation and a unified framework for Hybrid-AT improvement. We also investigate Dual BN at test time and reveal that affine parameters characterize the robustness during inference. Overall, our work sheds new light on understanding the mechanism of Dual BN in Hybrid-AT and its underlying justification.
[ "['Chenshuang Zhang' 'Chaoning Zhang' 'Kang Zhang' 'Axi Niu' 'Junmo Kim'\n 'In So Kweon']" ]
null
null
2403.19159
null
null
http://arxiv.org/pdf/2403.19159v1
2024-03-28T06:03:47Z
2024-03-28T06:03:47Z
Disentangling Length from Quality in Direct Preference Optimization
Reinforcement Learning from Human Feedback (RLHF) has been a crucial component in the recent success of Large Language Models. However, RLHF is known to exploit biases in human preferences, such as verbosity. A well-formatted and eloquent answer is often more highly rated by users, even when it is less helpful and objective. A number of approaches have been developed to control those biases in the classical RLHF literature, but the problem remains relatively under-explored for Direct Alignment Algorithms such as Direct Preference Optimization (DPO). Unlike classical RLHF, DPO does not train a separate reward model or use reinforcement learning directly, so previous approaches developed to control verbosity cannot be directly applied to this setting. Our work makes several contributions. For the first time, we study the length problem in the DPO setting, showing significant exploitation in DPO and linking it to out-of-distribution bootstrapping. We then develop a principled but simple regularization strategy that prevents length exploitation, while still maintaining improvements in model quality. We demonstrate these effects across datasets on summarization and dialogue, where we achieve up to 20% improvement in win rates when controlling for length, despite the GPT-4 judge's well-known verbosity bias.
[ "['Ryan Park' 'Rafael Rafailov' 'Stefano Ermon' 'Chelsea Finn']" ]
null
null
2403.19163
null
null
http://arxiv.org/pdf/2403.19163v1
2024-03-28T06:18:12Z
2024-03-28T06:18:12Z
D'OH: Decoder-Only random Hypernetworks for Implicit Neural Representations
Deep implicit functions have been found to be an effective tool for efficiently encoding all manner of natural signals. Their attractiveness stems from their ability to compactly represent signals with little to no off-line training data. Instead, they leverage the implicit bias of deep networks to decouple hidden redundancies within the signal. In this paper, we explore the hypothesis that additional compression can be achieved by leveraging the redundancies that exist between layers. We propose to use a novel run-time decoder-only hypernetwork - that uses no offline training data - to better model this cross-layer parameter redundancy. Previous applications of hyper-networks with deep implicit functions have applied feed-forward encoder/decoder frameworks that rely on large offline datasets that do not generalize beyond the signals they were trained on. We instead present a strategy for the initialization of run-time deep implicit functions for single-instance signals through a Decoder-Only randomly projected Hypernetwork (D'OH). By directly changing the dimension of a latent code to approximate a target implicit neural architecture, we provide a natural way to vary the memory footprint of neural representations without the costly need for neural architecture search on a space of alternative low-rate structures.
[ "['Cameron Gordon' 'Lachlan Ewen MacDonald' 'Hemanth Saratchandran'\n 'Simon Lucey']" ]
null
null
2403.19165
null
null
http://arxiv.org/pdf/2403.19165v2
2024-04-01T10:20:09Z
2024-03-28T06:24:04Z
Evaluating Fair Feature Selection in Machine Learning for Healthcare
With the universal adoption of machine learning in healthcare, the potential for the automation of societal biases to further exacerbate health disparities poses a significant risk. We explore algorithmic fairness from the perspective of feature selection. Traditional feature selection methods identify features for better decision making by removing resource-intensive, correlated, or non-relevant features but overlook how these factors may differ across subgroups. To counter these issues, we evaluate a fair feature selection method that considers equal importance to all demographic groups. We jointly considered a fairness metric and an error metric within the feature selection process to ensure a balance between minimizing both bias and global classification error. We tested our approach on three publicly available healthcare datasets. On all three datasets, we observed improvements in fairness metrics coupled with a minimal degradation of balanced accuracy. Our approach addresses both distributive and procedural fairness within the fair machine learning context.
[ "['Md Rahat Shahriar Zawad' 'Peter Washington']" ]
null
null
2403.19178
null
null
http://arxiv.org/pdf/2403.19178v1
2024-03-28T07:08:26Z
2024-03-28T07:08:26Z
Enhancing Trust and Privacy in Distributed Networks: A Comprehensive Survey on Blockchain-based Federated Learning
While centralized servers pose a risk of being a single point of failure, decentralized approaches like blockchain offer a compelling solution by implementing a consensus mechanism among multiple entities. Merging distributed computing with cryptographic techniques, decentralized technologies introduce a novel computing paradigm. Blockchain ensures secure, transparent, and tamper-proof data management by validating and recording transactions via consensus across network nodes. Federated Learning (FL), as a distributed machine learning framework, enables participants to collaboratively train models while safeguarding data privacy by avoiding direct raw data exchange. Despite the growing interest in decentralized methods, their application in FL remains underexplored. This paper presents a thorough investigation into Blockchain-based FL (BCFL), spotlighting the synergy between blockchain's security features and FL's privacy-preserving model training capabilities. First, we present the taxonomy of BCFL from three aspects, including decentralized, separate networks, and reputation-based architectures. Then, we summarize the general architecture of BCFL systems, providing a comprehensive perspective on FL architectures informed by blockchain. Afterward, we analyze the application of BCFL in healthcare, IoT, and other privacy-sensitive areas. Finally, we identify future research directions of BCFL.
[ "['Ji Liu' 'Chunlu Chen' 'Yu Li' 'Lin Sun' 'Yulun Song' 'Jingbo Zhou'\n 'Bo Jing' 'Dejing Dou']" ]
null
null
2403.19181
null
null
http://arxiv.org/pdf/2403.19181v2
2024-06-24T13:22:22Z
2024-03-28T07:22:16Z
Make Large Language Model a Better Ranker
Large Language Models (LLMs) demonstrate robust capabilities across various fields, leading to a paradigm shift in LLM-enhanced Recommender System (RS). Research to date focuses on point-wise and pair-wise recommendation paradigms, which are inefficient for LLM-based recommenders due to high computational costs. However, existing list-wise approaches also fall short in ranking tasks due to misalignment between ranking objectives and next-token prediction. Moreover, these LLM-based methods struggle to effectively address the order relation among candidates, particularly given the scale of ratings. To address these challenges, this paper introduces the large language model framework with Aligned Listwise Ranking Objectives (ALRO). ALRO is designed to bridge the gap between the capabilities of LLMs and the nuanced requirements of ranking tasks. Specifically, ALRO employs explicit feedback in a listwise manner by introducing soft lambda loss, a customized adaptation of lambda loss designed for optimizing order relations. This mechanism provides more accurate optimization goals, enhancing the ranking process. Additionally, ALRO incorporates a permutation-sensitive learning mechanism that addresses position bias, a prevalent issue in generative models, without imposing additional computational burdens during inference. Our evaluative studies reveal that ALRO outperforms both existing embedding-based recommendation methods and LLM-based recommendation baselines.
[ "['Wenshuo Chao' 'Zhi Zheng' 'Hengshu Zhu' 'Hao Liu']" ]
null
null
2403.19205
null
null
http://arxiv.org/pdf/2403.19205v1
2024-03-28T08:06:48Z
2024-03-28T08:06:48Z
From Activation to Initialization: Scaling Insights for Optimizing Neural Fields
In the realm of computer vision, Neural Fields have gained prominence as a contemporary tool harnessing neural networks for signal representation. Despite the remarkable progress in adapting these networks to solve a variety of problems, the field still lacks a comprehensive theoretical framework. This article aims to address this gap by delving into the intricate interplay between initialization and activation, providing a foundational basis for the robust optimization of Neural Fields. Our theoretical insights reveal a deep-seated connection among network initialization, architectural choices, and the optimization process, emphasizing the need for a holistic approach when designing cutting-edge Neural Fields.
[ "['Hemanth Saratchandran' 'Sameera Ramasinghe' 'Simon Lucey']" ]
null
null
2403.19211
null
null
http://arxiv.org/pdf/2403.19211v1
2024-03-28T08:19:33Z
2024-03-28T08:19:33Z
Dual-Personalizing Adapter for Federated Foundation Models
Recently, foundation models, particularly large language models (LLMs), have demonstrated an impressive ability to adapt to various tasks by fine-tuning on large amounts of instruction data. Notably, federated foundation models have emerged as a privacy-preservation method for fine-tuning models collaboratively under federated learning (FL) settings by leveraging many distributed datasets with non-IID data. To alleviate communication and computation overhead, parameter-efficient fine-tuning methods are introduced, and some research has adapted personalization methods to federated foundation models for better alignment with user preferences. However, a critical gap in existing research is the neglect of test-time distribution shifts in real-world applications. Therefore, to bridge this gap, we propose a new setting, termed test-time personalization, which not only concentrates on the targeted local task but also extends to other tasks that exhibit test-time distribution shifts. To address challenges in this new setting, we explore a simple yet effective solution to learn a comprehensive foundation model. Specifically, a dual-personalizing adapter architecture (FedDPA) is proposed, comprising a global adapter and a local adapter for addressing test-time distribution shifts and personalization, respectively. Additionally, we introduce an instance-wise dynamic weighting mechanism to optimize the balance between the global and local adapters, enhancing overall performance. The effectiveness of the proposed method has been evaluated on benchmark datasets across different NLP tasks.
[ "['Yiyuan Yang' 'Guodong Long' 'Tao Shen' 'Jing Jiang'\n 'Michael Blumenstein']" ]
null
null
2403.19232
null
null
http://arxiv.org/pdf/2403.19232v1
2024-03-28T08:44:36Z
2024-03-28T08:44:36Z
AZ-NAS: Assembling Zero-Cost Proxies for Network Architecture Search
Training-free network architecture search (NAS) aims to discover high-performing networks with zero-cost proxies, capturing network characteristics related to the final performance. However, network rankings estimated by previous training-free NAS methods have shown weak correlations with the actual performance. To address this issue, we propose AZ-NAS, a novel approach that leverages an ensemble of various zero-cost proxies to substantially enhance the correlation between the predicted ranking of networks and their ground-truth performance. To achieve this, we introduce four novel zero-cost proxies that are complementary to each other, analyzing distinct traits of architectures in terms of expressivity, progressivity, trainability, and complexity. The proxy scores can be obtained simultaneously within a single forward and backward pass, making the overall NAS process highly efficient. In order to integrate the rankings predicted by our proxies effectively, we introduce a non-linear ranking aggregation method that highlights networks ranked consistently highly across all the proxies. Experimental results conclusively demonstrate the efficacy and efficiency of AZ-NAS, outperforming state-of-the-art methods on standard benchmarks, all while maintaining a reasonable runtime cost.
[ "['Junghyup Lee' 'Bumsub Ham']" ]
null
null
2403.19243
null
null
http://arxiv.org/pdf/2403.19243v1
2024-03-28T08:58:20Z
2024-03-28T08:58:20Z
Sine Activated Low-Rank Matrices for Parameter Efficient Learning
Low-rank decomposition has emerged as a vital tool for enhancing parameter efficiency in neural network architectures, gaining traction across diverse applications in machine learning. These techniques significantly lower the number of parameters, striking a balance between compactness and performance. However, a common challenge has been the compromise between parameter efficiency and the accuracy of the model, where reduced parameters often lead to diminished accuracy compared to their full-rank counterparts. In this work, we propose a novel theoretical framework that integrates a sinusoidal function within the low-rank decomposition process. This approach not only preserves the benefits of the parameter efficiency characteristic of low-rank methods but also increases the decomposition's rank, thereby enhancing model accuracy. Our method proves to be an adaptable enhancement for existing low-rank models, as evidenced by its successful application in Vision Transformers (ViT), Large Language Models (LLMs), Neural Radiance Fields (NeRF), and 3D shape modeling. This demonstrates the wide-ranging potential and efficiency of our proposed technique.
[ "['Yiping Ji' 'Hemanth Saratchandran' 'Cameron Gordon' 'Zeyu Zhang'\n 'Simon Lucey']" ]
null
null
2403.19246
null
null
http://arxiv.org/pdf/2403.19246v1
2024-03-28T09:06:23Z
2024-03-28T09:06:23Z
MPXGAT: An Attention based Deep Learning Model for Multiplex Graphs Embedding
Graph representation learning has rapidly emerged as a pivotal field of study. Despite its growing popularity, the majority of research has been confined to embedding single-layer graphs, which fall short in representing complex systems with multifaceted relationships. To bridge this gap, we introduce MPXGAT, an innovative attention-based deep learning model tailored to multiplex graph embedding. Leveraging the robustness of Graph Attention Networks (GATs), MPXGAT captures the structure of multiplex networks by harnessing both intra-layer and inter-layer connections. This exploitation facilitates accurate link prediction within and across the network's multiple layers. Our comprehensive experimental evaluation, conducted on various benchmark datasets, confirms that MPXGAT consistently outperforms state-of-the-art competing algorithms.
[ "['Marco Bongiovanni' 'Luca Gallo' 'Roberto Grasso' 'Alfredo Pulvirenti']" ]
null
null
2403.19253
null
null
http://arxiv.org/pdf/2403.19253v1
2024-03-28T09:20:15Z
2024-03-28T09:20:15Z
Inferring Latent Temporal Sparse Coordination Graph for Multi-Agent Reinforcement Learning
Effective agent coordination is crucial in cooperative Multi-Agent Reinforcement Learning (MARL). While agent cooperation can be represented by graph structures, prevailing graph learning methods in MARL are limited. They rely solely on one-step observations, neglecting crucial historical experiences, leading to deficient graphs that foster redundant or detrimental information exchanges. Additionally, high computational demands for action-pair calculations in dense graphs impede scalability. To address these challenges, we propose inferring a Latent Temporal Sparse Coordination Graph (LTS-CG) for MARL. LTS-CG leverages agents' historical observations to calculate an agent-pair probability matrix, from which a sparse graph is sampled and used for knowledge exchange between agents, thereby simultaneously capturing agent dependencies and relation uncertainty. The computational complexity of this procedure is related only to the number of agents. This graph learning process is further augmented by two innovative characteristics: Predict-Future, which enables agents to foresee upcoming observations, and Infer-Present, which ensures a thorough grasp of the environmental context from limited data. These features allow LTS-CG to construct temporal graphs from historical and real-time information, promoting knowledge exchange during policy learning and effective collaboration. Graph learning and agent training occur simultaneously in an end-to-end manner. Results on the StarCraft II benchmark demonstrate LTS-CG's superior performance.
[ "['Wei Duan' 'Jie Lu' 'Junyu Xuan']" ]
null
null
2403.19262
null
null
http://arxiv.org/pdf/2403.19262v1
2024-03-28T09:36:55Z
2024-03-28T09:36:55Z
Removing the need for ground truth UWB data collection: self-supervised ranging error correction using deep reinforcement learning
Indoor positioning using UWB technology has gained interest due to its centimeter-level accuracy potential. However, multipath effects and non-line-of-sight conditions cause ranging errors between anchors and tags. Existing approaches for mitigating these ranging errors rely on collecting large labeled datasets, making them impractical for real-world deployments. This paper proposes a novel self-supervised deep reinforcement learning approach that does not require labeled ground truth data. A reinforcement learning agent uses the channel impulse response as a state and predicts corrections to minimize the error between corrected and estimated ranges. The agent learns, self-supervised, by iteratively improving corrections that are generated by combining the predictability of trajectories with filtering and smoothing. Experiments on real-world UWB measurements demonstrate performance comparable to state-of-the-art supervised methods, overcoming their data-dependency and generalizability limitations. This makes self-supervised deep reinforcement learning a promising solution for practical and scalable UWB ranging error correction.
[ "['Dieter Coppens' 'Ben Van Herbruggen' 'Adnan Shahid' 'Eli De Poorter']" ]
null
null
2403.19273
null
null
http://arxiv.org/pdf/2403.19273v1
2024-03-28T09:57:50Z
2024-03-28T09:57:50Z
A Machine Learning Approach for Crop Yield and Disease Prediction Integrating Soil Nutrition and Weather Factors
The development of an intelligent agricultural decision-support system for crop selection and disease forecasting in Bangladesh is the main objective of this work. The economy of the nation depends heavily on agriculture. However, choosing crops with better production rates and efficiently controlling crop disease are obstacles that farmers have to face. These issues are addressed in this research by utilizing machine learning methods and real-world datasets. The recommended approach uses a variety of datasets on crop production, soil conditions, agro-meteorological regions, crop disease, and meteorological factors. These datasets offer insightful information on disease trends, the soil nutrition demands of crops, and agricultural production history. By incorporating this knowledge, the model first recommends a list of primary crop candidates based on the soil nutrition of a particular user location. Then, predictions of meteorological variables like temperature, rainfall, and humidity are made using SARIMAX models. These weather predictions are then used to forecast possible diseases for the primary crop list by utilizing a support vector classifier. Finally, the developed model makes use of a decision tree regression model to forecast crop yield and provides a final crop list along with the associated disease forecast. Utilizing the outcome of the model, farmers may choose the most productive crops as well as prevent crop diseases and reduce output losses by taking preventive actions. Consequently, planning and decision-making processes are supported, and farmers can anticipate possible crop yields. Overall, by offering a detailed decision-support system for crop selection and disease prediction, this work can play a vital role in advancing agricultural practices in Bangladesh.
[ "['Forkan Uddin Ahmed' 'Annesha Das' 'Md Zubair']" ]
null
null
2403.19279
null
null
http://arxiv.org/pdf/2403.19279v1
2024-03-28T10:02:10Z
2024-03-28T10:02:10Z
Fine-Tuning Language Models with Reward Learning on Policy
Reinforcement learning from human feedback (RLHF) has emerged as an effective approach to aligning large language models (LLMs) to human preferences. RLHF contains three steps, i.e., human preference collection, reward learning, and policy optimization, which are usually performed serially. Despite its popularity, however, (fixed) reward models may become inaccurate off-distribution, since policy optimization continuously shifts LLMs' data distribution. Repeatedly collecting new preference data from the latest LLMs may alleviate this issue, but it unfortunately makes the resulting system more complicated and difficult to optimize. In this paper, we propose reward learning on policy (RLP), an unsupervised framework that refines a reward model using policy samples to keep it on-distribution. Specifically, an unsupervised multi-view learning method is introduced to learn robust representations of policy samples. Meanwhile, a synthetic preference generation approach is developed to simulate high-quality preference data with policy outputs. Extensive experiments on three benchmark datasets show that RLP consistently outperforms the state-of-the-art. Our code is available at \url{https://github.com/AlibabaResearch/DAMO-ConvAI/tree/main/rlp}.
[ "['Hao Lang' 'Fei Huang' 'Yongbin Li']" ]
null
null
2403.19289
null
null
http://arxiv.org/pdf/2403.19289v3
2024-06-07T08:00:30Z
2024-03-28T10:19:36Z
Uplift Modeling Under Limited Supervision
Estimating causal effects in e-commerce tends to involve costly treatment assignments which can be impractical in large-scale settings. Leveraging machine learning to predict such treatment effects without actual intervention is a standard practice to diminish the risk. However, existing methods for treatment effect prediction tend to rely on training sets of substantial size, which are built from real experiments and are thus inherently risky to create. In this work we propose a graph neural network to diminish the required training set size, relying on graphs that are common in e-commerce data. Specifically, we view the problem as node regression with a restricted number of labeled instances, develop a two-model neural architecture akin to previous causal effect estimators, and test varying message-passing layers for encoding. Furthermore, as an extra step, we combine the model with an acquisition function to guide the creation of the training set in settings with extremely low experimental budget. The framework is flexible since each step can be used separately with other models or treatment policies. The experiments on real large-scale networks indicate a clear advantage of our methodology over the state of the art, which in many cases performs close to random, underlining the need for models that can generalize with limited supervision to reduce experimental risks.
[ "['George Panagopoulos' 'Daniele Malitesta' 'Fragkiskos D. Malliaros'\n 'Jun Pang']" ]
null
null
2403.19294
null
null
http://arxiv.org/pdf/2403.19294v1
2024-03-28T10:31:23Z
2024-03-28T10:31:23Z
FlowDepth: Decoupling Optical Flow for Self-Supervised Monocular Depth Estimation
Self-supervised multi-frame methods have currently achieved promising results in depth estimation. However, these methods often suffer from mismatch problems due to moving objects, which break the static-scene assumption. Additionally, unfairness can occur when calculating photometric errors in high-frequency or low-texture regions of the images. To address these issues, existing approaches use additional black-box networks with semantic priors to separate moving objects and improve the model only at the loss level. Therefore, we propose FlowDepth, where a Dynamic Motion Flow Module (DMFM) decouples the optical flow by a mechanism-based approach and warps the dynamic regions, thus solving the mismatch problem. For the unfairness of photometric errors caused by high-frequency and low-texture regions, we use Depth-Cue-Aware Blur (DCABlur) and a cost-volume sparsity loss at the input and the loss level, respectively, to solve the problem. Experimental results on the KITTI and Cityscapes datasets show that our method outperforms the state-of-the-art methods.
[ "['Yiyang Sun' 'Zhiyuan Xu' 'Xiaonian Wang' 'Jing Yao']" ]
null
null
2403.19316
null
null
http://arxiv.org/abs/2403.19316v1
2024-03-28T11:17:00Z
2024-03-28T11:17:00Z
Hypergraph-based Multi-View Action Recognition using Event Cameras
Action recognition from video data forms a cornerstone with wide-ranging applications. Single-view action recognition faces limitations due to its reliance on a single viewpoint. In contrast, multi-view approaches capture complementary information from various viewpoints for improved accuracy. Recently, event cameras have emerged as innovative bio-inspired sensors, leading to advancements in event-based action recognition. However, existing works predominantly focus on single-view scenarios, leaving a gap in multi-view event data exploitation, particularly in challenges like information deficit and semantic misalignment. To bridge this gap, we introduce HyperMV, a multi-view event-based action recognition framework. HyperMV converts discrete event data into frame-like representations and extracts view-related features using a shared convolutional network. By treating segments as vertices and constructing hyperedges using rule-based and KNN-based strategies, a multi-view hypergraph neural network that captures relationships across viewpoint and temporal features is established. Vertex attention hypergraph propagation is also introduced for enhanced feature fusion. To prompt research in this area, we present the largest multi-view event-based action dataset, $\text{THU}^{\text{MV-EACT}}\text{-50}$, comprising 50 actions from 6 viewpoints, which surpasses existing datasets by over tenfold. Experimental results show that HyperMV significantly outperforms baselines in both cross-subject and cross-view scenarios, and also exceeds the state of the art in frame-based multi-view action recognition.
[ "['Yue Gao' 'Jiaxuan Lu' 'Siqi Li' 'Yipeng Li' 'Shaoyi Du']" ]
null
null
2403.19326
null
null
http://arxiv.org/pdf/2403.19326v1
2024-03-28T11:33:02Z
2024-03-28T11:33:02Z
MedBN: Robust Test-Time Adaptation against Malicious Test Samples
Test-time adaptation (TTA) has emerged as a promising solution to address performance decay due to unforeseen distribution shifts between training and test data. While recent TTA methods excel in adapting to test data variations, such adaptability exposes a model to vulnerability against malicious examples, an aspect that has received limited attention. Previous studies have uncovered security vulnerabilities within TTA even when a small proportion of the test batch is maliciously manipulated. In response to the emerging threat, we propose median batch normalization (MedBN), leveraging the robustness of the median for statistics estimation within the batch normalization layer during test-time inference. Our method is algorithm-agnostic, thus allowing seamless integration with existing TTA frameworks. Our experimental results on benchmark datasets, including CIFAR10-C, CIFAR100-C and ImageNet-C, consistently demonstrate that MedBN outperforms existing approaches in maintaining robust performance across different attack scenarios, encompassing both instant and cumulative attacks. Through extensive experiments, we show that our approach sustains the performance even in the absence of attacks, achieving a practical balance between robustness and performance.
[ "['Hyejin Park' 'Jeongyeon Hwang' 'Sunung Mun' 'Sangdon Park' 'Jungseul Ok']" ]
null
null
2403.19339
null
null
http://arxiv.org/pdf/2403.19339v1
2024-03-28T11:57:06Z
2024-03-28T11:57:06Z
An Interactive Human-Machine Learning Interface for Collecting and Learning from Complex Annotations
Human-Computer Interaction has been shown to lead to improvements in machine learning systems by boosting model performance, accelerating learning and building user confidence. In this work, we aim to alleviate the expectation that human annotators adapt to the constraints imposed by traditional labels by allowing for extra flexibility in the form in which supervision information is collected. For this, we propose a human-machine learning interface for binary classification tasks which enables human annotators to utilise counterfactual examples to complement standard binary labels as annotations for a dataset. Finally, we discuss the challenges in future extensions of this work.
[ "['Jonathan Erskine' 'Matt Clifford' 'Alexander Hepburn'\n 'Raúl Santos-Rodríguez']" ]
null
null
2403.19355
null
null
http://arxiv.org/pdf/2403.19355v1
2024-03-28T12:11:29Z
2024-03-28T12:11:29Z
Artificial Intelligence (AI) Based Prediction of Mortality, for COVID-19 Patients
For severely affected COVID-19 patients, it is crucial to identify high-risk patients and predict survival and need for intensive care (ICU). Most of the proposed models are not well reported, making them less reproducible and prone to a high risk of bias, particularly in the presence of imbalanced data/classes. In this study, the performances of nine machine and deep learning algorithms in combination with two widely used feature selection methods were investigated to predict last status representing mortality, ICU requirement, and ventilation days. Fivefold cross-validation was used for training and validation purposes. To minimize bias, the training and testing sets were split maintaining similar distributions. Only 10 out of 122 features were found to be useful in prediction modelling, with the acute-kidney-injury-during-hospitalization feature being the most important one. The algorithms' performances depend on feature numbers and data pre-processing techniques. LSTM performs the best in predicting last status and ICU requirement, with 90%, 92%, 86% and 95% accuracy, sensitivity, specificity, and AUC, respectively, while DNN performs the best in predicting ventilation days with 88% accuracy. Considering all the factors and limitations, including the absence of an exact time point of clinical onset, LSTM with carefully selected features can accurately predict last status and ICU requirement. An appropriate machine learning algorithm with carefully selected features and balanced data can accurately predict mortality, ICU requirement, and ventilation support. Such a model can be very useful in emergency and pandemic settings where prompt and precise predictions are required.
[ "['Mahbubunnabi Tamala' 'Mohammad Marufur Rahmanb' 'Maryam Alhasimc'\n 'Mobarak Al Mulhimd' 'Mohamed Derichee']" ]
null
null
2403.19381
null
null
http://arxiv.org/pdf/2403.19381v1
2024-03-28T12:42:25Z
2024-03-28T12:42:25Z
On Uncertainty Quantification for Near-Bayes Optimal Algorithms
Bayesian modelling allows for the quantification of predictive uncertainty, which is crucial in safety-critical applications. Yet for many machine learning (ML) algorithms, it is difficult to construct or implement their Bayesian counterpart. In this work we present a promising approach to address this challenge, based on the hypothesis that commonly used ML algorithms are efficient across a wide variety of tasks and may thus be near Bayes-optimal w.r.t. an unknown task distribution. We prove that it is possible to recover the Bayesian posterior defined by the task distribution, which is unknown but optimal in this setting, by building a martingale posterior using the algorithm. We further propose a practical uncertainty quantification method that applies to general ML algorithms. Experiments based on a variety of non-NN and NN algorithms demonstrate the efficacy of our method.
[ "['Ziyu Wang' 'Chris Holmes']" ]
null
null
2403.19401
null
null
http://arxiv.org/abs/2403.19401v1
2024-03-28T13:24:18Z
2024-03-28T13:24:18Z
Hardness of Learning Boolean Functions from Label Proportions
In recent years the framework of learning from label proportions (LLP) has been gaining importance in machine learning. In this setting, the training examples are aggregated into subsets or bags and only the average label per bag is available for learning an example-level predictor. This generalizes traditional PAC learning which is the special case of unit-sized bags. The computational learning aspects of LLP were studied in recent works (Saket, NeurIPS'21; Saket, NeurIPS'22) which showed algorithms and hardness for learning halfspaces in the LLP setting. In this work we focus on the intractability of LLP learning Boolean functions. Our first result shows that given a collection of bags of size at most $2$ which are consistent with an OR function, it is NP-hard to find a CNF of constantly many clauses which satisfies any constant-fraction of the bags. This is in contrast with the work of (Saket, NeurIPS'21) which gave a $(2/5)$-approximation for learning ORs using a halfspace. Thus, our result provides a separation between constant clause CNFs and halfspaces as hypotheses for LLP learning ORs. Next, we prove the hardness of satisfying more than $1/2 + o(1)$ fraction of such bags using a $t$-DNF (i.e. DNF where each term has $\leq t$ literals) for any constant $t$. In usual PAC learning such a hardness was known (Khot-Saket, FOCS'08) only for learning noisy ORs. We also study the learnability of parities and show that it is NP-hard to satisfy more than $(q/2^{q-1} + o(1))$-fraction of $q$-sized bags which are consistent with a parity using a parity, while a random parity based algorithm achieves a $(1/2^{q-2})$-approximation.
[ "['Venkatesan Guruswami' 'Rishi Saket']" ]
null
null
2403.19405
null
null
http://arxiv.org/pdf/2403.19405v1
2024-03-28T13:29:29Z
2024-03-28T13:29:29Z
Tabular Learning: Encoding for Entity and Context Embeddings
Examining the effect of different encoding techniques on entity and context embeddings, the goal of this work is to challenge the commonly used Ordinal encoding for tabular learning. Applying different preprocessing methods and network architectures over several datasets resulted in a benchmark of how the encoders influence the learning outcome of the networks. By keeping the test, validation and training data consistent, results have shown that Ordinal encoding is not the best-suited encoder for categorical data, in terms of preprocessing the data and, thereafter, classifying the target variable correctly. A better outcome was achieved by encoding the features based on string similarities, computing a similarity matrix as input for the network. This is the case for both entity and context embeddings, where the transformer architecture showed improved performance for Ordinal and Similarity encoding with regard to multi-label classification tasks.
[ "['Fredy Reusser']" ]
null
null
2403.19418
null
null
http://arxiv.org/pdf/2403.19418v1
2024-03-28T13:49:43Z
2024-03-28T13:49:43Z
Constants of Motion for Conserved and Non-conserved Dynamics
This paper begins with a dynamical model that was obtained by applying a machine learning technique (FJet) to time-series data; this dynamical model is then analyzed with Lie symmetry techniques to obtain constants of motion. This analysis is performed on both the conserved and non-conserved cases of the 1D and 2D harmonic oscillators. For the 1D oscillator, constants are found in the cases where the system is underdamped, overdamped, and critically damped. The novel existence of such a constant for a non-conserved model is interpreted as a manifestation of the conservation of energy of the {\em total} system (i.e., oscillator plus dissipative environment). For the 2D oscillator, constants are found for the isotropic and anisotropic cases, including when the frequencies are incommensurate; it is also generalized to arbitrary dimensions. In addition, a constant is identified which generalizes angular momentum for all ratios of the frequencies. The approach presented here can produce {\em multiple} constants of motion from a {\em single}, generic data set.
[ "['Michael F. Zimmer']" ]
null
null
2403.19419
null
null
http://arxiv.org/pdf/2403.19419v1
2024-03-28T13:50:24Z
2024-03-28T13:50:24Z
Fairness in Ranking: Robustness through Randomization without the Protected Attribute
There has been great interest in fairness in machine learning, especially in relation to classification problems. In ranking-related problems, such as in online advertising, recommender systems, and HR automation, much work on fairness remains to be done. Two complications arise: first, the protected attribute may not be available in many applications. Second, there are multiple measures of fairness of rankings, and optimization-based methods utilizing a single measure of fairness may produce rankings that are unfair with respect to other measures. In this work, we propose a randomized method for post-processing rankings which does not require the availability of the protected attribute. In an extensive numerical study, we show the robustness of our method with respect to P-Fairness and its effectiveness with respect to Normalized Discounted Cumulative Gain (NDCG) relative to the baseline ranking, improving on previously proposed methods.
[ "['Andrii Kliachkin' 'Eleni Psaroudaki' 'Jakub Marecek' 'Dimitris Fotakis']" ]
null
null
2403.19421
null
null
http://arxiv.org/pdf/2403.19421v1
2024-03-28T13:52:12Z
2024-03-28T13:52:12Z
Scaling up ridge regression for brain encoding in a massive individual fMRI dataset
Brain encoding with neuroimaging data is an established analysis aimed at predicting human brain activity directly from complex stimuli features such as movie frames. Typically, these features are the latent space representation from an artificial neural network, and the stimuli are image, audio, or text inputs. Ridge regression is a popular prediction model for brain encoding due to its good out-of-sample generalization performance. However, training a ridge regression model can be highly time-consuming when dealing with large-scale deep functional magnetic resonance imaging (fMRI) datasets that include many space-time samples of brain activity. This paper evaluates different parallelization techniques to reduce the training time of brain encoding with ridge regression on the CNeuroMod Friends dataset, one of the largest deep fMRI resources currently available. With multi-threading, our results show that the Intel Math Kernel Library (MKL) significantly outperforms the OpenBLAS library, being 1.9 times faster using 32 threads on a single machine. We then evaluated the Dask multi-CPU implementation of ridge regression readily available in scikit-learn (MultiOutput), and we proposed a new "batch" version of Dask parallelization, motivated by a time complexity analysis. In line with our theoretical analysis, MultiOutput parallelization was found to be impractical, i.e., slower than multi-threading on a single machine. In contrast, the Batch-MultiOutput regression scaled well across compute nodes and threads, providing speed-ups of up to 33 times with 8 compute nodes and 32 threads compared to a single-threaded scikit-learn execution. Batch parallelization using Dask thus emerges as a scalable approach for brain encoding with ridge regression on high-performance computing systems using scikit-learn and large fMRI datasets.
[ "['Sana Ahmadi' 'Pierre Bellec' 'Tristan Glatard']" ]
null
null
2403.19441
null
null
http://arxiv.org/abs/2403.19441v1
2024-03-28T14:11:40Z
2024-03-28T14:11:40Z
A Novel Stochastic Transformer-based Approach for Post-Traumatic Stress Disorder Detection using Audio Recording of Clinical Interviews
Post-traumatic stress disorder (PTSD) is a mental disorder that can develop after witnessing or experiencing extremely traumatic events. PTSD can affect anyone, regardless of ethnicity or culture. An estimated one in every eleven people will experience PTSD during their lifetime. The Clinician-Administered PTSD Scale (CAPS) and the PTSD Checklist for Civilians (PCL-C) interviews are gold standards in the diagnosis of PTSD, but these questionnaires can be fooled by the subject's responses. This work proposes a deep learning-based approach that achieves state-of-the-art performance for PTSD detection using audio recordings of clinical interviews. Our approach is based on MFCC low-level features extracted from the audio recordings, followed by deep high-level learning using a Stochastic Transformer. The proposed approach achieves an RMSE of 2.92 on the eDAIC dataset thanks to the stochastic depth, stochastic deep learning layers, and stochastic activation function.
[ "['Mamadou Dia' 'Ghazaleh Khodabandelou' 'Alice Othmani']" ]
null
null
2403.19442
null
null
http://arxiv.org/pdf/2403.19442v1
2024-03-28T14:11:40Z
2024-03-28T14:11:40Z
Exploiting Individual Graph Structures to Enhance Ecological Momentary Assessment (EMA) Forecasting
In the evolving field of psychopathology, the accurate assessment and forecasting of data derived from Ecological Momentary Assessment (EMA) is crucial. EMA offers contextually rich psychopathological measurements over time, which practically lead to Multivariate Time Series (MTS) data. Thus, many analysis challenges arise from the temporal complexities inherent in emotional, behavioral, and contextual EMA data, as well as from their inter-dependencies. To address both of these aspects, this research investigates the performance of Recurrent and Temporal Graph Neural Networks (GNNs). Overall, GNNs, by incorporating additional information from graphs reflecting the inner relationships between the variables, notably enhance the results by decreasing the Mean Squared Error (MSE) to 0.84, compared to the baseline LSTM model at 1.02. The effect of constructing graphs with different characteristics on GNN performance is therefore also explored. Additionally, GNN-learned graphs, which are dynamically refined during the training process, were evaluated and showed similarly good performance. Thus, graph learning also proved promising for other GNN methods, potentially refining the pre-defined graphs.
[ "['Mandani Ntekouli' 'Gerasimos Spanakis' 'Lourens Waldorp' 'Anne Roefs']" ]
null
null
2403.19444
null
null
http://arxiv.org/pdf/2403.19444v1
2024-03-28T14:15:13Z
2024-03-28T14:15:13Z
Transparent and Clinically Interpretable AI for Lung Cancer Detection in Chest X-Rays
The rapidly advancing field of Explainable Artificial Intelligence (XAI) aims to tackle the issue of trust regarding the use of complex black-box deep learning models in real-world applications. Existing post-hoc XAI techniques have recently been shown to have poor performance on medical data, producing unreliable explanations which are infeasible for clinical use. To address this, we propose an ante-hoc approach based on concept bottleneck models which introduces for the first time clinical concepts into the classification pipeline, allowing the user valuable insight into the decision-making process. On a large public dataset of chest X-rays and associated medical reports, we focus on the binary classification task of lung cancer detection. Our approach yields improved classification performance in lung cancer detection when compared to baseline deep learning models (F1 > 0.9), while also generating clinically relevant and more reliable explanations than existing techniques. We evaluate our approach against post-hoc image XAI techniques LIME and SHAP, as well as CXR-LLaVA, a recent textual XAI tool which operates in the context of question answering on chest X-rays.
[ "['Amy Rafferty' 'Rishi Ramaesh' 'Ajitha Rajan']" ]
null
null
2403.19448
null
null
http://arxiv.org/pdf/2403.19448v1
2024-03-28T14:16:23Z
2024-03-28T14:16:23Z
Fisher-Rao Gradient Flows of Linear Programs and State-Action Natural Policy Gradients
Kakade's natural policy gradient method has been studied extensively in recent years, showing linear convergence with and without regularization. We study another natural gradient method, which is based on the Fisher information matrix of the state-action distributions and has received little attention from the theoretical side. Here, the state-action distributions follow the Fisher-Rao gradient flow inside the state-action polytope with respect to a linear potential. Therefore, we study Fisher-Rao gradient flows of linear programs more generally and show linear convergence with a rate that depends on the geometry of the linear program. Equivalently, this yields an estimate on the error induced by entropic regularization of the linear program, which improves existing results. We extend these results and show sublinear convergence for perturbed Fisher-Rao gradient flows and natural gradient flows up to an approximation error. In particular, these general results cover the case of state-action natural policy gradients.
[ "['Johannes Müller' 'Semih Çaycı' 'Guido Montúfar']" ]
null
null
2403.19462
null
null
http://arxiv.org/pdf/2403.19462v1
2024-03-28T14:34:02Z
2024-03-28T14:34:02Z
Offline Imitation Learning from Multiple Baselines with Applications to Compiler Optimization
This work studies a Reinforcement Learning (RL) problem in which we are given a set of trajectories collected with K baseline policies. Each of these policies can be quite suboptimal in isolation, yet have strong performance in complementary parts of the state space. The goal is to learn a policy which performs as well as the best combination of baselines on the entire state space. We propose a simple imitation learning based algorithm, show a sample complexity bound on its accuracy, and prove that the algorithm is minimax optimal by showing a matching lower bound. Further, we apply the algorithm in the setting of machine learning guided compiler optimization to learn policies for inlining programs with the objective of creating a small binary. We demonstrate that we can learn a policy that outperforms an initial policy learned via standard RL through a few iterations of our approach.
[ "['Teodor V. Marinov' 'Alekh Agarwal' 'Mircea Trofin']" ]
null
null
2403.19470
null
null
http://arxiv.org/pdf/2403.19470v1
2024-03-28T14:54:31Z
2024-03-28T14:54:31Z
Deep decomposition method for the limited aperture inverse obstacle scattering problem
In this paper, we consider a deep learning approach to the limited aperture inverse obstacle scattering problem. It is well known that traditional deep learning relies solely on data, which may limit its performance for the inverse problem when only indirect observation data and a physical model are available. A fundamental question arises in light of these limitations: is it possible to enable deep learning to work on inverse problems without labeled data and to be aware of what it is learning? This work proposes a deep decomposition method (DDM) for such purposes, which does not require ground truth labels. It accomplishes this by providing the physical operators associated with the scattering model to the neural network architecture. Additionally, a deep learning based data completion scheme is implemented in DDM to prevent distortion of the solution of the inverse problem for limited aperture data. Furthermore, apart from addressing the ill-posedness imposed by the inverse problem itself, DDM is a physics-aware machine learning technique with an interpretability property. The convergence of DDM is theoretically proven. Numerical experiments are presented to demonstrate the validity of the proposed DDM even when the incident and observation apertures are extremely limited.
[ "['Yunwen Yin' 'Liang Yan']" ]
null
null
2403.19480
null
null
http://arxiv.org/pdf/2403.19480v1
2024-03-28T15:08:51Z
2024-03-28T15:08:51Z
$H$-Consistency Guarantees for Regression
We present a detailed study of $H$-consistency bounds for regression. We first present new theorems that generalize the tools previously given to establish $H$-consistency bounds. This generalization proves essential for analyzing $H$-consistency bounds specific to regression. Next, we prove a series of novel $H$-consistency bounds for surrogate loss functions of the squared loss, under the assumption of a symmetric distribution and a bounded hypothesis set. This includes positive results for the Huber loss, all $\ell_p$ losses, $p \geq 1$, the squared $\epsilon$-insensitive loss, as well as a negative result for the $\epsilon$-insensitive loss used in squared Support Vector Regression (SVR). We further leverage our analysis of $H$-consistency for regression and derive principled surrogate losses for adversarial regression (Section 5). This readily establishes novel algorithms for adversarial regression, for which we report favorable experimental results in Section 6.
[ "['Anqi Mao' 'Mehryar Mohri' 'Yutao Zhong']" ]