categories (string) | doi (string) | id (string) | year (float64) | venue (string) | link (string) | updated (string) | published (string) | title (string) | abstract (string) | authors (sequence)
---|---|---|---|---|---|---|---|---|---|---|
null | null | 2406.16948 | null | null | http://arxiv.org/pdf/2406.16948v1 | 2024-06-19T11:36:29Z | 2024-06-19T11:36:29Z | Energy-Efficient Seizure Detection Suitable for low-power Applications | Epilepsy is the most common chronic neurological disease worldwide and is typically accompanied by recurring seizures. Neural implants can be used for effective treatment by suppressing an upcoming seizure upon detection. Due to the restricted size and limited battery lifetime of those medical devices, the employed approach also needs to be limited in size and have low energy requirements. We present an energy-efficient seizure detection approach involving a TC-ResNet and time-series analysis which is suitable for low-power edge devices. The presented approach allows for accurate seizure detection without preceding feature extraction while considering the stringent hardware requirements of neural implants. The approach is validated using the CHB-MIT Scalp EEG Database with a 32-bit floating point model and a hardware-suitable 4-bit fixed point model. The presented method achieves an accuracy of 95.28%, a sensitivity of 92.34% and an AUC score of 0.9384 on this dataset with 4-bit fixed point representation. Furthermore, the power consumption of the model is measured with the low-power AI accelerator UltraTrail, which requires only 495 nW on average. Due to this low power consumption, this classification approach is suitable for real-time seizure detection on low-power wearable devices such as neural implants. | [
"['Julia Werner' 'Bhavya Kohli' 'Paul Palomero Bernardo' 'Christoph Gerum'\n 'Oliver Bringmann']"
] |
null | null | 2406.16949 | null | null | http://arxiv.org/pdf/2406.16949v1 | 2024-06-19T12:39:02Z | 2024-06-19T12:39:02Z | Fair Differentiable Neural Network Architecture Search for Long-Tailed
Data with Self-Supervised Learning | Recent advancements in artificial intelligence (AI) have positioned deep learning (DL) as a pivotal technology in fields like computer vision, data mining, and natural language processing. A critical factor in DL performance is the selection of neural network architecture. Traditional predefined architectures often fail to adapt to different data distributions, making it challenging to achieve optimal performance. Neural architecture search (NAS) offers a solution by automatically designing architectures tailored to specific datasets. However, the effectiveness of NAS diminishes on long-tailed datasets, where a few classes have abundant samples and many have few, leading to biased models. In this paper, we explore how to improve the search and training performance of NAS on long-tailed datasets. Specifically, we first discuss related work on NAS and deep learning methods for long-tailed datasets. Then, we focus on an existing work, SSF-NAS, which integrates self-supervised learning and fair differentiable NAS to make NAS achieve better performance on long-tailed datasets. A detailed description of the fundamental techniques behind SSF-NAS is provided in this paper, including DARTS, FairDARTS, and Barlow Twins. Finally, we conducted a series of experiments on the CIFAR10-LT dataset for performance evaluation, where the results align with our expectations. | [
"['Jiaming Yan']"
] |
null | null | 2406.16955 | null | null | http://arxiv.org/pdf/2406.16955v2 | 2024-06-28T19:51:25Z | 2024-06-20T20:40:50Z | SRViT: Vision Transformers for Estimating Radar Reflectivity from
Satellite Observations at Scale | We introduce a transformer-based neural network to generate high-resolution (3km) synthetic radar reflectivity fields at scale from geostationary satellite imagery. This work aims to enhance short-term convective-scale forecasts of high-impact weather events and aid in data assimilation for numerical weather prediction over the United States. Compared to convolutional approaches, which have limited receptive fields, our results show improved sharpness and higher accuracy across various composite reflectivity thresholds. Additional case studies over specific atmospheric phenomena support our quantitative findings, while a novel attribution method is introduced to guide domain experts in understanding model outputs. | [
"['Jason Stock' 'Kyle Hilburn' 'Imme Ebert-Uphoff' 'Charles Anderson']"
] |
null | null | 2406.16956 | null | null | http://arxiv.org/pdf/2406.16956v1 | 2024-06-20T23:10:41Z | 2024-06-20T23:10:41Z | Data-Driven Computing Methods for Nonlinear Physics Systems with
Geometric Constraints | In a landscape where scientific discovery is increasingly driven by data, the integration of machine learning (ML) with traditional scientific methodologies has emerged as a transformative approach. This paper introduces a novel, data-driven framework that synergizes physics-based priors with advanced ML techniques to address the computational and practical limitations inherent in first-principle-based methods and brute-force machine learning methods. Our framework showcases four algorithms, each embedding a specific physics-based prior tailored to a particular class of nonlinear systems, including separable and nonseparable Hamiltonian systems, hyperbolic partial differential equations, and incompressible fluid dynamics. The incorporation of physical laws preserves the system's intrinsic symmetries and conservation laws, ensuring that solutions are physically plausible and computationally efficient. The integration of these priors also enhances the expressive power of neural networks, enabling them to capture complex patterns typical in physical phenomena that conventional methods often miss. As a result, our models outperform existing data-driven techniques in terms of prediction accuracy, robustness, and predictive capability, particularly in recognizing features absent from the training set, despite relying on small datasets, short training periods, and small sample sizes. | [
"['Yunjin Tong']"
] |
null | null | 2406.16959 | null | null | http://arxiv.org/pdf/2406.16959v1 | 2024-06-21T03:21:22Z | 2024-06-21T03:21:22Z | Recurrent Stochastic Configuration Networks for Temporal Data Analytics | Temporal data modelling techniques with neural networks are useful in many domain applications, including time-series forecasting and control engineering. This paper aims at developing a recurrent version of stochastic configuration networks (RSCNs) for problem solving, where we have no underlying assumption on the dynamic orders of the input variables. Given a collection of historical data, we first build an initial RSCN model in the light of a supervisory mechanism, followed by an online update of the output weights by using a projection algorithm. Some theoretical results are established, including the echo state property, the universal approximation property of RSCNs for both offline and online learning, and the convergence of the output weights. The proposed RSCN model is remarkably distinguished from the well-known echo state networks (ESNs) in terms of the way of assigning the input random weight matrix and a special structure of the random feedback matrix. A comprehensive comparison study among the long short-term memory (LSTM) network, the original ESN, and several state-of-the-art ESN methods such as the simple cycle reservoir (SCR), the polynomial ESN (PESN), the leaky-integrator ESN (LIESN) and RSCN is carried out. Numerical results clearly indicate that the proposed RSCN performs favourably over all of the datasets. | [
"['Dianhui Wang' 'Gang Dang']"
] |
null | null | 2406.16961 | null | null | http://arxiv.org/pdf/2406.16961v1 | 2024-06-21T23:12:59Z | 2024-06-21T23:12:59Z | Anime Popularity Prediction Before Huge Investments: a Multimodal
Approach Using Deep Learning | In the Japanese anime industry, predicting whether an upcoming product will be popular is crucial. This paper presents a dataset and methods for predicting anime popularity using a multimodal text-image dataset constructed exclusively from freely available internet sources. The dataset was built following rigorous standards based on real-life investment experiences. A deep neural network architecture leveraging GPT-2 and ResNet-50 to embed the data was employed to investigate the correlation between the multimodal text-image input and a popularity score, discovering relevant strengths and weaknesses in the dataset. To measure the accuracy of the model, mean squared error (MSE) was used, obtaining a best result of 0.011 when considering all inputs and the full version of the deep neural network, compared to a benchmark MSE of 0.412 obtained with traditional TF-IDF and PILtotensor vectorizations. This is the first proposal to address such a task with multimodal datasets, revealing the substantial benefit of incorporating image information, even when a relatively small model (ResNet-50) was used to embed the images. | [
"['Jesús Armenta-Segura' 'Grigori Sidorov']"
] |
null | null | 2406.16962 | null | null | http://arxiv.org/pdf/2406.16962v1 | 2024-06-22T00:49:40Z | 2024-06-22T00:49:40Z | MetaGreen: Meta-Learning Inspired Transformer Selection for Green
Semantic Communication | Semantic Communication can transform the way we transmit information, prioritizing meaningful and effective content over individual symbols or bits. This evolution promises significant benefits, including reduced latency, lower bandwidth usage, and higher throughput compared to traditional communication. However, the development of Semantic Communication faces a crucial challenge: the need for universal metrics to benchmark the joint effects of semantic information loss and energy consumption. This research introduces an innovative solution: the ``Energy-Optimized Semantic Loss'' (EOSL) function, a novel multi-objective loss function that effectively balances semantic information loss and energy consumption. Through comprehensive experiments on transformer models, including energy benchmarking, we demonstrate the remarkable effectiveness of EOSL-based model selection. We have established that EOSL-based transformer model selection achieves up to 83% better similarity-to-power ratio (SPR) compared to BLEU score-based selection and 67% better SPR compared to solely lowest power usage-based selection. Furthermore, we extend the applicability of EOSL to diverse and varying contexts, inspired by the principles of Meta-Learning. By cumulatively applying EOSL, we enable the model selection system to adapt to this change, leveraging historical EOSL values to guide the learning process. This work lays the foundation for energy-efficient model selection and the development of green semantic communication. | [
"['Shubhabrata Mukherjee' 'Cory Beard' 'Sejun Song']"
] |
null | null | 2406.16963 | null | null | http://arxiv.org/pdf/2406.16963v1 | 2024-06-22T02:47:24Z | 2024-06-22T02:47:24Z | Large Language Models for Link Stealing Attacks Against Graph Neural
Networks | Graph data contains rich node features and unique edge information, which have been applied across various domains, such as citation networks or recommendation systems. Graph Neural Networks (GNNs) are specialized for handling such data and have shown impressive performance in many applications. However, GNNs may contain sensitive information and be susceptible to privacy attacks. For example, link stealing is a type of attack in which attackers infer whether two nodes are linked or not. Previous link stealing attacks primarily relied on posterior probabilities from the target GNN model, neglecting the significance of node features. Additionally, variations in node classes across different datasets lead to different dimensions of posterior probabilities. The handling of these varying data dimensions posed a challenge in using a single model to effectively conduct link stealing attacks on different datasets. To address these challenges, we introduce Large Language Models (LLMs) to perform link stealing attacks on GNNs. LLMs can effectively integrate textual features and exhibit strong generalizability, enabling attacks to handle diverse data dimensions across various datasets. We design two distinct LLM prompts to effectively combine textual features and posterior probabilities of graph nodes. Through these designed prompts, we fine-tune the LLM to adapt to the link stealing attack task. Furthermore, we fine-tune the LLM using multiple datasets and enable the LLM to learn features from different datasets simultaneously. Experimental results show that our approach significantly enhances the performance of existing link stealing attack tasks in both white-box and black-box scenarios. Our method can execute link stealing attacks across different datasets using only a single model, making link stealing attacks more applicable to real-world scenarios. | [
"['Faqian Guan' 'Tianqing Zhu' 'Hui Sun' 'Wanlei Zhou' 'Philip S. Yu']"
] |
null | null | 2406.16964 | null | null | http://arxiv.org/pdf/2406.16964v1 | 2024-06-22T03:33:38Z | 2024-06-22T03:33:38Z | Are Language Models Actually Useful for Time Series Forecasting? | Large language models (LLMs) are being applied to time series tasks, particularly time series forecasting. However, are language models actually useful for time series? After a series of ablation studies on three recent and popular LLM-based time series forecasting methods, we find that removing the LLM component or replacing it with a basic attention layer does not degrade the forecasting results -- in most cases the results even improved. We also find that despite their significant computational cost, pretrained LLMs do no better than models trained from scratch, do not represent the sequential dependencies in time series, and do not assist in few-shot settings. Additionally, we explore time series encoders and reveal that patching and attention structures perform similarly to state-of-the-art LLM-based forecasters. | [
"['Mingtian Tan' 'Mike A. Merrill' 'Vinayak Gupta' 'Tim Althoff'\n 'Thomas Hartvigsen']"
] |
null | null | 2406.16965 | null | null | http://arxiv.org/pdf/2406.16965v1 | 2024-06-22T04:36:09Z | 2024-06-22T04:36:09Z | Present and Future of AI in Renewable Energy Domain : A Comprehensive
Survey | Artificial intelligence (AI) has become a crucial instrument for streamlining processes in various industries, including electrical power systems, as a result of recent digitalization. Algorithms for artificial intelligence are data-driven models that are based on statistical learning theory and are used as a tool to make use of the data that the power system and its users generate. Initially, we perform a thorough literature analysis of artificial intelligence (AI) applications related to renewable energy (RE). Next, we present a thorough analysis of renewable energy factories and assess their suitability, along with a list of the most widely used and appropriate AI algorithms. Nine AI-based strategies are identified here to assist Renewable Energy (RE) in contemporary power systems. This survey paper comprises an extensive review of the various AI techniques used for renewable energy as well as a methodical analysis of the literature for the study of various intelligent system application domains across different disciplines of renewable energy. This literature review assesses nine different research methods, identifying their performance and outcomes, and aims to distill valuable insights into their strengths and limitations. This study also addressed three main topics: using AI technology for renewable power generation, utilizing AI for renewable energy forecasting, and optimizing energy systems. Additionally, it explored AI's superiority over conventional models in controllability, data handling, cyberattack prevention, smart grid implementation, and robotics, as well as AI's significance in shaping the future of the energy industry. Furthermore, this article outlines future directions in the integration of AI for renewable energy. | [
"['Abdur Rashid' 'Parag Biswas' 'Angona Biswas' 'MD Abdullah Al Nasim'\n 'Kishor Datta Gupta' 'Roy George']"
] |
null | null | 2406.16966 | null | null | http://arxiv.org/pdf/2406.16966v1 | 2024-06-22T04:49:39Z | 2024-06-22T04:49:39Z | Mitigating Noisy Supervision Using Synthetic Samples with Soft Labels | Noisy labels are ubiquitous in real-world datasets, especially in the large-scale ones derived from crowdsourcing and web searching. It is challenging to train deep neural networks with noisy datasets since the networks are prone to overfitting the noisy labels during training, resulting in poor generalization performance. During an early learning phase, deep neural networks have been observed to fit the clean samples before memorizing the mislabeled samples. In this paper, we dig deeper into the representation distributions in the early learning phase and find that, regardless of their noisy labels, learned representations of images from the same category still congregate together. Inspired by this, we propose a framework that trains the model with new synthetic samples to mitigate the impact of noisy labels. Specifically, we propose a mixing strategy to create the synthetic samples by aggregating original samples with their top-K nearest neighbours, wherein the weights are calculated using a mixture model learned from the per-sample loss distribution. To enhance the performance in the presence of extreme label noise, we estimate the soft targets by gradually correcting the noisy labels. Furthermore, we demonstrate that the estimated soft targets yield a more accurate approximation to ground truth labels and the proposed method produces a superior quality of learned representations with more separated and clearly bounded clusters. Extensive experiments on two benchmarks (CIFAR-10 and CIFAR-100) and two large-scale real-world datasets (Clothing1M and Webvision) demonstrate that our approach outperforms the state-of-the-art methods and that the learned representations are robust. | [
"['Yangdi Lu' 'Wenbo He']"
] |
null | null | 2406.16968 | null | null | http://arxiv.org/pdf/2406.16968v2 | 2024-06-26T01:54:51Z | 2024-06-22T09:28:02Z | Multimodal Physiological Signals Representation Learning via Multiscale
Contrasting for Depression Recognition | Depression recognition based on physiological signals such as functional near-infrared spectroscopy (fNIRS) and electroencephalogram (EEG) has made considerable progress. However, most existing studies ignore the complementarity and semantic consistency of multimodal physiological signals under the same stimulation task in complex spatio-temporal patterns. In this paper, we introduce a multimodal physiological signals representation learning framework using Siamese architecture via multiscale contrasting for depression recognition (MRLMC). First, fNIRS and EEG are transformed into different but correlated data based on a time-domain data augmentation strategy. Then, we design a spatio-temporal contrasting module to learn the representation of fNIRS and EEG through weight-sharing multiscale spatio-temporal convolution. Furthermore, to enhance the learning of semantic representation associated with stimulation tasks, a semantic consistency contrast module is proposed, aiming to maximize the semantic similarity of fNIRS and EEG. Extensive experiments on publicly available and self-collected multimodal physiological signals datasets indicate that MRLMC outperforms the state-of-the-art models. Moreover, our proposed framework is capable of transferring to multimodal time series downstream tasks. | [
"['Kai Shao' 'Rui Wang' 'Yixue Hao' 'Long Hu' 'Min Chen'\n 'Hans Arno Jacobsen']"
] |
null | null | 2406.16971 | null | null | http://arxiv.org/pdf/2406.16971v1 | 2024-06-22T13:44:01Z | 2024-06-22T13:44:01Z | Flexible Tails for Normalizing Flows | Normalizing flows are a flexible class of probability distributions, expressed as transformations of a simple base distribution. A limitation of standard normalizing flows is representing distributions with heavy tails, which arise in applications to both density estimation and variational inference. A popular current solution to this problem is to use a heavy tailed base distribution. Examples include the tail adaptive flow (TAF) methods of Laszkiewicz et al (2022). We argue this can lead to poor performance due to the difficulty of optimising neural networks, such as normalizing flows, under heavy tailed input. This problem is demonstrated in our paper. We propose an alternative: use a Gaussian base distribution and a final transformation layer which can produce heavy tails. We call this approach tail transform flow (TTF). Experimental results show this approach outperforms current methods, especially when the target distribution has large dimension or tail weight. | [
"['Tennessee Hickling' 'Dennis Prangle']"
] |
null | null | 2406.16972 | null | null | http://arxiv.org/pdf/2406.16972v1 | 2024-06-22T15:46:03Z | 2024-06-22T15:46:03Z | An Efficient NAS-based Approach for Handling Imbalanced Datasets | Class imbalance is a common issue in real-world data distributions, negatively impacting the training of accurate classifiers. Traditional approaches to mitigate this problem fall into three main categories: class re-balancing, information transfer, and representation learning. This paper introduces a novel approach to enhance performance on long-tailed datasets by optimizing the backbone architecture through neural architecture search (NAS). Our research shows that an architecture's accuracy on a balanced dataset does not reliably predict its performance on imbalanced datasets. This necessitates a complete NAS run on long-tailed datasets, which can be computationally expensive. To address this computational challenge, we focus on existing work, called IMB-NAS, which proposes efficiently adapting a NAS super-network trained on a balanced source dataset to an imbalanced target dataset. A detailed description of the fundamental techniques for IMB-NAS is provided in this paper, including NAS and architecture transfer. Among various adaptation strategies, we find that the most effective approach is to retrain the linear classification head with reweighted loss while keeping the backbone NAS super-network trained on the balanced source dataset frozen. Finally, we conducted a series of experiments on the imbalanced CIFAR dataset for performance evaluation. Our conclusions are the same as those proposed in the IMB-NAS paper. | [
"['Zhiwei Yao']"
] |
null | null | 2406.16974 | null | null | http://arxiv.org/pdf/2406.16974v1 | 2024-06-22T18:35:58Z | 2024-06-22T18:35:58Z | SHDB-AF: a Japanese Holter ECG database of atrial fibrillation | Atrial fibrillation (AF) is a common atrial arrhythmia that impairs quality of life and causes embolic stroke, heart failure and other complications. Recent advancements in machine learning (ML) and deep learning (DL) have shown potential for enhancing diagnostic accuracy. It is essential for DL models to be robust and generalizable across variations in ethnicity, age, sex, and other factors. Although a number of ECG databases have been made available to the research community, none includes a Japanese population sample. Saitama Heart Database Atrial Fibrillation (SHDB-AF) is a novel open-source Holter ECG database from Japan, containing data from 100 unique patients with paroxysmal AF. Each record in SHDB-AF is 24 hours long and sampled at 200 Hz, totaling 24 million seconds of ECG data. | [
"['Kenta Tsutsui' 'Shany Biton Brimer' 'Noam Ben-Moshe' 'Jean Marc Sellal'\n 'Julien Oster' 'Hitoshi Mori' 'Yoshifumi Ikeda' 'Takahide Arai'\n 'Shintaro Nakano' 'Ritsushi Kato' 'Joachim A. Behar']"
] |
null | null | 2406.16975 | null | null | http://arxiv.org/pdf/2406.16975v1 | 2024-06-23T00:38:19Z | 2024-06-23T00:38:19Z | A Review of Global Sensitivity Analysis Methods and a comparative case
study on Digit Classification | Global sensitivity analysis (GSA) aims to detect influential input factors that lead a model to arrive at a certain decision and is a significant approach for mitigating the computational burden of processing high-dimensional data. In this paper, we provide a comprehensive review and comparison of global sensitivity analysis methods. Additionally, we propose a methodology for evaluating the efficacy of these methods by conducting a case study on the MNIST digit dataset. Our study examines the underlying mechanisms of widely used GSA methods and highlights their efficacy through a comprehensive methodology. | [
"['Zahra Sadeghi' 'Stan Matwin']"
] |
null | null | 2406.16976 | null | null | http://arxiv.org/pdf/2406.16976v2 | 2024-07-02T16:12:38Z | 2024-06-23T06:22:49Z | Efficient Evolutionary Search Over Chemical Space with Large Language
Models | Molecular discovery, when formulated as an optimization problem, presents significant computational challenges because optimization objectives can be non-differentiable. Evolutionary Algorithms (EAs), often used to optimize black-box objectives in molecular discovery, traverse chemical space by performing random mutations and crossovers, leading to a large number of expensive objective evaluations. In this work, we ameliorate this shortcoming by incorporating chemistry-aware Large Language Models (LLMs) into EAs. Namely, we redesign crossover and mutation operations in EAs using LLMs trained on large corpora of chemical information. We perform extensive empirical studies on both commercial and open-source models on multiple tasks involving property optimization, molecular rediscovery, and structure-based drug design, demonstrating that the joint usage of LLMs with EAs yields superior performance over all baseline models across single- and multi-objective settings. We demonstrate that our algorithm improves both the quality of the final solution and convergence speed, thereby reducing the number of required objective evaluations. Our code is available at http://github.com/zoom-wang112358/MOLLEO | [
"['Haorui Wang' 'Marta Skreta' 'Cher-Tian Ser' 'Wenhao Gao' 'Lingkai Kong'\n 'Felix Strieth-Kalthoff' 'Chenru Duan' 'Yuchen Zhuang' 'Yue Yu'\n 'Yanqiao Zhu' 'Yuanqi Du' 'Alán Aspuru-Guzik' 'Kirill Neklyudov'\n 'Chao Zhang']"
] |
null | null | 2406.16978 | null | null | http://arxiv.org/pdf/2406.16978v1 | 2024-06-23T15:30:40Z | 2024-06-23T15:30:40Z | MetaFollower: Adaptable Personalized Autonomous Car Following | Car-following (CF) modeling, a fundamental component in microscopic traffic simulation, has attracted increasing interest from researchers over the past decades. In this study, we propose an adaptable personalized car-following framework, MetaFollower, by leveraging the power of meta-learning. Specifically, we first utilize Model-Agnostic Meta-Learning (MAML) to extract common driving knowledge from various CF events. Afterward, the pre-trained model can be fine-tuned on new drivers with only a few CF trajectories to achieve personalized CF adaptation. We additionally combine Long Short-Term Memory (LSTM) and the Intelligent Driver Model (IDM) to reflect temporal heterogeneity with high interpretability. Unlike conventional adaptive cruise control (ACC) systems that rely on predefined settings and constant parameters without considering heterogeneous driving characteristics, MetaFollower can accurately capture and simulate the intricate dynamics of car-following behavior while considering the unique driving styles of individual drivers. We demonstrate the versatility and adaptability of MetaFollower by showcasing its ability to quickly adapt to new drivers with limited training data. To evaluate the performance of MetaFollower, we conduct rigorous experiments comparing it with both data-driven and physics-based models. The results reveal that our proposed framework outperforms baseline models in predicting car-following behavior with higher accuracy and safety. To the best of our knowledge, this is the first car-following model aiming to achieve fast adaptation by considering both driver and temporal heterogeneity based on meta-learning. | [
"['Xianda Chen' 'Kehua Chen' 'Meixin Zhu' 'Hao' 'Yang' 'Shaojie Shen'\n 'Xuesong Wang' 'Yinhai Wang']"
] |
null | null | 2406.16979 | null | null | http://arxiv.org/pdf/2406.16979v1 | 2024-06-23T18:10:16Z | 2024-06-23T18:10:16Z | Understanding and Diagnosing Deep Reinforcement Learning | Deep neural policies have recently been installed in a diverse range of settings, from biotechnology to automated financial systems. However, the utilization of deep neural networks to approximate the value function leads to concerns on the decision boundary stability, in particular, with regard to the sensitivity of policy decision making to indiscernible, non-robust features due to highly non-convex and complex deep neural manifolds. These concerns constitute an obstruction to understanding the reasoning made by deep neural policies, and their foundational limitations. Hence, it is crucial to develop techniques that aim to understand the sensitivities in the learnt representations of neural network policies. To achieve this we introduce a theoretically founded method that provides a systematic analysis of the unstable directions in the deep neural policy decision boundary across both time and space. Through experiments in the Arcade Learning Environment (ALE), we demonstrate the effectiveness of our technique for identifying correlated directions of instability, and for measuring how sample shifts remold the set of sensitive directions in the neural policy landscape. Most importantly, we demonstrate that state-of-the-art robust training techniques yield learning of disjoint unstable directions, with dramatically larger oscillations over time, when compared to standard training. We believe our results reveal the fundamental properties of the decision process made by reinforcement learning policies, and can help in constructing reliable and robust deep neural policies. | [
"['Ezgi Korkmaz']"
] |
null | null | 2406.16981 | null | null | http://arxiv.org/pdf/2406.16981v1 | 2024-06-23T18:41:43Z | 2024-06-23T18:41:43Z | Research on Feature Extraction Data Processing System For MRI of Brain
Diseases Based on Computer Deep Learning | Most of the existing wavelet image processing techniques are carried out in the form of single-scale reconstruction and multiple iterations. However, processing high-quality fMRI data presents problems such as mixed noise and excessive computation time. This project proposes the use of matrix operations that combine mixed noise elimination methods with wavelet analysis to replace traditional iterative algorithms. Functional magnetic resonance imaging (fMRI) of the auditory cortex of a single subject is analyzed and compared with wavelet-domain signal processing based on repeated iterations and with the widely used SPM8. Experiments show that this algorithm has the lowest computation time, and its detection performance is comparable to that of the traditional iterative algorithm; it therefore has higher practical value for the processing of fMRI data. In addition, the proposed wavelet-based signal processing method speeds up the computation. | [
"['Lingxi Xiao' 'Jinxin Hu' 'Yutian Yang' 'Yinqiu Feng' 'Zichao Li'\n 'Zexi Chen']"
] |
null | null | 2406.16982 | null | null | http://arxiv.org/pdf/2406.16982v1 | 2024-06-23T18:44:03Z | 2024-06-23T18:44:03Z | Research on Disease Prediction Model Construction Based on Computer AI
deep Learning Technology | The prediction of disease risk factors can screen vulnerable groups for effective prevention and treatment, so as to reduce their morbidity and mortality. Machine learning demands high-quality label information, and label noise in medical big data poses a great challenge to efficient disease risk warning methods. Therefore, this project studies robust learning algorithms and applies them to the early warning of infectious disease risk. A dynamic truncated loss model is proposed, which combines the traditional mutual entropy implicit weight feature with the mean variation feature and is robust to label noise. A lower bound on the training loss is constructed, and a sampling-rate-based method is proposed to reduce the gradient of suspected samples, thereby reducing the influence of noise on the training results. The effectiveness of this method under different types of noise was verified using a stroke screening dataset as an example. This method enables robust learning from data containing label noise. | [
"['Yang Lin' 'Muqing Li' 'Ziyi Zhu' 'Yinqiu Feng' 'Lingxi Xiao' 'Zexi Chen']"
] |
null | null | 2406.16983 | null | null | http://arxiv.org/pdf/2406.16983v1 | 2024-06-23T19:44:00Z | 2024-06-23T19:44:00Z | On Instabilities of Unsupervised Denoising Diffusion Models in Magnetic
Resonance Imaging Reconstruction | Denoising diffusion models offer a promising approach to accelerating magnetic resonance imaging (MRI) and producing diagnostic-level images in an unsupervised manner. However, our study demonstrates that even tiny worst-case potential perturbations transferred from a surrogate model can cause these models to generate fake tissue structures that may mislead clinicians. The transferability of such worst-case perturbations indicates that the robustness of image reconstruction may be compromised due to MR system imperfections or other sources of noise. Moreover, at larger perturbation strengths, diffusion models exhibit Gaussian noise-like artifacts that are distinct from those observed in supervised models and are more challenging to detect. Our results highlight the vulnerability of current state-of-the-art diffusion-based reconstruction models to possible worst-case perturbations and underscore the need for further research to improve their robustness and reliability in clinical settings. | [
"['Tianyu Han' 'Sven Nebelung' 'Firas Khader' 'Jakob Nikolas Kather'\n 'Daniel Truhn']"
] |
null | null | 2406.16985 | null | null | http://arxiv.org/pdf/2406.16985v1 | 2024-06-23T22:56:34Z | 2024-06-23T22:56:34Z | Unveiling LLM Mechanisms Through Neural ODEs and Control Theory | This study presents a novel approach that leverages Neural Ordinary Differential Equations (Neural ODEs) to unravel the intricate relationships between inputs and outputs in Large Language Models (LLMs), and employs robust control to fine-tune outputs to meet predefined standards. Central to our methodology is the transformation of LLM inputs and outputs into a lower-dimensional latent space, facilitating a detailed examination of the information processing pathways within LLMs. Neural ODEs play a pivotal role in this investigation by providing a dynamic model that captures the continuous evolution of data within the LLMs. Additionally, robust control mechanisms are applied to strategically adjust the model's outputs, ensuring they not only maintain high quality and reliability but also adhere to specific performance criteria. This fusion of Neural ODEs and robust control represents a significant advancement in LLM interpretability, offering a comprehensive framework that elucidates the previously opaque mechanisms of these complex models. Our empirical results validate the effectiveness of this integrated approach, making a substantial contribution to the field of explainable AI by merging advanced machine learning techniques with the critical need for transparency and control in AI outputs. | [
"['Yukun Zhang']"
] |
null | null | 2406.16986 | null | null | http://arxiv.org/pdf/2406.16986v1 | 2024-06-24T01:43:30Z | 2024-06-24T01:43:30Z | Machine Unlearning with Minimal Gradient Dependence for High Unlearning
Ratios | In the context of machine unlearning, the primary challenge lies in effectively removing traces of private data from trained models while maintaining model performance and security against privacy attacks like membership inference attacks. Traditional gradient-based unlearning methods often rely on extensive historical gradients, which becomes impractical with high unlearning ratios and may reduce the effectiveness of unlearning. Addressing these limitations, we introduce Mini-Unlearning, a novel approach that capitalizes on a critical observation: unlearned parameters correlate with retrained parameters through contraction mapping. Our method, Mini-Unlearning, utilizes a minimal subset of historical gradients and leverages this contraction mapping to facilitate scalable, efficient unlearning. This lightweight, scalable method significantly enhances model accuracy and strengthens resistance to membership inference attacks. Our experiments demonstrate that Mini-Unlearning not only works under higher unlearning ratios but also outperforms existing techniques in both accuracy and security, offering a promising solution for applications requiring robust unlearning capabilities. | [
"['Tao Huang' 'Ziyang Chen' 'Jiayang Meng' 'Qingyu Huang' 'Xu Yang'\n 'Xun Yi' 'Ibrahim Khalil']"
] |
null | null | 2406.16987 | null | null | http://arxiv.org/pdf/2406.16987v1 | 2024-06-24T03:40:41Z | 2024-06-24T03:40:41Z | AI for Equitable Tennis Training: Leveraging AI for Equitable and
Accurate Classification of Tennis Skill Levels and Training Phases | Numerous studies have demonstrated the manifold benefits of tennis, such as improved overall physical and mental health. Unfortunately, many children and youth from low-income families are unable to engage in this sport mainly due to financial constraints such as private lesson expenses, as well as the logistics of getting to and from such lessons and clinics. While several tennis self-training systems exist, they are often tailored for professionals and are prohibitively expensive. The present study aims to classify tennis players' skill levels and classify tennis strokes into phases characterized by motion attributes, toward the future development of an AI-based tennis self-training model for affordable and convenient applications running on devices used in daily life, such as an iPhone or an Apple Watch, for tennis skill improvement. We collected motion data, including Motion Yaw, Roll and Pitch, from inertial measurement units (IMUs) worn by participating junior tennis players. For this pilot study, data from twelve participants were processed using Support Vector Machine (SVM) algorithms. The SVM models demonstrated an overall accuracy of 77% in classifying players as beginners or intermediates, with low rates of false positives and false negatives, effectively distinguishing skill levels. Additionally, the tennis swings were successfully classified into five phases based on the collected motion data. These findings indicate that SVM-based classification can be a reliable foundation for developing an equitable and accessible AI-driven tennis training system. | [
"['Gyanna Gao' 'Hao-Yu Liao' 'Zhenhong Hu']"
] |
null | null | 2406.16988 | null | null | http://arxiv.org/pdf/2406.16988v1 | 2024-06-24T04:31:17Z | 2024-06-24T04:31:17Z | MD tree: a model-diagnostic tree grown on loss landscape | This paper considers "model diagnosis", which we formulate as a classification problem. Given a pre-trained neural network (NN), the goal is to predict the source of failure from a set of failure modes (such as a wrong hyperparameter, inadequate model size, and insufficient data) without knowing the training configuration of the pre-trained NN. The conventional diagnosis approach uses training and validation errors to determine whether the model is underfitting or overfitting. However, we show that rich information about NN performance is encoded in the optimization loss landscape, which provides more actionable insights than validation-based measurements. Therefore, we propose a diagnosis method called MD tree based on loss landscape metrics and experimentally demonstrate its advantage over classical validation-based approaches. We verify the effectiveness of MD tree in multiple practical scenarios: (1) use several models trained on one dataset to diagnose a model trained on another dataset, essentially a few-shot dataset transfer problem; (2) use small models (or models trained with small data) to diagnose big models (or models trained with big data), essentially a scale transfer problem. In a dataset transfer task, MD tree achieves an accuracy of 87.7%, outperforming validation-based approaches by 14.88%. Our code is available at https://github.com/YefanZhou/ModelDiagnosis. | [
"['Yefan Zhou' 'Jianlong Chen' 'Qinxue Cao' 'Konstantin Schürholt'\n 'Yaoqing Yang']"
] |
null | null | 2406.16989 | null | null | http://arxiv.org/pdf/2406.16989v1 | 2024-06-24T05:24:41Z | 2024-06-24T05:24:41Z | Retrieval-Augmented Mixture of LoRA Experts for Uploadable Machine
Learning | Low-Rank Adaptation (LoRA) offers an efficient way to fine-tune large language models (LLMs). Its modular and plug-and-play nature allows the integration of various domain-specific LoRAs, enhancing LLM capabilities. Open-source platforms like Huggingface and Modelscope have introduced a new computational paradigm, Uploadable Machine Learning (UML). In UML, contributors use decentralized data to train specialized adapters, which are then uploaded to a central platform to improve LLMs. This platform uses these domain-specific adapters to handle mixed-task requests requiring personalized service. Previous research on LoRA composition either focuses on specific tasks or fixes the LoRA selection during training. However, in UML, the pool of LoRAs is dynamically updated with new uploads, requiring a generalizable selection mechanism for unseen LoRAs. Additionally, the mixed-task nature of downstream requests necessitates personalized services. To address these challenges, we propose Retrieval-Augmented Mixture of LoRA Experts (RAMoLE), a framework that adaptively retrieves and composes multiple LoRAs based on input prompts. RAMoLE has three main components: LoraRetriever for identifying and retrieving relevant LoRAs, an on-the-fly MoLE mechanism for coordinating the retrieved LoRAs, and efficient batch inference for handling heterogeneous requests. Experimental results show that RAMoLE consistently outperforms baselines, highlighting its effectiveness and scalability. | [
"['Ziyu Zhao' 'Leilei Gan' 'Guoyin Wang' 'Yuwei Hu' 'Tao Shen'\n 'Hongxia Yang' 'Kun Kuang' 'Fei Wu']"
] |
null | null | 2406.16992 | null | null | http://arxiv.org/pdf/2406.16992v1 | 2024-06-24T07:32:58Z | 2024-06-24T07:32:58Z | Make Graph Neural Networks Great Again: A Generic Integration Paradigm
of Topology-Free Patterns for Traffic Speed Prediction | Urban traffic speed prediction aims to estimate the future traffic speed for improving urban transportation services. Enormous efforts have been made to exploit Graph Neural Networks (GNNs) for modeling spatial correlations and temporal dependencies of traffic speed evolving patterns, regularized by graph topology. While achieving promising results, current traffic speed prediction methods still suffer from ignoring topology-free patterns, which cannot be captured by GNNs. To tackle this challenge, we propose a generic model for enabling the current GNN-based methods to preserve topology-free patterns. Specifically, we first develop a Dual Cross-Scale Transformer (DCST) architecture, including a Spatial Transformer and a Temporal Transformer, to preserve the cross-scale topology-free patterns and associated dynamics, respectively. Then, to further integrate both topology-regularized/-free patterns, we propose a distillation-style learning framework, in which the existing GNN-based methods are considered as the teacher model, and the proposed DCST architecture is considered as the student model. The teacher model would inject the learned topology-regularized patterns into the student model for integrating topology-free patterns. Extensive experimental results demonstrate the effectiveness of our methods. | [
"['Yicheng Zhou' 'Pengfei Wang' 'Hao Dong' 'Denghui Zhang' 'Dingqi Yang'\n 'Yanjie Fu' 'Pengyang Wang']"
] |
null | null | 2406.16997 | null | null | http://arxiv.org/pdf/2406.16997v1 | 2024-06-24T10:05:01Z | 2024-06-24T10:05:01Z | Wavelet Attention GRU for Efficient Industrial Gas Recognition with
Novel Metrics | Gas recognition technology has received considerable attention from researchers in recent years. Nevertheless, the gas recognition area has faced obstacles in implementing deep learning-based recognition solutions due to the absence of standardized protocols. To tackle this problem, we suggest using two sets of specialized evaluation measures for gas recognition algorithms. These metrics will make it easier to examine the performance of these algorithms on various datasets. In addition, we provide a new model called the Wavelet Attention GRU (WAG), which is based on the wavelet attention mechanism. This method facilitates the more efficient retrieval of sensor signals. Compared to other models, WAG significantly decreases the number of sensors needed by 75% while obtaining an identification accuracy of 98.33%. This suggests that WAG is a potential approach for advancing gas recognition algorithms. | [
"['Ding Wang']"
] |
null | null | 2406.16999 | null | null | http://arxiv.org/pdf/2406.16999v1 | 2024-06-24T12:25:04Z | 2024-06-24T12:25:04Z | Identifying Easy Instances to Improve Efficiency of ML Pipelines for
Algorithm-Selection | Algorithm-selection (AS) methods are essential in order to obtain the best performance from a portfolio of solvers over large sets of instances. However, many AS methods rely on an analysis phase, e.g. where features are computed by sampling solutions and used as input in a machine-learning model. For AS to be efficient, it is therefore important that this analysis phase is not computationally expensive. We propose a method for identifying easy instances which can be solved quickly using a generalist solver without any need for algorithm-selection. This saves computational budget associated with feature-computation which can then be used elsewhere in an AS pipeline, e.g., enabling additional function evaluations on hard problems. Experiments on the BBOB dataset in two settings (batch and streaming) show that identifying easy instances results in substantial savings in function evaluations. Re-allocating the saved budget to hard problems provides gains in performance compared to both the virtual best solver (VBS) computed with the original budget, the single best solver (SBS) and a trained algorithm-selector. | [
"['Quentin Renau' 'Emma Hart']"
] |
null | null | 2406.17001 | null | null | http://arxiv.org/pdf/2406.17001v1 | 2024-06-24T14:12:03Z | 2024-06-24T14:12:03Z | Deep Learning for Prediction and Classifying the Dynamical behaviour of
Piecewise Smooth Maps | This paper explores the prediction of the dynamics of piecewise smooth maps using various deep learning models. We have shown various novel ways of predicting the dynamics of piecewise smooth maps using deep learning models. Moreover, we have used machine learning models such as Decision Tree Classifier, Logistic Regression, K-Nearest Neighbor, Random Forest, and Support Vector Machine for predicting the border collision bifurcation in the 1D normal form map and the 1D tent map. Further, we classified the regular and chaotic behaviour of the 1D tent map and the 2D Lozi map using deep learning models like Convolutional Neural Network (CNN), ResNet50, and ConvLSTM via cobweb diagram and phase portraits. We also classified the chaotic and hyperchaotic behaviour of the 3D piecewise smooth map using deep learning models such as the Feed Forward Neural Network (FNN), Long Short-Term Memory (LSTM), and Recurrent Neural Network (RNN). Finally, deep learning models such as Long Short-Term Memory (LSTM) and Recurrent Neural Network (RNN) are used for reconstructing the two parametric charts of 2D border collision bifurcation normal form map. | [
"['Vismaya V S' 'Bharath V Nair' 'Sishu Shankar Muni']"
] |
null | null | 2406.17002 | null | null | http://arxiv.org/pdf/2406.17002v2 | 2024-06-26T15:27:16Z | 2024-06-24T14:37:17Z | Benchmarking mortality risk prediction from electrocardiograms | Several recent high-impact studies leverage large hospital-owned electrocardiographic (ECG) databases to model and predict patient mortality. MIMIC-IV, released September 2023, is the first comparable public dataset and includes 800,000 ECGs from a U.S. hospital system. Previously, the largest public ECG dataset was Code-15, containing 345,000 ECGs collected during routine care in Brazil. These datasets now provide an excellent resource for a broader audience to explore ECG survival modeling. Here, we benchmark survival model performance on Code-15 and MIMIC-IV with two neural network architectures, compare four deep survival modeling approaches to Cox regressions trained on classifier outputs, and evaluate performance at one to ten years. Our results yield AUROC and concordance scores comparable to past work (circa 0.8) and reasonable AUPRC scores (MIMIC-IV: 0.4-0.5, Code-15: 0.05-0.13) considering the fraction of ECG samples linked to a mortality (MIMIC-IV: 27%, Code-15: 4%). When evaluating models on the opposite dataset, AUROC and concordance values drop by 0.1-0.15, which may be due to cohort differences. All code and results are made public. | [
"['Platon Lukyanenko' 'Joshua Mayourian' 'Mingxuan Liu' 'John K. Triedman'\n 'Sunil J. Ghelani' 'William G. La Cava']"
] |
null | null | 2406.17008 | null | null | http://arxiv.org/pdf/2406.17008v1 | 2024-06-24T17:59:33Z | 2024-06-24T17:59:33Z | Meta-learning and Data Augmentation for Stress Testing Forecasting
Models | The effectiveness of univariate forecasting models is often hampered by conditions that cause them stress. A model is considered to be under stress if it shows a negative behaviour, such as higher-than-usual errors or increased uncertainty. Understanding the factors that cause stress to forecasting models is important to improve their reliability, transparency, and utility. This paper addresses this problem by contributing a novel framework called MAST (Meta-learning and data Augmentation for Stress Testing). The proposed approach aims to model and characterize stress in univariate time series forecasting models, focusing on conditions where they exhibit large errors. In particular, MAST is a meta-learning approach that predicts the probability that a given model will perform poorly on a given time series based on a set of statistical time series features. MAST also encompasses a novel data augmentation technique based on oversampling to improve the metadata concerning stress. We conducted experiments using three benchmark datasets that contain a total of 49,794 time series to validate the performance of MAST. The results suggest that the proposed approach is able to identify conditions that lead to large errors. The method and experiments are publicly available in a repository. | [
"['Ricardo Inácio' 'Vitor Cerqueira' 'Marília Barandas' 'Carlos Soares']"
] |
null | null | 2406.17051 | null | null | http://arxiv.org/pdf/2406.17051v2 | 2024-06-28T05:50:11Z | 2024-06-24T18:13:09Z | Leveraging Knowledge Distillation for Lightweight Skin Cancer
Classification: Balancing Accuracy and Computational Efficiency | Skin cancer is a major concern to public health, accounting for one-third of the reported cancers. If not detected early, the cancer has the potential for severe consequences. Recognizing the critical need for effective skin cancer classification, we address the limitations of existing models, which are often too large to deploy in areas with limited computational resources. In response, we present a knowledge distillation based approach for creating a lightweight yet high-performing classifier. The proposed solution involves fusing three models, namely ResNet152V2, ConvNeXtBase, and ViT Base, to create an effective teacher model. The teacher model is then employed to guide a lightweight student model of size 2.03 MB. This student model is further compressed to 469.77 KB using 16-bit quantization, enabling smooth incorporation into edge devices. With six-stage image preprocessing, data augmentation, and a rigorous ablation study, the model achieves an impressive accuracy of 98.75% on the HAM10000 dataset and 98.94% on the Kaggle dataset in classifying benign and malignant skin cancers. With its high accuracy and compact size, our model appears to be a potential choice for accurate skin cancer classification, particularly in resource-constrained settings. | [
"['Niful Islam' 'Khan Md Hasib' 'Fahmida Akter Joti' 'Asif Karim'\n 'Sami Azam']"
] |
null | null | 2406.17055 | null | null | http://arxiv.org/pdf/2406.17055v2 | 2024-07-01T17:29:54Z | 2024-06-24T18:15:27Z | Large Language Models Assume People are More Rational than We Really are | In order for AI systems to communicate effectively with people, they must understand how we make decisions. However, people's decisions are not always rational, so the implicit internal models of human decision-making in Large Language Models (LLMs) must account for this. Previous empirical evidence seems to suggest that these implicit models are accurate -- LLMs offer believable proxies of human behavior, acting how we expect humans would in everyday interactions. However, by comparing LLM behavior and predictions to a large dataset of human decisions, we find that this is actually not the case: when both simulating and predicting people's choices, a suite of cutting-edge LLMs (GPT-4o & 4-Turbo, Llama-3-8B & 70B, Claude 3 Opus) assume that people are more rational than we really are. Specifically, these models deviate from human behavior and align more closely with a classic model of rational choice -- expected value theory. Interestingly, people also tend to assume that other people are rational when interpreting their behavior. As a consequence, when we compare the inferences that LLMs and people draw from the decisions of others using another psychological dataset, we find that these inferences are highly correlated. Thus, the implicit decision-making models of LLMs appear to be aligned with the human expectation that other people will act rationally, rather than with how people actually act. | [
"['Ryan Liu' 'Jiayi Geng' 'Joshua C. Peterson' 'Ilia Sucholutsky'\n 'Thomas L. Griffiths']"
] |
null | null | 2406.17058 | null | null | http://arxiv.org/pdf/2406.17058v1 | 2024-06-24T18:18:58Z | 2024-06-24T18:18:58Z | Bayesian Deep ICE | Deep Independent Component Estimation (DICE) has many applications in modern day machine learning as a feature engineering extraction method. We provide a novel latent variable representation of independent component analysis that enables both point estimates via expectation-maximization (EM) and full posterior sampling via Markov Chain Monte Carlo (MCMC) algorithms. Our methodology also applies to flow-based methods for nonlinear feature extraction. We discuss how to implement conditional posteriors and envelope-based methods for optimization. Through this representation hierarchy, we unify a number of hitherto disjoint estimation procedures. We illustrate our methodology and algorithms on a numerical example. Finally, we conclude with directions for future research. | [
"['Jyotishka Datta' 'Nicholas G. Polson']"
] |
null | null | 2406.17073 | null | null | http://arxiv.org/abs/2406.17073v2 | 2024-06-27T18:15:16Z | 2024-06-24T18:59:24Z | Meta-GCN: A Dynamically Weighted Loss Minimization Method for Dealing
with the Data Imbalance in Graph Neural Networks | Although many real-world applications, such as disease prediction and fault detection, suffer from class imbalance, most existing graph-based classification methods ignore the skewness of the class distribution and therefore tend to be biased towards the majority class(es). Conventional methods typically tackle this problem through the assignment of weights to each one of the class samples based on a function of their loss, which can lead to over-fitting on outliers. In this paper, we propose a meta-learning algorithm, named Meta-GCN, for adaptively learning the example weights by simultaneously minimizing the unbiased meta-data set loss and optimizing the model weights through the use of a small unbiased meta-data set. Through experiments, we have shown that Meta-GCN outperforms state-of-the-art frameworks and other baselines in terms of accuracy, the area under the receiver operating characteristic (AUC-ROC) curve, and macro F1-Score for classification tasks on two different datasets. | [
"['Mahdi Mohammadizadeh' 'Arash Mozhdehi' 'Yani Ioannou' 'Xin Wang']"
] |
null | null | 2406.17086 | null | null | http://arxiv.org/pdf/2406.17086v1 | 2024-06-24T19:16:24Z | 2024-06-24T19:16:24Z | BrainMAE: A Region-aware Self-supervised Learning Framework for Brain
Signals | The human brain is a complex, dynamic network, which is commonly studied using functional magnetic resonance imaging (fMRI) and modeled as network of Regions of interest (ROIs) for understanding various brain functions. Recent studies utilize deep learning approaches to learn the brain network representation based on functional connectivity (FC) profile, broadly falling into two main categories. The Fixed-FC approaches, utilizing the FC profile which represents the linear temporal relation within the brain network, are limited by failing to capture informative brain temporal dynamics. On the other hand, the Dynamic-FC approaches, modeling the evolving FC profile over time, often exhibit less satisfactory performance due to challenges in handling the inherent noisy nature of fMRI data. To address these challenges, we propose Brain Masked Auto-Encoder (BrainMAE) for learning representations directly from fMRI time-series data. Our approach incorporates two essential components: a region-aware graph attention mechanism designed to capture the relationships between different brain ROIs, and a novel self-supervised masked autoencoding framework for effective model pre-training. These components enable the model to capture rich temporal dynamics of brain activity while maintaining resilience to inherent noise in fMRI data. Our experiments demonstrate that BrainMAE consistently outperforms established baseline methods by significant margins in four distinct downstream tasks. Finally, leveraging the model's inherent interpretability, our analysis of model-generated representations reveals findings that resonate with ongoing research in the field of neuroscience. | [
"['Yifan Yang' 'Yutong Mao' 'Xufu Liu' 'Xiao Liu']"
] |
null | null | 2406.17090 | null | null | http://arxiv.org/pdf/2406.17090v1 | 2024-06-24T19:27:34Z | 2024-06-24T19:27:34Z | Exploring Biomarker Relationships in Both Type 1 and Type 2 Diabetes
Mellitus Through a Bayesian Network Analysis Approach | Understanding the complex relationships of biomarkers in diabetes is pivotal for advancing treatment strategies, a pressing need in diabetes research. This study applies Bayesian network structure learning to analyze the Shanghai Type 1 and Type 2 diabetes mellitus datasets, revealing complex relationships among key diabetes-related biomarkers. The constructed Bayesian network presented notable predictive accuracy, particularly for Type 2 diabetes mellitus, with root mean squared error (RMSE) of 18.23 mg/dL, as validated through leave-one-domain experiments and Clarke error grid analysis. This study not only elucidates the intricate dynamics of diabetes through a deeper understanding of biomarker interplay but also underscores the significant potential of integrating data-driven and knowledge-driven methodologies in the realm of personalized diabetes management. Such an approach paves the way for more custom and effective treatment strategies, marking a notable advancement in the field. | [
"['Yuyang Sun' 'Jingyu Lei' 'Panagiotis Kosmas']"
] |
null | null | 2406.17096 | null | null | http://arxiv.org/pdf/2406.17096v1 | 2024-06-24T19:35:26Z | 2024-06-24T19:35:26Z | Model-Free Robust Reinforcement Learning with Sample Complexity Analysis | Distributionally Robust Reinforcement Learning (DR-RL) aims to derive a policy optimizing the worst-case performance within a predefined uncertainty set. Despite extensive research, previous DR-RL algorithms have predominantly favored model-based approaches, with limited availability of model-free methods offering convergence guarantees or sample complexities. This paper proposes a model-free DR-RL algorithm leveraging the Multi-level Monte Carlo (MLMC) technique to close such a gap. Our innovative approach integrates a threshold mechanism that ensures finite sample requirements for algorithmic implementation, a significant improvement over previous model-free algorithms. We develop algorithms for uncertainty sets defined by total variation, Chi-square divergence, and KL divergence, and provide finite sample analyses under all three cases. Remarkably, our algorithms represent the first model-free DR-RL approach featuring finite sample complexity for total variation and Chi-square divergence uncertainty sets, while also offering an improved sample complexity and broader applicability compared to existing model-free DR-RL algorithms for the KL divergence model. The complexities of our method establish the tightest results for all three uncertainty models in model-free DR-RL, underscoring the effectiveness and efficiency of our algorithm, and highlighting its potential for practical applications. | [
"['Yudan Wang' 'Shaofeng Zou' 'Yue Wang']"
] |
null | null | 2406.17098 | null | null | http://arxiv.org/pdf/2406.17098v1 | 2024-06-24T19:36:45Z | 2024-06-24T19:36:45Z | Learning Temporal Distances: Contrastive Successor Features Can Provide
a Metric Structure for Decision-Making | Temporal distances lie at the heart of many algorithms for planning, control, and reinforcement learning that involve reaching goals, allowing one to estimate the transit time between two states. However, prior attempts to define such temporal distances in stochastic settings have been stymied by an important limitation: these prior approaches do not satisfy the triangle inequality. This is not merely a definitional concern, but translates to an inability to generalize and find shortest paths. In this paper, we build on prior work in contrastive learning and quasimetrics to show how successor features learned by contrastive learning (after a change of variables) form a temporal distance that does satisfy the triangle inequality, even in stochastic settings. Importantly, this temporal distance is computationally efficient to estimate, even in high-dimensional and stochastic settings. Experiments in controlled settings and benchmark suites demonstrate that an RL algorithm based on these new temporal distances exhibits combinatorial generalization (i.e., "stitching") and can sometimes learn more quickly than prior methods, including those based on quasimetrics. | [
"['Vivek Myers' 'Chongyi Zheng' 'Anca Dragan' 'Sergey Levine'\n 'Benjamin Eysenbach']"
] |
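
The abstract above turns on whether a temporal distance satisfies the triangle inequality. The sketch below is a small, assumption-laden check of that property on sampled triples, using plain Euclidean and squared-Euclidean distances as stand-ins for a learned distance; it does not reproduce the paper's contrastive successor features.

```python
import numpy as np

def violates_triangle_inequality(dist_fn, states, n_triples=1000, tol=1e-6, seed=0):
    """Empirically search for a triple (x, y, z) with d(x, z) > d(x, y) + d(y, z)."""
    rng = np.random.default_rng(seed)
    for _ in range(n_triples):
        x, y, z = states[rng.integers(0, len(states), size=3)]
        if dist_fn(x, z) > dist_fn(x, y) + dist_fn(y, z) + tol:
            return True
    return False

states = np.random.default_rng(0).normal(size=(50, 3))
euclidean = lambda a, b: float(np.linalg.norm(a - b))
squared = lambda a, b: float(np.linalg.norm(a - b) ** 2)

print(violates_triangle_inequality(euclidean, states))  # False: a true metric
print(violates_triangle_inequality(squared, states))    # typically True: squared distance is not a metric
```
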
null | null | 2406.17102 | null | null | http://arxiv.org/pdf/2406.17102v1 | 2024-06-24T19:42:16Z | 2024-06-24T19:42:16Z | Achieving Fairness Across Local and Global Models in Federated Learning | Achieving fairness across diverse clients in Federated Learning (FL) remains a significant challenge due to the heterogeneity of the data and the inaccessibility of sensitive attributes from clients' private datasets. This study addresses this issue by introducing \texttt{EquiFL}, a novel approach designed to enhance both local and global fairness in federated learning environments. \texttt{EquiFL} incorporates a fairness term into the local optimization objective, effectively balancing local performance and fairness. The proposed coordination mechanism also prevents bias from propagating across clients during the collaboration phase. Through extensive experiments across multiple benchmarks, we demonstrate that \texttt{EquiFL} not only strikes a better balance between accuracy and fairness locally at each client but also achieves global fairness. The results also indicate that \texttt{EquiFL} ensures uniform performance distribution among clients, thus contributing to performance fairness. Furthermore, we showcase the benefits of \texttt{EquiFL} in a real-world distributed dataset from a healthcare application, specifically in predicting the effects of treatments on patients across various hospital locations. | [
"['Disha Makhija' 'Xing Han' 'Joydeep Ghosh' 'Yejin Kim']"
] |
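
The EquiFL abstract above describes adding a fairness term to each client's local optimization objective. The sketch below shows that general pattern with a demographic-parity-style gap as the penalty; the specific fairness measure and the weight `lam` are assumptions, not EquiFL's exact formulation.

```python
import numpy as np

def local_objective(pred, target, group, lam=0.5):
    """Task loss (squared error) plus a fairness penalty on group-wise predictions."""
    task_loss = np.mean((pred - target) ** 2)
    # Demographic-parity-style gap: difference in mean prediction between two groups.
    fairness_gap = abs(pred[group == 0].mean() - pred[group == 1].mean())
    return task_loss + lam * fairness_gap

rng = np.random.default_rng(0)
pred = rng.random(100)                              # a client's model outputs
target = rng.integers(0, 2, 100).astype(float)      # labels
group = rng.integers(0, 2, 100)                     # sensitive attribute (0 or 1)

print(round(local_objective(pred, target, group), 4))
```
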
null | null | 2406.17103 | null | null | http://arxiv.org/pdf/2406.17103v2 | 2024-07-15T03:22:05Z | 2024-06-24T19:42:22Z | Maximum Likelihood Estimation of the Direction of Sound In A Reverberant
Noisy Environment | We describe a new method for estimating the direction of sound in a reverberant environment from basic principles of sound propagation. The method utilizes SNR-adaptive features from time-delay and energy of the directional components after acoustic wave decomposition of the observed sound field to estimate the line-of-sight direction under noisy and reverberant conditions. The effectiveness of the approach is established with measured data of different microphone array configurations under various usage scenarios. | [
"['Mohamed F. Mansour']"
] |
null | null | 2406.17112 | null | null | http://arxiv.org/pdf/2406.17112v1 | 2024-06-24T19:54:58Z | 2024-06-24T19:54:58Z | Integrating Generative AI with Network Digital Twins for Enhanced
Network Operations | As telecommunications networks become increasingly complex, the integration of advanced technologies such as network digital twins and generative artificial intelligence (AI) emerges as a pivotal solution to enhance network operations and resilience. This paper explores the synergy between network digital twins, which provide a dynamic virtual representation of physical networks, and generative AI, particularly focusing on Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs). We propose a novel architectural framework that incorporates these technologies to significantly improve predictive maintenance, network scenario simulation, and real-time data-driven decision-making. Through extensive simulations, we demonstrate how generative AI can enhance the accuracy and operational efficiency of network digital twins, effectively handling real-world complexities such as unpredictable traffic loads and network failures. The findings suggest that this integration not only boosts the capability of digital twins in scenario forecasting and anomaly detection but also facilitates a more adaptive and intelligent network management system. | [
"['Kassi Muhammad' 'Teef David' 'Giulia Nassisid' 'Tina Farus']"
] |
null | null | 2406.17114 | null | null | http://arxiv.org/pdf/2406.17114v1 | 2024-06-24T20:01:43Z | 2024-06-24T20:01:43Z | Inception: Efficiently Computable Misinformation Attacks on Markov Games | We study security threats to Markov games due to information asymmetry and misinformation. We consider an attacker player who can spread misinformation about its reward function to influence the robust victim player's behavior. Given a fixed fake reward function, we derive the victim's policy under worst-case rationality and present polynomial-time algorithms to compute the attacker's optimal worst-case policy based on linear programming and backward induction. Then, we provide an efficient inception ("planting an idea in someone's mind") attack algorithm to find the optimal fake reward function within a restricted set of reward functions with dominant strategies. Importantly, our methods exploit the universal assumption of rationality to compute attacks efficiently. Thus, our work exposes a security vulnerability arising from standard game assumptions under misinformation. | [
"['Jeremy McMahan' 'Young Wu' 'Yudong Chen' 'Xiaojin Zhu' 'Qiaomin Xie']"
] |
null | null | 2406.17119 | null | null | http://arxiv.org/pdf/2406.17119v2 | 2024-07-08T17:23:22Z | 2024-06-24T20:13:23Z | Accelerating Phase Field Simulations Through a Hybrid Adaptive Fourier
Neural Operator with U-Net Backbone | Prolonged contact between a corrosive liquid and metal alloys can cause progressive dealloying. For such a liquid-metal dealloying (LMD) process, phase field models have been developed. However, the governing equations often involve coupled non-linear partial differential equations (PDE), which are challenging to solve numerically. In particular, stiffness in the PDEs requires extremely small time steps (e.g. $10^{-12}$ or smaller). This computational bottleneck is especially problematic when running an LMD simulation until a late time horizon is required. This motivates the development of surrogate models capable of leaping forward in time, by skipping several consecutive time steps at once. In this paper, we propose U-Shaped Adaptive Fourier Neural Operators (U-AFNO), a machine learning (ML) model inspired by recent advances in neural operator learning. U-AFNO employs U-Nets for extracting and reconstructing local features within the physical fields, and passes the latent space through a vision transformer (ViT) implemented in the Fourier space (AFNO). We use U-AFNOs to learn the dynamics mapping the field at a current time step into a later time step. We also identify global quantities of interest (QoI) describing the corrosion process (e.g. the deformation of the liquid-metal interface) and show that our proposed U-AFNO model is able to accurately predict the field dynamics, in spite of the chaotic nature of LMD. Our model reproduces the key micro-structure statistics and QoIs with a level of accuracy on par with the high-fidelity numerical solver. We also investigate the opportunity of using hybrid simulations, in which we alternate forward leaps in time using the U-AFNO with high-fidelity time stepping. We demonstrate that while advantageous for some surrogate model design choices, our proposed U-AFNO model in fully auto-regressive settings consistently outperforms hybrid schemes. | [
"['Christophe Bonneville' 'Nathan Bieberdorf' 'Arun Hegde' 'Mark Asta'\n 'Habib N. Najm' 'Laurent Capolungo' 'Cosmin Safta']"
] |
null | null | 2406.17124 | null | null | http://arxiv.org/pdf/2406.17124v1 | 2024-06-24T20:21:38Z | 2024-06-24T20:21:38Z | Investigating Confidence Estimation Measures for Speaker Diarization | Speaker diarization systems segment a conversation recording based on the speakers' identity. Such systems can misclassify the speaker of a portion of audio due to a variety of factors, such as speech pattern variation, background noise, and overlapping speech. These errors propagate to, and can adversely affect, downstream systems that rely on the speaker's identity, such as speaker-adapted speech recognition. One of the ways to mitigate these errors is to provide segment-level diarization confidence scores to downstream systems. In this work, we investigate multiple methods for generating diarization confidence scores, including those derived from the original diarization system and those derived from an external model. Our experiments across multiple datasets and diarization systems demonstrate that the most competitive confidence score methods can isolate ~30% of the diarization errors within segments with the lowest ~10% of confidence scores. | [
"['Anurag Chowdhury' 'Abhinav Misra' 'Mark C. Fuhs' 'Monika Woszczyna']"
] |
null | null | 2406.17125 | null | null | http://arxiv.org/pdf/2406.17125v1 | 2024-06-24T20:27:13Z | 2024-06-24T20:27:13Z | A Wiener process perspective on local intrinsic dimension estimation
methods | Local intrinsic dimension (LID) estimation methods have received a lot of attention in recent years thanks to the progress in deep neural networks and generative modeling. In opposition to old non-parametric methods, new methods use generative models to approximate diffused dataset density and scale the methods to high-dimensional datasets like images. In this paper, we investigate the recent state-of-the-art parametric LID estimation methods from the perspective of the Wiener process. We explore how these methods behave when their assumptions are not met. We give an extended mathematical description of those methods and their error as a function of the probability density of the data. | [
"['Piotr Tempczyk' 'Łukasz Garncarek' 'Dominik Filipiak' 'Adam Kurpisz']"
] |
null | null | 2406.17126 | null | null | http://arxiv.org/pdf/2406.17126v1 | 2024-06-24T20:29:16Z | 2024-06-24T20:29:16Z | MM-SpuBench: Towards Better Understanding of Spurious Biases in
Multimodal LLMs | Spurious bias, a tendency to use spurious correlations between non-essential input attributes and target variables for predictions, has revealed a severe robustness pitfall in deep learning models trained on single modality data. Multimodal Large Language Models (MLLMs), which integrate both vision and language models, have demonstrated strong capability in joint vision-language understanding. However, whether spurious biases are prevalent in MLLMs remains under-explored. We mitigate this gap by analyzing the spurious biases in a multimodal setting, uncovering the specific test data patterns that can manifest this problem when biases in the vision model cascade into the alignment between visual and text tokens in MLLMs. To better understand this problem, we introduce MM-SpuBench, a comprehensive visual question-answering (VQA) benchmark designed to evaluate MLLMs' reliance on nine distinct categories of spurious correlations from five open-source image datasets. The VQA dataset is built from human-understandable concept information (attributes). Leveraging this benchmark, we conduct a thorough evaluation of current state-of-the-art MLLMs. Our findings illuminate the persistence of the reliance on spurious correlations from these models and underscore the urge for new methodologies to mitigate spurious biases. To support the MLLM robustness research, we release our VQA benchmark at https://huggingface.co/datasets/mmbench/MM-SpuBench. | [
"['Wenqian Ye' 'Guangtao Zheng' 'Yunsheng Ma' 'Xu Cao' 'Bolin Lai'\n 'James M. Rehg' 'Aidong Zhang']"
] |
null | null | 2406.17131 | null | null | http://arxiv.org/pdf/2406.17131v1 | 2024-06-24T20:41:37Z | 2024-06-24T20:41:37Z | Bayesian temporal biclustering with applications to multi-subject
neuroscience studies | We consider the problem of analyzing multivariate time series collected on multiple subjects, with the goal of identifying groups of subjects exhibiting similar trends in their recorded measurements over time as well as time-varying groups of associated measurements. To this end, we propose a Bayesian model for temporal biclustering featuring nested partitions, where a time-invariant partition of subjects induces a time-varying partition of measurements. Our approach allows for data-driven determination of the number of subject and measurement clusters as well as estimation of the number and location of changepoints in measurement partitions. To efficiently perform model fitting and posterior estimation with Markov Chain Monte Carlo, we derive a blocked update of measurements' cluster-assignment sequences. We illustrate the performance of our model in two applications to functional magnetic resonance imaging data and to an electroencephalogram dataset. The results indicate that the proposed model can combine information from potentially many subjects to discover a set of interpretable, dynamic patterns. Experiments on simulated data compare the estimation performance of the proposed model against ground-truth values and other statistical methods, showing that it performs well at identifying ground-truth subject and measurement clusters even when no subject or time dependence is present. | [
"['Federica Zoe Ricci' 'Erik B. Sudderth' 'Jaylen Lee' 'Megan A. K. Peters'\n 'Marina Vannucci' 'Michele Guindani']"
] |
null | null | 2406.17145 | null | null | http://arxiv.org/pdf/2406.17145v1 | 2024-06-24T21:32:51Z | 2024-06-24T21:32:51Z | GraphPipe: Improving Performance and Scalability of DNN Training with
Graph Pipeline Parallelism | Deep neural networks (DNNs) continue to grow rapidly in size, making them infeasible to train on a single device. Pipeline parallelism is commonly used in existing DNN systems to support large-scale DNN training by partitioning a DNN into multiple stages, which concurrently perform DNN training for different micro-batches in a pipeline fashion. However, existing pipeline-parallel approaches only consider sequential pipeline stages and thus ignore the topology of a DNN, resulting in missed model-parallel opportunities. This paper presents graph pipeline parallelism (GPP), a new pipeline-parallel scheme that partitions a DNN into pipeline stages whose dependencies are identified by a directed acyclic graph. GPP generalizes existing sequential pipeline parallelism and preserves the inherent topology of a DNN to enable concurrent execution of computationally-independent operators, resulting in reduced memory requirement and improved GPU performance. In addition, we develop GraphPipe, a distributed system that exploits GPP strategies to enable performant and scalable DNN training. GraphPipe partitions a DNN into a graph of stages, optimizes micro-batch schedules for these stages, and parallelizes DNN training using the discovered GPP strategies. Evaluation on a variety of DNNs shows that GraphPipe outperforms existing pipeline-parallel systems such as PipeDream and Piper by up to 1.6X. GraphPipe also reduces the search time by 9-21X compared to PipeDream and Piper. | [
"['Byungsoo Jeon' 'Mengdi Wu' 'Shiyi Cao' 'Sunghyun Kim' 'Sunghyun Park'\n 'Neeraj Aggarwal' 'Colin Unger' 'Daiyaan Arfeen' 'Peiyuan Liao'\n 'Xupeng Miao' 'Mohammad Alizadeh' 'Gregory R. Ganger' 'Tianqi Chen'\n 'Zhihao Jia']"
] |
null | null | 2406.17147 | null | null | http://arxiv.org/pdf/2406.17147v1 | 2024-06-24T21:38:13Z | 2024-06-24T21:38:13Z | Quantifying Heterogeneous Ecosystem Services With Multi-Label Soft
Classification | Understanding and quantifying ecosystem services are crucial for sustainable environmental management, conservation efforts, and policy-making. The advancement of remote sensing technology and machine learning techniques has greatly facilitated this process. Yet, ground truth labels, such as biodiversity, are very difficult and expensive to measure. In addition, more easily obtainable proxy labels, such as land use, often fail to capture the complex heterogeneity of the ecosystem. In this paper, we demonstrate how land use proxy labels can be implemented with a soft, multi-label classifier to predict ecosystem services with complex heterogeneity. | [
"['Zhihui Tian' 'John Upchurch' 'G. Austin Simon' 'José Dubeux'\n 'Alina Zare' 'Chang Zhao' 'Joel B. Harley']"
] |
null | null | 2406.17150 | null | null | http://arxiv.org/pdf/2406.17150v1 | 2024-06-24T21:44:37Z | 2024-06-24T21:44:37Z | Peirce in the Machine: How Mixture of Experts Models Perform Hypothesis
Construction | Mixture of experts is a prediction aggregation method in machine learning that aggregates the predictions of specialized experts. This method often outperforms Bayesian methods despite the Bayesian methods having stronger inductive guarantees. We argue that this is due to the greater functional capacity of mixture of experts. We prove that a limiting case of mixture of experts will have greater capacity than equivalent Bayesian methods, which we corroborate through experiments on non-limiting cases. Finally, we conclude that mixture of experts is a type of abductive reasoning in the Peircian sense of hypothesis construction. | [
"['Bruce Rushing']"
] |
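
The abstract above concerns mixture of experts as a prediction aggregation method. The sketch below shows the basic aggregation mechanism with an input-dependent softmax gate; the gate and the toy experts are illustrative assumptions rather than the limiting case analyzed in the paper.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def moe_predict(x, experts, gate_weights):
    """Aggregate expert predictions with input-dependent gating weights."""
    gate = softmax(gate_weights @ x)             # one weight per expert, depends on x
    preds = np.array([expert(x) for expert in experts])
    return float(gate @ preds)

experts = [lambda x: x.sum(), lambda x: x.mean(), lambda x: x.max()]
gate_weights = np.random.default_rng(0).normal(size=(3, 4))

print(moe_predict(np.ones(4), experts, gate_weights))
```
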
null | null | 2406.17162 | null | null | http://arxiv.org/pdf/2406.17162v1 | 2024-06-24T22:29:30Z | 2024-06-24T22:29:30Z | Virtual Mines -- Component-level recycling of printed circuit boards
using deep learning | This contribution gives an overview of an ongoing project using machine learning and computer vision components for improving the electronic waste recycling process. In circular economy, the "virtual mines" concept refers to production cycles where interesting raw materials are reclaimed in an efficient and cost-effective manner from end-of-life items. In particular, the growth of e-waste, due to the increasingly shorter life cycle of hi-tech goods, is a global problem. In this paper, we describe a pipeline based on deep learning model to recycle printed circuit boards at the component level. A pre-trained YOLOv5 model is used to analyze the results of the locally developed dataset. With a different distribution of class instances, YOLOv5 managed to achieve satisfactory precision and recall, with the ability to optimize with large component instances. | [
"['Muhammad Mohsin' 'Stefano Rovetta' 'Francesco Masulli' 'Alberto Cabri']"
] |
null | null | 2406.17163 | null | null | http://arxiv.org/pdf/2406.17163v1 | 2024-06-24T22:30:26Z | 2024-06-24T22:30:26Z | Paraphrase and Aggregate with Large Language Models for Minimizing
Intent Classification Errors | Large language models (LLMs) have achieved remarkable success in natural language generation but less focus has been given to their applicability in decision making tasks such as classification. We show that LLMs like LLaMa can achieve high performance on large multi-class classification tasks but still make classification errors and, worse, generate out-of-vocabulary class labels. To address these critical issues, we introduce the Paraphrase and AGgregate (PAG)-LLM approach wherein an LLM generates multiple paraphrases of the input query (parallel queries), performs multi-class classification for the original query and each paraphrase, and at the end aggregates all the classification labels based on their confidence scores. We evaluate PAG-LLM on two large multi-class classification datasets: CLINC and Banking, and show 22.7% and 15.1% error reduction. We show that PAG-LLM is especially effective for hard examples where the LLM is uncertain, and reduces the critical misclassification and hallucinated label generation errors. | [
"['Vikas Yadav' 'Zheng Tang' 'Vijay Srinivasan']"
] |
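
The PAG-LLM abstract above describes generating paraphrases, classifying each, and aggregating labels by confidence. The sketch below mirrors that flow; `paraphrase` and `classify` are hypothetical stand-ins for LLM calls, and the confidence-weighted vote is one plausible reading of the aggregation step.

```python
from collections import defaultdict

def classify(text):
    # Hypothetical LLM classifier: returns (intent label, confidence in [0, 1]).
    return ("transfer_money", 0.8) if "send" in text else ("check_balance", 0.6)

def paraphrase(text, n=2):
    # Hypothetical LLM paraphraser: returns n rewordings of the query.
    return [f"{text} (paraphrase {i})" for i in range(n)]

def pag_classify(query):
    """Classify the query and its paraphrases, then take a confidence-weighted vote."""
    candidates = [query] + paraphrase(query)
    scores = defaultdict(float)
    for text in candidates:
        label, confidence = classify(text)
        scores[label] += confidence
    return max(scores, key=scores.get)

print(pag_classify("send 20 dollars to Alice"))   # "transfer_money"
```
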
null | null | 2406.17167 | null | null | http://arxiv.org/pdf/2406.17167v1 | 2024-06-24T23:00:58Z | 2024-06-24T23:00:58Z | Learning on Transformers is Provable Low-Rank and Sparse: A One-layer
Analysis | Efficient training and inference algorithms, such as low-rank adaption and model pruning, have shown impressive performance for learning Transformer-based large foundation models. However, due to the technical challenges of the non-convex optimization caused by the complicated architecture of Transformers, the theoretical study of why these methods can be applied to learn Transformers is mostly elusive. To the best of our knowledge, this paper shows the first theoretical analysis of the property of low-rank and sparsity of one-layer Transformers by characterizing the trained model after convergence using stochastic gradient descent. By focusing on a data model based on label-relevant and label-irrelevant patterns, we quantify that the gradient updates of trainable parameters are low-rank, which depends on the number of label-relevant patterns. We also analyze how model pruning affects the generalization while improving computation efficiency and conclude that proper magnitude-based pruning has a slight effect on the testing performance. We implement numerical experiments to support our findings. | [
"['Hongkang Li' 'Meng Wang' 'Shuai Zhang' 'Sijia Liu' 'Pin-Yu Chen']"
] |
null | null | 2406.17168 | null | null | http://arxiv.org/pdf/2406.17168v1 | 2024-06-24T23:02:18Z | 2024-06-24T23:02:18Z | Reinforcement Learning via Auxiliary Task Distillation | We present Reinforcement Learning via Auxiliary Task Distillation (AuxDistill), a new method that enables reinforcement learning (RL) to perform long-horizon robot control problems by distilling behaviors from auxiliary RL tasks. AuxDistill achieves this by concurrently carrying out multi-task RL with auxiliary tasks, which are easier to learn and relevant to the main task. A weighted distillation loss transfers behaviors from these auxiliary tasks to solve the main task. We demonstrate that AuxDistill can learn a pixels-to-actions policy for a challenging multi-stage embodied object rearrangement task from the environment reward without demonstrations, a learning curriculum, or pre-trained skills. AuxDistill achieves $2.3\times$ higher success than the previous state-of-the-art baseline in the Habitat Object Rearrangement benchmark and outperforms methods that use pre-trained skills and expert demonstrations. | [
"['Abhinav Narayan Harish' 'Larry Heck' 'Josiah P. Hanna' 'Zsolt Kira'\n 'Andrew Szot']"
] |
null | null | 2406.17172 | null | null | http://arxiv.org/pdf/2406.17172v1 | 2024-06-24T23:15:19Z | 2024-06-24T23:15:19Z | Robust Zero Trust Architecture: Joint Blockchain based Federated
learning and Anomaly Detection based Framework | This paper introduces a robust zero-trust architecture (ZTA) tailored for the decentralized system that empowers efficient remote work and collaboration within IoT networks. Using blockchain-based federated learning principles, our proposed framework includes a robust aggregation mechanism designed to counteract malicious updates from compromised clients, enhancing the security of the global learning process. Moreover, secure and reliable trust computation is essential for remote work and collaboration. The robust ZTA framework integrates anomaly detection and trust computation, ensuring secure and reliable device collaboration in a decentralized fashion. We introduce an adaptive algorithm that dynamically adjusts to varying user contexts, using unsupervised clustering to detect novel anomalies, like zero-day attacks. To ensure a reliable and scalable trust computation, we develop an algorithm that dynamically adapts to varying user contexts by employing incremental anomaly detection and clustering techniques to identify and share local and global anomalies between nodes. Future directions include scalability improvements, Dirichlet process for advanced anomaly detection, privacy-preserving techniques, and the integration of post-quantum cryptographic methods to safeguard against emerging quantum threats. | [
"['Shiva Raj Pokhrel' 'Luxing Yang' 'Sutharshan Rajasegarar' 'Gang Li']"
] |
null | null | 2406.17173 | null | null | http://arxiv.org/pdf/2406.17173v2 | 2024-06-26T20:54:45Z | 2024-06-24T23:23:18Z | Diff3Dformer: Leveraging Slice Sequence Diffusion for Enhanced 3D CT
Classification with Transformer Networks | The manifestation of symptoms associated with lung diseases can vary in different depths for individual patients, highlighting the significance of 3D information in CT scans for medical image classification. While Vision Transformer has shown superior performance over convolutional neural networks in image classification tasks, their effectiveness is often demonstrated on sufficiently large 2D datasets and they easily encounter overfitting issues on small medical image datasets. To address this limitation, we propose a Diffusion-based 3D Vision Transformer (Diff3Dformer), which utilizes the latent space of the Diffusion model to form the slice sequence for 3D analysis and incorporates clustering attention into ViT to aggregate repetitive information within 3D CT scans, thereby harnessing the power of the advanced transformer in 3D classification tasks on small datasets. Our method exhibits improved performance on two different scales of small datasets of 3D lung CT scans, surpassing the state of the art 3D methods and other transformer-based approaches that emerged during the COVID-19 pandemic, demonstrating its robust and superior performance across different scales of data. Experimental results underscore the superiority of our proposed method, indicating its potential for enhancing medical image classification tasks in real-world scenarios. | [
"['Zihao Jin' 'Yingying Fang' 'Jiahao Huang' 'Caiwen Xu' 'Simon Walsh'\n 'Guang Yang']"
] |
null | null | 2406.17182 | null | null | http://arxiv.org/pdf/2406.17182v1 | 2024-06-24T23:42:18Z | 2024-06-24T23:42:18Z | Debiased Recommendation with Noisy Feedback | Ratings of a user to most items in recommender systems are usually missing not at random (MNAR), largely because users are free to choose which items to rate. To achieve unbiased learning of the prediction model under MNAR data, three typical solutions have been proposed, including error-imputation-based (EIB), inverse-propensity-scoring (IPS), and doubly robust (DR) methods. However, these methods ignore an alternative form of bias caused by the inconsistency between the observed ratings and the users' true preferences, also known as noisy feedback or outcome measurement errors (OME), e.g., due to public opinion or low-quality data collection process. In this work, we study intersectional threats to the unbiased learning of the prediction model from data MNAR and OME in the collected data. First, we design OME-EIB, OME-IPS, and OME-DR estimators, which largely extend the existing estimators to combat OME in real-world recommendation scenarios. Next, we theoretically prove the unbiasedness and generalization bound of the proposed estimators. We further propose an alternate denoising training approach to achieve unbiased learning of the prediction model under MNAR data with OME. Extensive experiments are conducted on three real-world datasets and one semi-synthetic dataset to show the effectiveness of our proposed approaches. The code is available at https://github.com/haoxuanli-pku/KDD24-OME-DR. | [
"['Haoxuan Li' 'Chunyuan Zheng' 'Wenjie Wang' 'Hao Wang' 'Fuli Feng'\n 'Xiao-Hua Zhou']"
] |
null | null | 2406.17184 | null | null | http://arxiv.org/pdf/2406.17184v1 | 2024-06-24T23:43:56Z | 2024-06-24T23:43:56Z | Minimax Optimality in Contextual Dynamic Pricing with General Valuation
Models | Dynamic pricing, the practice of adjusting prices based on contextual factors, has gained significant attention due to its impact on revenue maximization. In this paper, we address the contextual dynamic pricing problem, which involves pricing decisions based on observable product features and customer characteristics. We propose a novel algorithm that achieves improved regret bounds while minimizing assumptions about the problem. Our algorithm discretizes the unknown noise distribution and combines the upper confidence bounds with a layered data partitioning technique to effectively regulate regret in each episode. These techniques effectively control the regret associated with pricing decisions, leading to the minimax optimality. Specifically, our algorithm achieves a regret upper bound of $\tilde{\mathcal{O}}(\rho_{\mathcal{V}}^{\frac{1}{3}}(\delta) T^{\frac{2}{3}})$, where $\rho_{\mathcal{V}}(\delta)$ represents the estimation error of the valuation function. Importantly, this bound matches the lower bound up to logarithmic terms, demonstrating the minimax optimality of our approach. Furthermore, our method extends beyond linear valuation models commonly used in dynamic pricing by considering general function spaces. We simplify the estimation process by reducing it to general offline regression oracles, making implementation more straightforward. | [
"['Xueping Gong' 'Jiheng Zhang']"
] |
null | null | 2406.17188 | null | null | http://arxiv.org/pdf/2406.17188v1 | 2024-06-25T00:02:01Z | 2024-06-25T00:02:01Z | Geometric Median (GM) Matching for Robust Data Pruning | Data pruning, the combinatorial task of selecting a small and informative subset from a large dataset, is crucial for mitigating the enormous computational costs associated with training data-hungry modern deep learning models at scale. Since large-scale data collections are invariably noisy, developing data pruning strategies that remain robust even in the presence of corruption is critical in practice. Unfortunately, the existing heuristics for (robust) data pruning lack theoretical coherence and rely on heroic assumptions that are often unattainable by the very nature of the problem setting. Moreover, these strategies often yield sub-optimal neural scaling laws even compared to random sampling, especially in scenarios involving strong corruption and aggressive pruning rates -- making provably robust data pruning an open challenge. In response, in this work, we propose Geometric Median ($\gm$) Matching -- a herding~\citep{welling2009herding} style greedy algorithm -- that yields a $k$-subset such that the mean of the subset approximates the geometric median of the (potentially) noisy dataset. Theoretically, we show that $\gm$ Matching enjoys an improved $\gO(1/k)$ scaling over the $\gO(1/\sqrt{k})$ scaling of uniform sampling, while achieving the optimal breakdown point of 1/2 even under arbitrary corruption. Extensive experiments across popular deep learning benchmarks indicate that $\gm$ Matching consistently outperforms prior state-of-the-art; the gains become more profound at high rates of corruption and aggressive pruning rates, making $\gm$ Matching a strong baseline for future research in robust data pruning. | [
"['Anish Acharya' 'Inderjit S Dhillon' 'Sujay Sanghavi']"
] |
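
The abstract above combines a geometric-median estimate with a herding-style greedy selection so that the subset mean approximates the geometric median. The sketch below is an assumption-laden reading of that idea, using Weiszfeld iterations for the geometric median; it is not the authors' implementation.

```python
import numpy as np

def geometric_median(X, iters=100, eps=1e-8):
    """Weiszfeld iterations for the geometric median of the rows of X."""
    m = X.mean(axis=0)
    for _ in range(iters):
        d = np.linalg.norm(X - m, axis=1)
        w = 1.0 / np.maximum(d, eps)
        m = (w[:, None] * X).sum(axis=0) / w.sum()
    return m

def gm_match(X, k):
    """Greedily pick k points so the running subset mean stays close to the geometric median."""
    target = geometric_median(X)
    chosen, running_sum = [], np.zeros(X.shape[1])
    available = set(range(len(X)))
    for t in range(1, k + 1):
        best = min(available, key=lambda i: np.linalg.norm((running_sum + X[i]) / t - target))
        chosen.append(best)
        running_sum += X[best]
        available.remove(best)
    return chosen

X = np.random.default_rng(0).normal(size=(200, 5))
X[:20] += 50.0                    # a few grossly corrupted points
print(gm_match(X, k=10))          # indices of the selected subset (corrupted points typically avoided)
```
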
null | null | 2406.17190 | null | null | http://arxiv.org/pdf/2406.17190v1 | 2024-06-25T00:15:54Z | 2024-06-25T00:15:54Z | Sound Tagging in Infant-centric Home Soundscapes | Certain environmental noises have been associated with negative developmental outcomes for infants and young children. Though classifying or tagging sound events in a domestic environment is an active research area, previous studies focused on data collected from a non-stationary microphone placed in the environment or from the perspective of adults. Further, many of these works ignore infants or young children in the environment or have data collected from only a single family where noise from the fixed sound source can be moderate at the infant's position or vice versa. Thus, despite the recent success of large pre-trained models for noise event detection, the performance of these models on infant-centric noise soundscapes in the home is yet to be explored. To bridge this gap, we have collected and labeled noises in home soundscapes from 22 families in an unobtrusive manner, where the data are collected through an infant-worn recording device. In this paper, we explore the performance of a large pre-trained model (Audio Spectrogram Transformer [AST]) on our noise-conditioned infant-centric environmental data as well as publicly available home environmental datasets. Utilizing different training strategies such as resampling, utilizing public datasets, mixing public and infant-centric training sets, and data augmentation using noise and masking, we evaluate the performance of a large pre-trained model on sparse and imbalanced infant-centric data. Our results show that fine-tuning the large pre-trained model by combining our collected dataset with public datasets increases the F1-score from 0.11 (public datasets) and 0.76 (collected datasets) to 0.84 (combined datasets) and Cohen's Kappa from 0.013 (public datasets) and 0.77 (collected datasets) to 0.83 (combined datasets) compared to only training with public or collected datasets, respectively. | [
"['Mohammad Nur Hossain Khan' 'Jialu Li' 'Nancy L. McElwain'\n 'Mark Hasegawa-Johnson' 'Bashima Islam']"
] |
null | null | 2406.17199 | null | null | http://arxiv.org/pdf/2406.17199v1 | 2024-06-25T01:08:03Z | 2024-06-25T01:08:03Z | Contrastive General Graph Matching with Adaptive Augmentation Sampling | Graph matching has important applications in pattern recognition and beyond. Current approaches predominantly adopt supervised learning, demanding extensive labeled data which can be limited or costly. Meanwhile, self-supervised learning methods for graph matching often require additional side information such as extra categorical information and input features, limiting their application to the general case. Moreover, designing the optimal graph augmentations for self-supervised graph matching presents another challenge to ensure robustness and efficacy. To address these issues, we introduce a novel Graph-centric Contrastive framework for Graph Matching (GCGM), capitalizing on a vast pool of graph augmentations for contrastive learning, yet without needing any side information. Given the variety of augmentation choices, we further introduce a Boosting-inspired Adaptive Augmentation Sampler (BiAS), which adaptively selects more challenging augmentations tailored for graph matching. Through various experiments, our GCGM surpasses state-of-the-art self-supervised methods across various datasets, marking a significant step toward more effective, efficient and general graph matching. | [
"['Jianyuan Bo' 'Yuan Fang']"
] |
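
The GCGM abstract above introduces a boosting-inspired sampler (BiAS) that favors more challenging augmentations. The sketch below shows one plausible reading of that idea: augmentations whose recent loss is higher are drawn more often. The exponential weighting and running-average update are illustrative assumptions.

```python
import numpy as np

class AdaptiveAugmentationSampler:
    """Sample augmentations with probability increasing in their running difficulty."""

    def __init__(self, n_augmentations, temperature=1.0):
        self.difficulty = np.zeros(n_augmentations)   # running loss estimate per augmentation
        self.temperature = temperature

    def sample(self, rng):
        weights = np.exp(self.difficulty / self.temperature)
        return rng.choice(len(self.difficulty), p=weights / weights.sum())

    def update(self, aug_id, observed_loss, momentum=0.9):
        self.difficulty[aug_id] = momentum * self.difficulty[aug_id] + (1 - momentum) * observed_loss

rng = np.random.default_rng(0)
sampler = AdaptiveAugmentationSampler(n_augmentations=5)
for _ in range(200):
    aug = sampler.sample(rng)
    simulated_loss = 1.0 + 0.5 * aug          # pretend higher-index augmentations are harder
    sampler.update(aug, simulated_loss)

print(sampler.difficulty.round(2))            # harder augmentations accumulate higher scores
```
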
null | null | 2406.17216 | null | null | http://arxiv.org/pdf/2406.17216v1 | 2024-06-25T02:05:29Z | 2024-06-25T02:05:29Z | Machine Unlearning Fails to Remove Data Poisoning Attacks | We revisit the efficacy of several practical methods for approximate machine unlearning developed for large-scale deep learning. In addition to complying with data deletion requests, one often-cited potential application for unlearning methods is to remove the effects of training on poisoned data. We experimentally demonstrate that, while existing unlearning methods have been demonstrated to be effective in a number of evaluation settings (e.g., alleviating membership inference attacks), they fail to remove the effects of data poisoning, across a variety of types of poisoning attacks (indiscriminate, targeted, and a newly-introduced Gaussian poisoning attack) and models (image classifiers and LLMs); even when granted a relatively large compute budget. In order to precisely characterize unlearning efficacy, we introduce new evaluation metrics for unlearning based on data poisoning. Our results suggest that a broader perspective, including a wider variety of evaluations, is required to avoid a false sense of confidence in machine unlearning procedures for deep learning without provable guarantees. Moreover, while unlearning methods show some signs of being useful to efficiently remove poisoned datapoints without having to retrain, our work suggests that these methods are not yet "ready for prime time", and currently provide limited benefit over retraining. | [
"['Martin Pawelczyk' 'Jimmy Z. Di' 'Yiwei Lu' 'Gautam Kamath'\n 'Ayush Sekhari' 'Seth Neel']"
] |
null | null | 2406.17224 | null | null | http://arxiv.org/pdf/2406.17224v1 | 2024-06-25T02:18:15Z | 2024-06-25T02:18:15Z | Large Language Models are Interpretable Learners | The trade-off between expressiveness and interpretability remains a core challenge when building human-centric predictive models for classification and decision-making. While symbolic rules offer interpretability, they often lack expressiveness, whereas neural networks excel in performance but are known for being black boxes. In this paper, we show a combination of Large Language Models (LLMs) and symbolic programs can bridge this gap. In the proposed LLM-based Symbolic Programs (LSPs), the pretrained LLM with natural language prompts provides a massive set of interpretable modules that can transform raw input into natural language concepts. Symbolic programs then integrate these modules into an interpretable decision rule. To train LSPs, we develop a divide-and-conquer approach to incrementally build the program from scratch, where the learning process of each step is guided by LLMs. To evaluate the effectiveness of LSPs in extracting interpretable and accurate knowledge from data, we introduce IL-Bench, a collection of diverse tasks, including both synthetic and real-world scenarios across different modalities. Empirical results demonstrate LSP's superior performance compared to traditional neurosymbolic programs and vanilla automatic prompt tuning methods. Moreover, as the knowledge learned by LSP is a combination of natural language descriptions and symbolic rules, it is easily transferable to humans (interpretable), and other LLMs, and generalizes well to out-of-distribution samples. | [
"['Ruochen Wang' 'Si Si' 'Felix Yu' 'Dorothea Wiesmann' 'Cho-Jui Hsieh'\n 'Inderjit Dhillon']"
] |
null | null | 2406.17228 | null | null | http://arxiv.org/pdf/2406.17228v1 | 2024-06-25T02:31:32Z | 2024-06-25T02:31:32Z | Greedy equivalence search for nonparametric graphical models | One of the hallmark achievements of the theory of graphical models and Bayesian model selection is the celebrated greedy equivalence search (GES) algorithm due to Chickering and Meek. GES is known to consistently estimate the structure of directed acyclic graph (DAG) models in various special cases including Gaussian and discrete models, which are in particular curved exponential families. A general theory that covers general nonparametric DAG models, however, is missing. Here, we establish the consistency of greedy equivalence search for general families of DAG models that satisfy smoothness conditions on the Markov factorization, and hence may not be curved exponential families, or even parametric. The proof leverages recent advances in nonparametric Bayes to construct a test for comparing misspecified DAG models that avoids arguments based on the Laplace approximation. Nonetheless, when the Laplace approximation is valid and a consistent scoring function exists, we recover the classical result. As a result, we obtain a general consistency theorem for GES applied to general DAG models. | [
"['Bryon Aragam']"
] |
null | null | 2406.17229 | null | null | http://arxiv.org/pdf/2406.17229v1 | 2024-06-25T02:35:37Z | 2024-06-25T02:35:37Z | Self-Supervised Embeddings for Detecting Individual Symptoms of
Depression | Depression, a prevalent mental health disorder impacting millions globally, demands reliable assessment systems. Unlike previous studies that focus solely on either detecting depression or predicting its severity, our work identifies individual symptoms of depression while also predicting its severity using speech input. We leverage self-supervised learning (SSL)-based speech models to better utilize the small-sized datasets that are frequently encountered in this task. Our study demonstrates notable performance improvements by utilizing SSL embeddings compared to conventional speech features. We compare various types of SSL pretrained models to elucidate the type of speech information (semantic, speaker, or prosodic) that contributes the most in identifying different symptoms. Additionally, we evaluate the impact of combining multiple SSL embeddings on performance. Furthermore, we show the significance of multi-task learning for identifying depressive symptoms effectively. | [
"['Sri Harsha Dumpala' 'Katerina Dikaios' 'Abraham Nunes' 'Frank Rudzicz'\n 'Rudolf Uher' 'Sageev Oore']"
] |
null | null | 2406.17238 | null | null | http://arxiv.org/pdf/2406.17238v1 | 2024-06-25T02:59:02Z | 2024-06-25T02:59:02Z | Expansive Synthesis: Generating Large-Scale Datasets from Minimal
Samples | The challenge of limited availability of data for training in machine learning arises in many applications and the impact on performance and generalization is serious. Traditional data augmentation methods aim to enhance training with a moderately sufficient data set. Generative models like Generative Adversarial Networks (GANs) often face problematic convergence when generating significant and diverse data samples. Diffusion models, though effective, still struggle with high computational cost and long training times. This paper introduces an innovative Expansive Synthesis model that generates large-scale, high-fidelity datasets from minimal samples. The proposed approach exploits expander graph mappings and feature interpolation to synthesize expanded datasets while preserving the intrinsic data distribution and feature structural relationships. The rationale of the model is rooted in the non-linear property of neural networks' latent space and in its capture by a Koopman operator to yield a linear space of features to facilitate the construction of larger and enriched consistent datasets starting with a much smaller dataset. This process is optimized by an autoencoder architecture enhanced with self-attention layers and further refined for distributional consistency by optimal transport. We validate our Expansive Synthesis by training classifiers on the generated datasets and comparing their performance to classifiers trained on larger, original datasets. Experimental results demonstrate that classifiers trained on synthesized data achieve performance metrics on par with those trained on full-scale datasets, showcasing the model's potential to effectively augment training data. This work represents a significant advancement in data generation, offering a robust solution to data scarcity and paving the way for enhanced data availability in machine learning applications. | [
"['Vahid Jebraeeli' 'Bo Jiang' 'Hamid Krim' 'Derya Cansever']"
] |
null | null | 2406.17245 | null | null | http://arxiv.org/pdf/2406.17245v1 | 2024-06-25T03:24:06Z | 2024-06-25T03:24:06Z | Unlocking Continual Learning Abilities in Language Models | Language models (LMs) exhibit impressive performance and generalization capabilities. However, LMs struggle with the persistent challenge of catastrophic forgetting, which undermines their long-term sustainability in continual learning (CL). Existing approaches usually address the issue by incorporating old task data or task-wise inductive bias into LMs. However, old data and accurate task information are often unavailable or costly to collect, hindering the availability of current CL approaches for LMs. To address this limitation, we introduce $\textbf{MIGU}$ ($\textbf{M}$agn$\textbf{I}$tude-based $\textbf{G}$radient $\textbf{U}$pdating for continual learning), a rehearsal-free and task-label-free method that only updates the model parameters with large magnitudes of output in LMs' linear layers. MIGU is based on our observation that the L1-normalized magnitude distribution of the output in LMs' linear layers is different when the LM models deal with different task data. By imposing this simple constraint on the gradient update process, we can leverage the inherent behaviors of LMs, thereby unlocking their innate CL abilities. Our experiments demonstrate that MIGU is universally applicable to all three LM architectures (T5, RoBERTa, and Llama2), delivering state-of-the-art or on-par performance across continual finetuning and continual pre-training settings on four CL benchmarks. For example, MIGU brings a 15.2% average accuracy improvement over conventional parameter-efficient finetuning baselines in a 15-task CL benchmark. MIGU can also seamlessly integrate with all three existing CL types to further enhance performance. Code is available at \href{https://github.com/wenyudu/MIGU}{this https URL}. | [
"['Wenyu Du' 'Shuang Cheng' 'Tongxu Luo' 'Zihan Qiu' 'Zeyu Huang'\n 'Ka Chun Cheung' 'Reynold Cheng' 'Jie Fu']"
] |
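
The MIGU abstract above describes updating only the parameters associated with large L1-normalized output magnitudes in linear layers. The sketch below gates the gradients of a single linear layer by per-unit output magnitude; the keep ratio and masking granularity are assumptions, not MIGU's exact procedure.

```python
import torch

def magnitude_mask(linear, x, keep_ratio=0.3):
    """Return a 0/1 mask over output units based on L1-normalized output magnitude."""
    with torch.no_grad():
        out = linear(x)                              # (batch, out_features)
        magnitude = out.abs().mean(dim=0)            # per-unit magnitude
        magnitude = magnitude / magnitude.sum()      # L1-normalize
        k = max(1, int(keep_ratio * magnitude.numel()))
        mask = torch.zeros_like(magnitude)
        mask[magnitude.topk(k).indices] = 1.0
    return mask

layer = torch.nn.Linear(16, 8)
x = torch.randn(4, 16)
layer(x).pow(2).mean().backward()                    # toy loss to populate gradients

mask = magnitude_mask(layer, x)
layer.weight.grad *= mask.unsqueeze(1)               # drop updates for low-magnitude units
layer.bias.grad *= mask

print(mask)                                          # 1s mark the units that keep their updates
```
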
null | null | 2406.17251 | null | null | http://arxiv.org/pdf/2406.17251v1 | 2024-06-25T03:35:20Z | 2024-06-25T03:35:20Z | TopoGCL: Topological Graph Contrastive Learning | Graph contrastive learning (GCL) has recently emerged as a new concept which allows for capitalizing on the strengths of graph neural networks (GNNs) to learn rich representations in a wide variety of applications which involve abundant unlabeled information. However, existing GCL approaches largely tend to overlook the important latent information on higher-order graph substructures. We address this limitation by introducing the concepts of topological invariance and extended persistence on graphs to GCL. In particular, we propose a new contrastive mode which targets topological representations of the two augmented views from the same graph, yielded by extracting latent shape properties of the graph at multiple resolutions. Along with the extended topological layer, we introduce a new extended persistence summary, namely, extended persistence landscapes (EPL) and derive its theoretical stability guarantees. Our extensive numerical results on biological, chemical, and social interaction graphs show that the new Topological Graph Contrastive Learning (TopoGCL) model delivers significant performance gains in unsupervised graph classification for 11 out of 12 considered datasets and also exhibits robustness under noisy scenarios. | [
"['Yuzhou Chen' 'Jose Frias' 'Yulia R. Gel']"
] |
null | null | 2406.17263 | null | null | http://arxiv.org/pdf/2406.17263v1 | 2024-06-25T04:07:22Z | 2024-06-25T04:07:22Z | Efficient, Multimodal, and Derivative-Free Bayesian Inference With
Fisher-Rao Gradient Flows | In this paper, we study efficient approximate sampling for probability distributions known up to normalization constants. We specifically focus on a problem class arising in Bayesian inference for large-scale inverse problems in science and engineering applications. The computational challenges we address with the proposed methodology are: (i) the need for repeated evaluations of expensive forward models; (ii) the potential existence of multiple modes; and (iii) the fact that gradient of, or adjoint solver for, the forward model might not be feasible. While existing Bayesian inference methods meet some of these challenges individually, we propose a framework that tackles all three systematically. Our approach builds upon the Fisher-Rao gradient flow in probability space, yielding a dynamical system for probability densities that converges towards the target distribution at a uniform exponential rate. This rapid convergence is advantageous for the computational burden outlined in (i). We apply Gaussian mixture approximations with operator splitting techniques to simulate the flow numerically; the resulting approximation can capture multiple modes thus addressing (ii). Furthermore, we employ the Kalman methodology to facilitate a derivative-free update of these Gaussian components and their respective weights, addressing the issue in (iii). The proposed methodology results in an efficient derivative-free sampler flexible enough to handle multi-modal distributions: Gaussian Mixture Kalman Inversion (GMKI). The effectiveness of GMKI is demonstrated both theoretically and numerically in several experiments with multimodal target distributions, including proof-of-concept and two-dimensional examples, as well as a large-scale application: recovering the Navier-Stokes initial condition from solution data at positive times. | [
"['Yifan Chen' 'Daniel Zhengyu Huang' 'Jiaoyang Huang' 'Sebastian Reich'\n 'Andrew M. Stuart']"
] |
null | null | 2406.17266 | null | null | http://arxiv.org/pdf/2406.17266v1 | 2024-06-25T04:20:49Z | 2024-06-25T04:20:49Z | AG-LSEC: Audio Grounded Lexical Speaker Error Correction | Speaker Diarization (SD) systems are typically audio-based and operate independently of the ASR system in traditional speech transcription pipelines and can have speaker errors due to SD and/or ASR reconciliation, especially around speaker turns and regions of speech overlap. To reduce these errors, a Lexical Speaker Error Correction (LSEC), in which an external language model provides lexical information to correct the speaker errors, was recently proposed. Though the approach achieves good Word Diarization error rate (WDER) improvements, it does not use any additional acoustic information and is prone to miscorrections. In this paper, we propose to enhance and acoustically ground the LSEC system with speaker scores directly derived from the existing SD pipeline. This approach achieves significant relative WDER reductions in the range of 25-40% over the audio-based SD, ASR system and beats the LSEC system by 15-25% relative on RT03-CTS, Callhome American English and Fisher datasets. | [
"['Rohit Paturi' 'Xiang Li' 'Sundararajan Srinivasan']"
] |
null | null | 2406.17272 | null | null | http://arxiv.org/pdf/2406.17272v1 | 2024-06-25T04:35:50Z | 2024-06-25T04:35:50Z | A Comprehensive Solution to Connect Speech Encoder and Large Language
Model for ASR | Recent works have shown promising results in connecting speech encoders to large language models (LLMs) for speech recognition. However, several limitations persist, including limited fine-tuning options, a lack of mechanisms to enforce speech-text alignment, and high insertion errors especially in domain mismatch conditions. This paper presents a comprehensive solution to address these issues. We begin by investigating more thoughtful fine-tuning schemes. Next, we propose a matching loss to enhance alignment between modalities. Finally, we explore training and inference methods to mitigate high insertion errors. Experimental results on the Librispeech corpus demonstrate that partially fine-tuning the encoder and LLM using parameter-efficient methods, such as LoRA, is the most cost-effective approach. Additionally, the matching loss improves modality alignment, enhancing performance. The proposed training and inference methods significantly reduce insertion errors. | [
"['Van Tung Pham' 'Yist Lin' 'Tao Han' 'Wei Li' 'Jun Zhang' 'Lu Lu'\n 'Yuxuan Wang']"
] |
null | null | 2406.17274 | null | null | http://arxiv.org/pdf/2406.17274v1 | 2024-06-25T04:41:17Z | 2024-06-25T04:41:17Z | Can We Trust the Performance Evaluation of Uncertainty Estimation
Methods in Text Summarization? | Text summarization, a key natural language generation (NLG) task, is vital in various domains. However, the high cost of inaccurate summaries in risk-critical applications, particularly those involving human-in-the-loop decision-making, raises concerns about the reliability of uncertainty estimation on text summarization (UE-TS) evaluation methods. This concern stems from the dependency of uncertainty model metrics on diverse and potentially conflicting NLG metrics. To address this issue, we introduce a comprehensive UE-TS benchmark incorporating 31 NLG metrics across four dimensions. The benchmark evaluates the uncertainty estimation capabilities of two large language models and one pre-trained language model on three datasets, with human-annotation analysis incorporated where applicable. We also assess the performance of 14 common uncertainty estimation methods within this benchmark. Our findings emphasize the importance of considering multiple uncorrelated NLG metrics and diverse uncertainty estimation methods to ensure reliable and efficient evaluation of UE-TS techniques. | [
"['Jianfeng He' 'Runing Yang' 'Linlin Yu' 'Changbin Li' 'Ruoxi Jia'\n 'Feng Chen' 'Ming Jin' 'Chang-Tien Lu']"
] |
null | null | 2406.17281 | null | null | http://arxiv.org/pdf/2406.17281v1 | 2024-06-25T05:12:51Z | 2024-06-25T05:12:51Z | Distance Recomputator and Topology Reconstructor for Graph Neural
Networks | This paper introduces novel methodologies, the Distance Recomputator and Topology Reconstructor, aimed at enhancing Graph Neural Networks (GNNs). The Distance Recomputator dynamically recalibrates node distances within k-hop neighborhoods using a dynamic encoding scheme, thereby improving the accuracy and adaptability of node representations. Concurrently, the Topology Reconstructor adjusts local graph structures based on computed "similarity distances," optimizing network configurations for improved learning outcomes. These methods address the limitations of static node representations and fixed aggregation schemes in traditional GNNs, offering a more nuanced approach to modeling complex and dynamic graph topologies. Furthermore, our experimental evaluations demonstrate significant performance advantages over existing methods across various benchmark datasets. The proposed Distance Recomputator and Topology Reconstructor not only enhance node relationship modeling accuracy but also optimize information aggregation efficiency through an asynchronous aggregation mechanism. This approach proves particularly effective in scenarios involving dynamic or large-scale graphs, showcasing the methods' robustness and applicability in real-world graph learning tasks. | [
"['Dong Liu' 'Meng Jiang']"
] |
null | null | 2406.17285 | null | null | http://arxiv.org/pdf/2406.17285v1 | 2024-06-25T05:23:41Z | 2024-06-25T05:23:41Z | EON-1: A Brain-Inspired Processor for Near-Sensor Extreme Edge Online
Feature Extraction | For Edge AI applications, deploying online learning and adaptation on resource-constrained embedded devices can deal with fast sensor-generated streams of data in changing environments. However, since maintaining low-latency and power-efficient inference is paramount at the Edge, online learning and adaptation on the device should impose minimal additional overhead for inference. With this goal in mind, we explore energy-efficient learning and adaptation on-device for streaming-data Edge AI applications using Spiking Neural Networks (SNNs), which follow the principles of brain-inspired computing, such as high-parallelism, neuron co-located memory and compute, and event-driven processing. We propose EON-1, a brain-inspired processor for near-sensor extreme edge online feature extraction, that integrates a fast online learning and adaptation algorithm. We report results of only 1% energy overhead for learning, by far the lowest overhead when compared to other SoTA solutions, while attaining comparable inference accuracy. Furthermore, we demonstrate that EON-1 is up for the challenge of low-latency processing of HD and UHD streaming video in real-time, with learning enabled. | [
"['Alexandra Dobrita' 'Amirreza Yousefzadeh' 'Simon Thorpe'\n 'Kanishkan Vadivel' 'Paul Detterer' 'Guangzhi Tang' 'Gert-Jan van Schaik'\n 'Mario Konijnenburg' 'Anteneh Gebregiorgis' 'Said Hamdioui'\n 'Manolis Sifalakis']"
] |
null | null | 2406.17295 | null | null | http://arxiv.org/pdf/2406.17295v2 | 2024-06-28T13:28:04Z | 2024-06-25T05:45:07Z | MatText: Do Language Models Need More than Text & Scale for Materials
Modeling? | Effectively representing materials as text has the potential to leverage the vast advancements of large language models (LLMs) for discovering new materials. While LLMs have shown remarkable success in various domains, their application to materials science remains underexplored. A fundamental challenge is the lack of understanding of how to best utilize text-based representations for materials modeling. This challenge is further compounded by the absence of a comprehensive benchmark to rigorously evaluate the capabilities and limitations of these text representations in capturing the complexity of material systems. To address this gap, we propose MatText, a suite of benchmarking tools and datasets designed to systematically evaluate the performance of language models in modeling materials. MatText encompasses nine distinct text-based representations for material systems, including several novel representations. Each representation incorporates unique inductive biases that capture relevant information and integrate prior physical knowledge about materials. Additionally, MatText provides essential tools for training and benchmarking the performance of language models in the context of materials science. These tools include standardized dataset splits for each representation, probes for evaluating sensitivity to geometric factors, and tools for seamlessly converting crystal structures into text. Using MatText, we conduct an extensive analysis of the capabilities of language models in modeling materials. Our findings reveal that current language models consistently struggle to capture the geometric information crucial for materials modeling across all representations. Instead, these models tend to leverage local information, which is emphasized in some of our novel representations. Our analysis underscores MatText's ability to reveal shortcomings of text-based methods for materials design. | [
"['Nawaf Alampara' 'Santiago Miret' 'Kevin Maik Jablonka']"
] |
null | null | 2406.17296 | null | null | http://arxiv.org/pdf/2406.17296v1 | 2024-06-25T05:45:12Z | 2024-06-25T05:45:12Z | BlockLLM: Memory-Efficient Adaptation of LLMs by Selecting and
Optimizing the Right Coordinate Blocks | Training large language models (LLMs) for pretraining or adapting to new tasks and domains has become increasingly critical as their applications expand. However, as the model and the data sizes grow, the training process presents significant memory challenges, often requiring a prohibitive amount of GPU memory that may not be readily available. Existing methods such as low-rank adaptation (LoRA) add trainable low-rank matrix factorizations, altering the training dynamics and limiting the model's parameter search to a low-rank subspace. GaLore, a more recent method, employs Gradient Low-Rank Projection to reduce the memory footprint in the full-parameter training setting. However, GaLore can only be applied to a subset of the LLM layers that satisfy the "reversibility" property, thus limiting its applicability. In response to these challenges, we introduce BlockLLM, an approach inspired by block coordinate descent. Our method carefully selects and updates a very small subset of the trainable parameters without altering any part of the model's architecture or training procedure. BlockLLM achieves state-of-the-art performance in both finetuning and pretraining tasks, while reducing the memory footprint of the underlying optimization process. Our experiments demonstrate that, by fine-tuning less than 5% of the parameters, BlockLLM achieves state-of-the-art perplexity scores on the GLUE benchmarks. On a Llama model pretrained on the C4 dataset, BlockLLM is able to train with significantly less memory than the state-of-the-art, while still maintaining competitive performance. | [
"['Amrutha Varshini Ramesh' 'Vignesh Ganapathiraman' 'Issam H. Laradji'\n 'Mark Schmidt']"
] |
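The block coordinate descent idea behind BlockLLM can be illustrated with a small PyTorch sketch (not the authors' code): after a probing backward pass, only the parameter tensors with the largest gradient norms stay trainable and the rest are frozen. The `keep_ratio`, selection criterion, and toy model here are illustrative assumptions.

```python
import torch
import torch.nn as nn

def select_trainable_blocks(model: nn.Module, loss: torch.Tensor, keep_ratio: float = 0.05):
    """Freeze all parameter tensors except the top-`keep_ratio` fraction by gradient norm."""
    loss.backward()  # probing pass to obtain gradients
    scores = {name: p.grad.norm().item()
              for name, p in model.named_parameters() if p.grad is not None}
    n_keep = max(1, int(len(scores) * keep_ratio))
    keep = set(sorted(scores, key=scores.get, reverse=True)[:n_keep])
    for name, p in model.named_parameters():
        p.requires_grad_(name in keep)
    model.zero_grad(set_to_none=True)
    return keep

# Toy usage: a small MLP standing in for an LLM.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
x, y = torch.randn(8, 16), torch.randint(0, 4, (8,))
loss = nn.functional.cross_entropy(model(x), y)
kept = select_trainable_blocks(model, loss, keep_ratio=0.25)
optimizer = torch.optim.AdamW([p for p in model.parameters() if p.requires_grad], lr=1e-3)
```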
null | null | 2406.17298 | null | null | http://arxiv.org/pdf/2406.17298v1 | 2024-06-25T06:04:58Z | 2024-06-25T06:04:58Z | Towards Efficient and Scalable Training of Differentially Private Deep
Learning | Differentially private stochastic gradient descent (DP-SGD) is the standard algorithm for training machine learning models under differential privacy (DP). The major drawback of DP-SGD is the drop in utility which prior work has comprehensively studied. However, in practice another major drawback that hinders the large-scale deployment is the significantly higher computational cost. We conduct a comprehensive empirical study to quantify the computational cost of training deep learning models under DP and benchmark methods that aim at reducing the cost. Among these are more efficient implementations of DP-SGD and training with lower precision. Finally, we study the scaling behaviour using up to 80 GPUs. | [
"['Sebastian Rodriguez Beltran' 'Marlon Tobaben' 'Niki Loppi'\n 'Antti Honkela']"
] |
null | null | 2406.17308 | null | null | http://arxiv.org/pdf/2406.17308v1 | 2024-06-25T06:41:09Z | 2024-06-25T06:41:09Z | Improving Realized LGD Approximation: A Novel Framework with XGBoost for
Handling Missing Cash-Flow Data | The scope for the accurate calculation of the Loss Given Default (LGD) parameter is comprehensive in terms of financial data. In this research, we aim to explore methods for improving the approximation of realized LGD under conditions of limited access to cash-flow data. We enhance the performance of the method that relies on the differences between exposure values (delta outstanding approach) by employing machine learning (ML) techniques. The research utilizes data from the mortgage portfolio of one of the European countries and assumes a close resemblance to similar economic contexts. It incorporates non-financial variables and macroeconomic data related to the housing market, improving the accuracy of loss severity approximation. The proposed methodology attempts to mitigate the country-specific (related to the local legal framework) or portfolio-specific factors in order to show the general advantage of applying ML techniques rather than a case-specific relation. We developed an XGBoost model that does not rely on cash-flow data yet enhances the accuracy of realized LGD estimation compared to results obtained with the delta outstanding approach. A novel aspect of our work is the detailed exploration of the delta outstanding approach and the methodology for addressing conditions of limited access to cash-flow data through machine learning models. | [
"['Zuzanna Kostecka' 'Robert Ślepaczuk']"
] |
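As a rough illustration of this modelling setup (the feature names, synthetic data, and hyperparameters below are assumptions, not taken from the study), an XGBoost regressor can approximate realized LGD from exposure, collateral, and macro features without cash-flow data:

```python
import numpy as np
from xgboost import XGBRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Hypothetical loan-level features; the study's actual variables are not reproduced here.
rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.uniform(1e4, 5e5, n),      # exposure at default
    rng.uniform(0.2, 1.5, n),      # loan-to-value ratio
    rng.uniform(-0.1, 0.1, n),     # house-price index change (macro proxy)
    rng.integers(0, 2, n),         # non-financial flag, e.g. legal-process indicator
])
y = np.clip(0.3 * X[:, 1] - 2.0 * X[:, 2] + rng.normal(0, 0.05, n), 0, 1)  # synthetic LGD in [0, 1]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
model.fit(X_tr, y_tr)
print("MAE:", mean_absolute_error(y_te, model.predict(X_te)))
```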
null | null | 2406.17316 | null | null | http://arxiv.org/abs/2406.17316v1 | 2024-06-25T06:57:47Z | 2024-06-25T06:57:47Z | A review of unsupervised learning in astronomy | This review summarizes popular unsupervised learning methods, and gives an overview of their past, current, and future uses in astronomy. Unsupervised learning aims to organise the information content of a dataset, in such a way that knowledge can be extracted. Traditionally this has been achieved through dimensionality reduction techniques that aid the ranking of a dataset, for example through principal component analysis or by using auto-encoders, or simpler visualisation of a high dimensional space, for example through the use of a self organising map. Other desirable properties of unsupervised learning include the identification of clusters, i.e. groups of similar objects, which has traditionally been achieved by the k-means algorithm and more recently through density-based clustering such as HDBSCAN. More recently, complex frameworks have emerged, that chain together dimensionality reduction and clustering methods. However, no dataset is fully unknown. Thus, nowadays a lot of research has been directed towards self-supervised and semi-supervised methods that stand to gain from both supervised and unsupervised learning. | [
"['Sotiria Fotopoulou']"
] |
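A minimal sketch of the dimensionality-reduction-plus-clustering pipeline discussed in this review, using PCA and k-means on synthetic data (the review also covers auto-encoders, self-organising maps, and density-based clustering such as HDBSCAN; the data here is purely illustrative):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Synthetic "catalogue": 1000 objects with 20 correlated features.
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 20)) @ rng.normal(size=(20, 20))

# Reduce dimensionality, then look for groups of similar objects.
Z = PCA(n_components=5).fit_transform(X)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(Z)
print(np.bincount(labels))
```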
null | null | 2406.17322 | null | null | http://arxiv.org/pdf/2406.17322v1 | 2024-06-25T07:14:14Z | 2024-06-25T07:14:14Z | ALPBench: A Benchmark for Active Learning Pipelines on Tabular Data | In settings where only a budgeted amount of labeled data can be afforded, active learning seeks to devise query strategies for selecting the most informative data points to be labeled, aiming to enhance learning algorithms' efficiency and performance. Numerous such query strategies have been proposed and compared in the active learning literature. However, the community still lacks standardized benchmarks for comparing the performance of different query strategies. This particularly holds for the combination of query strategies with different learning algorithms into active learning pipelines and examining the impact of the learning algorithm choice. To close this gap, we propose ALPBench, which facilitates the specification, execution, and performance monitoring of active learning pipelines. It has built-in measures to ensure evaluations are done reproducibly, saving exact dataset splits and hyperparameter settings of used algorithms. In total, ALPBench consists of 86 real-world tabular classification datasets and 5 active learning settings, yielding 430 active learning problems. To demonstrate its usefulness and broad compatibility with various learning algorithms and query strategies, we conduct an exemplary study evaluating 9 query strategies paired with 8 learning algorithms in 2 different settings. We provide ALPBench here: https://github.com/ValentinMargraf/ActiveLearningPipelines. | [
"['Valentin Margraf' 'Marcel Wever' 'Sandra Gilhuber'\n 'Gabriel Marques Tavares' 'Thomas Seidl' 'Eyke Hüllermeier']"
] |
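The kind of active learning pipeline that ALPBench benchmarks can be sketched as a loop pairing a learner with a query strategy; least-confidence uncertainty sampling with a random forest is shown below as one illustrative combination (this is not ALPBench's API, and the budget and dataset are assumptions).

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
labeled = list(range(20))                      # small initial labeled pool
pool = [i for i in range(len(y)) if i not in labeled]

for _ in range(10):                            # 10 query rounds, 10 labels per round
    clf = RandomForestClassifier(random_state=0).fit(X[labeled], y[labeled])
    proba = clf.predict_proba(X[pool])
    uncertainty = 1.0 - proba.max(axis=1)      # least-confidence query strategy
    queried = np.argsort(uncertainty)[-10:]    # most uncertain pool indices
    labeled += [pool[i] for i in queried]
    pool = [i for j, i in enumerate(pool) if j not in set(queried)]

print("accuracy on the remaining pool:", clf.score(X[pool], y[pool]))
```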
null | null | 2406.17323 | null | null | http://arxiv.org/pdf/2406.17323v1 | 2024-06-25T07:14:15Z | 2024-06-25T07:14:15Z | XAMI -- A Benchmark Dataset for Artefact Detection in XMM-Newton Optical
Images | Reflected or scattered light produce artefacts in astronomical observations that can negatively impact the scientific study. Hence, automated detection of these artefacts is highly beneficial, especially with the increasing amounts of data gathered. Machine learning methods are well-suited to this problem, but currently there is a lack of annotated data to train such approaches to detect artefacts in astronomical observations. In this work, we present a dataset of images from the XMM-Newton space telescope Optical Monitoring camera showing different types of artefacts. We hand-annotated a sample of 1000 images with artefacts which we use to train automated ML methods. We further demonstrate techniques tailored for accurate detection and masking of artefacts using instance segmentation. We adopt a hybrid approach, combining knowledge from both convolutional neural networks (CNNs) and transformer-based models and use their advantages in segmentation. The presented method and dataset will advance artefact detection in astronomical observations by providing a reproducible baseline. All code and data are made available (https://github.com/ESA-Datalabs/XAMI-model and https://github.com/ESA-Datalabs/XAMI-dataset). | [
"['Elisabeta-Iulia Dima' 'Pablo Gómez' 'Sandor Kruk' 'Peter Kretschmar'\n 'Simon Rosen' 'Călin-Adrian Popa']"
] |
null | null | 2406.17335 | null | null | http://arxiv.org/pdf/2406.17335v1 | 2024-06-25T07:45:00Z | 2024-06-25T07:45:00Z | A Thorough Performance Benchmarking on Lightweight Embedding-based
Recommender Systems | Since the creation of the Web, recommender systems (RSs) have been an indispensable mechanism in information filtering. State-of-the-art RSs primarily depend on categorical features, which are encoded by embedding vectors, resulting in excessively large embedding tables. To prevent over-parameterized embedding tables from harming scalability, both academia and industry have seen increasing efforts in compressing RS embeddings. However, despite the prosperity of lightweight embedding-based RSs (LERSs), a wide diversity is seen in evaluation protocols, resulting in obstacles when relating LERS performance to real-world usability. Moreover, despite the common goal of lightweight embeddings, LERSs are evaluated with a single choice between the two main recommendation tasks -- collaborative filtering and content-based recommendation. This lack of discussion on cross-task transferability hinders the development of unified, more scalable solutions. Motivated by these issues, this study investigates various LERSs' performance, efficiency, and cross-task transferability via a thorough benchmarking process. Additionally, we propose an efficient embedding compression method using magnitude pruning, which is an easy-to-deploy yet highly competitive baseline that outperforms various complex LERSs. Our study reveals the distinct performance of LERSs across the two tasks, shedding light on their effectiveness and generalizability. To support edge-based recommendations, we tested all LERSs on a Raspberry Pi 4, where the efficiency bottleneck is exposed. Finally, we conclude this paper with critical summaries of LERS performance, model selection suggestions, and underexplored challenges around LERSs for future research. To encourage future research, we publish source code and artifacts at https://github.com/chenxing1999/recsys-benchmark. | [
"['Hung Vinh Tran' 'Tong Chen' 'Quoc Viet Hung Nguyen' 'Zi Huang'\n 'Lizhen Cui' 'Hongzhi Yin']"
] |
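The magnitude-pruning baseline highlighted in this abstract can be sketched in a few lines of PyTorch (illustrative only; the sparsity level and embedding table sizes are assumptions, not the benchmark's settings):

```python
import torch
import torch.nn as nn

def magnitude_prune_embedding(emb: nn.Embedding, sparsity: float = 0.8) -> nn.Embedding:
    """Zero out the smallest-magnitude entries of an embedding table."""
    with torch.no_grad():
        w = emb.weight
        k = int(w.numel() * sparsity)
        if k > 0:
            threshold = w.abs().flatten().kthvalue(k).values
            w.mul_((w.abs() > threshold).float())  # keep only large-magnitude weights
    return emb

emb = nn.Embedding(num_embeddings=10000, embedding_dim=64)
emb = magnitude_prune_embedding(emb, sparsity=0.8)
print("non-zero fraction:", (emb.weight != 0).float().mean().item())
```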
null | null | 2406.17338 | null | null | http://arxiv.org/pdf/2406.17338v1 | 2024-06-25T07:50:09Z | 2024-06-25T07:50:09Z | Robustly Optimized Deep Feature Decoupling Network for Fatty Liver
Diseases Detection | Current medical image classification efforts mainly aim for higher average performance, often neglecting the balance between different classes. This can lead to significant differences in recognition accuracy between classes and obvious recognition weaknesses. Without the support of massive data, deep learning faces challenges in fine-grained classification of fatty liver. In this paper, we propose an innovative deep learning framework that combines feature decoupling and adaptive adversarial training. Firstly, we employ two iteratively compressed decouplers to decouple, in a supervised manner, common features and specific features related to fatty liver in abdominal ultrasound images. Subsequently, the decoupled features are concatenated with the original image after transforming the color space and are fed into the classifier. During adversarial training, we adaptively adjust the perturbation and balance the adversarial strength by the accuracy of each class. The model will eliminate recognition weaknesses by correctly classifying adversarial samples, thus improving recognition robustness. Finally, the accuracy of our method improved by 4.16%, achieving 82.95%. As demonstrated by extensive experiments, our method is a generalized learning framework that can be directly used to eliminate the recognition weaknesses of any classifier while improving its average performance. Code is available at https://github.com/HP-ML/MICCAI2024. | [
"['Peng Huang' 'Shu Hu' 'Bo Peng' 'Jiashu Zhang' 'Xi Wu' 'Xin Wang']"
] |
null | null | 2406.17341 | null | null | http://arxiv.org/pdf/2406.17341v1 | 2024-06-25T07:54:32Z | 2024-06-25T07:54:32Z | Generative Modelling of Structurally Constrained Graphs | Graph diffusion models have emerged as state-of-the-art techniques in graph generation, yet integrating domain knowledge into these models remains challenging. Domain knowledge is particularly important in real-world scenarios, where invalid generated graphs hinder deployment in practical applications. Unconstrained and conditioned graph generative models fail to guarantee such domain-specific structural properties. We present ConStruct, a novel framework that allows for hard-constraining graph diffusion models to incorporate specific properties, such as planarity or acyclicity. Our approach ensures that the sampled graphs remain within the domain of graphs that verify the specified property throughout the entire trajectory in both the forward and reverse processes. This is achieved by introducing a specific edge-absorbing noise model and a new projector operator. ConStruct demonstrates versatility across several structural and edge-deletion invariant constraints and achieves state-of-the-art performance for both synthetic benchmarks and attributed real-world datasets. For example, by leveraging planarity in digital pathology graph datasets, the proposed method outperforms existing baselines and enhances generated data validity by up to 71.1 percentage points. | [
"['Manuel Madeira' 'Clement Vignac' 'Dorina Thanou' 'Pascal Frossard']"
] |
null | null | 2406.17346 | null | null | http://arxiv.org/pdf/2406.17346v1 | 2024-06-25T07:59:29Z | 2024-06-25T07:59:29Z | Stacked Confusion Reject Plots (SCORE) | Machine learning is increasingly applied in critical application areas like health and driver assistance. To minimize the risk of wrong decisions, in such applications it is necessary to consider the certainty of a classification to reject uncertain samples. An established tool for this is the reject curve, which visualizes the trade-off between the number of rejected samples and classification performance metrics. We argue that common reject curves are too abstract and hard for non-experts to interpret. We propose Stacked Confusion Reject Plots (SCORE) that offer a more intuitive understanding of the used data and the classifier's behavior. We present example plots on artificial Gaussian data to document the different options of SCORE and provide the code as a Python package. | [
"['Stephan Hasler' 'Lydia Fischer']"
] |
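The underlying accuracy-reject trade-off that plots like SCORE build on can be computed as below (a generic sketch of a reject curve from classifier confidences, not the SCORE package itself; the synthetic confidences are assumptions):

```python
import numpy as np

def reject_curve(confidences, correct):
    """Accuracy on the retained samples as the least-confident ones are rejected."""
    order = np.argsort(confidences)              # reject low-confidence samples first
    correct = np.asarray(correct, dtype=float)[order]
    rates, accs = [], []
    n = len(correct)
    for r in range(n):                           # r = number of rejected samples
        kept = correct[r:]
        rates.append(r / n)
        accs.append(kept.mean())
    return np.array(rates), np.array(accs)

rng = np.random.default_rng(0)
conf = rng.uniform(0.5, 1.0, 500)
correct = rng.uniform(size=500) < conf           # higher confidence -> more often correct
rates, accs = reject_curve(conf, correct)
print(f"accuracy at 0% rejection: {accs[0]:.3f}, at 50% rejection: {accs[250]:.3f}")
```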
null | null | 2406.17352 | null | null | http://arxiv.org/pdf/2406.17352v1 | 2024-06-25T08:11:22Z | 2024-06-25T08:11:22Z | Development of a digital tool for monitoring the behaviour of pre-weaned
calves using accelerometer neck-collars | Automatic monitoring of calf behaviour is a promising way of assessing animal welfare from their first week on farms. This study aims to (i) develop machine learning models from accelerometer data to classify the main behaviours of pre-weaned calves and (ii) set up a digital tool for monitoring the behaviour of pre-weaned calves from the models' predictions. Thirty pre-weaned calves were equipped with a 3-D accelerometer attached to a neck-collar for two months and filmed simultaneously. The behaviours were annotated, resulting in 27.4 hours of observation aligned with the accelerometer data. The time-series were then split into 3-second windows. Two machine learning models were tuned using data from 80% of the calves: (i) a Random Forest model to classify between active and inactive behaviours using a set of 11 hand-crafted features [model 1] and (ii) a RidgeClassifierCV model to classify between lying, running, drinking milk and other behaviours using ROCKET features [model 2]. The performance of the models was tested using data from the remaining 20% of the calves. Model 1 achieved a balanced accuracy of 0.92. Model 2 achieved a balanced accuracy of 0.84. Behavioural metrics such as daily activity ratio and episodes of running, lying, drinking milk, and other behaviours expressed over time were deduced from the predictions. All the development was finally embedded into a Python dashboard so that the individual calf metrics could be displayed directly from the raw accelerometer files. | [
"['Oshana Dissanayake' 'Sarah E. Mcpherson' 'Joseph Allyndrée'\n 'Emer Kennedy' 'Pádraig Cunningham' 'Lucile Riaboff']"
] |
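A toy version of the first modelling step (3-second windows of 3-axis accelerometer data, hand-crafted features, random forest for active vs. inactive) might look as follows; the sampling rate, feature set, and synthetic labels are assumptions for illustration, not the study's actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

FS = 25  # assumed sampling rate (Hz); 3-second windows -> 75 samples per axis

def window_features(window):
    """A few hand-crafted features per 3-axis accelerometer window."""
    feats = []
    for axis in range(3):
        sig = window[:, axis]
        feats += [sig.mean(), sig.std(), np.abs(np.diff(sig)).mean()]
    feats.append(np.linalg.norm(window, axis=1).mean())  # mean vector magnitude
    return feats

rng = np.random.default_rng(0)
windows = rng.normal(size=(400, 3 * FS, 3))               # 400 synthetic 3-second windows
labels = rng.integers(0, 2, 400)                          # 0 = inactive, 1 = active (synthetic)
X = np.array([window_features(w) for w in windows])

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[:300], labels[:300])
print("held-out accuracy:", clf.score(X[300:], labels[300:]))
```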
null | null | 2406.17374 | null | null | http://arxiv.org/pdf/2406.17374v1 | 2024-06-25T08:49:07Z | 2024-06-25T08:49:07Z | Generalizability of experimental studies | Experimental studies are a cornerstone of machine learning (ML) research. A common, but often implicit, assumption is that the results of a study will generalize beyond the study itself, e.g. to new data. That is, there is a high probability that repeating the study under different conditions will yield similar results. Despite the importance of the concept, the problem of measuring generalizability remains open. This is probably due to the lack of a mathematical formalization of experimental studies. In this paper, we propose such a formalization and develop a quantifiable notion of generalizability. This notion allows to explore the generalizability of existing studies and to estimate the number of experiments needed to achieve the generalizability of new studies. To demonstrate its usefulness, we apply it to two recently published benchmarks to discern generalizable and non-generalizable results. We also publish a Python module that allows our analysis to be repeated for other experimental studies. | [
"['Federico Matteucci' 'Vadim Arzamasov' 'Jose Cribeiro-Ramallo'\n 'Marco Heyden' 'Konstantin Ntounas' 'Klemens Böhm']"
] |
null | null | 2406.17381 | null | null | http://arxiv.org/pdf/2406.17381v1 | 2024-06-25T08:57:47Z | 2024-06-25T08:57:47Z | Forget but Recall: Incremental Latent Rectification in Continual
Learning | Intrinsic capability to continuously learn a changing data stream is a desideratum of deep neural networks (DNNs). However, current DNNs suffer from catastrophic forgetting, which hinders remembering past knowledge. To mitigate this issue, existing Continual Learning (CL) approaches either retain exemplars for replay, regularize learning, or allocate dedicated capacity for new tasks. This paper investigates an unexplored CL direction for incremental learning called Incremental Latent Rectification or ILR. In a nutshell, ILR learns to propagate with correction (or rectify) the representation from the current trained DNN backward to the representation space of the old task, where performing predictive decisions is easier. This rectification process only employs a chain of small representation mapping networks, called rectifier units. Empirical experiments on several continual learning benchmarks, including CIFAR10, CIFAR100, and Tiny ImageNet, demonstrate the effectiveness and potential of this novel CL direction compared to existing representative CL methods. | [
"['Nghia D. Nguyen' 'Hieu Trung Nguyen' 'Ang Li' 'Hoang Pham'\n 'Viet Anh Nguyen' 'Khoa D. Doan']"
] |
null | null | 2406.17386 | null | null | http://arxiv.org/pdf/2406.17386v1 | 2024-06-25T09:05:22Z | 2024-06-25T09:05:22Z | Double Momentum Method for Lower-Level Constrained Bilevel Optimization | Bilevel optimization (BO) has recently gained prominence in many machine learning applications due to its ability to capture the nested structure inherent in these problems. Recently, many hypergradient methods have been proposed as effective solutions for solving large-scale problems. However, current hypergradient methods for lower-level constrained bilevel optimization (LCBO) problems need very restrictive assumptions, namely that the optimality conditions satisfy differentiability and invertibility conditions, and they lack a solid analysis of the convergence rate. What's worse, existing methods require double-loop updates, which are sometimes less efficient. To solve this problem, in this paper, we propose a new hypergradient for LCBO leveraging the nonsmooth implicit function theorem instead of using the restrictive assumptions. In addition, we propose a single-loop, single-timescale algorithm based on the double-momentum method and an adaptive step size method, and prove that it can return a $(\delta, \epsilon)$-stationary point with $\tilde{\mathcal{O}}(d_2^2\epsilon^{-4})$ iterations. Experiments on two applications demonstrate the effectiveness of our proposed method. | [
"['Wanli Shi' 'Yi Chang' 'Bin Gu']"
] |
null | null | 2406.17399 | null | null | http://arxiv.org/pdf/2406.17399v1 | 2024-06-25T09:23:25Z | 2024-06-25T09:23:25Z | GradCheck: Analyzing classifier guidance gradients for conditional
diffusion sampling | To sample from an unconditionally trained Denoising Diffusion Probabilistic Model (DDPM), classifier guidance adds conditional information during sampling, but the gradients from classifiers, especially those not trained on noisy images, are often unstable. This study conducts a gradient analysis comparing robust and non-robust classifiers, as well as multiple gradient stabilization techniques. Experimental results demonstrate that these techniques significantly improve the quality of class-conditional samples for non-robust classifiers by providing more stable and informative classifier guidance gradients. The findings highlight the importance of gradient stability in enhancing the performance of classifier guidance, especially on non-robust classifiers. | [
"['Philipp Vaeth' 'Alexander M. Fruehwald' 'Benjamin Paassen'\n 'Magda Gregorova']"
] |
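One of the gradient stabilization techniques this kind of study compares can be sketched as normalizing the classifier-guidance gradient before adding it to the diffusion update (a generic sketch; the per-sample rescaling, guidance weight, and toy classifier are assumptions, not the paper's exact formulation).

```python
import torch

def stabilized_guidance_grad(classifier, x_t, target_class, guidance_scale=5.0, eps=1e-8):
    """Gradient of log p(y|x_t) w.r.t. x_t, rescaled to unit norm per sample for stability."""
    x_t = x_t.detach().requires_grad_(True)
    log_probs = torch.log_softmax(classifier(x_t), dim=-1)
    selected = log_probs[range(x_t.shape[0]), target_class].sum()
    grad = torch.autograd.grad(selected, x_t)[0]
    norm = grad.flatten(1).norm(dim=1).clamp_min(eps).view(-1, *([1] * (grad.dim() - 1)))
    return guidance_scale * grad / norm   # unit-norm guidance direction, fixed strength

# Toy usage with a linear "classifier" on 8x8 single-channel inputs.
classifier = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(64, 10))
x_t = torch.randn(4, 1, 8, 8)
g = stabilized_guidance_grad(classifier, x_t, target_class=torch.tensor([3, 1, 0, 7]))
print(g.shape)
```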
null | null | 2406.17404 | null | null | http://arxiv.org/pdf/2406.17404v1 | 2024-06-25T09:25:39Z | 2024-06-25T09:25:39Z | Make Some Noise: Unlocking Language Model Parallel Inference Capability
through Noisy Training | Existing speculative decoding methods typically require additional model structure and training processes to assist the model with draft token generation. This makes the migration of acceleration methods to new models more costly and more demanding on device memory. To address this problem, we propose the Make Some Noise (MSN) training framework as a replacement for the supervised fine-tuning stage of the large language model. The training method simply introduces some noise at the input for the model to learn the denoising task. It significantly enhances the parallel decoding capability of the model without affecting the original task capability. In addition, we propose a tree-based retrieval-augmented Jacobi (TR-Jacobi) decoding strategy to further improve the inference speed of MSN models. Experiments in both the general and code domains have shown that MSN can improve inference speed by 2.3-2.7x without compromising model performance. The MSN model also achieves comparable acceleration ratios to the SOTA model with additional model structure on Spec-Bench. | [
"['Yixuan Wang' 'Xianzhen Luo' 'Fuxuan Wei' 'Yijun Liu' 'Qingfu Zhu'\n 'Xuanyu Zhang' 'Qing Yang' 'Dongliang Xu' 'Wanxiang Che']"
] |
null | null | 2406.17415 | null | null | http://arxiv.org/pdf/2406.17415v2 | 2024-06-26T08:00:18Z | 2024-06-25T09:37:15Z | Layer-Wise Quantization: A Pragmatic and Effective Method for Quantizing
LLMs Beyond Integer Bit-Levels | We present a simple variable quantization approach that quantizes different layers of a large language model (LLM) at different bit levels. Specifically, we quantize the most important layers to higher bit precision and less important layers to lower bits to achieve floating point quantization levels. We propose two effective strategies to measure the importance of layers within LLMs: the first measures the importance of a layer based on how different its output embeddings are from the input embeddings (the higher the better); the second estimates the importance of a layer using the number of layer weights that are much larger than average (the smaller the better). We show that quantizing different layers at varying bits according to our importance scores results in minimal performance drop with a far more compressed model size. Finally, we present several practical key takeaways from our variable layer-wise quantization experiments: (a) LLM performance under variable quantization remains close to that of the original model until 25-50% of layers are moved to lower quantization using our proposed ordering, but only until 5-10% if layers are moved with no specific ordering; (b) quantizing LLMs to lower bits performs substantially better than pruning unless extreme quantization (2-bit) is used; and (c) layer-wise quantization to lower bits works better for larger LLMs with more layers than for smaller LLMs with fewer layers. The code used to run the experiments is available at: https://github.com/RazvanDu/LayerwiseQuant. | [
"['Razvan-Gabriel Dumitru' 'Vikas Yadav' 'Rishabh Maheshwary'\n 'Paul-Ioan Clotan' 'Sathwik Tejaswi Madhusudhan' 'Mihai Surdeanu']"
] |
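The first importance score described in this abstract (how much a layer changes its input representation) can be approximated with a simple measurement loop; the stand-in layers, cosine-distance metric details, and two-tier bit assignment below are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

def layer_importance_by_change(layers, x):
    """Score each layer by how different its output is from its input (cosine distance)."""
    scores = []
    h = x
    for layer in layers:
        out = layer(h)
        cos = nn.functional.cosine_similarity(out.flatten(1), h.flatten(1), dim=1).mean()
        scores.append(1.0 - cos.item())   # higher = layer changes the representation more
        h = out
    return scores

# Toy stack of linear "layers" standing in for transformer blocks.
torch.manual_seed(0)
layers = [nn.Linear(32, 32) for _ in range(6)]
scores = layer_importance_by_change(layers, torch.randn(4, 32))

# Assign higher bit-widths to more important layers (e.g. top half 8-bit, rest 4-bit).
order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
bits = {i: (8 if rank < len(order) // 2 else 4) for rank, i in enumerate(order)}
print(bits)
```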
null | null | 2406.17418 | null | null | http://arxiv.org/pdf/2406.17418v1 | 2024-06-25T09:40:47Z | 2024-06-25T09:40:47Z | SE-VGAE: Unsupervised Disentangled Representation Learning for
Interpretable Architectural Layout Design Graph Generation | Despite the suitability of graphs for capturing the relational structures inherent in architectural layout designs, there is a notable dearth of research on interpreting architectural design space using graph-based representation learning and exploring architectural design graph generation. Concurrently, disentangled representation learning in graph generation faces challenges such as node permutation invariance and representation expressiveness. To address these challenges, we introduce an unsupervised disentangled representation learning framework, Style-based Edge-augmented Variational Graph Auto-Encoder (SE-VGAE), aiming to generate architectural layout in the form of attributed adjacency multi-graphs while prioritizing representation disentanglement. The framework is designed with three alternative pipelines, each integrating a transformer-based edge-augmented encoder, a latent space disentanglement module, and a style-based decoder. These components collectively facilitate the decomposition of latent factors influencing architectural layout graph generation, enhancing generation fidelity and diversity. We also provide insights into optimizing the framework by systematically exploring graph feature augmentation schemes and evaluating their effectiveness for disentangling architectural layout representation through extensive experiments. Additionally, we contribute a new benchmark large-scale architectural layout graph dataset extracted from real-world floor plan images to facilitate the exploration of graph data-based architectural design representation space interpretation. This study pioneered disentangled representation learning for the architectural layout graph generation. The code and dataset of this study will be open-sourced. | [
"['Jielin Chen' 'Rudi Stouffs']"
] |
null | null | 2406.17425 | null | null | http://arxiv.org/pdf/2406.17425v1 | 2024-06-25T09:59:31Z | 2024-06-25T09:59:31Z | CuDA2: An approach for Incorporating Traitor Agents into Cooperative
Multi-Agent Systems | Cooperative Multi-Agent Reinforcement Learning (CMARL) strategies are well known to be vulnerable to adversarial perturbations. Previous works on adversarial attacks have primarily focused on white-box attacks that directly perturb the states or actions of victim agents, often in scenarios with a limited number of attacks. However, gaining complete access to victim agents in real-world environments is exceedingly difficult. To create more realistic adversarial attacks, we introduce a novel method that involves injecting traitor agents into the CMARL system. We model this problem as a Traitor Markov Decision Process (TMDP), where traitors cannot directly attack the victim agents but can influence their formation or positioning through collisions. In TMDP, traitors are trained using the same MARL algorithm as the victim agents, with their reward function set as the negative of the victim agents' reward. Despite this, the training efficiency for traitors remains low because it is challenging for them to directly associate their actions with the victim agents' rewards. To address this issue, we propose the Curiosity-Driven Adversarial Attack (CuDA2) framework. CuDA2 enhances the efficiency and aggressiveness of attacks on the specified victim agents' policies while maintaining the optimal policy invariance of the traitors. Specifically, we employ a pre-trained Random Network Distillation (RND) module, where the extra reward generated by the RND module encourages traitors to explore states unencountered by the victim agents. Extensive experiments on various scenarios from SMAC demonstrate that our CuDA2 framework offers comparable or superior adversarial attack capabilities compared to other baselines. | [
"['Zhen Chen' 'Yong Liao' 'Youpeng Zhao' 'Zipeng Dai' 'Jian Zhao']"
] |
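The Random Network Distillation (RND) module used here to generate the extra exploration reward for the traitors follows a standard pattern: a frozen random target network and a trained predictor, with the prediction error serving as intrinsic reward. Below is a generic RND sketch (network sizes, learning rate, and observation dimension are assumptions, not the paper's settings).

```python
import torch
import torch.nn as nn

class RND(nn.Module):
    """Intrinsic reward = error of a trained predictor against a fixed random target network."""
    def __init__(self, obs_dim: int, feat_dim: int = 64):
        super().__init__()
        self.target = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(), nn.Linear(128, feat_dim))
        self.predictor = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(), nn.Linear(128, feat_dim))
        for p in self.target.parameters():
            p.requires_grad_(False)           # target network stays random and frozen

    def intrinsic_reward(self, obs: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():
            t = self.target(obs)
        return (self.predictor(obs) - t).pow(2).mean(dim=-1)   # high on rarely visited states

rnd = RND(obs_dim=16)
opt = torch.optim.Adam(rnd.predictor.parameters(), lr=1e-4)
obs = torch.randn(32, 16)
reward = rnd.intrinsic_reward(obs)   # detached values would be added to the traitors' RL reward
loss = reward.mean()                 # training the predictor shrinks the error on visited states
opt.zero_grad(); loss.backward(); opt.step()
print(reward.shape)
```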
null | null | 2406.17427 | null | null | http://arxiv.org/pdf/2406.17427v1 | 2024-06-25T10:06:07Z | 2024-06-25T10:06:07Z | A Critical Analysis of the Theoretical Framework of the Extreme Learning
Machine | Despite the number of successful applications of the Extreme Learning Machine (ELM), we show that its underlying foundational principles do not have a rigorous mathematical justification. Specifically, we refute the proofs of two main statements, and we also create a dataset that provides a counterexample to the ELM learning algorithm and explain its design, which leads to many such counterexamples. Finally, we provide alternative statements of the foundations, which justify the efficiency of ELM in some theoretical cases. | [
"['Irina Perfilievaa' 'Nicolas Madrid' 'Manuel Ojeda-Aciego'\n 'Piotr Artiemjew' 'Agnieszka Niemczynowicz']"
] |
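For reference, the standard ELM training procedure under scrutiny in this paper is very short: a random, fixed hidden layer followed by a least-squares fit of the output weights. Below is a generic ELM sketch on synthetic data (not the paper's counterexample construction; the hidden size and activation are assumptions).

```python
import numpy as np

def train_elm(X, y, n_hidden=200, seed=0):
    """Extreme Learning Machine: random hidden layer, least-squares output layer."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))      # random input weights, never trained
    b = rng.normal(size=n_hidden)                    # random biases
    H = np.tanh(X @ W + b)                           # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)     # only the output weights are fitted
    return W, b, beta

def predict_elm(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 5))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=500)
W, b, beta = train_elm(X, y)
print("train MSE:", float(np.mean((predict_elm(X, W, b, beta) - y) ** 2)))
```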
null | null | 2406.17433 | null | null | http://arxiv.org/pdf/2406.17433v1 | 2024-06-25T10:16:19Z | 2024-06-25T10:16:19Z | Mind the Graph When Balancing Data for Fairness or Robustness | Failures of fairness or robustness in machine learning predictive settings can be due to undesired dependencies between covariates, outcomes and auxiliary factors of variation. A common strategy to mitigate these failures is data balancing, which attempts to remove those undesired dependencies. In this work, we define conditions on the training distribution for data balancing to lead to fair or robust models. Our results display that, in many cases, the balanced distribution does not correspond to selectively removing the undesired dependencies in a causal graph of the task, leading to multiple failure modes and even interference with other mitigation techniques such as regularization. Overall, our results highlight the importance of taking the causal graph into account before performing data balancing. | [
"['Jessica Schrouff' 'Alexis Bellot' 'Amal Rannen-Triki' 'Alan Malek'\n 'Isabela Albuquerque' 'Arthur Gretton' \"Alexander D'Amour\"\n 'Silvia Chiappa']"
] |
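Data balancing in this sense typically means reweighting (or resampling) so that the outcome and an auxiliary attribute become independent in the training distribution. A minimal reweighting sketch is shown below; the group structure, variable names, and synthetic data are assumptions, and the paper's point is precisely that such balancing may not match the causal graph of the task.

```python
import numpy as np

def balancing_weights(y, a):
    """Weights that make label y independent of attribute a in the weighted data."""
    y, a = np.asarray(y), np.asarray(a)
    w = np.empty(len(y), dtype=float)
    for yv in np.unique(y):
        for av in np.unique(a):
            mask = (y == yv) & (a == av)
            target = (y == yv).mean() * (a == av).mean()   # target joint = p(y) * p(a)
            observed = mask.mean()                         # observed joint = p(y, a)
            w[mask] = target / observed if observed > 0 else 0.0
    return w

rng = np.random.default_rng(0)
a = rng.integers(0, 2, 1000)
y = (rng.uniform(size=1000) < 0.3 + 0.4 * a).astype(int)   # y correlated with a
w = balancing_weights(y, a)
# The weighted covariance between y and a is (approximately) removed.
print(np.cov(y, a)[0, 1], np.cov(y, a, aweights=w)[0, 1])
```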
null | null | 2406.17467 | null | null | http://arxiv.org/pdf/2406.17467v1 | 2024-06-25T11:12:52Z | 2024-06-25T11:12:52Z | Early learning of the optimal constant solution in neural networks and
humans | Deep neural networks learn increasingly complex functions over the course of training. Here, we show both empirically and theoretically that learning of the target function is preceded by an early phase in which networks learn the optimal constant solution (OCS) - that is, initial model responses mirror the distribution of target labels, while entirely ignoring information provided in the input. Using a hierarchical category learning task, we derive exact solutions for learning dynamics in deep linear networks trained with bias terms. Even when initialized to zero, this simple architectural feature induces substantial changes in early dynamics. We identify hallmarks of this early OCS phase and illustrate how these signatures are observed in deep linear networks and larger, more complex (and nonlinear) convolutional neural networks solving a hierarchical learning task based on MNIST and CIFAR10. We explain these observations by proving that deep linear networks necessarily learn the OCS during early learning. To further probe the generality of our results, we train human learners over the course of three days on the category learning task. We then identify qualitative signatures of this early OCS phase in terms of the dynamics of true negative (correct-rejection) rates. Surprisingly, we find the same early reliance on the OCS in the behaviour of human learners. Finally, we show that learning of the OCS can emerge even in the absence of bias terms and is equivalently driven by generic correlations in the input data. Overall, our work suggests the OCS as a universal learning principle in supervised, error-corrective learning, and the mechanistic reasons for its prevalence. | [
"['Jirko Rubruck' 'Jan P. Bauer' 'Andrew Saxe' 'Christopher Summerfield']"
] |
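The "optimal constant solution" itself is easy to compute: it is the single output that minimizes the loss while ignoring the input, i.e. the mean target (the empirical label distribution) for squared error. The snippet below illustrates this on synthetic one-hot targets; it is not the paper's experiment, and the class frequencies are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_classes = 4
labels = rng.choice(n_classes, size=1000, p=[0.5, 0.3, 0.1, 0.1])   # imbalanced targets
Y = np.eye(n_classes)[labels]                                        # one-hot targets

# Optimal constant solution for squared error: the mean target vector,
# i.e. the empirical label distribution, independent of any input.
ocs = Y.mean(axis=0)
print("OCS prediction:", np.round(ocs, 3))

# Its loss lower-bounds any input-ignoring predictor; early network outputs mirror it.
constant_loss = np.mean(np.sum((Y - ocs) ** 2, axis=1))
print("MSE of the OCS:", round(float(constant_loss), 3))
```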