categories: string
doi: string
id: string
year: float64
venue: string
link: string
updated: string
published: string
title: string
abstract: string
authors: list
null
null
2405.02413
null
null
http://arxiv.org/pdf/2405.02413v1
2024-05-03T18:14:29Z
2024-05-03T18:14:29Z
A Unified Framework for Human-Allied Learning of Probabilistic Circuits
Probabilistic Circuits (PCs) have emerged as an efficient framework for representing and learning complex probability distributions. Nevertheless, the existing body of research on PCs predominantly concentrates on data-driven parameter learning, often neglecting the potential of knowledge-intensive learning, a particular issue in data-scarce/knowledge-rich domains such as healthcare. To bridge this gap, we propose a novel unified framework that can systematically integrate diverse domain knowledge into the parameter learning process of PCs. Experiments on several benchmarks as well as real-world datasets show that our proposed framework can both effectively and efficiently leverage domain knowledge to achieve superior performance compared to purely data-driven learning approaches.
[ "['Athresh Karanam' 'Saurabh Mathur' 'Sahil Sidheekh' 'Sriraam Natarajan']" ]
null
null
2405.02429
null
null
http://arxiv.org/pdf/2405.02429v1
2024-05-03T18:51:19Z
2024-05-03T18:51:19Z
CALRec: Contrastive Alignment of Generative LLMs For Sequential Recommendation
Traditional recommender systems such as matrix factorization methods rely on learning a shared dense embedding space to represent both items and user preferences. Sequence models such as RNNs, GRUs, and, more recently, Transformers have also excelled in the task of sequential recommendation. This task requires understanding the sequential structure present in users' historical interactions to predict the next item they may like. Building upon the success of Large Language Models (LLMs) in a variety of tasks, researchers have recently explored using LLMs that are pretrained on vast corpora of text for sequential recommendation. To use LLMs in sequential recommendation, both the history of user interactions and the model's prediction of the next item are expressed in text form. We propose CALRec, a two-stage LLM finetuning framework that finetunes a pretrained LLM in a two-tower fashion using a mixture of two contrastive losses and a language modeling loss: the LLM is first finetuned on a data mixture from multiple domains followed by another round of target domain finetuning. Our model significantly outperforms many state-of-the-art baselines (+37% in Recall@1 and +24% in NDCG@10) and systematic ablation studies reveal that (i) both stages of finetuning are crucial, and, when combined, we achieve improved performance, and (ii) contrastive alignment is effective among the target domains explored in our experiments.
[ "['Yaoyiran Li' 'Xiang Zhai' 'Moustafa Alzantot' 'Keyi Yu' 'Ivan Vulić' 'Anna Korhonen' 'Mohamed Hammad']" ]
null
null
2405.02437
null
null
http://arxiv.org/pdf/2405.02437v1
2024-05-03T19:04:37Z
2024-05-03T19:04:37Z
FastLloyd: Federated, Accurate, Secure, and Tunable $k$-Means Clustering with Differential Privacy
We study the problem of privacy-preserving $k$-means clustering in the horizontally federated setting. Existing federated approaches using secure computation suffer from substantial overheads and do not offer output privacy. At the same time, differentially private (DP) $k$-means algorithms assume a trusted central curator and do not extend to federated settings. Naively combining the secure and DP solutions results in a protocol with impractical overhead. Instead, our work provides enhancements to both the DP and secure computation components, resulting in a design that is faster, more private, and more accurate than previous work. By utilizing the computational DP model, we design a lightweight, secure aggregation-based approach that achieves four orders of magnitude speed-up over state-of-the-art related work. Furthermore, we not only maintain the utility of the state-of-the-art in the central model of DP, but we improve the utility further by taking advantage of constrained clustering techniques.
[ "['Abdulrahman Diaa' 'Thomas Humphries' 'Florian Kerschbaum']" ]
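As a rough, hedged sketch of the central-DP ingredient the abstract above builds on (not FastLloyd's secure-aggregation protocol), one Lloyd update with Laplace noise added to per-cluster sums and counts might look as follows; the `dp_lloyd_step` name, the sensitivity constants, and the assumption that coordinates lie in [0, 1] are illustrative choices:

```python
import math
import random

def dp_lloyd_step(points, centers, eps, seed=0):
    """One Lloyd update with Laplace noise on per-cluster sums and counts
    (central-model DP sketch; assumes every coordinate lies in [0, 1]).
    Illustrative only -- not FastLloyd's secure-aggregation protocol."""
    rng = random.Random(seed)
    d, k = len(points[0]), len(centers)
    sums = [[0.0] * d for _ in range(k)]
    counts = [0.0] * k
    for p in points:
        # assign each point to its nearest center (squared Euclidean distance)
        j = min(range(k), key=lambda c: sum((pi - ci) ** 2
                                            for pi, ci in zip(p, centers[c])))
        counts[j] += 1
        for i in range(d):
            sums[j][i] += p[i]

    def lap(scale):
        # sample Laplace(0, scale) via inverse CDF; clamp to dodge log(0)
        u = max(min(rng.random() - 0.5, 0.499999), -0.499999)
        return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

    new_centers = []
    for j in range(k):
        n = max(1.0, counts[j] + lap(2.0 / eps))  # noisy count
        new_centers.append([min(1.0, max(0.0, (sums[j][i] + lap(2.0 / eps)) / n))
                            for i in range(d)])
    return new_centers
```

With a very large `eps` the noise vanishes and the step reduces to an ordinary Lloyd centroid update.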
null
null
2405.02441
null
null
http://arxiv.org/pdf/2405.02441v1
2024-05-03T19:11:35Z
2024-05-03T19:11:35Z
Learning minimal volume uncertainty ellipsoids
We consider the problem of learning uncertainty regions for parameter estimation problems. The regions are ellipsoids that minimize the average volumes subject to a prescribed coverage probability. As expected, under the assumption of jointly Gaussian data, we prove that the optimal ellipsoid is centered around the conditional mean and shaped as the conditional covariance matrix. In more practical cases, we propose a differentiable optimization approach for approximately computing the optimal ellipsoids using a neural network with proper calibration. Compared to existing methods, our network requires less storage and less computation at inference time, leading to accurate yet smaller ellipsoids. We demonstrate these advantages on four real-world localization datasets.
[ "['Itai Alon' 'David Arnon' 'Ami Wiesel']" ]
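In the scalar analogue of the Gaussian result stated above, the optimal region is an interval centered at the conditional mean with width governed by the conditional variance; a minimal sketch (the function name and the scalar setting are assumptions, the paper treats the multivariate case):

```python
def gaussian_conditional(mu_x, mu_y, var_x, var_y, cov_xy, x):
    """Center and squared half-width of the optimal uncertainty interval for
    a jointly Gaussian scalar pair (y | x): the conditional mean and variance.
    Scalar sketch only; the paper's result is the multivariate ellipsoid."""
    mean = mu_y + (cov_xy / var_x) * (x - mu_x)  # conditional mean E[y | x]
    var = var_y - cov_xy ** 2 / var_x            # conditional variance Var(y | x)
    return mean, var
```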
null
null
2405.02449
null
null
http://arxiv.org/pdf/2405.02449v1
2024-05-03T19:33:44Z
2024-05-03T19:33:44Z
Quality-Weighted Vendi Scores And Their Application To Diverse Experimental Design
Experimental design techniques such as active search and Bayesian optimization are widely used in the natural sciences for data collection and discovery. However, existing techniques tend to favor exploitation over exploration of the search space, which causes them to get stuck in local optima. This "collapse" problem prevents experimental design algorithms from yielding diverse high-quality data. In this paper, we extend the Vendi scores -- a family of interpretable similarity-based diversity metrics -- to account for quality. We then leverage these quality-weighted Vendi scores to tackle experimental design problems across various applications, including drug discovery, materials discovery, and reinforcement learning. We found that quality-weighted Vendi scores allow us to construct policies for experimental design that flexibly balance quality and diversity, and ultimately assemble rich and diverse sets of high-performing data points. Our algorithms led to a 70%-170% increase in the number of effective discoveries compared to baselines.
[ "['Quan Nguyen' 'Adji Bousso Dieng']" ]
null
null
2405.02456
null
null
http://arxiv.org/pdf/2405.02456v1
2024-05-03T19:43:30Z
2024-05-03T19:43:30Z
Natural Policy Gradient and Actor Critic Methods for Constrained Multi-Task Reinforcement Learning
Multi-task reinforcement learning (RL) aims to find a single policy that effectively solves multiple tasks at the same time. This paper presents a constrained formulation for multi-task RL where the goal is to maximize the average performance of the policy across tasks subject to bounds on the performance in each task. We consider solving this problem both in the centralized setting, where information for all tasks is accessible to a single server, and in the decentralized setting, where a network of agents, each given one task and observing local information, cooperate to find the solution of the globally constrained objective using local communication. We first propose a primal-dual algorithm that provably converges to the globally optimal solution of this constrained formulation under exact gradient evaluations. When the gradient is unknown, we further develop a sampled-based actor-critic algorithm that finds the optimal policy using online samples of state, action, and reward. Finally, we study the extension of the algorithm to the linear function approximation setting.
[ "['Sihan Zeng' 'Thinh T. Doan' 'Justin Romberg']" ]
null
null
2405.02466
null
null
http://arxiv.org/pdf/2405.02466v2
2024-06-26T16:22:43Z
2024-05-03T20:00:40Z
ProFLingo: A Fingerprinting-based Intellectual Property Protection Scheme for Large Language Models
Large language models (LLMs) have attracted significant attention in recent years. Due to their "Large" nature, training LLMs from scratch consumes immense computational resources. Since several major players in the artificial intelligence (AI) field have open-sourced their original LLMs, an increasing number of individual researchers and smaller companies are able to build derivative LLMs based on these open-sourced models at much lower costs. However, this practice opens up possibilities for unauthorized use or reproduction that may not comply with licensing agreements, and fine-tuning can change the model's behavior, thus complicating the determination of model ownership. Current intellectual property (IP) protection schemes for LLMs are either designed for white-box settings or require additional modifications to the original model, which restricts their use in real-world settings. In this paper, we propose ProFLingo, a black-box fingerprinting-based IP protection scheme for LLMs. ProFLingo generates queries that elicit specific responses from an original model, thereby establishing unique fingerprints. Our scheme assesses the effectiveness of these queries on a suspect model to determine whether it has been derived from the original model. ProFLingo offers a non-invasive approach, which neither requires knowledge of the suspect model nor modifications to the base model or its training process. To the best of our knowledge, our method represents the first black-box fingerprinting technique for IP protection for LLMs. Our source code and generated queries are available at: https://github.com/hengvt/ProFLingo.
[ "['Heng Jin' 'Chaoyu Zhang' 'Shanghao Shi' 'Wenjing Lou' 'Y. Thomas Hou']" ]
null
null
2405.02475
null
null
http://arxiv.org/pdf/2405.02475v2
2024-06-02T13:15:21Z
2024-05-03T20:25:57Z
Generalizing Orthogonalization for Models with Non-Linearities
The complexity of black-box algorithms can lead to various challenges, including the introduction of biases. These biases present immediate risks in the algorithms' application. It was, for instance, shown that neural networks can deduce racial information solely from a patient's X-ray scan, a task beyond the capability of medical experts. If this fact is not known to the medical expert, automatic decision-making based on this algorithm could lead to prescribing a treatment (purely) based on racial information. While current methodologies allow for the "orthogonalization" or "normalization" of neural networks with respect to such information, existing approaches are grounded in linear models. Our paper advances the discourse by introducing corrections for non-linearities such as ReLU activations. Our approach also encompasses scalar and tensor-valued predictions, facilitating its integration into neural network architectures. Through extensive experiments, we validate our method's effectiveness in safeguarding sensitive data in generalized linear models, normalizing convolutional neural networks for metadata, and rectifying pre-existing embeddings for undesired attributes.
[ "['David Rügamer' 'Chris Kolb' 'Tobias Weber' 'Lucas Kook' 'Thomas Nagler']" ]
null
null
2405.02478
null
null
http://arxiv.org/pdf/2405.02478v1
2024-05-03T20:40:14Z
2024-05-03T20:40:14Z
Continuous Learned Primal Dual
Neural ordinary differential equations (Neural ODEs) propose the idea that a sequence of layers in a neural network is just a discretisation of an ODE, and thus can instead be directly modelled by a parameterised ODE. This idea has had resounding success in the deep learning literature, with direct or indirect influence on many state-of-the-art ideas, such as diffusion models or time-dependent models. Recently, a continuous version of the U-net architecture has been proposed, showing increased performance over its discrete counterpart in many imaging applications, along with theoretical guarantees around its performance and robustness. In this work, we explore the use of Neural ODEs for learned inverse problems, in particular with the well-known Learned Primal Dual algorithm, and apply it to computed tomography (CT) reconstruction.
[ "['Christina Runkel' 'Ander Biguri' 'Carola-Bibiane Schönlieb']" ]
null
null
2405.02481
null
null
http://arxiv.org/pdf/2405.02481v1
2024-05-03T21:07:54Z
2024-05-03T21:07:54Z
Proximal Curriculum with Task Correlations for Deep Reinforcement Learning
Curriculum design for reinforcement learning (RL) can speed up an agent's learning process and help it learn to perform well on complex tasks. However, existing techniques typically require domain-specific hyperparameter tuning, involve expensive optimization procedures for task selection, or are suitable only for specific learning objectives. In this work, we consider curriculum design in contextual multi-task settings where the agent's final performance is measured w.r.t. a target distribution over complex tasks. We base our curriculum design on the Zone of Proximal Development concept, which has proven to be effective in accelerating the learning process of RL agents for a uniform distribution over all tasks. We propose a novel curriculum, ProCuRL-Target, that effectively balances the need for selecting tasks that are not too difficult for the agent while progressing the agent's learning toward the target distribution via leveraging task correlations. We theoretically justify the task selection strategy of ProCuRL-Target by analyzing a simple learning setting with a REINFORCE learner model. Our experimental results across various domains with challenging target task distributions affirm the effectiveness of our curriculum strategy over state-of-the-art baselines in accelerating the training process of deep RL agents.
[ "['Georgios Tzannetos' 'Parameswaran Kamalaruban' 'Adish Singla']" ]
null
null
2405.02485
null
null
http://arxiv.org/pdf/2405.02485v1
2024-05-03T21:22:27Z
2024-05-03T21:22:27Z
A Survey of Few-Shot Learning for Biomedical Time Series
Advancements in wearable sensor technologies and the digitization of medical records have contributed to the unprecedented ubiquity of biomedical time series data. Data-driven models have tremendous potential to assist clinical diagnosis and improve patient care by improving long-term monitoring capabilities, facilitating early disease detection and intervention, as well as promoting personalized healthcare delivery. However, accessing extensively labeled datasets to train data-hungry deep learning models encounters many barriers, such as long-tail distribution of rare diseases, cost of annotation, privacy and security concerns, data-sharing regulations, and ethical considerations. An emerging approach to overcome the scarcity of labeled data is to augment AI methods with human-like capabilities to leverage past experiences to learn new tasks with limited examples, called few-shot learning. This survey provides a comprehensive review and comparison of few-shot learning methods for biomedical time series applications. The clinical benefits and limitations of such methods are discussed in relation to traditional data-driven approaches. This paper aims to provide insights into the current landscape of few-shot learning for biomedical time series and its implications for future research and applications.
[ "['Chenqi Li' 'Timothy Denison' 'Tingting Zhu']" ]
null
null
2405.02488
null
null
http://arxiv.org/pdf/2405.02488v1
2024-05-03T21:34:12Z
2024-05-03T21:34:12Z
Modelling Sampling Distributions of Test Statistics with Autograd
Simulation-based inference methods that feature correct conditional coverage of confidence sets based on observations that have been compressed to a scalar test statistic require accurate modelling of either the p-value function or the cumulative distribution function (cdf) of the test statistic. If the model of the cdf, which is typically a deep neural network, is a function of the test statistic then the derivative of the neural network with respect to the test statistic furnishes an approximation of the sampling distribution of the test statistic. We explore whether this approach to modelling conditional 1-dimensional sampling distributions is a viable alternative to the probability density-ratio method, also known as the likelihood-ratio trick. Relatively simple, yet effective, neural network models are used whose predictive uncertainty is quantified through a variety of methods.
[ "['Ali Al Kadhim' 'Harrison B. Prosper']" ]
null
null
2405.02534
null
null
http://arxiv.org/pdf/2405.02534v1
2024-05-04T01:36:05Z
2024-05-04T01:36:05Z
A Multi-Domain Multi-Task Approach for Feature Selection from Bulk RNA Datasets
In this paper, a multi-domain multi-task algorithm for feature selection in bulk RNA-seq data is proposed. Two datasets arising from the mouse host immune response to Salmonella infection are investigated. Data is collected from several strains of Collaborative Cross mice, with samples from the spleen and liver serving as the two domains. Several machine learning experiments are conducted, and in each case a small subset of features discriminative across domains is extracted. The algorithm proves viable and underlines the benefits of cross-domain feature selection by extracting a new subset of discriminative features that could not be obtained by a single-domain approach.
[ "['Karim Salta' 'Tomojit Ghosh' 'Michael Kirby']" ]
null
null
2405.02545
null
null
http://arxiv.org/pdf/2405.02545v1
2024-05-04T03:04:51Z
2024-05-04T03:04:51Z
Prediction of Space Weather Events through Analysis of Active Region Magnetograms using Convolutional Neural Network
Although space weather events may not directly affect human life, they have the potential to inflict significant harm upon our communities. Harmful space weather events can trigger atmospheric changes that result in physical and economic damages on a global scale. In 1989, Earth experienced the effects of a powerful geomagnetic storm that caused satellites to malfunction, while triggering power blackouts in Canada, along with electricity disturbances in the United States and Europe. With the solar cycle peak rapidly approaching, there is an ever-increasing need to prepare for and prevent the damage that can occur, especially to modern-day technology, calling for a comprehensive prediction system. This study aims to leverage machine learning techniques to predict instances of space weather (solar flares, coronal mass ejections, geomagnetic storms), based on active region magnetograms of the Sun. This was done through the use of the NASA DONKI service to determine when these solar events occur, then using data from the NASA Solar Dynamics Observatory to compile a dataset that includes magnetograms of active regions of the Sun 24 hours before the events. By inputting the magnetograms into a convolutional neural network (CNN) trained on this dataset, the system can predict whether a space weather event will occur, and what type of event it will be. The model was designed using a custom architecture CNN, and returned an accuracy of 90.27%, a precision of 85.83%, a recall of 91.78%, and an average F1 score of 92.14% across each class (Solar flare [Flare], geomagnetic storm [GMS], coronal mass ejection [CME]). Our results show that using magnetogram data as an input for a CNN is a viable method for space weather prediction. Future work can involve prediction of the magnitude of solar events.
[ "['Shlesh Sakpal']" ]
null
null
2405.02548
null
null
http://arxiv.org/abs/2405.02548v1
2024-05-04T03:13:13Z
2024-05-04T03:13:13Z
CNN-LSTM and Transfer Learning Models for Malware Classification based on Opcodes and API Calls
In this paper, we propose a novel model for a malware classification system based on Application Programming Interface (API) calls and opcodes, to improve classification accuracy. This system uses a novel design of combined Convolutional Neural Network and Long Short-Term Memory. We extract opcode sequences and API calls from Windows malware samples for classification. We transform these features into N-gram sequences (N = 2, 3, and 10). Our experiments on a dataset of 9,749,57 samples produce high accuracy of 99.91% using the 8-gram sequences. Our method significantly improves the malware classification performance when using a wide range of recent deep learning architectures, leading to state-of-the-art performance. In particular, we experiment with ConvNeXt-T, ConvNeXt-S, RegNetY-4GF, RegNetY-8GF, RegNetY-12GF, EfficientNetV2, Sequencer2D-L, Swin-T, ViT-G/14, ViT-Ti, ViT-S, ViT-B, ViT-L, and MaxViT-B. Among these architectures, the Swin-T and Sequencer2D-L architectures achieved high accuracies of 99.82% and 99.70%, respectively, comparable to our CNN-LSTM architecture although not surpassing it.
[ "['Ahmed Bensaoud' 'Jugal Kalita']" ]
null
null
2405.02561
null
null
http://arxiv.org/pdf/2405.02561v2
2024-06-18T07:33:29Z
2024-05-04T04:22:52Z
Understanding the Difficulty of Solving Cauchy Problems with PINNs
Physics-Informed Neural Networks (PINNs) have gained popularity in scientific computing in recent years. However, they often fail to achieve the same level of accuracy as classical methods in solving differential equations. In this paper, we identify two sources of this issue in the case of Cauchy problems: the use of $L^2$ residuals as objective functions and the approximation gap of neural networks. We show that minimizing the sum of $L^2$ residual and initial condition error is not sufficient to guarantee the true solution, as this loss function does not capture the underlying dynamics. Additionally, neural networks are not capable of capturing singularities in the solutions due to the non-compactness of their image sets. This, in turn, influences the existence of global minima and the regularity of the network. We demonstrate that when the global minimum does not exist, machine precision becomes the predominant source of achievable error in practice. We also present numerical experiments in support of our theoretical claims.
[ "['Tao Wang' 'Bo Zhao' 'Sicun Gao' 'Rose Yu']" ]
null
null
2405.02563
null
null
http://arxiv.org/pdf/2405.02563v1
2024-05-04T04:29:22Z
2024-05-04T04:29:22Z
Deep Representation Learning-Based Dynamic Trajectory Phenotyping for Acute Respiratory Failure in Medical Intensive Care Units
Sepsis-induced acute respiratory failure (ARF) is a serious complication with a poor prognosis. This paper presents a deep representation learning-based phenotyping method to identify distinct groups of clinical trajectories of septic patients with ARF. For this retrospective study, we created a dataset from electronic medical records (EMR) consisting of data from sepsis patients admitted to medical intensive care units who required at least 24 hours of invasive mechanical ventilation at a quaternary care academic hospital in the southeastern USA for the years 2016-2021. A total of N=3349 patient encounters were included in this study. The Clustering Representation Learning on Incomplete Time Series Data (CRLI) algorithm was applied to a parsimonious set of EMR variables in this dataset. To validate the optimal number of clusters, the K-means algorithm was used in conjunction with dynamic time warping. Our model yielded four distinct patient phenotypes that were characterized as liver dysfunction/heterogeneous, hypercapnia, hypoxemia, and multiple organ dysfunction syndrome by a critical care expert. A Kaplan-Meier analysis comparing the 28-day mortality trends exhibited significant differences (p < 0.005) between the four phenotypes. The study demonstrates the utility of our deep representation learning-based approach in unraveling phenotypes that reflect the heterogeneity in sepsis-induced ARF in terms of different mortality outcomes and severity. These phenotypes might reveal important clinical insights into an effective prognosis and tailored treatment strategies.
[ "['Alan Wu' 'Tilendra Choudhary' 'Pulakesh Upadhyaya' 'Ayman Ali' 'Philip Yang' 'Rishikesan Kamaleswaran']" ]
null
null
2405.02569
null
null
http://arxiv.org/pdf/2405.02569v1
2024-05-04T05:03:11Z
2024-05-04T05:03:11Z
Decoupling Exploration and Exploitation for Unsupervised Pre-training with Successor Features
Unsupervised pre-training has increasingly drawn on a value function representation referred to as successor features (SFs), which decouples the dynamics of the environment from the rewards. This decomposition has a significant impact on the process of task-specific fine-tuning. However, existing approaches struggle with local optima because they use a unified intrinsic reward for both exploration and exploitation, without considering the underlying linear regression problem, and because the discriminator supports only a small skill space. We propose a novel unsupervised pre-training model with SFs based on a non-monolithic exploration methodology. Our approach pursues the decomposition of exploitation and exploration for an agent built on SFs, which requires separate agents for the respective purposes. This design leverages not only the inherent characteristics of SFs, such as quick adaptation to new tasks, but also exploratory and task-agnostic capabilities. Our suggested model is termed Non-Monolithic unsupervised Pre-training with Successor features (NMPS), which improves the performance of the original monolithic exploration method of pre-training with SFs. NMPS outperforms Active Pre-training with Successor Features (APS) in a comparative experiment.
[ "['JaeYoon Kim' 'Junyu Xuan' 'Christy Liang' 'Farookh Hussain']" ]
null
null
2405.02572
null
null
http://arxiv.org/pdf/2405.02572v1
2024-05-04T05:21:28Z
2024-05-04T05:21:28Z
Off-OAB: Off-Policy Policy Gradient Method with Optimal Action-Dependent Baseline
Policy-based methods have achieved remarkable success in solving challenging reinforcement learning problems. Among these methods, off-policy policy gradient methods are particularly important because they can benefit from off-policy data. However, these methods suffer from the high variance of the off-policy policy gradient (OPPG) estimator, which results in poor sample efficiency during training. In this paper, we propose an off-policy policy gradient method with the optimal action-dependent baseline (Off-OAB) to mitigate this variance issue. Specifically, this baseline maintains the OPPG estimator's unbiasedness while theoretically minimizing its variance. To enhance practical computational efficiency, we design an approximated version of this optimal baseline. Utilizing this approximation, our method (Off-OAB) aims to decrease the OPPG estimator's variance during policy optimization. We evaluate the proposed Off-OAB method on six representative tasks from OpenAI Gym and MuJoCo, where it demonstrably surpasses state-of-the-art methods on the majority of these tasks.
[ "['Wenjia Meng' 'Qian Zheng' 'Long Yang' 'Yilong Yin' 'Gang Pan']" ]
null
null
2405.02574
null
null
http://arxiv.org/pdf/2405.02574v1
2024-05-04T05:26:13Z
2024-05-04T05:26:13Z
A Data Mining-Based Dynamical Anomaly Detection Method for Integrating with an Advance Metering System
Building operations account for 30% of total power consumption and contribute 26% of global power-related emissions. Therefore, monitoring and early detection of anomalies at the meter level are essential for residential and commercial buildings. This work investigates both supervised and unsupervised approaches and introduces a dynamic anomaly detection system. The system combines a supervised Light Gradient Boosting Machine and an unsupervised autoencoder with a dynamic threshold, and is designed to provide real-time detection of anomalies at the meter level. The dynamic threshold is based on the Mahalanobis distance and moving averages, allowing the system to adapt to changes in the data distribution over time. The effectiveness of the proposed system is evaluated using real-life power consumption data collected from smart metering systems, ensuring that its performance is validated under real-world conditions. By detecting unusual data movements and providing early warnings, the proposed system contributes significantly to visual analytics and decision science. Early detection of anomalies enables timely troubleshooting, preventing financial losses and potential disasters such as fire incidents.
[ "['Sarit Maitra']" ]
null
null
2405.02576
null
null
http://arxiv.org/pdf/2405.02576v2
2024-05-20T04:26:46Z
2024-05-04T05:38:38Z
CTD4 -- A Deep Continuous Distributional Actor-Critic Agent with a Kalman Fusion of Multiple Critics
Categorical Distributional Reinforcement Learning (CDRL) has demonstrated superior sample efficiency in learning complex tasks compared to conventional Reinforcement Learning (RL) approaches. However, the practical application of CDRL is encumbered by challenging projection steps, detailed parameter tuning, and domain knowledge. This paper addresses these challenges by introducing a pioneering Continuous Distributional Model-Free RL algorithm tailored for continuous action spaces. The proposed algorithm simplifies the implementation of distributional RL, adopting an actor-critic architecture wherein the critic outputs a continuous probability distribution. Additionally, we propose an ensemble of multiple critics fused through a Kalman fusion mechanism to mitigate overestimation bias. Through a series of experiments, we validate that our proposed method is easy to train and serves as a sample-efficient solution for executing complex continuous-control tasks.
[ "['David Valencia' 'Henry Williams' 'Trevor Gee' 'Bruce A MacDonald' 'Minas Liarokapis']" ]
null
null
2405.02594
null
null
http://arxiv.org/pdf/2405.02594v1
2024-05-04T07:32:34Z
2024-05-04T07:32:34Z
Leveraging (Biased) Information: Multi-armed Bandits with Offline Data
We leverage offline data to facilitate online learning in stochastic multi-armed bandits. The probability distributions that govern the offline data and the online rewards can be different. Without any non-trivial upper bound on their difference, we show that no non-anticipatory policy can outperform the UCB policy of Auer et al. (2002), even in the presence of offline data. As a complement, we propose an online policy MIN-UCB, which outperforms UCB when a non-trivial upper bound is given. MIN-UCB adaptively chooses to utilize the offline data when they are deemed informative, and to ignore them otherwise. MIN-UCB is shown to be tight in terms of both instance-independent and instance-dependent regret bounds. Finally, we corroborate the theoretical results with numerical experiments.
[ "['Wang Chi Cheung' 'Lixing Lyu']" ]
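For orientation, the UCB baseline referenced above, warm-started naively with offline pseudo-counts, can be sketched as follows; this illustrates plain UCB1, not the paper's MIN-UCB policy, and the `offline` warm-start format is an assumption:

```python
import math
import random

def ucb1(means, horizon, offline=None, seed=0):
    """Plain UCB1 on Bernoulli arms, optionally warm-started with offline
    pseudo-counts: `offline` maps arm -> (count, mean). The naive warm start
    is an illustrative assumption, not the paper's MIN-UCB policy."""
    rng = random.Random(seed)
    k = len(means)
    counts, avgs = [0.0] * k, [0.0] * k
    if offline:
        for arm, (n, m) in offline.items():
            counts[arm], avgs[arm] = float(n), m
    total, reward = sum(counts), 0.0
    for _ in range(horizon):
        total += 1
        if any(c == 0 for c in counts):
            arm = counts.index(0.0)  # play each unseen arm once
        else:
            arm = max(range(k), key=lambda a: avgs[a]
                      + math.sqrt(2 * math.log(total) / counts[a]))
        r = 1.0 if rng.random() < means[arm] else 0.0  # Bernoulli reward
        counts[arm] += 1
        avgs[arm] += (r - avgs[arm]) / counts[arm]
        reward += r
    return reward, counts
```

Over a long horizon the index rule concentrates pulls on the best arm, which is the behavior MIN-UCB must match or beat.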
null
null
2405.02596
null
null
http://arxiv.org/pdf/2405.02596v1
2024-05-04T07:44:18Z
2024-05-04T07:44:18Z
Random Masking Finds Winning Tickets for Parameter Efficient Fine-tuning
Fine-tuning large language models (LLMs) can be costly. Parameter-efficient fine-tuning (PEFT) addresses this problem by training only a fraction of the parameters, whose success reveals the expressiveness and flexibility of pretrained models. This paper studies the limits of PEFT by further simplifying its design and reducing the number of trainable parameters below standard setups. To this end, we use Random Masking to fine-tune the pretrained model. Despite its simplicity, we show that Random Masking is surprisingly effective: with a larger-than-expected learning rate, Random Masking can match the performance of standard PEFT algorithms such as LoRA on various tasks, using fewer trainable parameters. We provide both empirical and theoretical explorations into the success of Random Masking. We show that masking induces a flatter loss landscape and more distant solutions, which allows for and necessitates large learning rates.
[ "['Jing Xu' 'Jingzhao Zhang']" ]
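The idea above, freezing all but a random subset of parameters and training only that subset, can be sketched on a toy least-squares problem (a pure-Python stand-in, not the paper's LLM setup; the function name and defaults are assumptions):

```python
import random

def random_masking_sgd(xs, ys, dim, mask_frac=0.5, lr=0.1, steps=200, seed=0):
    """Random Masking sketch: freeze all but a random subset of weights and
    run plain SGD on a least-squares loss. Toy stand-in for the paper's
    LLM fine-tuning setup; names and defaults are illustrative."""
    rng = random.Random(seed)
    w = [0.0] * dim
    trainable = set(rng.sample(range(dim), max(1, int(mask_frac * dim))))
    for _ in range(steps):
        for x, y in zip(xs, ys):
            pred = sum(wi * xi for wi, xi in zip(w, x))
            g = 2.0 * (pred - y)  # d(loss)/d(pred) for squared error
            for j in trainable:   # only masked-in weights receive updates
                w[j] -= lr * g * x[j]
    return w, trainable
```

With `mask_frac=1.0` this reduces to full fine-tuning; shrinking the fraction shows how few coordinates can still fit the data.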
null
null
2405.02598
null
null
http://arxiv.org/pdf/2405.02598v1
2024-05-04T07:48:59Z
2024-05-04T07:48:59Z
UDUC: An Uncertainty-driven Approach for Learning-based Robust Control
Learning-based techniques have become popular in both model predictive control (MPC) and reinforcement learning (RL). Probabilistic ensemble (PE) models offer a promising approach for modelling system dynamics, showcasing the ability to capture uncertainty and scalability in high-dimensional control scenarios. However, PE models are susceptible to mode collapse, resulting in non-robust control when faced with environments slightly different from the training set. In this paper, we introduce the $\textbf{u}$ncertainty-$\textbf{d}$riven rob$\textbf{u}$st $\textbf{c}$ontrol (UDUC) loss as an alternative objective for training PE models, drawing inspiration from contrastive learning. We analyze the robustness of UDUC loss through the lens of robust optimization and evaluate its performance on the challenging Real-world Reinforcement Learning (RWRL) benchmark, which involves significant environmental mismatches between the training and testing environments.
[ "['Yuan Zhang' 'Jasper Hoffmann' 'Joschka Boedecker']" ]
null
null
2405.02609
null
null
http://arxiv.org/pdf/2405.02609v1
2024-05-04T08:33:24Z
2024-05-04T08:33:24Z
Advanced Equalization in 112 Gb/s Upstream PON Using a Novel Fourier Convolution-based Network
We experimentally demonstrate a novel, low-complexity Fourier Convolution-based Network (FConvNet) based equalizer for 112 Gb/s upstream PAM4-PON. At a BER of 0.005, FConvNet enhances the receiver sensitivity by 2 dB and 1 dB compared to a 51-tap Sato equalizer and benchmark machine learning algorithms, respectively.
[ "['Chen Shao' 'Elias Giacoumidis' 'Patrick Matalla' 'Jialei Li' 'Shi Li'\n 'Sebastian Randel' 'Andre Richter' 'Michael Faerber' 'Tobias Kaefer']" ]
null
null
2405.02612
null
null
http://arxiv.org/pdf/2405.02612v3
2024-06-19T17:08:13Z
2024-05-04T08:43:45Z
Learning Linear Utility Functions From Pairwise Comparison Queries
We study learnability of linear utility functions from pairwise comparison queries. In particular, we consider two learning objectives. The first objective is to predict out-of-sample responses to pairwise comparisons, whereas the second is to approximately recover the true parameters of the utility function. We show that in the passive learning setting, linear utilities are efficiently learnable with respect to the first objective, both when query responses are uncorrupted by noise, and under Tsybakov noise when the distributions are sufficiently "nice". In contrast, we show that utility parameters are not learnable for a large set of data distributions without strong modeling assumptions, even when query responses are noise-free. Next, we proceed to analyze the learning problem in an active learning setting. In this case, we show that even the second objective is efficiently learnable, and present algorithms for both the noise-free and noisy query response settings. Our results thus exhibit a qualitative learnability gap between passive and active learning from pairwise preference queries, demonstrating the value of the ability to select pairwise queries for utility learning.
[ "['Luise Ge' 'Brendan Juba' 'Yevgeniy Vorobeychik']" ]
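As a rough illustration of the passive, noise-free setting, predicting pairwise responses with a linear utility reduces to linear classification on item-difference vectors. The helper below is a hypothetical perceptron-style sketch for intuition, not one of the paper's algorithms.

```python
def learn_utility(comparisons, dim, epochs=50, lr=0.1):
    """Perceptron-style estimate of a linear utility from pairwise comparisons.

    Each comparison is (a, b, y): item vectors a and b, with y = +1 if a is
    preferred and y = -1 otherwise. On noise-free, linearly realizable data
    this fits a weight vector w so that sign(w . (a - b)) matches every
    response. An illustrative sketch only.
    """
    w = [0.0] * dim
    for _ in range(epochs):
        for a, b, y in comparisons:
            diff = [ai - bi for ai, bi in zip(a, b)]
            score = sum(wi * di for wi, di in zip(w, diff))
            if y * score <= 0:  # misclassified: nudge w toward y * diff
                w = [wi + lr * y * di for wi, di in zip(w, diff)]
    return w
```

Note this targets only the first objective (predicting responses): the returned direction is generally a scaled, rotated stand-in for the true parameters, matching the paper's point that parameter recovery is the harder goal.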
null
null
2405.02628
null
null
http://arxiv.org/pdf/2405.02628v1
2024-05-04T10:09:27Z
2024-05-04T10:09:27Z
Contrastive Dual-Interaction Graph Neural Network for Molecular Property Prediction
Molecular property prediction is a key component of AI-driven drug discovery and molecular characterization learning. Despite recent advances, existing methods still face challenges such as limited ability to generalize and inadequate representation learning from unlabeled data, especially for tasks specific to molecular structures. To address these limitations, we introduce DIG-Mol, a novel self-supervised graph neural network framework for molecular property prediction. This architecture leverages the power of contrastive learning with dual interaction mechanisms and unique molecular graph enhancement strategies. DIG-Mol integrates a momentum distillation network with two interconnected networks to efficiently improve molecular characterization. The framework's ability to extract key information about molecular structure and higher-order semantics is supported by minimizing the contrastive loss. We have established DIG-Mol's state-of-the-art performance through extensive experimental evaluation on a variety of molecular property prediction tasks. In addition to demonstrating superior transferability in few-shot learning scenarios, our visualizations highlight DIG-Mol's enhanced interpretability and representation capabilities. These findings confirm the effectiveness of our approach in overcoming challenges faced by traditional methods and mark a significant advance in molecular property prediction.
[ "['Zexing Zhao' 'Guangsi Shi' 'Xiaopeng Wu' 'Ruohua Ren' 'Xiaojun Gao'\n 'Fuyi Li']" ]
null
null
2405.02631
null
null
http://arxiv.org/pdf/2405.02631v1
2024-05-04T10:54:07Z
2024-05-04T10:54:07Z
Unsupervised machine learning for data-driven classification of rock mass using drilling data: How can a data-driven system handle limitations in existing rock mass classification systems?
Rock mass classification systems are crucial for assessing stability and risk in underground construction globally and guiding support and excavation design. However, systems developed primarily in the 1970s lack access to modern high-resolution data and advanced statistical techniques, limiting their effectiveness as decision-support systems. Initially, we outline the limitations observed in this context and later describe how a data-driven system, based on drilling data as detailed in this study, can overcome these limitations. Using extracted statistical information from thousands of MWD-data values in one-meter sections of a full tunnel profile, thus working as a signature of the rock mass, we have demonstrated that it is possible to form well-defined clusters that can act as a foundational basis for various rock mass classification systems. We reduced the dimensionality of 48-value vectors using nonlinear manifold learning techniques (UMAP) and linear principal component analysis (PCA) to enhance clustering. Unsupervised machine learning methods (HDBSCAN, Agglomerative Clustering, K-means) were employed to cluster the data, with hyperparameters optimised through multi-objective Bayesian optimisation for effective clustering. Guided by domain knowledge, we added extra features to the core MWD-data clusters, which improved clustering and opened up opportunities for tuning the system. We structured and correlated these clusters with physical rock mass properties, including labels of rock type and rock quality, and analysed cumulative distributions of key MWD-parameters for rock mass assessment to determine if clusters meaningfully differentiate rock masses. The ability of MWD data to form distinct rock mass clusters suggests substantial potential for future classification systems grounded in this objective, data-driven methodology, free from human bias.
[ "['T. F. Hansen' 'A. Aarset']" ]
null
null
2405.02634
null
null
http://arxiv.org/pdf/2405.02634v1
2024-05-04T11:05:52Z
2024-05-04T11:05:52Z
Onboard Out-of-Calibration Detection of Deep Learning Models using Conformal Prediction
The black box nature of deep learning models complicates their usage in critical applications such as remote sensing. Conformal prediction is a method to ensure trust in such scenarios. Subject to data exchangeability, conformal prediction provides finite-sample coverage guarantees in the form of a prediction set that is guaranteed to contain the true class within a user-defined error rate. In this letter we show that conformal prediction algorithms are related to the uncertainty of the deep learning model and that this relation can be used to detect if the deep learning model is out-of-calibration. Popular classification models like Resnet50, Densenet161, InceptionV3, and MobileNetV2 are applied to remote sensing datasets such as EuroSAT to demonstrate how, under noisy scenarios, the model outputs become untrustworthy. Furthermore, an out-of-calibration detection procedure relating the model uncertainty and the average size of the conformal prediction set is presented.
[ "['Protim Bhattacharjee' 'Peter Jung']" ]
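The split conformal procedure behind such coverage guarantees can be sketched directly. This uses the textbook score 1 - p(true class) and finite-sample quantile correction, which are standard choices and not necessarily the exact variant used in the letter.

```python
import math

def conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction for classification: a minimal sketch.

    cal_probs: per-example class probabilities on a held-out calibration set;
    cal_labels: their true classes; test_probs: probabilities for test inputs.
    Returns, for each test input, the set of classes whose nonconformity score
    1 - p(class) falls below the (1 - alpha)-adjusted calibration quantile,
    giving marginal coverage under exchangeability.
    """
    n = len(cal_labels)
    scores = sorted(1.0 - p[y] for p, y in zip(cal_probs, cal_labels))
    # finite-sample corrected quantile index: ceil((n + 1)(1 - alpha)), 1-based
    k = min(n - 1, math.ceil((n + 1) * (1.0 - alpha)) - 1)
    qhat = scores[k]
    return [{c for c, p in enumerate(probs) if 1.0 - p <= qhat}
            for probs in test_probs]
```

The average size of the returned sets is the quantity the letter ties to model uncertainty: a well-calibrated confident model yields small sets, while noisy or out-of-calibration inputs inflate them.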
null
null
2405.02638
null
null
http://arxiv.org/pdf/2405.02638v1
2024-05-04T11:22:53Z
2024-05-04T11:22:53Z
PrivSGP-VR: Differentially Private Variance-Reduced Stochastic Gradient Push with Tight Utility Bounds
In this paper, we propose a differentially private decentralized learning method (termed PrivSGP-VR) which employs stochastic gradient push with variance reduction and guarantees $(\epsilon, \delta)$-differential privacy (DP) for each node. Our theoretical analysis shows that, under DP Gaussian noise with constant variance, PrivSGP-VR achieves a sub-linear convergence rate of $\mathcal{O}(1/\sqrt{nK})$, where $n$ and $K$ are the number of nodes and iterations, respectively, which is independent of stochastic gradient variance, and achieves a linear speedup with respect to $n$. Leveraging the moments accountant method, we further derive an optimal $K$ to maximize the model utility under certain privacy budget in decentralized settings. With this optimized $K$, PrivSGP-VR achieves a tight utility bound of $\mathcal{O}\left( \sqrt{d\log \left( \frac{1}{\delta} \right)}/(\sqrt{n}J\epsilon) \right)$, where $J$ and $d$ are the number of local samples and the dimension of decision variable, respectively, which matches that of the server-client distributed counterparts, and exhibits an extra factor of $1/\sqrt{n}$ improvement compared to that of the existing decentralized counterparts, such as A(DP)$^2$SGD. Extensive experiments corroborate our theoretical findings, especially in terms of the maximized utility with optimized $K$, in fully decentralized settings.
[ "['Zehan Zhu' 'Yan Huang' 'Xin Wang' 'Jinming Xu']" ]
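The Gaussian mechanism underlying such $(\epsilon, \delta)$ guarantees can be illustrated with a generic differentially private gradient step. This is a standard DP-SGD-style sketch, not the PrivSGP-VR algorithm, and the clipping threshold and noise multiplier are illustrative.

```python
import math
import random

def dp_gradient_step(params, per_sample_grads, clip=1.0, noise_mult=1.0,
                     lr=0.1, rng=None):
    """One differentially private gradient step: a generic sketch.

    Per-sample gradients are clipped to L2 norm `clip`, averaged, and
    perturbed with Gaussian noise of std `noise_mult * clip / n` per
    coordinate, the standard Gaussian-mechanism recipe for private training.
    """
    rng = rng or random.Random(0)
    n, d = len(per_sample_grads), len(params)
    avg = [0.0] * d
    for g in per_sample_grads:
        norm = math.sqrt(sum(x * x for x in g))
        scale = min(1.0, clip / norm) if norm > 0 else 1.0  # clip to L2 ball
        for i in range(d):
            avg[i] += scale * g[i] / n
    sigma = noise_mult * clip / n
    noisy = [a + rng.gauss(0.0, sigma) for a in avg]
    return [p - lr * g for p, g in zip(params, noisy)]
```

The moments accountant then converts the per-step noise into a cumulative $(\epsilon, \delta)$ budget over $K$ iterations, which is what the paper optimizes over.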
null
null
2405.02642
null
null
http://arxiv.org/pdf/2405.02642v2
2024-05-29T18:13:03Z
2024-05-04T11:37:08Z
Machine Learning in Space: Surveying the Robustness of on-board ML models to Radiation
Modern spacecraft are increasingly relying on machine learning (ML). However, physical equipment in space is subject to various natural hazards, such as radiation, which may inhibit the correct operation of computing devices. Despite plenty of evidence showing the damage that naturally-induced faults can cause to ML-related hardware, we observe that the effects of radiation on ML models for space applications are not well-studied. This is a problem: without understanding how ML models are affected by these natural phenomena, it is uncertain "where to start from" to develop radiation-tolerant ML software. As ML researchers, we attempt to tackle this dilemma. By partnering up with space-industry practitioners specialized in ML, we perform a reflective analysis of the state of the art. We provide factual evidence that prior work did not thoroughly examine the impact of natural hazards on ML models meant for spacecraft. Then, through a "negative result", we show that some existing open-source technologies can hardly be used by researchers to study the effects of radiation for some applications of ML in satellites. As a constructive step forward, we perform simple experiments showcasing how to leverage current frameworks to assess the robustness of practical ML models for cloud detection against radiation-induced faults. Our evaluation reveals that not all faults are as devastating as claimed by some prior work. By publicly releasing our resources, we provide a foothold -- usable by researchers without access to spacecraft -- for spearheading development of space-tolerant ML models.
[ "['Kevin Lange' 'Federico Fontana' 'Francesco Rossi' 'Mattia Varile'\n 'Giovanni Apruzzese']" ]
null
null
2405.02644
null
null
http://arxiv.org/pdf/2405.02644v1
2024-05-04T11:56:24Z
2024-05-04T11:56:24Z
Interpretable Multi-View Clustering
Multi-view clustering has become a significant area of research, with numerous methods proposed over the past decades to enhance clustering accuracy. However, in many real-world applications, it is crucial to demonstrate a clear decision-making process-specifically, explaining why samples are assigned to particular clusters. Consequently, there remains a notable gap in developing interpretable methods for clustering multi-view data. To fill this crucial gap, we make the first attempt towards this direction by introducing an interpretable multi-view clustering framework. Our method begins by extracting embedded features from each view and generates pseudo-labels to guide the initial construction of the decision tree. Subsequently, it iteratively optimizes the feature representation for each view along with refining the interpretable decision tree. Experimental results on real datasets demonstrate that our method not only provides a transparent clustering process for multi-view data but also delivers performance comparable to state-of-the-art multi-view clustering methods. To the best of our knowledge, this is the first effort to design an interpretable clustering framework specifically for multi-view data, opening a new avenue in this field.
[ "['Mudi Jiang' 'Lianyu Hu' 'Zengyou He' 'Zhikui Chen']" ]
null
null
2405.02648
null
null
http://arxiv.org/pdf/2405.02648v2
2024-05-21T13:06:56Z
2024-05-04T12:22:02Z
A Conformal Prediction Score that is Robust to Label Noise
Conformal Prediction (CP) quantifies network uncertainty by building a small prediction set with a pre-defined probability that the correct class is within this set. In this study we tackle the problem of CP calibration based on a validation set with noisy labels. We introduce a conformal score that is robust to label noise. The noise-free conformal score is estimated using the noisy labeled data and the noise level. In the test phase the noise-free score is used to form the prediction set. We applied the proposed algorithm to several standard medical imaging classification datasets. We show that our method outperforms current methods by a large margin, in terms of the average size of the prediction set, while maintaining the required coverage.
[ "['Coby Penso' 'Jacob Goldberger']" ]
null
null
2405.02649
null
null
http://arxiv.org/pdf/2405.02649v1
2024-05-04T12:24:29Z
2024-05-04T12:24:29Z
Generic Multi-modal Representation Learning for Network Traffic Analysis
Network traffic analysis is fundamental for network management, troubleshooting, and security. Tasks such as traffic classification, anomaly detection, and novelty discovery are fundamental for extracting operational information from network data and measurements. We witness the shift from deep packet inspection and basic machine learning to Deep Learning (DL) approaches where researchers define and test a custom DL architecture designed for each specific problem. We here advocate the need for a general DL architecture flexible enough to solve different traffic analysis tasks. We test this idea by proposing a DL architecture based on generic data adaptation modules, followed by an integration module that summarises the extracted information into a compact and rich intermediate representation (i.e. embeddings). The result is a flexible Multi-modal Autoencoder (MAE) pipeline that can solve different use cases. We demonstrate the architecture with traffic classification (TC) tasks since they allow us to quantitatively compare results with state-of-the-art solutions. However, we argue that the MAE architecture is generic and can be used to learn representations useful in multiple scenarios. On TC, the MAE performs on par or better than alternatives while avoiding cumbersome feature engineering, thus streamlining the adoption of DL solutions for traffic analysis.
[ "['Luca Gioacchini' 'Idilio Drago' 'Marco Mellia' 'Zied Ben Houidi'\n 'Dario Rossi']" ]
null
null
2405.02661
null
null
http://arxiv.org/pdf/2405.02661v2
2024-05-15T09:34:20Z
2024-05-04T13:14:29Z
DDE-Find: Learning Delay Differential Equations from Noisy, Limited Data
Delay Differential Equations (DDEs) are a class of differential equations that can model diverse scientific phenomena. However, identifying the parameters, especially the time delay, that make a DDE's predictions match experimental results can be challenging. We introduce DDE-Find, a data-driven framework for learning a DDE's parameters, time delay, and initial condition function. DDE-Find uses an adjoint-based approach to efficiently compute the gradient of a loss function with respect to the model parameters. We motivate and rigorously prove an expression for the gradients of the loss using the adjoint. DDE-Find builds upon recent developments in learning DDEs from data and delivers the first complete framework for learning DDEs from data. Through a series of numerical experiments, we demonstrate that DDE-Find can learn DDEs from noisy, limited data.
[ "['Robert Stephany']" ]
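A minimal forward solver for a DDE of the form x'(t) = f(x(t), x(t - tau)), the kind of model such a framework fits, can be sketched with the Euler method. Only the inner simulation loop is shown; the adjoint-based gradient computation is not.

```python
def simulate_dde(f, history, tau, t_end, dt=0.01):
    """Forward-simulate x'(t) = f(x(t), x(t - tau)) with the Euler method.

    `history` gives x on [-tau, 0]. This is only the forward pass that a
    DDE-learning framework needs as an inner loop; estimating the parameters,
    delay, and initial condition function is out of scope for this sketch.
    """
    lag = int(round(tau / dt))                  # delay measured in steps
    xs = [history(-tau + i * dt) for i in range(lag + 1)]  # fill [-tau, 0]
    steps = int(round(t_end / dt))
    for _ in range(steps):
        x_now, x_lag = xs[-1], xs[-1 - lag]
        xs.append(x_now + dt * f(x_now, x_lag))
    return xs[lag:]                             # trajectory on [0, t_end]
```

For example, f(x, x_lag) = -x_lag with constant history 1.0 yields the classic delayed decay, whose simulated trajectory can then be compared against noisy data in a loss.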
null
null
2405.02670
null
null
http://arxiv.org/pdf/2405.02670v1
2024-05-04T13:58:03Z
2024-05-04T13:58:03Z
From Generalization Analysis to Optimization Designs for State Space Models
A State Space Model (SSM) is a foundation model in time series analysis, which has recently been shown as an alternative to transformers in sequence modeling. In this paper, we theoretically study the generalization of SSMs and propose improvements to training algorithms based on the generalization results. Specifically, we give a \textit{data-dependent} generalization bound for SSMs, showing an interplay between the SSM parameters and the temporal dependencies of the training sequences. Leveraging the generalization bound, we (1) set up a scaling rule for model initialization based on the proposed generalization measure, which significantly improves the robustness of the output value scales on SSMs to different temporal patterns in the sequence data; (2) introduce a new regularization method for training SSMs to enhance the generalization performance. Numerical experiments are conducted to validate our results.
[ "['Fusheng Liu' 'Qianxiao Li']" ]
null
null
2405.02678
null
null
http://arxiv.org/pdf/2405.02678v3
2024-06-05T13:12:17Z
2024-05-04T14:43:31Z
Position: Quo Vadis, Unsupervised Time Series Anomaly Detection?
The current state of machine learning scholarship in Timeseries Anomaly Detection (TAD) is plagued by the persistent use of flawed evaluation metrics, inconsistent benchmarking practices, and a lack of proper justification for the choices made in novel deep learning-based model designs. Our paper presents a critical analysis of the status quo in TAD, revealing the misleading track of current research and highlighting problematic methods and evaluation practices. Our position advocates for a shift in focus from solely pursuing novel model designs to improving benchmarking practices, creating non-trivial datasets, and critically evaluating the utility of complex methods against simpler baselines. Our findings demonstrate the need for rigorous evaluation protocols, the creation of simple baselines, and the revelation that state-of-the-art deep anomaly detection models effectively learn linear mappings. These findings suggest the need for more exploration and development of simple and interpretable TAD methods. The increase in model complexity in state-of-the-art deep learning-based models unfortunately offers very little improvement. We offer insights and suggestions for the field to move forward. Code: https://github.com/ssarfraz/QuoVadisTAD
[ "['M. Saquib Sarfraz' 'Mei-Yen Chen' 'Lukas Layer' 'Kunyu Peng'\n 'Marios Koulakis']" ]
null
null
2405.02679
null
null
http://arxiv.org/pdf/2405.02679v1
2024-05-04T14:45:31Z
2024-05-04T14:45:31Z
Prévisions météorologiques basées sur l'intelligence artificielle : une révolution peut en cacher une autre
Artificial intelligence (AI), based on deep-learning algorithms using high-quality reanalysis datasets, is showing enormous potential for weather forecasting. In this context, the European Centre for Medium-Range Weather Forecasts (ECMWF) is developing a new forecasting system based on AI. Verification results of deterministic forecasts are promising for now. However, the realism of weather forecasts based on AI is often questioned. Here, different types of realism are identified and we discuss, in particular, the relationship between structural realism and predictability of weather events. Furthermore, a statistical analysis of deterministic forecasts based on AI points to a realism/performance dilemma that a probabilistic approach should help to solve. -- L'intelligence artificielle (IA) bouleverse aujourd'hui le monde de la prévision météorologique avec l'utilisation d'algorithmes d'apprentissage profond nourris par des champs de réanalyses. Dans ce contexte, le Centre Européen pour les Prévisions Météorologiques à Moyen Terme (CEPMMT) a décidé de développer un nouveau système de prévisions reposant sur l'IA. Ces prévisions, pour le moment de type déterministe, montrent des résultats prometteurs. Toutefois, le réalisme de ce type de prévisions reposant sur l'IA est souvent questionné. Ici, nous identifions différents types de réalisme et interrogeons notamment le rapport entre réalisme structurel et prévisibilité des événements météorologiques. Une analyse statistique de prévisions déterministes reposant sur l'IA laisse apparaître un dilemme réalisme/performance qu'une approche probabiliste devrait aider à résoudre.
[ "['Zied Ben-Bouallegue' 'Mariana C A Clare' 'Matthieu Chevallier']" ]
null
null
2405.02685
null
null
http://arxiv.org/pdf/2405.02685v1
2024-05-04T14:57:09Z
2024-05-04T14:57:09Z
FedProK: Trustworthy Federated Class-Incremental Learning via Prototypical Feature Knowledge Transfer
Federated Class-Incremental Learning (FCIL) focuses on continually transferring the previous knowledge to learn new classes in dynamic Federated Learning (FL). However, existing methods do not consider the trustworthiness of FCIL, i.e., improving continual utility, privacy, and efficiency simultaneously, which is greatly influenced by catastrophic forgetting and data heterogeneity among clients. To address this issue, we propose FedProK (Federated Prototypical Feature Knowledge Transfer), leveraging prototypical feature as a novel representation of knowledge to perform spatial-temporal knowledge transfer. Specifically, FedProK consists of two components: (1) feature translation procedure on the client side by temporal knowledge transfer from the learned classes and (2) prototypical knowledge fusion on the server side by spatial knowledge transfer among clients. Extensive experiments conducted in both synchronous and asynchronous settings demonstrate that our FedProK outperforms the other state-of-the-art methods in three perspectives of trustworthiness, validating its effectiveness in selectively transferring spatial-temporal knowledge.
[ "['Xin Gao' 'Xin Yang' 'Hao Yu' 'Yan Kang' 'Tianrui Li']" ]
null
null
2405.02688
null
null
http://arxiv.org/pdf/2405.02688v1
2024-05-04T14:58:47Z
2024-05-04T14:58:47Z
Semi-supervised Symmetric Matrix Factorization with Low-Rank Tensor Representation
Semi-supervised symmetric non-negative matrix factorization (SNMF) utilizes the available supervisory information (usually in the form of pairwise constraints) to improve the clustering ability of SNMF. The previous methods introduce the pairwise constraints from the local perspective, i.e., they either directly refine the similarity matrix element-wise or restrain the distance of the decomposed vectors in pairs according to the pairwise constraints, which overlooks the global perspective, i.e., in the ideal case, the pairwise constraint matrix and the ideal similarity matrix possess the same low-rank structure. To this end, we first propose a novel semi-supervised SNMF model by seeking a low-rank representation for the tensor synthesized by the pairwise constraint matrix and a similarity matrix obtained by the product of the embedding matrix and its transpose, which strengthens those two matrices simultaneously from a global perspective. We then propose an enhanced SNMF model, making the embedding matrix tailored to the above tensor low-rank representation. We finally refine the similarity matrix by the strengthened pairwise constraints. We repeat the above steps to continuously boost the similarity matrix and pairwise constraint matrix, leading to a high-quality embedding matrix. Extensive experiments substantiate the superiority of our method. The code is available at https://github.com/JinaLeejnl/TSNMF.
[ "['Yuheng Jia' 'Jia-Nan Li' 'Wenhui Wu' 'Ran Wang']" ]
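The base SNMF model that such semi-supervised variants build on can be sketched with damped multiplicative updates. This shows only the plain symmetric factorization, not the pairwise-constraint or tensor low-rank machinery, and the damping constant is a common heuristic choice rather than a value from the paper.

```python
import numpy as np

def snmf(A, k, iters=500, eps=1e-9, seed=0):
    """Plain symmetric NMF by multiplicative updates: A ~ H @ H.T with H >= 0.

    Damped update rule: H <- H * (1 - beta + beta * (A H) / (H H^T H)).
    Only a sketch of the base model; the semi-supervised refinements
    described above are not included.
    """
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    H = rng.random((n, k))
    beta = 0.5  # damping factor, a common choice for stability
    for _ in range(iters):
        num = A @ H
        den = H @ (H.T @ H) + eps  # eps guards against division by zero
        H = H * (1.0 - beta + beta * num / den)
    return H
```

On a block-structured similarity matrix, the rows of the learned H act as soft cluster memberships, which is the embedding-matrix role it plays in the model above.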
null
null
2405.02698
null
null
http://arxiv.org/pdf/2405.02698v1
2024-05-04T15:37:22Z
2024-05-04T15:37:22Z
Stable Diffusion Dataset Generation for Downstream Classification Tasks
Recent advances in generative artificial intelligence have enabled the creation of high-quality synthetic data that closely mimics real-world data. This paper explores the adaptation of the Stable Diffusion 2.0 model for generating synthetic datasets, using Transfer Learning, Fine-Tuning and generation parameter optimisation techniques to improve the utility of the dataset for downstream classification tasks. We present a class-conditional version of the model that exploits a Class-Encoder and optimisation of key generation parameters. Our methodology led to synthetic datasets that, in a third of cases, produced models that outperformed those trained on real datasets.
[ "['Eugenio Lomurno' \"Matteo D'Oria\" 'Matteo Matteucci']" ]
null
null
2405.02700
null
null
http://arxiv.org/pdf/2405.02700v2
2024-07-05T03:11:17Z
2024-05-04T16:06:50Z
Identification of Novel Modes in Generative Models via Fourier-based Differential Clustering
An interpretable comparison of generative models requires the identification of sample types produced more frequently by each of the involved models. While several quantitative scores have been proposed in the literature to rank different generative models, such score-based evaluations do not reveal the nuanced differences between the generative models in capturing various sample types. In this work, we attempt to solve a differential clustering problem to detect sample types expressed differently by two generative models. To solve the differential clustering problem, we propose a method called Fourier-based Identification of Novel Clusters (FINC) to identify modes produced by a generative model with a higher frequency in comparison to a reference distribution. FINC provides a scalable stochastic algorithm based on random Fourier features to estimate the eigenspace of kernel covariance matrices of two generative models and utilize the principal eigendirections to detect the sample types present more dominantly in each model. We demonstrate the application of the FINC method to large-scale computer vision datasets and generative model frameworks. Our numerical results suggest the scalability of the developed Fourier-based method in highlighting the sample types produced with different frequencies by widely-used generative models. Code is available at \url{https://github.com/buyeah1109/FINC}
[ "['Jingwei Zhang' 'Mohammad Jalali' 'Cheuk Ting Li' 'Farzan Farnia']" ]
null
null
2405.02724
null
null
http://arxiv.org/pdf/2405.02724v1
2024-05-04T17:47:45Z
2024-05-04T17:47:45Z
Taming Equilibrium Bias in Risk-Sensitive Multi-Agent Reinforcement Learning
We study risk-sensitive multi-agent reinforcement learning under general-sum Markov games, where agents optimize the entropic risk measure of rewards with possibly diverse risk preferences. We show that using the regret naively adapted from existing literature as a performance metric could induce policies with equilibrium bias that favor the most risk-sensitive agents and overlook the other agents. To address such deficiency of the naive regret, we propose a novel notion of regret, which we call risk-balanced regret, and show through a lower bound that it overcomes the issue of equilibrium bias. Furthermore, we develop a self-play algorithm for learning Nash, correlated, and coarse correlated equilibria in risk-sensitive Markov games. We prove that the proposed algorithm attains near-optimal regret guarantees with respect to the risk-balanced regret.
[ "['Yingjie Fei' 'Ruitu Xu']" ]
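The entropic risk measure each agent optimizes has a short direct implementation. The log-sum-exp shift below is a standard numerical-stability device; this sketches only the objective, not the self-play learning algorithm.

```python
import math

def entropic_risk(rewards, theta):
    """Entropic risk of a reward sample: (1/theta) * log(mean(exp(theta * r))).

    theta < 0 encodes risk-averse and theta > 0 risk-seeking preferences,
    and the measure approaches the plain mean as theta -> 0. A sketch of
    the per-agent objective only.
    """
    if theta == 0:
        return sum(rewards) / len(rewards)
    m = max(theta * r for r in rewards)  # log-sum-exp shift for stability
    lse = m + math.log(sum(math.exp(theta * r - m) for r in rewards)
                       / len(rewards))
    return lse / theta
```

Agents with very negative theta penalize reward variance heavily, which is why a regret metric that ignores risk preferences can systematically favor the most risk-sensitive agents, the equilibrium bias the paper addresses.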
null
null
2405.02726
null
null
http://arxiv.org/pdf/2405.02726v1
2024-05-04T17:57:24Z
2024-05-04T17:57:24Z
A Mathematical Model of the Hidden Feedback Loop Effect in Machine Learning Systems
Widespread deployment of societal-scale machine learning systems necessitates a thorough understanding of the resulting long-term effects these systems have on their environment, including loss of trustworthiness, bias amplification, and violation of AI safety requirements. We introduce a repeated learning process to jointly describe several phenomena attributed to unintended hidden feedback loops, such as error amplification, induced concept drift, echo chambers and others. The process comprises the entire cycle of obtaining the data, training the predictive model, and delivering predictions to end-users within a single mathematical model. A distinctive feature of such repeated learning setting is that the state of the environment becomes causally dependent on the learner itself over time, thus violating the usual assumptions about the data distribution. We present a novel dynamical systems model of the repeated learning process and prove the limiting set of probability distributions for positive and negative feedback loop modes of the system operation. We conduct a series of computational experiments using an exemplary supervised learning problem on two synthetic data sets. The results of the experiments correspond to the theoretical predictions derived from the dynamical model. Our results demonstrate the feasibility of the proposed approach for studying the repeated learning processes in machine learning systems and open a range of opportunities for further research in the area.
[ "['Andrey Veprikov' 'Alexander Afanasiev' 'Anton Khritankov']" ]
null
null
2405.02731
null
null
http://arxiv.org/pdf/2405.02731v1
2024-05-04T18:31:38Z
2024-05-04T18:31:38Z
Systematic Review: Anomaly Detection in Connected and Autonomous Vehicles
This systematic review focuses on anomaly detection for connected and autonomous vehicles. The initial database search identified 2160 articles, of which 203 were included in this review after rigorous screening and assessment. This study revealed that the most commonly used Artificial Intelligence (AI) algorithms employed in anomaly detection are neural networks like LSTM, CNN, and autoencoders, alongside one-class SVM. Most anomaly-based models were trained using real-world operational vehicle data, although anomalies, such as attacks and faults, were often injected artificially into the datasets. These models were evaluated mostly using five key evaluation metrics: recall, accuracy, precision, F1-score, and false positive rate. The most frequent combination of evaluation metrics for anomaly detection models was accuracy, precision, recall, and F1-score. This systematic review presents several recommendations. First, there is a need to incorporate multiple evaluation metrics to provide a comprehensive assessment of the anomaly detection models. Second, only a small proportion of the studies have made their models open source, indicating a need to share models publicly to facilitate collaboration within the research community, and to validate and compare findings effectively. Third, there is a need for benchmarking datasets with predefined anomalies or cyberattacks to test and improve the effectiveness of the proposed anomaly-based detection models. Furthermore, there is a need for future research to investigate the deployment of anomaly detection to a vehicle to assess its performance on the road. There is a notable lack of research on intrusion detection systems using protocols other than CAN, such as Ethernet and FlexRay.
[ "['J. R. V. Solaas' 'N. Tuptuk' 'E. Mariconti']" ]
null
null
2405.02745
null
null
http://arxiv.org/pdf/2405.02745v2
2024-05-25T04:18:29Z
2024-05-04T19:56:49Z
Understanding Server-Assisted Federated Learning in the Presence of Incomplete Client Participation
Existing works in federated learning (FL) often assume an ideal system with either full client or uniformly distributed client participation. However, in practice, it has been observed that some clients may never participate in FL training (aka incomplete client participation) due to a myriad of system heterogeneity factors. A popular approach to mitigate impacts of incomplete client participation is the server-assisted federated learning (SA-FL) framework, where the server is equipped with an auxiliary dataset. However, although SA-FL has been empirically shown to be effective in addressing the incomplete client participation problem, there remains a lack of theoretical understanding for SA-FL. Meanwhile, the ramifications of incomplete client participation in conventional FL are also poorly understood. These theoretical gaps motivate us to rigorously investigate SA-FL. Toward this end, we first show that conventional FL is {\em not} PAC-learnable under incomplete client participation in the worst case. Then, we show that the PAC-learnability of FL with incomplete client participation can indeed be revived by SA-FL, which theoretically justifies the use of SA-FL for the first time. Lastly, to provide practical guidance for SA-FL training under {\em incomplete client participation}, we propose the $\mathsf{SAFARI}$ (server-assisted federated averaging) algorithm that enjoys the same linear convergence speedup guarantees as classic FL with ideal client participation assumptions, offering the first SA-FL algorithm with convergence guarantee. Extensive experiments on different datasets show $\mathsf{SAFARI}$ significantly improves the performance under incomplete client participation.
[ "['Haibo Yang' 'Peiwen Qiu' 'Prashant Khanduri' 'Minghong Fang' 'Jia Liu']" ]
null
null
2405.02749
null
null
http://arxiv.org/pdf/2405.02749v1
2024-05-04T20:34:06Z
2024-05-04T20:34:06Z
Sub-goal Distillation: A Method to Improve Small Language Agents
While Large Language Models (LLMs) have demonstrated significant promise as agents in interactive tasks, their substantial computational requirements and restricted number of calls constrain their practical utility, especially in long-horizon interactive tasks such as decision-making or in scenarios involving continuous ongoing tasks. To address these constraints, we propose a method for transferring the performance of an LLM with billions of parameters to a much smaller language model (770M parameters). Our approach involves constructing a hierarchical agent comprising a planning module, which learns through Knowledge Distillation from an LLM to generate sub-goals, and an execution module, which learns to accomplish these sub-goals using elementary actions. In detail, we leverage an LLM to annotate an oracle path with a sequence of sub-goals towards completing a goal. Subsequently, we utilize this annotated data to fine-tune both the planning and execution modules. Importantly, neither module relies on real-time access to an LLM during inference, significantly reducing the overall cost associated with LLM interactions to a fixed cost. In ScienceWorld, a challenging and multi-task interactive text environment, our method surpasses standard imitation learning based solely on elementary actions by 16.7% (absolute). Our analysis highlights the efficiency of our approach compared to other LLM-based methods. Our code and annotated data for distillation can be found on GitHub.
[ "['Maryam Hashemzadeh' 'Elias Stengel-Eskin' 'Sarath Chandar'\n 'Marc-Alexandre Cote']" ]
null
null
2405.02754
null
null
http://arxiv.org/pdf/2405.02754v1
2024-05-04T20:59:06Z
2024-05-04T20:59:06Z
Implicit Safe Set Algorithm for Provably Safe Reinforcement Learning
Deep reinforcement learning (DRL) has demonstrated remarkable performance in many continuous control tasks. However, a significant obstacle to the real-world application of DRL is the lack of safety guarantees. Although DRL agents can satisfy system safety in expectation through reward shaping, designing agents to consistently meet hard constraints (e.g., safety specifications) at every time step remains a formidable challenge. In contrast, existing work in the field of safe control provides guarantees on persistent satisfaction of hard safety constraints. However, these methods require explicit analytical system dynamics models to synthesize safe control, which are typically inaccessible in DRL settings. In this paper, we present a model-free safe control algorithm, the implicit safe set algorithm, for synthesizing safeguards for DRL agents that ensure provable safety throughout training. The proposed algorithm synthesizes a safety index (barrier certificate) and a subsequent safe control law solely by querying a black-box dynamic function (e.g., a digital twin simulator). Moreover, we theoretically prove that the implicit safe set algorithm guarantees finite time convergence to the safe set and forward invariance for both continuous-time and discrete-time systems. We validate the proposed algorithm on the state-of-the-art Safety Gym benchmark, where it achieves zero safety violations while gaining $95\% \pm 9\%$ cumulative reward compared to state-of-the-art safe DRL methods. Furthermore, the resulting algorithm scales well to high-dimensional systems with parallel computing.
[ "['Weiye Zhao' 'Tairan He' 'Feihan Li' 'Changliu Liu']" ]
null
null
2405.02762
null
null
http://arxiv.org/pdf/2405.02762v1
2024-05-04T21:55:33Z
2024-05-04T21:55:33Z
TK-Planes: Tiered K-Planes with High Dimensional Feature Vectors for Dynamic UAV-based Scenes
In this paper, we present a new approach to bridge the domain gap between synthetic and real-world data for unmanned aerial vehicle (UAV)-based perception. Our formulation is designed for dynamic scenes, consisting of moving objects or human actions, where the goal is to recognize the pose or actions. We propose an extension of K-Planes Neural Radiance Field (NeRF), wherein our algorithm stores a set of tiered feature vectors. The tiered feature vectors are generated to effectively model conceptual information about a scene as well as an image decoder that transforms output feature maps into RGB images. Our technique leverages the information amongst both static and dynamic objects within a scene and is able to capture salient scene attributes of high altitude videos. We evaluate its performance on challenging datasets, including Okutama Action and UG2, and observe considerable improvement in accuracy over state of the art aerial perception algorithms.
[ "['Christopher Maxey' 'Jaehoon Choi' 'Yonghan Lee' 'Hyungtae Lee'\n 'Dinesh Manocha' 'Heesung Kwon']" ]
null
null
2405.02764
null
null
http://arxiv.org/pdf/2405.02764v1
2024-05-04T22:00:28Z
2024-05-04T22:00:28Z
Assessing Adversarial Robustness of Large Language Models: An Empirical Study
Large Language Models (LLMs) have revolutionized natural language processing, but their robustness against adversarial attacks remains a critical concern. We present a novel white-box style attack approach that exposes vulnerabilities in leading open-source LLMs, including Llama, OPT, and T5. We assess the impact of model size, structure, and fine-tuning strategies on their resistance to adversarial perturbations. Our comprehensive evaluation across five diverse text classification tasks establishes a new benchmark for LLM robustness. The findings of this study have far-reaching implications for the reliable deployment of LLMs in real-world applications and contribute to the advancement of trustworthy AI systems.
[ "['Zeyu Yang' 'Zhao Meng' 'Xiaochen Zheng' 'Roger Wattenhofer']" ]
null
null
2405.02766
null
null
http://arxiv.org/pdf/2405.02766v1
2024-05-04T22:02:58Z
2024-05-04T22:02:58Z
Beyond Unimodal Learning: The Importance of Integrating Multiple Modalities for Lifelong Learning
While humans excel at continual learning (CL), deep neural networks (DNNs) exhibit catastrophic forgetting. A salient feature of the brain that allows effective CL is that it utilizes multiple modalities for learning and inference, which is underexplored in DNNs. Therefore, we study the role and interactions of multiple modalities in mitigating forgetting and introduce a benchmark for multimodal continual learning. Our findings demonstrate that leveraging multiple views and complementary information from multiple modalities enables the model to learn more accurate and robust representations. This makes the model less vulnerable to modality-specific regularities and considerably mitigates forgetting. Furthermore, we observe that individual modalities exhibit varying degrees of robustness to distribution shift. Finally, we propose a method for integrating and aligning the information from different modalities by utilizing the relational structural similarities between the data points in each modality. Our method sets a strong baseline that enables both single- and multimodal inference. Our study provides a promising case for further exploring the role of multiple modalities in enabling CL and provides a standard benchmark for future research.
[ "['Fahad Sarfraz' 'Bahram Zonooz' 'Elahe Arani']" ]
null
null
2405.02769
null
null
http://arxiv.org/pdf/2405.02769v1
2024-05-04T22:48:53Z
2024-05-04T22:48:53Z
Linear Convergence of Independent Natural Policy Gradient in Games with Entropy Regularization
This work focuses on the entropy-regularized independent natural policy gradient (NPG) algorithm in multi-agent reinforcement learning. In this work, agents are assumed to have access to an oracle with exact policy evaluation and seek to maximize their respective independent rewards. Each individual's reward is assumed to depend on the actions of all the agents in the multi-agent system, leading to a game between agents. We assume all agents make decisions under a policy with bounded rationality, which is enforced by the introduction of entropy regularization. In practice, a smaller regularization implies the agents are more rational and behave closer to Nash policies. On the other hand, agents with larger regularization act more randomly, which ensures more exploration. We show that, under sufficient entropy regularization, the dynamics of this system converge at a linear rate to the quantal response equilibrium (QRE). Although regularization assumptions prevent the QRE from approximating a Nash equilibrium, our findings apply to a wide range of games, including cooperative, potential, and two-player matrix games. We also provide extensive empirical results on multiple games (including Markov games) as a verification of our theoretical analysis.
[ "['Youbang Sun' 'Tao Liu' 'P. R. Kumar' 'Shahin Shahrampour']" ]
null
null
2405.02770
null
null
http://arxiv.org/pdf/2405.02770v2
2024-05-16T17:24:01Z
2024-05-04T22:50:39Z
PhilHumans: Benchmarking Machine Learning for Personal Health
The use of machine learning in Healthcare has the potential to improve patient outcomes as well as broaden the reach and affordability of Healthcare. The history of other application areas indicates that strong benchmarks are essential for the development of intelligent systems. We present Personal Health Interfaces Leveraging HUman-MAchine Natural interactions (PhilHumans), a holistic suite of benchmarks for machine learning across different Healthcare settings - talk therapy, diet coaching, emergency care, intensive care, obstetric sonography - as well as different learning settings, such as action anticipation, time-series modeling, insight mining, language modeling, computer vision, reinforcement learning and program synthesis.
[ "['Vadim Liventsev' 'Vivek Kumar' 'Allmin Pradhap Singh Susaiyah'\n 'Zixiu Wu' 'Ivan Rodin' 'Asfand Yaar' 'Simone Balloccu'\n 'Marharyta Beraziuk' 'Sebastiano Battiato' 'Giovanni Maria Farinella'\n 'Aki Härmä' 'Rim Helaoui' 'Milan Petkovic' 'Diego Reforgiato Recupero'\n 'Ehud Reiter' 'Daniele Riboni' 'Raymond Sterling']" ]
null
null
2405.02771
null
null
http://arxiv.org/pdf/2405.02771v1
2024-05-04T23:16:48Z
2024-05-04T23:16:48Z
MMEarth: Exploring Multi-Modal Pretext Tasks For Geospatial Representation Learning
The volume of unlabelled Earth observation (EO) data is huge, but many important applications lack labelled training data. However, EO data offers the unique opportunity to pair data from different modalities and sensors automatically based on geographic location and time, at virtually no human labor cost. We seize this opportunity to create a diverse multi-modal pretraining dataset at global scale. Using this new corpus of 1.2 million locations, we propose a Multi-Pretext Masked Autoencoder (MP-MAE) approach to learn general-purpose representations for optical satellite images. Our approach builds on the ConvNeXt V2 architecture, a fully convolutional masked autoencoder (MAE). Drawing upon a suite of multi-modal pretext tasks, we demonstrate that our MP-MAE approach outperforms both MAEs pretrained on ImageNet and MAEs pretrained on domain-specific satellite images. This is shown on several downstream tasks including image classification and semantic segmentation. We find that multi-modal pretraining notably improves the linear probing performance, e.g. 4pp on BigEarthNet and 16pp on So2Sat, compared to pretraining on optical satellite images only. We show that this also leads to better label and parameter efficiency which are crucial aspects in global scale applications.
[ "['Vishal Nedungadi' 'Ankit Kariryaa' 'Stefan Oehmcke' 'Serge Belongie'\n 'Christian Igel' 'Nico Lang']" ]
null
null
2405.02774
null
null
http://arxiv.org/pdf/2405.02774v1
2024-05-05T00:08:00Z
2024-05-05T00:08:00Z
Get more for less: Principled Data Selection for Warming Up Fine-Tuning in LLMs
This work focuses on leveraging and selecting from vast, unlabeled, open data to pre-fine-tune a pre-trained language model. The goal is to minimize the need for costly domain-specific data for subsequent fine-tuning while achieving desired performance levels. While many data selection algorithms have been designed for small-scale applications, rendering them unsuitable for our context, some emerging methods do cater to language data scales. However, they often prioritize data that aligns with the target distribution. While this strategy may be effective when training a model from scratch, it can yield limited results when the model has already been pre-trained on a different distribution. Differing from prior work, our key idea is to select data that nudges the pre-training distribution closer to the target distribution. We show the optimality of this approach for fine-tuning tasks under certain conditions. We demonstrate the efficacy of our methodology across a diverse array of tasks (NLU, NLG, zero-shot) with models up to 2.7B, showing that it consistently surpasses other selection methods. Moreover, our proposed method is significantly faster than existing techniques, scaling to millions of samples within a single GPU hour. Our code is open-sourced (Code repository: https://anonymous.4open.science/r/DV4LLM-D761/ ). While fine-tuning offers significant potential for enhancing performance across diverse tasks, its associated costs often limit its widespread adoption; with this work, we hope to lay the groundwork for cost-effective fine-tuning, making its benefits more accessible.
[ "['Feiyang Kang' 'Hoang Anh Just' 'Yifan Sun' 'Himanshu Jahagirdar'\n 'Yuanzhi Zhang' 'Rongxing Du' 'Anit Kumar Sahu' 'Ruoxi Jia']" ]
null
null
2405.02783
null
null
http://arxiv.org/pdf/2405.02783v2
2024-06-28T23:30:36Z
2024-05-05T01:54:21Z
Linear Noise Approximation Assisted Bayesian Inference on Mechanistic Model of Partially Observed Stochastic Reaction Network
To support mechanism online learning and facilitate digital twin development for biomanufacturing processes, this paper develops an efficient Bayesian inference approach for partially observed enzymatic stochastic reaction network (SRN), a fundamental building block of multi-scale bioprocess mechanistic model. To tackle the critical challenges brought by the nonlinear stochastic differential equations (SDEs)-based mechanistic model with partially observed state and having measurement errors, an interpretable Bayesian updating linear noise approximation (LNA) metamodel, incorporating the structure information of the mechanistic model, is proposed to approximate the likelihood of observations. Then, an efficient posterior sampling approach is developed by utilizing the gradients of the derived likelihood to speed up the convergence of Markov Chain Monte Carlo (MCMC). The empirical study demonstrates that the proposed approach has a promising performance.
[ "['Wandi Xu' 'Wei Xie']" ]
null
null
2405.02790
null
null
http://arxiv.org/pdf/2405.02790v1
2024-05-05T02:10:00Z
2024-05-05T02:10:00Z
Confidential and Protected Disease Classifier using Fully Homomorphic Encryption
With the rapid surge in the prevalence of Large Language Models (LLMs), individuals are increasingly turning to conversational AI for initial insights across various domains, including health-related inquiries such as disease diagnosis. Many users seek potential causes on platforms like ChatGPT or Bard before consulting a medical professional for their ailment. These platforms offer valuable benefits by streamlining the diagnosis process, alleviating the significant workload of healthcare practitioners, and saving users both time and money by avoiding unnecessary doctor visits. However, despite the convenience of such platforms, sharing personal medical data online poses risks, including the presence of malicious platforms or potential eavesdropping by attackers. To address privacy concerns, we propose a novel framework combining FHE and Deep Learning for a secure and private diagnosis system. Operating on a question-and-answer-based model akin to an interaction with a medical practitioner, this end-to-end secure system employs Fully Homomorphic Encryption (FHE) to handle encrypted input data. Given FHE's computational constraints, we adapt deep neural networks and activation functions to the encrypted domain. Further, we also propose a faster algorithm to compute the summation of ciphertext elements. Through rigorous experiments, we demonstrate the efficacy of our approach. The proposed framework achieves strict security and privacy with minimal loss in performance.
[ "['Aditya Malik' 'Nalini Ratha' 'Bharat Yalavarthi' 'Tilak Sharma'\n 'Arjun Kaushik' 'Charanjit Jutla']" ]
null
null
2405.02795
null
null
http://arxiv.org/pdf/2405.02795v2
2024-05-30T18:44:49Z
2024-05-05T02:29:41Z
Graph as Point Set
Graph is a fundamental data structure to model interconnections between entities. Set, on the contrary, stores independent elements. To learn graph representations, current Graph Neural Networks (GNNs) primarily use message passing to encode the interconnections. In contrast, this paper introduces a novel graph-to-set conversion method that bijectively transforms interconnected nodes into a set of independent points and then uses a set encoder to learn the graph representation. This conversion method holds dual significance. Firstly, it enables using set encoders to learn from graphs, thereby significantly expanding the design space of GNNs. Secondly, for Transformer, a specific set encoder, we provide a novel and principled approach to inject graph information losslessly, different from all the heuristic structural/positional encoding methods adopted in previous graph transformers. To demonstrate the effectiveness of our approach, we introduce Point Set Transformer (PST), a transformer architecture that accepts a point set converted from a graph as input. Theoretically, PST exhibits superior expressivity for both short-range substructure counting and long-range shortest path distance tasks compared to existing GNNs. Extensive experiments further validate PST's outstanding real-world performance. Besides Transformer, we also devise a Deepset-based set encoder, which achieves performance comparable to representative GNNs, affirming the versatility of our graph-to-set method.
[ "['Xiyuan Wang' 'Pan Li' 'Muhan Zhang']" ]
null
null
2405.02797
null
null
http://arxiv.org/pdf/2405.02797v1
2024-05-05T02:44:04Z
2024-05-05T02:44:04Z
Adapting to Distribution Shift by Visual Domain Prompt Generation
In this paper, we aim to adapt a model at test-time using a few unlabeled data to address distribution shifts. To tackle the challenges of extracting domain knowledge from a limited amount of data, it is crucial to utilize correlated information from pre-trained backbones and source domains. Previous studies fail to utilize recent foundation models with strong out-of-distribution generalization. Additionally, domain-centric designs are not favored in their works. Furthermore, they employ the process of modelling source domains and the process of learning to adapt independently into disjoint training stages. In this work, we propose an approach on top of the pre-computed features of the foundation model. Specifically, we build a knowledge bank to learn the transferable knowledge from source domains. Conditioned on few-shot target data, we introduce a domain prompt generator to condense the knowledge bank into a domain-specific prompt. The domain prompt then directs the visual features towards a particular domain via a guidance module. Moreover, we propose a domain-aware contrastive loss and employ meta-learning to facilitate domain knowledge extraction. Extensive experiments are conducted to validate the domain knowledge extraction. The proposed method outperforms previous work on 5 large-scale benchmarks including WILDS and DomainNet.
[ "['Zhixiang Chi' 'Li Gu' 'Tao Zhong' 'Huan Liu' 'Yuanhao Yu'\n 'Konstantinos N Plataniotis' 'Yang Wang']" ]
null
null
2405.02803
null
null
http://arxiv.org/pdf/2405.02803v1
2024-05-05T03:25:25Z
2024-05-05T03:25:25Z
Is Flash Attention Stable?
Training large-scale machine learning models poses distinct system challenges, given both the size and complexity of today's workloads. Recently, many organizations training state-of-the-art Generative AI models have reported cases of instability during training, often taking the form of loss spikes. Numeric deviation has emerged as a potential cause of this training instability, although quantifying this is especially challenging given the costly nature of training runs. In this work, we develop a principled approach to understanding the effects of numeric deviation, and construct proxies to put observations into context when downstream effects are difficult to quantify. As a case study, we apply this framework to analyze the widely-adopted Flash Attention optimization. We find that Flash Attention sees roughly an order of magnitude more numeric deviation as compared to Baseline Attention at BF16 when measured during an isolated forward pass. We then use a data-driven analysis based on the Wasserstein Distance to provide upper bounds on how this numeric deviation impacts model weights during training, finding that the numerical deviation present in Flash Attention is 2-5 times less significant than low-precision training.
[ "['Alicia Golden' 'Samuel Hsia' 'Fei Sun' 'Bilge Acun' 'Basil Hosmer'\n 'Yejin Lee' 'Zachary DeVito' 'Jeff Johnson' 'Gu-Yeon Wei' 'David Brooks'\n 'Carole-Jean Wu']" ]
null
null
2405.02805
null
null
http://arxiv.org/pdf/2405.02805v1
2024-05-05T03:47:56Z
2024-05-05T03:47:56Z
Verlet Flows: Exact-Likelihood Integrators for Flow-Based Generative Models
Approximations in computing model likelihoods with continuous normalizing flows (CNFs) hinder the use of these models for importance sampling of Boltzmann distributions, where exact likelihoods are required. In this work, we present Verlet flows, a class of CNFs on an augmented state-space inspired by symplectic integrators from Hamiltonian dynamics. When used with carefully constructed Taylor-Verlet integrators, Verlet flows provide exact-likelihood generative models which generalize coupled flow architectures from a non-continuous setting while imposing minimal expressivity constraints. On experiments over toy densities, we demonstrate that the variance of the commonly used Hutchinson trace estimator is unsuitable for importance sampling, whereas Verlet flows perform comparably to full autograd trace computations while being significantly faster.
[ "['Ezra Erives' 'Bowen Jing' 'Tommi Jaakkola']" ]
null
null
2405.02807
null
null
http://arxiv.org/pdf/2405.02807v1
2024-05-05T04:00:03Z
2024-05-05T04:00:03Z
Kinematic analysis of structural mechanics based on convolutional neural network
We attempt to use a convolutional neural network to perform kinematic analysis of plane bar structures. Using the 3dsMax animation software and the OpenCV module, we build an image dataset of geometrically stable and geometrically unstable systems. We construct and train a convolutional neural network model based on the TensorFlow and Keras deep learning frameworks. The model achieves 100% accuracy on the training set, validation set, and test set. The accuracy on an additional test set is 93.7%, indicating that a convolutional neural network can learn and master the relevant knowledge of kinematic analysis in structural mechanics. In the future, the generalization ability of the model can be improved through dataset diversity, with the potential to surpass human experts on complex structures. Convolutional neural networks therefore have practical value in the field of kinematic analysis of structural mechanics. Using visualization techniques, we reveal how the convolutional neural network learns and recognizes structural features. Using a pre-trained VGG16 model for feature extraction and fine-tuning, we found that its generalization ability is inferior to that of the self-built model.
[ "['Leye Zhang' 'Xiangxiang Tian' 'Hongjun Zhang']" ]
null
null
2405.02816
null
null
http://arxiv.org/pdf/2405.02816v1
2024-05-05T05:42:33Z
2024-05-05T05:42:33Z
Stochastic RAG: End-to-End Retrieval-Augmented Generation through Expected Utility Maximization
This paper introduces Stochastic RAG--a novel approach for end-to-end optimization of retrieval-augmented generation (RAG) models that relaxes the simplifying assumptions of marginalization and document independence, made in most prior work. Stochastic RAG casts the retrieval process in RAG as a stochastic sampling without replacement process. Through this formulation, we employ straight-through Gumbel-top-k that provides a differentiable approximation for sampling without replacement and enables effective end-to-end optimization for RAG. We conduct extensive experiments on seven diverse datasets on a wide range of tasks, from open-domain question answering to fact verification to slot-filling for relation extraction and to dialogue systems. By applying this optimization method to a recent and effective RAG model, we advance state-of-the-art results on six out of seven datasets.
[ "['Hamed Zamani' 'Michael Bendersky']" ]
null
null
2405.02821
null
null
http://arxiv.org/pdf/2405.02821v1
2024-05-05T06:01:31Z
2024-05-05T06:01:31Z
Sim2Real Transfer for Audio-Visual Navigation with Frequency-Adaptive Acoustic Field Prediction
Sim2real transfer has received increasing attention lately due to the success of learning robotic tasks in simulation end-to-end. While there has been a lot of progress in transferring vision-based navigation policies, the existing sim2real strategy for audio-visual navigation performs data augmentation empirically without measuring the acoustic gap. The sound differs from light in that it spans across much wider frequencies and thus requires a different solution for sim2real. We propose the first treatment of sim2real for audio-visual navigation by disentangling it into acoustic field prediction (AFP) and waypoint navigation. We first validate our design choice in the SoundSpaces simulator and show improvement on the Continuous AudioGoal navigation benchmark. We then collect real-world data to measure the spectral difference between the simulation and the real world by training AFP models that only take a specific frequency subband as input. We further propose a frequency-adaptive strategy that intelligently selects the best frequency band for prediction based on both the measured spectral difference and the energy distribution of the received audio, which improves the performance on the real data. Lastly, we build a real robot platform and show that the transferred policy can successfully navigate to sounding objects. This work demonstrates the potential of building intelligent agents that can see, hear, and act entirely from simulation, and transferring them to the real world.
[ "['Changan Chen' 'Jordi Ramos' 'Anshul Tomar' 'Kristen Grauman']" ]
null
null
2405.02828
null
null
http://arxiv.org/pdf/2405.02828v1
2024-05-05T06:43:52Z
2024-05-05T06:43:52Z
Trojans in Large Language Models of Code: A Critical Review through a Trigger-Based Taxonomy
Large language models (LLMs) have provided a lot of exciting new capabilities in software development. However, the opaque nature of these models makes them difficult to reason about and inspect. Their opacity gives rise to potential security risks, as adversaries can train and deploy compromised models to disrupt the software development process in the victims' organization. This work presents an overview of the current state-of-the-art trojan attacks on large language models of code, with a focus on triggers -- the main design point of trojans -- with the aid of a novel unifying trigger taxonomy framework. We also aim to provide a uniform definition of the fundamental concepts in the area of trojans in Code LLMs. Finally, we draw implications of findings on how code models learn on trigger design.
[ "['Aftab Hussain' 'Md Rafiqul Islam Rabin' 'Toufique Ahmed' 'Bowen Xu'\n 'Premkumar Devanbu' 'Mohammad Amin Alipour']" ]
null
null
2405.02842
null
null
http://arxiv.org/pdf/2405.02842v1
2024-05-05T08:18:42Z
2024-05-05T08:18:42Z
IceFormer: Accelerated Inference with Long-Sequence Transformers on CPUs
One limitation of existing Transformer-based models is that they cannot handle very long sequences as input since their self-attention operations exhibit quadratic time and space complexity. This problem becomes especially acute when Transformers are deployed on hardware platforms equipped only with CPUs. To address this issue, we propose a novel method for accelerating self-attention at inference time that works with pretrained Transformer models out-of-the-box without requiring retraining. We experiment using our method to accelerate various long-sequence Transformers, including a leading LLaMA 2-based LLM, on various benchmarks and demonstrate a greater speedup of 2.73x - 7.63x while retaining 98.6% - 99.6% of the accuracy of the original pretrained models. The code is available on our project website at https://yuzhenmao.github.io/IceFormer/.
[ "['Yuzhen Mao' 'Martin Ester' 'Ke Li']" ]
null
null
2405.02845
null
null
http://arxiv.org/pdf/2405.02845v2
2024-07-14T09:36:45Z
2024-05-05T08:35:23Z
Data-Efficient Molecular Generation with Hierarchical Textual Inversion
Developing an effective molecular generation framework even with a limited number of molecules is often important for its practical deployment, e.g., drug discovery, since acquiring task-related molecular data requires expensive and time-consuming experimental costs. To tackle this issue, we introduce Hierarchical textual Inversion for Molecular generation (HI-Mol), a novel data-efficient molecular generation method. HI-Mol is inspired by the importance of hierarchical information, e.g., both coarse- and fine-grained features, in understanding the molecule distribution. We propose to use multi-level embeddings to reflect such hierarchical features based on the adoption of the recent textual inversion technique in the visual domain, which achieves data-efficient image generation. Compared to the conventional textual inversion method in the image domain using a single-level token embedding, our multi-level token embeddings allow the model to effectively learn the underlying low-shot molecule distribution. We then generate molecules based on the interpolation of the multi-level token embeddings. Extensive experiments demonstrate the superiority of HI-Mol with notable data-efficiency. For instance, on QM9, HI-Mol outperforms the prior state-of-the-art method with 50x less training data. We also show the effectiveness of molecules generated by HI-Mol in low-shot molecular property prediction.
[ "['Seojin Kim' 'Jaehyun Nam' 'Sihyun Yu' 'Younghoon Shin' 'Jinwoo Shin']" ]
null
null
2405.02861
null
null
http://arxiv.org/pdf/2405.02861v1
2024-05-05T09:20:38Z
2024-05-05T09:20:38Z
Revisiting a Pain in the Neck: Semantic Phrase Processing Benchmark for Language Models
We introduce LexBench, a comprehensive evaluation suite designed to test language models (LMs) on ten semantic phrase processing tasks. Unlike prior studies, it is the first work to propose a framework from the comparative perspective to model the general semantic phrase (i.e., lexical collocation) and three fine-grained semantic phrases, including idiomatic expression, noun compound, and verbal construction. Thanks to our benchmark, we assess the performance of 15 LMs across model architectures and parameter scales in classification, extraction, and interpretation tasks. Through the experiments, we first validate the scaling law and find that, as expected, larger models perform better than smaller ones in most tasks. Second, we investigate further through scaling on semantic relation categorization and find that few-shot LMs still lag behind vanilla fine-tuned models in the task. Third, through human evaluation, we find that the performance of strong models is comparable to the human level regarding semantic phrase processing. Our benchmarking findings can serve future research aiming to improve the generic capability of LMs on semantic phrase comprehension. Our source code and data are available at https://github.com/jacklanda/LexBench
[ "['Yang Liu' 'Melissa Xiaohui Qin' 'Hongming Li' 'Chao Huang']" ]
null
null
2405.02876
null
null
http://arxiv.org/pdf/2405.02876v2
2024-05-23T10:10:11Z
2024-05-05T10:13:55Z
Exploring the Improvement of Evolutionary Computation via Large Language Models
Evolutionary computation (EC), as a powerful optimization algorithm, has been applied across various domains. However, as the complexity of problems increases, the limitations of EC have become more apparent. The advent of large language models (LLMs) has not only transformed natural language processing but also extended their capabilities to diverse fields. By harnessing LLMs' vast knowledge and adaptive capabilities, we provide a forward-looking overview of potential improvements LLMs can bring to EC, focusing on the algorithms themselves, population design, and additional enhancements. This presents a promising direction for future research at the intersection of LLMs and EC.
[ "['Jinyu Cai' 'Jinglue Xu' 'Jialong Li' 'Takuto Ymauchi' 'Hitoshi Iba'\n 'Kenji Tei']" ]
null
null
2405.02881
null
null
http://arxiv.org/pdf/2405.02881v2
2024-06-20T16:11:59Z
2024-05-05T10:28:06Z
FedConPE: Efficient Federated Conversational Bandits with Heterogeneous Clients
Conversational recommender systems have emerged as a potent solution for efficiently eliciting user preferences. These systems interactively present queries associated with "key terms" to users and leverage user feedback to estimate user preferences more efficiently. Nonetheless, most existing algorithms adopt a centralized approach. In this paper, we introduce FedConPE, a phase elimination-based federated conversational bandit algorithm, where $M$ agents collaboratively solve a global contextual linear bandit problem with the help of a central server while ensuring secure data management. To effectively coordinate all the clients and aggregate their collected data, FedConPE uses an adaptive approach to construct key terms that minimize uncertainty across all dimensions in the feature space. Furthermore, compared with existing federated linear bandit algorithms, FedConPE offers improved computational and communication efficiency as well as enhanced privacy protections. Our theoretical analysis shows that FedConPE is minimax near-optimal in terms of cumulative regret. We also establish upper bounds for communication costs and conversation frequency. Comprehensive evaluations demonstrate that FedConPE outperforms existing conversational bandit algorithms while using fewer conversations.
[ "['Zhuohua Li' 'Maoli Liu' 'John C. S. Lui']" ]
null
null
2405.02903
null
null
http://arxiv.org/pdf/2405.02903v2
2024-06-09T07:49:00Z
2024-05-05T11:48:50Z
Predicting Open-Hole Laminates Failure Using Support Vector Machines With Classical and Quantum Kernels
Modeling open-hole failure of composites is a complex task, as it involves a highly nonlinear response with interacting failure modes. Numerical modeling of this phenomenon has traditionally been based on the finite element method, which requires a tradeoff between high fidelity and computational cost. To mitigate this shortcoming, recent work has leveraged machine learning to predict the strength of open-hole composite specimens. Here, we also propose using data-based models, but we tackle open-hole composite failure from a classification point of view. More specifically, we show how to train surrogate models to learn the ultimate failure envelope of an open-hole composite plate under in-plane loading. To achieve this, we solve the classification problem via support vector machine (SVM) and test different classifiers by changing the SVM kernel function. The flexibility of kernel-based SVM also allows us to integrate the recently developed quantum kernels in our algorithm and compare them with the standard radial basis function (RBF) kernel. Finally, thanks to kernel-target alignment optimization, we tune the free parameters of all kernels to best separate safe and failure-inducing loading states. The results show classification accuracies higher than 90% for RBF, especially after alignment, followed closely by the quantum kernel classifiers.
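The kernel-target alignment criterion used above to tune kernel parameters can be sketched in plain NumPy. The RBF kernel, the synthetic two-class data, and the coarse grid of `gamma` values below are illustrative assumptions, not the paper's actual setup:

```python
import numpy as np

def rbf_kernel(X, gamma):
    # Gram matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2)
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-gamma * d2)

def kernel_target_alignment(K, y):
    # A(K, yy^T) = <K, yy^T>_F / (||K||_F * ||yy^T||_F), with labels y in {-1, +1};
    # higher alignment means the kernel better separates the two classes.
    Y = np.outer(y, y)
    return np.sum(K * Y) / (np.linalg.norm(K) * np.linalg.norm(Y))

# Toy stand-in for safe vs. failure-inducing loading states
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 0.3, (20, 2)), rng.normal(1, 0.3, (20, 2))])
y = np.array([-1] * 20 + [1] * 20)

# Tune gamma by maximizing alignment over a coarse grid (illustrative grid)
gammas = [0.01, 0.1, 1.0, 10.0]
scores = [kernel_target_alignment(rbf_kernel(X, g), y) for g in gammas]
best_gamma = gammas[int(np.argmax(scores))]
print(best_gamma)
```

The same selection loop applies unchanged to a quantum kernel: only the Gram-matrix computation is swapped out, which is what makes kernel-based SVMs a convenient testbed for comparing classical and quantum kernels.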
[ "['Giorgio Tosti Balducci' 'Boyang Chen' 'Matthias Möller' 'Marc Gerritsma'\n 'Roeland De Breuker']" ]
null
null
2405.02917
null
null
http://arxiv.org/pdf/2405.02917v1
2024-05-05T12:51:38Z
2024-05-05T12:51:38Z
Overconfidence is Key: Verbalized Uncertainty Evaluation in Large Language and Vision-Language Models
Large Language and Vision-Language Models (LLMs/VLMs) have revolutionized the field of AI with their ability to generate human-like text and understand images, but ensuring their reliability is crucial. This paper aims to evaluate the ability of LLMs (GPT4, GPT-3.5, LLaMA2, and PaLM 2) and VLMs (GPT4V and Gemini Pro Vision) to estimate their verbalized uncertainty via prompting. We propose the new Japanese Uncertain Scenes (JUS) dataset, aimed at testing VLM capabilities via difficult queries and object counting, and the Net Calibration Error (NCE) to measure the direction of miscalibration. Results show that both LLMs and VLMs have a high calibration error and are overconfident most of the time, indicating a poor capability for uncertainty estimation. Additionally, we develop prompts for regression tasks, and we show that VLMs have poor calibration when producing mean/standard deviation and 95% confidence intervals.
[ "['Tobias Groot' 'Matias Valdenegro-Toro']" ]
null
null
2405.02936
null
null
http://arxiv.org/pdf/2405.02936v2
2024-05-24T23:45:34Z
2024-05-05T13:56:12Z
On the Tractability of SHAP Explanations under Markovian Distributions
Thanks to its solid theoretical foundation, the SHAP framework is arguably one of the most widely utilized frameworks for local explainability of ML models. Despite its popularity, its exact computation is known to be very challenging, proven to be NP-Hard in various configurations. Recent works have unveiled positive complexity results regarding the computation of the SHAP score for specific model families, encompassing decision trees, random forests, and some classes of boolean circuits. Yet, all these positive results hinge on the assumption of feature independence, which is often unrealistic in real-world scenarios. In this article, we investigate the computational complexity of the SHAP score by relaxing this assumption and introducing a Markovian perspective. We show that, under the Markovian assumption, computing the SHAP score for the class of weighted automata, disjoint DNFs, and decision trees can be performed in polynomial time, offering a first positive complexity result for the problem of SHAP score computation that transcends the limitations of the feature independence assumption.
[ "['Reda Marzouk' 'Colin de La Higuera']" ]
null
null
2405.02952
null
null
http://arxiv.org/pdf/2405.02952v1
2024-05-05T14:39:43Z
2024-05-05T14:39:43Z
Accelerating Legacy Numerical Solvers by Non-intrusive Gradient-based Meta-solving
Scientific computing is an essential tool for scientific discovery and engineering design, and its computational cost is always a main concern in practice. To accelerate scientific computing, it is a promising approach to use machine learning (especially meta-learning) techniques for selecting hyperparameters of traditional numerical methods. There have been numerous proposals in this direction, but many of them require automatically differentiable numerical methods. In reality, however, many practical applications still depend on well-established but non-automatically-differentiable legacy codes, which prevents practitioners from applying state-of-the-art research to their own problems. To resolve this problem, we propose a non-intrusive methodology with a novel gradient estimation technique to combine machine learning and legacy numerical codes without any modification. We theoretically and numerically show the advantage of the proposed method over other baselines and present applications of accelerating established non-automatically-differentiable numerical solvers implemented in PETSc, a widely used open-source numerical software library.
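The abstract does not spell out the paper's gradient estimator, but the general idea of getting gradients from a non-differentiable black-box solver can be illustrated with a simultaneous-perturbation (SPSA-style) estimate, which needs only two solver evaluations per step. The quadratic `legacy_solver_cost` below is a toy stand-in for a real solver's cost (e.g., iteration count as a function of its hyperparameters):

```python
import numpy as np

def legacy_solver_cost(theta):
    """Toy stand-in for a black-box legacy solver: returns a scalar cost
    as a function of solver hyperparameters theta (no gradients exposed)."""
    return (theta[0] - 1.5) ** 2 + (theta[1] + 0.5) ** 2 + 3.0

def spsa_gradient(f, theta, eps=1e-2, seed=0):
    # Perturb all coordinates at once with a random +/-1 direction; the
    # symmetric difference of two black-box evaluations estimates the gradient
    # without any modification to the solver itself.
    rng = np.random.default_rng(seed)
    delta = rng.choice([-1.0, 1.0], size=theta.shape)
    diff = f(theta + eps * delta) - f(theta - eps * delta)
    return diff / (2 * eps) * delta  # delta_i = +/-1, so 1/delta_i = delta_i

theta = np.array([0.0, 0.0])
for step in range(200):
    theta -= 0.1 * spsa_gradient(legacy_solver_cost, theta, seed=step)
print(np.round(theta, 2))  # converges near the optimum [1.5, -0.5]
```

In a meta-learning setting, `theta` would parameterize the hyperparameter-selection model and `f` would wrap a call into the legacy code (e.g., a PETSc solve), which is what makes the approach non-intrusive.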
[ "['Sohei Arisaka' 'Qianxiao Li']" ]
null
null
2405.02953
null
null
http://arxiv.org/pdf/2405.02953v1
2024-05-05T14:47:24Z
2024-05-05T14:47:24Z
Analysis of the Identifying Regulation with Adversarial Surrogates Algorithm
Given a time-series of noisy measured outputs of a dynamical system z[k], k=1...N, the Identifying Regulation with Adversarial Surrogates (IRAS) algorithm aims to find a non-trivial first integral of the system, namely, a scalar function g() such that g(z[i]) = g(z[j]), for all i,j. IRAS has been suggested recently and was used successfully in several learning tasks in models from biology and physics. Here, we give the first rigorous analysis of this algorithm in a specific setting. We assume that the observations admit a linear first integral and that they are contaminated by Gaussian noise. We show that in this case the IRAS iterations are closely related to the self-consistent-field (SCF) iterations for solving a generalized Rayleigh quotient minimization problem. Using this approach, we derive several sufficient conditions guaranteeing local convergence of IRAS to the correct first integral.
[ "['Ron Teichner' 'Ron Meir' 'Michael Margaliot']" ]
null
null
2405.02954
null
null
http://arxiv.org/pdf/2405.02954v1
2024-05-05T14:48:13Z
2024-05-05T14:48:13Z
Source-Free Domain Adaptation Guided by Vision and Vision-Language Pre-Training
Source-free domain adaptation (SFDA) aims to adapt a source model trained on a fully-labeled source domain to a related but unlabeled target domain. While the source model is a key avenue for acquiring target pseudolabels, the generated pseudolabels may exhibit source bias. In the conventional SFDA pipeline, a large data (e.g. ImageNet) pre-trained feature extractor is used to initialize the source model at the start of source training, and subsequently discarded. Despite having diverse features important for generalization, the pre-trained feature extractor can overfit to the source data distribution during source training and forget relevant target domain knowledge. Rather than discarding this valuable knowledge, we introduce an integrated framework to incorporate pre-trained networks into the target adaptation process. The proposed framework is flexible and allows us to plug modern pre-trained networks into the adaptation process to leverage their stronger representation learning capabilities. For adaptation, we propose the Co-learn algorithm to improve target pseudolabel quality collaboratively through the source model and a pre-trained feature extractor. Building on the recent success of the vision-language model CLIP in zero-shot image recognition, we present an extension Co-learn++ to further incorporate CLIP's zero-shot classification decisions. We evaluate on 3 benchmark datasets and include more challenging scenarios such as open-set, partial-set and open-partial SFDA. Experimental results demonstrate that our proposed strategy improves adaptation performance and can be successfully integrated with existing SFDA methods.
[ "['Wenyu Zhang' 'Li Shen' 'Chuan-Sheng Foo']" ]
null
null
2405.02968
null
null
http://arxiv.org/pdf/2405.02968v2
2024-05-07T09:36:54Z
2024-05-05T15:27:05Z
CoverLib: Classifiers-equipped Experience Library by Iterative Problem Distribution Coverage Maximization for Domain-tuned Motion Planning
Library-based methods are known to be very effective for fast motion planning by adapting an experience retrieved from a precomputed library. This article presents CoverLib, a principled approach for constructing and utilizing such a library. CoverLib iteratively adds an experience-classifier-pair to the library, where each classifier corresponds to an adaptable region of the experience within the problem space. This iterative process is an active procedure, as it selects the next experience based on its ability to effectively cover the uncovered region. During the query phase, these classifiers are utilized to select an experience that is expected to be adaptable for a given problem. Experimental results demonstrate that CoverLib effectively mitigates the trade-off between plannability and speed observed in global (e.g. sampling-based) and local (e.g. optimization-based) methods. As a result, it achieves both fast planning and high success rates over the problem domain. Moreover, due to its adaptation-algorithm-agnostic nature, CoverLib seamlessly integrates with various adaptation methods, including nonlinear programming-based and sampling-based algorithms.
[ "['Hirokazu Ishida' 'Naoki Hiraoka' 'Kei Okada' 'Masayuki Inaba']" ]
null
null
2405.02969
null
null
http://arxiv.org/pdf/2405.02969v1
2024-05-05T15:27:56Z
2024-05-05T15:27:56Z
Towards a Flexible and High-Fidelity Approach to Distributed DNN Training Emulation
We propose NeuronaBox, a flexible, user-friendly, and high-fidelity approach to emulate DNN training workloads. We argue that to accurately observe performance, it is possible to execute the training workload on a subset of real nodes and emulate the networked execution environment along with the collective communication operations. Initial results from a proof-of-concept implementation show that NeuronaBox replicates the behavior of actual systems with high accuracy, with an error margin of less than 1% between the emulated measurements and the real system.
[ "['Banruo Liu' 'Mubarak Adetunji Ojewale' 'Yuhan Ding' 'Marco Canini']" ]
null
null
2405.02977
null
null
http://arxiv.org/pdf/2405.02977v1
2024-05-05T15:50:02Z
2024-05-05T15:50:02Z
SkelCap: Automated Generation of Descriptive Text from Skeleton Keypoint Sequences
Numerous sign language datasets exist, yet they typically cover only a limited selection of the thousands of signs used globally. Moreover, creating diverse sign language datasets is an expensive and challenging task due to the costs associated with gathering a varied group of signers. Motivated by these challenges, we aimed to develop a solution that addresses these limitations. In this context, we focused on textually describing body movements from skeleton keypoint sequences, leading to the creation of a new dataset. We structured this dataset around AUTSL, a comprehensive isolated Turkish sign language dataset. We also developed a baseline model, SkelCap, which can generate textual descriptions of body movements. This model processes the skeleton keypoints data as a vector, applies a fully connected layer for embedding, and utilizes a transformer neural network for sequence-to-sequence modeling. We conducted extensive evaluations of our model, including signer-agnostic and sign-agnostic assessments. The model achieved promising results, with a ROUGE-L score of 0.98 and a BLEU-4 score of 0.94 in the signer-agnostic evaluation. The dataset we have prepared, namely the AUTSL-SkelCap, will be made publicly available soon.
[ "['Ali Emre Keskin' 'Hacer Yalim Keles']" ]
null
null
2405.02984
null
null
http://arxiv.org/pdf/2405.02984v1
2024-05-05T16:07:23Z
2024-05-05T16:07:23Z
E-TSL: A Continuous Educational Turkish Sign Language Dataset with Baseline Methods
This study introduces the continuous Educational Turkish Sign Language (E-TSL) dataset, collected from online Turkish language lessons for 5th, 6th, and 8th grades. The dataset comprises 1,410 videos totaling nearly 24 hours and includes performances from 11 signers. Turkish, an agglutinative language, poses unique challenges for sign language translation, particularly with a vocabulary where 64% are singleton words and 85% are rare words, appearing less than five times. We developed two baseline models to address these challenges: the Pose to Text Transformer (P2T-T) and the Graph Neural Network based Transformer (GNN-T) models. The GNN-T model achieved 19.13% BLEU-1 score and 3.28% BLEU-4 score, presenting a significant challenge compared to existing benchmarks. The P2T-T model, while demonstrating slightly lower performance in BLEU scores, achieved a higher ROUGE-L score of 22.09%. Additionally, we benchmarked our model using the well-known PHOENIX-Weather 2014T dataset to validate our approach.
[ "['Şükrü Öztürk' 'Hacer Yalim Keles']" ]
null
null
2405.03003
null
null
http://arxiv.org/pdf/2405.03003v1
2024-05-05T17:15:24Z
2024-05-05T17:15:24Z
Parameter-Efficient Fine-Tuning with Discrete Fourier Transform
Low-rank adaptation (LoRA) has recently gained much interest in fine-tuning foundation models. It effectively reduces the number of trainable parameters by incorporating low-rank matrices $A$ and $B$ to represent the weight change, i.e., $\Delta W = BA$. Despite LoRA's progress, it faces storage challenges when handling extensive customization adaptations or larger base models. In this work, we aim to further compress trainable parameters by leveraging the powerful expressiveness of the Fourier transform. Specifically, we introduce FourierFT, which treats $\Delta W$ as a matrix in the spatial domain and learns only a small fraction of its spectral coefficients. With the trained spectral coefficients, we apply the inverse discrete Fourier transform to recover $\Delta W$. Empirically, our FourierFT method shows comparable or better performance with fewer parameters than LoRA on various tasks, including natural language understanding, natural language generation, instruction tuning, and image classification. For example, when performing instruction tuning on the LLaMA2-7B model, FourierFT surpasses LoRA with only 0.064M trainable parameters, compared to LoRA's 33.5M. Our code is released at https://github.com/Chaos96/fourierft.
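The core parameterization described above, learning a sparse set of spectral coefficients and recovering $\Delta W$ via the inverse DFT, can be sketched as follows. The matrix shape, index sampling, and scaling are illustrative assumptions and not the released implementation:

```python
import numpy as np

def fourierft_delta_w(shape, indices, coeffs, scale=1.0):
    """Reconstruct the weight update Delta W from a sparse set of learned
    spectral coefficients via the inverse 2-D discrete Fourier transform.
    `indices` are (row, col) frequency locations; `coeffs` are their values."""
    F = np.zeros(shape, dtype=complex)
    rows, cols = zip(*indices)
    F[list(rows), list(cols)] = coeffs
    return scale * np.fft.ifft2(F).real

# Toy example: a 16x16 weight update parameterized by only 8 coefficients
rng = np.random.default_rng(0)
n = 8
idx = [tuple(p) for p in rng.integers(0, 16, size=(n, 2))]
c = rng.normal(size=n)  # these would be the trainable parameters
dW = fourierft_delta_w((16, 16), idx, c)
print(dW.shape)  # a dense 16x16 update from only 8 trainable scalars
```

The storage win comes from persisting only the `n` coefficients (and their fixed frequency locations) per adaptation, rather than the $A$ and $B$ factors that LoRA stores.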
[ "['Ziqi Gao' 'Qichao Wang' 'Aochuan Chen' 'Zijing Liu' 'Bingzhe Wu'\n 'Liang Chen' 'Jia Li']" ]
null
null
2405.03004
null
null
http://arxiv.org/pdf/2405.03004v1
2024-05-05T17:19:35Z
2024-05-05T17:19:35Z
Exploring prompts to elicit memorization in masked language model-based named entity recognition
Training data memorization in language models impacts model capability (generalization) and safety (privacy risk). This paper focuses on analyzing prompts' impact on detecting the memorization of 6 masked language model-based named entity recognition models. Specifically, we employ a diverse set of 400 automatically generated prompts, and a pairwise dataset where each pair consists of one person's name from the training set and another name out of the set. A prompt completed with a person's name serves as input for getting the model's confidence in predicting this name. Finally, the prompt performance of detecting model memorization is quantified by the percentage of name pairs for which the model has higher confidence for the name from the training set. We show that the performance of different prompts varies by as much as 16 percentage points on the same model, and prompt engineering further increases the gap. Moreover, our experiments demonstrate that prompt performance is model-dependent but does generalize across different name sets. A comprehensive analysis indicates how prompt performance is influenced by prompt properties, contained tokens, and the model's self-attention weights on the prompt.
[ "['Yuxi Xia' 'Anastasiia Sedova' 'Pedro Henrique Luz de Araujo'\n 'Vasiliki Kougia' 'Lisa Nußbaumer' 'Benjamin Roth']" ]
null
null
2405.03005
null
null
http://arxiv.org/pdf/2405.03005v1
2024-05-05T17:27:22Z
2024-05-05T17:27:22Z
Safe Reinforcement Learning with Learned Non-Markovian Safety Constraints
In safe Reinforcement Learning (RL), safety cost is typically defined as a function dependent on the immediate state and actions. In practice, safety constraints can often be non-Markovian due to the insufficient fidelity of state representation, and safety cost may not be known. We therefore address a general setting where safety labels (e.g., safe or unsafe) are associated with state-action trajectories. Our key contributions are: first, we design a safety model that specifically performs credit assignment to assess contributions of partial state-action trajectories on safety. This safety model is trained using a labeled safety dataset. Second, using RL-as-inference strategy we derive an effective algorithm for optimizing a safe policy using the learned safety model. Finally, we devise a method to dynamically adapt the tradeoff coefficient between reward maximization and safety compliance. We rewrite the constrained optimization problem into its dual problem and derive a gradient-based method to dynamically adjust the tradeoff coefficient during training. Our empirical results demonstrate that this approach is highly scalable and able to satisfy sophisticated non-Markovian safety constraints.
[ "['Siow Meng Low' 'Akshat Kumar']" ]
null
null
2405.03008
null
null
http://arxiv.org/pdf/2405.03008v2
2024-05-11T05:15:41Z
2024-05-05T17:34:38Z
DVMSR: Distillated Vision Mamba for Efficient Super-Resolution
Efficient Image Super-Resolution (SR) aims to accelerate SR network inference by minimizing computational complexity and network parameters while preserving performance. Existing state-of-the-art efficient image super-resolution methods are based on convolutional neural networks. Few attempts have been made with Mamba to harness its long-range modeling capability and efficient computational complexity, which has shown impressive performance on high-level vision tasks. In this paper, we propose DVMSR, a novel lightweight image SR network that incorporates Vision Mamba and a distillation strategy. The network of DVMSR consists of three modules: feature extraction convolution, multiple stacked Residual State Space Blocks (RSSBs), and a reconstruction module. Specifically, the deep feature extraction module is composed of several residual state space blocks (RSSB), each of which has several Vision Mamba Modules (ViMM) together with a residual connection. To achieve efficiency improvements while maintaining comparable performance, we apply a distillation strategy to the Vision Mamba network. Specifically, we leverage the rich representation knowledge of the teacher network as additional supervision for the output of the lightweight student networks. Extensive experiments have demonstrated that our proposed DVMSR can outperform state-of-the-art efficient SR methods in terms of model parameters while maintaining the performance of both PSNR and SSIM. The source code is available at https://github.com/nathan66666/DVMSR.git
[ "['Xiaoyan Lei' 'Wenlong Zhang' 'Weifeng Cao']" ]
null
null
2405.03052
null
null
http://arxiv.org/pdf/2405.03052v3
2024-05-10T14:09:38Z
2024-05-05T21:06:07Z
A View on Out-of-Distribution Identification from a Statistical Testing Theory Perspective
We study the problem of efficiently detecting Out-of-Distribution (OOD) samples at test time in supervised and unsupervised learning contexts. While ML models are typically trained under the assumption that training and test data stem from the same distribution, this is often not the case in realistic settings, so reliably detecting distribution shifts is crucial at deployment. We re-formulate the OOD problem through the lens of statistical testing and then discuss conditions that render the OOD problem identifiable in statistical terms. Building on this framework, we study convergence guarantees of an OOD test based on the Wasserstein distance and provide a simple empirical evaluation.
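A minimal one-dimensional version of a Wasserstein-distance OOD test can be written as a two-sample test. The permutation calibration of the decision threshold below is an illustrative choice, not necessarily the paper's procedure:

```python
import numpy as np

def wasserstein_1d(a, b):
    # Empirical 1-Wasserstein distance between two equal-size 1-D samples:
    # the mean absolute difference of the sorted values (order statistics).
    return np.mean(np.abs(np.sort(a) - np.sort(b)))

def ood_test(train, test, n_perm=500, alpha=0.05, seed=0):
    """Permutation test: is W1(train, test) larger than expected under the
    null hypothesis that both samples come from the same distribution?"""
    rng = np.random.default_rng(seed)
    stat = wasserstein_1d(train, test)
    pooled = np.concatenate([train, test])
    null = []
    for _ in range(n_perm):
        rng.shuffle(pooled)
        null.append(wasserstein_1d(pooled[:len(train)], pooled[len(train):]))
    p_value = np.mean(np.array(null) >= stat)
    return stat, p_value, p_value < alpha

rng = np.random.default_rng(1)
in_dist = rng.normal(0, 1, 200)
_, p_same, flag_same = ood_test(in_dist, rng.normal(0, 1, 200))
_, p_shift, flag_shift = ood_test(in_dist, rng.normal(2, 1, 200))
print(flag_same, flag_shift)
```

The shifted sample is flagged because its test statistic lies far in the tail of the permutation null, which is the statistical-testing reading of "reliably detecting distribution shifts".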
[ "['Alberto Caron' 'Chris Hicks' 'Vasilios Mavroudis']" ]
null
null
2405.03056
null
null
http://arxiv.org/pdf/2405.03056v1
2024-05-05T21:30:18Z
2024-05-05T21:30:18Z
Convolutional Learning on Directed Acyclic Graphs
We develop a novel convolutional architecture tailored for learning from data defined over directed acyclic graphs (DAGs). DAGs can be used to model causal relationships among variables, but their nilpotent adjacency matrices pose unique challenges towards developing DAG signal processing and machine learning tools. To address this limitation, we harness recent advances offering alternative definitions of causal shifts and convolutions for signals on DAGs. We develop a novel convolutional graph neural network that integrates learnable DAG filters to account for the partial ordering induced by the graph topology, thus providing valuable inductive bias to learn effective representations of DAG-supported data. We discuss the salient advantages and potential limitations of the proposed DAG convolutional network (DCN) and evaluate its performance on two learning tasks using synthetic data: network diffusion estimation and source identification. DCN compares favorably relative to several baselines, showcasing its promising potential.
[ "['Samuel Rey' 'Hamed Ajorlou' 'Gonzalo Mateos']" ]
null
null
2405.03059
null
null
http://arxiv.org/pdf/2405.03059v1
2024-05-05T21:44:03Z
2024-05-05T21:44:03Z
Active Preference Learning for Ordering Items In- and Out-of-sample
Learning an ordering of items based on noisy pairwise comparisons is useful when item-specific labels are difficult to assign, for example, when annotators have to make subjective assessments. Algorithms have been proposed for actively sampling comparisons of items to minimize the number of annotations necessary for learning an accurate ordering. However, many ignore shared structure between items, treating them as unrelated, limiting sample efficiency and precluding generalization to new items. In this work, we study active learning with pairwise preference feedback for ordering items with contextual attributes, both in- and out-of-sample. We give an upper bound on the expected ordering error incurred by active learning strategies under a logistic preference model, in terms of the aleatoric and epistemic uncertainty in comparisons, and propose two algorithms designed to greedily minimize this bound. We evaluate these algorithms in two realistic image ordering tasks, including one with comparisons made by human annotators, and demonstrate superior sample efficiency compared to non-contextual ranking approaches and active preference learning baselines.
[ "['Herman Bergström' 'Emil Carlsson' 'Devdatt Dubhashi'\n 'Fredrik D. Johansson']" ]
null
null
2405.03060
null
null
http://arxiv.org/pdf/2405.03060v1
2024-05-05T21:49:51Z
2024-05-05T21:49:51Z
Tree-based Ensemble Learning for Out-of-distribution Detection
Being able to determine whether test samples have a distribution similar to that of the training samples is a fundamental question to address before we can safely deploy most machine learning models into practice. In this paper, we propose TOOD detection, a simple yet effective tree-based out-of-distribution (TOOD) detection mechanism to determine if a set of unseen samples will have a distribution similar to that of the training samples. The TOOD detection mechanism is based on computing the pairwise Hamming distances of test samples' tree embeddings, which are obtained by fitting a tree-based ensemble model on in-distribution training samples. Our approach is interpretable and robust due to its tree-based nature. Furthermore, our approach is efficient, flexible across various machine learning tasks, and can be easily generalized to the unsupervised setting. Extensive experiments are conducted to show that the proposed method outperforms other state-of-the-art out-of-distribution detection methods in distinguishing in-distribution from out-of-distribution samples on various tabular, image, and text data.
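The idea of comparing tree embeddings via Hamming distance can be sketched without a full ensemble library. The random decision stumps below are a toy stand-in for a fitted tree ensemble, and the distance-to-training-set score is an illustrative simplification of the paper's mechanism:

```python
import numpy as np

def fit_stumps(X, n_trees=50, seed=0):
    """Toy stand-in for a tree ensemble: each 'tree' is a random feature
    plus a random threshold drawn from that feature's training range."""
    rng = np.random.default_rng(seed)
    feats = rng.integers(0, X.shape[1], n_trees)
    lo, hi = X.min(axis=0), X.max(axis=0)
    thresholds = rng.uniform(lo[feats], hi[feats])
    return feats, thresholds

def tree_embed(X, stumps):
    # Binary 'leaf index' per tree: which side of the split each sample falls on
    feats, thresholds = stumps
    return (X[:, feats] > thresholds).astype(int)

def mean_hamming(emb_test, emb_train):
    # Average Hamming distance from each test embedding to the training embeddings
    return np.array([np.abs(emb_train - e).mean() for e in emb_test])

rng = np.random.default_rng(2)
X_train = rng.normal(0, 1, (200, 5))
stumps = fit_stumps(X_train)
E_train = tree_embed(X_train, stumps)
d_in = mean_hamming(tree_embed(rng.normal(0, 1, (50, 5)), stumps), E_train)
d_out = mean_hamming(tree_embed(rng.normal(4, 1, (50, 5)), stumps), E_train)
print(d_in.mean(), d_out.mean())  # OOD samples sit farther away in embedding space
```

With an actual fitted ensemble the embedding would record the leaf reached in each tree, but the interpretability argument is the same: each coordinate of the embedding corresponds to a human-readable split.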
[ "['Zhaiming Shen' 'Menglun Wang' 'Guang Cheng' 'Ming-Jun Lai' 'Lin Mu'\n 'Ruihao Huang' 'Qi Liu' 'Hao Zhu']" ]
null
null
2405.03063
null
null
http://arxiv.org/pdf/2405.03063v1
2024-05-05T22:05:02Z
2024-05-05T22:05:02Z
Stability of a Generalized Debiased Lasso with Applications to Resampling-Based Variable Selection
Suppose that we first apply the Lasso to a design matrix, and then update one of its columns. In general, the signs of the Lasso coefficients may change, and there is no closed-form expression for updating the Lasso solution exactly. In this work, we propose an approximate formula for updating a debiased Lasso coefficient. We provide general nonasymptotic error bounds in terms of the norms and correlations of a given design matrix's columns, and then prove asymptotic convergence results for the case of a random design matrix with i.i.d. sub-Gaussian row vectors and i.i.d. Gaussian noise. Notably, the approximate formula is asymptotically correct for most coordinates in the proportional growth regime, under the mild assumption that each row of the design matrix is sub-Gaussian with a covariance matrix having a bounded condition number. Our proof only requires certain concentration and anti-concentration properties to control various error terms and the number of sign changes. In contrast, rigorously establishing distributional limit properties (e.g., Gaussian limits for the debiased Lasso) under similarly general assumptions has been considered an open problem in universality theory. As applications, we show that the approximate formula allows us to reduce the computational complexity of variable selection algorithms that require solving multiple Lasso problems, such as the conditional randomization test and a variant of the knockoff filter.
[ "['Jingbo Liu']" ]
null
null
2405.03064
null
null
http://arxiv.org/pdf/2405.03064v3
2024-06-06T00:36:03Z
2024-05-05T22:06:42Z
RICE: Breaking Through the Training Bottlenecks of Reinforcement Learning with Explanation
Deep reinforcement learning (DRL) is playing an increasingly important role in real-world applications. However, obtaining an optimally performing DRL agent for complex tasks, especially with sparse rewards, remains a significant challenge. The training of a DRL agent can often become trapped in a bottleneck without further progress. In this paper, we propose RICE, an innovative refining scheme for reinforcement learning that incorporates explanation methods to break through the training bottlenecks. The high-level idea of RICE is to construct a new initial state distribution that combines both the default initial states and critical states identified through explanation methods, thereby encouraging the agent to explore from the mixed initial states. Through careful design, we can theoretically guarantee that our refining scheme has a tighter sub-optimality bound. We evaluate RICE in various popular RL environments and real-world applications. The results demonstrate that RICE significantly outperforms existing refining schemes in enhancing agent performance.
[ "['Zelei Cheng' 'Xian Wu' 'Jiahao Yu' 'Sabrina Yang' 'Gang Wang'\n 'Xinyu Xing']" ]
null
null
2405.03075
null
null
http://arxiv.org/pdf/2405.03075v1
2024-05-05T22:54:43Z
2024-05-05T22:54:43Z
AnoGAN for Tabular Data: A Novel Approach to Anomaly Detection
Anomaly detection, a critical facet in data analysis, involves identifying patterns that deviate from expected behavior. This research addresses the complexities inherent in anomaly detection, exploring challenges and adapting to sophisticated malicious activities. With applications spanning cybersecurity, healthcare, finance, and surveillance, anomalies often signify critical information or potential threats. Inspired by the success of Anomaly Generative Adversarial Network (AnoGAN) in image domains, our research extends its principles to tabular data. Our contributions include adapting AnoGAN's principles to a new domain and promising advancements in detecting previously undetectable anomalies. This paper delves into the multifaceted nature of anomaly detection, considering the dynamic evolution of normal behavior, context-dependent anomaly definitions, and data-related challenges like noise and imbalances.
[ "['Aditya Singh' 'Pavan Reddy']" ]
null
null
2405.03082
null
null
http://arxiv.org/pdf/2405.03082v2
2024-05-09T04:33:22Z
2024-05-05T23:52:57Z
Finite-Time Convergence and Sample Complexity of Actor-Critic Multi-Objective Reinforcement Learning
Reinforcement learning with multiple, potentially conflicting objectives is pervasive in real-world applications, yet this problem remains theoretically under-explored. This paper tackles the multi-objective reinforcement learning (MORL) problem and introduces an innovative actor-critic algorithm named MOAC which finds a policy by iteratively making trade-offs among conflicting reward signals. Notably, we provide the first analysis of finite-time Pareto-stationary convergence and corresponding sample complexity in both discounted and average reward settings. Our approach has two salient features: (a) MOAC mitigates the cumulative estimation bias resulting from finding an optimal common gradient descent direction out of stochastic samples. This enables provable convergence rate and sample complexity guarantees independent of the number of objectives; (b) With a proper momentum coefficient, MOAC initializes the weights of individual policy gradients using samples from the environment, instead of manual initialization. This enhances the practicality and robustness of our algorithm. Finally, experiments conducted on a real-world dataset validate the effectiveness of our proposed method.
[ "['Tianchen Zhou' 'FNU Hairi' 'Haibo Yang' 'Jia Liu' 'Tian Tong' 'Fan Yang'\n 'Michinari Momma' 'Yan Gao']" ]
null
null
2405.03083
null
null
http://arxiv.org/pdf/2405.03083v2
2024-06-29T21:03:50Z
2024-05-05T23:59:51Z
Causal K-Means Clustering
Causal effects are often characterized with population summaries. These might provide an incomplete picture when there are heterogeneous treatment effects across subgroups. Since the subgroup structure is typically unknown, it is more challenging to identify and evaluate subgroup effects than population effects. We propose a new solution to this problem: Causal k-Means Clustering, which harnesses the widely-used k-means clustering algorithm to uncover the unknown subgroup structure. Our problem differs significantly from the conventional clustering setup since the variables to be clustered are unknown counterfactual functions. We present a plug-in estimator which is simple and readily implementable using off-the-shelf algorithms, and study its rate of convergence. We also develop a new bias-corrected estimator based on nonparametric efficiency theory and double machine learning, and show that this estimator achieves fast root-n rates and asymptotic normality in large nonparametric models. Our proposed methods are especially useful for modern outcome-wide studies with multiple treatment levels. Further, our framework is extensible to clustering with generic pseudo-outcomes, such as partially observed outcomes or otherwise unknown functions. Finally, we explore finite sample properties via simulation, and illustrate the proposed methods in a study of treatment programs for adolescent substance abuse.
[ "['Kwangho Kim' 'Jisu Kim' 'Edward H. Kennedy']" ]
null
null
2405.03084
null
null
http://arxiv.org/pdf/2405.03084v1
2024-05-06T00:18:35Z
2024-05-06T00:18:35Z
Analyzing Emotional Trends from X platform using SenticNet: A Comparative Analysis with Cryptocurrency Price
This study delves into the relationship between emotional trends from X platform data and the market dynamics of the well-known cryptocurrencies Cardano, Binance, Fantom, Matic, and Ripple over the period from October 2022 to March 2023. Leveraging SenticNet, we identified emotions like Fear and Anxiety, Rage and Anger, Grief and Sadness, Delight and Pleasantness, Enthusiasm and Eagerness, and Delight and Joy. Following data extraction, we segmented each month into bi-weekly intervals, replicating this process for price data obtained from Yahoo Finance. Subsequently, a comparative analysis was conducted, establishing connections between emotional trends observed across bi-weekly intervals and cryptocurrency prices, uncovering significant correlations between emotional sentiments and coin valuations.
[ "['Moein Shahiki Tash' 'Zahra Ahani' 'Olga Kolesnikova' 'Grigori Sidorov']" ]
null
null
2405.03089
null
null
http://arxiv.org/pdf/2405.03089v1
2024-05-06T00:58:23Z
2024-05-06T00:58:23Z
Structure-Preserving Network Compression Via Low-Rank Induced Training Through Linear Layers Composition
Deep Neural Networks (DNNs) have achieved remarkable success in addressing many previously unsolvable tasks. However, the storage and computational requirements associated with DNNs pose a challenge for deploying these trained models on resource-limited devices. Therefore, a plethora of compression and pruning techniques have been proposed in recent years. Low-rank decomposition techniques are among the approaches most utilized to address this problem. Compared to post-training compression, compression-promoted training is still under-explored. In this paper, we present a theoretically-justified novel approach, termed Low-Rank Induced Training (LoRITa), that promotes low-rankness through the composition of linear layers and compresses by using singular value truncation. This is achieved without the need to change the structure at inference time or require constrained and/or additional optimization, other than the standard weight decay regularization. Moreover, LoRITa eliminates the need to (i) initialize with pre-trained models and (ii) specify rank selection prior to training. Our experimental results (i) demonstrate the effectiveness of our approach using MNIST on Fully Connected Networks, CIFAR10 on Vision Transformers, and CIFAR10/100 on Convolutional Neural Networks, and (ii) illustrate that we achieve either competitive or SOTA results when compared to leading structured pruning methods in terms of FLOPs and parameters drop.
[ "['Xitong Zhang' 'Ismail R. Alkhouri' 'Rongrong Wang']" ]
null
null
2405.03091
null
null
http://arxiv.org/pdf/2405.03091v1
2024-05-06T01:05:21Z
2024-05-06T01:05:21Z
Research on Image Recognition Technology Based on Multimodal Deep Learning
This project investigates a human multi-modal behavior identification algorithm utilizing deep neural networks. According to the characteristics of different modal information, different deep neural networks are used to adapt to different modal video information. Through the integration of various deep neural networks, the algorithm successfully identifies behaviors across multiple modalities. In this project, multiple Microsoft Kinect cameras were used to collect skeletal point data in addition to conventional images. In this way, the motion features in the images can be extracted. Ultimately, the behavioral characteristics discerned through both approaches are synthesized to facilitate the precise identification and categorization of behaviors. The performance of the suggested algorithm was evaluated using the MSR3D data set. The findings from these experiments indicate that the accuracy in recognizing behaviors remains consistently high, suggesting that the algorithm is reliable in various scenarios. Additionally, the tests demonstrate that the algorithm substantially enhances the accuracy of detecting pedestrian behaviors in video footage.
[ "['Jinyin Wang' 'Xingchen Li' 'Yixuan Jin' 'Yihao Zhong' 'Keke Zhang'\n 'Chang Zhou']" ]
null
null
2405.03092
null
null
http://arxiv.org/abs/2405.03092v1
2024-05-06T01:08:22Z
2024-05-06T01:08:22Z
Bayesian optimization for stable properties amid processing fluctuations in sputter deposition
We introduce a Bayesian optimization approach to guide the sputter deposition of molybdenum thin films, aiming to achieve desired residual stress and sheet resistance while minimizing susceptibility to stochastic fluctuations during deposition. Thin films are pivotal in numerous technologies, including semiconductors and optical devices, where their properties are critical. Sputter deposition parameters, such as deposition power, vacuum chamber pressure, and working distance, influence physical properties like residual stress and resistance. Excessive stress and high resistance can impair device performance, necessitating the selection of optimal process parameters. Furthermore, these parameters should ensure the consistency and reliability of thin film properties, assisting in the reproducibility of the devices. However, exploring the multidimensional design space for process optimization is expensive. Bayesian optimization is ideal for optimizing inputs/parameters of general black-box functions without reliance on gradient information. We utilize Bayesian optimization to optimize deposition power and pressure using a custom-built objective function incorporating observed stress and resistance data. Additionally, we integrate prior knowledge of stress variation with pressure into the objective function to prioritize films least affected by stochastic variations. Our findings demonstrate that Bayesian optimization effectively explores the design space and identifies optimal parameter combinations meeting desired stress and resistance specifications.
[ "['Ankit Shrivastava' 'Matias Kalaswad' 'Joyce O. Custer' 'David P. Adams'\n 'Habib N. Najm']" ]
null
null
2405.03095
null
null
http://arxiv.org/pdf/2405.03095v1
2024-05-06T01:18:36Z
2024-05-06T01:18:36Z
Loss Jump During Loss Switch in Solving PDEs with Neural Networks
Using neural networks to solve partial differential equations (PDEs) is gaining popularity as an alternative approach in the scientific computing community. Neural networks can integrate different types of information into the loss function, including observation data, governing equations, and variational forms. These loss functions can be broadly categorized into two types: observation data loss directly constrains and measures the model output, while the other loss functions constrain the network's performance indirectly and can be classified as model loss. However, this alternative approach lacks a thorough understanding of its underlying mechanisms, including theoretical foundations and rigorous characterization of various phenomena. This work focuses on investigating how different loss functions impact the training of neural networks for solving PDEs. We discover a stable loss-jump phenomenon: when switching the loss function from the data loss to the model loss, which includes different orders of derivative information, the neural network solution significantly deviates from the exact solution immediately. Further experiments reveal that this phenomenon arises from the different frequency preferences of neural networks under different loss functions. We theoretically analyze the frequency preference of neural networks under model loss. This loss-jump phenomenon provides a valuable perspective for examining the underlying mechanisms of neural networks in solving PDEs.
[ "['Zhiwei Wang' 'Lulu Zhang' 'Zhongwang Zhang' 'Zhi-Qin John Xu']" ]
null
null
2405.03097
null
null
http://arxiv.org/pdf/2405.03097v1
2024-05-06T01:21:50Z
2024-05-06T01:21:50Z
To Each (Textual Sequence) Its Own: Improving Memorized-Data Unlearning in Large Language Models
LLMs have been found to memorize textual sequences from their training data and regurgitate them verbatim at text generation time. This fact is known to be the cause of privacy and related (e.g., copyright) problems. Unlearning in LLMs then takes the form of devising new algorithms that will properly deal with these side-effects of memorized data, while not hurting the model's utility. We offer a fresh perspective towards this goal, namely, that each textual sequence to be forgotten should be treated differently when being unlearned based on its degree of memorization within the LLM. We contribute a new metric for measuring unlearning quality, an adversarial attack showing that SOTA algorithms lacking this perspective fail for privacy, and two new unlearning methods based on Gradient Ascent and Task Arithmetic, respectively. A comprehensive performance evaluation across an extensive suite of NLP tasks then maps the solution space, identifying the best solutions under different scales of model capacity and forget-set size, and quantifying the gains of the new approaches.
[ "['George-Octavian Barbulescu' 'Peter Triantafillou']" ]
null
null
2405.03103
null
null
http://arxiv.org/pdf/2405.03103v2
2024-06-10T23:41:18Z
2024-05-06T01:39:59Z
Learning from Students: Applying t-Distributions to Explore Accurate and Efficient Formats for LLMs
The increasing size of large language models (LLMs) traditionally requires low-precision integer formats to meet strict latency and power demands. Yet recently, alternative formats such as Normal Float (NF4) have increased model accuracy at the cost of increased chip area. In this work, we first conduct a large-scale analysis of LLM weights and activations across 30 networks and conclude that most distributions follow a Student's t-distribution. We then derive a new theoretically optimal format, Student Float (SF4), that improves over NF4 across modern LLMs, for example increasing the average accuracy on LLaMA2-7B by 0.76% across tasks. Using this format as a high-accuracy reference, we then propose augmenting E2M1 with two variants of supernormal support for higher model accuracy. Finally, we explore the quality and efficiency frontier across 11 datatypes by evaluating their model accuracy and hardware complexity. We discover a Pareto curve composed of INT4, E2M1, and E2M1 with supernormal support, which offers a continuous tradeoff between model accuracy and chip area. For example, E2M1 with supernormal support increases the accuracy of Phi-2 by up to 2.19% with 1.22% area overhead, enabling more LLM-based applications to be run at four bits. The supporting code is hosted at https://github.com/cornell-zhang/llm-datatypes.
[ "['Jordan Dotzel' 'Yuzong Chen' 'Bahaa Kotb' 'Sushma Prasad' 'Gang Wu'\n 'Sheng Li' 'Mohamed S. Abdelfattah' 'Zhiru Zhang']" ]