Schema: categories (string) | doi (string) | id (string) | year (float64) | venue (string) | link (string) | updated (string) | published (string) | title (string) | abstract (string) | authors (list)
id: 2402.10028 | categories: null | doi: null | year: null | venue: null
link: http://arxiv.org/pdf/2402.10028v1
updated: 2024-02-15T15:48:55Z | published: 2024-02-15T15:48:55Z
Diffusion Models Meet Contextual Bandits with Large Action Spaces
Efficient exploration is a key challenge in contextual bandits due to the large size of their action space, where uninformed exploration can result in computational and statistical inefficiencies. Fortunately, the rewards of actions are often correlated and this can be leveraged to explore them efficiently. In this work, we capture such correlations using pre-trained diffusion models; upon which we design diffusion Thompson sampling (dTS). Both theoretical and algorithmic foundations are developed for dTS, and empirical evaluation also shows its favorable performance.
authors: ['Imad Aouali']

id: 2402.10036 | categories: null | doi: null | year: null | venue: null
link: http://arxiv.org/pdf/2402.10036v3
updated: 2024-06-13T13:04:41Z | published: 2024-02-15T15:59:59Z
Predictive Linear Online Tracking for Unknown Targets
In this paper, we study the problem of online tracking in linear control systems, where the objective is to follow a moving target. Unlike classical tracking control, the target is unknown, non-stationary, and its state is revealed sequentially, thus, fitting the framework of online non-stochastic control. We consider the case of quadratic costs and propose a new algorithm, called predictive linear online tracking (PLOT). The algorithm uses recursive least squares with exponential forgetting to learn a time-varying dynamic model of the target. The learned model is used in the optimal policy under the framework of receding horizon control. We show the dynamic regret of PLOT scales with $\mathcal{O}(\sqrt{T V_T})$, where $V_T$ is the total variation of the target dynamics and $T$ is the time horizon. Unlike prior work, our theoretical results hold for non-stationary targets. We implement PLOT on a real quadrotor and provide open-source software, thus, showcasing one of the first successful applications of online control methods on real hardware.
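
A minimal sketch of the model-learning step described in the abstract above: recursive least squares with exponential forgetting for a time-varying linear model of the target. The forgetting factor, state dimension, and toy data are illustrative assumptions, not values from the paper.

```python
import numpy as np

class ForgettingRLS:
    """Recursive least squares with exponential forgetting.

    Estimates a time-varying linear map x_{t+1} ~= A_t x_t by discounting
    old observations with a forgetting factor lam in (0, 1].
    """

    def __init__(self, dim, lam=0.95, init_cov=1e3):
        self.lam = lam
        self.A = np.zeros((dim, dim))    # current estimate of the target dynamics
        self.P = init_cov * np.eye(dim)  # inverse "information" matrix

    def update(self, x, x_next):
        x = x.reshape(-1, 1)
        # Gain vector for this observation
        k = self.P @ x / (self.lam + x.T @ self.P @ x)
        # Prediction error on each output coordinate
        err = x_next.reshape(-1, 1) - self.A @ x
        # Rank-one updates of the estimate and the covariance
        self.A += err @ k.T
        self.P = (self.P - k @ x.T @ self.P) / self.lam
        return self.A

# Toy usage: track a slowly rotating 2-D target
rng = np.random.default_rng(0)
theta = 0.05
A_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
est = ForgettingRLS(dim=2)
x = np.array([1.0, 0.0])
for _ in range(200):
    x_next = A_true @ x + 0.01 * rng.standard_normal(2)
    est.update(x, x_next)
    x = x_next
print(np.round(est.A, 3))  # close to A_true
```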
authors: ['Anastasios Tsiamis', 'Aren Karapetyan', 'Yueshan Li', 'Efe C. Balta', 'John Lygeros']

id: 2402.10038 | categories: null | doi: null | year: null | venue: null
link: http://arxiv.org/pdf/2402.10038v2
updated: 2024-03-30T16:10:47Z | published: 2024-02-15T16:00:58Z
RS-DPO: A Hybrid Rejection Sampling and Direct Preference Optimization Method for Alignment of Large Language Models
Reinforcement learning from human feedback (RLHF) has been extensively employed to align large language models with user intent. However, proximal policy optimization (PPO) based RLHF is occasionally unstable, requires significant hyperparameter finetuning, and is computationally expensive to maximize the estimated reward during alignment. Recently, direct preference optimization (DPO) has been proposed to address those challenges. However, DPO relies on contrastive responses generated by human annotators and an alternative LLM, instead of the policy model, limiting the effectiveness of RLHF. In this paper, we address both challenges by systematically combining rejection sampling (RS) and DPO. Our proposed method, RS-DPO, starts with the development of a supervised fine-tuned policy model (SFT). A varied set of k responses per prompt is sampled directly from the SFT model. RS-DPO identifies pairs of contrastive samples based on their reward distribution. Finally, we apply DPO with the contrastive samples to align the model to human preference. Our experiments indicate that our proposed method effectively fine-tunes LLMs in limited-resource environments, leading to improved alignment with user intent. Furthermore, it outperforms existing methods, including RS, PPO, and DPO.
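
A hedged sketch of the pair-selection step described above: given k responses sampled from the SFT model and their reward-model scores, keep (chosen, rejected) pairs whose reward gap is large enough for DPO training. The function name and gap threshold are illustrative assumptions, not the authors' code.

```python
from itertools import combinations

def select_contrastive_pairs(responses, rewards, gap_threshold=1.0):
    """Pick (chosen, rejected) pairs whose reward gap exceeds a threshold.

    responses: list of k strings sampled from the SFT policy for one prompt.
    rewards:   list of k scalar scores from a reward model.
    Returns a list of (chosen, rejected) response pairs for DPO training.
    """
    pairs = []
    for i, j in combinations(range(len(responses)), 2):
        hi, lo = (i, j) if rewards[i] >= rewards[j] else (j, i)
        if rewards[hi] - rewards[lo] >= gap_threshold:
            pairs.append((responses[hi], responses[lo]))
    return pairs

# Illustrative usage with made-up reward scores
resp = ["answer A", "answer B", "answer C", "answer D"]
scores = [2.3, 0.1, 1.9, -0.5]
print(select_contrastive_pairs(resp, scores, gap_threshold=1.5))
```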
authors: ['Saeed Khaki', 'JinJin Li', 'Lan Ma', 'Liu Yang', 'Prathap Ramachandra']

id: 2402.10043 | categories: null | doi: null | year: null | venue: null
link: http://arxiv.org/pdf/2402.10043v4
updated: 2024-06-05T14:25:23Z | published: 2024-02-15T16:05:35Z
Negative impact of heavy-tailed uncertainty and error distributions on the reliability of calibration statistics for machine learning regression tasks
Average calibration of the (variance-based) prediction uncertainties of machine learning regression tasks can be tested in two ways: one is to estimate the calibration error (CE) as the difference between the mean squared error (MSE) and the mean variance (MV); the alternative is to compare the mean squared z-scores (ZMS) to 1. The problem is that both approaches might lead to different conclusions, as illustrated in this study for an ensemble of datasets from the recent machine learning uncertainty quantification (ML-UQ) literature. It is shown that the estimation of MV, MSE and their confidence intervals becomes unreliable for heavy-tailed uncertainty and error distributions, which seems to be a frequent feature of ML-UQ datasets. By contrast, the ZMS statistic is less sensitive and offers the most reliable approach in this context. Unfortunately, the same problem is expected to also affect conditional calibration statistics, such as the popular ENCE, and very likely post-hoc calibration methods based on similar statistics. Several solutions to circumvent the outlined problems are proposed.
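
A minimal sketch of the two average-calibration checks mentioned in the abstract above: CE as the difference between MSE and MV, and the mean squared z-score (ZMS) compared to 1. The synthetic data below is purely illustrative.

```python
import numpy as np

def average_calibration_stats(errors, variances):
    """Two average-calibration diagnostics for variance-based uncertainties.

    errors:    array of prediction errors (y_pred - y_true).
    variances: array of predicted variances for the same points.
    """
    mse = np.mean(errors ** 2)               # mean squared error
    mv = np.mean(variances)                  # mean predicted variance
    ce = mse - mv                            # calibration error; ~0 when calibrated
    zms = np.mean(errors ** 2 / variances)   # mean squared z-score; ~1 when calibrated
    return ce, zms

rng = np.random.default_rng(1)
v = rng.gamma(shape=2.0, scale=0.5, size=10_000)   # predicted variances
e = rng.standard_normal(10_000) * np.sqrt(v)       # errors consistent with v
print(average_calibration_stats(e, v))             # CE ~ 0, ZMS ~ 1
```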
authors: ['Pascal Pernot']

id: 2402.10045 | categories: null | doi: null | year: null | venue: null
link: http://arxiv.org/pdf/2402.10045v3
updated: 2024-03-21T21:37:50Z | published: 2024-01-11T03:36:47Z
Short-Form Videos and Mental Health: A Knowledge-Guided Neural Topic Model
While short-form videos are reshaping the entire social media landscape, experts are exceedingly worried about their depressive impacts on viewers, as evidenced by medical studies. To prevent widespread consequences, platforms are eager to predict these videos' impact on viewers' mental health. Subsequently, they can take intervention measures, such as revising recommendation algorithms and displaying viewer discretion warnings. Nevertheless, applicable predictive methods lack relevance to well-established medical knowledge, which outlines clinically proven external and environmental factors of depression. To account for such medical knowledge, we resort to an emergent methodological discipline, seeded Neural Topic Models (NTMs). However, existing seeded NTMs suffer from the limitations of single-origin topics, unknown topic sources, unclear seed supervision, and suboptimal convergence. To address those challenges, we develop a novel Knowledge-guided Multimodal NTM to predict a short-form video's depressive impact on viewers. Extensive empirical analyses using TikTok and Douyin datasets prove that our method outperforms state-of-the-art benchmarks. Our method also discovers medically relevant topics from videos that are linked to depressive impact. We contribute to IS with a novel video analytics method that is generalizable to other video classification problems. Practically, our method can help platforms understand videos' mental impacts, thus adjusting recommendations and video topic disclosure.
authors: ['Jiaheng Xie', 'Ruicheng Liang', 'Yidong Chai', 'Yang Liu', 'Daniel Zeng']

id: 2402.10046 | categories: null | doi: null | year: null | venue: null
link: http://arxiv.org/pdf/2402.10046v2
updated: 2024-06-03T16:14:51Z | published: 2024-02-15T16:07:56Z
How Flawed Is ECE? An Analysis via Logit Smoothing
Informally, a model is calibrated if its predictions are correct with a probability that matches the confidence of the prediction. By far the most common method in the literature for measuring calibration is the expected calibration error (ECE). Recent work, however, has pointed out drawbacks of ECE, such as the fact that it is discontinuous in the space of predictors. In this work, we ask: how fundamental are these issues, and what are their impacts on existing results? Towards this end, we completely characterize the discontinuities of ECE with respect to general probability measures on Polish spaces. We then use the nature of these discontinuities to motivate a novel continuous, easily estimated miscalibration metric, which we term Logit-Smoothed ECE (LS-ECE). By comparing the ECE and LS-ECE of pre-trained image classification models, we show in initial experiments that binned ECE closely tracks LS-ECE, indicating that the theoretical pathologies of ECE may be avoidable in practice.
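
A minimal sketch of the standard binned ECE estimator discussed in the abstract above (LS-ECE itself involves logit smoothing and is not reproduced here). Bin count and synthetic data are illustrative assumptions.

```python
import numpy as np

def binned_ece(confidences, correct, n_bins=15):
    """Standard binned expected calibration error.

    confidences: array of top-class predicted probabilities in [0, 1].
    correct:     boolean array, True where the top-class prediction was right.
    """
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    n = len(confidences)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            # Weighted gap between average confidence and accuracy in this bin
            gap = abs(confidences[mask].mean() - correct[mask].mean())
            ece += mask.sum() / n * gap
    return ece

rng = np.random.default_rng(0)
conf = rng.uniform(0.5, 1.0, 5_000)
corr = rng.uniform(size=5_000) < conf   # perfectly calibrated by construction
print(binned_ece(conf, corr))           # small value
```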
authors: ['Muthu Chidambaram', 'Holden Lee', 'Colin McSwiggen', 'Semon Rezchikov']

id: 2402.10062 | categories: null | doi: null | year: null | venue: null
link: http://arxiv.org/pdf/2402.10062v1
updated: 2024-02-04T07:31:06Z | published: 2024-02-04T07:31:06Z
Optimal Parameter and Neuron Pruning for Out-of-Distribution Detection
For a machine learning model deployed in real world scenarios, the ability of detecting out-of-distribution (OOD) samples is indispensable and challenging. Most existing OOD detection methods have focused on exploring advanced training skills or training-free tricks to prevent the model from yielding overconfident confidence scores for unknown samples. The training-based methods require expensive training cost and rely on OOD samples which are not always available, while most training-free methods cannot efficiently utilize the prior information from the training data. In this work, we propose an \textbf{O}ptimal \textbf{P}arameter and \textbf{N}euron \textbf{P}runing (\textbf{OPNP}) approach, which aims to identify and remove those parameters and neurons that lead to over-fitting. The main method is divided into two steps. In the first step, we evaluate the sensitivity of the model parameters and neurons by averaging gradients over all training samples. In the second step, the parameters and neurons with exceptionally large or close to zero sensitivities are removed for prediction. Our proposal is training-free, compatible with other post-hoc methods, and explores the information from all training data. Extensive experiments are performed on multiple OOD detection tasks and model architectures, showing that our proposed OPNP consistently outperforms the existing methods by a large margin.
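
A hedged sketch of the two steps described above, restricted to the parameter-level part: average gradients over the training set to get per-parameter sensitivities, then zero out parameters whose sensitivity is near zero or exceptionally large. The quantile thresholds and data-loader interface are illustrative assumptions, not the authors' implementation.

```python
import torch

def opnp_style_prune(model, loader, loss_fn, low_q=0.05, high_q=0.995):
    """Zero out parameters with near-zero or exceptionally large average-gradient sensitivity."""
    # Step 1: accumulate gradients averaged over the training set.
    grads = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    n_batches = 0
    for x, y in loader:                     # loader yields (input, label) batches
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                grads[n] += p.grad.detach()
        n_batches += 1
    # Step 2: keep only parameters whose sensitivity lies between the two quantiles.
    with torch.no_grad():
        for n, p in model.named_parameters():
            s = (grads[n] / max(n_batches, 1)).abs()
            lo, hi = torch.quantile(s.flatten(), torch.tensor([low_q, high_q]))
            p.mul_(((s > lo) & (s < hi)).float())
    return model
```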
authors: ['Chao Chen', 'Zhihang Fu', 'Kai Liu', 'Ze Chen', 'Mingyuan Tao', 'Jieping Ye']

id: 2402.10063 | categories: null | doi: null | year: null | venue: null
link: http://arxiv.org/pdf/2402.10063v1
updated: 2024-02-15T16:30:45Z | published: 2024-02-15T16:30:45Z
Balancing the Causal Effects in Class-Incremental Learning
Class-Incremental Learning (CIL) is a practical and challenging problem for achieving general artificial intelligence. Recently, Pre-Trained Models (PTMs) have led to breakthroughs in both visual and natural language processing tasks. Despite recent studies showing PTMs' potential ability to learn sequentially, a plethora of work indicates the necessity of alleviating the catastrophic forgetting of PTMs. Through a pilot study and a causal analysis of CIL, we reveal that the crux lies in the imbalanced causal effects between new and old data. Specifically, the new data encourages models to adapt to new classes while hindering the adaptation of old classes. Similarly, the old data encourages models to adapt to old classes while hindering the adaptation of new classes. In other words, the adaptation process between new and old classes conflicts from the causal perspective. To alleviate this problem, we propose Balancing the Causal Effects (BaCE) in CIL. Concretely, BaCE proposes two objectives for building causal paths from both new and old data to the predictions of new and old classes, respectively. In this way, the model is encouraged to adapt to all classes with causal effects from both new and old data and thus alleviates the causal imbalance problem. We conduct extensive experiments on continual image classification, continual text classification, and continual named entity recognition. Empirical results show that BaCE outperforms a series of CIL methods on different tasks and settings.
authors: ['Junhao Zheng', 'Ruiyan Wang', 'Chongzhi Zhang', 'Huawen Feng', 'Qianli Ma']

id: 2402.10064 | categories: null | doi: null | year: null | venue: null
link: http://arxiv.org/pdf/2402.10064v1
updated: 2024-01-22T12:17:27Z | published: 2024-01-22T12:17:27Z
Navigating the Maize: Cyclic and conditional computational graphs for molecular simulation
Many computational chemistry and molecular simulation workflows can be expressed as graphs. This abstraction is useful to modularize and potentially reuse existing components, as well as provide parallelization and ease reproducibility. Existing tools represent the computation as a directed acyclic graph (DAG), thus allowing efficient execution by parallelization of concurrent branches. These systems can, however, generally not express cyclic and conditional workflows. We therefore developed Maize, a workflow manager for cyclic and conditional graphs based on the principles of flow-based programming. By running each node of the graph concurrently in separate processes and allowing communication at any time through dedicated inter-node channels, arbitrary graph structures can be executed. We demonstrate the effectiveness of the tool on a dynamic active learning task in computational drug design, involving the use of a small molecule generative model and an associated scoring system.
authors: ['Thomas Löhr', 'Michael Dodds', 'Lili Cao', 'Mikhail Kabeshov', 'Michele Assante', 'Jon-Paul Janet', 'Marco Klähn', 'Ola Engkvist']

id: 2402.10065 | categories: null | doi: null | year: null | venue: null
link: http://arxiv.org/pdf/2402.10065v1
updated: 2024-02-15T16:30:55Z | published: 2024-02-15T16:30:55Z
How Much Does Each Datapoint Leak Your Privacy? Quantifying the Per-datum Membership Leakage
We study the per-datum Membership Inference Attacks (MIAs), where an attacker aims to infer whether a fixed target datum has been included in the input dataset of an algorithm and thus, violates privacy. First, we define the membership leakage of a datum as the advantage of the optimal adversary targeting to identify it. Then, we quantify the per-datum membership leakage for the empirical mean, and show that it depends on the Mahalanobis distance between the target datum and the data-generating distribution. We further assess the effect of two privacy defences, i.e. adding Gaussian noise and sub-sampling. We quantify exactly how both of them decrease the per-datum membership leakage. Our analysis builds on a novel proof technique that combines an Edgeworth expansion of the likelihood ratio test and a Lindeberg-Feller central limit theorem. Our analysis connects the existing likelihood ratio and scalar product attacks, and also justifies different canary selection strategies used in the privacy auditing literature. Finally, our experiments demonstrate the impacts of the leakage score, the sub-sampling ratio and the noise scale on the per-datum membership leakage as indicated by the theory.
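
The abstract above ties per-datum leakage for the empirical mean to the Mahalanobis distance between the target datum and the data-generating distribution. A small sketch of that distance is below; the paper's exact leakage formula is not reproduced here, and the data is illustrative.

```python
import numpy as np

def mahalanobis_distance(z, X, eps=1e-6):
    """Mahalanobis distance of a target datum z from the empirical distribution of X."""
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False) + eps * np.eye(X.shape[1])  # regularize for stability
    diff = z - mu
    return float(np.sqrt(diff @ np.linalg.solve(cov, diff)))

rng = np.random.default_rng(0)
X = rng.multivariate_normal(mean=[0, 0], cov=[[1, 0.3], [0.3, 2]], size=5_000)
print(mahalanobis_distance(np.array([0.1, -0.2]), X))  # typical datum: small distance
print(mahalanobis_distance(np.array([6.0, 9.0]), X))   # atypical datum: large distance, easier to identify
```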
authors: ['Achraf Azize', 'Debabrota Basu']

id: 2402.10066 | categories: null | doi: null | year: null | venue: null
link: http://arxiv.org/pdf/2402.10066v1
updated: 2024-02-15T16:31:54Z | published: 2024-02-15T16:31:54Z
NYCTALE: Neuro-Evidence Transformer for Adaptive and Personalized Lung Nodule Invasiveness Prediction
Drawing inspiration from the primate brain's intriguing evidence accumulation process, and guided by models from cognitive psychology and neuroscience, the paper introduces the NYCTALE framework, a neuro-inspired and evidence accumulation-based Transformer architecture. The proposed neuro-inspired NYCTALE offers a novel pathway in the domain of Personalized Medicine (PM) for lung cancer diagnosis. In nature, Nyctales are small owls known for their nocturnal behavior, hunting primarily during the darkness of night. The NYCTALE operates in a similarly vigilant manner, i.e., processing data in an evidence-based fashion and making predictions dynamically/adaptively. Distinct from conventional Computed Tomography (CT)-based Deep Learning (DL) models, the NYCTALE performs predictions only when a sufficient amount of evidence is accumulated. In other words, instead of processing all or a pre-defined subset of CT slices, for each person, slices are provided one at a time. The NYCTALE framework then computes an evidence vector associated with the contribution of each new CT image. A decision is made once the total accumulated evidence surpasses a specific threshold. Preliminary experimental analyses were conducted using a challenging in-house dataset comprising 114 subjects. The results are noteworthy, suggesting that NYCTALE outperforms the benchmark accuracy even with approximately 60% less training data on this demanding and small dataset.
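
A hedged sketch of the evidence-accumulation loop described above: feed CT slices one at a time, accumulate a per-class evidence vector, and emit a prediction once the accumulated evidence crosses a threshold. The evidence function, threshold, and toy "network" are placeholders, not the NYCTALE model.

```python
import numpy as np

def accumulate_and_predict(slices, evidence_fn, threshold=5.0):
    """Process slices one at a time; stop once total evidence crosses a threshold.

    slices:      iterable of CT slices (e.g., 2-D arrays) for one subject.
    evidence_fn: callable mapping a slice to a per-class evidence vector.
    Returns (predicted_class, n_slices_used), or (None, n) if evidence never suffices.
    """
    total = None
    n = 0
    for n, s in enumerate(slices, start=1):
        e = np.asarray(evidence_fn(s), dtype=float)
        total = e if total is None else total + e
        if total.max() >= threshold:
            return int(total.argmax()), n
    return None, n

# Toy usage: a fake per-slice "network" that weakly favors class 1
rng = np.random.default_rng(0)
fake_net = lambda s: rng.normal(loc=[0.1, 0.4], scale=0.2)
print(accumulate_and_predict([None] * 50, fake_net))
```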
authors: ['Sadaf Khademi', 'Anastasia Oikonomou', 'Konstantinos N. Plataniotis', 'Arash Mohammadi']

id: 2402.10067 | categories: null | doi: null | year: null | venue: null
link: http://arxiv.org/abs/2402.10067v1
updated: 2024-01-22T15:37:04Z | published: 2024-01-22T15:37:04Z
LLM-based policy generation for intent-based management of applications
Automated management requires decomposing high-level user requests, such as intents, to an abstraction that the system can understand and execute. This is challenging because even a simple intent requires performing a number of ordered steps. And the task of identifying and adapting these steps (as conditions change) requires a decomposition approach that cannot be exactly pre-defined beforehand. To tackle these challenges and support automated intent decomposition and execution, we explore the few-shot capability of Large Language Models (LLMs). We propose a pipeline that progressively decomposes intents by generating the required actions using a policy-based abstraction. This allows us to automate the policy execution by creating a closed control loop for the intent deployment. To do so, we generate and map the policies to APIs and form application management loops that perform the necessary monitoring, analysis, planning and execution. We evaluate our proposal with a use-case to fulfill and assure an application service chain of virtual network functions. Using our approach, we can generalize and generate the necessary steps to realize intents, thereby enabling intent automation for application management.
authors: ['Kristina Dzeparoska', 'Jieyu Lin', 'Ali Tizghadam', 'Alberto Leon-Garcia']

id: 2402.10069 | categories: null | doi: null | year: null | venue: null
link: http://arxiv.org/pdf/2402.10069v2
updated: 2024-04-09T15:47:15Z | published: 2024-01-25T12:03:10Z
Learning fast changing slow in spiking neural networks
Reinforcement learning (RL) faces substantial challenges when applied to real-life problems, primarily stemming from the scarcity of available data due to limited interactions with the environment. This limitation is exacerbated by the fact that RL often demands a considerable volume of data for effective learning. The complexity escalates further when implementing RL in recurrent spiking networks, where inherent noise introduced by spikes adds a layer of difficulty. Life-long learning machines must inherently resolve the plasticity-stability paradox. Striking a balance between acquiring new knowledge and maintaining stability is crucial for artificial agents. To address this challenge, we draw inspiration from machine learning technology and introduce a biologically plausible implementation of proximal policy optimization, referred to as lf-cs (learning fast changing slow). Our approach results in two notable advancements: firstly, the capacity to assimilate new information into a new policy without requiring alterations to the current policy; and secondly, the capability to replay experiences without experiencing policy divergence. Furthermore, when contrasted with other experience replay (ER) techniques, our method demonstrates the added advantage of being computationally efficient in an online setting. We demonstrate that the proposed methodology enhances the efficiency of learning, showcasing its potential impact on neuromorphic and real-world applications.
authors: ['Cristiano Capone', 'Paolo Muratore']

id: 2402.10072 | categories: null | doi: null | year: null | venue: null
link: http://arxiv.org/pdf/2402.10072v1
updated: 2024-01-26T04:55:37Z | published: 2024-01-26T04:55:37Z
Deep Joint Source-Channel Coding for Efficient and Reliable Cross-Technology Communication
Cross-technology communication (CTC) is a promising technique that enables direct communications among incompatible wireless technologies without needing hardware modification. However, it has not been widely adopted in real-world applications due to its inefficiency and unreliability. To address this issue, this paper proposes a deep joint source-channel coding (DJSCC) scheme to enable efficient and reliable CTC. The proposed scheme builds a neural-network-based encoder and decoder at the sender side and the receiver side, respectively, to achieve two critical tasks simultaneously: 1) compressing the messages to the point where only their essential semantic meanings are preserved; 2) ensuring the robustness of the semantic meanings when they are transmitted across incompatible technologies. The scheme incorporates existing CTC coding algorithms as domain knowledge to guide the encoder-decoder pair to learn the characteristics of CTC links better. Moreover, the scheme constructs shared semantic knowledge for the encoder and decoder, allowing semantic meanings to be converted into very few bits for cross-technology transmissions, thus further improving the efficiency of CTC. Extensive simulations verify that the proposed scheme can reduce the transmission overhead by up to 97.63% and increase the structural similarity index measure by up to 734.78%, compared with the state-of-the-art CTC scheme.
authors: ['Shumin Yao', 'Xiaodong Xu', 'Hao Chen', 'Yaping Sun', 'Qinglin Zhao']

id: 2402.10074 | categories: null | doi: null | year: null | venue: null
link: http://arxiv.org/pdf/2402.10074v3
updated: 2024-05-07T12:42:02Z | published: 2024-02-15T16:37:14Z
Class-Balanced and Reinforced Active Learning on Graphs
Graph neural networks (GNNs) have demonstrated significant success in various applications, such as node classification, link prediction, and graph classification. Active learning for GNNs aims to query the valuable samples from the unlabeled data for annotation to maximize the GNNs' performance at a lower cost. However, most existing algorithms for reinforced active learning in GNNs may lead to a highly imbalanced class distribution, especially in highly skewed class scenarios. GNNs trained with class-imbalanced labeled data are susceptible to bias toward majority classes, and the lower performance of minority classes may lead to a decline in overall performance. To tackle this issue, we propose a novel class-balanced and reinforced active learning framework for GNNs, namely, GCBR. It learns an optimal policy to acquire class-balanced and informative nodes for annotation, maximizing the performance of GNNs trained with selected labeled nodes. GCBR designs class-balance-aware states, as well as a reward function that achieves trade-off between model performance and class balance. The reinforcement learning algorithm Advantage Actor-Critic (A2C) is employed to learn an optimal policy stably and efficiently. We further upgrade GCBR to GCBR++ by introducing a punishment mechanism to obtain a more class-balanced labeled set. Extensive experiments on multiple datasets demonstrate the effectiveness of the proposed approaches, achieving superior performance over state-of-the-art baselines.
authors: ['Chengcheng Yu', 'Jiapeng Zhu', 'Xiang Li']

id: 2402.10076 | categories: null | doi: null | year: null | venue: null
link: http://arxiv.org/pdf/2402.10076v1
updated: 2024-02-15T16:38:41Z | published: 2024-02-15T16:38:41Z
QUICK: Quantization-aware Interleaving and Conflict-free Kernel for efficient LLM inference
We introduce QUICK, a group of novel optimized CUDA kernels for the efficient inference of quantized Large Language Models (LLMs). QUICK addresses the shared memory bank-conflict problem of state-of-the-art mixed precision matrix multiplication kernels. Our method interleaves the quantized weight matrices of LLMs offline to skip the shared memory write-back after the dequantization. We demonstrate up to 1.91x speedup over existing kernels of AutoAWQ on larger batches and up to 1.94x throughput gain on representative LLM models on various NVIDIA GPU devices.
authors: ['Taesu Kim', 'Jongho Lee', 'Daehyun Ahn', 'Sarang Kim', 'Jiwoong Choi', 'Minkyu Kim', 'Hyungjun Kim']

id: 2402.10077 | categories: null | doi: null | year: null | venue: null
link: http://arxiv.org/pdf/2402.10077v1
updated: 2024-01-28T18:24:34Z | published: 2024-01-28T18:24:34Z
Towards a large-scale fused and labeled dataset of human pose while interacting with robots in shared urban areas
Over the last decade, Autonomous Delivery Robots (ADRs) have transformed conventional delivery methods, responding to the growing e-commerce demand. However, the readiness of ADRs to navigate safely among pedestrians in shared urban areas remains an open question. We contend that there are crucial research gaps in understanding their interactions with pedestrians in such environments. Human Pose Estimation is a vital stepping stone for various downstream applications, including pose prediction and socially aware robot path-planning. Yet, the absence of an enriched and pose-labeled dataset capturing human-robot interactions in shared urban areas hinders this objective. In this paper, we bridge this gap by repurposing, fusing, and labeling two datasets, MOT17 and NCLT, focused on pedestrian tracking and Simultaneous Localization and Mapping (SLAM), respectively. The resulting unique dataset represents thousands of real-world indoor and outdoor human-robot interaction scenarios. Leveraging YOLOv7, we obtained human pose visual and numeric outputs and provided ground truth poses using manual annotation. To overcome the distance bias present in the traditional MPJPE metric, this study introduces a novel human pose estimation error metric called Mean Scaled Joint Error (MSJE) by incorporating bounding box dimensions into it. Findings demonstrate that YOLOv7 effectively estimates human pose in both datasets. However, it exhibits weaker performance in specific scenarios, like indoor, crowded scenes with a focused light source, where both MPJPE and MSJE are recorded as 10.89 and 25.3, respectively. In contrast, YOLOv7 performs better in single-person estimation (NCLT seq 2) and outdoor scenarios (MOT17 seq1), achieving MSJE values of 5.29 and 3.38, respectively.
authors: ['E. Sherafat', 'B. Farooq']

id: 2402.10078 | categories: null | doi: null | year: null | venue: null
link: http://arxiv.org/pdf/2402.10078v1
updated: 2024-01-28T19:42:05Z | published: 2024-01-28T19:42:05Z
EventF2S: Asynchronous and Sparse Spiking AER Framework using Neuromorphic-Friendly Algorithm
Bio-inspired Address Event Representation (AER) sensors have attracted significant popularity owing to their low power consumption, high sparsity, and high temporal resolution. Spiking Neural Network (SNN) has become the inherent choice for AER data processing. However, the integration of the AER-SNN paradigm has not adequately explored asynchronous processing, neuromorphic compatibility, and sparse spiking, which are the key requirements of resource-constrained applications. To address this gap, we introduce a brain-inspired AER-SNN object recognition solution, which includes a data encoder integrated with a First-To-Spike recognition network. Being fascinated by the functionality of neurons in the visual cortex, we designed the solution to be asynchronous and compatible with neuromorphic hardware. Furthermore, we have adapted the principle of denoising and First-To-Spike coding to achieve optimal spike signaling, significantly reducing computation costs. Experimental evaluation has demonstrated that the proposed method incurs significantly less computation cost to achieve state-of-the-art competitive accuracy. Overall, the proposed solution offers an asynchronous and cost-effective AER recognition system that harnesses the full potential of AER sensors.
authors: ['Lakshmi Annamalai', 'Chetan Singh Thakur']

id: 2402.10079 | categories: null | doi: null | year: null | venue: null
link: http://arxiv.org/pdf/2402.10079v1
updated: 2024-01-29T16:56:17Z | published: 2024-01-29T16:56:17Z
Review of the Learning-based Camera and Lidar Simulation Methods for Autonomous Driving Systems
Perception sensors, particularly camera and Lidar, are key elements of Autonomous Driving Systems (ADS) that enable them to comprehend their surroundings for informed driving and control decisions. Therefore, developing realistic camera and Lidar simulation methods, also known as camera and Lidar models, is of paramount importance to effectively conduct simulation-based testing for ADS. Moreover, the rise of deep learning-based perception models has propelled the prevalence of perception sensor models as valuable tools for synthesising diverse training datasets. The traditional sensor simulation methods rely on computationally expensive physics-based algorithms, specifically in complex systems such as ADS. Hence, the current potential resides in learning-based models, driven by the success of deep generative models in synthesising high-dimensional data. This paper reviews the current state-of-the-art in learning-based sensor simulation methods and validation approaches, focusing on two main types of perception sensors: cameras and Lidars. This review covers two categories of learning-based approaches, namely raw-data-based and object-based models. Raw-data-based methods are explained concerning the employed learning strategy, while object-based models are categorised based on the type of error considered. Finally, the paper illustrates commonly used validation techniques for evaluating perception sensor models and highlights the existing research gaps in the area.
authors: ['Hamed Haghighi', 'Xiaomeng Wang', 'Hao Jing', 'Mehrdad Dianati']

id: 2402.10082 | categories: null | doi: null | year: null | venue: null
link: http://arxiv.org/pdf/2402.10082v1
updated: 2024-02-15T16:42:04Z | published: 2024-02-15T16:42:04Z
FedRDF: A Robust and Dynamic Aggregation Function against Poisoning Attacks in Federated Learning
Federated Learning (FL) represents a promising approach to typical privacy concerns associated with centralized Machine Learning (ML) deployments. Despite its well-known advantages, FL is vulnerable to security attacks such as Byzantine behaviors and poisoning attacks, which can significantly degrade model performance and hinder convergence. The effectiveness of existing approaches to mitigate complex attacks, such as median, trimmed mean, or Krum aggregation functions, has been only partially demonstrated in the case of specific attacks. Our study introduces a novel robust aggregation mechanism utilizing the Fourier Transform (FT), which is able to handle sophisticated attacks effectively without prior knowledge of the number of attackers. Employing this technique, weights generated by FL clients are projected into the frequency domain to ascertain their density function, selecting the one exhibiting the highest frequency. Consequently, malicious clients' weights are excluded. Our proposed approach was tested against various model poisoning attacks, demonstrating superior performance over state-of-the-art aggregation methods.
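
A rough, per-coordinate sketch in the spirit of the description above: estimate the density of client weight values (here via an FFT-smoothed histogram) and keep the value at the density mode, so outlying, potentially poisoned updates do not pull the aggregate. This is an interpretation of the abstract with assumed bin counts and smoothing, not the authors' FedRDF algorithm.

```python
import numpy as np

def mode_of_density(values, bins=64, bandwidth=6.0):
    """Return the value at the peak of an FFT-smoothed histogram density estimate."""
    hist, edges = np.histogram(values, bins=bins)
    freq = np.fft.rfft(hist)
    k = np.arange(len(freq))
    # Low-pass the histogram spectrum, then invert to get a smoothed density.
    smooth = np.fft.irfft(freq * np.exp(-0.5 * (k / bandwidth) ** 2), n=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers[int(np.argmax(smooth))]

def density_mode_aggregate(client_weights):
    """Aggregate a list of flattened client weight vectors coordinate-wise by density mode."""
    W = np.stack(client_weights)   # shape: (n_clients, n_params)
    return np.array([mode_of_density(W[:, j]) for j in range(W.shape[1])])

# Toy usage: 8 honest clients near 1.0, 2 poisoned clients near -5.0
rng = np.random.default_rng(0)
honest = [np.full(3, 1.0) + 0.05 * rng.standard_normal(3) for _ in range(8)]
poisoned = [np.full(3, -5.0) for _ in range(2)]
print(density_mode_aggregate(honest + poisoned))   # stays near the honest cluster
```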
authors: ['Enrique Mármol Campos', 'Aurora González Vidal', 'José Luis Hernández Ramos', 'Antonio Skarmeta']

id: 2402.10085 | categories: null | doi: null | year: null | venue: null
link: http://arxiv.org/pdf/2402.10085v1
updated: 2024-02-01T09:02:44Z | published: 2024-02-01T09:02:44Z
Develop End-to-End Anomaly Detection System
Anomaly detection plays a crucial role in ensuring network robustness. However, implementing intelligent alerting systems becomes a challenge when considering scenarios in which anomalies can be caused by both malicious and non-malicious events, leading to the difficulty of determining anomaly patterns. The lack of labeled data in the computer networking domain further exacerbates this issue, impeding the development of robust models capable of handling real-world scenarios. To address this challenge, in this paper, we propose an end-to-end anomaly detection model development pipeline. This framework makes it possible to consume user feedback and enable continuous user-centric model performance evaluation and optimization. We demonstrate the efficacy of the framework by introducing and benchmarking a new forecasting model -- named \emph{Lachesis} -- on a real-world networking problem. Experiments have demonstrated the robustness and effectiveness of the two proposed versions of \emph{Lachesis} compared with other models proposed in the literature. Our findings underscore the potential for improving the performance of data-driven products over their life cycles through a harmonized integration of user feedback and iterative development.
authors: ['Emanuele Mengoli', 'Zhiyuan Yao', 'Wutao Wei']

id: 2402.10086 | categories: null | doi: null | year: null | venue: null
link: http://arxiv.org/pdf/2402.10086v2
updated: 2024-07-03T08:31:45Z | published: 2024-02-08T09:08:44Z
Explainable AI for Safe and Trustworthy Autonomous Driving: A Systematic Review
Artificial Intelligence (AI) shows promising applications for the perception and planning tasks in autonomous driving (AD) due to its superior performance compared to conventional methods. However, inscrutable AI systems exacerbate the existing challenge of safety assurance of AD. One way to mitigate this challenge is to utilize explainable AI (XAI) techniques. To this end, we present the first comprehensive systematic literature review of explainable methods for safe and trustworthy AD. We begin by analyzing the requirements for AI in the context of AD, focusing on three key aspects: data, model, and agency. We find that XAI is fundamental to meeting these requirements. Based on this, we explain the sources of explanations in AI and describe a taxonomy of XAI. We then identify five key contributions of XAI for safe and trustworthy AI in AD, which are interpretable design, interpretable surrogate models, interpretable monitoring, auxiliary explanations, and interpretable validation. Finally, we propose a modular framework called SafeX to integrate these contributions, enabling explanation delivery to users while simultaneously ensuring the safety of AI models.
authors: ['Anton Kuznietsov', 'Balint Gyevnar', 'Cheng Wang', 'Steven Peters', 'Stefano V. Albrecht']

id: 2402.10087 | categories: null | doi: null | year: null | venue: null
link: http://arxiv.org/pdf/2402.10087v1
updated: 2024-01-31T23:51:14Z | published: 2024-01-31T23:51:14Z
Decentralized Covert Routing in Heterogeneous Networks Using Reinforcement Learning
This letter investigates covert routing communications in a heterogeneous network where a source transmits confidential data to a destination with the aid of relaying nodes where each transmitter judiciously chooses one modality among multiple communication modalities. We develop a novel reinforcement learning-based covert routing algorithm that finds a route from the source to the destination where each node identifies its next hop and modality only based on the local feedback information received from its neighboring nodes. We show based on numerical simulations that the proposed covert routing strategy has only negligible performance loss compared to the optimal centralized routing scheme.
authors: ['Justin Kong', 'Terrence J. Moore', 'Fikadu T. Dagefu']

id: 2402.10088 | categories: null | doi: null | year: null | venue: null
link: http://arxiv.org/pdf/2402.10088v2
updated: 2024-06-21T16:46:55Z | published: 2024-02-01T15:15:25Z
Deep hybrid models: infer and plan in the real world
Determining an optimal plan to accomplish a goal is a hard problem in realistic scenarios, which often comprise dynamic and causal relationships between several entities. Although traditionally such problems have been tackled with optimal control and reinforcement learning, a recent biologically-motivated proposal casts planning and control as an inference process. Among these new approaches, one is particularly promising: active inference. This new paradigm assumes that action and perception are two complementary aspects of life whereby the role of the former is to fulfill the predictions inferred by the latter. In this study, we present an effective solution, based on active inference, to complex control tasks. The proposed architecture exploits hybrid (discrete and continuous) processing to construct a hierarchical and dynamic representation of the self and the environment, which is then used to produce a flexible plan consisting of subgoals at different temporal scales. We evaluate this deep hybrid model on a non-trivial task: reaching a moving object after having picked a moving tool. This study extends past work on planning as inference and advances an alternative direction to optimal control and reinforcement learning.
authors: ['Matteo Priorelli', 'Ivilin Peev Stoianov']

id: 2402.10090 | categories: null | doi: null | year: null | venue: null
link: http://arxiv.org/pdf/2402.10090v1
updated: 2024-02-01T03:08:21Z | published: 2024-02-01T03:08:21Z
PICS: Pipeline for Image Captioning and Search
The growing volume of digital images necessitates advanced systems for efficient categorization and retrieval, presenting a significant challenge in database management and information retrieval. This paper introduces PICS (Pipeline for Image Captioning and Search), a novel approach designed to address the complexities inherent in organizing large-scale image repositories. PICS leverages the advancements in Large Language Models (LLMs) to automate the process of image captioning, offering a solution that transcends traditional manual annotation methods. The approach is rooted in the understanding that meaningful, AI-generated captions can significantly enhance the searchability and accessibility of images in large databases. By integrating sentiment analysis into the pipeline, PICS further enriches the metadata, enabling nuanced searches that extend beyond basic descriptors. This methodology not only simplifies the task of managing vast image collections but also sets a new precedent for accuracy and efficiency in image retrieval. The significance of PICS lies in its potential to transform image database systems, harnessing the power of machine learning and natural language processing to meet the demands of modern digital asset management.
authors: ['Grant Rosario', 'David Noever']

id: 2402.10091 | categories: null | doi: null | year: null | venue: null
link: http://arxiv.org/pdf/2402.10091v1
updated: 2024-02-01T18:52:26Z | published: 2024-02-01T18:52:26Z
Text-Based Product Matching -- Semi-Supervised Clustering Approach
Matching identical products present in multiple product feeds constitutes a crucial element of many tasks of e-commerce, such as comparing product offerings, dynamic price optimization, and selecting the assortment personalized for the client. It corresponds to the well-known machine learning task of entity matching, with its own specificity, like omnipresent unstructured data or inaccurate and inconsistent product descriptions. This paper aims to present a new philosophy to product matching utilizing a semi-supervised clustering approach. We study the properties of this method by experimenting with the IDEC algorithm on the real-world dataset using predominantly textual features and fuzzy string matching, with more standard approaches as a point of reference. Encouraging results show that unsupervised matching, enriched with a small annotated sample of product links, could be a possible alternative to the dominant supervised strategy, requiring extensive manual data labeling.
authors: ['Alicja Martinek', 'Szymon Łukasik', 'Amir H. Gandomi']

id: 2402.10092 | categories: null | doi: null | year: null | venue: null
link: http://arxiv.org/pdf/2402.10092v1
updated: 2024-02-01T14:16:10Z | published: 2024-02-01T14:16:10Z
Workflow Optimization for Parallel Split Learning
Split learning (SL) has been recently proposed as a way to enable resource-constrained devices to train multi-parameter neural networks (NNs) and participate in federated learning (FL). In a nutshell, SL splits the NN model into parts, and allows clients (devices) to offload the largest part as a processing task to a computationally powerful helper. In parallel SL, multiple helpers can process model parts of one or more clients, thus, considerably reducing the maximum training time over all clients (makespan). In this paper, we focus on orchestrating the workflow of this operation, which is critical in highly heterogeneous systems, as our experiments show. In particular, we formulate the joint problem of client-helper assignments and scheduling decisions with the goal of minimizing the training makespan, and we prove that it is NP-hard. We propose a solution method based on the decomposition of the problem by leveraging its inherent symmetry, and a second one that is fully scalable. A wealth of numerical evaluations using our testbed's measurements allow us to build a solution strategy comprising these methods. Moreover, we show that this strategy finds a near-optimal solution, and achieves a shorter makespan than the baseline scheme by up to 52.3%.
authors: ['Joana Tirana', 'Dimitra Tsigkari', 'George Iosifidis', 'Dimitris Chatzopoulos']

id: 2402.10093 | categories: null | doi: null | year: null | venue: null
link: http://arxiv.org/pdf/2402.10093v2
updated: 2024-06-03T17:51:58Z | published: 2024-02-15T16:46:16Z
MIM-Refiner: A Contrastive Learning Boost from Intermediate Pre-Trained Representations
We introduce MIM (Masked Image Modeling)-Refiner, a contrastive learning boost for pre-trained MIM models. MIM-Refiner is motivated by the insight that strong representations within MIM models generally reside in intermediate layers. Accordingly, MIM-Refiner leverages multiple contrastive heads that are connected to different intermediate layers. In each head, a modified nearest neighbor objective constructs semantic clusters that capture semantic information which improves performance on downstream tasks, including off-the-shelf and fine-tuning settings. The refinement process is short and simple - yet highly effective. Within a few epochs, we refine the features of MIM models from subpar to state-of-the-art, off-the-shelf features. Refining a ViT-H, pre-trained with data2vec 2.0 on ImageNet-1K, sets a new state-of-the-art in linear probing (84.7%) and low-shot classification among models that are pre-trained on ImageNet-1K. At ImageNet-1K 1-shot classification, MIM-Refiner advances the state-of-the-art to 64.2%, outperforming larger models that were trained on up to 2000 times more data such as DINOv2-g, OpenCLIP-G and MAWS-6.5B.
authors: ['Benedikt Alkin', 'Lukas Miklautz', 'Sepp Hochreiter', 'Johannes Brandstetter']

id: 2402.10095 | categories: null | doi: null | year: null | venue: null
link: http://arxiv.org/pdf/2402.10095v2
updated: 2024-06-12T08:18:27Z | published: 2024-02-15T16:49:42Z
Classification Diffusion Models: Revitalizing Density Ratio Estimation
A prominent family of methods for learning data distributions relies on density ratio estimation (DRE), where a model is trained to $\textit{classify}$ between data samples and samples from some reference distribution. DRE-based models can directly output the likelihood for any given input, a highly desired property that is lacking in most generative techniques. Nevertheless, to date, DRE methods have struggled to accurately capture the distributions of complex high-dimensional data like images, which led to reduced research attention over the years. In this work we present $\textit{classification diffusion models}$ (CDMs), a DRE-based generative method that adopts the formalism of denoising diffusion models (DDMs) while making use of a classifier that predicts the level of noise added to a clean signal. Our method is based on an analytical connection that we derive between an MSE-optimal denoiser for white Gaussian noise and a cross-entropy-optimal classifier for predicting the noise level. To the best of our knowledge, our method is the first DRE-based technique that can successfully generate images. Furthermore, it can output the likelihood of any input in a single forward pass, achieving state-of-the-art negative log likelihood (NLL) among methods with this property. Code is available on the project's webpage at https://shaharYadin.github.io/CDM/ .
authors: ['Shahar Yadin', 'Noam Elata', 'Tomer Michaeli']

id: 2402.10097 | categories: null | doi: null | year: null | venue: null
link: http://arxiv.org/pdf/2402.10097v3
updated: 2024-05-14T03:17:41Z | published: 2024-02-15T16:51:38Z
Adaptive Federated Learning in Heterogeneous Wireless Networks with Independent Sampling
Federated Learning (FL) algorithms commonly sample a random subset of clients to address the straggler issue and improve communication efficiency. While recent works have proposed various client sampling methods, they have limitations in joint system and data heterogeneity design, which may not align with practical heterogeneous wireless networks. In this work, we advocate a new independent client sampling strategy to minimize the wall-clock training time of FL, while considering data heterogeneity and system heterogeneity in both communication and computation. We first derive a new convergence bound for non-convex loss functions with independent client sampling and then propose an adaptive bandwidth allocation scheme. Furthermore, we propose an efficient independent client sampling algorithm based on the upper bounds on the convergence rounds and the expected per-round training time, to minimize the wall-clock time of FL, while considering both the data and system heterogeneity. Experimental results under practical wireless network settings with real-world prototype demonstrate that the proposed independent sampling scheme substantially outperforms the current best sampling schemes under various training models and datasets.
authors: ['Jiaxiang Geng', 'Yanzhao Hou', 'Xiaofeng Tao', 'Juncheng Wang', 'Bing Luo']

id: 2402.10098 | categories: null | doi: null | year: null | venue: null
link: http://arxiv.org/pdf/2402.10098v1
updated: 2024-02-06T14:04:31Z | published: 2024-02-06T14:04:31Z
Parameter-tuning-free data entry error unlearning with adaptive selective synaptic dampening
Data entry constitutes a fundamental component of the machine learning pipeline, yet it frequently results in the introduction of labelling errors. When a model has been trained on a dataset containing such errors its performance is reduced. This leads to the challenge of efficiently unlearning the influence of the erroneous data to improve the model performance without needing to completely retrain the model. While model editing methods exist for cases in which the correct label for a wrong entry is known, we focus on the case of data entry errors where we do not know the correct labels for the erroneous data. Our contribution is twofold. First, we introduce an extension to the selective synaptic dampening unlearning method that removes the need for parameter tuning, making unlearning accessible to practitioners. We demonstrate the performance of this extension, adaptive selective synaptic dampening (ASSD), on various ResNet18 and Vision Transformer unlearning tasks. Second, we demonstrate the performance of ASSD in a supply chain delay prediction problem with labelling errors using real-world data where we randomly introduce various levels of labelling errors. The application of this approach is particularly compelling in industrial settings, such as supply chain management, where a significant portion of data entry occurs manually through Excel sheets, rendering it error-prone. ASSD shows strong performance on general unlearning benchmarks and on the error correction problem where it outperforms fine-tuning for error correction.
authors: ['Stefan Schoepf', 'Jack Foster', 'Alexandra Brintrup']

id: 2402.10100 | categories: null | doi: null | year: null | venue: null
link: http://arxiv.org/pdf/2402.10100v3
updated: 2024-04-05T21:40:33Z | published: 2024-02-07T16:41:11Z
Tuning In: Analysis of Audio Classifier Performance in Clinical Settings with Limited Data
This study assesses deep learning models for audio classification in a clinical setting with the constraint of small datasets reflecting real-world prospective data collection. We analyze CNNs, including DenseNet and ConvNeXt, alongside transformer models like ViT, SWIN, and AST, and compare them against pre-trained audio models such as YAMNet and VGGish. Our method highlights the benefits of pre-training on large datasets before fine-tuning on specific clinical data. We prospectively collected two first-of-their-kind patient audio datasets from stroke patients. We investigated various preprocessing techniques, finding that RGB and grayscale spectrogram transformations affect model performance differently based on the priors they learn from pre-training. Our findings indicate CNNs can match or exceed transformer models in small dataset contexts, with DenseNet-Contrastive and AST models showing notable performance. This study highlights the significance of incremental marginal gains through model selection, pre-training, and preprocessing in sound classification; this offers valuable insights for clinical diagnostics that rely on audio classification.
authors: ['Hamza Mahdi', 'Eptehal Nashnoush', 'Rami Saab', 'Arjun Balachandar', 'Rishit Dagli', 'Lucas X. Perri', 'Houman Khosravani']

id: 2402.10101 | categories: null | doi: null | year: null | venue: null
link: http://arxiv.org/pdf/2402.10101v1
updated: 2024-02-07T14:21:21Z | published: 2024-02-07T14:21:21Z
Deep Learning Based Situation Awareness for Multiple Missiles Evasion
As the effective range of air-to-air missiles increases, it becomes harder for human operators to maintain the situational awareness needed to keep a UAV safe. In this work, we propose a decision support tool to help UAV operators in Beyond Visual Range (BVR) air combat scenarios assess the risks of different options and make decisions based on those. Earlier work focused on the threat posed by a single missile, and in this work, we extend the ideas to several missile threats. The proposed method uses Deep Neural Networks (DNN) to learn from high-fidelity simulations to provide the operator with an outcome estimate for a set of different strategies. Our results demonstrate that the proposed system can manage multiple incoming missiles, evaluate a family of options, and recommend the least risky course of action.
authors: ['Edvards Scukins', 'Markus Klein', 'Lars Kroon', 'Petter Ögren']

id: 2402.10109 | categories: null | doi: null | year: null | venue: null
link: http://arxiv.org/pdf/2402.10109v2
updated: 2024-03-19T16:43:09Z | published: 2024-02-15T17:05:48Z
Towards Reducing Diagnostic Errors with Interpretable Risk Prediction
Many diagnostic errors occur because clinicians cannot easily access relevant information in patient Electronic Health Records (EHRs). In this work we propose a method to use LLMs to identify pieces of evidence in patient EHR data that indicate increased or decreased risk of specific diagnoses; our ultimate aim is to increase access to evidence and reduce diagnostic errors. In particular, we propose a Neural Additive Model to make predictions backed by evidence with individualized risk estimates at time-points where clinicians are still uncertain, aiming to specifically mitigate delays in diagnosis and errors stemming from an incomplete differential. To train such a model, it is necessary to infer temporally fine-grained retrospective labels of eventual "true" diagnoses. We do so with LLMs, to ensure that the input text is from before a confident diagnosis can be made. We use an LLM to retrieve an initial pool of evidence, but then refine this set of evidence according to correlations learned by the model. We conduct an in-depth evaluation of the usefulness of our approach by simulating how it might be used by a clinician to decide between a pre-defined list of differential diagnoses.
authors: ['Denis Jered McInerney', 'William Dickinson', 'Lucy C. Flynn', 'Andrea C. Young', 'Geoffrey S. Young', 'Jan-Willem van de Meent', 'Byron C. Wallace']

id: 2402.10110 | categories: null | doi: null | year: null | venue: null
link: http://arxiv.org/pdf/2402.10110v2
updated: 2024-06-07T20:23:21Z | published: 2024-02-15T17:06:21Z
Selective Reflection-Tuning: Student-Selected Data Recycling for LLM Instruction-Tuning
Instruction tuning is critical to large language models (LLMs) for achieving better instruction following and task adaptation capabilities but its success heavily relies on the training data quality. Many recent methods focus on improving the data quality but often overlook the compatibility of the data with the student model being finetuned. This paper introduces Selective Reflection-Tuning, a novel paradigm that synergizes a teacher LLM's reflection and introspection for improving existing data quality with the data selection capability of the student LLM, to automatically refine existing instruction-tuning data. This teacher-student collaboration produces high-quality and student-compatible instruction-response pairs, resulting in sample-efficient instruction tuning and LLMs of superior performance. Selective Reflection-Tuning is a data augmentation and synthesis that generally improves LLM finetuning and self-improvement without collecting brand-new data. We apply our method to Alpaca and WizardLM data and achieve much stronger and top-tier 7B and 13B LLMs.
authors: ['Ming Li', 'Lichang Chen', 'Jiuhai Chen', 'Shwai He', 'Jiuxiang Gu', 'Tianyi Zhou']

id: 2402.10115 | categories: null | doi: null | year: null | venue: null
link: http://arxiv.org/pdf/2402.10115v1
updated: 2024-02-15T17:10:27Z | published: 2024-02-15T17:10:27Z
Generating Visual Stimuli from EEG Recordings using Transformer-encoder based EEG encoder and GAN
In this study, we tackle a modern research challenge within the field of perceptual brain decoding, which revolves around synthesizing images from EEG signals using an adversarial deep learning framework. The specific objective is to recreate images belonging to various object categories by leveraging EEG recordings obtained while subjects view those images. To achieve this, we employ a Transformer-encoder based EEG encoder to produce EEG encodings, which serve as inputs to the generator component of the GAN network. Alongside the adversarial loss, we also incorporate perceptual loss to enhance the quality of the generated images.
authors: ['Rahul Mishra', 'Arnav Bhavsar']

id: 2402.10118 | categories: null | doi: null | year: null | venue: null
link: http://arxiv.org/pdf/2402.10118v2
updated: 2024-02-16T08:52:29Z | published: 2024-02-15T17:16:33Z
Reusing Softmax Hardware Unit for GELU Computation in Transformers
Transformers have drastically improved the performance of natural language processing (NLP) and computer vision applications. The computation of transformers involves matrix multiplications and non-linear activation functions such as softmax and GELU (Gaussian Error Linear Unit) that are accelerated directly in hardware. Currently, function evaluation is done separately for each function and rarely allows for hardware reuse. To mitigate this problem, in this work, we map the computation of GELU to a softmax operator. In this way, the efficient hardware units designed already for softmax can be reused for computing GELU as well. Computation of GELU can enjoy the inherent vectorized nature of softmax and produce in parallel multiple GELU outcomes. Experimental results show that computing GELU via a pre-existing and incrementally modified softmax hardware unit (a) does not reduce the accuracy of representative NLP applications and (b) allows the reduction of the overall hardware area and power by 6.1% and 11.9%, respectively, on average.
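
A numerical illustration of why GELU can be routed through softmax-style hardware: under the common sigmoid approximation GELU(x) ≈ x·σ(1.702x), and σ(z) is exactly a two-element softmax, so an existing softmax unit can produce the needed factor. The abstract does not state the precise mapping used in the paper, so treat the constant and formulation below as illustrative.

```python
import numpy as np
from math import erf

def softmax(v):
    e = np.exp(v - np.max(v))
    return e / e.sum()

def gelu_exact(x):
    return 0.5 * x * (1.0 + erf(x / np.sqrt(2.0)))

def gelu_via_softmax(x):
    # sigmoid(1.702 * x) computed as a 2-way softmax over [1.702 * x, 0]
    return x * softmax(np.array([1.702 * x, 0.0]))[0]

for x in (-3.0, -1.0, -0.1, 0.5, 2.0):
    print(f"x={x:5.2f}  exact={gelu_exact(x):+.4f}  via softmax={gelu_via_softmax(x):+.4f}")
```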
authors: ['Christodoulos Peltekis', 'Kosmas Alexandridis', 'Giorgos Dimitrakopoulos']

id: 2402.10127 | categories: null | doi: null | year: null | venue: null
link: http://arxiv.org/pdf/2402.10127v1
updated: 2024-02-15T17:31:19Z | published: 2024-02-15T17:31:19Z
Nonlinear spiked covariance matrices and signal propagation in deep neural networks
Many recent works have studied the eigenvalue spectrum of the Conjugate Kernel (CK) defined by the nonlinear feature map of a feedforward neural network. However, existing results only establish weak convergence of the empirical eigenvalue distribution, and fall short of providing precise quantitative characterizations of the ''spike'' eigenvalues and eigenvectors that often capture the low-dimensional signal structure of the learning problem. In this work, we characterize these signal eigenvalues and eigenvectors for a nonlinear version of the spiked covariance model, including the CK as a special case. Using this general result, we give a quantitative description of how spiked eigenstructure in the input data propagates through the hidden layers of a neural network with random weights. As a second application, we study a simple regime of representation learning where the weight matrix develops a rank-one signal component over training and characterize the alignment of the target function with the spike eigenvector of the CK on test data.
authors: ['Zhichao Wang', 'Denny Wu', 'Zhou Fan']

id: 2402.10128 | categories: null | doi: null | year: null | venue: null
link: http://arxiv.org/pdf/2402.10128v2
updated: 2024-05-24T16:13:43Z | published: 2024-02-15T17:32:50Z
GES: Generalized Exponential Splatting for Efficient Radiance Field Rendering
Advancements in 3D Gaussian Splatting have significantly accelerated 3D reconstruction and generation. However, it may require a large number of Gaussians, which creates a substantial memory footprint. This paper introduces GES (Generalized Exponential Splatting), a novel representation that employs the Generalized Exponential Function (GEF) to model 3D scenes, requiring far fewer particles to represent a scene and thus significantly outperforming Gaussian Splatting methods in efficiency with a plug-and-play replacement ability for Gaussian-based utilities. GES is validated theoretically and empirically in both a principled 1D setup and realistic 3D scenes. It is shown to represent signals with sharp edges more accurately, which are typically challenging for Gaussians due to their inherent low-pass characteristics. Our empirical analysis demonstrates that GEF outperforms Gaussians in fitting naturally occurring signals (e.g. squares, triangles, and parabolic signals), thereby reducing the need for extensive splitting operations that increase the memory footprint of Gaussian Splatting. With the aid of a frequency-modulated loss, GES achieves competitive performance in novel-view synthesis benchmarks while requiring less than half the memory storage of Gaussian Splatting and increasing the rendering speed by up to 39%. The code is available on the project website https://abdullahamdi.com/ges .
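
The generalized exponential function underlying GES reduces to a Gaussian as a special case. A small sketch of the 1-D form is below, with a standard parameterization assumed; the paper's exact normalization may differ.

```python
import numpy as np

def generalized_exponential(x, mu=0.0, alpha=1.0, beta=2.0):
    """1-D generalized exponential kernel: exp(-(|x - mu| / alpha) ** beta).

    beta = 2 recovers an (unnormalized) Gaussian; smaller beta gives heavier tails,
    larger beta gives flatter tops and sharper edges.
    """
    return np.exp(-np.abs((x - mu) / alpha) ** beta)

x = np.linspace(-3, 3, 7)
print(np.round(generalized_exponential(x, beta=2.0), 3))  # Gaussian-like profile
print(np.round(generalized_exponential(x, beta=8.0), 3))  # nearly box-like, sharp edges
```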
authors: ['Abdullah Hamdi', 'Luke Melas-Kyriazi', 'Jinjie Mai', 'Guocheng Qian', 'Ruoshi Liu', 'Carl Vondrick', 'Bernard Ghanem', 'Andrea Vedaldi']

id: 2402.10130 | categories: null | doi: null | year: null | venue: null
link: http://arxiv.org/pdf/2402.10130v1
updated: 2024-02-15T17:34:56Z | published: 2024-02-15T17:34:56Z
Is Continual Learning Ready for Real-world Challenges?
Despite continual learning's long and well-established academic history, its application in real-world scenarios remains rather limited. This paper contends that this gap is attributable to a misalignment between the actual challenges of continual learning and the evaluation protocols in use, rendering proposed solutions ineffective for addressing the complexities of real-world setups. We validate our hypothesis and assess progress to date, using a new 3D semantic segmentation benchmark, OCL-3DSS. We investigate various continual learning schemes from the literature by utilizing more realistic protocols that necessitate online and continual learning for dynamic, real-world scenarios (e.g., in robotics and 3D vision applications). The outcomes are sobering: all considered methods perform poorly, significantly deviating from the upper bound of joint offline training. This raises questions about the applicability of existing methods in realistic settings. Our paper aims to initiate a paradigm shift, advocating for the adoption of continual learning methods through new experimental protocols that better emulate real-world conditions to facilitate breakthroughs in the field.
[ "['Theodora Kontogianni' 'Yuanwen Yue' 'Siyu Tang' 'Konrad Schindler']" ]
null
null
2402.10135
null
null
http://arxiv.org/pdf/2402.10135v1
2024-02-15T17:38:32Z
2024-02-15T17:38:32Z
Benchmarking federated strategies in Peer-to-Peer Federated learning for biomedical data
The increasing requirements for data protection and privacy have attracted huge research interest in distributed artificial intelligence and specifically in federated learning, an emerging machine learning approach that allows the construction of a model between several participants who hold their own private data. In the initial proposal of federated learning, the architecture was centralised and the aggregation was done with federated averaging, meaning that a central server orchestrates the federation using the most straightforward averaging strategy. This research is focused on testing different federated strategies in a peer-to-peer environment. The authors propose various aggregation strategies for federated learning, including weighted averaging aggregation, using different factors and strategies based on participant contribution. The strategies are tested with varying data sizes to identify the most robust ones. This research tests the strategies with several biomedical datasets, and the results of the experiments show that the accuracy-based weighted average outperforms the classical federated averaging method.
[ "['Jose L. Salmeron' 'Irina Arévalo' 'Antonio Ruiz-Celma']" ]
null
null
2402.10142
null
null
http://arxiv.org/pdf/2402.10142v2
2024-04-30T04:15:24Z
2024-02-15T17:48:58Z
Tracking Changing Probabilities via Dynamic Learners
Consider a predictor, a learner, whose input is a stream of discrete items. The predictor's task, at every time point, is probabilistic multiclass prediction, i.e., to predict which item may occur next by outputting zero or more candidate items, each with a probability, after which the actual item is revealed and the predictor learns from this observation. To output probabilities, the predictor keeps track of the proportions of the items it has seen. The stream is unbounded and the predictor has finite limited space and we seek efficient prediction and update techniques: the set of items is unknown to the predictor and their totality can also grow unbounded. Moreover, there is non-stationarity: the underlying frequencies of items may change, substantially, from time to time. For instance, new items may start appearing and a few recently frequent items may cease to occur again. The predictor, being space-bounded, need only provide probabilities for those items with (currently) sufficiently high frequency, i.e., the salient items. This problem is motivated in the setting of prediction games, a self-supervised learning regime where concepts serve as both the predictors and the predictands, and the set of concepts grows over time, resulting in non-stationarities as new concepts are generated and used. We develop sparse multiclass moving average techniques designed to respond to such non-stationarities in a timely manner. One technique is based on the exponentiated moving average (EMA) and another is based on queuing a few count snapshots. We show that the combination, and in particular supporting dynamic predictand-specific learning rates, offers advantages in terms of faster change detection and convergence.
[ "['Omid Madani']" ]
null
null
2402.10145
null
null
http://arxiv.org/pdf/2402.10145v1
2024-02-15T17:49:50Z
2024-02-15T17:49:50Z
A chaotic maps-based privacy-preserving distributed deep learning for incomplete and Non-IID datasets
Federated Learning is a machine learning approach that enables the training of a deep learning model among several participants with sensitive data that wish to share their own knowledge without compromising the privacy of their data. In this research, the authors employ a secured Federated Learning method with an additional layer of privacy and propose a method for addressing the non-IID challenge. Moreover, differential privacy is compared with chaotic-based encryption as a layer of privacy. The experimental approach assesses the performance of the federated deep learning model with differential privacy using both IID and non-IID data. In each experiment, the Federated Learning process improves the average performance metrics of the deep neural network, even in the case of non-IID data.
[ "['Irina Arévalo' 'Jose L. Salmeron']" ]
null
null
2402.10150
null
null
http://arxiv.org/pdf/2402.10150v1
2024-02-15T17:57:54Z
2024-02-15T17:57:54Z
$f$-MICL: Understanding and Generalizing InfoNCE-based Contrastive Learning
In self-supervised contrastive learning, a widely-adopted objective function is InfoNCE, which uses the heuristic cosine similarity for the representation comparison, and is closely related to maximizing the Kullback-Leibler (KL)-based mutual information. In this paper, we aim at answering two intriguing questions: (1) Can we go beyond the KL-based objective? (2) Besides the popular cosine similarity, can we design a better similarity function? We provide answers to both questions by generalizing the KL-based mutual information to the $f$-Mutual Information in Contrastive Learning ($f$-MICL) using the $f$-divergences. To answer the first question, we provide a wide range of $f$-MICL objectives which share the nice properties of InfoNCE (e.g., alignment and uniformity), and meanwhile result in similar or even superior performance. For the second question, assuming that the joint feature distribution is proportional to the Gaussian kernel, we derive an $f$-Gaussian similarity with better interpretability and empirical performance. Finally, we identify close relationships between the $f$-MICL objective and several popular InfoNCE-based objectives. Using benchmark tasks from both vision and natural language, we empirically evaluate $f$-MICL with different $f$-divergences on various architectures (SimCLR, MoCo, and MoCo v3) and datasets. We observe that $f$-MICL generally outperforms the benchmarks and the best-performing $f$-divergence is task and dataset dependent.
[ "['Yiwei Lu' 'Guojun Zhang' 'Sun Sun' 'Hongyu Guo' 'Yaoliang Yu']" ]
null
null
2402.10164
null
null
http://arxiv.org/pdf/2402.10164v1
2024-02-15T18:09:41Z
2024-02-15T18:09:41Z
Random features and polynomial rules
Random features models play a distinguished role in the theory of deep learning, describing the behavior of neural networks close to their infinite-width limit. In this work, we present a thorough analysis of the generalization performance of random features models for generic supervised learning problems with Gaussian data. Our approach, built with tools from the statistical mechanics of disordered systems, maps the random features model to an equivalent polynomial model, and allows us to plot average generalization curves as functions of the two main control parameters of the problem: the number of random features $N$ and the size $P$ of the training set, both assumed to scale as powers of the input dimension $D$. Our results extend the case of proportional scaling between $N$, $P$ and $D$. They are in accordance with rigorous bounds known for certain particular learning tasks and are in quantitative agreement with numerical experiments performed over many orders of magnitude of $N$ and $P$. We also find good agreement far from the asymptotic limits where $D \to \infty$ and at least one of $P/D^K$ and $N/D^L$ remains finite.
[ "['Fabián Aguirre-López' 'Silvio Franz' 'Mauro Pastore']" ]
null
null
2402.10168
null
null
http://arxiv.org/pdf/2402.10168v1
2024-02-15T18:11:02Z
2024-02-15T18:11:02Z
DeepSRGM -- Sequence Classification and Ranking in Indian Classical Music with Deep Learning
A vital aspect of Indian Classical Music (ICM) is Raga, which serves as a melodic framework for compositions and improvisations alike. Raga Recognition is an important music information retrieval task in ICM as it can aid numerous downstream applications ranging from music recommendations to organizing huge music collections. In this work, we propose a deep learning based approach to Raga recognition. Our approach employs efficient pre-processing and learns temporal sequences in music data using Long Short Term Memory based Recurrent Neural Networks (LSTM-RNN). We train and test the network on smaller sequences sampled from the original audio while the final inference is performed on the audio as a whole. Our method achieves an accuracy of 88.1% and 97% during inference on the CompMusic Carnatic dataset and its 10 Raga subset respectively, making it the state-of-the-art for the Raga recognition task. Our approach also enables sequence ranking, which aids us in retrieving melodic patterns from a given music database that are closely related to the presented query sequence.
[ "['Sathwik Tejaswi Madhusudhan' 'Girish Chowdhary']" ]
null
null
2402.10176
null
null
http://arxiv.org/pdf/2402.10176v1
2024-02-15T18:26:11Z
2024-02-15T18:26:11Z
OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset
Recent work has shown the immense potential of synthetically generated datasets for training large language models (LLMs), especially for acquiring targeted skills. Current large-scale math instruction tuning datasets such as MetaMathQA (Yu et al., 2024) and MAmmoTH (Yue et al., 2024) are constructed using outputs from closed-source LLMs with commercially restrictive licenses. A key reason limiting the use of open-source LLMs in these data generation pipelines has been the wide gap between the mathematical skills of the best closed-source LLMs, such as GPT-4, and the best open-source LLMs. Building on the recent progress in open-source LLMs, our proposed prompting novelty, and some brute-force scaling, we construct OpenMathInstruct-1, a math instruction tuning dataset with 1.8M problem-solution pairs. The dataset is constructed by synthesizing code-interpreter solutions for GSM8K and MATH, two popular math reasoning benchmarks, using the recently released and permissively licensed Mixtral model. Our best model, OpenMath-CodeLlama-70B, trained on a subset of OpenMathInstruct-1, achieves a score of 84.6% on GSM8K and 50.7% on MATH, which is competitive with the best gpt-distilled models. We release our code, models, and the OpenMathInstruct-1 dataset under a commercially permissive license.
[ "['Shubham Toshniwal' 'Ivan Moshkov' 'Sean Narenthiran' 'Daria Gitman'\n 'Fei Jia' 'Igor Gitman']" ]
null
null
2402.10177
null
null
http://arxiv.org/pdf/2402.10177v1
2024-02-15T18:27:18Z
2024-02-15T18:27:18Z
Large Scale Constrained Clustering With Reinforcement Learning
Given a network, allocating resources at clusters level, rather than at each node, enhances efficiency in resource allocation and usage. In this paper, we study the problem of finding fully connected disjoint clusters to minimize the intra-cluster distances and maximize the number of nodes assigned to the clusters, while also ensuring that no two nodes within a cluster exceed a threshold distance. While the problem can easily be formulated using a binary linear model, traditional combinatorial optimization solvers struggle when dealing with large-scale instances. We propose an approach to solve this constrained clustering problem via reinforcement learning. Our method involves training an agent to generate both feasible and (near) optimal solutions. The agent learns problem-specific heuristics, tailored to the instances encountered in this task. In the results section, we show that our algorithm finds near optimal solutions, even for large scale instances.
[ "['Benedikt Schesch' 'Marco Caserta']" ]
null
null
2402.10184
null
null
http://arxiv.org/pdf/2402.10184v5
2024-06-16T21:25:50Z
2024-02-15T18:39:24Z
Reward Generalization in RLHF: A Topological Perspective
Existing alignment methods share a common topology of information flow, where reward information is collected from humans, modeled with preference learning, and used to tune language models. However, this shared topology has not been systematically characterized, nor have its alternatives been thoroughly explored, leaving the problems of low data efficiency and unreliable generalization unaddressed. As a solution, we introduce a theoretical framework for investigating reward generalization in reinforcement learning from human feedback (RLHF), focusing on the topology of information flow at both macro and micro levels. At the macro level, we portray the RLHF information flow as an autoencoding process over behavior distributions, formalizing the RLHF objective of distributional consistency between human preference and model behavior. At the micro level, we present induced Bayesian networks as a theory of reward generalization in RLHF, introducing fine-grained dataset topologies into generalization bounds. Combining analysis on both levels, we propose reward modeling from tree-structured preference information. It is shown to reduce reward uncertainty by up to $\Theta(\log n/\log\log n)$ times compared to baselines, where $n$ is the dataset size. Validation on three NLP tasks shows that our tree-based reward model achieves an average win rate of 65% against baseline methods, thus improving reward generalization for free via topology design.
[ "['Tianyi Qiu' 'Fanzhi Zeng' 'Jiaming Ji' 'Dong Yan' 'Kaile Wang'\n 'Jiayi Zhou' 'Yang Han' 'Josef Dai' 'Xuehai Pan' 'Yaodong Yang']" ]
null
null
2402.10186
null
null
http://arxiv.org/pdf/2402.10186v1
2024-02-15T18:41:35Z
2024-02-15T18:41:35Z
Self-consistent Validation for Machine Learning Electronic Structure
Machine learning has emerged as a significant approach to efficiently tackle electronic structure problems. Despite its potential, there is little guarantee that the model will generalize to unseen data, which hinders its application in real-world scenarios. To address this issue, a technique has been proposed to estimate the accuracy of the predictions. This method integrates machine learning with self-consistent field methods to achieve both low validation cost and interpretability. This, in turn, enables exploration of the model's ability with active learning and instills confidence in its integration into real-world studies.
[ "['Gengyuan Hu' 'Gengchen Wei' 'Zekun Lou' 'Philip H. S. Torr'\n 'Wanli Ouyang' 'Han-sen Zhong' 'Chen Lin']" ]
null
null
2402.10189
null
null
http://arxiv.org/pdf/2402.10189v2
2024-03-28T19:41:34Z
2024-02-15T18:46:24Z
Uncertainty Quantification for In-Context Learning of Large Language Models
In-context learning has emerged as a groundbreaking ability of Large Language Models (LLMs) and revolutionized various fields by providing a few task-relevant demonstrations in the prompt. However, trustworthiness issues with LLMs' responses, such as hallucination, have also been actively discussed. Existing works have been devoted to quantifying the uncertainty in LLMs' responses, but they often overlook the complex nature of LLMs and the uniqueness of in-context learning. In this work, we delve into the predictive uncertainty of LLMs associated with in-context learning, highlighting that such uncertainties may stem from both the provided demonstrations (aleatoric uncertainty) and ambiguities tied to the model's configurations (epistemic uncertainty). We propose a novel formulation and corresponding estimation method to quantify both types of uncertainties. The proposed method offers an unsupervised way to understand the prediction of in-context learning in a plug-and-play fashion. Extensive experiments are conducted to demonstrate the effectiveness of the decomposition. The code and data are available at: https://github.com/lingchen0331/UQ_ICL.
[ "['Chen Ling' 'Xujiang Zhao' 'Xuchao Zhang' 'Wei Cheng' 'Yanchi Liu'\n 'Yiyou Sun' 'Mika Oishi' 'Takao Osaki' 'Katsushi Matsuda' 'Jie Ji'\n 'Guangji Bai' 'Liang Zhao' 'Haifeng Chen']" ]
null
null
2402.10191
null
null
http://arxiv.org/pdf/2402.10191v1
2024-02-15T18:48:21Z
2024-02-15T18:48:21Z
FedAnchor: Enhancing Federated Semi-Supervised Learning with Label Contrastive Loss for Unlabeled Clients
Federated learning (FL) is a distributed learning paradigm that facilitates collaborative training of a shared global model across devices while keeping data localized. The deployment of FL in numerous real-world applications faces delays, primarily due to the prevalent reliance on supervised tasks. Generating detailed labels at edge devices, if feasible, is demanding, given resource constraints and the imperative for continuous data updates. In addressing these challenges, solutions such as federated semi-supervised learning (FSSL), which relies on unlabeled clients' data and a limited amount of labeled data on the server, become pivotal. In this paper, we propose FedAnchor, an innovative FSSL method that introduces a unique double-head structure, called anchor head, paired with the classification head trained exclusively on labeled anchor data on the server. The anchor head is empowered with a newly designed label contrastive loss based on the cosine similarity metric. Our approach mitigates the confirmation bias and overfitting issues associated with pseudo-labeling techniques based on high-confidence model prediction samples. Extensive experiments on CIFAR10/100 and SVHN datasets demonstrate that our method outperforms the state-of-the-art method by a significant margin in terms of convergence rate and model accuracy.
[ "['Xinchi Qiu' 'Yan Gao' 'Lorenzo Sani' 'Heng Pan' 'Wanru Zhao'\n 'Pedro P. B. Gusmao' 'Mina Alibeigi' 'Alex Iacob' 'Nicholas D. Lane']" ]
null
null
2402.10192
null
null
http://arxiv.org/pdf/2402.10192v2
2024-02-29T11:22:28Z
2024-02-15T18:48:32Z
Multi-Excitation Projective Simulation with a Many-Body Physics Inspired Inductive Bias
With the impressive progress of deep learning, applications relying on machine learning are increasingly being integrated into daily life. However, most deep learning models have an opaque, oracle-like nature making it difficult to interpret and understand their decisions. This problem led to the development of the field known as eXplainable Artificial Intelligence (XAI). One method in this field known as Projective Simulation (PS) models a chain-of-thought as a random walk of a particle on a graph with vertices that have concepts attached to them. While this description has various benefits, including the possibility of quantization, it cannot be naturally used to model thoughts that combine several concepts simultaneously. To overcome this limitation, we introduce Multi-Excitation Projective Simulation (mePS), a generalization that considers a chain-of-thought to be a random walk of several particles on a hypergraph. A definition for a dynamic hypergraph is put forward to describe the agent's training history along with applications to AI and hypergraph visualization. An inductive bias inspired by the remarkably successful few-body interaction models used in quantum many-body physics is formalized for our classical mePS framework and employed to tackle the exponential complexity associated with naive implementations of hypergraphs. We prove that our inductive bias reduces the complexity from exponential to polynomial, with the exponent representing the cutoff on how many particles can interact. We numerically apply our method to two toy environments and a more complex scenario modelling the diagnosis of a broken computer. These environments demonstrate the resource savings provided by an appropriate choice of inductive bias, as well as showcasing aspects of interpretability. A quantum model for mePS is also briefly outlined and some future directions for it are discussed.
[ "['Philip A. LeMaitre' 'Marius Krumm' 'Hans J. Briegel']" ]
null
null
2402.10193
null
null
http://arxiv.org/pdf/2402.10193v2
2024-02-28T03:42:10Z
2024-02-15T18:50:06Z
BitDelta: Your Fine-Tune May Only Be Worth One Bit
Large Language Models (LLMs) are typically trained in two phases: pre-training on large internet-scale datasets, and fine-tuning for downstream tasks. Given the higher computational demand of pre-training, it's intuitive to assume that fine-tuning adds less new information to the model, and is thus more compressible. We explore this assumption by decomposing the weights of fine-tuned models into their pre-trained components and an additional delta. We introduce a simple method, BitDelta, which successfully quantizes this delta down to 1 bit without compromising performance. This interesting finding not only highlights the potential redundancy of information added during fine-tuning, but also has significant implications for the multi-tenant serving and multi-tenant storage of fine-tuned models. By enabling the use of a single high-precision base model accompanied by multiple 1-bit deltas, BitDelta dramatically reduces GPU memory requirements by more than 10x, which can also be translated to enhanced generation latency in multi-tenant settings. We validate BitDelta through experiments across Llama-2 and Mistral model families, and on models up to 70B parameters, showcasing minimal performance degradation over all tested settings.
[ "['James Liu' 'Guangxuan Xiao' 'Kai Li' 'Jason D. Lee' 'Song Han' 'Tri Dao'\n 'Tianle Cai']" ]
null
null
2402.10198
null
null
http://arxiv.org/pdf/2402.10198v3
2024-06-03T07:34:37Z
2024-02-15T18:55:05Z
SAMformer: Unlocking the Potential of Transformers in Time Series Forecasting with Sharpness-Aware Minimization and Channel-Wise Attention
Transformer-based architectures achieved breakthrough performance in natural language processing and computer vision, yet they remain inferior to simpler linear baselines in multivariate long-term forecasting. To better understand this phenomenon, we start by studying a toy linear forecasting problem for which we show that transformers are incapable of converging to their true solution despite their high expressive power. We further identify the attention of transformers as being responsible for this low generalization capacity. Building upon this insight, we propose a shallow lightweight transformer model that successfully escapes bad local minima when optimized with sharpness-aware optimization. We empirically demonstrate that this result extends to all commonly used real-world multivariate time series datasets. In particular, SAMformer surpasses current state-of-the-art methods and is on par with the biggest foundation model MOIRAI while having significantly fewer parameters. The code is available at https://github.com/romilbert/samformer.
[ "['Romain Ilbert' 'Ambroise Odonnat' 'Vasilii Feofanov' 'Aladin Virmaux'\n 'Giuseppe Paolo' 'Themis Palpanas' 'Ievgen Redko']" ]
null
null
2402.10202
null
null
http://arxiv.org/pdf/2402.10202v2
2024-06-13T06:23:35Z
2024-02-15T18:56:46Z
Bridging Associative Memory and Probabilistic Modeling
Associative memory and probabilistic modeling are two fundamental topics in artificial intelligence. The first studies recurrent neural networks designed to denoise, complete and retrieve data, whereas the second studies learning and sampling from probability distributions. Based on the observation that associative memory's energy functions can be seen as probabilistic modeling's negative log likelihoods, we build a bridge between the two that enables useful flow of ideas in both directions. We showcase four examples: First, we propose new energy-based models that flexibly adapt their energy functions to new in-context datasets, an approach we term \textit{in-context learning of energy functions}. Second, we propose two new associative memory models: one that dynamically creates new memories as necessitated by the training data using Bayesian nonparametrics, and another that explicitly computes proportional memory assignments using the evidence lower bound. Third, using tools from associative memory, we analytically and numerically characterize the memory capacity of Gaussian kernel density estimators, a widespread tool in probabilistic modeling. Fourth, we study a widespread implementation choice in transformers -- normalization followed by self-attention -- to show it performs clustering on the hypersphere. Altogether, this work urges further exchange of useful ideas between these two continents of artificial intelligence.
[ "['Rylan Schaeffer' 'Nika Zahedi' 'Mikail Khona' 'Dhruv Pai' 'Sang Truong'\n 'Yilun Du' 'Mitchell Ostrow' 'Sarthak Chandra' 'Andres Carranza'\n 'Ila Rani Fiete' 'Andrey Gromov' 'Sanmi Koyejo']" ]
null
null
2402.10206
null
null
http://arxiv.org/pdf/2402.10206v1
2024-02-15T18:58:18Z
2024-02-15T18:58:18Z
Ising on the Graph: Task-specific Graph Subsampling via the Ising Model
Reducing a graph while preserving its overall structure is an important problem with many applications. Typically, the reduction approaches either remove edges (sparsification) or merge nodes (coarsening) in an unsupervised way with no specific downstream task in mind. In this paper, we present an approach for subsampling graph structures using an Ising model defined on either the nodes or edges and learning the external magnetic field of the Ising model using a graph neural network. Our approach is task-specific as it can learn how to reduce a graph for a specific downstream task in an end-to-end fashion. The utilized loss function of the task does not even have to be differentiable. We showcase the versatility of our approach on three distinct applications: image segmentation, 3D shape sparsification, and sparse approximate matrix inverse determination.
[ "['Maria Bånkestad' 'Jennifer Andersson' 'Sebastian Mair' 'Jens Sjölund']" ]
null
null
2402.10207
null
null
http://arxiv.org/pdf/2402.10207v5
2024-06-05T09:25:41Z
2024-02-15T18:58:31Z
Rewards-in-Context: Multi-objective Alignment of Foundation Models with Dynamic Preference Adjustment
We consider the problem of multi-objective alignment of foundation models with human preferences, which is a critical step towards helpful and harmless AI systems. However, it is generally costly and unstable to fine-tune large foundation models using reinforcement learning (RL), and the multi-dimensionality, heterogeneity, and conflicting nature of human preferences further complicate the alignment process. In this paper, we introduce Rewards-in-Context (RiC), which conditions the response of a foundation model on multiple rewards in its prompt context and applies supervised fine-tuning for alignment. The salient features of RiC are simplicity and adaptivity, as it only requires supervised fine-tuning of a single foundation model and supports dynamic adjustment for user preferences during inference time. Inspired by the analytical solution of an abstracted convex optimization problem, our dynamic inference-time adjustment method approaches the Pareto-optimal solution for multiple objectives. Empirical evidence demonstrates the efficacy of our method in aligning both Large Language Models (LLMs) and diffusion models to accommodate diverse rewards with only around 10% GPU hours compared with multi-objective RL baseline.
[ "['Rui Yang' 'Xiaoman Pan' 'Feng Luo' 'Shuang Qiu' 'Han Zhong' 'Dong Yu'\n 'Jianshu Chen']" ]
null
null
2402.10208
null
null
http://arxiv.org/pdf/2402.10208v2
2024-07-01T12:48:51Z
2024-02-15T18:59:02Z
Recovering the Pre-Fine-Tuning Weights of Generative Models
The dominant paradigm in generative modeling consists of two steps: i) pre-training on a large-scale but unsafe dataset, ii) aligning the pre-trained model with human values via fine-tuning. This practice is considered safe, as no current method can recover the unsafe, pre-fine-tuning model weights. In this paper, we demonstrate that this assumption is often false. Concretely, we present Spectral DeTuning, a method that can recover the weights of the pre-fine-tuning model using a few low-rank (LoRA) fine-tuned models. In contrast to previous attacks that attempt to recover pre-fine-tuning capabilities, our method aims to recover the exact pre-fine-tuning weights. Our approach exploits this new vulnerability against large-scale models such as a personalized Stable Diffusion and an aligned Mistral.
[ "['Eliahu Horwitz' 'Jonathan Kahana' 'Yedid Hoshen']" ]
null
null
2402.10210
null
null
http://arxiv.org/pdf/2402.10210v1
2024-02-15T18:59:18Z
2024-02-15T18:59:18Z
Self-Play Fine-Tuning of Diffusion Models for Text-to-Image Generation
Fine-tuning Diffusion Models remains an underexplored frontier in generative artificial intelligence (GenAI), especially when compared with the remarkable progress made in fine-tuning Large Language Models (LLMs). While cutting-edge diffusion models such as Stable Diffusion (SD) and SDXL rely on supervised fine-tuning, their performance inevitably plateaus after seeing a certain volume of data. Recently, reinforcement learning (RL) has been employed to fine-tune diffusion models with human preference data, but it requires at least two images ("winner" and "loser" images) for each text prompt. In this paper, we introduce an innovative technique called self-play fine-tuning for diffusion models (SPIN-Diffusion), where the diffusion model engages in competition with its earlier versions, facilitating an iterative self-improvement process. Our approach offers an alternative to conventional supervised fine-tuning and RL strategies, significantly improving both model performance and alignment. Our experiments on the Pick-a-Pic dataset reveal that SPIN-Diffusion outperforms the existing supervised fine-tuning method in aspects of human preference alignment and visual appeal right from its first iteration. By the second iteration, it exceeds the performance of RLHF-based methods across all metrics, achieving these results with less data.
[ "['Huizhuo Yuan' 'Zixiang Chen' 'Kaixuan Ji' 'Quanquan Gu']" ]
null
null
2402.10211
null
null
http://arxiv.org/pdf/2402.10211v1
2024-02-15T18:59:43Z
2024-02-15T18:59:43Z
Hierarchical State Space Models for Continuous Sequence-to-Sequence Modeling
Reasoning from sequences of raw sensory data is a ubiquitous problem across fields ranging from medical devices to robotics. These problems often involve using long sequences of raw sensor data (e.g. magnetometers, piezoresistors) to predict sequences of desirable physical quantities (e.g. force, inertial measurements). While classical approaches are powerful for locally-linear prediction problems, they often fall short when using real-world sensors. These sensors are typically non-linear, are affected by extraneous variables (e.g. vibration), and exhibit data-dependent drift. For many problems, the prediction task is exacerbated by small labeled datasets since obtaining ground-truth labels requires expensive equipment. In this work, we present Hierarchical State-Space Models (HiSS), a conceptually simple, new technique for continuous sequential prediction. HiSS stacks structured state-space models on top of each other to create a temporal hierarchy. Across six real-world sensor datasets, from tactile-based state prediction to accelerometer-based inertial measurement, HiSS outperforms state-of-the-art sequence models such as causal Transformers, LSTMs, S4, and Mamba by at least 23% on MSE. Our experiments further indicate that HiSS demonstrates efficient scaling to smaller datasets and is compatible with existing data-filtering techniques. Code, datasets and videos can be found on https://hiss-csp.github.io.
[ "['Raunaq Bhirangi' 'Chenyu Wang' 'Venkatesh Pattabiraman' 'Carmel Majidi'\n 'Abhinav Gupta' 'Tess Hellebrekers' 'Lerrel Pinto']" ]
null
null
2402.10213
null
null
http://arxiv.org/pdf/2402.10213v1
2023-11-30T02:02:30Z
2023-11-30T02:02:30Z
Clustering Inductive Biases with Unrolled Networks
The classical sparse coding (SC) model represents visual stimuli as a linear combination of a handful of learned basis functions that are Gabor-like when trained on natural image data. However, the Gabor-like filters learned by classical sparse coding far overpredict well-tuned simple cell receptive field profiles observed empirically. While neurons fire sparsely, neuronal populations are also organized in physical space by their sensitivity to certain features. In V1, this organization is a smooth progression of orientations along the cortical sheet. A number of subsequent models have either discarded the sparse dictionary learning framework entirely or have yet to take advantage of the surge in unrolled, neural dictionary learning architectures. A key missing theme of these updates is a stronger notion of \emph{structured sparsity}. We propose an autoencoder architecture (WLSC) whose latent representations are implicitly, locally organized for spectral clustering through a Laplacian quadratic form of a bipartite graph, which generates a diverse set of artificial receptive fields that match primate data in V1 as faithfully as recent contrastive frameworks like Local Low Dimensionality, or LLD \citep{lld}, that discard sparse dictionary learning. By unifying sparse and smooth coding in models of the early visual cortex through our autoencoder, we also show that our regularization can be interpreted as early-stage specialization of receptive fields to certain classes of stimuli; that is, we induce a weak clustering bias for later stages of cortex where functional and spatial segregation (i.e. topography) are known to occur. The results show an imperative for \emph{spatial regularization} of both the receptive fields and firing rates to begin to describe feature disentanglement in V1 and beyond.
[ "['Jonathan Huml' 'Abiy Tasissa' 'Demba Ba']" ]
null
null
2402.10227
null
null
http://arxiv.org/pdf/2402.10227v1
2024-02-04T19:33:44Z
2024-02-04T19:33:44Z
Correlational Lagrangian Schrödinger Bridge: Learning Dynamics with Population-Level Regularization
Accurate modeling of system dynamics holds intriguing potential in broad scientific fields including cytodynamics and fluid mechanics. This task often presents significant challenges when (i) observations are limited to cross-sectional samples (where individual trajectories are inaccessible for learning), and moreover, (ii) the behaviors of individual particles are heterogeneous (especially in biological systems due to biodiversity). To address them, we introduce a novel framework dubbed correlational Lagrangian Schrödinger bridge (CLSB), aiming to seek the evolution "bridging" among cross-sectional observations, while regularized for the minimal population "cost". In contrast to prior methods relying on \textit{individual}-level regularizers for all particles \textit{homogeneously} (e.g. restraining individual motions), CLSB operates at the population level admitting the heterogeneous nature, resulting in more generalizable modeling in practice. To this end, our contributions include (1) a new class of population regularizers capturing the temporal variations in multivariate relations, with the tractable formulation derived, (2) three domain-informed instantiations based on genetic co-expression stability, and (3) an integration of population regularizers into data-driven generative models as constrained optimization, and a numerical solution, with further extension to conditional generative models. Empirically, we demonstrate the superiority of CLSB in single-cell sequencing data analyses such as simulating cell development over time and predicting cellular responses to drugs of varied doses.
[ "['Yuning You' 'Ruida Zhou' 'Yang Shen']" ]
null
null
2402.10228
null
null
http://arxiv.org/pdf/2402.10228v5
2024-06-14T04:51:07Z
2024-02-05T07:07:30Z
Q-Star Meets Scalable Posterior Sampling: Bridging Theory and Practice via HyperAgent
We propose HyperAgent, a reinforcement learning (RL) algorithm based on the hypermodel framework for exploration in RL. HyperAgent allows for the efficient incremental approximation of posteriors associated with an optimal action-value function ($Q^\star$) without the need for conjugacy and follows the greedy policies w.r.t. these approximate posterior samples. We demonstrate that HyperAgent offers robust performance in large-scale deep RL benchmarks. It can solve Deep Sea hard exploration problems with episodes that optimally scale with problem size and exhibits significant efficiency gains in the Atari suite. Implementing HyperAgent requires minimal code addition to well-established deep RL frameworks like DQN. We theoretically prove that, under tabular assumptions, HyperAgent achieves logarithmic per-step computational complexity while attaining sublinear regret, matching the best known randomized tabular RL algorithm.
[ "['Yingru Li' 'Jiawei Xu' 'Lei Han' 'Zhi-Quan Luo']" ]
null
null
2402.10229
null
null
http://arxiv.org/pdf/2402.10229v1
2024-02-08T19:34:24Z
2024-02-08T19:34:24Z
Mixture-Models: a one-stop Python Library for Model-based Clustering using various Mixture Models
\texttt{Mixture-Models} is an open-source Python library for fitting Gaussian Mixture Models (GMM) and their variants, such as Parsimonious GMMs, Mixture of Factor Analyzers, MClust models, Mixture of Student's t distributions, etc. It streamlines the implementation and analysis of these models using various first/second order optimization routines such as Gradient Descent and Newton-CG through automatic differentiation (AD) tools. This helps in extending these models to high-dimensional data, a capability that is the first of its kind among Python libraries. The library provides user-friendly model evaluation tools, such as BIC, AIC, and log-likelihood estimation. The source code is licensed under the MIT license and can be accessed at \url{https://github.com/kasakh/Mixture-Models}. The package is highly extensible, allowing users to incorporate new distributions and optimization techniques with ease. We conduct a large-scale simulation to compare the performance of various gradient-based approaches against Expectation Maximization on a wide range of settings and identify the corresponding best-suited approach.
[ "['Siva Rajesh Kasa' 'Hu Yijie' 'Santhosh Kumar Kasa' 'Vaibhav Rajan']" ]
null
null
2402.10230
null
null
http://arxiv.org/pdf/2402.10230v1
2024-02-08T21:58:53Z
2024-02-08T21:58:53Z
Temporal Analysis of Drifting Hashtags in Textual Data Streams: A Graph-Based Application
Social media has played an important role since its emergence. People use the internet to express opinions about anything, making social media platforms a social sensor. Initially supported by Twitter, the hashtags are now in use on several social media platforms. Hashtags are helpful to tag, track, and group posts on similar topics. In this paper, we analyze hashtag drifts over time using concepts from graph analysis and textual data streams using the Girvan-Newman method to uncover hashtag communities in annual snapshots. More specifically, we analyzed the #mybodymychoice hashtag between 2018 and 2022. In addition, we offer insights about some hashtags found in the study. Furthermore, our approach can be useful for monitoring changes over time in opinions and sentiment patterns about an entity on social media. Even though the hashtag #mybodymychoice was initially coupled with women's rights, abortion, and bodily autonomy, we observe that it suffered drifts during the studied period across topics such as drug legalization, vaccination, political protests, war, and civil rights. The year 2021 was the most significant drifting year, in which the communities detected suggest that #mybodymychoice significantly drifted to vaccination and Covid-19-related topics.
[ "['Cristiano M. Garcia' 'Alceu de Souza Britto Jr' 'Jean Paul Barddal']" ]
null
null
2402.10231
null
null
http://arxiv.org/pdf/2402.10231v1
2024-02-09T16:53:19Z
2024-02-09T16:53:19Z
A Multi-faceted Semi-Synthetic Dataset for Automated Cyberbullying Detection
In recent years, the rising use of social media has propelled automated cyberbullying detection into a prominent research domain. However, challenges persist due to the absence of a standardized definition and universally accepted datasets. Many researchers now view cyberbullying as a facet of cyberaggression, encompassing factors like repetition, peer relationships, and harmful intent in addition to online aggression. Acquiring comprehensive data reflective of all cyberbullying components from social media networks proves to be a complex task. This paper provides a description of an extensive semi-synthetic cyberbullying dataset that incorporates all of the essential aspects of cyberbullying, including aggression, repetition, peer relationships, and intent to harm. The method of creating the dataset is succinctly outlined, and a detailed overview of the publicly accessible dataset is additionally presented. This accompanying data article provides an in-depth look at the dataset, increasing transparency and enabling replication. It also aids in a deeper understanding of the data, supporting broader research use.
[ "['Naveed Ejaz' 'Fakhra Kashif' 'Salimur Choudhury']" ]
null
null
2402.10232
null
null
http://arxiv.org/pdf/2402.10232v3
2024-02-27T12:05:09Z
2024-02-10T15:37:46Z
Simple, unified analysis of Johnson-Lindenstrauss with applications
We present a simple and unified analysis of the Johnson-Lindenstrauss (JL) lemma, a cornerstone in the field of dimensionality reduction critical for managing high-dimensional data. Our approach not only simplifies the understanding but also unifies various constructions under the JL framework, including spherical, binary-coin, sparse JL, Gaussian and sub-Gaussian models. This simplification and unification make significant strides in preserving the intrinsic geometry of data, essential across diverse applications from streaming algorithms to reinforcement learning. Notably, we deliver the first rigorous proof of the spherical construction's effectiveness and provide a general class of sub-Gaussian constructions within this simplified framework. At the heart of our contribution is an innovative extension of the Hanson-Wright inequality to high dimensions, complete with explicit constants. By employing simple yet powerful probabilistic tools and analytical techniques, such as an enhanced diagonalization process, our analysis not only solidifies the JL lemma's theoretical foundation by removing an independence assumption but also extends its practical reach, showcasing its adaptability and importance in contemporary computational algorithms.
[ "['Yingru Li']" ]
null
null
2402.10236
null
null
http://arxiv.org/pdf/2402.10236v1
2024-02-14T14:30:42Z
2024-02-14T14:30:42Z
Discovering Sensorimotor Agency in Cellular Automata using Diversity Search
The research field of Artificial Life studies how life-like phenomena such as autopoiesis, agency, or self-regulation can self-organize in computer simulations. In cellular automata (CA), a key open question has been whether it is possible to find environment rules that self-organize robust "individuals" from an initial state with no prior existence of things like "bodies", "brain", "perception" or "action". In this paper, we leverage recent advances in machine learning, combining algorithms for diversity search, curriculum learning and gradient descent, to automate the search for such "individuals", i.e. localized structures that move around with the ability to react in a coherent manner to external obstacles and maintain their integrity, hence primitive forms of sensorimotor agency. We show that this approach enables us to systematically find environmental conditions in CA leading to self-organization of such basic forms of agency. Through multiple experiments, we show that the discovered agents have surprisingly robust capabilities to move, maintain their body integrity and navigate among various obstacles. They also show strong generalization abilities, with robustness to changes of scale, random updates or perturbations from the environment not seen during training. We discuss how this approach opens new perspectives in AI and synthetic bioengineering.
[ "['Gautier Hamon' 'Mayalen Etcheverry' 'Bert Wang-Chak Chan'\n 'Clément Moulin-Frier' 'Pierre-Yves Oudeyer']" ]
null
null
2402.10238
null
null
http://arxiv.org/pdf/2402.10238v1
2024-02-14T18:12:42Z
2024-02-14T18:12:42Z
Parametric Learning of Time-Advancement Operators for Unstable Flame Evolution
This study investigates the application of machine learning, specifically Fourier Neural Operator (FNO) and Convolutional Neural Network (CNN), to learn time-advancement operators for parametric partial differential equations (PDEs). Our focus is on extending existing operator learning methods to handle additional inputs representing PDE parameters. The goal is to create a unified learning approach that accurately predicts short-term solutions and provides robust long-term statistics under diverse parameter conditions, facilitating computational cost savings and accelerating development in engineering simulations. We develop and compare parametric learning methods based on FNO and CNN, evaluating their effectiveness in learning parametric-dependent solution time-advancement operators for one-dimensional PDEs and realistic flame front evolution data obtained from direct numerical simulations of the Navier-Stokes equations.
[ "['Rixin Yu' 'Erdzan Hodzic']" ]
null
null
2402.10239
null
null
http://arxiv.org/pdf/2402.10239v1
2024-02-14T18:24:49Z
2024-02-14T18:24:49Z
A Language Model for Particle Tracking
Particle tracking is crucial for almost all physics analysis programs at the Large Hadron Collider. Deep learning models are pervasively used in particle tracking related tasks. However, the current practice is to design and train one deep learning model for one task with supervised learning techniques. The trained models work well for tasks they are trained on but show no or little generalization capabilities. We propose to unify these models with a language model. In this paper, we present a tokenized detector representation that allows us to train a BERT model for particle tracking. The trained BERT model, namely TrackingBERT, offers latent detector module embedding that can be used for other tasks. This work represents the first step towards developing a foundational model for particle detector understanding.
[ "['Andris Huang' 'Yash Melkani' 'Paolo Calafiura' 'Alina Lazar'\n 'Daniel Thomas Murnane' 'Minh-Tuan Pham' 'Xiangyang Ju']" ]
null
null
2402.10240
null
null
http://arxiv.org/pdf/2402.10240v2
2024-02-27T22:11:42Z
2024-02-14T18:44:05Z
A Dynamical View of the Question of Why
We address causal reasoning in multivariate time series data generated by stochastic processes. Existing approaches are largely restricted to static settings, ignoring the continuity and emission of variations across time. In contrast, we propose a learning paradigm that directly establishes causation between events in the course of time. We present two key lemmas to compute causal contributions and frame them as reinforcement learning problems. Our approach offers formal and computational tools for uncovering and quantifying causal relationships in diffusion processes, subsuming various important settings such as discrete-time Markov decision processes. Finally, in fairly intricate experiments and through sheer learning, our framework reveals and quantifies causal links, which otherwise seem inexplicable.
[ "['Mehdi Fatemi' 'Sindhu Gowda']" ]
null
null
2402.10242
null
null
http://arxiv.org/pdf/2402.10242v2
2024-03-15T15:22:50Z
2024-02-14T19:37:30Z
Signed Diverse Multiplex Networks: Clustering and Inference
The paper introduces a Signed Generalized Random Dot Product Graph (SGRDPG) model, which is a variant of the Generalized Random Dot Product Graph (GRDPG), where, in addition, edges can be positive or negative. The setting is extended to a multiplex version, where all layers have the same collection of nodes and follow the SGRDPG. The only common feature of the layers of the network is that they can be partitioned into groups with common subspace structures, while otherwise matrices of connection probabilities can be all different. The setting above is extremely flexible and includes a variety of existing multiplex network models as its particular cases. The paper fulfills two objectives. First, it shows that keeping signs of the edges in the process of network construction leads to a better precision of estimation and clustering and, hence, is beneficial for tackling real world problems such as, for example, analysis of brain networks. Second, by employing novel algorithms, our paper ensures strongly consistent clustering of layers and high accuracy of subspace estimation. In addition to theoretical guarantees, both of those features are demonstrated using numerical simulations and a real data example.
[ "['Marianna Pensky']" ]
null
null
2402.10243
null
null
http://arxiv.org/pdf/2402.10243v1
2024-02-14T22:23:26Z
2024-02-14T22:23:26Z
Understanding team collapse via probabilistic graphical models
In this work, we develop a graphical model to capture team dynamics. We analyze the model and show how to learn its parameters from data. Using our model we study the phenomenon of team collapse from a computational perspective. We use simulations and real-world experiments to find the main causes of team collapse. We also provide the principles of building resilient teams, i.e., teams that avoid collapsing. Finally, we use our model to analyze the structure of NBA teams and dive deeper into games of interest.
[ "['Iasonas Nikolaou' 'Konstantinos Pelechrinis' 'Evimaria Terzi']" ]
null
null
2402.10248
null
null
http://arxiv.org/pdf/2402.10248v1
2024-02-15T11:09:22Z
2024-02-15T11:09:22Z
A Data-Driven Supervised Machine Learning Approach to Estimating Global Ambient Air Pollution Concentrations With Associated Prediction Intervals
Global ambient air pollution, a transboundary challenge, is typically addressed through interventions relying on data from spatially sparse and heterogeneously placed monitoring stations. These stations often encounter temporal data gaps due to issues such as power outages. In response, we have developed a scalable, data-driven, supervised machine learning framework. This model is designed to impute missing temporal and spatial measurements, thereby generating a comprehensive dataset for pollutants including NO$_2$, O$_3$, PM$_{10}$, PM$_{2.5}$, and SO$_2$. The dataset, with a fine granularity of 0.25$^{\circ}$ at hourly intervals and accompanied by prediction intervals for each estimate, caters to a wide range of stakeholders relying on outdoor air pollution data for downstream assessments. This enables more detailed studies. Additionally, the model's performance across various geographical locations is examined, providing insights and recommendations for strategic placement of future monitoring stations to further enhance the model's accuracy.
[ "['Liam J Berrisford' 'Hugo Barbosa' 'Ronaldo Menezes']" ]
null
null
2402.10251
null
null
http://arxiv.org/pdf/2402.10251v4
2024-03-28T13:55:31Z
2024-02-15T16:04:11Z
Brant-2: Foundation Model for Brain Signals
Foundational models benefit from pre-training on large amounts of unlabeled data and enable strong performance in a wide variety of applications with a small amount of labeled data. Such models can be particularly effective in analyzing brain signals, as this field encompasses numerous application scenarios, and it is costly to perform large-scale annotation. In this work, we present the largest foundation model in brain signals, Brant-2. Compared to Brant, a foundation model designed for intracranial neural signals, Brant-2 not only exhibits robustness towards data variations and modeling scales but also can be applied to a broader range of brain neural data. By experimenting on an extensive range of tasks, we demonstrate that Brant-2 is adaptive to various application scenarios in brain signals. Further analyses reveal the scalability of the Brant-2, validate each component's effectiveness, and showcase our model's ability to maintain performance in scenarios with scarce labels.
[ "['Zhizhang Yuan' 'Daoze Zhang' 'Junru Chen' 'Gefei Gu' 'Yang Yang']" ]
null
null
2402.10252
null
null
http://arxiv.org/pdf/2402.10252v1
2024-02-15T16:16:30Z
2024-02-15T16:16:30Z
Online Control of Linear Systems with Unbounded and Degenerate Noise
This paper investigates the problem of controlling a linear system under possibly unbounded and degenerate noise with unknown cost functions, known as an online control problem. In contrast to the existing work, which assumes the boundedness of noise, we reveal that for convex costs, an $\widetilde{O}(\sqrt{T})$ regret bound can be achieved even for unbounded noise, where $T$ denotes the time horizon. Moreover, when the costs are strongly convex, we establish an $O(\mathrm{poly}(\log T))$ regret bound without the assumption that noise covariance is non-degenerate, which has been required in the literature. The key ingredient in removing the rank assumption on noise is a system transformation associated with the noise covariance. This simultaneously enables the parameter reduction of an online control algorithm.
[ "['Kaito Ito' 'Taira Tsuchiya']" ]
null
null
2402.10254
null
null
http://arxiv.org/pdf/2402.10254v1
2024-02-07T12:28:52Z
2024-02-07T12:28:52Z
Personalized Federated Learning for Statistical Heterogeneity
The popularity of federated learning (FL) is on the rise, along with growing concerns about data privacy in artificial intelligence applications. FL facilitates collaborative multi-party model learning while simultaneously ensuring the preservation of data confidentiality. Nevertheless, the problem of statistical heterogeneity caused by the presence of diverse client data distributions gives rise to certain challenges, such as inadequate personalization and slow convergence. In order to address the above issues, this paper offers a brief summary of the current research progress in the field of personalized federated learning (PFL). It outlines the PFL concept, examines related techniques, and highlights current endeavors. Furthermore, this paper also discusses potential further research and obstacles associated with PFL.
[ "['Muhammad Firdaus' 'Kyung-Hyune Rhee']" ]
null
null
2402.10260
null
null
http://arxiv.org/pdf/2402.10260v1
2024-02-15T18:58:09Z
2024-02-15T18:58:09Z
A StrongREJECT for Empty Jailbreaks
The rise of large language models (LLMs) has drawn attention to the existence of "jailbreaks" that allow the models to be used maliciously. However, there is no standard benchmark for measuring the severity of a jailbreak, leaving authors of jailbreak papers to create their own. We show that these benchmarks often include vague or unanswerable questions and use grading criteria that are biased towards overestimating the misuse potential of low-quality model responses. Some jailbreak techniques make the problem worse by decreasing the quality of model responses even on benign questions: we show that several jailbreaking techniques substantially reduce the zero-shot performance of GPT-4 on MMLU. Jailbreaks can also make it harder to elicit harmful responses from an "uncensored" open-source model. We present a new benchmark, StrongREJECT, which better discriminates between effective and ineffective jailbreaks by using a higher-quality question set and a more accurate response grading algorithm. We show that our new grading scheme better accords with human judgment of response quality and overall jailbreak effectiveness, especially on the sort of low-quality responses that contribute the most to over-estimation of jailbreak performance on existing benchmarks. We release our code and data at https://github.com/alexandrasouly/strongreject.
[ "['Alexandra Souly' 'Qingyuan Lu' 'Dillon Bowen' 'Tu Trinh' 'Elvis Hsieh'\n 'Sana Pandey' 'Pieter Abbeel' 'Justin Svegliato' 'Scott Emmons'\n 'Olivia Watkins' 'Sam Toyer']" ]
null
null
2402.10280
null
null
http://arxiv.org/pdf/2402.10280v1
2024-02-15T19:15:15Z
2024-02-15T19:15:15Z
SusFL: Energy-Aware Federated Learning-based Monitoring for Sustainable Smart Farms
We propose a novel energy-aware federated learning (FL)-based system, namely SusFL, for sustainable smart farming to address the challenge of inconsistent health monitoring due to fluctuating energy levels of solar sensors. This system equips animals, such as cattle, with solar sensors with computational capabilities, including Raspberry Pis, to train a local deep-learning model on health data. These sensors periodically update Long Range (LoRa) gateways, forming a wireless sensor network (WSN) to detect diseases like mastitis. Our proposed SusFL system incorporates mechanism design, a game theory concept, for intelligent client selection to optimize monitoring quality while minimizing energy use. This strategy ensures the system's sustainability and resilience against adversarial attacks, including data poisoning and privacy threats, that could disrupt FL operations. Through extensive comparative analysis using real-time datasets, we demonstrate that our FL-based monitoring system significantly outperforms existing methods in prediction accuracy, operational efficiency, system reliability (i.e., mean time between failures or MTBF), and social welfare maximization by the mechanism designer. Our findings validate the superiority of our system for effective and sustainable animal health monitoring in smart farms. The experimental results show that SusFL significantly improves system performance, including a 10% reduction in energy consumption, a 15% increase in social welfare, and a 34% rise in Mean Time Between Failures (MTBF), alongside a marginal increase in the global model's prediction accuracy.
[ "['Dian Chen' 'Paul Yang' 'Ing-Ray Chen' 'Dong Sam Ha' 'Jin-Hee Cho']" ]
null
null
2402.10282
null
null
http://arxiv.org/pdf/2402.10282v1
2024-02-15T19:18:47Z
2024-02-15T19:18:47Z
Information Capacity Regret Bounds for Bandits with Mediator Feedback
This work addresses the mediator feedback problem, a bandit game where the decision set consists of a number of policies, each associated with a probability distribution over a common space of outcomes. Upon choosing a policy, the learner observes an outcome sampled from its distribution and incurs the loss assigned to this outcome in the present round. We introduce the policy set capacity as an information-theoretic measure for the complexity of the policy set. Adopting the classical EXP4 algorithm, we provide new regret bounds depending on the policy set capacity in both the adversarial and the stochastic settings. For a selection of policy set families, we prove nearly-matching lower bounds, scaling similarly with the capacity. We also consider the case when the policies' distributions can vary between rounds, thus addressing the related bandits with expert advice problem, for which we improve upon prior results. Additionally, we prove a lower bound showing that exploiting the similarity between the policies is not possible in general under linear bandit feedback. Finally, for a full-information variant, we provide a regret bound scaling with the information radius of the policy set.
[ "['Khaled Eldowa' 'Nicolò Cesa-Bianchi' 'Alberto Maria Metelli'\n 'Marcello Restelli']" ]
null
null
2402.10283
null
null
http://arxiv.org/pdf/2402.10283v1
2024-02-15T19:19:54Z
2024-02-15T19:19:54Z
Backdoor Attack against One-Class Sequential Anomaly Detection Models
Deep anomaly detection on sequential data has garnered significant attention due to its wide range of application scenarios. However, deep learning-based models face a critical security threat: their vulnerability to backdoor attacks. In this paper, we explore compromising deep sequential anomaly detection models by proposing a novel backdoor attack strategy. The attack approach comprises two primary steps: trigger generation and backdoor injection. Trigger generation derives imperceptible triggers by crafting perturbed samples from the benign normal data such that the perturbed samples remain normal. Backdoor injection properly injects the backdoor triggers so as to compromise the model only for samples containing triggers. The experimental results demonstrate the effectiveness of our proposed attack strategy by injecting backdoors into two well-established one-class anomaly detection models.
[ "['He Cheng' 'Shuhan Yuan']" ]
null
null
2402.10289
null
null
http://arxiv.org/pdf/2402.10289v1
2024-02-15T19:37:39Z
2024-02-15T19:37:39Z
Thompson Sampling in Partially Observable Contextual Bandits
Contextual bandits constitute a classical framework for decision-making under uncertainty. In this setting, the goal is to learn the arms of highest reward subject to contextual information, while the unknown reward parameters of each arm need to be learned by experimenting with that specific arm. Accordingly, a fundamental problem is that of balancing exploration (i.e., pulling different arms to learn their parameters), versus exploitation (i.e., pulling the best arms to gain reward). To study this problem, the existing literature mostly considers perfectly observed contexts. However, the setting of partial context observations remains unexplored to date, despite being theoretically more general and practically more versatile. We study bandit policies for learning to select optimal arms based on the data of observations, which are noisy linear functions of the unobserved context vectors. Our theoretical analysis shows that the Thompson sampling policy successfully balances exploration and exploitation. Specifically, we establish the following: (i) regret bounds that grow poly-logarithmically with time, (ii) square-root consistency of parameter estimation, and (iii) scaling of the regret with other quantities including dimensions and number of arms. Extensive numerical experiments with both real and synthetic data are presented as well, corroborating the efficacy of Thompson sampling. To establish the results, we introduce novel martingale techniques and concentration inequalities to address partially observed dependent random variables generated from unspecified distributions, and also leverage problem-dependent information to sharpen probabilistic bounds for time-varying suboptimality gaps. These techniques pave the way towards studying other decision-making problems with contextual information as well as partial observations.
[ "['Hongju Park' 'Mohamad Kazem Shirani Faradonbeh']" ]
null
null
2402.10291
null
null
http://arxiv.org/pdf/2402.10291v2
2024-04-05T03:09:07Z
2024-02-15T19:45:24Z
An Evaluation of Real-time Adaptive Sampling Change Point Detection Algorithm using KCUSUM
Detecting abrupt changes in real-time data streams from scientific simulations presents a challenging task, demanding the deployment of accurate and efficient algorithms. Identifying change points in a live data stream involves continuous scrutiny of incoming observations for deviations in their statistical characteristics, particularly in high-volume data scenarios. Maintaining a balance between sudden change detection and minimizing false alarms is vital. Many existing algorithms for this purpose rely on known probability distributions, limiting their feasibility. In this study, we introduce the Kernel-based Cumulative Sum (KCUSUM) algorithm, a non-parametric extension of the traditional Cumulative Sum (CUSUM) method, which has gained prominence for its efficacy in online change point detection under less restrictive conditions. KCUSUM distinguishes itself by comparing incoming samples directly with reference samples and computes a statistic grounded in the Maximum Mean Discrepancy (MMD) non-parametric framework. This approach extends KCUSUM's applicability to scenarios where only reference samples are available, such as atomic trajectories of proteins in vacuum, facilitating the detection of deviations from the reference sample without prior knowledge of the data's underlying distribution. Furthermore, by harnessing MMD's inherent random-walk structure, we can theoretically analyze KCUSUM's performance across various use cases, including metrics like expected delay and mean runtime to false alarms. Finally, we discuss real-world use cases from scientific simulations such as NWChem CODAR and protein folding data, demonstrating KCUSUM's practical effectiveness in online change point detection.
[ "['Vijayalakshmi Saravanan' 'Perry Siehien' 'Shinjae Yoo'\n 'Hubertus Van Dam' 'Thomas Flynn' 'Christopher Kelly' 'Khaled Z Ibrahim']" ]
null
null
2402.10309
null
null
http://arxiv.org/pdf/2402.10309v2
2024-05-27T20:58:38Z
2024-02-15T20:20:35Z
Discrete Probabilistic Inference as Control in Multi-path Environments
We consider the problem of sampling from a discrete and structured distribution as a sequential decision problem, where the objective is to find a stochastic policy such that objects are sampled at the end of this sequential process proportionally to some predefined reward. While we could use maximum entropy Reinforcement Learning (MaxEnt RL) to solve this problem for some distributions, it has been shown that in general, the distribution over states induced by the optimal policy may be biased in cases where there are multiple ways to generate the same object. To address this issue, Generative Flow Networks (GFlowNets) learn a stochastic policy that samples objects proportionally to their reward by approximately enforcing a conservation of flows across the whole Markov Decision Process (MDP). In this paper, we extend recent methods correcting the reward in order to guarantee that the marginal distribution induced by the optimal MaxEnt RL policy is proportional to the original reward, regardless of the structure of the underlying MDP. We also prove that some flow-matching objectives found in the GFlowNet literature are in fact equivalent to well-established MaxEnt RL algorithms with a corrected reward. Finally, we study empirically the performance of multiple MaxEnt RL and GFlowNet algorithms on multiple problems involving sampling from discrete distributions.
[ "['Tristan Deleu' 'Padideh Nouri' 'Nikolay Malkin' 'Doina Precup'\n 'Yoshua Bengio']" ]
null
null
2402.10310
null
null
http://arxiv.org/pdf/2402.10310v1
2024-02-15T20:21:40Z
2024-02-15T20:21:40Z
Interpretable Generative Adversarial Imitation Learning
Imitation learning methods have demonstrated considerable success in teaching autonomous systems complex tasks through expert demonstrations. However, a limitation of these methods is their lack of interpretability, particularly in understanding the specific task the learning agent aims to accomplish. In this paper, we propose a novel imitation learning method that combines Signal Temporal Logic (STL) inference and control synthesis, enabling the explicit representation of the task as an STL formula. This approach not only provides a clear understanding of the task but also allows for the incorporation of human knowledge and adaptation to new scenarios through manual adjustments of the STL formulae. Additionally, we employ a Generative Adversarial Network (GAN)-inspired training approach for both the inference and the control policy, effectively narrowing the gap between the expert and learned policies. The effectiveness of our algorithm is demonstrated through two case studies, showcasing its practical applicability and adaptability.
[ "['Wenliang Liu' 'Danyang Li' 'Erfan Aasi' 'Roberto Tron' 'Calin Belta']" ]
null
null
2402.10334
null
null
http://arxiv.org/pdf/2402.10334v1
2024-02-15T21:43:56Z
2024-02-15T21:43:56Z
HI-GAN: Hierarchical Inpainting GAN with Auxiliary Inputs for Combined RGB and Depth Inpainting
Inpainting involves filling in missing pixels or areas in an image, a crucial technique employed in Mixed Reality environments for various applications, particularly in Diminished Reality (DR) where content is removed from a user's visual environment. Existing methods rely on digital replacement techniques which necessitate multiple cameras and incur high costs. AR devices and smartphones use ToF depth sensors to capture scene depth maps aligned with RGB images. Despite speed and affordability, ToF cameras create imperfect depth maps with missing pixels. To address the above challenges, we propose Hierarchical Inpainting GAN (HI-GAN), a novel approach comprising three GANs in a hierarchical fashion for RGBD inpainting. EdgeGAN and LabelGAN inpaint masked edge and segmentation label images respectively, while CombinedRGBD-GAN combines their latent representation outputs and performs RGB and Depth inpainting. Edge images and particularly segmentation label images as auxiliary inputs significantly enhance inpainting performance through complementary context and hierarchical optimization. We believe we make the first attempt to incorporate label images into the inpainting process. Unlike previous approaches requiring multiple sequential models and separate outputs, our work operates in an end-to-end manner, training all three models simultaneously and hierarchically. Specifically, EdgeGAN and LabelGAN are first optimized separately and further optimized inside CombinedRGBD-GAN to enhance inpainting quality. Experiments demonstrate that HI-GAN works seamlessly and achieves overall superior performance compared with existing approaches.
[ "['Ankan Dash' 'Jingyi Gu' 'Guiling Wang']" ]
null
null
2402.10339
null
null
http://arxiv.org/pdf/2402.10339v1
2024-02-15T21:57:43Z
2024-02-15T21:57:43Z
What to Do When Your Discrete Optimization Is the Size of a Neural Network?
Oftentimes, machine learning applications using neural networks involve solving discrete optimization problems, such as in pruning, parameter-isolation-based continual learning and training of binary networks. However, these discrete problems are combinatorial in nature and are not amenable to gradient-based optimization. Additionally, classical approaches used in discrete settings do not scale well to large neural networks, forcing scientists and empiricists to rely on alternative methods. Among these, two main distinct sources of top-down information can be used to lead the model to good solutions: (1) extrapolating gradient information from points outside of the solution set, and (2) comparing evaluations between members of a subset of the valid solutions. We take continuation path (CP) methods to represent using purely the former and Monte Carlo (MC) methods to represent the latter, while also noting that some hybrid methods combine the two. The main goal of this work is to compare both approaches. For that purpose, we first overview the two classes while also discussing some of their drawbacks analytically. Then, in the experimental section, we compare their performance, starting with smaller microworld experiments, which allow more fine-grained control of problem variables, and gradually moving towards larger problems, including neural network regression and neural network pruning for image classification, where we additionally compare against magnitude-based pruning.
[ "['Hugo Silva' 'Martha White']" ]
null
null
2402.10342
null
null
http://arxiv.org/pdf/2402.10342v2
2024-07-15T04:19:50Z
2024-02-15T22:11:18Z
Exploration-Driven Policy Optimization in RLHF: Theoretical Insights on Efficient Data Utilization
Reinforcement Learning from Human Feedback (RLHF) has achieved impressive empirical successes while relying on a small amount of human feedback. However, there is limited theoretical justification for this phenomenon. Additionally, most recent studies focus on value-based algorithms despite the recent empirical successes of policy-based algorithms. In this work, we consider an RLHF algorithm based on policy optimization (PO-RLHF). The algorithm is based on the popular Policy Cover-Policy Gradient (PC-PG) algorithm, which assumes knowledge of the reward function. In PO-RLHF, knowledge of the reward function is not assumed, and the algorithm uses trajectory-based comparison feedback to infer the reward function. We provide performance bounds for PO-RLHF with low query complexity, which provides insight into why a small amount of human feedback may be sufficient to achieve good performance with RLHF. A key novelty is a trajectory-level elliptical potential analysis, which bounds the reward estimation error when comparison feedback (rather than numerical reward observation) is given. We provide and analyze algorithms PG-RLHF and NN-PG-RLHF for two settings: linear and neural function approximation, respectively.
[ "['Yihan Du' 'Anna Winnicki' 'Gal Dalal' 'Shie Mannor' 'R. Srikant']" ]
null
null
2402.10350
null
null
http://arxiv.org/pdf/2402.10350v1
2024-02-15T22:43:02Z
2024-02-15T22:43:02Z
Large Language Models for Forecasting and Anomaly Detection: A Systematic Literature Review
This systematic literature review comprehensively examines the application of Large Language Models (LLMs) in forecasting and anomaly detection, highlighting the current state of research, inherent challenges, and prospective future directions. LLMs have demonstrated significant potential in parsing and analyzing extensive datasets to identify patterns, predict future events, and detect anomalous behavior across various domains. However, this review identifies several critical challenges that impede their broader adoption and effectiveness, including the reliance on vast historical datasets, issues with generalizability across different contexts, the phenomenon of model hallucinations, limitations within the models' knowledge boundaries, and the substantial computational resources required. Through detailed analysis, this review discusses potential solutions and strategies to overcome these obstacles, such as integrating multimodal data, advancements in learning methodologies, and emphasizing model explainability and computational efficiency. Moreover, this review outlines critical trends that are likely to shape the evolution of LLMs in these fields, including the push toward real-time processing, the importance of sustainable modeling practices, and the value of interdisciplinary collaboration. Conclusively, this review underscores the transformative impact LLMs could have on forecasting and anomaly detection while emphasizing the need for continuous innovation, ethical considerations, and practical solutions to realize their full potential.
[ "['Jing Su' 'Chufeng Jiang' 'Xin Jin' 'Yuxin Qiao' 'Tingsong Xiao'\n 'Hongda Ma' 'Rong Wei' 'Zhi Jing' 'Jiajun Xu' 'Junhong Lin']" ]
null
null
2402.10353
null
null
http://arxiv.org/pdf/2402.10353v1
2024-02-15T22:54:24Z
2024-02-15T22:54:24Z
Prompt-Based Bias Calibration for Better Zero/Few-Shot Learning of Language Models
Prompt learning is susceptible to intrinsic bias present in pre-trained language models (LMs), resulting in sub-optimal performance of prompt-based zero/few-shot learning. In this work, we propose a null-input prompting method to calibrate intrinsic bias encoded in pre-trained LMs. Different from prior efforts that address intrinsic bias primarily for social fairness and often involve excessive computational cost, our objective is to explore enhancing LMs' performance in downstream zero/few-shot learning while emphasizing the efficiency of intrinsic bias calibration. Specifically, we leverage a diverse set of auto-selected null-meaning inputs generated from GPT-4 to prompt pre-trained LMs for intrinsic bias probing. Utilizing the bias-reflected probability distribution, we formulate a distribution disparity loss for bias calibration, where we exclusively update bias parameters (0.1% of total parameters) of LMs towards equal probability distribution. Experimental results show that the calibration promotes an equitable starting point for LMs while preserving language modeling abilities. Across a wide range of datasets, including sentiment analysis and topic classification, our method significantly improves zero/few-shot learning performance of LMs for both in-context learning and prompt-based fine-tuning (on average 9% and 2%, respectively).
[ "['Kang He' 'Yinghan Long' 'Kaushik Roy']" ]
null
null
2402.10357
null
null
http://arxiv.org/pdf/2402.10357v1
2024-02-15T22:59:14Z
2024-02-15T22:59:14Z
Efficient Sampling on Riemannian Manifolds via Langevin MCMC
We study the task of efficiently sampling from a Gibbs distribution $d\pi^* = e^{-h}\, d\mathrm{vol}_g$ over a Riemannian manifold $M$ via (geometric) Langevin MCMC; this algorithm involves computing exponential maps in random Gaussian directions and is efficiently implementable in practice. The key to our analysis of Langevin MCMC is a bound on the discretization error of the geometric Euler-Maruyama scheme, assuming $\nabla h$ is Lipschitz and $M$ has bounded sectional curvature. Our error bound matches the error of Euclidean Euler-Maruyama in terms of its stepsize dependence. Combined with a contraction guarantee for the geometric Langevin Diffusion under Kendall-Cranston coupling, we prove that the Langevin MCMC iterates lie within $\epsilon$-Wasserstein distance of $\pi^*$ after $\tilde{O}(\epsilon^{-2})$ steps, which matches the iteration complexity for Euclidean Langevin MCMC. Our results apply in general settings where $h$ can be nonconvex and $M$ can have negative Ricci curvature. Under additional assumptions that the Riemannian curvature tensor has bounded derivatives, and that $\pi^*$ satisfies a $CD(\cdot,\infty)$ condition, we analyze the stochastic gradient version of Langevin MCMC, and bound its iteration complexity by $\tilde{O}(\epsilon^{-2})$ as well.
[ "['Xiang Cheng' 'Jingzhao Zhang' 'Suvrit Sra']" ]
null
null
2402.10359
null
null
http://arxiv.org/abs/2402.10359v2
2024-03-16T19:56:04Z
2024-02-15T23:09:42Z
Can we Soft Prompt LLMs for Graph Learning Tasks?
Graph plays an important role in representing complex relationships in real-world applications such as social networks, biological data and citation networks. In recent years, Large Language Models (LLMs) have achieved tremendous success in various domains, which makes applying LLMs to graphs particularly appealing. However, directly applying LLMs to graph modalities presents unique challenges due to the discrepancy and mismatch between the graph and text modalities. Hence, to further investigate LLMs' potential for comprehending graph information, we introduce GraphPrompter, a novel framework designed to align graph information with LLMs via soft prompts. Specifically, GraphPrompter consists of two main components: a graph neural network to encode complex graph information and an LLM that effectively processes textual information. Comprehensive experiments on various benchmark datasets under node classification and link prediction tasks demonstrate the effectiveness of our proposed method. The GraphPrompter framework unveils the substantial capabilities of LLMs as predictors in graph-related tasks, enabling researchers to utilize LLMs across a spectrum of real-world graph scenarios more effectively.
[ "['Zheyuan Liu' 'Xiaoxin He' 'Yijun Tian' 'Nitesh V. Chawla']" ]
null
null
2402.10360
null
null
http://arxiv.org/pdf/2402.10360v2
2024-06-03T22:23:32Z
2024-02-15T23:10:45Z
Transductive Sample Complexities Are Compact
We demonstrate a compactness result holding broadly across supervised learning with a general class of loss functions: Any hypothesis class $H$ is learnable with transductive sample complexity $m$ precisely when all of its finite projections are learnable with sample complexity $m$. We prove that this exact form of compactness holds for realizable and agnostic learning with respect to any proper metric loss function (e.g., any norm on $\mathbb{R}^d$) and any continuous loss on a compact space (e.g., cross-entropy, squared loss). For realizable learning with improper metric losses, we show that exact compactness of sample complexity can fail, and provide matching upper and lower bounds of a factor of 2 on the extent to which such sample complexities can differ. We conjecture that larger gaps are possible for the agnostic case. Furthermore, invoking the equivalence between sample complexities in the PAC and transductive models (up to lower order factors, in the realizable case) permits us to directly port our results to the PAC model, revealing an almost-exact form of compactness holding broadly in PAC learning.
[ "['Julian Asilis' 'Siddartha Devic' 'Shaddin Dughmi' 'Vatsal Sharan'\n 'Shang-Hua Teng']" ]
null
null
2402.10373
null
null
http://arxiv.org/pdf/2402.10373v2
2024-06-09T15:19:09Z
2024-02-15T23:39:04Z
BioMistral: A Collection of Open-Source Pretrained Large Language Models for Medical Domains
Large Language Models (LLMs) have demonstrated remarkable versatility in recent years, offering potential applications across specialized domains such as healthcare and medicine. Despite the availability of various open-source LLMs tailored for health contexts, adapting general-purpose LLMs to the medical domain presents significant challenges. In this paper, we introduce BioMistral, an open-source LLM tailored for the biomedical domain, utilizing Mistral as its foundation model and further pre-trained on PubMed Central. We conduct a comprehensive evaluation of BioMistral on a benchmark comprising 10 established medical question-answering (QA) tasks in English. We also explore lightweight models obtained through quantization and model merging approaches. Our results demonstrate BioMistral's superior performance compared to existing open-source medical models and its competitive edge against proprietary counterparts. Finally, to address the limited availability of data beyond English and to assess the multilingual generalization of medical LLMs, we automatically translated and evaluated this benchmark into 7 other languages. This marks the first large-scale multilingual evaluation of LLMs in the medical domain. Datasets, multilingual evaluation benchmarks, scripts, and all the models obtained during our experiments are freely released.
[ "['Yanis Labrak' 'Adrien Bazoge' 'Emmanuel Morin' 'Pierre-Antoine Gourraud'\n 'Mickael Rouvier' 'Richard Dufour']" ]
null
null
2402.10374
null
null
http://arxiv.org/pdf/2402.10374v2
2024-07-09T14:16:53Z
2024-02-15T23:43:53Z
Revisiting Experience Replayable Conditions
Experience replay (ER) used in (deep) reinforcement learning is considered to be applicable only to off-policy algorithms. However, there have been some cases in which ER has been applied to on-policy algorithms, suggesting that off-policyness might be a sufficient condition for applying ER. This paper reconsiders stricter "experience replayable conditions" (ERC) and proposes a way of modifying existing algorithms to satisfy ERC. In light of this, it is postulated that the instability of policy improvements represents a pivotal factor in ERC. The instability factors are revealed from the viewpoint of metric learning as i) repulsive forces from negative samples and ii) replays of inappropriate experiences. Accordingly, the corresponding stabilization tricks are derived. As a result, it is confirmed through numerical simulations that the proposed stabilization tricks make ER applicable to an advantage actor-critic, an on-policy algorithm. Moreover, its learning performance is comparable to that of a soft actor-critic, a state-of-the-art off-policy algorithm.
[ "['Taisuke Kobayashi']" ]
null
null
2402.10376
null
null
http://arxiv.org/pdf/2402.10376v1
2024-02-16T00:04:36Z
2024-02-16T00:04:36Z
Interpreting CLIP with Sparse Linear Concept Embeddings (SpLiCE)
CLIP embeddings have demonstrated remarkable performance across a wide range of computer vision tasks. However, these high-dimensional, dense vector representations are not easily interpretable, restricting their usefulness in downstream applications that require transparency. In this work, we empirically show that CLIP's latent space is highly structured, and consequently that CLIP representations can be decomposed into their underlying semantic components. We leverage this understanding to propose a novel method, Sparse Linear Concept Embeddings (SpLiCE), for transforming CLIP representations into sparse linear combinations of human-interpretable concepts. Distinct from previous work, SpLiCE does not require concept labels and can be applied post hoc. Through extensive experimentation with multiple real-world datasets, we validate that the representations output by SpLiCE can explain and even replace traditional dense CLIP representations, maintaining equivalent downstream performance while significantly improving their interpretability. We also demonstrate several use cases of SpLiCE representations including detecting spurious correlations, model editing, and quantifying semantic shifts in datasets.
[ "['Usha Bhalla' 'Alex Oesterling' 'Suraj Srinivas' 'Flavio P. Calmon'\n 'Himabindu Lakkaraju']" ]
null
null
2402.10379
null
null
http://arxiv.org/pdf/2402.10379v2
2024-05-27T19:54:44Z
2024-02-16T00:10:26Z
DataDreamer: A Tool for Synthetic Data Generation and Reproducible LLM Workflows
Large language models (LLMs) have become a dominant and important tool for NLP researchers in a wide range of tasks. Today, many researchers use LLMs in synthetic data generation, task evaluation, fine-tuning, distillation, and other model-in-the-loop research workflows. However, challenges arise when using these models that stem from their scale, their closed source nature, and the lack of standardized tooling for these new and emerging workflows. The rapid rise to prominence of these models and these unique challenges has had immediate adverse impacts on open science and on the reproducibility of work that uses them. In this paper, we introduce DataDreamer, an open source Python library that allows researchers to write simple code to implement powerful LLM workflows. DataDreamer also helps researchers adhere to best practices that we propose to encourage open science and reproducibility. The library and documentation are available at https://github.com/datadreamer-dev/DataDreamer .
[ "['Ajay Patel' 'Colin Raffel' 'Chris Callison-Burch']" ]
null
null
2402.10380
null
null
http://arxiv.org/pdf/2402.10380v1
2024-02-16T00:25:24Z
2024-02-16T00:25:24Z
Subgraph-level Universal Prompt Tuning
In the evolving landscape of machine learning, the adaptation of pre-trained models through prompt tuning has become increasingly prominent. This trend is particularly observable in the graph domain, where diverse pre-training strategies present unique challenges in developing effective prompt-based tuning methods for graph neural networks. Previous approaches have been limited, focusing on specialized prompting functions tailored to models with edge prediction pre-training tasks. These methods, however, suffer from a lack of generalizability across different pre-training strategies. Recently, a simple prompt tuning method has been designed for any pre-training strategy, functioning within the input graph's feature space. This allows it to theoretically emulate any type of prompting function, thereby significantly increasing its versatility for a range of downstream applications. Nevertheless, the capacity of such simple prompts to fully grasp the complex contexts found in graphs remains an open question, necessitating further investigation. Addressing this challenge, our work introduces the Subgraph-level Universal Prompt Tuning (SUPT) approach, focusing on the detailed context within subgraphs. In SUPT, prompt features are assigned at the subgraph-level, preserving the method's universal capability. This requires far fewer tuning parameters than fine-tuning-based methods, outperforming them in 42 out of 45 full-shot scenario experiments with an average improvement of over 2.5%. In few-shot scenarios, it excels in 41 out of 45 experiments, achieving an average performance increase of more than 6.6%.
[ "['Junhyun Lee' 'Wooseong Yang' 'Jaewoo Kang']" ]
null
null
2402.10387
null
null
http://arxiv.org/pdf/2402.10387v1
2024-02-16T00:48:20Z
2024-02-16T00:48:20Z
MFBind: a Multi-Fidelity Approach for Evaluating Drug Compounds in Practical Generative Modeling
Current generative models for drug discovery primarily use molecular docking to evaluate the quality of generated compounds. However, such models are often not useful in practice because even compounds with high docking scores do not consistently show experimental activity. More accurate methods for activity prediction exist, such as molecular dynamics based binding free energy calculations, but they are too computationally expensive to use in a generative model. We propose a multi-fidelity approach, Multi-Fidelity Bind (MFBind), to achieve the optimal trade-off between accuracy and computational cost. MFBind integrates docking and binding free energy simulators to train a multi-fidelity deep surrogate model with active learning. Our deep surrogate model utilizes a pretraining technique and linear prediction heads to efficiently fit small amounts of high-fidelity data. We perform extensive experiments and show that MFBind (1) outperforms other state-of-the-art single and multi-fidelity baselines in surrogate modeling, and (2) boosts the performance of generative models with markedly higher quality compounds.
[ "['Peter Eckmann' 'Dongxia Wu' 'Germano Heinzelmann' 'Michael K Gilson'\n 'Rose Yu']" ]