categories | doi | id | year | venue | link | updated | published | title | abstract | authors |
string | string | string | float64 | string | string | string | string | string | string | sequence |
---|---|---|---|---|---|---|---|---|---|---|
null | null | 2405.20397 | null | null | http://arxiv.org/pdf/2405.20397v1 | 2024-05-30T18:06:14Z | 2024-05-30T18:06:14Z | Explainable Data-driven Modeling of Adsorption Energy in Heterogeneous
Catalysis | The increasing popularity of machine learning (ML) in catalysis has spurred interest in leveraging these techniques to enhance catalyst design. Our study aims to bridge the gap between physics-based studies and data-driven methodologies by integrating ML techniques with eXplainable AI (XAI). Specifically, we employ two XAI techniques: Post-hoc XAI analysis and Symbolic Regression. These techniques help us unravel the correlation between adsorption energy and the properties of the adsorbate-catalyst system. Leveraging a large dataset such as the Open Catalyst Dataset (OC20), we employ a combination of shallow ML techniques and XAI methodologies. Our investigation involves utilizing multiple shallow machine learning techniques to predict adsorption energy, followed by post-hoc analysis for feature importance, inter-feature correlations, and the influence of various feature values on the prediction of adsorption energy. The post-hoc analysis reveals that adsorbate properties exert a greater influence than catalyst properties in our dataset. The top five features based on higher Shapley values are adsorbate electronegativity, the number of adsorbate atoms, catalyst electronegativity, effective coordination number, and the sum of atomic numbers of the adsorbate molecule. There is a positive correlation of both catalyst and adsorbate electronegativity with the predicted adsorption energy. Additionally, symbolic regression yields results consistent with SHAP analysis. It deduces a mathematical relationship indicating that the square of the catalyst electronegativity is directly proportional to the adsorption energy. These consistent correlations resemble those derived from physics-based equations in previous research. Our work establishes a robust framework that integrates ML techniques with XAI, leveraging large datasets like OC20 to enhance catalyst design through model explainability. | [
"['Tirtha Vinchurkar' 'Janghoon Ock' 'Amir Barati Farimani']"
] |
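
The entry above pairs shallow regressors with post-hoc SHAP analysis to rank adsorbate/catalyst descriptors. A minimal sketch of that generic pipeline follows; the feature names and data are synthetic stand-ins for illustration, not the OC20 features or the paper's models.

```python
# Hypothetical post-hoc XAI step: fit a shallow model on tabular descriptors,
# then rank features by mean absolute Shapley value. Data are synthetic.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
features = ["adsorbate_electronegativity", "n_adsorbate_atoms",
            "catalyst_electronegativity", "effective_coord_number"]
X = rng.normal(size=(500, len(features)))
y = 0.8 * X[:, 0] ** 2 + 0.5 * X[:, 2] + rng.normal(scale=0.1, size=500)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
shap_values = shap.TreeExplainer(model).shap_values(X)

# Global importance: mean |SHAP| per feature, largest first.
for name, imp in sorted(zip(features, np.abs(shap_values).mean(axis=0)),
                        key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
```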
null | null | 2405.20400 | null | null | http://arxiv.org/pdf/2405.20400v1 | 2024-05-30T18:10:02Z | 2024-05-30T18:10:02Z | Fast leave-one-cluster-out cross-validation by clustered Network
Information Criteria (NICc) | This paper introduces a clustered estimator of the Network Information Criterion (NICc) to approximate leave-one-cluster-out cross-validated deviance, which can be used as an alternative to cluster-based cross-validation when modeling clustered data. Stone proved that the Akaike Information Criterion (AIC) is asymptotically equivalent to leave-one-observation-out cross-validation if the parametric model is true. Ripley pointed out that the Network Information Criterion (NIC) derived in Stone's proof is a better approximation to leave-one-observation-out cross-validation when the model is not true. For clustered data, we derived a clustered estimator of NIC, referred to as NICc, by substituting the Fisher information matrix in NIC with an estimator that adjusts for clustering. This adjustment imposes a larger penalty in NICc than the unclustered estimator of NIC when modeling clustered data, thereby preventing overfitting more effectively. In a simulation study and an empirical example, we used linear and logistic regression to model clustered data with Gaussian or binomial responses, respectively. We showed that NICc is a better approximation to leave-one-cluster-out deviance and prevents overfitting more effectively than AIC and the Bayesian Information Criterion (BIC). NICc leads to more accurate model selection, as determined by cluster-based cross-validation, compared to AIC and BIC. | [
"['Jiaxing Qiu' 'Douglas E. Lake' 'Teague R. Henry']"
] |
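
NICc in the entry above approximates leave-one-cluster-out cross-validated deviance. As a point of reference, here is a minimal sketch that computes that target quantity directly for a Gaussian linear model, with deviance taken as the sum of squared prediction errors (up to additive constants); the clustered data are synthetic.

```python
# Leave-one-cluster-out deviance for a Gaussian linear model: the quantity
# NICc is designed to approximate without refitting per cluster.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(1)
n_clusters, per_cluster = 20, 15
groups = np.repeat(np.arange(n_clusters), per_cluster)
cluster_effect = rng.normal(size=n_clusters)[groups]   # within-cluster correlation
X = rng.normal(size=(n_clusters * per_cluster, 3))
y = X @ np.array([1.0, -0.5, 0.2]) + cluster_effect + rng.normal(size=len(groups))

deviance = 0.0
for train, test in LeaveOneGroupOut().split(X, y, groups):
    fit = LinearRegression().fit(X[train], y[train])
    deviance += np.sum((y[test] - fit.predict(X[test])) ** 2)
print(f"leave-one-cluster-out deviance: {deviance:.1f}")
```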
null | null | 2405.20404 | null | null | http://arxiv.org/pdf/2405.20404v1 | 2024-05-30T18:16:41Z | 2024-05-30T18:16:41Z | XPrompt:Explaining Large Language Model's Generation via Joint Prompt
Attribution | Large Language Models (LLMs) have demonstrated impressive performances in complex text generation tasks. However, the contribution of the input prompt to the generated content still remains obscure to humans, underscoring the necessity of elucidating and explaining the causality between input and output pairs. Existing works for providing prompt-specific explanation often confine model output to be classification or next-word prediction. Few initial attempts aiming to explain the entire language generation often treat input prompt texts independently, ignoring their combinatorial effects on the follow-up generation. In this study, we introduce a counterfactual explanation framework based on joint prompt attribution, XPrompt, which aims to explain how a few prompt texts collaboratively influences the LLM's complete generation. Particularly, we formulate the task of prompt attribution for generation interpretation as a combinatorial optimization problem, and introduce a probabilistic algorithm to search for the casual input combination in the discrete space. We define and utilize multiple metrics to evaluate the produced explanations, demonstrating both faithfulness and efficiency of our framework. | [
"['Yurui Chang' 'Bochuan Cao' 'Yujia Wang' 'Jinghui Chen' 'Lu Lin']"
] |
null | null | 2405.20405 | null | null | http://arxiv.org/pdf/2405.20405v1 | 2024-05-30T18:20:35Z | 2024-05-30T18:20:35Z | Private Mean Estimation with Person-Level Differential Privacy | We study differentially private (DP) mean estimation in the case where each person holds multiple samples. Commonly referred to as the "user-level" setting, DP here requires the usual notion of distributional stability when all of a person's datapoints can be modified. Informally, if $n$ people each have $m$ samples from an unknown $d$-dimensional distribution with bounded $k$-th moments, we show that \[ n = \tilde{\Theta}\left( \frac{d}{\alpha^2 m} + \frac{d}{\alpha m^{1/2} \varepsilon} + \frac{d}{\alpha^{k/(k-1)} m \varepsilon} + \frac{d}{\varepsilon} \right) \] people are necessary and sufficient to estimate the mean up to distance $\alpha$ in $\ell_2$-norm under $\varepsilon$-differential privacy (and its common relaxations). In the multivariate setting, we give computationally efficient algorithms under approximate DP (with slightly degraded sample complexity) and computationally inefficient algorithms under pure DP, and our nearly matching lower bounds hold for the most permissive case of approximate DP. Our computationally efficient estimators are based on the well known noisy-clipped-mean approach, but the analysis for our setting requires new bounds on the tails of sums of independent, vector-valued, bounded-moments random variables, and a new argument for bounding the bias introduced by clipping. | [
"['Sushant Agarwal' 'Gautam Kamath' 'Mahbod Majid' 'Argyris Mouzakis'\n 'Rose Silver' 'Jonathan Ullman']"
] |
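
The entry above builds its efficient estimators on the noisy-clipped-mean approach. A minimal user-level sketch follows; the clipping radius and noise multiplier are illustrative placeholders, not the paper's calibration to a target privacy budget.

```python
# Sketch of a user-level noisy clipped mean: average each person's m samples,
# clip the per-user means to an l2 ball of radius C, then release a
# Gaussian-noised average. C and sigma below are illustrative only.
import numpy as np

def user_level_noisy_clipped_mean(data, C, sigma, rng):
    """data: (n_users, m_samples, d). sigma: noise multiplier; calibrating it
    to a specific (eps, delta) guarantee is omitted here."""
    n = data.shape[0]
    user_means = data.mean(axis=1)                         # (n, d)
    norms = np.linalg.norm(user_means, axis=1, keepdims=True)
    clipped = user_means * np.minimum(1.0, C / np.maximum(norms, 1e-12))
    sensitivity = 2.0 * C / n        # l2 sensitivity of the clipped average
    noise = rng.normal(scale=sigma * sensitivity, size=user_means.shape[1])
    return clipped.mean(axis=0) + noise

rng = np.random.default_rng(0)
data = rng.normal(loc=0.3, size=(1000, 50, 5))             # n=1000, m=50, d=5
print(user_level_noisy_clipped_mean(data, C=2.0, sigma=1.0, rng=rng))
```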
null | null | 2405.20407 | null | null | http://arxiv.org/pdf/2405.20407v2 | 2024-06-03T16:11:03Z | 2024-05-30T18:25:19Z | Convolutional L2LFlows: Generating Accurate Showers in Highly Granular
Calorimeters Using Convolutional Normalizing Flows | In the quest to build generative surrogate models as computationally efficient alternatives to rule-based simulations, the quality of the generated samples remains a crucial frontier. So far, normalizing flows have been among the models with the best fidelity. However, as the latent space in such models is required to have the same dimensionality as the data space, scaling up normalizing flows to high dimensional datasets is not straightforward. The prior L2LFlows approach successfully used a series of separate normalizing flows and a sequence of conditioning steps to circumvent this problem. In this work, we extend L2LFlows to simulate showers with a 9-times larger profile in the lateral direction. To achieve this, we introduce convolutional layers and U-Net-type connections, move from masked autoregressive flows to coupling layers, and demonstrate the successful modelling of showers in the ILD Electromagnetic Calorimeter as well as Dataset 3 from the public CaloChallenge dataset. | [
"['Thorsten Buss' 'Frank Gaede' 'Gregor Kasieczka' 'Claudius Krause'\n 'David Shih']"
] |
null | null | 2405.20412 | null | null | http://arxiv.org/abs/2405.20412v1 | 2024-05-30T18:37:21Z | 2024-05-30T18:37:21Z | Audio2Rig: Artist-oriented deep learning tool for facial animation | Creating realistic or stylized facial and lip sync animation is a tedious task. It requires a lot of time and skill to sync the lips with audio and convey the right emotion to the character's face. To allow animators to spend more time on the artistic and creative part of the animation, we present Audio2Rig: a new deep learning based tool leveraging previously animated sequences of a show, to generate facial and lip sync rig animation from an audio file. Based in Maya, it learns from any production rig without any adjustment and generates high quality and stylized animations which mimic the style of the show. Audio2Rig fits in the animator workflow: since it generates keys on the rig controllers, the animation can be easily retaken. The method is based on 3 neural network modules which can learn an arbitrary number of controllers. Hence, different configurations can be created for specific parts of the face (such as the tongue, lips or eyes). With Audio2Rig, animators can also pick different emotions and adjust their intensities to experiment or customize the output, and have high level controls on the keyframes setting. Our method shows excellent results, generating fine animation details while respecting the show style. Finally, as the training relies on the studio data and is done internally, it ensures data privacy and prevents copyright infringement. | [
"['Bastien Arcelin' 'Nicolas Chaverou']"
] |
null | null | 2405.20413 | null | null | http://arxiv.org/pdf/2405.20413v1 | 2024-05-30T18:38:36Z | 2024-05-30T18:38:36Z | Jailbreaking Large Language Models Against Moderation Guardrails via
Cipher Characters | Large Language Models (LLMs) are typically harmless but remain vulnerable to carefully crafted prompts known as "jailbreaks", which can bypass protective measures and induce harmful behavior. Recent advancements in LLMs have incorporated moderation guardrails that can filter outputs, which trigger processing errors for certain malicious questions. Existing red-teaming benchmarks often neglect to include questions that trigger moderation guardrails, making it difficult to evaluate jailbreak effectiveness. To address this issue, we introduce JAMBench, a harmful behavior benchmark designed to trigger and evaluate moderation guardrails. JAMBench involves 160 manually crafted instructions covering four major risk categories at multiple severity levels. Furthermore, we propose a jailbreak method, JAM (Jailbreak Against Moderation), designed to attack moderation guardrails using jailbreak prefixes to bypass input-level filters and a fine-tuned shadow model functionally equivalent to the guardrail model to generate cipher characters to bypass output-level filters. Our extensive experiments on four LLMs demonstrate that JAM achieves higher jailbreak success ($\sim$ $\times$ 19.88) and lower filtered-out rates ($\sim$ $\times$ 1/6) than baselines. | [
"['Haibo Jin' 'Andy Zhou' 'Joe D. Menke' 'Haohan Wang']"
] |
null | null | 2405.20414 | null | null | http://arxiv.org/abs/2405.20414v1 | 2024-05-30T18:40:27Z | 2024-05-30T18:40:27Z | The Impact of Ontology on the Prediction of Cardiovascular Disease
Compared to Machine Learning Algorithms | Cardiovascular disease is one of the chronic diseases that is on the rise. The complications occur when cardiovascular disease is not discovered early and correctly diagnosed at the right time. Various machine learning approaches, including ontology-based Machine Learning techniques, have lately played an essential role in medical science by building an automated system that can identify heart illness. This paper compares and reviews the most prominent machine learning algorithms, as well as ontology-based Machine Learning classification. Random Forest, Logistic regression, Decision Tree, Naive Bayes, k-Nearest Neighbours, Artificial Neural Network, and Support Vector Machine were among the classification methods explored. The dataset used consists of 70000 instances and can be downloaded from the Kaggle website. The findings are assessed using performance measures generated from the confusion matrix, such as F-Measure, Accuracy, Recall, and Precision. The results showed that the ontology outperformed all the machine learning algorithms. | [
"['Hakim El Massari' 'Noreddine Gherabi' 'Sajida Mhammedi' 'Hamza Ghandi'\n 'Mohamed Bahaj' 'Muhammad Raza Naqvi']"
] |
null | null | 2405.20419 | null | null | http://arxiv.org/pdf/2405.20419v1 | 2024-05-30T18:53:53Z | 2024-05-30T18:53:53Z | Enhancing Antibiotic Stewardship using a Natural Language Approach for
Better Feature Representation | The rapid emergence of antibiotic-resistant bacteria is recognized as a global healthcare crisis, undermining the efficacy of life-saving antibiotics. This crisis is driven by the improper and overuse of antibiotics, which escalates bacterial resistance. In response, this study explores the use of clinical decision support systems, enhanced through the integration of electronic health records (EHRs), to improve antibiotic stewardship. However, EHR systems present numerous data-level challenges, complicating the effective synthesis and utilization of data. In this work, we transform EHR data into a serialized textual representation and employ pretrained foundation models to demonstrate how this enhanced feature representation can aid in antibiotic susceptibility predictions. Our results suggest that this text representation, combined with foundation models, provides a valuable tool to increase interpretability and support antibiotic stewardship efforts. | [
"['Simon A. Lee' 'Trevor Brokowski' 'Jeffrey N. Chiang']"
] |
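
The entry above serializes EHR data into text before embedding it with a pretrained foundation model. A hypothetical sketch of such a serializer follows; the field names and the template are invented for illustration and are not the paper's actual schema.

```python
# Hypothetical EHR-to-text serializer: flatten one tabular record into a
# sentence-like string suitable for a pretrained language model. The schema
# below is invented for illustration.
def serialize_ehr(record: dict) -> str:
    parts = [f"Patient age {record['age']}, sex {record['sex']}."]
    if record.get("prior_antibiotics"):
        parts.append("Prior antibiotics: " + ", ".join(record["prior_antibiotics"]) + ".")
    parts.append(f"Culture source: {record['culture_source']}.")
    parts.append("Labs: " + "; ".join(f"{k} = {v}" for k, v in record["labs"].items()) + ".")
    return " ".join(parts)

example = {
    "age": 67, "sex": "F", "prior_antibiotics": ["ciprofloxacin"],
    "culture_source": "urine", "labs": {"WBC": 13.2, "creatinine": 1.4},
}
print(serialize_ehr(example))   # text ready to embed with a foundation model
```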
null | null | 2405.20420 | null | null | http://arxiv.org/pdf/2405.20420v1 | 2024-05-30T18:55:50Z | 2024-05-30T18:55:50Z | Back to the Basics on Predicting Transfer Performance | In the evolving landscape of deep learning, selecting the best pre-trained models from a growing number of choices is a challenge. Transferability scorers promise to alleviate this problem, but their recent proliferation, ironically, poses the challenge of their own assessment. In this work, we propose both robust benchmark guidelines for transferability scorers, and a well-founded technique to combine multiple scorers, which we show consistently improves their results. We extensively evaluate 13 scorers from the literature across 11 datasets, comprising generalist, fine-grained, and medical imaging datasets. We show that few scorers match the predictive performance of the simple raw metric of models on ImageNet, and that all predictors suffer on medical datasets. Our results highlight the potential of combining different information sources for reliably predicting transferability across varied domains. | [
"['Levy Chaves' 'Eduardo Valle' 'Alceu Bissoto' 'Sandra Avila']"
] |
null | null | 2405.20430 | null | null | http://arxiv.org/pdf/2405.20430v1 | 2024-05-30T19:15:38Z | 2024-05-30T19:15:38Z | Enhancing Performance for Highly Imbalanced Medical Data via Data
Regularization in a Federated Learning Setting | The increased availability of medical data has significantly impacted healthcare by enabling the application of machine/deep learning approaches in various instances. However, medical datasets are usually small and scattered across multiple providers, suffer from high class-imbalance, and are subject to stringent data privacy constraints. In this paper, the application of a data regularization algorithm, suitable for learning under high class-imbalance, in a federated learning setting is proposed. Specifically, the goal of the proposed method is to enhance model performance for cardiovascular disease prediction by tackling the class-imbalance that typically characterizes datasets used for this purpose, as well as by leveraging patient data available in different nodes of a federated ecosystem without compromising their privacy and enabling more resource-sensitive allocation. The method is evaluated across four datasets for cardiovascular disease prediction, which are scattered across different clients, achieving improved performance. Meanwhile, its robustness under various hyperparameter settings, as well as its ability to adapt to different resource allocation scenarios, is verified. | [
"['Georgios Tsoumplekas' 'Ilias Siniosoglou' 'Vasileios Argyriou'\n 'Ioannis D. Moscholios' 'Panagiotis Sarigiannidis']"
] |
null | null | 2405.20431 | null | null | http://arxiv.org/pdf/2405.20431v1 | 2024-05-30T19:21:33Z | 2024-05-30T19:21:33Z | Exploring the Practicality of Federated Learning: A Survey Towards the
Communication Perspective | Federated Learning (FL) is a promising paradigm that offers significant advancements in privacy-preserving, decentralized machine learning by enabling collaborative training of models across distributed devices without centralizing data. However, the practical deployment of FL systems faces a significant bottleneck: the communication overhead caused by frequently exchanging large model updates between numerous devices and a central server. This communication inefficiency can hinder training speed, model performance, and the overall feasibility of real-world FL applications. In this survey, we investigate various strategies and advancements made in communication-efficient FL, highlighting their impact and potential to overcome the communication challenges inherent in FL systems. Specifically, we define measures for communication efficiency, analyze sources of communication inefficiency in FL systems, and provide a taxonomy and comprehensive review of state-of-the-art communication-efficient FL methods. Additionally, we discuss promising future research directions for enhancing the communication efficiency of FL systems. By addressing the communication bottleneck, FL can be effectively applied and enable scalable and practical deployment across diverse applications that require privacy-preserving, decentralized machine learning, such as IoT, healthcare, or finance. | [
"['Khiem Le' 'Nhan Luong-Ha' 'Manh Nguyen-Duc' 'Danh Le-Phuoc' 'Cuong Do'\n 'Kok-Seng Wong']"
] |
null | null | 2405.20435 | null | null | http://arxiv.org/pdf/2405.20435v1 | 2024-05-30T19:26:51Z | 2024-05-30T19:26:51Z | Deep Learning for Computing Convergence Rates of Markov Chains | Convergence rate analysis for general state-space Markov chains is fundamentally important in areas such as Markov chain Monte Carlo and algorithmic analysis (for computing explicit convergence bounds). This problem, however, is notoriously difficult because traditional analytical methods often do not generate practically useful convergence bounds for realistic Markov chains. We propose the Deep Contractive Drift Calculator (DCDC), the first general-purpose sample-based algorithm for bounding the convergence of Markov chains to stationarity in Wasserstein distance. The DCDC has two components. First, inspired by the new convergence analysis framework in (Qu et al., 2023), we introduce the Contractive Drift Equation (CDE), the solution of which leads to an explicit convergence bound. Second, we develop an efficient neural-network-based CDE solver. Equipped with these two components, DCDC solves the CDE and converts the solution into a convergence bound. We analyze the sample complexity of the algorithm and further demonstrate the effectiveness of the DCDC by generating convergence bounds for realistic Markov chains arising from stochastic processing networks as well as constant step-size stochastic optimization. | [
"['Yanlin Qu' 'Jose Blanchet' 'Peter Glynn']"
] |
null | null | 2405.20439 | null | null | http://arxiv.org/pdf/2405.20439v1 | 2024-05-30T19:32:56Z | 2024-05-30T19:32:56Z | Sharpness-Aware Minimization Enhances Feature Quality via Balanced
Learning | Sharpness-Aware Minimization (SAM) has emerged as a promising alternative optimizer to stochastic gradient descent (SGD). The originally-proposed motivation behind SAM was to bias neural networks towards flatter minima that are believed to generalize better. However, recent studies have shown conflicting evidence on the relationship between flatness and generalization, suggesting that flatness does not fully explain SAM's success. Sidestepping this debate, we identify an orthogonal effect of SAM that is beneficial out-of-distribution: we argue that SAM implicitly balances the quality of diverse features. SAM achieves this effect by adaptively suppressing well-learned features, which gives the remaining features an opportunity to be learned. We show that this mechanism is beneficial in datasets that contain redundant or spurious features, where SGD falls for the simplicity bias and would not otherwise learn all available features. Our insights are supported by experiments on real data: we demonstrate that SAM improves the quality of features in datasets containing redundant or spurious features, including CelebA, Waterbirds, CIFAR-MNIST, and DomainBed. | [
"['Jacob Mitchell Springer' 'Vaishnavh Nagarajan' 'Aditi Raghunathan']"
] |
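
For reference, the SAM update discussed in the entry above takes an ascent step to a perturbed weight vector and then descends using the gradient computed there. A minimal PyTorch sketch follows; `rho` and the choice of base optimizer are illustrative, not the paper's settings.

```python
# Minimal sketch of SAM's two-step update: ascend to a sharpness-probing
# perturbation of the weights, then apply that point's gradient to the
# original weights via the base optimizer.
import torch

def sam_step(model, loss_fn, x, y, base_opt, rho=0.05):
    # First pass: gradient at the current weights.
    base_opt.zero_grad()
    loss_fn(model(x), y).backward()
    params = [p for p in model.parameters() if p.grad is not None]
    grad_norm = torch.norm(torch.stack([p.grad.norm() for p in params]))
    eps = []
    with torch.no_grad():
        for p in params:
            e = rho * p.grad / (grad_norm + 1e-12)   # ascent direction
            p.add_(e)
            eps.append(e)
    # Second pass: gradient at the perturbed weights.
    base_opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    with torch.no_grad():
        for p, e in zip(params, eps):
            p.sub_(e)                                # restore original weights
    base_opt.step()                                  # descend with perturbed-point grads
    return loss.item()
```

Per batch, `sam_step(model, loss_fn, x, y, base_opt)` replaces the usual single `backward()`/`step()` pair, at roughly twice the gradient cost.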
null | null | 2405.20445 | null | null | http://arxiv.org/pdf/2405.20445v2 | 2024-06-03T02:08:54Z | 2024-05-30T19:43:29Z | GraphAny: A Foundation Model for Node Classification on Any Graph | Foundation models that can perform inference on any new task without requiring specific training have revolutionized machine learning in vision and language applications. However, applications involving graph-structured data remain a tough nut for foundation models, due to challenges posed by the unique feature and label spaces associated with each graph. Traditional graph ML models such as graph neural networks (GNNs) trained on graphs cannot perform inference on a new graph with feature and label spaces different from the training ones. Furthermore, existing models learn functions specific to the training graph and cannot generalize to new graphs. In this work, we tackle these two challenges with a new foundational architecture for inductive node classification named GraphAny. GraphAny models inference on a new graph as an analytical solution to a LinearGNN, thereby solving the first challenge. To solve the second challenge, we learn attention scores for each node to fuse the predictions of multiple LinearGNNs. Specifically, the attention module is carefully parameterized as a function of the entropy-normalized distance features between multiple LinearGNN predictions to ensure generalization to new graphs. Empirically, GraphAny trained on the Wisconsin dataset with only 120 labeled nodes can effectively generalize to 30 new graphs with an average accuracy of 67.26% in an inductive manner, surpassing GCN and GAT trained in the supervised regime, as well as other inductive baselines. | [
"['Jianan Zhao' 'Hesham Mostafa' 'Mikhail Galkin' 'Michael Bronstein'\n 'Zhaocheng Zhu' 'Jian Tang']"
] |
null | null | 2405.20446 | null | null | http://arxiv.org/pdf/2405.20446v2 | 2024-06-07T09:39:39Z | 2024-05-30T19:46:36Z | Is My Data in Your Retrieval Database? Membership Inference Attacks
Against Retrieval Augmented Generation | Retrieval Augmented Generation (RAG) systems have shown great promise in natural language processing. However, their reliance on data stored in a retrieval database, which may contain proprietary or sensitive information, introduces new privacy concerns. Specifically, an attacker may be able to infer whether a certain text passage appears in the retrieval database by observing the outputs of the RAG system, an attack known as a Membership Inference Attack (MIA). Despite the significance of this threat, MIAs against RAG systems have so far remained under-explored. This study addresses this gap by introducing an efficient and easy-to-use method for conducting MIA against RAG systems. We demonstrate the effectiveness of our attack using two benchmark datasets and multiple generative models, showing that the membership of a document in the retrieval database can be efficiently determined through the creation of an appropriate prompt in both black-box and gray-box settings. Moreover, we introduce an initial defense strategy based on adding instructions to the RAG template, which shows high effectiveness for some datasets and models. Our findings highlight the importance of implementing security countermeasures in deployed RAG systems and developing more advanced defenses to protect the privacy and security of retrieval databases. | [
"['Maya Anderson' 'Guy Amit' 'Abigail Goldsteen']"
] |
null | null | 2405.20447 | null | null | http://arxiv.org/pdf/2405.20447v1 | 2024-05-30T19:46:47Z | 2024-05-30T19:46:47Z | Algorithmic Fairness in Performative Policy Learning: Escaping the
Impossibility of Group Fairness | In many prediction problems, the predictive model affects the distribution of the prediction target. This phenomenon is known as performativity and is often caused by the behavior of individuals with vested interests in the outcome of the predictive model. Although performativity is generally problematic because it manifests as distribution shifts, we develop algorithmic fairness practices that leverage performativity to achieve stronger group fairness guarantees in social classification problems (compared to what is achievable in non-performative settings). In particular, we leverage the policymaker's ability to steer the population to remedy inequities in the long term. A crucial benefit of this approach is that it is possible to resolve the incompatibilities between conflicting group fairness definitions. | [
"['Seamus Somerstep' \"Ya'acov Ritov\" 'Yuekai Sun']"
] |
null | null | 2405.20448 | null | null | http://arxiv.org/pdf/2405.20448v2 | 2024-06-03T14:40:28Z | 2024-05-30T19:47:34Z | Knockout: A simple way to handle missing inputs | Deep learning models can extract predictive and actionable information from complex inputs. The richer the inputs, the better these models usually perform. However, models that leverage rich inputs (e.g., multi-modality) can be difficult to deploy widely, because some inputs may be missing at inference. Current popular solutions to this problem include marginalization, imputation, and training multiple models. Marginalization can obtain calibrated predictions but it is computationally costly and therefore only feasible for low dimensional inputs. Imputation may result in inaccurate predictions because it employs point estimates for missing variables and does not work well for high dimensional inputs (e.g., images). Training multiple models whereby each model takes different subsets of inputs can work well but requires knowing missing input patterns in advance. Furthermore, training and retaining multiple models can be costly. We propose an efficient way to learn both the conditional distribution using full inputs and the marginal distributions. Our method, Knockout, randomly replaces input features with appropriate placeholder values during training. We provide a theoretical justification of Knockout and show that it can be viewed as an implicit marginalization strategy. We evaluate Knockout in a wide range of simulations and real-world datasets and show that it can offer strong empirical performance. | [
"['Minh Nguyen' 'Batuhan K. Karaman' 'Heejong Kim' 'Alan Q. Wang'\n 'Fengbei Liu' 'Mert R. Sabuncu']"
] |
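
Knockout, per the entry above, randomly replaces input features with placeholder values during training so the model implicitly learns marginal predictors. A minimal PyTorch sketch follows; the knockout probability and the constant placeholder are assumptions here, not the paper's prescribed choices.

```python
# Sketch of the Knockout idea: during training, independently replace each
# input feature with a placeholder so the model learns to predict from any
# observed subset. Placeholder value and rate are illustrative assumptions.
import torch

def knockout(x: torch.Tensor, p: float = 0.3, placeholder: float = 0.0) -> torch.Tensor:
    """x: (batch, n_features). Knock out each entry with probability p."""
    mask = torch.rand_like(x) < p
    return torch.where(mask, torch.full_like(x, placeholder), x)

# Training-time usage; at inference, genuinely missing inputs receive the
# same placeholder, so predictions approximate the marginalized model.
x = torch.randn(4, 6)
x_aug = knockout(x)
```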
null | null | 2405.20451 | null | null | http://arxiv.org/pdf/2405.20451v1 | 2024-05-30T19:57:28Z | 2024-05-30T19:57:28Z | Statistical Properties of Robust Satisficing | The Robust Satisficing (RS) model is an emerging approach to robust optimization, offering streamlined procedures and robust generalization across various applications. However, the statistical theory of RS remains unexplored in the literature. This paper fills in the gap by comprehensively analyzing the theoretical properties of the RS model. Notably, the RS structure offers a more straightforward path to deriving statistical guarantees compared to the seminal Distributionally Robust Optimization (DRO), resulting in a richer set of results. In particular, we establish two-sided confidence intervals for the optimal loss without the need to solve a minimax optimization problem explicitly. We further provide finite-sample generalization error bounds for the RS optimizer. Importantly, our results extend to scenarios involving distribution shifts, where discrepancies exist between the sampling and target distributions. Our numerical experiments show that the RS model consistently outperforms the baseline empirical risk minimization in small-sample regimes and under distribution shifts. Furthermore, compared to the DRO model, the RS model exhibits lower sensitivity to hyperparameter tuning, highlighting its practicability for robustness considerations. | [
"['Zhiyi Li' 'Yunbei Xu' 'Ruohan Zhan']"
] |
null | null | 2405.20452 | null | null | http://arxiv.org/pdf/2405.20452v1 | 2024-05-30T19:58:01Z | 2024-05-30T19:58:01Z | Understanding Encoder-Decoder Structures in Machine Learning Using
Information Measures | We present new results to model and understand the role of encoder-decoder design in machine learning (ML) from an information-theoretic angle. We use two main information concepts, information sufficiency (IS) and mutual information loss (MIL), to represent predictive structures in machine learning. Our first main result provides a functional expression that characterizes the class of probabilistic models consistent with an IS encoder-decoder latent predictive structure. This result formally justifies the encoder-decoder forward stages many modern ML architectures adopt to learn latent (compressed) representations for classification. To illustrate IS as a realistic and relevant model assumption, we revisit some known ML concepts and present some interesting new examples: invariant, robust, sparse, and digital models. Furthermore, our IS characterization allows us to tackle the fundamental question of how much performance (predictive expressiveness) could be lost, using the cross entropy risk, when a given encoder-decoder architecture is adopted in a learning setting. Here, our second main result shows that a mutual information loss quantifies the lack of expressiveness attributed to the choice of a (biased) encoder-decoder ML design. Finally, we address the problem of universal cross-entropy learning with an encoder-decoder design where necessary and sufficient conditions are established to meet this requirement. In all these results, Shannon's information measures offer new interpretations and explanations for representation learning. | [
"['Jorge F. Silva' 'Victor Faraggi' 'Camilo Ramirez' 'Alvaro Egana'\n 'Eduardo Pavez']"
] |
null | null | 2405.20456 | null | null | http://arxiv.org/pdf/2405.20456v1 | 2024-05-30T20:10:24Z | 2024-05-30T20:10:24Z | Scaling Laws for the Value of Individual Data Points in Machine Learning | Recent works have shown that machine learning models improve at a predictable rate with the total amount of training data, leading to scaling laws that describe the relationship between error and dataset size. These scaling laws can help design a model's training dataset, but they typically take an aggregate view of the data by only considering the dataset's size. We introduce a new perspective by investigating scaling behavior for the value of individual data points: we find that a data point's contribution to a model's performance shrinks predictably with the size of the dataset in a log-linear manner. Interestingly, there is significant variability in the scaling exponent among different data points, indicating that certain points are more valuable in small datasets while others are relatively more useful as a part of large datasets. We provide learning theory to support our scaling law, and we observe empirically that it holds across diverse model classes. We further propose a maximum likelihood estimator and an amortized estimator to efficiently learn the individualized scaling behaviors from a small number of noisy observations per data point. Using our estimators, we provide insights into factors that influence the scaling behavior of different data points. Finally, we demonstrate applications of the individualized scaling laws to data valuation and data subset selection. Overall, our work represents a first step towards understanding and utilizing scaling properties for the value of individual data points. | [
"['Ian Covert' 'Wenlong Ji' 'Tatsunori Hashimoto' 'James Zou']"
] |
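
The log-linear scaling claimed in the entry above can be checked by fitting a data point's contribution c(n) ≈ a·n^(−b) by least squares in log-log space; a sketch on synthetic noisy observations follows (the data and parameters are illustrative, not the paper's estimators).

```python
# Fit a per-data-point scaling law c(n) ~ a * n**(-b) via ordinary least
# squares on log-transformed values. Observations are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = np.array([50, 100, 200, 400, 800, 1600])
true_a, true_b = 2.0, 0.9
contrib = true_a * n ** (-true_b) * np.exp(rng.normal(scale=0.1, size=n.size))

slope, intercept = np.polyfit(np.log(n), np.log(contrib), deg=1)
a_hat, b_hat = np.exp(intercept), -slope
print(f"estimated contribution ~ {a_hat:.2f} * n^(-{b_hat:.2f})")
```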
null | null | 2405.20465 | null | null | http://arxiv.org/pdf/2405.20465v1 | 2024-05-30T20:26:47Z | 2024-05-30T20:26:47Z | ENTIRe-ID: An Extensive and Diverse Dataset for Person Re-Identification | The growing importance of person reidentification in computer vision has highlighted the need for more extensive and diverse datasets. In response, we introduce the ENTIRe-ID dataset, an extensive collection comprising over 4.45 million images from 37 different cameras in varied environments. This dataset is uniquely designed to tackle the challenges of domain variability and model generalization, areas where existing datasets for person re-identification have fallen short. The ENTIRe-ID dataset stands out for its coverage of a wide array of real-world scenarios, encompassing various lighting conditions, angles of view, and diverse human activities. This design ensures a realistic and robust training platform for ReID models. The ENTIRe-ID dataset is publicly available at https://serdaryildiz.github.io/ENTIRe-ID | [
"['Serdar Yildiz' 'Ahmet Nezih Kasim']"
] |
null | null | 2405.20467 | null | null | http://arxiv.org/pdf/2405.20467v1 | 2024-05-30T20:29:52Z | 2024-05-30T20:29:52Z | Performance of NPG in Countable State-Space Average-Cost RL | We consider policy optimization methods in reinforcement learning settings where the state space is arbitrarily large, or even countably infinite. The motivation arises from control problems in communication networks, matching markets, and other queueing systems. We consider Natural Policy Gradient (NPG), which is a popular algorithm for finite state spaces. Under reasonable assumptions, we derive a performance bound for NPG that is independent of the size of the state space, provided the error in policy evaluation is within a factor of the true value function. We obtain this result by establishing new policy-independent bounds on the solution to Poisson's equation, i.e., the relative value function, and by combining these bounds with previously known connections between MDPs and learning from experts. | [
"['Yashaswini Murthy' 'Isaac Grosof' 'Siva Theja Maguluri' 'R. Srikant']"
] |
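
For a softmax policy class, the NPG update analyzed in the entry above reduces to multiplicative weights over actions. A tabular sketch follows; the Q-values are assumed given, and the paper's average-cost setting and policy-evaluation error are not modeled.

```python
# One tabular NPG step under a softmax policy: pi'(a|s) is proportional to
# pi(a|s) * exp(eta * Q(s, a)). Q-values are assumed supplied.
import numpy as np

def npg_update(pi: np.ndarray, Q: np.ndarray, eta: float) -> np.ndarray:
    """pi, Q: arrays of shape (n_states, n_actions); pi must be positive."""
    logits = np.log(pi) + eta * Q
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    new_pi = np.exp(logits)
    return new_pi / new_pi.sum(axis=1, keepdims=True)

pi = np.full((3, 2), 0.5)                          # uniform over 2 actions
Q = np.array([[1.0, 0.0], [0.2, 0.8], [0.5, 0.5]])
print(npg_update(pi, Q, eta=0.5))
```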
null | null | 2405.20468 | null | null | http://arxiv.org/pdf/2405.20468v2 | 2024-06-17T14:14:54Z | 2024-05-30T20:34:37Z | MTEB-French: Resources for French Sentence Embedding Evaluation and
Analysis | Recently, numerous embedding models have been made available and widely used for various NLP tasks. The Massive Text Embedding Benchmark (MTEB) has primarily simplified the process of choosing a model that performs well for several tasks in English, but extensions to other languages remain challenging. This is why we expand MTEB to propose the first massive benchmark of sentence embeddings for French. We gather 15 existing datasets in an easy-to-use interface and create three new French datasets for a global evaluation of 8 task categories. We compare 51 carefully selected embedding models on a large scale, conduct comprehensive statistical tests, and analyze the correlation between model performance and many of their characteristics. We find that even though no model is the best on all tasks, large multilingual models pre-trained on sentence similarity perform exceptionally well. Our work comes with open-source code, new datasets and a public leaderboard. | [
"['Mathieu Ciancone' 'Imene Kerboua' 'Marion Schaeffer' 'Wissam Siblini']"
] |
null | null | 2405.20482 | null | null | http://arxiv.org/pdf/2405.20482v2 | 2024-06-10T15:06:41Z | 2024-05-30T21:08:14Z | Sparsity regularization via tree-structured environments for
disentangled representations | Many causal systems such as biological processes in cells can only be observed indirectly via measurements, such as gene expression. Causal representation learning -- the task of correctly mapping low-level observations to latent causal variables -- could advance scientific understanding by enabling inference of latent variables such as pathway activation. In this paper, we develop methods for inferring latent variables from multiple related datasets (environments) and tasks. As a running example, we consider the task of predicting a phenotype from gene expression, where we often collect data from multiple cell types or organisms that are related in known ways. The key insight is that the mapping from latent variables driven by gene expression to the phenotype of interest changes sparsely across closely related environments. To model sparse changes, we introduce Tree-Based Regularization (TBR), an objective that minimizes both prediction error and regularizes closely related environments to learn similar predictors. We prove that under assumptions about the degree of sparse changes, TBR identifies the true latent variables up to some simple transformations. We evaluate the theory empirically with both simulations and ground-truth gene expression data. We find that TBR recovers the latent causal variables better than related methods across these settings, even under settings that violate some assumptions of the theory. | [
"['Elliot Layne' 'Jason Hartford' 'Sébastien Lachapelle'\n 'Mathieu Blanchette' 'Dhanya Sridhar']"
] |
null | null | 2405.20485 | null | null | http://arxiv.org/pdf/2405.20485v1 | 2024-05-30T21:19:24Z | 2024-05-30T21:19:24Z | Phantom: General Trigger Attacks on Retrieval Augmented Language
Generation | Retrieval Augmented Generation (RAG) expands the capabilities of modern large language models (LLMs) in chatbot applications, enabling developers to adapt and personalize the LLM output without expensive training or fine-tuning. RAG systems use an external knowledge database to retrieve the most relevant documents for a given query, providing this context to the LLM generator. While RAG achieves impressive utility in many applications, its adoption to enable personalized generative models introduces new security risks. In this work, we propose new attack surfaces for an adversary to compromise a victim's RAG system, by injecting a single malicious document in its knowledge database. We design Phantom, a general two-step attack framework against RAG-augmented LLMs. The first step involves crafting a poisoned document designed to be retrieved by the RAG system within the top-k results only when an adversarial trigger, a specific sequence of words acting as a backdoor, is present in the victim's queries. In the second step, a specially crafted adversarial string within the poisoned document triggers various adversarial attacks in the LLM generator, including denial of service, reputation damage, privacy violations, and harmful behaviors. We demonstrate our attacks on multiple LLM architectures, including Gemma, Vicuna, and Llama. | [
"['Harsh Chaudhari' 'Giorgio Severi' 'John Abascal' 'Matthew Jagielski'\n 'Christopher A. Choquette-Choo' 'Milad Nasr' 'Cristina Nita-Rotaru'\n 'Alina Oprea']"
] |
null | null | 2405.20486 | null | null | http://arxiv.org/pdf/2405.20486v1 | 2024-05-30T21:21:33Z | 2024-05-30T21:21:33Z | Policy Trees for Prediction: Interpretable and Adaptive Model Selection
for Machine Learning | As a multitude of capable machine learning (ML) models become widely available in forms such as open-source software and public APIs, central questions remain regarding their use in real-world applications, especially in high-stakes decision-making. Is there always one best model that should be used? When are the models likely to be error-prone? Should a black-box or interpretable model be used? In this work, we develop a prescriptive methodology to address these key questions, introducing a tree-based approach, Optimal Predictive-Policy Trees (OP2T), that yields interpretable policies for adaptively selecting a predictive model or ensemble, along with a parameterized option to reject making a prediction. We base our methods on learning globally optimized prescriptive trees. Our approach enables interpretable and adaptive model selection and rejection while only assuming access to model outputs. By learning policies over different feature spaces, including the model outputs, our approach works with both structured and unstructured datasets. We evaluate our approach on real-world datasets, including regression and classification tasks with both structured and unstructured data. We demonstrate that our approach provides strong performance against baseline methods while yielding insights that help answer critical questions about which models to use, and when. | [
"['Dimitris Bertsimas' 'Matthew Peroni']"
] |
null | null | 2405.20494 | null | null | http://arxiv.org/pdf/2405.20494v1 | 2024-05-30T21:35:48Z | 2024-05-30T21:35:48Z | Slight Corruption in Pre-training Data Makes Better Diffusion Models | Diffusion models (DMs) have shown remarkable capabilities in generating realistic high-quality images, audios, and videos. They benefit significantly from extensive pre-training on large-scale datasets, including web-crawled data with paired data and conditions, such as image-text and image-class pairs. Despite rigorous filtering, these pre-training datasets often inevitably contain corrupted pairs where conditions do not accurately describe the data. This paper presents the first comprehensive study on the impact of such corruption in pre-training data of DMs. We synthetically corrupt ImageNet-1K and CC3M to pre-train and evaluate over 50 conditional DMs. Our empirical findings reveal that various types of slight corruption in pre-training can significantly enhance the quality, diversity, and fidelity of the generated images across different DMs, both during pre-training and downstream adaptation stages. Theoretically, we consider a Gaussian mixture model and prove that slight corruption in the condition leads to higher entropy and a reduced 2-Wasserstein distance to the ground truth of the data distribution generated by the corruptly trained DMs. Inspired by our analysis, we propose a simple method to improve the training of DMs on practical datasets by adding condition embedding perturbations (CEP). CEP significantly improves the performance of various DMs in both pre-training and downstream tasks. We hope that our study provides new insights into understanding the data and pre-training processes of DMs. | [
"['Hao Chen' 'Yujin Han' 'Diganta Misra' 'Xiang Li' 'Kai Hu' 'Difan Zou'\n 'Masashi Sugiyama' 'Jindong Wang' 'Bhiksha Raj']"
] |
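
A minimal sketch of the condition embedding perturbation (CEP) idea from the entry above; the Gaussian form and the noise scale are assumptions for illustration, not the paper's exact formulation.

```python
# Sketch of CEP: add a small random perturbation to the condition embedding
# while training a conditional diffusion model. Scale is an assumption.
import torch

def perturb_condition(cond_emb: torch.Tensor, scale: float = 0.1,
                      training: bool = True) -> torch.Tensor:
    if not training:
        return cond_emb                    # perturb only during training
    return cond_emb + scale * torch.randn_like(cond_emb)

cond = torch.randn(8, 512)                 # e.g., class/text condition embeddings
noisy_cond = perturb_condition(cond)       # fed to the denoiser in place of cond
```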
null | null | 2405.20495 | null | null | http://arxiv.org/pdf/2405.20495v1 | 2024-05-30T21:36:12Z | 2024-05-30T21:36:12Z | Transfer Q Star: Principled Decoding for LLM Alignment | Aligning foundation models is essential for their safe and trustworthy deployment. However, traditional fine-tuning methods are computationally intensive and require updating billions of model parameters. A promising alternative, alignment via decoding, adjusts the response distribution directly without model updates to maximize a target reward $r$, thus providing a lightweight and adaptable framework for alignment. However, principled decoding methods rely on oracle access to an optimal Q-function ($Q^*$), which is often unavailable in practice. Hence, prior SoTA methods either approximate this $Q^*$ using $Q^{\pi_{\texttt{sft}}}$ (derived from the reference $\texttt{SFT}$ model) or rely on short-term rewards, resulting in sub-optimal decoding performance. In this work, we propose Transfer $Q^*$, which implicitly estimates the optimal value function for a target reward $r$ through a baseline model $\rho_{\texttt{BL}}$ aligned with a baseline reward $r_{\texttt{BL}}$ (which can be different from the target reward $r$). Theoretical analyses of Transfer $Q^*$ provide a rigorous characterization of its optimality, deriving an upper bound on the sub-optimality gap and identifying a hyperparameter to control the deviation from the pre-trained reference $\texttt{SFT}$ model based on user needs. Our approach significantly reduces the sub-optimality gap observed in prior SoTA methods and demonstrates superior empirical performance across key metrics such as coherence, diversity, and quality in extensive tests on several synthetic and real datasets. | [
"['Souradip Chakraborty' 'Soumya Suvra Ghosal' 'Ming Yin' 'Dinesh Manocha'\n 'Mengdi Wang' 'Amrit Singh Bedi' 'Furong Huang']"
] |
null | null | 2405.20500 | null | null | http://arxiv.org/pdf/2405.20500v1 | 2024-05-30T21:42:33Z | 2024-05-30T21:42:33Z | Hybrid Reinforcement Learning Framework for Mixed-Variable Problems | Optimization problems characterized by both discrete and continuous variables are common across various disciplines, presenting unique challenges due to their complex solution landscapes and the difficulty of navigating mixed-variable spaces effectively. To address these challenges, we introduce a hybrid Reinforcement Learning (RL) framework that synergizes RL for discrete variable selection with Bayesian Optimization for continuous variable adjustment. This framework stands out by its strategic integration of RL and continuous optimization techniques, enabling it to dynamically adapt to the problem's mixed-variable nature. By employing RL for exploring discrete decision spaces and Bayesian Optimization to refine continuous parameters, our approach not only demonstrates flexibility but also enhances optimization performance. Our experiments on synthetic functions and real-world machine learning hyperparameter tuning tasks reveal that our method consistently outperforms traditional RL, random search, and standalone Bayesian optimization in terms of effectiveness and efficiency. | [
"['Haoyan Zhai' 'Qianli Hu' 'Jiangning Chen']"
] |
null | null | 2405.20501 | null | null | http://arxiv.org/abs/2405.20501v1 | 2024-05-30T21:42:54Z | 2024-05-30T21:42:54Z | ShelfHelp: Empowering Humans to Perform Vision-Independent Manipulation
Tasks with a Socially Assistive Robotic Cane | The ability to shop independently, especially in grocery stores, is important for maintaining a high quality of life. This can be particularly challenging for people with visual impairments (PVI). Stores carry thousands of products, with approximately 30,000 new products introduced each year in the US market alone, presenting a challenge even for modern computer vision solutions. Through this work, we present a proof-of-concept socially assistive robotic system we call ShelfHelp, and propose novel technical solutions for enhancing instrumented canes traditionally meant for navigation tasks with additional capability within the domain of shopping. ShelfHelp includes a novel visual product locator algorithm designed for use in grocery stores and a novel planner that autonomously issues verbal manipulation guidance commands to guide the user during product retrieval. Through a human subjects study, we show the system's success in locating and providing effective manipulation guidance to retrieve desired products with novice users. We compare two autonomous verbal guidance modes, achieving performance comparable to a human assistance baseline, and present encouraging findings that validate our system's efficiency and effectiveness through positive subjective metrics, including competence, intelligence, and ease of use. | [
"['Shivendra Agrawal' 'Suresh Nayak' 'Ashutosh Naik' 'Bradley Hayes']"
] |
null | null | 2405.20503 | null | null | http://arxiv.org/abs/2405.20503v1 | 2024-05-30T21:48:56Z | 2024-05-30T21:48:56Z | Optimizing CNN-BiGRU Performance: Mish Activation and Comparative
Analysis with ReLU | Deep learning is currently extensively employed across a range of research domains. The continuous advancements in deep learning techniques contribute to solving intricate challenges. Activation functions (AF) are fundamental components within neural networks, enabling them to capture complex patterns and relationships in the data. By introducing non-linearities, AF empowers neural networks to model and adapt to the diverse and nuanced nature of real-world data, enhancing their ability to make accurate predictions across various tasks. In the context of intrusion detection, Mish, a recent AF, was implemented in the CNN-BiGRU model, using three datasets: ASNM-TUN, ASNM-CDX, and HOGZILLA. The comparison with the Rectified Linear Unit (ReLU), a widely used AF, revealed that Mish outperforms ReLU, showcasing superior performance across the evaluated datasets. This study illuminates the effectiveness of AFs in elevating the performance of intrusion detection systems. | [
"['Asmaa Benchama' 'Khalid Zebbara']"
] |
null | null | 2405.20504 | null | null | http://arxiv.org/pdf/2405.20504v1 | 2024-05-30T21:49:14Z | 2024-05-30T21:49:14Z | FCOM: A Federated Collaborative Online Monitoring Framework via
Representation Learning | Online learning has demonstrated notable potential to dynamically allocate limited resources to monitor a large population of processes, effectively balancing the exploitation of processes yielding high rewards, and the exploration of uncertain processes. However, most online learning algorithms were designed under 1) a centralized setting that requires data sharing across processes to obtain an accurate prediction or 2) a homogeneity assumption that estimates a single global model from the decentralized data. To facilitate the online learning of heterogeneous processes from the decentralized data, we propose a federated collaborative online monitoring method, which captures the latent representative models inherent in the population through representation learning and designs a novel federated collaborative UCB algorithm to estimate the representative models from sequentially observed decentralized data. The efficiency of our method is illustrated through theoretical analysis, simulation studies, and decentralized cognitive degradation monitoring in Alzheimer's disease. | [
"['Tanapol Kosolwattana' 'Huazheng Wang' 'Raed Al Kontar' 'Ying Lin']"
] |
null | null | 2405.20505 | null | null | http://arxiv.org/pdf/2405.20505v1 | 2024-05-30T21:51:01Z | 2024-05-30T21:51:01Z | SPOT: Text Source Prediction from Originality Score Thresholding | The wide acceptance of large language models (LLMs) has unlocked new applications and social risks. Popular countermeasures aim at detecting misinformation, usually involve domain specific models trained to recognize the relevance of any information. Instead of evaluating the validity of the information, we propose to investigate LLM generated text from the perspective of trust. In this study, we define trust as the ability to know if an input text was generated by a LLM or a human. To do so, we design SPOT, an efficient method, that classifies the source of any, standalone, text input based on originality score. This score is derived from the prediction of a given LLM to detect other LLMs. We empirically demonstrate the robustness of the method to the architecture, training data, evaluation data, task and compression of modern LLMs. | [
"['Edouard Yvinec' 'Gabriel Kasser']"
] |
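
A stand-in sketch of SPOT-style source prediction from the entry above: score a text with a small LM and threshold the score. Using perplexity as the originality score, and the particular threshold value, are assumptions here, not the paper's exact definition.

```python
# Threshold an LM-derived score to guess whether a text is machine-generated.
# Perplexity under GPT-2 stands in for the paper's originality score; the
# threshold is an arbitrary illustrative value. Weights download on first use.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss   # mean token negative log-likelihood
    return float(torch.exp(loss))

def predict_source(text: str, threshold: float = 35.0) -> str:
    # Low perplexity under the scoring LM suggests machine-generated text.
    return "LLM" if perplexity(text) < threshold else "human"

print(predict_source("The quick brown fox jumps over the lazy dog."))
```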
null | null | 2405.20512 | null | null | http://arxiv.org/pdf/2405.20512v1 | 2024-05-30T22:08:20Z | 2024-05-30T22:08:20Z | How Multilingual Are Large Language Models Fine-Tuned for Translation? | A new paradigm for machine translation has recently emerged: fine-tuning large language models (LLM) on parallel text has been shown to outperform dedicated translation systems trained in a supervised fashion on much larger amounts of parallel data (Xu et al., 2024a; Alves et al., 2024). However, it remains unclear whether this paradigm can enable massively multilingual machine translation or whether it requires fine-tuning dedicated models for a small number of language pairs. How does translation fine-tuning impact the MT capabilities of LLMs for zero-shot languages, zero-shot language pairs, and translation tasks that do not involve English? To address these questions, we conduct an extensive empirical evaluation of the translation quality of the TOWER family of language models (Alves et al., 2024) on 132 translation tasks from the multi-parallel FLORES-200 data. We find that translation fine-tuning improves translation quality even for zero-shot languages on average, but that the impact is uneven depending on the language pairs involved. These results call for further research to effectively enable massively multilingual translation with LLMs. | [
"['Aquia Richburg' 'Marine Carpuat']"
] |
null | null | 2405.20513 | null | null | http://arxiv.org/pdf/2405.20513v1 | 2024-05-30T22:13:17Z | 2024-05-30T22:13:17Z | Deep Modeling of Non-Gaussian Aleatoric Uncertainty | Deep learning offers promising new ways to accurately model aleatoric uncertainty in robotic estimation systems, particularly when the uncertainty distributions do not conform to traditional assumptions of being fixed and Gaussian. In this study, we formulate and evaluate three fundamental deep learning approaches for conditional probability density modeling to quantify non-Gaussian aleatoric uncertainty: parametric, discretized, and generative modeling. We systematically compare the respective strengths and weaknesses of these three methods on simulated non-Gaussian densities as well as on real-world terrain-relative navigation data. Our results show that these deep learning methods can accurately capture complex uncertainty patterns, highlighting their potential for improving the reliability and robustness of estimation systems. | [
"['Aastha Acharya' 'Caleb Lee' \"Marissa D'Alonzo\" 'Jared Shamwell'\n 'Nisar R. Ahmed' 'Rebecca Russell']"
] |
null | null | 2405.20516 | null | null | http://arxiv.org/pdf/2405.20516v1 | 2024-05-30T22:18:16Z | 2024-05-30T22:18:16Z | WaveCastNet: An AI-enabled Wavefield Forecasting Framework for
Earthquake Early Warning | Large earthquakes can be destructive and quickly wreak havoc on a landscape. To mitigate immediate threats, early warning systems have been developed to alert residents, emergency responders, and critical infrastructure operators seconds to a minute before seismic waves arrive. These warnings provide time to take precautions and prevent damage. The success of these systems relies on fast, accurate predictions of ground motion intensities, which is challenging due to the complex physics of earthquakes, wave propagation, and their intricate spatial and temporal interactions. To improve early warning, we propose a novel AI-enabled framework, WaveCastNet, for forecasting ground motions from large earthquakes. WaveCastNet integrates a novel convolutional Long Expressive Memory (ConvLEM) model into a sequence to sequence (seq2seq) forecasting framework to model long-term dependencies and multi-scale patterns in both space and time. WaveCastNet, which shares weights across spatial and temporal dimensions, requires fewer parameters compared to more resource-intensive models like transformers and thus, in turn, reduces inference times. Importantly, WaveCastNet also generalizes better than transformer-based models to different seismic scenarios, including to more rare and critical situations with higher magnitude earthquakes. Our results using simulated data from the San Francisco Bay Area demonstrate the capability to rapidly predict the intensity and timing of destructive ground motions. Importantly, our proposed approach does not require estimating earthquake magnitudes and epicenters, which are prone to errors using conventional approaches; nor does it require empirical ground motion models, which fail to capture strongly heterogeneous wave propagation effects. | [
"['Dongwei Lyu' 'Rie Nakata' 'Pu Ren' 'Michael W. Mahoney' 'Arben Pitarka'\n 'Nori Nakata' 'N. Benjamin Erichson']"
] |
null | null | 2405.20531 | null | null | http://arxiv.org/pdf/2405.20531v1 | 2024-05-30T23:13:01Z | 2024-05-30T23:13:01Z | Mitigating the Impact of Labeling Errors on Training via Rockafellian
Relaxation | Labeling errors in datasets are common, if not systematic, in practice. They naturally arise in a variety of contexts: human labeling, noisy labeling, and weak labeling (e.g., in image classification). This presents a persistent and pervasive stress on machine learning practice. In particular, neural network (NN) architectures can withstand minor amounts of dataset imperfection with traditional countermeasures such as regularization, data augmentation, and batch normalization. However, major dataset imperfections often prove insurmountable. We propose and study the implementation of Rockafellian Relaxation (RR), a new architecture-independent loss-reweighting methodology for neural network training. Experiments indicate RR can enhance standard neural network methods to achieve robust performance across classification tasks in computer vision and natural language processing (sentiment analysis). We find that RR can mitigate the effects of dataset corruption due to both (heavy) labeling error and/or adversarial perturbation, demonstrating effectiveness across a variety of data domains and machine learning tasks. | [
"['Louis L. Chen' 'Bobbie Chern' 'Eric Eckstrand' 'Amogh Mahapatra'\n 'Johannes O. Royset']"
] |
null | null | 2405.20534 | null | null | http://arxiv.org/pdf/2405.20534v1 | 2024-05-30T23:20:23Z | 2024-05-30T23:20:23Z | Aquatic Navigation: A Challenging Benchmark for Deep Reinforcement
Learning | An exciting and promising frontier for Deep Reinforcement Learning (DRL) is its application to real-world robotic systems. While modern DRL approaches have achieved remarkable successes in many robotic scenarios (including mobile robotics, surgical assistance, and autonomous driving), unpredictable and non-stationary environments can pose critical challenges to such methods. These features can significantly undermine fundamental requirements for a successful training process, such as the Markovian properties of the transition model. To address this challenge, we propose a new benchmarking environment for aquatic navigation using recent advances in the integration between game engines and DRL. In more detail, we show that our benchmarking environment is problematic even for state-of-the-art DRL approaches that may struggle to generate reliable policies in terms of generalization power and safety. Specifically, we focus on PPO, one of the most widely accepted algorithms, and we propose advanced training techniques (such as curriculum learning and learnable hyperparameters). Our extensive empirical evaluation shows that a well-designed combination of these ingredients can achieve promising results. Our simulation environment and training baselines are freely available to facilitate further research on this open problem and encourage collaboration in the field. | [
"['Davide Corsi' 'Davide Camponogara' 'Alessandro Farinelli']"
] |
null | null | 2405.20538 | null | null | http://arxiv.org/pdf/2405.20538v1 | 2024-05-30T23:22:36Z | 2024-05-30T23:22:36Z | Q-learning as a monotone scheme | Stability issues with reinforcement learning methods persist. To better understand some of these stability and convergence issues involving deep reinforcement learning methods, we examine a simple linear quadratic example. We interpret the convergence criterion of exact Q-learning in the sense of a monotone scheme and discuss consequences of function approximation on monotonicity properties. | [
"['Lingyi Yang']"
] |
null | null | 2405.20539 | null | null | http://arxiv.org/pdf/2405.20539v1 | 2024-05-30T23:31:25Z | 2024-05-30T23:31:25Z | SleeperNets: Universal Backdoor Poisoning Attacks Against Reinforcement
Learning Agents | Reinforcement learning (RL) is an actively growing field that is seeing increased usage in real-world, safety-critical applications -- making it paramount to ensure the robustness of RL algorithms against adversarial attacks. In this work we explore a particularly stealthy form of training-time attacks against RL -- backdoor poisoning. Here the adversary intercepts the training of an RL agent with the goal of reliably inducing a particular action when the agent observes a pre-determined trigger at inference time. We uncover theoretical limitations of prior work by proving their inability to generalize across domains and MDPs. Motivated by this, we formulate a novel poisoning attack framework which interlinks the adversary's objectives with those of finding an optimal policy -- guaranteeing attack success in the limit. Using insights from our theoretical analysis we develop "SleeperNets" as a universal backdoor attack which exploits a newly proposed threat model and leverages dynamic reward poisoning techniques. We evaluate our attack in 6 environments spanning multiple domains and demonstrate significant improvements in attack success over existing methods, while preserving benign episodic return. | [
"['Ethan Rathbun' 'Christopher Amato' 'Alina Oprea']"
] |
null | null | 2405.20540 | null | null | http://arxiv.org/pdf/2405.20540v1 | 2024-05-30T23:41:01Z | 2024-05-30T23:41:01Z | Fully Unconstrained Online Learning | We provide an online learning algorithm that obtains regret $G\|w_\star\|\sqrt{T\log(\|w_\star\|G\sqrt{T})} + \|w_\star\|^2 + G^2$ on $G$-Lipschitz convex losses for any comparison point $w_\star$ without knowing either $G$ or $\|w_\star\|$. Importantly, this matches the optimal bound $G\|w_\star\|\sqrt{T}$ available with such knowledge (up to logarithmic factors), unless either $\|w_\star\|$ or $G$ is so large that even $G\|w_\star\|\sqrt{T}$ is roughly linear in $T$. Thus, it matches the optimal bound in all cases in which one can achieve sublinear regret, which arguably covers most "interesting" scenarios. | [
"['Ashok Cutkosky' 'Zakaria Mhammedi']"
] |
null | null | 2405.20541 | null | null | http://arxiv.org/pdf/2405.20541v1 | 2024-05-30T23:50:20Z | 2024-05-30T23:50:20Z | Perplexed by Perplexity: Perplexity-Based Data Pruning With Small
Reference Models | In this work, we investigate whether small language models can determine high-quality subsets of large-scale text datasets that improve the performance of larger language models. While existing work has shown that pruning based on the perplexity of a larger model can yield high-quality data, we investigate whether smaller models can be used for perplexity-based pruning and how pruning is affected by the domain composition of the data being pruned. We demonstrate that for multiple dataset compositions, perplexity-based pruning of pretraining data can \emph{significantly} improve downstream task performance: pruning based on perplexities computed with a 125 million parameter model improves the average performance on downstream tasks of a 3 billion parameter model by up to 2.04 and achieves up to a $1.45\times$ reduction in pretraining steps to reach commensurate baseline performance. Furthermore, we demonstrate that such perplexity-based data pruning also yields downstream performance gains in the over-trained and data-constrained regimes. | [
"['Zachary Ankner' 'Cody Blakeney' 'Kartik Sreenivasan' 'Max Marion'\n 'Matthew L. Leavitt' 'Mansheej Paul']"
] |
null | null | 2405.20542 | null | null | http://arxiv.org/pdf/2405.20542v1 | 2024-05-30T23:54:17Z | 2024-05-30T23:54:17Z | On the Connection Between Non-negative Matrix Factorization and Latent
Dirichlet Allocation | Non-negative matrix factorization with the generalized Kullback-Leibler divergence (NMF) and latent Dirichlet allocation (LDA) are two popular approaches for dimensionality reduction of non-negative data. Here, we show that NMF with $\ell_1$ normalization constraints on the columns of both matrices of the decomposition and a Dirichlet prior on the columns of one matrix is equivalent to LDA. To show this, we demonstrate that explicitly accounting for the scaling ambiguity of NMF by adding $\ell_1$ normalization constraints to the optimization problem allows a joint update of both matrices in the widely used multiplicative updates (MU) algorithm. When both of the matrices are normalized, the joint MU algorithm leads to probabilistic latent semantic analysis (PLSA), which is LDA without a Dirichlet prior. Our approach of deriving joint updates for NMF also reveals that a Lasso penalty on one matrix together with an $\ell_1$ normalization constraint on the other matrix is insufficient to induce any sparsity. | [
"['Benedikt Geiger' 'Peter J. Park']"
] |
null | null | 2405.20543 | null | null | http://arxiv.org/pdf/2405.20543v1 | 2024-05-31T00:02:07Z | 2024-05-31T00:02:07Z | Towards a General GNN Framework for Combinatorial Optimization | Graph neural networks (GNNs) have achieved great success for a variety of tasks such as node classification, graph classification, and link prediction. However, the use of GNNs (and machine learning more generally) to solve combinatorial optimization (CO) problems is much less explored. Here, we introduce a novel GNN architecture which leverages a complex filter bank and localized attention mechanisms designed to solve CO problems on graphs. We show how our method differentiates itself from prior GNN-based CO solvers and how it can be effectively applied to the maximum clique, minimum dominating set, and maximum cut problems in a self-supervised learning setting. In addition to demonstrating competitive overall performance across all tasks, we establish state-of-the-art results for the max cut problem. | [
"['Frederik Wenkel' 'Semih Cantürk' 'Michael Perlmutter' 'Guy Wolf']"
] |
null | null | 2405.20550 | null | null | http://arxiv.org/pdf/2405.20550v1 | 2024-05-31T00:20:19Z | 2024-05-31T00:20:19Z | Uncertainty Quantification for Deep Learning | A complete and statistically consistent uncertainty quantification for deep learning is provided, including the sources of uncertainty arising from (1) the new input data, (2) the training and testing data, (3) the weight vectors of the neural network, and (4) the neural network because it is not a perfect predictor. Using Bayes' Theorem and conditional probability densities, we demonstrate how each uncertainty source can be systematically quantified. We also introduce, for the first time, a fast and practical way to incorporate and combine all sources of error. For illustration, the new method is applied to quantify errors in cloud autoconversion rates, predicted from an artificial neural network that was trained by aircraft cloud probe measurements in the Azores and the stochastic collection equation formulated as a two-moment bin model. For this specific example, the output uncertainty arising from uncertainty in the training and testing data is dominant, followed by uncertainty in the input data, in the trained neural network, and uncertainty in the weights. We discuss the usefulness of the methodology for machine learning practice, and how, through inclusion of uncertainty in the training data, the new methodology is less sensitive to input data that falls outside of the training data set. | [
"['Peter Jan van Leeuwen' 'J. Christine Chiu' 'C. Kevin Yang']"
] |
null | null | 2405.20551 | null | null | http://arxiv.org/abs/2405.20551v1 | 2024-05-31T00:32:04Z | 2024-05-31T00:32:04Z | EM-Assist: Safe Automated ExtractMethod Refactoring with LLMs | Excessively long methods, loaded with multiple responsibilities, are challenging to understand, debug, reuse, and maintain. The solution lies in the widely recognized Extract Method refactoring. While the application of this refactoring is supported in modern IDEs, recommending which code fragments to extract has been the topic of many research tools. However, they often struggle to replicate real-world developer practices, resulting in recommendations that do not align with what a human developer would do in real life. To address this issue, we introduce EM-Assist, an IntelliJ IDEA plugin that uses LLMs to generate refactoring suggestions and subsequently validates, enhances, and ranks them. Finally, EM-Assist uses the IntelliJ IDE to apply the user-selected recommendation. In our extensive evaluation of 1,752 real-world refactorings that actually took place in open-source projects, EM-Assist's recall rate was 53.4% among its top-5 recommendations, compared to 39.4% for the previous best-in-class tool that relies solely on static analysis. Moreover, we conducted a usability survey with 18 industrial developers and 94.4% gave a positive rating. | [
"['Dorin Pomian' 'Abhiram Bellur' 'Malinda Dilhara' 'Zarina Kurbatova'\n 'Egor Bogomolov' 'Andrey Sokolov' 'Timofey Bryksin' 'Danny Dig']"
] |
null | null | 2405.20555 | null | null | http://arxiv.org/pdf/2405.20555v1 | 2024-05-31T00:41:04Z | 2024-05-31T00:41:04Z | Diffusion Actor-Critic: Formulating Constrained Policy Iteration as
Diffusion Noise Regression for Offline Reinforcement Learning | In offline reinforcement learning (RL), it is necessary to manage out-of-distribution actions to prevent overestimation of value functions. Policy-regularized methods address this problem by constraining the target policy to stay close to the behavior policy. Although several approaches suggest representing the behavior policy as an expressive diffusion model to boost performance, it remains unclear how to regularize the target policy given a diffusion-modeled behavior sampler. In this paper, we propose Diffusion Actor-Critic (DAC), which formulates the Kullback-Leibler (KL) constraint policy iteration as a diffusion noise regression problem, enabling direct representation of target policies as diffusion models. Our approach follows the actor-critic learning paradigm, in which we alternately train a diffusion-modeled target policy and a critic network. The actor training loss includes a soft Q-guidance term from the Q-gradient. The soft Q-guidance is grounded in the theoretical solution of the KL constraint policy iteration, which prevents the learned policy from taking out-of-distribution actions. For critic training, we train a Q-ensemble to stabilize the estimation of the Q-gradient. Additionally, DAC employs a lower confidence bound (LCB) to address the overestimation and underestimation of value targets due to function approximation error. Our approach is evaluated on the D4RL benchmarks and outperforms the state-of-the-art in almost all environments. Code is available at \href{https://github.com/Fang-Lin93/DAC}{\texttt{github.com/Fang-Lin93/DAC}}. | [
"['Linjiajie Fang' 'Ruoxue Liu' 'Jing Zhang' 'Wenjia Wang' 'Bing-Yi Jing']"
] |
null | null | 2405.20556 | null | null | http://arxiv.org/pdf/2405.20556v1 | 2024-05-31T00:46:04Z | 2024-05-31T00:46:04Z | Certifying Global Robustness for Deep Neural Networks | A globally robust deep neural network resists perturbations on all meaningful inputs. Current robustness certification methods emphasize local robustness, struggling to scale and generalize. This paper presents a systematic and efficient method to evaluate and verify global robustness for deep neural networks, leveraging the PAC verification framework for solid guarantees on verification results. We utilize probabilistic programs to characterize meaningful input regions, setting a realistic standard for global robustness. Additionally, we introduce the cumulative robustness curve as a criterion in evaluating global robustness. We design a statistical method that combines multi-level splitting and regression analysis for the estimation, significantly reducing the execution time. Experimental results demonstrate the efficiency and effectiveness of our verification method and its capability to find rare and diversified counterexamples for adversarial training. | [
"['You Li' 'Guannan Zhao' 'Shuyu Kong' 'Yunqi He' 'Hai Zhou']"
] |
null | null | 2405.20562 | null | null | http://arxiv.org/pdf/2405.20562v1 | 2024-05-31T01:04:46Z | 2024-05-31T01:04:46Z | Can Machine Learning Assist in Diagnosis of Primary Immune
Thrombocytopenia? A feasibility study | Primary immune thrombocytopenia (ITP) is a rare autoimmune disease characterised by immune-mediated destruction of peripheral blood platelets in patients, leading to low platelet counts and bleeding. The diagnosis and effective management of ITP are challenging because there is no established test to confirm the disease and no biomarker with which one can predict the response to treatment and outcome. In this work we conduct a feasibility study to check if machine learning can be applied effectively for diagnosis of ITP using routine blood tests and demographic data in a non-acute outpatient setting. Various ML models, including Logistic Regression, Support Vector Machine, k-Nearest Neighbor, Decision Tree and Random Forest, were applied to data from the UK Adult ITP Registry and a general hematology clinic. Two different approaches were investigated: a demographic-unaware and a demographic-aware one. We conduct extensive experiments to evaluate the predictive performance of these models and approaches, as well as their bias. The results revealed that Decision Tree and Random Forest models were both superior and fair, achieving nearly perfect predictive and fairness scores, with platelet count identified as the most significant variable. Models not provided with demographic information performed better in terms of predictive accuracy but showed a lower fairness score, illustrating a trade-off between predictive performance and fairness. | [
"['Haroon Miah' 'Dimitrios Kollias' 'Giacinto Luca Pedone' 'Drew Provan'\n 'Frederick Chen']"
] |
null | null | 2405.20568 | null | null | http://arxiv.org/pdf/2405.20568v1 | 2024-05-31T01:25:40Z | 2024-05-31T01:25:40Z | Generative AI for Deep Reinforcement Learning: Framework, Analysis, and
Use Cases | As a form of artificial intelligence (AI) technology based on interactive learning, deep reinforcement learning (DRL) has been widely applied across various fields and has achieved remarkable accomplishments. However, DRL faces certain limitations, including low sample efficiency and poor generalization. Therefore, in this paper we show how to leverage generative AI (GAI) to address these issues and enhance the performance of DRL algorithms. We first introduce several classic GAI and DRL algorithms and demonstrate the applications of GAI-enhanced DRL algorithms. Then, we discuss how to use GAI to improve DRL algorithms from the data and policy perspectives. Subsequently, we introduce a framework that demonstrates an actual and novel integration of GAI with DRL, i.e., GAI-enhanced DRL. Additionally, we provide a case study of the framework on UAV-assisted integrated near-field/far-field communication to validate the performance of the proposed framework. Moreover, we present several future directions. Finally, the related code is available at: https://xiewenwen22.github.io/GAI-enhanced-DRL. | [
"['Geng Sun' 'Wenwen Xie' 'Dusit Niyato' 'Fang Mei' 'Jiawen Kang'\n 'Hongyang Du' 'Shiwen Mao']"
] |
null | null | 2405.20573 | null | null | http://arxiv.org/pdf/2405.20573v1 | 2024-05-31T02:00:25Z | 2024-05-31T02:00:25Z | Enhancing Generative Molecular Design via Uncertainty-guided Fine-tuning
of Variational Autoencoders | In recent years, deep generative models have been successfully adopted for various molecular design tasks, particularly in the life and material sciences. A critical challenge for pre-trained generative molecular design (GMD) models is to fine-tune them to be better suited for downstream design tasks aimed at optimizing specific molecular properties. However, redesigning and training an existing effective generative model from scratch for each new design task is impractical. Furthermore, the black-box nature of typical downstream tasks, such as property prediction, makes it nontrivial to optimize the generative model in a task-specific manner. In this work, we propose a novel approach for model uncertainty-guided fine-tuning of a pre-trained variational autoencoder (VAE)-based GMD model through performance feedback in an active learning setting. The main idea is to quantify model uncertainty in the generative model, which is made efficient by working within a low-dimensional active subspace of the high-dimensional VAE parameters explaining most of the variability in the model's output. The inclusion of model uncertainty expands the space of viable molecules through decoder diversity. We then explore the resulting model uncertainty class via black-box optimization made tractable by the low-dimensionality of the active subspace. This enables us to identify and leverage a diverse set of high-performing models to generate enhanced molecules. Empirical results across six target molecular properties, using multiple VAE-based generative models, demonstrate that our uncertainty-guided fine-tuning approach consistently outperforms the original pre-trained models. | [
"['A N M Nafiz Abeer' 'Sanket Jantre' 'Nathan M Urban' 'Byung-Jun Yoon']"
] |
null | null | 2405.20579 | null | null | http://arxiv.org/pdf/2405.20579v2 | 2024-07-05T02:11:54Z | 2024-05-31T02:17:51Z | HOPE: A Reinforcement Learning-based Hybrid Policy Path Planner for
Diverse Parking Scenarios | Automated parking stands as a highly anticipated application of autonomous driving technology. However, existing path planning methodologies fall short of addressing this need due to their incapability to handle the diverse and complex parking scenarios in reality. While non-learning methods provide reliable planning results, they are vulnerable in intricate situations, whereas learning-based ones are good at exploration but unstable in converging to feasible solutions. To leverage the strengths of both approaches, we introduce Hybrid pOlicy Path plannEr (HOPE). This novel solution integrates a reinforcement learning agent with Reeds-Shepp curves, enabling effective planning across diverse scenarios. HOPE guides the exploration of the reinforcement learning agent by applying an action mask mechanism and employs a transformer to integrate the perceived environmental information with the mask. To facilitate the training and evaluation of the proposed planner, we propose a criterion for categorizing the difficulty level of parking scenarios based on space and obstacle distribution. Experimental results demonstrate that our approach outperforms typical rule-based algorithms and traditional reinforcement learning methods, showing higher planning success rates and generalization across various scenarios. We also conduct real-world experiments to verify the practicability of HOPE. The code for our solution will be openly available on \href{https://github.com/jiamiya/HOPE}{GitHub}. | [
"['Mingyang Jiang' 'Yueyuan Li' 'Songan Zhang' 'Siyuan Chen'\n 'Chunxiang Wang' 'Ming Yang']"
] |
null | null | 2405.20582 | null | null | http://arxiv.org/pdf/2405.20582v1 | 2024-05-31T02:28:41Z | 2024-05-31T02:28:41Z | The Point of View of a Sentiment: Towards Clinician Bias Detection in
Psychiatric Notes | In psychiatry, negative patient descriptions and stigmatizing language can contribute to healthcare disparities in two ways: (1) when read by patients, they can harm their trust in and engagement with the medical center; (2) when read by future providers, they may negatively influence those providers' perspective of the patient. By leveraging large language models, this work aims to identify the sentiment expressed in psychiatric clinical notes based on the reader's point of view. Extracting sentences from the Mount Sinai Health System's large and diverse clinical notes, we used prompts and in-context learning to adapt three large language models (GPT-3.5, Llama 2, Mistral) to classify the sentiment conveyed by the sentences according to the provider or non-provider point of view. Results showed that GPT-3.5 aligns best with the provider point of view, whereas Mistral aligns best with the non-provider point of view. | [
"['Alissa A. Valentine' 'Lauren A. Lepow' 'Alexander W. Charney'\n 'Isotta Landi']"
] |
null | null | 2405.20589 | null | null | http://arxiv.org/pdf/2405.20589v1 | 2024-05-31T02:59:25Z | 2024-05-31T02:59:25Z | Selective Knowledge Sharing for Personalized Federated Learning Under
Capacity Heterogeneity | Federated Learning (FL) stands to gain significant advantages from collaboratively training capacity-heterogeneous models, enabling the utilization of private data and computing power from low-capacity devices. However, the focus on personalizing capacity-heterogeneous models based on client-specific data has been limited, resulting in suboptimal local model utility, particularly for low-capacity clients. The heterogeneity in both data and device capacity poses two key challenges for model personalization: 1) accurately retaining necessary knowledge embedded within reduced submodels for each client, and 2) effectively sharing knowledge through aggregating size-varying parameters. To this end, we introduce Pa3dFL, a novel framework designed to enhance local model performance by decoupling and selectively sharing knowledge among capacity-heterogeneous models. First, we decompose each layer of the model into general and personal parameters. Then, we maintain uniform sizes for the general parameters across clients and aggregate them through direct averaging. Subsequently, we employ a hyper-network to generate size-varying personal parameters for clients using learnable embeddings. Finally, we facilitate the implicit aggregation of personal parameters by aggregating client embeddings through a self-attention module. We conducted extensive experiments on three datasets to evaluate the effectiveness of Pa3dFL. Our findings indicate that Pa3dFL consistently outperforms baseline methods across various heterogeneity settings. Moreover, Pa3dFL demonstrates competitive communication and computation efficiency compared to baseline approaches, highlighting its practicality and adaptability in adverse system conditions. | [
"['Zheng Wang' 'Zheng Wang' 'Zhaopeng Peng' 'Zihui Wang' 'Cheng Wang']"
] |
null | null | 2405.20590 | null | null | http://arxiv.org/pdf/2405.20590v1 | 2024-05-31T03:03:19Z | 2024-05-31T03:03:19Z | Class-Based Time Series Data Augmentation to Mitigate Extreme Class
Imbalance for Solar Flare Prediction | Time series data plays a crucial role across various domains, making it valuable for decision-making and predictive modeling. Machine learning (ML) and deep learning (DL) have shown promise in this regard, yet their performance hinges on data quality and quantity, often constrained by data scarcity and class imbalance, particularly for rare events like solar flares. Data augmentation techniques offer a potential solution to address these challenges, yet their effectiveness on multivariate time series datasets remains underexplored. In this study, we propose a novel data augmentation method for time series data named Mean Gaussian Noise (MGN). We investigate the performance of MGN compared to eight existing basic data augmentation methods on a multivariate time series dataset for solar flare prediction, SWAN-SF, using an ML algorithm for time series data, TimeSeriesSVC. The results demonstrate the efficacy of MGN and highlight its potential for improving classification performance in scenarios with extremely imbalanced data. Our time complexity analysis shows that MGN also has a competitive computational cost compared to the investigated alternative methods. | [
"['Junzhi Wen' 'Rafal A. Angryk']"
] |
null | null | 2405.20591 | null | null | http://arxiv.org/pdf/2405.20591v1 | 2024-05-31T03:03:27Z | 2024-05-31T03:03:27Z | Weak-Form Inference for Hybrid Dynamical Systems in Ecology | Species subject to predation and environmental threats commonly exhibit variable periods of population boom and bust over long timescales. Understanding and predicting such behavior, especially given the inherent heterogeneity and stochasticity of exogenous driving factors over short timescales, is an ongoing challenge. A modeling paradigm gaining popularity in the ecological sciences for such multi-scale effects is to couple short-term continuous dynamics to long-term discrete updates. We develop a data-driven method utilizing weak-form equation learning to extract such hybrid governing equations for population dynamics and to estimate the requisite parameters using sparse intermittent measurements of the discrete and continuous variables. The method produces a set of short-term continuous dynamical system equations parametrized by long-term variables, and long-term discrete equations parametrized by short-term variables, allowing direct assessment of interdependencies between the two time scales. We demonstrate the utility of the method on a variety of ecological scenarios and provide extensive tests using models previously derived for epizootics experienced by the North American spongy moth (Lymantria dispar dispar). | [
"['Daniel Messenger' 'Greg Dwyer' 'Vanja Dukic']"
] |
null | null | 2405.20592 | null | null | http://arxiv.org/pdf/2405.20592v1 | 2024-05-31T03:04:57Z | 2024-05-31T03:04:57Z | LInK: Learning Joint Representations of Design and Performance Spaces
through Contrastive Learning for Mechanism Synthesis | In this paper, we introduce LInK, a novel framework that integrates contrastive learning of performance and design space with optimization techniques for solving complex inverse problems in engineering design with discrete and continuous variables. We focus on the path synthesis problem for planar linkage mechanisms. By leveraging a multi-modal and transformation-invariant contrastive learning framework, LInK learns a joint representation that captures complex physics and design representations of mechanisms, enabling rapid retrieval from a vast dataset of over 10 million mechanisms. This approach improves precision through the warm start of a hierarchical unconstrained nonlinear optimization algorithm, combining the robustness of traditional optimization with the speed and adaptability of modern deep learning methods. Our results on an existing benchmark demonstrate that LInK outperforms existing methods, achieving 28 times less error than a state-of-the-art approach while taking 20 times less time. Moreover, we introduce a significantly more challenging benchmark, named LINK-ABC, which involves synthesizing linkages that trace the trajectories of the capital letters of the English alphabet - an inverse design benchmark task that existing methods struggle with due to large non-linearities and a tiny feasible space. Our results demonstrate that LInK not only advances the field of mechanism design but also broadens the applicability of contrastive learning and optimization to other areas of engineering. | [
"['Amin Heyrani Nobari' 'Akash Srivastava' 'Dan Gutfreund' 'Kai Xu'\n 'Faez Ahmed']"
] |
null | null | 2405.20594 | null | null | http://arxiv.org/pdf/2405.20594v1 | 2024-05-31T03:11:19Z | 2024-05-31T03:11:19Z | Deep Learning without Weight Symmetry | Backpropagation (BP), a foundational algorithm for training artificial neural networks, predominates in contemporary deep learning. Although highly successful, it is often considered biologically implausible. A significant limitation arises from the need for precise symmetry between connections in the backward and forward pathways to backpropagate gradient signals accurately, which is not observed in biological brains. Researchers have proposed several algorithms to alleviate this symmetry constraint, such as feedback alignment and direct feedback alignment. However, their divergence from backpropagation dynamics presents challenges, particularly in deeper networks and convolutional layers. Here we introduce the Product Feedback Alignment (PFA) algorithm. Our findings demonstrate that PFA closely approximates BP and achieves comparable performance in deep convolutional networks while avoiding explicit weight symmetry. Our results offer a novel solution to the longstanding weight symmetry problem, leading to more biologically plausible learning in deep convolutional networks compared to earlier methods. | [
"['Li Ji-An' 'Marcus K. Benna']"
] |
null | null | 2405.20596 | null | null | http://arxiv.org/pdf/2405.20596v1 | 2024-05-31T03:13:45Z | 2024-05-31T03:13:45Z | Generalized Semi-Supervised Learning via Self-Supervised Feature
Adaptation | Traditional semi-supervised learning (SSL) assumes that the feature distributions of labeled and unlabeled data are consistent, which rarely holds in realistic scenarios. In this paper, we propose a novel SSL setting, where unlabeled samples are drawn from a mixed distribution that deviates from the feature distribution of labeled samples. Under this setting, previous SSL methods tend to predict wrong pseudo-labels with the model fitted on labeled data, resulting in noise accumulation. To tackle this issue, we propose Self-Supervised Feature Adaptation (SSFA), a generic framework for improving SSL performance when labeled and unlabeled data come from different distributions. SSFA decouples the prediction of pseudo-labels from the current model to improve the quality of pseudo-labels. Particularly, SSFA incorporates a self-supervised task into the SSL framework and uses it to adapt the feature extractor of the model to the unlabeled data. In this way, the extracted features better fit the distribution of unlabeled data, thereby generating high-quality pseudo-labels. Extensive experiments show that our proposed SSFA is applicable to various pseudo-label-based SSL learners and significantly improves performance in labeled, unlabeled, and even unseen distributions. | [
"['Jiachen Liang' 'Ruibing Hou' 'Hong Chang' 'Bingpeng Ma' 'Shiguang Shan'\n 'Xilin Chen']"
] |
null | null | 2405.20602 | null | null | http://arxiv.org/pdf/2405.20602v1 | 2024-05-31T03:26:42Z | 2024-05-31T03:26:42Z | Masked Language Modeling Becomes Conditional Density Estimation for
Tabular Data Synthesis | In this paper, our goal is to generate synthetic data for heterogeneous (mixed-type) tabular datasets with high machine learning utility (MLu). Given that the MLu performance relies on accurately approximating the conditional distributions, we focus on devising a synthetic data generation method based on conditional distribution estimation. We propose a novel synthetic data generation method, MaCoDE, by redefining the multi-class classification task of Masked Language Modeling (MLM) as histogram-based non-parametric conditional density estimation. Our proposed method enables estimating conditional densities across arbitrary combinations of target and conditional variables. Furthermore, we demonstrate that our proposed method bridges the theoretical gap between distributional learning and MLM. To validate the effectiveness of our proposed model, we conduct synthetic data generation experiments on 10 real-world datasets. Given the analogy between predicting masked input tokens in MLM and missing data imputation, we also evaluate the performance of multiple imputations on incomplete datasets with various missing data mechanisms. Moreover, our proposed model offers the advantage of enabling adjustments to data privacy levels without requiring re-training. | [
"['Seunghwan An' 'Gyeongdong Woo' 'Jaesung Lim' 'ChangHyun Kim'\n 'Sungchul Hong' 'Jong-June Jeon']"
] |
null | null | 2405.20603 | null | null | http://arxiv.org/pdf/2405.20603v1 | 2024-05-31T03:31:17Z | 2024-05-31T03:31:17Z | Advancing Financial Risk Prediction Through Optimized LSTM Model
Performance and Comparative Analysis | This paper focuses on the application and optimization of the LSTM model in financial risk prediction. The study starts with an overview of LSTM's architecture and algorithmic foundations, then details the model training process and hyperparameter tuning strategy, adjusting network parameters through experiments to improve performance. Comparative experiments show that the optimized LSTM model has significant advantages in AUC over random forest, BP neural network, and XGBoost, verifying its efficiency and practicability in the field of financial risk prediction, especially its ability to handle complex time series data. This lays a solid foundation for applying the model in an actual production environment. | [
"['Ke Xu' 'Yu Cheng' 'Shiqing Long' 'Junjie Guo' 'Jue Xiao' 'Mengfang Sun']"
] |
null | null | 2405.20605 | null | null | http://arxiv.org/pdf/2405.20605v1 | 2024-05-31T03:39:26Z | 2024-05-31T03:39:26Z | Searching for internal symbols underlying deep learning | Deep learning (DL) enables deep neural networks (DNNs) to automatically learn complex tasks or rules from given examples without instructions or guiding principles. As we do not engineer DNNs' functions, it is extremely difficult to diagnose their decisions, and multiple lines of study have proposed explanations for the principles of DNN/DL operations. Notably, one line of work suggests that DNNs may learn concepts, the high-level features recognizable to humans. Thus, we hypothesized that DNNs develop abstract codes, not necessarily recognizable to humans, which can be used to augment DNNs' decision-making. To test this hypothesis, we combined foundation segmentation models and unsupervised learning to extract internal codes and identify the potential use of abstract codes to make DL's decision-making more reliable and safer. | [
"['Jung H. Lee' 'Sujith Vijayan']"
] |
null | null | 2405.20606 | null | null | http://arxiv.org/pdf/2405.20606v1 | 2024-05-31T03:40:15Z | 2024-05-31T03:40:15Z | Vision-Language Meets the Skeleton: Progressively Distillation with
Cross-Modal Knowledge for 3D Action Representation Learning | Supervised and self-supervised learning are two main training paradigms for skeleton-based human action recognition. However, the former one-hot classification requires labor-intensive annotation of predefined action categories, while the latter involves skeleton transformations (e.g., cropping) in the pretext tasks that may impair the skeleton structure. To address these challenges, we introduce a novel skeleton-based training framework (C$^2$VL) based on Cross-modal Contrastive learning that uses progressive distillation to learn task-agnostic human skeleton action representations from vision-language knowledge prompts. Specifically, we establish the vision-language action concept space through vision-language knowledge prompts generated by pre-trained large multimodal models (LMMs), which enrich the fine-grained details that the skeleton action space lacks. Moreover, we propose the intra-modal self-similarity and inter-modal cross-consistency softened targets in the cross-modal contrastive process to progressively control and guide the degree of pulling vision-language knowledge prompts and corresponding skeletons closer. These soft instance discrimination and self-knowledge distillation strategies contribute to the learning of better skeleton-based action representations from the noisy skeleton-vision-language pairs. During the inference phase, our method requires only the skeleton data as input for action recognition and no longer needs the vision-language prompts. Extensive experiments show that our method achieves state-of-the-art results on the NTU RGB+D 60, NTU RGB+D 120, and PKU-MMD datasets. The code will be available in the future. | [
"['Yang Chen' 'Tian He' 'Junfeng Fu' 'Ling Wang' 'Jingcai Guo' 'Hong Cheng']"
] |
null | null | 2405.20611 | null | null | http://arxiv.org/pdf/2405.20611v1 | 2024-05-31T03:57:19Z | 2024-05-31T03:57:19Z | Bi-Directional Transformers vs. word2vec: Discovering Vulnerabilities in
Lifted Compiled Code | Detecting vulnerabilities within compiled binaries is challenging due to lost high-level code structures and other factors such as architectural dependencies, compilers, and optimization options. To address these obstacles, this research explores vulnerability detection by using natural language processing (NLP) embedding techniques with word2vec, BERT, and RoBERTa to learn semantics from intermediate representation (LLVM) code. Long short-term memory (LSTM) neural networks were trained on embeddings from encoders created using approximately 118k LLVM functions from the Juliet dataset. This study is pioneering in its comparison of word2vec models with multiple bidirectional transformer (BERT, RoBERTa) embeddings built using LLVM code to train neural networks to detect vulnerabilities in compiled binaries. word2vec Continuous Bag of Words (CBOW) models achieved 92.3% validation accuracy in detecting vulnerabilities, outperforming word2vec Skip-Gram, BERT, and RoBERTa. This suggests that complex contextual NLP embeddings may not provide advantages over simpler word2vec models for this task when a limited number (e.g. 118K) of data samples are used to train the bidirectional transformer-based models. The comparative results provide novel insights into selecting optimal embeddings for learning compiler-independent semantic code representations to advance machine learning detection of vulnerabilities in compiled binaries. | [
"['Gary A. McCully' 'John D. Hastings' 'Shengjie Xu' 'Adam Fortier']"
] |
null | null | 2405.20620 | null | null | http://arxiv.org/pdf/2405.20620v1 | 2024-05-31T05:10:30Z | 2024-05-31T05:10:30Z | "Forgetting" in Machine Learning and Beyond: A Survey | This survey investigates the multifaceted nature of forgetting in machine learning, drawing insights from neuroscientific research that posits forgetting as an adaptive function rather than a defect, enhancing the learning process and preventing overfitting. This survey focuses on the benefits of forgetting and its applications across various machine learning sub-fields that can help improve model performance and enhance data privacy. Moreover, the paper discusses current challenges, future directions, and ethical considerations regarding the integration of forgetting mechanisms into machine learning models. | [
"['Alyssa Shuang Sha' 'Bernardo Pereira Nunes' 'Armin Haller']"
] |
null | null | 2405.20622 | null | null | http://arxiv.org/pdf/2405.20622v2 | 2024-06-04T02:19:30Z | 2024-05-31T05:18:05Z | Superfast Selection for Decision Tree Algorithms | We present a novel and systematic method, called Superfast Selection, for selecting the "optimal split" for decision tree and feature selection algorithms over tabular data. The method speeds up split selection on a single feature by lowering the time complexity from $O(MN)$ (using the standard selection methods) to $O(M)$, where $M$ represents the number of input examples and $N$ the number of unique values. Additionally, the need for pre-encoding, such as one-hot or integer encoding, for feature value heterogeneity is eliminated. To demonstrate the efficiency of Superfast Selection, we empower the CART algorithm by integrating Superfast Selection into it, creating what we call Ultrafast Decision Tree (UDT). This enhancement enables UDT to complete the training process with a time complexity of $O(KM^2)$ ($K$ is the number of features). Additionally, the Training Only Once Tuning enables UDT to avoid the repetitive training process required to find the optimal hyper-parameters. Experiments show that UDT can finish a single training run on the KDD99-10% dataset (494K examples with 41 features) within 1 second and tuning with 214.8 sets of hyper-parameters within 0.25 seconds on a laptop. | [
"['Huaduo Wang' 'Gopal Gupta']"
] |
null | null | 2405.20623 | null | null | http://arxiv.org/pdf/2405.20623v1 | 2024-05-31T05:21:12Z | 2024-05-31T05:21:12Z | Prune at the Clients, Not the Server: Accelerated Sparse Training in
Federated Learning | In the recent paradigm of Federated Learning (FL), multiple clients train a shared model while keeping their local data private. Resource constraints of clients and communication costs pose major problems for training large models in FL. On the one hand, addressing the resource limitations of the clients, sparse training has proven to be a powerful tool in the centralized setting. On the other hand, communication costs in FL can be addressed by local training, where each client takes multiple gradient steps on its local data. Recent work has shown that local training can provably achieve the optimal accelerated communication complexity [Mishchenko et al., 2022]. Hence, one would like an accelerated sparse training algorithm. In this work we show that naive integration of sparse training and acceleration at the server fails, and how to fix it by letting the clients perform these tasks appropriately. We introduce Sparse-ProxSkip, our method developed for the nonconvex setting, inspired by RandProx [Condat and Richtárik, 2022], which provably combines sparse training and acceleration in the convex setting. We demonstrate the good performance of Sparse-ProxSkip in extensive experiments. | [
"['Georg Meinhardt' 'Kai Yi' 'Laurent Condat' 'Peter Richtárik']"
] |
null | null | 2405.20630 | null | null | http://arxiv.org/pdf/2405.20630v2 | 2024-06-03T03:11:45Z | 2024-05-31T05:42:47Z | Stochastic Optimal Control for Diffusion Bridges in Function Spaces | Recent advancements in diffusion models and diffusion bridges primarily focus on finite-dimensional spaces, yet many real-world problems necessitate operations in infinite-dimensional function spaces for more natural and interpretable formulations. In this paper, we present a theory of stochastic optimal control (SOC) tailored to infinite-dimensional spaces, aiming to extend diffusion-based algorithms to function spaces. Specifically, we demonstrate how Doob's $h$-transform, the fundamental tool for constructing diffusion bridges, can be derived from the SOC perspective and expanded to infinite dimensions. This expansion presents a challenge, as infinite-dimensional spaces typically lack closed-form densities. Leveraging our theory, we establish that solving the optimal control problem with a specific objective function choice is equivalent to learning diffusion-based generative models. We propose two applications: (1) learning bridges between two infinite-dimensional distributions and (2) generative models for sampling from an infinite-dimensional distribution. Our approach proves effective for diverse problems involving continuous function space representations, such as resolution-free images, time-series data, and probability density functions. | [
"['Byoungwoo Park' 'Jungwon Choi' 'Sungbin Lim' 'Juho Lee']"
] |
null | null | 2405.20640 | null | null | http://arxiv.org/pdf/2405.20640v1 | 2024-05-31T06:40:56Z | 2024-05-31T06:40:56Z | Heterophilous Distribution Propagation for Graph Neural Networks | Graph Neural Networks (GNNs) have achieved remarkable success in various graph mining tasks by aggregating information from neighborhoods for representation learning. The success relies on the homophily assumption that nearby nodes exhibit similar behaviors, though it may be violated in many real-world graphs. Recently, heterophilous graph neural networks (HeterGNNs) have attracted increasing attention by modifying the neural message passing schema for heterophilous neighborhoods. However, they suffer from insufficient neighborhood partitioning and heterophily modeling, both of which are critical but challenging to break through. To tackle these challenges, in this paper, we propose heterophilous distribution propagation (HDP) for graph neural networks. Instead of aggregating information from all neighborhoods, HDP adaptively separates the neighbors into homophilous and heterophilous parts based on the pseudo assignments during training. The heterophilous neighborhood distribution is learned with an orthogonality-oriented constraint via a trusted prototype contrastive learning paradigm. Both the homophilous and heterophilous patterns are propagated with a novel semantic-aware message passing mechanism. We conduct extensive experiments on 9 benchmark datasets with different levels of homophily. Experimental results show that our method outperforms representative baselines on heterophilous datasets. | [
"['Zhuonan Zheng' 'Sheng Zhou' 'Hongjia Xu' 'Ming Gu' 'Yilun Xu' 'Ao Li'\n 'Yuhong Li' 'Jingjun Gu' 'Jiajun Bu']"
] |
null | null | 2405.20642 | null | null | http://arxiv.org/pdf/2405.20642v1 | 2024-05-31T07:01:49Z | 2024-05-31T07:01:49Z | Principal-Agent Multitasking: the Uniformity of Optimal Contracts and
its Efficient Learning via Instrumental Regression | This work studies the multitasking principal-agent problem. I first show a "uniformity" result. Specifically, when the tasks are perfect substitutes and the agent's cost function is homogeneous to a certain degree, the optimal contract depends only on the marginal utility of each task and the degree of homogeneity. I then study a setting where the marginal utility of each task is unknown, so that the optimal contract must be learned or estimated with observational data. I identify this problem as a regression problem with measurement errors and observe that it can be cast as an instrumental regression problem. The current work observes that both the contract and the repeated observations (when available) can act as valid instrumental variables, and proposes using the generalized method of moments estimator to compute an approximately optimal contract from offline data. I also study an online setting and show how the optimal contract can be efficiently learned in an online fashion using the two estimators. Here the principal faces an exploration-exploitation tradeoff: she must experiment with new contracts and observe their outcomes while at the same time ensuring her experiments do not deviate too much from the optimal contract. This work shows that when repeated observations are available and agents are sufficiently "diverse", the principal can achieve a very low $\widetilde{O}(d)$ cumulative utility loss, even with a "pure exploitation" algorithm. | [
"['Shiliang Zuo']"
] |
null | null | 2405.20648 | null | null | http://arxiv.org/pdf/2405.20648v1 | 2024-05-31T07:30:24Z | 2024-05-31T07:30:24Z | Shotluck Holmes: A Family of Efficient Small-Scale Large Language Vision
Models For Video Captioning and Summarization | Video is an increasingly prominent and information-dense medium, yet it poses substantial challenges for language models. A typical video consists of a sequence of shorter segments, or shots, that collectively form a coherent narrative. Each shot is analogous to a word in a sentence where multiple data streams of information (such as visual and auditory data) must be processed simultaneously. Comprehension of the entire video requires not only understanding the visual-audio information of each shot but also that the model link the ideas between shots to generate a larger, all-encompassing story. Despite significant progress in the field, current works often overlook videos' more granular shot-by-shot semantic information. In this project, we propose Shotluck Holmes, a family of efficient large language vision models (LLVMs) to boost video summarization and captioning. By leveraging better pretraining and data collection strategies, we extend the abilities of existing small LLVMs from being able to understand a picture to being able to understand a sequence of frames. Specifically, we show that Shotluck Holmes achieves better performance than state-of-the-art results on the Shot2Story video captioning and summary task with significantly smaller and more computationally efficient models. | [
"['Richard Luo' 'Austin Peng' 'Adithya Vasudev' 'Rishabh Jain']"
] |
null | null | 2405.20649 | null | null | http://arxiv.org/pdf/2405.20649v1 | 2024-05-31T07:30:34Z | 2024-05-31T07:30:34Z | Reward-based Input Construction for Cross-document Relation Extraction | Relation extraction (RE) is a fundamental task in natural language processing, aiming to identify relations between target entities in text. While many RE methods are designed for a single sentence or document, cross-document RE has emerged to address relations across multiple long documents. Given the nature of long documents in cross-document RE, extracting document embeddings is challenging due to the length constraints of pre-trained language models. Therefore, we propose REward-based Input Construction (REIC), the first learning-based sentence selector for cross-document RE. REIC extracts sentences based on relational evidence, enabling the RE module to effectively infer relations. Since supervision of evidence sentences is generally unavailable, we train REIC using reinforcement learning with RE prediction scores as rewards. Experimental results demonstrate the superiority of our method over heuristic methods for different RE structures and backbones in cross-document RE. Our code is publicly available at https://github.com/aailabkaist/REIC. | [
"['Byeonghu Na' 'Suhyeon Jo' 'Yeongmin Kim' 'Il-Chul Moon']"
] |
null | null | 2405.20652 | null | null | http://arxiv.org/pdf/2405.20652v1 | 2024-05-31T07:39:22Z | 2024-05-31T07:39:22Z | Sign is Not a Remedy: Multiset-to-Multiset Message Passing for Learning
on Heterophilic Graphs | Graph Neural Networks (GNNs) have gained significant attention as a powerful modeling and inference method, especially for homophilic graph-structured data. To empower GNNs in heterophilic graphs, where adjacent nodes exhibit dissimilar labels or features, Signed Message Passing (SMP) has been widely adopted. However, there is a lack of theoretical and empirical analysis regarding the limitations of SMP. In this work, we unveil some potential pitfalls of SMP and their remedies. We first identify two limitations of SMP: undesirable representation updates for multi-hop neighbors and vulnerability against oversmoothing issues. To overcome these challenges, we propose a novel message passing function called Multiset-to-Multiset GNN (M2M-GNN). Our theoretical analyses and extensive experiments demonstrate that M2M-GNN effectively alleviates the aforementioned limitations of SMP, yielding superior performance in comparison to SMP-based methods. | [
"['Langzhang Liang' 'Sunwoo Kim' 'Kijung Shin' 'Zenglin Xu' 'Shirui Pan'\n 'Yuan Qi']"
] |
null | null | 2405.20664 | null | null | http://arxiv.org/pdf/2405.20664v1 | 2024-05-31T08:03:52Z | 2024-05-31T08:03:52Z | Weak Robust Compatibility Between Learning Algorithms and Counterfactual
Explanation Generation Algorithms | Counterfactual explanation generation is a powerful method for Explainable Artificial Intelligence. It can help users understand why machine learning models make specific decisions, and how to change those decisions. Evaluating the robustness of counterfactual explanation algorithms is therefore crucial. Previous literature has widely studied robustness based on perturbations of input instances. However, robustness defined from the perspective of perturbed instances is sometimes biased, because this definition ignores the impact of learning algorithms on robustness. In this paper, we propose a more reasonable definition, Weak Robust Compatibility, based on the perspective of explanation strength. In practice, we propose WRC-Test to help generate more robust counterfactuals, and we design experiments to verify its effectiveness. Theoretically, we introduce the concepts of PAC learning theory and define the concept of PAC WRC-Approximability. Based on reasonable assumptions, we establish oracle inequalities about weak robustness, which give a sufficient condition for PAC WRC-Approximability. | [
"['Ao Xu' 'Tieru Wu']"
] |
null | null | 2405.20668 | null | null | http://arxiv.org/pdf/2405.20668v1 | 2024-05-31T08:09:36Z | 2024-05-31T08:09:36Z | Improving Paratope and Epitope Prediction by Multi-Modal Contrastive
Learning and Interaction Informativeness Estimation | Accurately predicting antibody-antigen binding residues, i.e., paratopes and epitopes, is crucial in antibody design. However, existing methods solely focus on uni-modal data (either sequence or structure), disregarding the complementary information present in multi-modal data, and most methods predict paratopes and epitopes separately, overlooking their specific spatial interactions. In this paper, we propose a novel Multi-modal contrastive learning and Interaction informativeness estimation-based method for Paratope and Epitope prediction, named MIPE, by using both sequence and structure data of antibodies and antigens. MIPE implements a multi-modal contrastive learning strategy, which maximizes representations of binding and non-binding residues within each modality and meanwhile aligns uni-modal representations towards effective modal representations. To exploit the spatial interaction information, MIPE also incorporates an interaction informativeness estimation that computes the estimated interaction matrices between antibodies and antigens, thereby approximating them to the actual ones. Extensive experiments demonstrate the superiority of our method compared to baselines. Additionally, the ablation studies and visualizations demonstrate the superiority of MIPE owing to the better representations acquired through multi-modal contrastive learning and the interaction patterns comprehended by the interaction informativeness estimation. | [
"['Zhiwei Wang' 'Yongkang Wang' 'Wen Zhang']"
] |
null | null | 2405.20671 | null | null | http://arxiv.org/pdf/2405.20671v1 | 2024-05-31T08:13:35Z | 2024-05-31T08:13:35Z | Position Coupling: Leveraging Task Structure for Improved Length
Generalization of Transformers | Even for simple arithmetic tasks like integer addition, it is challenging for Transformers to generalize to longer sequences than those encountered during training. To tackle this problem, we propose position coupling, a simple yet effective method that directly embeds the structure of the tasks into the positional encoding of a (decoder-only) Transformer. Taking a departure from the vanilla absolute position mechanism assigning unique position IDs to each of the tokens, we assign the same position IDs to two or more "relevant" tokens; for integer addition tasks, we regard digits of the same significance as in the same position. On the empirical side, we show that with the proposed position coupling, a small (1-layer) Transformer trained on 1 to 30-digit additions can generalize up to 200-digit additions (6.67x of the trained length). On the theoretical side, we prove that a 1-layer Transformer with coupled positions can solve the addition task involving exponentially many digits, whereas any 1-layer Transformer without positional information cannot entirely solve it. We also demonstrate that position coupling can be applied to other algorithmic tasks such as addition with multiple summands, Nx2 multiplication, copy/reverse, and a two-dimensional task. | [
"['Hanseul Cho' 'Jaeyoung Cha' 'Pranjal Awasthi' 'Srinadh Bhojanapalli'\n 'Anupam Gupta' 'Chulhee Yun']"
] |
null | null | 2405.20675 | null | null | http://arxiv.org/pdf/2405.20675v1 | 2024-05-31T08:19:44Z | 2024-05-31T08:19:44Z | Adv-KD: Adversarial Knowledge Distillation for Faster Diffusion Sampling | Diffusion Probabilistic Models (DPMs) have emerged as a powerful class of deep generative models, achieving remarkable performance in image synthesis tasks. However, these models face challenges in terms of widespread adoption due to their reliance on sequential denoising steps during sample generation. This dependence leads to substantial computational requirements, making them unsuitable for resource-constrained or real-time processing systems. To address these challenges, we propose a novel method that integrates denoising phases directly into the model's architecture, thereby reducing the need for resource-intensive computations. Our approach combines diffusion models with generative adversarial networks (GANs) through knowledge distillation, enabling more efficient training and evaluation. By utilizing a pre-trained diffusion model as a teacher model, we train a student model through adversarial learning, employing layerwise transformations for denoising and submodules for predicting the teacher model's output at various points in time. This integration significantly reduces the number of parameters and denoising steps required, leading to improved sampling speed at test time. We validate our method with extensive experiments, demonstrating comparable performance with reduced computational requirements compared to existing approaches. By enabling the deployment of diffusion models on resource-constrained devices, our research mitigates their computational burden and paves the way for wider accessibility and practical use across the research community and end-users. Our code is publicly available at https://github.com/kidist-amde/Adv-KD | [
"['Kidist Amde Mekonnen' \"Nicola Dall'Asen\" 'Paolo Rota']"
] |
null | null | 2405.20677 | null | null | http://arxiv.org/pdf/2405.20677v1 | 2024-05-31T08:21:09Z | 2024-05-31T08:21:09Z | Provably Efficient Interactive-Grounded Learning with Personalized
Reward | Interactive-Grounded Learning (IGL) [Xie et al., 2021] is a powerful framework in which a learner aims at maximizing unobservable rewards through interacting with an environment and observing reward-dependent feedback on the taken actions. To deal with personalized rewards that are ubiquitous in applications such as recommendation systems, Maghakian et al. [2022] study a version of IGL with context-dependent feedback, but their algorithm does not come with theoretical guarantees. In this work, we consider the same problem and provide the first provably efficient algorithms with sublinear regret under realizability. Our analysis reveals that the step-function estimator of prior work can deviate uncontrollably due to finite-sample effects. Our solution is a novel Lipschitz reward estimator which underestimates the true reward and enjoys favorable generalization performances. Building on this estimator, we propose two algorithms, one based on explore-then-exploit and the other based on inverse-gap weighting. We apply IGL to learning from image feedback and learning from text feedback, which are reward-free settings that arise in practice. Experimental results showcase the importance of using our Lipschitz reward estimator and the overall effectiveness of our algorithms. | [
"['Mengxiao Zhang' 'Yuheng Zhang' 'Haipeng Luo' 'Paul Mineiro']"
] |
null | null | 2405.20678 | null | null | http://arxiv.org/pdf/2405.20678v1 | 2024-05-31T08:21:11Z | 2024-05-31T08:21:11Z | No-Regret Learning for Fair Multi-Agent Social Welfare Optimization | We consider the problem of online multi-agent Nash social welfare (NSW) maximization. While previous works of Hossain et al. [2021], Jones et al. [2023] study similar problems in stochastic multi-agent multi-armed bandits and show that $\sqrt{T}$-regret is possible after $T$ rounds, their fairness measure is the product of all agents' rewards, instead of their NSW (that is, their geometric mean). Given the fundamental role of NSW in the fairness literature, it is more than natural to ask whether no-regret fair learning with NSW as the objective is possible. In this work, we provide a complete answer to this question in various settings. Specifically, in stochastic $N$-agent $K$-armed bandits, we develop an algorithm with $\widetilde{\mathcal{O}}\left(K^{\frac{2}{N}}T^{\frac{N-1}{N}}\right)$ regret and prove that the dependence on $T$ is tight, making it a sharp contrast to the $\sqrt{T}$-regret bounds of Hossain et al. [2021], Jones et al. [2023]. We then consider a more challenging version of the problem with adversarial rewards. Somewhat surprisingly, despite NSW being a concave function, we prove that no algorithm can achieve sublinear regret. To circumvent such negative results, we further consider a setting with full-information feedback and design two algorithms with $\sqrt{T}$-regret: the first one has no dependence on $N$ at all and is applicable to not just NSW but a broad class of welfare functions, while the second one has better dependence on $K$ and is preferable when $N$ is small. Finally, we also show that logarithmic regret is possible whenever there exists one agent who is indifferent about different arms. | [
"['Mengxiao Zhang' 'Ramiro Deo-Campo Vuong' 'Haipeng Luo']"
] |
null | null | 2405.20685 | null | null | http://arxiv.org/pdf/2405.20685v1 | 2024-05-31T08:26:53Z | 2024-05-31T08:26:53Z | Enhancing Counterfactual Image Generation Using Mahalanobis Distance
with Distribution Preferences in Feature Space | In the realm of Artificial Intelligence (AI), the importance of Explainable Artificial Intelligence (XAI) is increasingly recognized, particularly as AI models become more integral to our lives. One notable single-instance XAI approach is counterfactual explanation, which aids users in comprehending a model's decisions and offers guidance on altering these decisions. Specifically in the context of image classification models, effective image counterfactual explanations can significantly enhance user understanding. This paper introduces a novel method for computing feature importance within the feature space of a black-box model. By employing information fusion techniques, our method maximizes the use of data to address feature counterfactual explanations in the feature space. Subsequently, we utilize an image generation model to transform these feature counterfactual explanations into image counterfactual explanations. Our experiments demonstrate that the counterfactual explanations generated by our method closely resemble the original images in both pixel and feature spaces. Additionally, our method outperforms established baselines, achieving impressive experimental results. | [
"['Yukai Zhang' 'Ao Xu' 'Zihao Li' 'Tieru Wu']"
] |
null | null | 2405.20687 | null | null | http://arxiv.org/pdf/2405.20687v1 | 2024-05-31T08:31:26Z | 2024-05-31T08:31:26Z | Conditioning GAN Without Training Dataset | Deep learning algorithms have a large number of trainable parameters, often numbering in the hundreds of thousands or more. Training these algorithms requires a large amount of training data, and generating a sufficiently large dataset for them is costly \cite{noguchi2019image}. GANs are generative neural networks that use two competing deep learning networks: a generator and a discriminator. The generator tries to generate realistic images that resemble the actual training dataset by approximating the training data distribution, and the discriminator is trained to classify images as real or fake (generated) \cite{goodfellow2016nips}. Training these GAN algorithms also requires a large amount of training data \cite{noguchi2019image}. In this study, the aim is to address the question, "Given an unconditioned pretrained generator network and a pretrained classifier, is it feasible to develop a conditioned generator without relying on any training dataset?" The paper begins with a general introduction to the problem. The subsequent sections are structured as follows: Section 2 provides background information on the problem. Section 3 reviews relevant literature on the topic. Section 4 outlines the methodology employed in this study. Section 5 presents the experimental results. Section 6 discusses the findings and proposes potential future research directions. Finally, Section 7 offers concluding remarks. The implementation can be accessed \href{https://github.com/kidist-amde/BigGAN-PyTorch}{here}. | [
"['Kidist Amde Mekonnen']"
] |
null | null | 2405.20690 | null | null | http://arxiv.org/pdf/2405.20690v1 | 2024-05-31T08:35:56Z | 2024-05-31T08:35:56Z | Unleashing the Potential of Diffusion Models for Incomplete Data
Imputation | This paper introduces DiffPuter, an iterative method for missing data imputation that leverages the Expectation-Maximization (EM) algorithm and Diffusion Models. By treating missing data as hidden variables that can be updated during model training, we frame the missing data imputation task as an EM problem. During the M-step, DiffPuter employs a diffusion model to learn the joint distribution of both the observed and currently estimated missing data. In the E-step, DiffPuter re-estimates the missing data based on the conditional probability given the observed data, utilizing the diffusion model learned in the M-step. Starting with an initial imputation, DiffPuter alternates between the M-step and E-step until convergence. Through this iterative process, DiffPuter progressively refines the complete data distribution, yielding increasingly accurate estimations of the missing data. Our theoretical analysis demonstrates that the unconditional training and conditional sampling processes of the diffusion model align precisely with the objectives of the M-step and E-step, respectively. Empirical evaluations across 10 diverse datasets and comparisons with 16 different imputation methods highlight DiffPuter's superior performance. Notably, DiffPuter achieves an average improvement of 8.10% in MAE and 5.64% in RMSE compared to the most competitive existing method. | [
"['Hengrui Zhang' 'Liancheng Fang' 'Philip S. Yu']"
] |
null | null | 2405.20692 | null | null | http://arxiv.org/pdf/2405.20692v1 | 2024-05-31T08:38:25Z | 2024-05-31T08:38:25Z | In-Context Decision Transformer: Reinforcement Learning via Hierarchical
Chain-of-Thought | In-context learning is a promising approach for offline reinforcement learning (RL) to handle online tasks, which can be achieved by providing task prompts. Recent works demonstrated that in-context RL could emerge with self-improvement in a trial-and-error manner when treating RL tasks as an across-episodic sequential prediction problem. Despite the self-improvement not requiring gradient updates, current works still suffer from high computational costs when the across-episodic sequence increases with task horizons. To this end, we propose an In-context Decision Transformer (IDT) to achieve self-improvement in a high-level trial-and-error manner. Specifically, IDT is inspired by the efficient hierarchical structure of human decision-making and thus reconstructs the sequence to consist of high-level decisions instead of low-level actions that interact with environments. As one high-level decision can guide multi-step low-level actions, IDT naturally avoids excessively long sequences and solves online tasks more efficiently. Experimental results show that IDT achieves state-of-the-art in long-horizon tasks over current in-context RL methods. In particular, the online evaluation time of our IDT is \textbf{36$\times$} faster than baselines in the D4RL benchmark and \textbf{27$\times$} faster in the Grid World benchmark. | [
"['Sili Huang' 'Jifeng Hu' 'Hechang Chen' 'Lichao Sun' 'Bo Yang']"
] |
null | null | 2405.20717 | null | null | http://arxiv.org/pdf/2405.20717v1 | 2024-05-31T09:14:36Z | 2024-05-31T09:14:36Z | Cyclic image generation using chaotic dynamics | Successive image generation using cyclic transformations is demonstrated by extending the CycleGAN model to transform images among three different categories. Repeated application of the trained generators produces sequences of images that transition among the different categories. The generated image sequences occupy a more limited region of the image space compared with the original training dataset. Quantitative evaluation using precision and recall metrics indicates that the generated images have high quality but reduced diversity relative to the training dataset. Such successive generation processes are characterized as chaotic dynamics in terms of dynamical system theory. Positive Lyapunov exponents estimated from the generated trajectories confirm the presence of chaotic dynamics, with the Lyapunov dimension of the attractor found to be comparable to the intrinsic dimension of the training data manifold. The results suggest that chaotic dynamics in the image space defined by the deep generative model contribute to the diversity of the generated images, constituting a novel approach for multi-class image generation. This model can be interpreted as an extension of classical associative memory to perform hetero-association among image categories. | [
"['Takaya Tanaka' 'Yutaka Yamaguti']"
] |
null | null | 2405.20724 | null | null | http://arxiv.org/pdf/2405.20724v1 | 2024-05-31T09:26:26Z | 2024-05-31T09:26:26Z | Learning on Large Graphs using Intersecting Communities | Message Passing Neural Networks (MPNNs) are a staple of graph machine learning. MPNNs iteratively update each node's representation in an input graph by aggregating messages from the node's neighbors, which necessitates a memory complexity of the order of the number of graph edges. This complexity might quickly become prohibitive for large graphs provided they are not very sparse. In this paper, we propose a novel approach to alleviate this problem by approximating the input graph as an intersecting community graph (ICG) -- a combination of intersecting cliques. The key insight is that the number of communities required to approximate a graph does not depend on the graph size. We develop a new constructive version of the Weak Graph Regularity Lemma to efficiently construct an approximating ICG for any input graph. We then devise an efficient graph learning algorithm operating directly on ICG in linear memory and time with respect to the number of nodes (rather than edges). This offers a new and fundamentally different pipeline for learning on very large non-sparse graphs, whose applicability is demonstrated empirically on node classification tasks and spatio-temporal data processing. | [
"['Ben Finkelshtein' 'İsmail İlkan Ceylan' 'Michael Bronstein' 'Ron Levie']"
] |
null | null | 2405.20731 | null | null | http://arxiv.org/pdf/2405.20731v1 | 2024-05-31T09:39:41Z | 2024-05-31T09:39:41Z | Maximum Temperature Prediction Using Remote Sensing Data Via
Convolutional Neural Network | Urban heat islands, defined as specific zones exhibiting substantially higher temperatures than their immediate environs, pose significant threats to environmental sustainability and public health. This study introduces a novel machine-learning model that amalgamates data from the Sentinel-3 satellite, meteorological predictions, and additional remote sensing inputs. The primary aim is to generate detailed spatiotemporal maps that forecast the peak temperatures within a 24-hour period in Turin. Experimental results validate the model's proficiency in predicting temperature patterns, achieving a Mean Absolute Error (MAE) of 2.09 degrees Celsius for the year 2023 at a resolution of 20 meters per pixel, thereby enriching our knowledge of urban climatic behavior. This investigation enhances the understanding of urban microclimates, emphasizing the importance of cross-disciplinary data integration, and laying the groundwork for informed policy-making aimed at alleviating the negative impacts of extreme urban temperatures. | [
"['Lorenzo Innocenti' 'Giacomo Blanco' 'Luca Barco' 'Claudio Rossi']"
] |
null | null | 2405.20738 | null | null | http://arxiv.org/pdf/2405.20738v1 | 2024-05-31T10:07:24Z | 2024-05-31T10:07:24Z | Federated Random Forest for Partially Overlapping Clinical Data | In the healthcare sector, a consciousness surrounding data privacy and corresponding data protection regulations, as well as heterogeneous and non-harmonized data, poses huge challenges to large-scale data analysis. Moreover, clinical data often involves partially overlapping features, as some observations may be missing due to various reasons, such as differences in procedures, diagnostic tests, or other recorded patient history information across hospitals or institutes. To address the challenges posed by partially overlapping features and incomplete data in clinical datasets, a comprehensive approach is required. Particularly in the domain of medical data, promising outcomes are achieved by federated random forests whenever features align. However, for most standard algorithms, like random forest, it is essential that all data sets have identical parameters. Therefore, in this work the concept of federated random forest is adapted to a setting with partially overlapping features. Moreover, our research assesses the effectiveness of the newly developed federated random forest models for partially overlapping clinical data. For aggregating the federated, globally optimized model, only features available locally at each site can be used. We tackled two issues in federation: (i) the quantity of involved parties, (ii) the varying overlap of features. This evaluation was conducted across three clinical datasets. The federated random forest model, even in cases where only a subset of features overlaps, consistently demonstrates superior performance compared to its local counterpart. This holds true across various scenarios, including datasets with imbalanced classes. Consequently, federated random forests for partially overlapped data offer a promising solution to transcend barriers in collaborative research and corporate cooperation. | [
"['Youngjun Park' 'Cord Eric Schmidt' 'Benedikt Marcel Batton'\n 'Anne-Christin Hauschild']"
] |
null | null | 2405.20743 | null | null | http://arxiv.org/pdf/2405.20743v1 | 2024-05-31T10:13:17Z | 2024-05-31T10:13:17Z | Trajectory Forecasting through Low-Rank Adaptation of Discrete Latent
Codes | Trajectory forecasting is crucial for video surveillance analytics, as it enables the anticipation of future movements for a set of agents, e.g. basketball players engaged in intricate interactions with long-term intentions. Deep generative models offer a natural learning approach for trajectory forecasting, yet they encounter difficulties in achieving an optimal balance between sampling fidelity and diversity. We address this challenge by leveraging Vector Quantized Variational Autoencoders (VQ-VAEs), which utilize a discrete latent space to tackle the issue of posterior collapse. Specifically, we introduce an instance-based codebook that allows tailored latent representations for each example. In a nutshell, the rows of the codebook are dynamically adjusted to reflect contextual information (i.e., past motion patterns extracted from the observed trajectories). In this way, the discretization process gains flexibility, leading to improved reconstructions. Notably, instance-level dynamics are injected into the codebook through low-rank updates, which restrict the customization of the codebook to a lower dimension space. The resulting discrete space serves as the basis of the subsequent step, which regards the training of a diffusion-based predictive model. We show that such a two-fold framework, augmented with instance-level discretization, leads to accurate and diverse forecasts, yielding state-of-the-art performance on three established benchmarks. | [
"['Riccardo Benaglia' 'Angelo Porrello' 'Pietro Buzzega' 'Simone Calderara'\n 'Rita Cucchiara']"
] |
null | null | 2405.20748 | null | null | http://arxiv.org/pdf/2405.20748v1 | 2024-05-31T10:30:14Z | 2024-05-31T10:30:14Z | OpenTensor: Reproducing Faster Matrix Multiplication Discovering
Algorithms | OpenTensor is a reproduction of AlphaTensor, which discovered a new algorithm that outperforms the state-of-the-art methods for matrix multiplication by Deep Reinforcement Learning (DRL). While AlphaTensor provides a promising framework for solving scientific problems, it is hard to reproduce due to its many undocumented tricks and the lack of source code. In this paper, we clean up the algorithm pipeline, clarify the technical details, and make some improvements to the training process. Computational results show that OpenTensor can successfully find efficient matrix multiplication algorithms. | [
"['Yiwen Sun' 'Wenye Li']"
] |
null | null | 2405.20759 | null | null | http://arxiv.org/pdf/2405.20759v1 | 2024-05-31T12:20:02Z | 2024-05-31T12:20:02Z | Information Theoretic Text-to-Image Alignment | Diffusion models for Text-to-Image (T2I) conditional generation have seen tremendous success recently. Despite their success, accurately capturing user intentions with these models still requires a laborious trial and error process. This challenge is commonly identified as a model alignment problem, an issue that has attracted considerable attention by the research community. Instead of relying on fine-grained linguistic analyses of prompts, human annotation, or auxiliary vision-language models to steer image generation, in this work we present a novel method that relies on an information-theoretic alignment measure. In a nutshell, our method uses self-supervised fine-tuning and relies on point-wise mutual information between prompts and images to define a synthetic training set to induce model alignment. Our comparative analysis shows that our method is on-par or superior to the state-of-the-art, yet requires nothing but a pre-trained denoising network to estimate MI and a lightweight fine-tuning strategy. | [
"['Chao Wang' 'Giulio Franzese' 'Alessandro Finamore' 'Massimo Gallo'\n 'Pietro Michiardi']"
] |
null | null | 2405.20761 | null | null | http://arxiv.org/pdf/2405.20761v1 | 2024-05-31T12:27:38Z | 2024-05-31T12:27:38Z | Share Your Secrets for Privacy! Confidential Forecasting with Vertical
Federated Learning | Vertical federated learning (VFL) is a promising area for time series forecasting in industrial applications, such as predictive maintenance and machine control. Critical challenges to address in manufacturing include data privacy and over-fitting on small and noisy datasets during both training and inference. Additionally, to increase industry adaptability, such forecasting models must scale well with the number of parties while ensuring strong convergence and low-tuning complexity. We address those challenges and propose 'Secret-shared Time Series Forecasting with VFL' (STV), a novel framework that exhibits the following key features: i) a privacy-preserving algorithm for forecasting with SARIMAX and autoregressive trees on vertically partitioned data; ii) serverless forecasting using secret sharing and multi-party computation; iii) novel N-party algorithms for matrix multiplication and inverse operations for direct parameter optimization, giving strong convergence with minimal hyperparameter tuning complexity. We conduct evaluations on six representative datasets from public and industry-specific contexts. Our results demonstrate that STV's forecasting accuracy is comparable to those of centralized approaches. They also show that our direct optimization can outperform centralized methods, which include state-of-the-art diffusion models and long-short-term memory, by 23.81% on forecasting accuracy. We also conduct a scalability analysis by examining the communication costs of direct and iterative optimization to navigate the choice between the two. Code and appendix are available: https://github.com/adis98/STV | [
"['Aditya Shankar' 'Lydia Y. Chen' 'Jérémie Decouchant' 'Dimitra Gkorou'\n 'Rihan Hai']"
] |
null | null | 2405.20763 | null | null | http://arxiv.org/pdf/2405.20763v1 | 2024-05-31T12:32:34Z | 2024-05-31T12:32:34Z | Improving Generalization and Convergence by Enhancing Implicit
Regularization | In this work, we propose an Implicit Regularization Enhancement (IRE) framework to accelerate the discovery of flat solutions in deep learning, thereby improving generalization and convergence. Specifically, IRE decouples the dynamics of flat and sharp directions, which boosts the sharpness reduction along flat directions while maintaining the training stability in sharp directions. We show that IRE can be practically incorporated with {\em generic base optimizers} without introducing significant computational overhead. Experiments show that IRE consistently improves the generalization performance for image classification tasks across a variety of benchmark datasets (CIFAR-10/100, ImageNet) and models (ResNets and ViTs). Surprisingly, IRE also achieves a $2\times$ {\em speed-up} compared to AdamW in the pre-training of Llama models (of sizes ranging from 60M to 229M) on datasets including Wikitext-103, Minipile, and Openwebtext. Moreover, we provide theoretical guarantees, showing that IRE can substantially accelerate the convergence towards flat minima in Sharpness-aware Minimization (SAM). | [
"['Mingze Wang' 'Haotian He' 'Jinbo Wang' 'Zilin Wang' 'Guanhua Huang'\n 'Feiyu Xiong' 'Zhiyu Li' 'Weinan E' 'Lei Wu']"
] |
null | null | 2405.20768 | null | null | http://arxiv.org/pdf/2405.20768v1 | 2024-05-25T09:12:17Z | 2024-05-25T09:12:17Z | Expanded Gating Ranges Improve Activation Functions | Activation functions are core components of all deep learning architectures. Currently, the most popular activation functions are smooth ReLU variants like GELU and SiLU. These are self-gated activation functions where the range of the gating function is between zero and one. In this paper, we explore the viability of using arctan as a gating mechanism. A self-gated activation function that uses arctan as its gating function has a monotonically increasing first derivative. To make this activation function competitive, it is necessary to introduce a trainable parameter for every MLP block to expand the range of the gating function beyond zero and one. We find that this technique also improves existing self-gated activation functions. We conduct an empirical evaluation of Expanded ArcTan Linear Unit (xATLU), Expanded GELU (xGELU), and Expanded SiLU (xSiLU) and show that they outperform existing activation functions within a transformer architecture. Additionally, expanded gating ranges show promising results in improving first-order Gated Linear Units (GLU). | [
"['Allen Hao Huang']"
] |
null | null | 2405.20769 | null | null | http://arxiv.org/pdf/2405.20769v1 | 2024-05-27T20:30:12Z | 2024-05-27T20:30:12Z | Avoiding Pitfalls for Privacy Accounting of Subsampled Mechanisms under
Composition | We consider the problem of computing tight privacy guarantees for the composition of subsampled differentially private mechanisms. Recent algorithms can numerically compute the privacy parameters to arbitrary precision but must be carefully applied. Our main contribution is to address two common points of confusion. First, some privacy accountants assume that the privacy guarantees for the composition of a subsampled mechanism are determined by self-composing the worst-case datasets for the uncomposed mechanism. We show that this is not true in general. Second, Poisson subsampling is sometimes assumed to have similar privacy guarantees compared to sampling without replacement. We show that the privacy guarantees may in fact differ significantly between the two sampling schemes. In particular, we give an example of hyperparameters that result in $\varepsilon \approx 1$ for Poisson subsampling and $\varepsilon > 10$ for sampling without replacement. This occurs for some parameters that could realistically be chosen for DP-SGD. | [
"['Christian Janos Lebeda' 'Matthew Regehr' 'Gautam Kamath'\n 'Thomas Steinke']"
] |
null | null | 2405.20771 | null | null | http://arxiv.org/pdf/2405.20771v1 | 2024-05-25T12:47:58Z | 2024-05-25T12:47:58Z | Towards Black-Box Membership Inference Attack for Diffusion Models | Identifying whether an artwork was used to train a diffusion model is an important research topic, given the rising popularity of AI-generated art and the associated copyright concerns. The work approaches this problem from the membership inference attack (MIA) perspective. We first identify the limitations of applying existing MIA methods for copyright protection: the required access of internal U-nets and the choice of non-member datasets for evaluation. To address the above problems, we introduce a novel black-box membership inference attack method that operates without needing access to the model's internal U-net. We then construct a DALL-E generated dataset for a more comprehensive evaluation. We validate our method across various setups, and our experimental results outperform previous works. | [
"['Jingwei Li' 'Jing Dong' 'Tianxing He' 'Jingzhao Zhang']"
] |
null | null | 2405.20772 | null | null | http://arxiv.org/pdf/2405.20772v1 | 2024-05-31T13:28:37Z | 2024-05-31T13:28:37Z | Reinforcement Learning for Sociohydrology | In this study, we discuss how reinforcement learning (RL) provides an effective and efficient framework for solving sociohydrology problems. The efficacy of RL for these types of problems is evident because of its ability to update policies in an iterative manner - something that is also foundational to sociohydrology, where we are interested in representing the co-evolution of human-water interactions. We present a simple case study to demonstrate the implementation of RL in a problem of runoff reduction through management decisions related to changes in land-use land-cover (LULC). We then discuss the benefits of RL for these types of problems and share our perspectives on the future research directions in this area. | [
"['Tirthankar Roy' 'Shivendra Srivastava' 'Beichen Zhang']"
] |
null | null | 2405.20776 | null | null | http://arxiv.org/pdf/2405.20776v1 | 2024-05-27T04:35:49Z | 2024-05-27T04:35:49Z | Federated Learning with Blockchain-Enhanced Machine Unlearning: A
Trustworthy Approach | With the growing need to comply with privacy regulations and respond to user data deletion requests, integrating machine unlearning into IoT-based federated learning has become imperative. Traditional unlearning methods, however, often lack verifiable mechanisms, leading to challenges in establishing trust. This paper delves into the innovative integration of blockchain technology with federated learning to surmount these obstacles. Blockchain fortifies the unlearning process through its inherent qualities of immutability, transparency, and robust security. It facilitates verifiable certification, harmonizes security with privacy, and sustains system efficiency. We introduce a framework that melds blockchain with federated learning, thereby ensuring an immutable record of unlearning requests and actions. This strategy not only bolsters the trustworthiness and integrity of the federated learning model but also adeptly addresses efficiency and security challenges typical in IoT environments. Our key contributions encompass a certification mechanism for the unlearning process, the enhancement of data security and privacy, and the optimization of data management to ensure system responsiveness in IoT scenarios. | [
"['Xuhan Zuo' 'Minghao Wang' 'Tianqing Zhu' 'Lefeng Zhang' 'Shui Yu'\n 'Wanlei Zhou']"
] |
null | null | 2405.20777 | null | null | http://arxiv.org/pdf/2405.20777v2 | 2024-07-13T15:47:35Z | 2024-05-28T08:41:30Z | Black-Box Detection of Language Model Watermarks | Watermarking has emerged as a promising way to detect LLM-generated text. To apply a watermark, an LLM provider, given a secret key, augments generations with a signal that is later detectable by any party with the same key. Recent work has proposed three main families of watermarking schemes, two of which focus on the property of preserving the LLM distribution. This is motivated by it being a tractable proxy for maintaining LLM capabilities, but also by the idea that concealing a watermark deployment makes it harder for malicious actors to hide misuse by avoiding a certain LLM or attacking its watermark. Yet, despite much discourse around detectability, no prior work has investigated if any of these scheme families are detectable in a realistic black-box setting. We tackle this for the first time, developing rigorous statistical tests to detect the presence of all three most popular watermarking scheme families using only a limited number of black-box queries. We experimentally confirm the effectiveness of our methods on a range of schemes and a diverse set of open-source models. Our findings indicate that current watermarking schemes are more detectable than previously believed, and that obscuring the fact that a watermark was deployed may not be a viable way for providers to protect against adversaries. We further apply our methods to test for watermark presence behind the most popular public APIs: GPT4, Claude 3, Gemini 1.0 Pro, finding no strong evidence of a watermark at this point in time. | [
"['Thibaud Gloaguen' 'Nikola Jovanović' 'Robin Staab' 'Martin Vechev']"
] |
null | null | 2405.20778 | null | null | http://arxiv.org/pdf/2405.20778v1 | 2024-05-28T06:10:12Z | 2024-05-28T06:10:12Z | Improved Generation of Adversarial Examples Against Safety-aligned LLMs | Despite numerous efforts to ensure large language models (LLMs) adhere to safety standards and produce harmless content, some successes have been achieved in bypassing these restrictions, known as jailbreak attacks against LLMs. Adversarial prompts generated using gradient-based methods exhibit outstanding performance in performing jailbreak attacks automatically. Nevertheless, due to the discrete nature of texts, the input gradient of LLMs struggles to precisely reflect the magnitude of loss change that results from token replacements in the prompt, leading to limited attack success rates against safety-aligned LLMs, even in the white-box setting. In this paper, we explore a new perspective on this problem, suggesting that it can be alleviated by leveraging innovations inspired in transfer-based attacks that were originally proposed for attacking black-box image classification models. For the first time, we appropriate the ideologies of effective methods among these transfer-based attacks, i.e., Skip Gradient Method and Intermediate Level Attack, for improving the effectiveness of automatically generated adversarial examples against white-box LLMs. With appropriate adaptations, we inject these ideologies into gradient-based adversarial prompt generation processes and achieve significant performance gains without introducing obvious computational cost. Meanwhile, by discussing mechanisms behind the gains, new insights are drawn, and proper combinations of these methods are also developed. Our empirical results show that the developed combination achieves >30% absolute increase in attack success rates compared with GCG for attacking the Llama-2-7B-Chat model on AdvBench. | [
"['Qizhang Li' 'Yiwen Guo' 'Wangmeng Zuo' 'Hao Chen']"
] |