categories (string) | doi (string) | id (string) | year (float64) | venue (string) | link (string) | updated (string) | published (string) | title (string) | abstract (string) | authors (list) |
---|---|---|---|---|---|---|---|---|---|---|
null | null | 2402.12142 | null | null | http://arxiv.org/abs/2402.12142v1 | 2024-02-19T13:52:37Z | 2024-02-19T13:52:37Z | Federated Bayesian Network Ensembles |
Federated learning allows us to run machine learning algorithms on decentralized data when data sharing is not permitted due to privacy concerns. Ensemble-based learning works by training multiple (weak) classifiers whose output is aggregated. Federated ensembles are ensembles applied to a federated setting, where each classifier in the ensemble is trained on one data location. In this article, we explore the use of federated ensembles of Bayesian networks (FBNE) in a range of experiments and compare their performance with locally trained models and models trained with VertiBayes, a federated learning algorithm to train Bayesian networks from decentralized data. Our results show that FBNE outperforms local models and provides a significant increase in training speed compared with VertiBayes while maintaining a similar performance in most settings, among other advantages. We show that FBNE is a potentially useful tool within the federated learning toolbox, especially when local populations are heavily biased, or there is a strong imbalance in population size across parties. We discuss the advantages and disadvantages of this approach in terms of time complexity, model accuracy, privacy protection, and model interpretability.
| ["['Florian van Daalen' 'Lianne Ippel' 'Andre Dekker' 'Inigo Bermejo']"] |
null | null | 2402.12146 | null | null | http://arxiv.org/pdf/2402.12146v3 | 2024-05-31T03:25:42Z | 2024-02-19T13:57:55Z | Enabling Weak LLMs to Judge Response Reliability via Meta Ranking |
Despite the strong performance of large language models (LLMs) across a wide range of tasks, they still have reliability issues. Previous studies indicate that strong LLMs like GPT-4-turbo excel in evaluating the reliability of responses from LLMs, but face efficiency and local deployment issues. Thus, to enable weak LLMs to effectively assess the reliability of LLM responses, we propose a novel cross-query-comparison-based method called $\textit{Meta Ranking}$ (MR). Unlike previous few-shot methods that rely solely on the in-context learning capabilities of LLMs, MR assesses reliability by pairwise ranking of the target query-response pair against multiple reference query-response pairs. We found that MR is highly effective in error detection for LLM responses, where weak LLMs, such as Phi-2, could surpass strong baselines like GPT-3.5-turbo, requiring only five reference samples and significantly improving efficiency. We further demonstrate that MR can enhance strong LLMs' performance in two practical applications: model cascading and instruction tuning. In model cascading, we combine open- and closed-source LLMs to achieve performance comparable to GPT-4-turbo with lower costs. In instruction tuning, we use MR for iterative training data filtering, significantly reducing data processing time and enabling LLaMA-7B and Phi-2 to surpass Alpaca-13B with fewer training tokens. These results underscore the high potential of MR in both efficiency and effectiveness.
| ["['Zijun Liu' 'Boqun Kou' 'Peng Li' 'Ming Yan' 'Ji Zhang' 'Fei Huang' 'Yang Liu']"] |
null | null | 2402.12149 | null | null | http://arxiv.org/pdf/2402.12149v1 | 2024-02-19T14:02:13Z | 2024-02-19T14:02:13Z | MLFEF: Machine Learning Fusion Model with Empirical Formula to Explore the Momentum in Competitive Sports |
Tennis is so popular that coaches and players are curious about factors other than skill, such as momentum. This article attempts to define and quantify momentum, providing a basis for real-time analysis of tennis matches. Based on tennis Grand Slam men's singles match data from recent years, we built two models: one data-driven, and the other based on empirical formulas. For the data-driven model, we first collected a large amount of public data, including data on tennis matches from the past five years and personal information on players. The data was then preprocessed and feature-engineered, and a fusion model of SVM, Random Forest and XGBoost was established. For the mechanism analysis model, important features were selected based on the suggestions of many tennis players and enthusiasts, a sliding window algorithm was used to calculate the weights, and different methods were used to visualize the momentum. Further analysis of momentum fluctuations is based on the CUMSUM algorithm, popular in industry, as well as the run test; the results show that momentum is not random, while the trend might be. Finally, the robustness of the fusion model is analyzed by Monte Carlo simulation.
| ["['Ruixin Peng' 'Ziqing Li']"] |
null | null | 2402.12161 | null | null | http://arxiv.org/abs/2402.12161v2 | 2024-02-20T09:03:43Z | 2024-02-19T14:16:08Z | Endowing Pre-trained Graph Models with Provable Fairness |
Pre-trained graph models (PGMs) aim to capture transferable inherent structural properties and apply them to different downstream tasks. Similar to pre-trained language models, PGMs also inherit biases from human society, resulting in discriminatory behavior in downstream applications. The debiasing process of existing fair methods is generally coupled with parameter optimization of GNNs. However, different downstream tasks may be associated with different sensitive attributes in reality, directly employing existing methods to improve the fairness of PGMs is inflexible and inefficient. Moreover, most of them lack a theoretical guarantee, i.e., provable lower bounds on the fairness of model predictions, which directly provides assurance in a practical scenario. To overcome these limitations, we propose a novel adapter-tuning framework that endows pre-trained graph models with provable fairness (called GraphPAR). GraphPAR freezes the parameters of PGMs and trains a parameter-efficient adapter to flexibly improve the fairness of PGMs in downstream tasks. Specifically, we design a sensitive semantic augmenter on node representations, to extend the node representations with different sensitive attribute semantics for each node. The extended representations will be used to further train an adapter, to prevent the propagation of sensitive attribute semantics from PGMs to task predictions. Furthermore, with GraphPAR, we quantify whether the fairness of each node is provable, i.e., predictions are always fair within a certain range of sensitive attribute semantics. Experimental evaluations on real-world datasets demonstrate that GraphPAR achieves state-of-the-art prediction performance and fairness on node classification task. Furthermore, based on our GraphPAR, around 90% nodes have provable fairness.
| ["['Zhongjian Zhang' 'Mengmei Zhang' 'Yue Yu' 'Cheng Yang' 'Jiawei Liu' 'Chuan Shi']"] |
null | null | 2402.12175 | null | null | http://arxiv.org/pdf/2402.12175v1 | 2024-02-19T14:29:35Z | 2024-02-19T14:29:35Z | Learning Discretized Bayesian Networks with GOMEA |
Bayesian networks model relationships between random variables under uncertainty and can be used to predict the likelihood of events and outcomes while incorporating observed evidence. From an eXplainable AI (XAI) perspective, such models are interesting as they tend to be compact. Moreover, captured relations can be directly inspected by domain experts. In practice, data is often real-valued. Unless assumptions of normality can be made, discretization is often required. The optimal discretization, however, depends on the relations modelled between the variables. This complicates learning Bayesian networks from data. For this reason, most literature focuses on learning conditional dependencies between sets of variables, called structure learning. In this work, we extend an existing state-of-the-art structure learning approach based on the Gene-pool Optimal Mixing Evolutionary Algorithm (GOMEA) to jointly learn variable discretizations. The proposed Discretized Bayesian Network GOMEA (DBN-GOMEA) obtains similar or better results than the current state-of-the-art when tasked to retrieve randomly generated ground-truth networks. Moreover, leveraging a key strength of evolutionary algorithms, we can straightforwardly perform DBN learning multi-objectively. We show how this enables incorporating expert knowledge in a uniquely insightful fashion, finding multiple DBNs that trade-off complexity, accuracy, and the difference with a pre-determined expert network.
| ["['Damy M. F. Ha' 'Tanja Alderliesten' 'Peter A. N. Bosman']"] |
null | null | 2402.12177 | null | null | http://arxiv.org/pdf/2402.12177v4 | 2024-03-12T16:04:23Z | 2024-02-19T14:33:24Z | Mafin: Enhancing Black-Box Embeddings with Model Augmented Fine-Tuning |
Retrieval Augmented Generation (RAG) has emerged as an effective solution for mitigating hallucinations in Large Language Models (LLMs). The retrieval stage in RAG typically involves a pre-trained embedding model, which converts queries and passages into vectors to capture their semantics. However, a standard pre-trained embedding model may exhibit sub-optimal performance when applied to specific domain knowledge, necessitating fine-tuning. This paper addresses scenarios where the embeddings are only available from a black-box model. We introduce Model augmented fine-tuning (Mafin) -- a novel approach for fine-tuning a black-box embedding model by augmenting it with a trainable embedding model. Our results demonstrate that Mafin significantly enhances the performance of the black-box embeddings by only requiring the training of a small augmented model. We validate the effectiveness of our method on both labeled and unlabeled datasets, illustrating its broad applicability and efficiency.
| ["['Mingtian Zhang' 'Shawn Lan' 'Peter Hayes' 'David Barber']"] |
null | null | 2402.12181 | null | null | http://arxiv.org/pdf/2402.12181v1 | 2024-02-19T14:42:10Z | 2024-02-19T14:42:10Z | Revisiting Data Augmentation in Deep Reinforcement Learning |
Various data augmentation techniques have been recently proposed in image-based deep reinforcement learning (DRL). Although they empirically demonstrate the effectiveness of data augmentation for improving sample efficiency or generalization, which technique should be preferred is not always clear. To tackle this question, we analyze existing methods to better understand them and to uncover how they are connected. Notably, by expressing the variance of the Q-targets and that of the empirical actor/critic losses of these methods, we can analyze the effects of their different components and compare them. We furthermore formulate an explanation about how these methods may be affected by choosing different data augmentation transformations in calculating the target Q-values. This analysis suggests recommendations on how to exploit data augmentation in a more principled way. In addition, we include a regularization term called tangent prop, previously proposed in computer vision, but whose adaptation to DRL is novel to the best of our knowledge. We evaluate our proposition and validate our analysis in several domains. Compared to different relevant baselines, we demonstrate that it achieves state-of-the-art performance in most environments and shows higher sample efficiency and better generalization ability in some complex environments.
| ["['Jianshu Hu' 'Yunpeng Jiang' 'Paul Weng']"] |
null | null | 2402.12187 | null | null | http://arxiv.org/pdf/2402.12187v1 | 2024-02-19T14:51:20Z | 2024-02-19T14:51:20Z | Adversarial Feature Alignment: Balancing Robustness and Accuracy in Deep Learning via Adversarial Training |
Deep learning models continue to advance in accuracy, yet they remain vulnerable to adversarial attacks, which often lead to the misclassification of adversarial examples. Adversarial training is used to mitigate this problem by increasing robustness against these attacks. However, this approach typically reduces a model's standard accuracy on clean, non-adversarial samples. The necessity for deep learning models to balance both robustness and accuracy for security is obvious, but achieving this balance remains challenging, and the underlying reasons are yet to be clarified. This paper proposes a novel adversarial training method called Adversarial Feature Alignment (AFA), to address these problems. Our research unveils an intriguing insight: misalignment within the feature space often leads to misclassification, regardless of whether the samples are benign or adversarial. AFA mitigates this risk by employing a novel optimization algorithm based on contrastive learning to alleviate potential feature misalignment. Through our evaluations, we demonstrate the superior performance of AFA. The baseline AFA delivers higher robust accuracy than previous adversarial contrastive learning methods while minimizing the drop in clean accuracy to 1.86% and 8.91% on CIFAR10 and CIFAR100, respectively, in comparison to cross-entropy. We also show that joint optimization of AFA and TRADES, accompanied by data augmentation using a recent diffusion model, achieves state-of-the-art accuracy and robustness.
| ["['Leo Hyun Park' 'Jaeuk Kim' 'Myung Gyo Oh' 'Jaewoo Park' 'Taekyoung Kwon']"] |
null | null | 2402.12189 | null | null | http://arxiv.org/pdf/2402.12189v1 | 2024-02-19T14:52:50Z | 2024-02-19T14:52:50Z | Amplifying Training Data Exposure through Fine-Tuning with Pseudo-Labeled Memberships |
Neural language models (LMs) are vulnerable to training data extraction attacks due to data memorization. This paper introduces a novel attack scenario wherein an attacker adversarially fine-tunes pre-trained LMs to amplify the exposure of the original training data. This strategy differs from prior studies by aiming to intensify the LM's retention of its pre-training dataset. To achieve this, the attacker needs to collect generated texts that are closely aligned with the pre-training data. However, without knowledge of the actual dataset, quantifying the amount of pre-training data within generated texts is challenging. To address this, we propose the use of pseudo-labels for these generated texts, leveraging membership approximations indicated by machine-generated probabilities from the target LM. We subsequently fine-tune the LM to favor generations with higher likelihoods of originating from the pre-training data, based on their membership probabilities. Our empirical findings indicate a remarkable outcome: LMs with over 1B parameters exhibit a four to eight-fold increase in training data exposure. We discuss potential mitigations and suggest future research directions.
| ["['Myung Gyo Oh' 'Hong Eun Ahn' 'Leo Hyun Park' 'Taekyoung Kwon']"] |
null | null | 2402.12190 | null | null | http://arxiv.org/pdf/2402.12190v2 | 2024-03-20T21:21:48Z | 2024-02-19T14:54:20Z | Towards AI-Based Precision Oncology: A Machine Learning Framework for Personalized Counterfactual Treatment Suggestions based on Multi-Omics Data |
AI-driven precision oncology has the transformative potential to reshape cancer treatment by leveraging the power of AI models to analyze the interaction between complex patient characteristics and their corresponding treatment outcomes. New technological platforms have facilitated the timely acquisition of multimodal data on tumor biology at an unprecedented resolution, such as single-cell multi-omics data, making this quality and quantity of data available for data-driven improved clinical decision-making. In this work, we propose a modular machine learning framework designed for personalized counterfactual cancer treatment suggestions based on an ensemble of machine learning experts trained on diverse multi-omics technologies. These specialized counterfactual experts per technology are consistently aggregated into a more powerful expert with superior performance and can provide both confidence and an explanation of its decision. The framework is tailored to address critical challenges inherent in data-driven cancer research, including the high-dimensional nature of the data, and the presence of treatment assignment bias in the retrospective observational data. The framework is showcased through comprehensive demonstrations using data from in-vitro and in-vivo treatment responses from a cohort of patients with ovarian cancer. Our method aims to empower clinicians with a reality-centric decision-support tool including probabilistic treatment suggestions with calibrated confidence and personalized explanations for tailoring treatment strategies to multi-omics characteristics of individual cancer patients.
| ["['Manuel Schürch' 'Laura Boos' 'Viola Heinzelmann-Schwarz' 'Gabriele Gut' 'Michael Krauthammer' 'Andreas Wicki' 'Tumor Profiler Consortium']"] |
null | null | 2402.12198 | null | null | http://arxiv.org/pdf/2402.12198v1 | 2024-02-19T15:03:04Z | 2024-02-19T15:03:04Z | Zero shot VLMs for hate meme detection: Are we there yet? |
Multimedia content on social media is rapidly evolving, with memes gaining prominence as a distinctive form. Unfortunately, some malicious users exploit memes to target individuals or vulnerable communities, making it imperative to identify and address such instances of hateful memes. Extensive research has been conducted to address this issue by developing hate meme detection models. However, a notable limitation of traditional machine/deep learning models is the requirement for labeled datasets for accurate classification. Recently, the research community has witnessed the emergence of several visual language models that have exhibited outstanding performance across various tasks. In this study, we aim to investigate the efficacy of these visual language models in handling intricate tasks such as hate meme detection. We use various prompt settings to focus on zero-shot classification of hateful/harmful memes. Through our analysis, we observe that large VLMs are still vulnerable for zero-shot hate meme detection.
| ["['Naquee Rizwan' 'Paramananda Bhaskar' 'Mithun Das' 'Swadhin Satyaprakash Majhi' 'Punyajoy Saha' 'Animesh Mukherjee']"] |
null | null | 2402.12201 | null | null | http://arxiv.org/pdf/2402.12201v1 | 2024-02-19T15:04:53Z | 2024-02-19T15:04:53Z | Dictionary Learning Improves Patch-Free Circuit Discovery in Mechanistic Interpretability: A Case Study on Othello-GPT |
Sparse dictionary learning has been a rapidly growing technique in mechanistic interpretability to attack superposition and extract more human-understandable features from model activations. We ask a further question based on the extracted more monosemantic features: How do we recognize circuits connecting the enormous amount of dictionary features? We propose a circuit discovery framework alternative to activation patching. Our framework suffers less from out-of-distribution and proves to be more efficient in terms of asymptotic complexity. The basic unit in our framework is dictionary features decomposed from all modules writing to the residual stream, including embedding, attention output and MLP output. Starting from any logit, dictionary feature or attention score, we manage to trace down to lower-level dictionary features of all tokens and compute their contribution to these more interpretable and local model behaviors. We dig in a small transformer trained on a synthetic task named Othello and find a number of human-understandable fine-grained circuits inside of it.
| ["['Zhengfu He' 'Xuyang Ge' 'Qiong Tang' 'Tianxiang Sun' 'Qinyuan Cheng' 'Xipeng Qiu']"] |
null | null | 2402.12219 | null | null | http://arxiv.org/pdf/2402.12219v2 | 2024-04-17T15:03:19Z | 2024-02-19T15:21:58Z | Reformatted Alignment |
The quality of finetuning data is crucial for aligning large language models (LLMs) with human values. Current methods to improve data quality are either labor-intensive or prone to factual errors caused by LLM hallucinations. This paper explores elevating the quality of existing instruction data to better align with human values, introducing a simple and effective approach named ReAlign, which reformats the responses of instruction data into a format that better aligns with pre-established criteria and the collated evidence. This approach minimizes human annotation, hallucination, and the difficulty in scaling, remaining orthogonal to existing alignment techniques. Experimentally, ReAlign significantly boosts the general alignment ability, math reasoning, factuality, and readability of the LLMs. Encouragingly, without introducing any additional data or advanced training techniques, and merely by reformatting the response, LLaMA-2-13B's mathematical reasoning ability on GSM8K can be improved from 46.77% to 56.63% in accuracy. Additionally, a mere 5% of ReAlign data yields a 67% boost in general alignment ability measured by the Alpaca dataset. This work highlights the need for further research into the science and mechanistic interpretability of LLMs. We have made the associated code and data publicly accessible to support future studies at https://github.com/GAIR-NLP/ReAlign.
| ["['Run-Ze Fan' 'Xuefeng Li' 'Haoyang Zou' 'Junlong Li' 'Shwai He' 'Ethan Chern' 'Jiewen Hu' 'Pengfei Liu']"] |
null | null | 2402.12220 | null | null | http://arxiv.org/pdf/2402.12220v1 | 2024-02-19T15:26:19Z | 2024-02-19T15:26:19Z | Bayesian Parameter-Efficient Fine-Tuning for Overcoming Catastrophic Forgetting |
Although motivated by the adaptation of text-to-speech synthesis models, we argue that more generic parameter-efficient fine-tuning (PEFT) is an appropriate framework to do such adaptation. However, catastrophic forgetting remains an issue with PEFT, damaging the pre-trained model's inherent capabilities. We demonstrate that existing Bayesian learning techniques can be applied to PEFT to prevent catastrophic forgetting as long as the parameter shift of the fine-tuned layers can be calculated differentiably. In a principled series of experiments on language modeling and speech synthesis tasks, we utilize established Laplace approximations, including diagonal and Kronecker factored approaches, to regularize PEFT with the low-rank adaptation (LoRA) and compare their performance in pre-training knowledge preservation. Our results demonstrate that catastrophic forgetting can be overcome by our methods without degrading the fine-tuning performance, and using the Kronecker factored approximations produces a better preservation of the pre-training knowledge than the diagonal ones.
| ["['Haolin Chen' 'Philip N. Garner']"] |
null | null | 2402.12222 | null | null | http://arxiv.org/pdf/2402.12222v1 | 2024-02-19T15:30:40Z | 2024-02-19T15:30:40Z | CovRL: Fuzzing JavaScript Engines with Coverage-Guided Reinforcement Learning for LLM-based Mutation |
Fuzzing is an effective bug-finding technique but it struggles with complex systems like JavaScript engines that demand precise grammatical input. Recently, researchers have adopted language models for context-aware mutation in fuzzing to address this problem. However, existing techniques are limited in utilizing coverage guidance for fuzzing, which is rather performed in a black-box manner. This paper presents a novel technique called CovRL (Coverage-guided Reinforcement Learning) that combines Large Language Models (LLMs) with reinforcement learning from coverage feedback. Our fuzzer, CovRL-Fuzz, integrates coverage feedback directly into the LLM by leveraging the Term Frequency-Inverse Document Frequency (TF-IDF) method to construct a weighted coverage map. This map is key in calculating the fuzzing reward, which is then applied to the LLM-based mutator through reinforcement learning. CovRL-Fuzz, through this approach, enables the generation of test cases that are more likely to discover new coverage areas, thus improving vulnerability detection while minimizing syntax and semantic errors, all without needing extra post-processing. Our evaluation results indicate that CovRL-Fuzz outperforms the state-of-the-art fuzzers in terms of code coverage and bug-finding capabilities: CovRL-Fuzz identified 48 real-world security-related bugs in the latest JavaScript engines, including 39 previously unknown vulnerabilities and 11 CVEs.
| ["['Jueon Eom' 'Seyeon Jeong' 'Taekyoung Kwon']"] |
null | null | 2402.12226 | null | null | http://arxiv.org/pdf/2402.12226v3 | 2024-03-07T06:31:46Z | 2024-02-19T15:33:10Z | AnyGPT: Unified Multimodal LLM with Discrete Sequence Modeling |
We introduce AnyGPT, an any-to-any multimodal language model that utilizes discrete representations for the unified processing of various modalities, including speech, text, images, and music. AnyGPT can be trained stably without any alterations to the current large language model (LLM) architecture or training paradigms. Instead, it relies exclusively on data-level preprocessing, facilitating the seamless integration of new modalities into LLMs, akin to the incorporation of new languages. We build a multimodal text-centric dataset for multimodal alignment pre-training. Utilizing generative models, we synthesize the first large-scale any-to-any multimodal instruction dataset. It consists of 108k samples of multi-turn conversations that intricately interweave various modalities, thus equipping the model to handle arbitrary combinations of multimodal inputs and outputs. Experimental results demonstrate that AnyGPT is capable of facilitating any-to-any multimodal conversation while achieving performance comparable to specialized models across all modalities, proving that discrete representations can effectively and conveniently unify multiple modalities within a language model. Demos are shown in https://junzhan2000.github.io/AnyGPT.github.io/
| ["['Jun Zhan' 'Junqi Dai' 'Jiasheng Ye' 'Yunhua Zhou' 'Dong Zhang' 'Zhigeng Liu' 'Xin Zhang' 'Ruibin Yuan' 'Ge Zhang' 'Linyang Li' 'Hang Yan' 'Jie Fu' 'Tao Gui' 'Tianxiang Sun' 'Yugang Jiang' 'Xipeng Qiu']"] |
null | null | 2402.12231 | null | null | http://arxiv.org/pdf/2402.12231v4 | 2024-07-15T12:14:15Z | 2024-02-19T15:36:36Z | Diffusion Tempering Improves Parameter Estimation with Probabilistic Integrators for Ordinary Differential Equations |
Ordinary differential equations (ODEs) are widely used to describe dynamical systems in science, but identifying parameters that explain experimental measurements is challenging. In particular, although ODEs are differentiable and would allow for gradient-based parameter optimization, the nonlinear dynamics of ODEs often lead to many local minima and extreme sensitivity to initial conditions. We therefore propose diffusion tempering, a novel regularization technique for probabilistic numerical methods which improves convergence of gradient-based parameter optimization in ODEs. By iteratively reducing a noise parameter of the probabilistic integrator, the proposed method converges more reliably to the true parameters. We demonstrate that our method is effective for dynamical systems of different complexity and show that it obtains reliable parameter estimates for a Hodgkin-Huxley model with a practically relevant number of parameters.
| ["['Jonas Beck' 'Nathanael Bosch' 'Michael Deistler' 'Kyra L. Kadhim' 'Jakob H. Macke' 'Philipp Hennig' 'Philipp Berens']"] |
null | null | 2402.12232 | null | null | http://arxiv.org/pdf/2402.12232v1 | 2024-02-19T15:39:39Z | 2024-02-19T15:39:39Z | Kernel KMeans clustering splits for end-to-end unsupervised decision trees |
Trees are convenient models for obtaining explainable predictions on relatively small datasets. Although there are many proposals for the end-to-end construction of such trees in supervised learning, learning a tree end-to-end for clustering without labels remains an open challenge. As most works focus on interpreting with trees the result of another clustering algorithm, we present here a novel end-to-end trained unsupervised binary tree for clustering: Kauri. This method performs a greedy maximisation of the kernel KMeans objective without requiring the definition of centroids. We compare this model on multiple datasets with recent unsupervised trees and show that Kauri performs identically when using a linear kernel. For other kernels, Kauri often outperforms the concatenation of kernel KMeans and a CART decision tree.
| ["['Louis Ohl' 'Pierre-Alexandre Mattei' 'Mickaël Leclercq' 'Arnaud Droit' 'Frédéric Precioso']"] |
null | null | 2402.12235 | null | null | http://arxiv.org/pdf/2402.12235v2 | 2024-06-26T14:18:44Z | 2024-02-19T15:44:54Z | The Fundamental Limits of Least-Privilege Learning |
The promise of least-privilege learning -- to find feature representations that are useful for a learning task but prevent inference of any sensitive information unrelated to this task -- is highly appealing. However, so far this concept has only been stated informally. It thus remains an open question whether and how we can achieve this goal. In this work, we provide the first formalisation of the least-privilege principle for machine learning and characterise its feasibility. We prove that there is a fundamental trade-off between a representation's utility for a given task and its leakage beyond the intended task: it is not possible to learn representations that have high utility for the intended task but, at the same time prevent inference of any attribute other than the task label itself. This trade-off holds under realistic assumptions on the data distribution and regardless of the technique used to learn the feature mappings that produce these representations. We empirically validate this result for a wide range of learning techniques, model architectures, and datasets.
| ["['Theresa Stadler' 'Bogdan Kulynych' 'Michael C. Gastpar' 'Nicolas Papernot' 'Carmela Troncoso']"] |
null | null | 2402.12237 | null | null | http://arxiv.org/pdf/2402.12237v3 | 2024-06-02T16:02:24Z | 2024-02-19T15:47:47Z | Learning to Defer in Content Moderation: The Human-AI Interplay |
Successful content moderation in online platforms relies on a human-AI collaboration approach. A typical heuristic estimates the expected harmfulness of a post and uses fixed thresholds to decide whether to remove it and whether to send it for human review. This disregards the prediction uncertainty, the time-varying element of human review capacity and post arrivals, and the selective sampling in the dataset (humans only review posts filtered by the admission algorithm). In this paper, we introduce a model to capture the human-AI interplay in content moderation. The algorithm observes contextual information for incoming posts, makes classification and admission decisions, and schedules posts for human review. Only admitted posts receive human reviews on their harmfulness. These reviews help educate the machine-learning algorithms but are delayed due to congestion in the human review system. The classical learning-theoretic way to capture this human-AI interplay is via the framework of learning to defer, where the algorithm has the option to defer a classification task to humans for a fixed cost and immediately receive feedback. Our model contributes to this literature by introducing congestion in the human review system. Moreover, unlike work on online learning with delayed feedback where the delay in the feedback is exogenous to the algorithm's decisions, the delay in our model is endogenous to both the admission and the scheduling decisions. We propose a near-optimal learning algorithm that carefully balances the classification loss from a selectively sampled dataset, the idiosyncratic loss of non-reviewed posts, and the delay loss of having congestion in the human review system. To the best of our knowledge, this is the first result for online learning in contextual queueing systems and hence our analytical framework may be of independent interest.
| ["['Thodoris Lykouris' 'Wentao Weng']"] |
null | null | 2402.12240 | null | null | http://arxiv.org/pdf/2402.12240v1 | 2024-02-19T15:54:36Z | 2024-02-19T15:54:36Z | BEARS Make Neuro-Symbolic Models Aware of their Reasoning Shortcuts |
Neuro-Symbolic (NeSy) predictors that conform to symbolic knowledge - encoding, e.g., safety constraints - can be affected by Reasoning Shortcuts (RSs): They learn concepts consistent with the symbolic knowledge by exploiting unintended semantics. RSs compromise reliability and generalization and, as we show in this paper, they are linked to NeSy models being overconfident about the predicted concepts. Unfortunately, the only trustworthy mitigation strategy requires collecting costly dense supervision over the concepts. Rather than attempting to avoid RSs altogether, we propose to ensure NeSy models are aware of the semantic ambiguity of the concepts they learn, thus enabling their users to identify and distrust low-quality concepts. Starting from three simple desiderata, we derive bears (BE Aware of Reasoning Shortcuts), an ensembling technique that calibrates the model's concept-level confidence without compromising prediction accuracy, thus encouraging NeSy architectures to be uncertain about concepts affected by RSs. We show empirically that bears improves RS-awareness of several state-of-the-art NeSy models, and also facilitates acquiring informative dense annotations for mitigation purposes.
| ["['Emanuele Marconato' 'Samuele Bortolotti' 'Emile van Krieken' 'Antonio Vergari' 'Andrea Passerini' 'Stefano Teso']"] |
null | null | 2402.12241 | null | null | http://arxiv.org/pdf/2402.12241v1 | 2024-02-19T15:56:43Z | 2024-02-19T15:56:43Z | Convergence of Gradient Descent for Recurrent Neural Networks: A Nonasymptotic Analysis |
We analyze recurrent neural networks trained with gradient descent in the supervised learning setting for dynamical systems, and prove that gradient descent can achieve optimality \emph{without} massive overparameterization. Our in-depth nonasymptotic analysis (i) provides sharp bounds on the network size $m$ and iteration complexity $\tau$ in terms of the sequence length $T$, sample size $n$ and ambient dimension $d$, and (ii) identifies the significant impact of long-term dependencies in the dynamical system on the convergence and network width bounds characterized by a cutoff point that depends on the Lipschitz continuity of the activation function. Remarkably, this analysis reveals that an appropriately-initialized recurrent neural network trained with $n$ samples can achieve optimality with a network size $m$ that scales only logarithmically with $n$. This sharply contrasts with the prior works that require high-order polynomial dependency of $m$ on $n$ to establish strong regularity conditions. Our results are based on an explicit characterization of the class of dynamical systems that can be approximated and learned by recurrent neural networks via norm-constrained transportation mappings, and establishing local smoothness properties of the hidden state with respect to the learnable parameters.
| ["['Semih Cayci' 'Atilla Eryilmaz']"] |
null | null | 2402.12242 | null | null | http://arxiv.org/pdf/2402.12242v1 | 2024-02-19T15:57:39Z | 2024-02-19T15:57:39Z | Synthetic location trajectory generation using categorical diffusion models |
Diffusion probabilistic models (DPMs) have rapidly evolved to be one of the predominant generative models for the simulation of synthetic data, for instance, for computer vision, audio, natural language processing, or biomolecule generation. Here, we propose using DPMs for the generation of synthetic individual location trajectories (ILTs) which are sequences of variables representing physical locations visited by individuals. ILTs are of major importance in mobility research to understand the mobility behavior of populations and to ultimately inform political decision-making. We represent ILTs as multi-dimensional categorical random variables and propose to model their joint distribution using a continuous DPM by first applying the diffusion process in a continuous unconstrained space and then mapping the continuous variables into a discrete space. We demonstrate that our model can synthesize realistic ILTs by comparing conditionally and unconditionally generated sequences to real-world ILTs from a GNSS tracking data set, which suggests the potential use of our model for synthetic data generation, for example, for benchmarking models used in mobility research.
| ["['Simon Dirmeier' 'Ye Hong' 'Fernando Perez-Cruz']"] |
null | null | 2402.12260 | null | null | http://arxiv.org/pdf/2402.12260v1 | 2024-02-15T16:51:47Z | 2024-02-15T16:51:47Z | Non-orthogonal Age-Optimal Information Dissemination in Vehicular Networks: A Meta Multi-Objective Reinforcement Learning Approach |
This paper considers minimizing the age-of-information (AoI) and transmit power consumption in a vehicular network, where a roadside unit (RSU) provides timely updates about a set of physical processes to vehicles. We consider non-orthogonal multi-modal information dissemination, which is based on superposed message transmission from RSU and successive interference cancellation (SIC) at vehicles. The formulated problem is a multi-objective mixed-integer nonlinear programming problem; thus, a Pareto-optimal front is very challenging to obtain. First, we leverage the weighted-sum approach to decompose the multi-objective problem into a set of multiple single-objective sub-problems corresponding to each predefined objective preference weight. Then, we develop a hybrid deep Q-network (DQN)-deep deterministic policy gradient (DDPG) model to solve each optimization sub-problem respective to predefined objective-preference weight. The DQN optimizes the decoding order, while the DDPG solves the continuous power allocation. The model needs to be retrained for each sub-problem. We then present a two-stage meta-multi-objective reinforcement learning solution to estimate the Pareto front with a few fine-tuning update steps without retraining the model for each sub-problem. Simulation results illustrate the efficacy of the proposed solutions compared to the existing benchmarks and that the meta-multi-objective reinforcement learning model estimates a high-quality Pareto frontier with reduced training time.
| ["['A. A. Habob' 'H. Tabassum' 'O. Waqar']"] |
null | null | 2402.12263 | null | null | http://arxiv.org/pdf/2402.12263v2 | 2024-03-08T21:16:13Z | 2024-02-19T16:24:20Z | Towards a tailored mixed-precision sub-8-bit quantization scheme for Gated Recurrent Units using Genetic Algorithms |
Despite the recent advances in model compression techniques for deep neural networks, deploying such models on ultra-low-power embedded devices still proves challenging. In particular, quantization schemes for Gated Recurrent Units (GRU) are difficult to tune due to their dependence on an internal state, preventing them from fully benefiting from sub-8bit quantization. In this work, we propose a modular integer quantization scheme for GRUs where the bit width of each operator can be selected independently. We then employ Genetic Algorithms (GA) to explore the vast search space of possible bit widths, simultaneously optimising for model size and accuracy. We evaluate our methods on four different sequential tasks and demonstrate that mixed-precision solutions exceed homogeneous-precision ones in terms of Pareto efficiency. In our results, we achieve a model size reduction between 25% and 55% while maintaining an accuracy comparable with the 8-bit homogeneous equivalent.
| ["['Riccardo Miccini' 'Alessandro Cerioli' 'Clément Laroche' 'Tobias Piechowiak' 'Jens Sparsø' 'Luca Pezzarossa']"] |
null | null | 2402.12264 | null | null | http://arxiv.org/pdf/2402.12264v1 | 2024-02-19T16:26:00Z | 2024-02-19T16:26:00Z | Uncertainty quantification in fine-tuned LLMs using LoRA ensembles |
Fine-tuning large language models can improve task specific performance, although a general understanding of what the fine-tuned model has learned, forgotten and how to trust its predictions is still missing. We derive principled uncertainty quantification for fine-tuned LLMs with posterior approximations using computationally efficient low-rank adaptation ensembles. We analyze three common multiple-choice datasets using low-rank adaptation ensembles based on Mistral-7b, and draw quantitative and qualitative conclusions on their perceived complexity and model efficacy on the different target domains during and after fine-tuning. In particular, backed by the numerical experiments, we hypothesise about signals from entropic uncertainty measures for data domains that are inherently difficult for a given architecture to learn.
| ["['Oleksandr Balabanov' 'Hampus Linander']"] |
null | null | 2402.12265 | null | null | http://arxiv.org/pdf/2402.12265v1 | 2024-02-19T16:26:40Z | 2024-02-19T16:26:40Z | On the Byzantine-Resilience of Distillation-Based Federated Learning |
Federated Learning (FL) algorithms using Knowledge Distillation (KD) have received increasing attention due to their favorable properties with respect to privacy, non-i.i.d. data and communication cost. These methods depart from transmitting model parameters and, instead, communicate information about a learning task by sharing predictions on a public dataset. In this work, we study the performance of such approaches in the byzantine setting, where a subset of the clients act in an adversarial manner aiming to disrupt the learning process. We show that KD-based FL algorithms are remarkably resilient and analyze how byzantine clients can influence the learning process compared to Federated Averaging. Based on these insights, we introduce two new byzantine attacks and demonstrate that they are effective against prior byzantine-resilient methods. Additionally, we propose FilterExp, a novel method designed to enhance the byzantine resilience of KD-based FL algorithms and demonstrate its efficacy. Finally, we provide a general method to make attacks harder to detect, improving their effectiveness.
| ["['Christophe Roux' 'Max Zimmer' 'Sebastian Pokutta']"] |
null | null | 2402.12269 | null | null | http://arxiv.org/pdf/2402.12269v3 | 2024-06-14T10:06:23Z | 2024-02-19T16:30:35Z | Any2Graph: Deep End-To-End Supervised Graph Prediction With An Optimal Transport Loss |
We propose Any2graph, a generic framework for end-to-end Supervised Graph Prediction (SGP), i.e. a deep learning model that predicts an entire graph for any kind of input. The framework is built on a novel Optimal Transport loss, the Partially-Masked Fused Gromov-Wasserstein, that exhibits all necessary properties (permutation invariance, differentiability and scalability) and is designed to handle any-sized graphs. Numerical experiments showcase the versatility of the approach, which outperforms existing competitors on a novel challenging synthetic dataset and a variety of real-world tasks such as map construction from satellite images (Sat2Graph) or molecule prediction from fingerprints (Fingerprint2Graph).
| ["['Paul Krzakala' 'Junjie Yang' 'Rémi Flamary' \"Florence d'Alché-Buc\" 'Charlotte Laclau' 'Matthieu Labeau']"] |
null | null | 2402.12271 | null | null | http://arxiv.org/pdf/2402.12271v1 | 2024-02-19T16:34:59Z | 2024-02-19T16:34:59Z | Secure Federated Learning Across Heterogeneous Cloud and High-Performance Computing Resources -- A Case Study on Federated Fine-tuning of LLaMA 2 |
Federated learning enables multiple data owners to collaboratively train robust machine learning models without transferring large or sensitive local datasets by only sharing the parameters of the locally trained models. In this paper, we elaborate on the design of our Advanced Privacy-Preserving Federated Learning (APPFL) framework, which streamlines end-to-end secure and reliable federated learning experiments across cloud computing facilities and high-performance computing resources by leveraging Globus Compute, a distributed function as a service platform, and Amazon Web Services. We further demonstrate the use case of APPFL in fine-tuning a LLaMA 2 7B model using several cloud resources and supercomputers.
| ["['Zilinghan Li' 'Shilan He' 'Pranshu Chaturvedi' 'Volodymyr Kindratenko' 'Eliu A Huerta' 'Kibaek Kim' 'Ravi Madduri']"] |
null | null | 2402.12284 | null | null | http://arxiv.org/pdf/2402.12284v2 | 2024-06-08T10:08:25Z | 2024-02-19T16:51:29Z | Refining Minimax Regret for Unsupervised Environment Design |
In unsupervised environment design, reinforcement learning agents are trained on environment configurations (levels) generated by an adversary that maximises some objective. Regret is a commonly used objective that theoretically results in a minimax regret (MMR) policy with desirable robustness guarantees; in particular, the agent's maximum regret is bounded. However, once the agent reaches this regret bound on all levels, the adversary will only sample levels where regret cannot be further reduced. Although there are possible performance improvements to be made outside of these regret-maximising levels, learning stagnates. In this work, we introduce Bayesian level-perfect MMR (BLP), a refinement of the minimax regret objective that overcomes this limitation. We formally show that solving for this objective results in a subset of MMR policies, and that BLP policies act consistently with a Perfect Bayesian policy over all levels. We further introduce an algorithm, ReMiDi, that results in a BLP policy at convergence. We empirically demonstrate that training on levels from a minimax regret adversary causes learning to prematurely stagnate, but that ReMiDi continues learning.
| ["['Michael Beukman' 'Samuel Coward' 'Michael Matthews' 'Mattie Fellows' 'Minqi Jiang' 'Michael Dennis' 'Jakob Foerster']"] |
null | null | 2402.12292 | null | null | http://arxiv.org/pdf/2402.12292v1 | 2024-02-19T17:12:16Z | 2024-02-19T17:12:16Z | Regularization by denoising: Bayesian model and Langevin-within-split Gibbs sampling |
This paper introduces a Bayesian framework for image inversion by deriving a probabilistic counterpart to the regularization-by-denoising (RED) paradigm. It additionally implements a Monte Carlo algorithm specifically tailored for sampling from the resulting posterior distribution, based on an asymptotically exact data augmentation (AXDA). The proposed algorithm is an approximate instance of split Gibbs sampling (SGS) which embeds one Langevin Monte Carlo step. The proposed method is applied to common imaging tasks such as deblurring, inpainting and super-resolution, demonstrating its efficacy through extensive numerical experiments. These contributions advance Bayesian inference in imaging by leveraging data-driven regularization strategies within a probabilistic framework.
| ["['Elhadji C. Faye' 'Mame Diarra Fall' 'Nicolas Dobigeon']"] |
null | null | 2402.12302 | null | null | http://arxiv.org/pdf/2402.12302v2 | 2024-05-27T07:44:57Z | 2024-02-19T17:25:12Z | Asymptotic Gaussian Fluctuations of Eigenvectors in Spectral Clustering |
The performance of spectral clustering relies on the fluctuations of the entries of the eigenvectors of a similarity matrix, which has been left uncharacterized until now. In this letter, it is shown that the signal $+$ noise structure of a general spike random matrix model is transferred to the eigenvectors of the corresponding Gram kernel matrix and the fluctuations of their entries are Gaussian in the large-dimensional regime. This CLT-like result was the last missing piece to precisely predict the classification performance of spectral clustering. The proposed proof is very general and relies solely on the rotational invariance of the noise. Numerical experiments on synthetic and real data illustrate the universality of this phenomenon.
| ["['Hugo Lebeau' 'Florent Chatelain' 'Romain Couillet']"] |
null | null | 2402.12307 | null | null | http://arxiv.org/pdf/2402.12307v1 | 2024-02-19T17:30:09Z | 2024-02-19T17:30:09Z | Multi-View Conformal Learning for Heterogeneous Sensor Fusion |
Being able to assess the confidence of individual predictions in machine learning models is crucial for decision making scenarios. Specially, in critical applications such as medical diagnosis, security, and unmanned vehicles, to name a few. In the last years, complex predictive models have had great success in solving hard tasks and new methods are being proposed every day. While the majority of new developments in machine learning models focus on improving the overall performance, less effort is put on assessing the trustworthiness of individual predictions, and even to a lesser extent, in the context of sensor fusion. To this end, we build and test multi-view and single-view conformal models for heterogeneous sensor fusion. Our models provide theoretical marginal confidence guarantees since they are based on the conformal prediction framework. We also propose a multi-view semi-conformal model based on sets intersection. Through comprehensive experimentation, we show that multi-view models perform better than single-view models not only in terms of accuracy-based performance metrics (as it has already been shown in several previous works) but also in conformal measures that provide uncertainty estimation. Our results also showed that multi-view models generate prediction sets with less uncertainty compared to single-view models.
| ["['Enrique Garcia-Ceja']"] |
null | null | 2402.12319 | null | null | http://arxiv.org/pdf/2402.12319v1 | 2024-02-19T17:44:35Z | 2024-02-19T17:44:35Z | Dynamic Environment Responsive Online Meta-Learning with Fairness Awareness |
The fairness-aware online learning framework has emerged as a potent tool within the context of continuous lifelong learning. In this scenario, the learner's objective is to progressively acquire new tasks as they arrive over time, while also guaranteeing statistical parity among various protected sub-populations, such as race and gender, when it comes to the newly introduced tasks. A significant limitation of current approaches lies in their heavy reliance on the i.i.d (independent and identically distributed) assumption concerning data, leading to a static regret analysis of the framework. Nevertheless, it's crucial to note that achieving low static regret does not necessarily translate to strong performance in dynamic environments characterized by tasks sampled from diverse distributions. In this paper, to tackle the fairness-aware online learning challenge in evolving settings, we introduce a unique regret measure, FairSAR, by incorporating long-term fairness constraints into a strongly adapted loss regret framework. Moreover, to determine an optimal model parameter at each time step, we introduce an innovative adaptive fairness-aware online meta-learning algorithm, referred to as FairSAOML. This algorithm possesses the ability to adjust to dynamic environments by effectively managing bias control and model accuracy. The problem is framed as a bi-level convex-concave optimization, considering both the model's primal and dual parameters, which pertain to its accuracy and fairness attributes, respectively. Theoretical analysis yields sub-linear upper bounds for both loss regret and the cumulative violation of fairness constraints. Our experimental evaluation on various real-world datasets in dynamic environments demonstrates that our proposed FairSAOML algorithm consistently outperforms alternative approaches rooted in the most advanced prior online learning methods.
| ["['Chen Zhao' 'Feng Mi' 'Xintao Wu' 'Kai Jiang' 'Latifur Khan' 'Feng Chen']"] |
null | null | 2402.12320 | null | null | http://arxiv.org/abs/2402.12320v1 | 2024-02-19T17:49:23Z | 2024-02-19T17:49:23Z | Landmark Stereo Dataset for Landmark Recognition and Moving Node Localization in a Non-GPS Battlefield Environment |
In this paper, we have proposed a new strategy of using the landmark anchor node instead of a radio-based anchor node to obtain the virtual coordinates (landmarkID, DISTANCE) of moving troops or defense forces that will help in tracking and maneuvering the troops along a safe path within a GPS-denied battlefield environment. The proposed strategy implements landmark recognition using the Yolov5 model and landmark distance estimation using an efficient Stereo Matching Algorithm. We consider that a moving node carrying a low-power mobile device facilitated with a calibrated stereo vision camera that captures stereo images of a scene containing landmarks within the battlefield region whose locations are stored in an offline server residing within the device itself. We created a custom landmark image dataset called MSTLandmarkv1 with 34 landmark classes and another landmark stereo dataset of those 34 landmark instances called MSTLandmarkStereov1. We trained the YOLOv5 model with MSTLandmarkv1 dataset and achieved 0.95 mAP @ 0.5 IoU and 0.767 mAP @ [0.5: 0.95] IoU. We calculated the distance from a node to the landmark utilizing the bounding box coordinates and the depth map generated by the improved SGM algorithm using MSTLandmarkStereov1. The tuple of landmark IDs obtained from the detection result and the distances calculated by the SGM algorithm are stored as the virtual coordinates of a node. In future work, we will use these virtual coordinates to obtain the location of a node using an efficient trilateration algorithm and optimize the node position using the appropriate optimization method.
| ["['Ganesh Sapkota' 'Sanjay Madria']"] |
null | null | 2402.12326 | null | null | http://arxiv.org/pdf/2402.12326v1 | 2024-02-19T18:00:30Z | 2024-02-19T18:00:30Z | LLM Agents for Psychology: A Study on Gamified Assessments |
Psychological measurement is essential for mental health, self-understanding, and personal development. Traditional methods, such as self-report scales and psychologist interviews, often face challenges with engagement and accessibility. While game-based and LLM-based tools have been explored to improve user interest and automate assessment, they struggle to balance engagement with generalizability. In this work, we propose PsychoGAT (Psychological Game AgenTs) to achieve a generic gamification of psychological assessment. The main insight is that powerful LLMs can function both as adept psychologists and innovative game designers. By incorporating LLM agents into designated roles and carefully managing their interactions, PsychoGAT can transform any standardized scales into personalized and engaging interactive fiction games. To validate the proposed method, we conduct psychometric evaluations to assess its effectiveness and employ human evaluators to examine the generated content across various psychological constructs, including depression, cognitive distortions, and personality traits. Results demonstrate that PsychoGAT serves as an effective assessment tool, achieving statistically significant excellence in psychometric metrics such as reliability, convergent validity, and discriminant validity. Moreover, human evaluations confirm PsychoGAT's enhancements in content coherence, interactivity, interest, immersion, and satisfaction.
| ["['Qisen Yang' 'Zekun Wang' 'Honghui Chen' 'Shenzhi Wang' 'Yifan Pu' 'Xin Gao' 'Wenhao Huang' 'Shiji Song' 'Gao Huang']"] |
null | null | 2402.12329 | null | null | http://arxiv.org/pdf/2402.12329v1 | 2024-02-19T18:01:36Z | 2024-02-19T18:01:36Z | Query-Based Adversarial Prompt Generation |
Recent work has shown it is possible to construct adversarial examples that cause an aligned language model to emit harmful strings or perform harmful behavior. Existing attacks work either in the white-box setting (with full access to the model weights), or through transferability: the phenomenon that adversarial examples crafted on one model often remain effective on other models. We improve on prior work with a query-based attack that leverages API access to a remote language model to construct adversarial examples that cause the model to emit harmful strings with (much) higher probability than with transfer-only attacks. We validate our attack on GPT-3.5 and OpenAI's safety classifier; we can cause GPT-3.5 to emit harmful strings that current transfer attacks fail at, and we can evade the safety classifier with nearly 100% probability.
| ["['Jonathan Hayase' 'Ema Borevkovic' 'Nicholas Carlini' 'Florian Tramèr' 'Milad Nasr']"] |
null | null | 2402.12331 | null | null | http://arxiv.org/pdf/2402.12331v1 | 2024-02-19T18:02:10Z | 2024-02-19T18:02:10Z | Generating Survival Interpretable Trajectories and Data |
A new model for generating survival trajectories and data based on applying an autoencoder of a specific structure is proposed. It solves three tasks. First, it provides predictions in the form of the expected event time and the survival function for a new generated feature vector on the basis of the Beran estimator. Second, the model generates additional data based on a given training set that would supplement the original dataset. Third, the most important, it generates a prototype time-dependent trajectory for an object, which characterizes how features of the object could be changed to achieve a different time to an event. The trajectory can be viewed as a type of the counterfactual explanation. The proposed model is robust during training and inference due to a specific weighting scheme incorporating into the variational autoencoder. The model also determines the censored indicators of new generated data by solving a classification task. The paper demonstrates the efficiency and properties of the proposed model using numerical experiments on synthetic and real datasets. The code of the algorithm implementing the proposed model is publicly available.
|
[
"['Andrei V. Konstantinov' 'Stanislav R. Kirpichenko' 'Lev V. Utkin']"
] |
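The abstract above leans on the Beran estimator for its survival-function predictions. Below is a minimal NumPy sketch of that estimator (a kernel-weighted, conditional Kaplan-Meier); the Gaussian kernel, bandwidth, and toy data are illustrative assumptions, not the paper's actual autoencoder-based weighting scheme.

```python
import numpy as np

def beran_survival(x_new, X, times, events, bandwidth=1.0):
    """Beran (conditional Kaplan-Meier) estimate of S(t | x_new) -- a sketch."""
    # Nadaraya-Watson style kernel weights of training points w.r.t. the query
    d2 = np.sum((X - x_new) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))
    w = w / w.sum()

    # Sort by event time and build the product over uncensored observations
    order = np.argsort(times)
    t_sorted, e_sorted, w_sorted = times[order], events[order], w[order]
    cum_w = np.cumsum(w_sorted) - w_sorted          # weight mass of earlier points
    factors = np.where(e_sorted == 1,
                       1.0 - w_sorted / np.clip(1.0 - cum_w, 1e-12, None),
                       1.0)
    return t_sorted, np.cumprod(factors)            # event times and S(t | x_new)

# Toy usage with synthetic data
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
times = rng.exponential(scale=2.0, size=50)
events = rng.integers(0, 2, size=50)
t_grid, S = beran_survival(X[0], X, times, events)
expected_time = np.trapz(S, t_grid)                 # crude estimate of the expected event time
```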
null | null |
2402.12336
| null | null |
http://arxiv.org/pdf/2402.12336v2
|
2024-06-05T15:32:03Z
|
2024-02-19T18:09:48Z
|
Robust CLIP: Unsupervised Adversarial Fine-Tuning of Vision Embeddings
for Robust Large Vision-Language Models
|
Multi-modal foundation models like OpenFlamingo, LLaVA, and GPT-4 are increasingly used for various real-world tasks. Prior work has shown that these models are highly vulnerable to adversarial attacks on the vision modality. These attacks can be leveraged to spread fake information or defraud users, and thus pose a significant risk, which makes the robustness of large multi-modal foundation models a pressing problem. The CLIP model, or one of its variants, is used as a frozen vision encoder in many large vision-language models (LVLMs), e.g. LLaVA and OpenFlamingo. We propose an unsupervised adversarial fine-tuning scheme to obtain a robust CLIP vision encoder, which yields robustness on all vision down-stream tasks (LVLMs, zero-shot classification) that rely on CLIP. In particular, we show that stealth-attacks on users of LVLMs by a malicious third party providing manipulated images are no longer possible once one replaces the original CLIP model with our robust one. No retraining or fine-tuning of the down-stream LVLMs is required. The code and robust models are available at https://github.com/chs20/RobustVLM
|
[
"['Christian Schlarmann' 'Naman Deep Singh' 'Francesco Croce'\n 'Matthias Hein']"
] |
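To make the unsupervised adversarial fine-tuning idea above concrete, here is a hedged PyTorch sketch of one training step: an inner PGD loop perturbs images to push the trainable encoder's embeddings away from the frozen original CLIP encoder's clean embeddings, and the outer step pulls them back. The squared-error objective, PGD budget, and hyperparameters are assumptions for illustration, not necessarily the authors' exact recipe.

```python
import torch
import torch.nn.functional as F

def adv_finetune_step(encoder, frozen_encoder, images, optimizer,
                      eps=4 / 255, alpha=1 / 255, pgd_steps=10):
    """One step of unsupervised adversarial fine-tuning of a vision encoder (sketch)."""
    with torch.no_grad():
        target = frozen_encoder(images)              # clean embeddings of the original model

    # Inner maximization: PGD in image space to maximize embedding distance
    delta = torch.zeros_like(images).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(pgd_steps):
        emb = encoder((images + delta).clamp(0, 1))
        dist = F.mse_loss(emb, target)
        grad, = torch.autograd.grad(dist, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    delta = delta.detach()

    # Outer minimization: keep adversarial embeddings close to the clean targets
    optimizer.zero_grad()
    loss = F.mse_loss(encoder((images + delta).clamp(0, 1)), target)
    loss.backward()
    optimizer.step()
    return loss.item()
```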
null | null |
2402.12338
| null | null |
http://arxiv.org/pdf/2402.12338v2
|
2024-04-22T17:56:01Z
|
2024-02-19T18:11:37Z
|
An Adversarial Approach to Evaluating the Robustness of Event
Identification Models
|
Intelligent machine learning approaches are finding active use for event detection and identification that allow real-time situational awareness. Yet, such machine learning algorithms have been shown to be susceptible to adversarial attacks on the incoming telemetry data. This paper considers a physics-based modal decomposition method to extract features for event classification and focuses on interpretable classifiers including logistic regression and gradient boosting to distinguish two types of events: load loss and generation loss. The resulting classifiers are then tested against an adversarial algorithm to evaluate their robustness. The adversarial attack is tested in two settings: the white box setting, wherein the attacker knows exactly the classification model; and the gray box setting, wherein the attacker has access to historical data from the same network as was used to train the classifier, but does not know the classification model. Thorough experiments on the synthetic South Carolina 500-bus system highlight that a relatively simpler model such as logistic regression is more susceptible to adversarial attacks than gradient boosting.
|
[
"['Obai Bahwal' 'Oliver Kosut' 'Lalitha Sankar']"
] |
null | null |
2402.12343
| null | null |
http://arxiv.org/pdf/2402.12343v4
|
2024-06-06T12:54:48Z
|
2024-02-19T18:16:51Z
|
Emulated Disalignment: Safety Alignment for Large Language Models May
Backfire!
|
Large language models (LLMs) undergo safety alignment to ensure safe conversations with humans. However, this paper introduces a training-free attack method capable of reversing safety alignment, converting the outcomes of stronger alignment into greater potential for harm by accessing only LLM output token distributions. Specifically, our method achieves this reversal by contrasting the output token distribution of a safety-aligned language model (e.g., Llama-2-chat) against its pre-trained version (e.g., Llama-2), so that the token predictions are shifted towards the opposite direction of safety alignment. We name this method emulated disalignment (ED) because sampling from this contrastive distribution provably emulates the result of fine-tuning to minimize a safety reward. Our experiments with ED across three evaluation datasets and four model families (Llama-1, Llama-2, Mistral, and Alpaca) show that ED doubles the harmfulness of pre-trained models and outperforms strong baselines, achieving the highest harmful rates in 43 out of 48 evaluation subsets by a large margin. Eventually, given ED's reliance on language model output token distributions, which particularly compromises open-source models, our findings highlight the need to reassess the open accessibility of language models, even if they have been safety-aligned. Code is available at https://github.com/ZHZisZZ/emulated-disalignment.
|
[
"['Zhanhui Zhou' 'Jie Liu' 'Zhichen Dong' 'Jiaheng Liu' 'Chao Yang'\n 'Wanli Ouyang' 'Yu Qiao']"
] |
null | null |
2402.12348
| null | null |
http://arxiv.org/pdf/2402.12348v2
|
2024-06-10T17:14:09Z
|
2024-02-19T18:23:36Z
|
GTBench: Uncovering the Strategic Reasoning Limitations of LLMs via
Game-Theoretic Evaluations
|
As Large Language Models (LLMs) are integrated into critical real-world applications, their strategic and logical reasoning abilities are increasingly crucial. This paper evaluates LLMs' reasoning abilities in competitive environments through game-theoretic tasks, e.g., board and card games that require pure logic and strategic reasoning to compete with opponents. We first propose GTBench, a language-driven environment composing 10 widely recognized tasks, across a comprehensive game taxonomy: complete versus incomplete information, dynamic versus static, and probabilistic versus deterministic scenarios. Then, we (1) Characterize the game-theoretic reasoning of LLMs; and (2) Perform LLM-vs.-LLM competitions as reasoning evaluation. We observe that (1) LLMs have distinct behaviors regarding various gaming scenarios; for example, LLMs fail in complete and deterministic games yet they are competitive in probabilistic gaming scenarios; (2) Most open-source LLMs, e.g., CodeLlama-34b-Instruct and Llama-2-70b-chat, are less competitive than commercial LLMs, e.g., GPT-4, in complex games, yet the recently released Llama-3-70b-Instruct makes up for this shortcoming. In addition, code-pretraining greatly benefits strategic reasoning, while advanced reasoning methods such as Chain-of-Thought (CoT) and Tree-of-Thought (ToT) do not always help. We further characterize the game-theoretic properties of LLMs, such as equilibrium and Pareto Efficiency in repeated games. Detailed error profiles are provided for a better understanding of LLMs' behavior. We hope our research provides standardized protocols and serves as a foundation to spur further explorations in the strategic reasoning of LLMs.
|
[
"['Jinhao Duan' 'Renming Zhang' 'James Diffenderfer' 'Bhavya Kailkhura'\n 'Lichao Sun' 'Elias Stengel-Eskin' 'Mohit Bansal' 'Tianlong Chen'\n 'Kaidi Xu']"
] |
null | null |
2402.12354
| null | null |
http://arxiv.org/pdf/2402.12354v2
|
2024-07-04T18:33:00Z
|
2024-02-19T18:33:49Z
|
LoRA+: Efficient Low Rank Adaptation of Large Models
|
In this paper, we show that Low Rank Adaptation (LoRA) as originally introduced in Hu et al. (2021) leads to suboptimal finetuning of models with large width (embedding dimension). This is due to the fact that adapter matrices A and B in LoRA are updated with the same learning rate. Using scaling arguments for large width networks, we demonstrate that using the same learning rate for A and B does not allow efficient feature learning. We then show that this suboptimality of LoRA can be corrected simply by setting different learning rates for the LoRA adapter matrices A and B with a well-chosen ratio. We call this proposed algorithm LoRA$+$. In our extensive experiments, LoRA$+$ improves performance (1-2% improvements) and finetuning speed (up to $\sim$2X speedup), at the same computational cost as LoRA.
|
[
"['Soufiane Hayou' 'Nikhil Ghosh' 'Bin Yu']"
] |
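The core of LoRA$+$ as described above is simply giving the adapter matrices A and B different learning rates with a well-chosen ratio. A minimal PyTorch sketch using optimizer parameter groups follows; the parameter-name convention (`lora_A`/`lora_B`) and the ratio value are assumptions borrowed from common PEFT setups, not prescriptions from the paper.

```python
import torch

def build_lora_plus_optimizer(model, base_lr=2e-4, lr_ratio=16.0, weight_decay=0.0):
    """AdamW with separate learning rates for LoRA A and B matrices (sketch)."""
    a_params, b_params, other = [], [], []
    for name, p in model.named_parameters():
        if not p.requires_grad:
            continue
        if "lora_A" in name:
            a_params.append(p)
        elif "lora_B" in name:
            b_params.append(p)
        else:
            other.append(p)
    return torch.optim.AdamW(
        [
            {"params": a_params, "lr": base_lr},              # adapter matrix A
            {"params": b_params, "lr": base_lr * lr_ratio},   # adapter matrix B, larger lr
            {"params": other, "lr": base_lr},
        ],
        weight_decay=weight_decay,
    )
```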
null | null |
2402.12365
| null | null |
http://arxiv.org/pdf/2402.12365v2
|
2024-04-30T17:15:35Z
|
2024-02-19T18:52:13Z
|
Universal Physics Transformers: A Framework For Efficiently Scaling
Neural Operators
|
Neural operators, serving as physics surrogate models, have recently gained increased interest. With ever-increasing problem complexity, the natural question arises: what is an efficient way to scale neural operators to larger and more complex simulations - most importantly by taking into account different types of simulation datasets. This is of special interest since, akin to their numerical counterparts, different techniques are used across applications, even if the underlying dynamics of the systems are similar. Whereas the flexibility of transformers has enabled unified architectures across domains, neural operators mostly follow a problem-specific design, where GNNs are commonly used for Lagrangian simulations and grid-based models predominate Eulerian simulations. We introduce Universal Physics Transformers (UPTs), an efficient and unified learning paradigm for a wide range of spatio-temporal problems. UPTs operate without grid- or particle-based latent structures, enabling flexibility and scalability across meshes and particles. UPTs efficiently propagate dynamics in the latent space, emphasized by inverse encoding and decoding techniques. Finally, UPTs allow for queries of the latent space representation at any point in space-time. We demonstrate the diverse applicability and efficacy of UPTs in mesh-based fluid simulations, steady-state Reynolds-averaged Navier-Stokes simulations, and Lagrangian-based dynamics.
|
[
"['Benedikt Alkin' 'Andreas Fürst' 'Simon Schmid' 'Lukas Gruber'\n 'Markus Holzleitner' 'Johannes Brandstetter']"
] |
null | null |
2402.12366
| null | null |
http://arxiv.org/pdf/2402.12366v1
|
2024-02-19T18:53:54Z
|
2024-02-19T18:53:54Z
|
A Critical Evaluation of AI Feedback for Aligning Large Language Models
|
Reinforcement learning with AI feedback (RLAIF) is a popular paradigm for improving the instruction-following abilities of powerful pre-trained language models. RLAIF first performs supervised fine-tuning (SFT) using demonstrations from a teacher model and then further fine-tunes the model with reinforcement learning (RL), using feedback from a critic model. While recent popular open-source models have demonstrated substantial improvements in performance from the RL step, in this paper we question whether the complexity of this RL step is truly warranted for AI feedback. We show that the improvements of the RL step are virtually entirely due to the widespread practice of using a weaker teacher model (e.g. GPT-3.5) for SFT data collection than the critic (e.g., GPT-4) used for AI feedback generation. Specifically, we show that simple supervised fine-tuning with GPT-4 as the teacher outperforms existing RLAIF pipelines. More generally, we find that the gains from RLAIF vary substantially across base model families, test-time evaluation protocols, and critic models. Finally, we provide a mechanistic explanation for when SFT may outperform the full two-step RLAIF pipeline as well as suggestions for making RLAIF maximally useful in practice.
|
[
"['Archit Sharma' 'Sedrick Keh' 'Eric Mitchell' 'Chelsea Finn'\n 'Kushal Arora' 'Thomas Kollar']"
] |
null | null |
2402.12369
| null | null |
http://arxiv.org/pdf/2402.12369v1
|
2024-02-19T18:56:35Z
|
2024-02-19T18:56:35Z
|
Short-Period Variables in TESS Full-Frame Image Light Curves Identified
via Convolutional Neural Networks
|
The Transiting Exoplanet Survey Satellite (TESS) mission measured light from stars in ~85% of the sky throughout its two-year primary mission, resulting in millions of TESS 30-minute cadence light curves to analyze in the search for transiting exoplanets. To search this vast dataset, we aim to provide an approach that is computationally efficient, produces highly performant predictions, and minimizes the required human search effort. We present a convolutional neural network that we train to identify short-period variables. To make a prediction for a given light curve, our network requires no prior target parameters identified using other methods. Our network performs inference on a TESS 30-minute cadence light curve in ~5ms on a single GPU, enabling large-scale archival searches. We present a collection of 14156 short-period variables identified by our network. The majority of our identified variables fall into two prominent populations, one of short-period main sequence binaries and another of Delta Scuti stars. Our neural network model and related code are additionally provided as open-source code for public use and extension.
|
[
"['Greg Olmschenk' 'Richard K. Barry' 'Stela Ishitani Silva'\n 'Brian P. Powell' 'Ethan Kruse' 'Jeremy D. Schnittman'\n 'Agnieszka M. Cieplak' 'Thomas Barclay' 'Siddhant Solanki'\n 'Bianca Ortega' 'John Baker' 'Yesenia Helem Salinas Mamani']"
] |
null | null |
2402.12391
| null | null |
http://arxiv.org/pdf/2402.12391v2
|
2024-02-21T03:42:32Z
|
2024-02-15T06:30:12Z
|
Toward a Team of AI-made Scientists for Scientific Discovery from Gene
Expression Data
|
Machine learning has emerged as a powerful tool for scientific discovery, enabling researchers to extract meaningful insights from complex datasets. For instance, it has facilitated the identification of disease-predictive genes from gene expression data, significantly advancing healthcare. However, the traditional process for analyzing such datasets demands substantial human effort and expertise for the data selection, processing, and analysis. To address this challenge, we introduce a novel framework, a Team of AI-made Scientists (TAIS), designed to streamline the scientific discovery pipeline. TAIS comprises simulated roles, including a project manager, data engineer, and domain expert, each represented by a Large Language Model (LLM). These roles collaborate to replicate the tasks typically performed by data scientists, with a specific focus on identifying disease-predictive genes. Furthermore, we have curated a benchmark dataset to assess TAIS's effectiveness in gene identification, demonstrating our system's potential to significantly enhance the efficiency and scope of scientific exploration. Our findings represent a solid step towards automating scientific discovery through large language models.
|
[
"['Haoyang Liu' 'Yijiang Li' 'Jinglin Jian' 'Yuxuan Cheng' 'Jianrong Lu'\n 'Shuyi Guo' 'Jinglei Zhu' 'Mianchen Zhang' 'Miantong Zhang' 'Haohan Wang']"
] |
null | null |
2402.12394
| null | null |
http://arxiv.org/pdf/2402.12394v1
|
2024-02-16T20:19:28Z
|
2024-02-16T20:19:28Z
|
Improving Model's Interpretability and Reliability using Biomarkers
|
Accurate and interpretable diagnostic models are crucial in the safety-critical field of medicine. We investigate the interpretability of our proposed biomarker-based lung ultrasound diagnostic pipeline to enhance clinicians' diagnostic capabilities. The objective of this study is to assess whether explanations from a decision tree classifier, utilizing biomarkers, can improve users' ability to identify inaccurate model predictions compared to conventional saliency maps. Our findings demonstrate that decision tree explanations, based on clinically established biomarkers, can assist clinicians in detecting false positives, thus improving the reliability of diagnostic models in medicine.
|
[
"['Gautam Rajendrakumar Gare' 'Tom Fox' 'Beam Chansangavej'\n 'Amita Krishnan' 'Ricardo Luis Rodriguez' 'Bennett P deBoisblanc'\n 'Deva Kannan Ramanan' 'John Michael Galeotti']"
] |
null | null |
2402.12397
| null | null |
http://arxiv.org/pdf/2402.12397v2
|
2024-06-25T02:58:06Z
|
2024-02-17T00:22:29Z
|
Multi-class Temporal Logic Neural Networks
|
Time-series data can represent the behaviors of autonomous systems, such as drones and self-driving cars. The task of binary and multi-class classification for time-series data has become a prominent area of research. Neural networks represent a popular approach to classifying data; however, they lack interpretability, which poses a significant challenge in extracting meaningful information from them. Signal Temporal Logic (STL) is a formalism that describes the properties of timed behaviors. We propose a method that combines all of the above: neural networks that represent STL specifications for multi-class classification of time-series data. We offer two key contributions: 1) We introduce a notion of margin for multi-class classification, and 2) we introduce STL-based attributes for enhancing the interpretability of the results. We evaluate our method on two datasets and compare it with state-of-the-art baselines.
|
[
"['Danyang Li' 'Roberto Tron']"
] |
null | null |
2402.12398
| null | null |
http://arxiv.org/pdf/2402.12398v1
|
2024-02-17T05:39:48Z
|
2024-02-17T05:39:48Z
|
Primary and Secondary Factor Consistency as Domain Knowledge to Guide
Happiness Computing in Online Assessment
|
Happiness computing based on large-scale online web data and machine learning methods is an emerging research topic that underpins a range of issues, from personal growth to social stability. Many advanced Machine Learning (ML) models with explanations are used to compute online happiness assessments while maintaining high accuracy of results. However, domain knowledge constraints, such as the primary and secondary relations of happiness factors, are absent from these models, which limits the association between computing results and the right reasons for why they occurred. This article attempts to provide new insights into the explanation consistency from an empirical study perspective. Then we study how to represent and introduce domain knowledge constraints to make ML models more trustworthy. We achieve this through: (1) proving that multiple prediction models with additive factor attributions will have the desirable property of primary and secondary relations consistency, and (2) showing that factor relations with quantity can be represented as an importance distribution for encoding domain knowledge. Differences in factor explanations are penalized by a Kullback-Leibler divergence-based loss among computing models. Experimental results using two online web datasets show that domain knowledge of stable factor relations exists. Using this knowledge not only improves happiness computing accuracy but also reveals more significant happiness factors for assisting decisions.
|
[
"['Xiaohua Wu' 'Lin Li' 'Xiaohui Tao' 'Frank Xing' 'Jingling Yuan']"
] |
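The abstract above penalizes deviations from domain knowledge with a Kullback-Leibler divergence over factor importances. A hedged PyTorch sketch of such a penalty is given below; the normalization of the attributions and the direction of the KL term are illustrative assumptions, not the paper's exact formulation. In training, this penalty would be added to the prediction loss with a weighting coefficient.

```python
import torch
import torch.nn.functional as F

def factor_consistency_penalty(attributions, prior_importance, eps=1e-8):
    """KL penalty keeping additive factor attributions consistent with domain knowledge (sketch).

    attributions:      (batch, n_factors) per-factor attribution scores of an additive model
    prior_importance:  (n_factors,) domain-knowledge importance, e.g. more mass on primary
                       happiness factors than on secondary ones (assumed encoding)
    """
    p = attributions.abs() + eps
    p = p / p.sum(dim=-1, keepdim=True)              # model's factor-importance distribution
    q = prior_importance / prior_importance.sum()    # domain-knowledge distribution
    # KL(p || q), averaged over the batch
    return F.kl_div(q.log().expand_as(p), p, reduction="batchmean")
```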
null | null |
2402.12399
| null | null |
http://arxiv.org/pdf/2402.12399v2
|
2024-02-21T13:33:12Z
|
2024-02-17T06:23:27Z
|
Turn Waste into Worth: Rectifying Top-$k$ Router of MoE
|
Sparse Mixture of Experts (MoE) models are popular for training large language models due to their computational efficiency. However, the commonly used top-$k$ routing mechanism suffers from redundant computation and memory costs due to unbalanced routing. Some experts overflow, so their excess tokens are dropped, while other experts are vacant and padded with zeros, negatively impacting model performance. To address the dropped tokens and padding, we propose the Rectify-Router, comprising the Intra-GPU Rectification and the Fill-in Rectification. The Intra-GPU Rectification handles dropped tokens, efficiently routing them to experts within the GPU where they are located to avoid inter-GPU communication. The Fill-in Rectification addresses padding by replacing padding tokens with the tokens that have high routing scores. Our experimental results demonstrate that the Intra-GPU Rectification and the Fill-in Rectification effectively handle dropped tokens and padding, respectively. Furthermore, combining them achieves superior performance, surpassing the accuracy of the vanilla top-1 router by 4.7%.
|
[
"['Zhiyuan Zeng' 'Qipeng Guo' 'Zhaoye Fei' 'Zhangyue Yin' 'Yunhua Zhou'\n 'Linyang Li' 'Tianxiang Sun' 'Hang Yan' 'Dahua Lin' 'Xipeng Qiu']"
] |
null | null |
2402.12400
| null | null |
http://arxiv.org/pdf/2402.12400v1
|
2024-02-17T20:16:41Z
|
2024-02-17T20:16:41Z
|
Estimating the age-conditioned average treatment effects curves: An
application for assessing load-management strategies in the NBA
|
In the realm of competitive sports, understanding the performance dynamics of athletes, represented by the age curve (showing progression, peak, and decline), is vital. Our research introduces a novel framework for quantifying age-specific treatment effects, enhancing the granularity of performance trajectory analysis. Firstly, we propose a methodology for estimating the age curve using game-level data, diverging from traditional season-level data approaches, and tackling its inherent complexities with a meta-learner framework that leverages advanced machine learning models. This approach uncovers intricate non-linear patterns missed by existing methods. Secondly, our framework enables the identification of causal effects, allowing for a detailed examination of age curves under various conditions. By defining the Age-Conditioned Treatment Effect (ACTE), we facilitate the exploration of causal relationships regarding treatment impacts at specific ages. Finally, applying this methodology to study the effects of rest days on performance metrics, particularly across different ages, offers valuable insights into load management strategies' effectiveness. Our findings underscore the importance of tailored rest periods, highlighting their positive impact on athlete performance and suggesting a reevaluation of current management practices for optimizing athlete performance.
|
[
"['Shinpei Nakamura-Sakai' 'Laura Forastiere' 'Brian Macdonald']"
] |
null | null |
2402.12406
| null | null |
http://arxiv.org/pdf/2402.12406v1
|
2024-02-18T08:13:57Z
|
2024-02-18T08:13:57Z
|
Teacher as a Lenient Expert: Teacher-Agnostic Data-Free Knowledge
Distillation
|
Data-free knowledge distillation (DFKD) aims to distill pretrained knowledge to a student model with the help of a generator without using original data. In such data-free scenarios, achieving stable performance of DFKD is essential due to the unavailability of validation data. Unfortunately, this paper has discovered that existing DFKD methods are quite sensitive to different teacher models, occasionally showing catastrophic failures of distillation, even when using well-trained teacher models. Our observation is that the generator in DFKD is not always guaranteed to produce precise yet diverse samples using the existing representative strategy of minimizing both class-prior and adversarial losses. Through our empirical study, we focus on the fact that class-prior not only decreases the diversity of generated samples, but also cannot completely address the problem of generating unexpectedly low-quality samples depending on teacher models. In this paper, we propose the teacher-agnostic data-free knowledge distillation (TA-DFKD) method, with the goal of more robust and stable performance regardless of teacher models. Our basic idea is to assign the teacher model a lenient expert role for evaluating samples, rather than a strict supervisor that enforces its class-prior on the generator. Specifically, we design a sample selection approach that takes only clean samples verified by the teacher model without imposing restrictions on the power of generating diverse samples. Through extensive experiments, we show that our method successfully achieves both robustness and training stability across various teacher models, while outperforming the existing DFKD methods.
|
[
"['Hyunjune Shin' 'Dong-Wan Choi']"
] |
null | null |
2402.12408
| null | null |
http://arxiv.org/pdf/2402.12408v1
|
2024-02-18T11:24:34Z
|
2024-02-18T11:24:34Z
|
ModelGPT: Unleashing LLM's Capabilities for Tailored Model Generation
|
The rapid advancement of Large Language Models (LLMs) has revolutionized various sectors by automating routine tasks, marking a step toward the realization of Artificial General Intelligence (AGI). However, they still struggle to accommodate the diverse and specific needs of users and simplify the utilization of AI models for the average user. In response, we propose ModelGPT, a novel framework designed to determine and generate AI models specifically tailored to the data or task descriptions provided by the user, leveraging the capabilities of LLMs. Given user requirements, ModelGPT is able to provide tailored models at most 270x faster than the previous paradigms (e.g. all-parameter or LoRA finetuning). Comprehensive experiments on NLP, CV, and Tabular datasets attest to the effectiveness of our framework in making AI models more accessible and user-friendly. Our code is available at https://github.com/IshiKura-a/ModelGPT.
|
[
"['Zihao Tang' 'Zheqi Lv' 'Shengyu Zhang' 'Fei Wu' 'Kun Kuang']"
] |
null | null |
2402.12411
| null | null |
http://arxiv.org/pdf/2402.12411v1
|
2024-02-19T02:34:23Z
|
2024-02-19T02:34:23Z
|
Deep Structural Knowledge Exploitation and Synergy for Estimating Node
Importance Value on Heterogeneous Information Networks
|
The node importance estimation problem has conventionally been studied with homogeneous network topology analysis. To deal with network heterogeneity, a few recent methods employ graph neural models to automatically learn diverse sources of information. However, the major concern is that their fully adaptive learning process may lead to insufficient information exploration, reducing the problem to isolated node value prediction with underperformance and limited interpretability. In this work, we propose a novel learning framework: SKES. Different from previous automatic learning designs, SKES exploits heterogeneous structural knowledge to enrich the informativeness of node representations. Based on a sufficiently uninformative reference, SKES estimates the importance value for any input node, by quantifying its disparity against the reference. This establishes an interpretable node importance computation paradigm. Furthermore, SKES dives deep into the understanding that "nodes with similar characteristics are prone to have similar importance values" whilst guaranteeing that such informativeness disparity between any different nodes is orderly reflected by the embedding distance of their associated latent features. Extensive experiments on three widely-evaluated benchmarks demonstrate the performance superiority of SKES over several recent competing methods.
|
[
"['Yankai Chen' 'Yixiang Fang' 'Qiongyan Wang' 'Xin Cao' 'Irwin King']"
] |
null | null |
2402.12415
| null | null |
http://arxiv.org/pdf/2402.12415v1
|
2024-02-19T07:47:23Z
|
2024-02-19T07:47:23Z
|
Vehicle-group-based Crash Risk Formation and Propagation Analysis for
Expressways
|
Previous studies in predicting crash risk primarily associated the number or likelihood of crashes on a road segment with traffic parameters or geometric characteristics of the segment, usually neglecting the impact of vehicles' continuous movement and interactions with nearby vehicles. Advancements in communication technologies have enabled driving information to be collected from surrounding vehicles, enabling the study of group-based crash risks. Based on high-resolution vehicle trajectory data, this research focused on vehicle groups as the subject of analysis and explored risk formation and propagation mechanisms considering features of vehicle groups and road segments. Several key factors contributing to crash risks were identified, including past high-risk vehicle-group states, complex vehicle behaviors, a high percentage of large vehicles, frequent lane changes within a vehicle group, and specific road geometries. A multinomial logistic regression model was developed to analyze the spatial risk propagation patterns, which were classified based on the trend of high-risk occurrences within vehicle groups. The results indicated that extended periods of high-risk states, increases in vehicle-group size, and frequent lane changes are associated with adverse risk propagation patterns. Conversely, smoother traffic flow and high initial crash risk values are linked to risk dissipation. Furthermore, the study conducted sensitivity analysis on different types of classifiers, prediction time intervals, and adaptive TTC thresholds. The highest AUC value for vehicle-group risk prediction surpassed 0.93. The findings provide valuable insights to researchers and practitioners in understanding and predicting vehicle-group safety, ultimately improving active traffic safety management and operations of Connected and Autonomous Vehicles.
|
[
"['Tianheng Zhu' 'Ling Wang' 'Yiheng Feng' 'Wanjing Ma' 'Mohamed Abdel-Aty']"
] |
null | null |
2402.12417
| null | null |
http://arxiv.org/pdf/2402.12417v1
|
2024-02-19T08:27:53Z
|
2024-02-19T08:27:53Z
|
Predicting trucking accidents with truck drivers' safety climate
perception across companies: A transfer learning approach
|
There is a rising interest in using artificial intelligence (AI)-powered safety analytics to predict accidents in the trucking industry. Companies may face the practical challenge, however, of not having enough data to develop good safety analytics models. Although pretrained models may offer a solution for such companies, existing safety research using transfer learning has mostly focused on computer vision and natural language processing, rather than accident analytics. To fill the above gap, we propose a pretrain-then-fine-tune transfer learning approach to help any company leverage other companies' data to develop AI models for a more accurate prediction of accident risk. We also develop SafeNet, a deep neural network algorithm for classification tasks suitable for accident prediction. Using the safety climate survey data from seven trucking companies with different data sizes, we show that our proposed approach results in better model performance compared to training the model from scratch using only the target company's data. We also show that for the transfer learning model to be effective, the pretrained model should be developed with larger datasets from diverse sources. The trucking industry may, thus, consider pooling safety analytics data from a wide range of companies to develop pretrained models and share them within the industry for better knowledge and resource transfer. The above contributions point to the promise of advanced safety analytics to make the industry safer and more sustainable.
|
[
"['Kailai Sun' 'Tianxiang Lan' 'Say Hong Kam' 'Yang Miang Goh'\n 'Yueng-Hsiang Huang']"
] |
null | null |
2402.12418
| null | null |
http://arxiv.org/pdf/2402.12418v1
|
2024-02-19T09:52:45Z
|
2024-02-19T09:52:45Z
|
Beyond Uniform Scaling: Exploring Depth Heterogeneity in Neural
Architectures
|
Conventional scaling of neural networks typically involves designing a base network and growing different dimensions like width, depth, etc. of the same by some predefined scaling factors. We introduce an automated scaling approach leveraging second-order loss landscape information. Our method is flexible towards skip connections, a mainstay in modern vision transformers. Our training-aware method jointly scales and trains transformers without additional training iterations. Motivated by the hypothesis that not all neurons need uniform depth complexity, our approach embraces depth heterogeneity. Extensive evaluations on DeiT-S with ImageNet100 show a 2.5% accuracy gain and 10% parameter efficiency improvement over conventional scaling. Scaled networks demonstrate superior performance when trained from scratch on small-scale datasets. We introduce the first intact scaling mechanism for vision transformers, a step towards efficient model scaling.
|
[
"['Akash Guna R. T' 'Arnav Chavan' 'Deepak Gupta']"
] |
null | null |
2402.12419
| null | null |
http://arxiv.org/pdf/2402.12419v1
|
2024-02-19T09:55:32Z
|
2024-02-19T09:55:32Z
|
EBFT: Effective and Block-Wise Fine-Tuning for Sparse LLMs
|
Existing methods for fine-tuning sparse LLMs often suffer from resource-intensive requirements and high retraining costs. Additionally, many fine-tuning methods rely on approximations or heuristic optimization strategies, which may lead to suboptimal solutions. To address these issues, we propose an efficient and fast framework for fine-tuning sparse LLMs based on minimizing reconstruction error. Our approach involves sampling a small dataset for calibration and utilizing backpropagation to iteratively minimize the reconstruction error of each block, aiming for optimal solutions. Extensive experiments on various benchmarks consistently demonstrate the superiority of our method over other baselines. For instance, on the Wikitext2 dataset with LlamaV1-7B at 70% sparsity, our proposed EBFT achieves a perplexity of 16.88, surpassing the state-of-the-art DSnoT with a perplexity of 75.14. Moreover, with a structured sparsity ratio of 26%, EBFT achieves a perplexity of 16.27, outperforming LoRA (perplexity 16.44). Furthermore, the fine-tuning process of EBFT for LlamaV1-7B only takes approximately 30 minutes, and the entire framework can be executed on a single 16GB GPU. The source code is available at https://github.com/sunggo/EBFT.
|
[
"['Song Guo' 'Fan Wu' 'Lei Zhang' 'Xiawu Zheng' 'Shengchuan Zhang'\n 'Fei Chao' 'Yiyu Shi' 'Rongrong Ji']"
] |
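A hedged PyTorch sketch of the block-wise reconstruction idea above: a small calibration batch is pushed through the dense block to produce targets, and the sparse block is fine-tuned by backpropagation to match them while its sparsity mask stays fixed. The mask handling, optimizer, and step count are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def finetune_block(sparse_block, dense_block, calib_inputs, masks, steps=100, lr=1e-4):
    """Minimize one block's reconstruction error against its dense counterpart (sketch).

    calib_inputs: hidden states entering this block for a small calibration set
    masks:        {param_name: 0/1 tensor} fixing the sparsity pattern (assumed format)
    """
    with torch.no_grad():
        targets = dense_block(calib_inputs)          # dense block outputs serve as targets

    opt = torch.optim.Adam(sparse_block.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.mse_loss(sparse_block(calib_inputs), targets)
        loss.backward()
        opt.step()
        with torch.no_grad():                        # keep pruned weights at zero
            for name, p in sparse_block.named_parameters():
                if name in masks:
                    p.mul_(masks[name])
    return loss.item()
```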
null | null |
2402.12423
| null | null |
http://arxiv.org/pdf/2402.12423v2
|
2024-06-04T11:03:57Z
|
2024-02-19T16:22:21Z
|
On the Semantic Latent Space of Diffusion-Based Text-to-Speech Models
|
The incorporation of Denoising Diffusion Models (DDMs) in the Text-to-Speech (TTS) domain is rising, providing great value in synthesizing high quality speech. Although they exhibit impressive audio quality, the extent of their semantic capabilities is unknown, and controlling their synthesized speech's vocal properties remains a challenge. Inspired by recent advances in image synthesis, we explore the latent space of frozen TTS models, which is composed of the latent bottleneck activations of the DDM's denoiser. We identify that this space contains rich semantic information, and outline several novel methods for finding semantic directions within it, both supervised and unsupervised. We then demonstrate how these enable off-the-shelf audio editing, without any further training, architectural changes or data requirements. We present evidence of the semantic and acoustic qualities of the edited audio, and provide supplemental samples: https://latent-analysis-grad-tts.github.io/speech-samples/.
|
[
"['Miri Varshavsky-Hassid' 'Roy Hirsch' 'Regev Cohen' 'Tomer Golany'\n 'Daniel Freedman' 'Ehud Rivlin']"
] |
null | null |
2402.12424
| null | null |
http://arxiv.org/pdf/2402.12424v4
|
2024-06-06T03:02:25Z
|
2024-02-19T16:34:50Z
|
Tables as Texts or Images: Evaluating the Table Reasoning Ability of
LLMs and MLLMs
|
In this paper, we investigate the effectiveness of various LLMs in interpreting tabular data through different prompting strategies and data formats. Our analyses extend across six benchmarks for table-related tasks such as question-answering and fact-checking. We introduce for the first time the assessment of LLMs' performance on image-based table representations. Specifically, we compare five text-based and three image-based table representations, demonstrating the role of representation and prompting on LLM performance. Our study provides insights into the effective use of LLMs on table-related tasks.
|
[
"['Naihao Deng' 'Zhenjie Sun' 'Ruiqi He' 'Aman Sikka' 'Yulong Chen'\n 'Lin Ma' 'Yue Zhang' 'Rada Mihalcea']"
] |
null | null |
2402.12426
| null | null |
http://arxiv.org/pdf/2402.12426v2
|
2024-03-05T16:31:53Z
|
2024-02-19T17:52:29Z
|
Attacks on Node Attributes in Graph Neural Networks
|
Graphs are commonly used to model complex networks prevalent in modern social media and literacy applications. Our research investigates the vulnerability of these graphs through the application of feature-based adversarial attacks, focusing on both decision-time attacks and poisoning attacks. In contrast to state-of-the-art models like Net Attack and Meta Attack, which target node attributes and graph structure, our study specifically targets node attributes. For our analysis, we utilized the text dataset Hellaswag and graph datasets Cora and CiteSeer, providing a diverse basis for evaluation. Our findings indicate that decision-time attacks using Projected Gradient Descent (PGD) are more potent compared to poisoning attacks that employ Mean Node Embeddings and Graph Contrastive Learning strategies. This provides insights for graph data security, pinpointing where graph-based models are most vulnerable and thereby informing the development of stronger defense mechanisms against such attacks.
|
[
"['Ying Xu' 'Michael Lanier' 'Anindya Sarkar' 'Yevgeniy Vorobeychik']"
] |
null | null |
2402.12435
| null | null |
http://arxiv.org/pdf/2402.12435v1
|
2024-02-19T19:00:01Z
|
2024-02-19T19:00:01Z
|
Emulating the interstellar medium chemistry with neural operators
|
Galaxy formation and evolution critically depend on understanding the complex photo-chemical processes that govern the evolution and thermodynamics of the InterStellar Medium (ISM). Computationally, solving chemistry is among the heaviest tasks in cosmological and astrophysical simulations. The evolution of such a non-equilibrium photo-chemical network relies on implicit, precise, computationally costly, ordinary differential equation (ODE) solvers. Here, we aim at substituting such procedural solvers with fast, pre-trained emulators based on neural operators. We emulate a non-equilibrium chemical network up to H$_2$ formation (9 species, 52 reactions) by adopting the DeepONet formalism, i.e. by splitting the ODE solver operator that maps the initial conditions and time evolution into a tensor product of two neural networks. We use $\texttt{KROME}$ to generate a training set spanning $-2 \leq \log(n/\mathrm{cm}^{-3}) \leq 3.5$, $\log(20) \leq \log(T/\mathrm{K}) \leq 5.5$, $-6 \leq \log(n_i/n) < 0$, and by adopting an incident radiation field $\textbf{F}$ sampled in 10 energy bins with a continuity prior. We separately train the solver for $T$ and each $n_i$ for $\simeq 4.34\,\rm GPUhrs$. Compared with the reference solutions obtained by $\texttt{KROME}$ for single zone models, the typical precision obtained is of order $10^{-2}$, i.e. $10\times$ better with a training that is $40\times$ less costly with respect to previous emulators, which however considered only a fixed $\mathbf{F}$. The present model achieves a speed-up of a factor of $128\times$ with respect to stiff ODE solvers. Our neural emulator represents a significant leap forward in the modeling of ISM chemistry, offering a good balance of precision, versatility, and computational efficiency.
|
[
"['Lorenzo Branca' 'Andrea Pallottini']"
] |
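The emulator above follows the DeepONet formalism: a branch network encodes the initial conditions (abundances, temperature, radiation bins) and a trunk network encodes the evolution time, with the prediction taken as their inner product. A minimal PyTorch sketch of that split is given below; layer sizes, activations, and the one-output-per-quantity setup are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DeepONet(nn.Module):
    """Minimal DeepONet: branch net for initial conditions, trunk net for time."""

    def __init__(self, n_cond, p=64, hidden=128):
        super().__init__()
        self.branch = nn.Sequential(nn.Linear(n_cond, hidden), nn.GELU(),
                                    nn.Linear(hidden, p))
        self.trunk = nn.Sequential(nn.Linear(1, hidden), nn.GELU(),
                                   nn.Linear(hidden, p))
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, cond, t):
        # cond: (batch, n_cond) initial abundances/temperature/radiation bins
        # t:    (batch, 1)      evolution time
        return (self.branch(cond) * self.trunk(t)).sum(dim=-1, keepdim=True) + self.bias
```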
null | null |
2402.12448
| null | null |
http://arxiv.org/abs/2402.12448v1
|
2024-02-19T19:00:09Z
|
2024-02-19T19:00:09Z
|
DBNets: A publicly available deep learning tool to measure the masses of
young planets in dusty protoplanetary discs
|
Current methods to characterize embedded planets in protoplanetary disc observations are severely limited either in their ability to fully account for the observed complex physics or in their computational and time costs. To address this shortcoming, we developed DBNets: a deep learning tool, based on convolutional neural networks, that analyses substructures observed in the dust continuum emission of protoplanetary discs to quickly infer the mass of allegedly embedded planets. We focussed on developing a method to reliably quantify not only the planet mass, but also the associated uncertainty introduced by our modelling and adopted techniques. Our tests gave promising results achieving an 87% reduction of the log Mp mean squared error with respect to an analytical formula fitted on the same data (DBNets metrics: lmse 0.016, r2-score 97%). With the goal of providing the final user of DBNets with all the tools needed to interpret their measurements and decide on their significance, we extensively tested our tool on out-of-distribution data. We found that DBNets can identify inputs strongly outside its training scope returning an uncertainty above a specific threshold and we thus provided a rejection criterion that helps determine the significance of the results obtained. Additionally, we outlined some limitations of our tool: it can be reliably applied only on discs observed with inclinations below approximately 60°, in the optically thin regime, with a resolution 8 times better than the gap radial location and with a signal-to-noise ratio higher than approximately ten. Finally, we applied DBNets to 33 actual observations of protoplanetary discs measuring the mass of 48 proposed planets and comparing our results with the available literature. We confirmed that most of the observed gaps imply planets in the sub-Jupiter regime. DBNets is publicly available at dbnets.fisica.unimi.it.
|
[
"['Alessandro Ruzza' 'Giuseppe Lodato' 'Giovanni Pietro Rosotti']"
] |
null | null |
2402.12465
| null | null |
http://arxiv.org/pdf/2402.12465v1
|
2024-02-19T19:11:22Z
|
2024-02-19T19:11:22Z
|
Neuro-mimetic Task-free Unsupervised Online Learning with Continual
Self-Organizing Maps
|
An intelligent system capable of continual learning is one that can process and extract knowledge from potentially infinitely long streams of pattern vectors. The major challenge that makes crafting such a system difficult is known as catastrophic forgetting - an agent, such as one based on artificial neural networks (ANNs), struggles to retain previously acquired knowledge when learning from new samples. Furthermore, ensuring that knowledge is preserved for previous tasks becomes more challenging when input is not supplemented with task boundary information. Although forgetting in the context of ANNs has been studied extensively, there still exists far less work investigating it in terms of unsupervised architectures such as the venerable self-organizing map (SOM), a neural model often used in clustering and dimensionality reduction. While the internal mechanisms of SOMs could, in principle, yield sparse representations that improve memory retention, we observe that, when a fixed-size SOM processes continuous data streams, it experiences concept drift. In light of this, we propose a generalization of the SOM, the continual SOM (CSOM), which is capable of online unsupervised learning under a low memory budget. Our results, on benchmarks including MNIST, Kuzushiji-MNIST, and Fashion-MNIST, show almost a twofold increase in accuracy, and on CIFAR-10 we demonstrate a state-of-the-art result in the (online) unsupervised class-incremental learning setting.
|
[
"['Hitesh Vaidya' 'Travis Desell' 'Ankur Mali' 'Alexander Ororbia']"
] |
null | null |
2402.12475
| null | null |
http://arxiv.org/pdf/2402.12475v2
|
2024-06-20T04:08:36Z
|
2024-02-19T19:21:45Z
|
Diffeomorphism Neural Operator for various domains and parameters of
partial differential equations
|
In scientific and engineering applications, solving partial differential equations (PDEs) across various parameters and domains normally relies on resource-intensive numerical methods. Neural operators based on deep learning offer a promising alternative for solving PDEs by directly learning physical laws from data. However, current neural operator methods are limited to solving PDEs on fixed domains. Expanding neural operators to solve PDEs on various domains holds significant promise in medical imaging, engineering design and manufacturing applications, where geometric and parameter changes are essential. This paper presents a novel neural operator learning framework for solving PDEs with various domains and parameters defined for physical systems, named diffeomorphism neural operator (DNO). The main idea is that a neural operator learns in a generic domain which is diffeomorphically mapped from various physics domains expressed by the same PDE. In this way, the challenge of operator learning on various domains is transformed into operator learning on the generic domain. The generalization performance of DNO on different domains can be assessed by a proposed method which evaluates the geometric similarity between a new domain and the domains of the training dataset after diffeomorphism. Experiments on Darcy flow, pipe flow, airfoil flow and mechanics were carried out, where harmonic and volume parameterizations were used as the diffeomorphisms for 2D and 3D domains. The DNO framework demonstrated robust learning capabilities and strong generalization performance across various domains and parameters.
|
[
"['Zhiwei Zhao' 'Changqing Liu' 'Yingguang Li' 'Zhibin Chen' 'Xu Liu']"
] |
null | null |
2402.12479
| null | null |
http://arxiv.org/pdf/2402.12479v3
|
2024-06-25T13:10:06Z
|
2024-02-19T19:34:07Z
|
In value-based deep reinforcement learning, a pruned network is a good
network
|
Recent work has shown that deep reinforcement learning agents have difficulty in effectively using their network parameters. We leverage prior insights into the advantages of sparse training techniques and demonstrate that gradual magnitude pruning enables value-based agents to maximize parameter effectiveness. This results in networks that yield dramatic performance improvements over traditional networks, using only a small fraction of the full network parameters.
|
[
"['Johan Obando-Ceron' 'Aaron Courville' 'Pablo Samuel Castro']"
] |
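Gradual magnitude pruning, as used above, repeatedly zeroes the smallest-magnitude weights while the sparsity target is ramped up over training. A minimal PyTorch sketch follows; the global threshold, the restriction to 2D-or-higher weight tensors, and the polynomial ramp are illustrative assumptions, not the authors' exact schedule.

```python
import torch

def prune_by_magnitude(model, sparsity):
    """Zero out the globally smallest-magnitude weights of a model (sketch)."""
    weights = [p for p in model.parameters() if p.dim() >= 2]   # skip biases/norm params
    all_mags = torch.cat([p.detach().abs().flatten() for p in weights])
    k = int(sparsity * all_mags.numel())
    if k == 0:
        return
    threshold = torch.kthvalue(all_mags, k).values
    with torch.no_grad():
        for p in weights:
            p.mul_((p.abs() > threshold).float())

def sparsity_at(step, start_step, end_step, final_sparsity=0.95):
    """Polynomial ramp of the sparsity target over training (assumed schedule)."""
    if step < start_step:
        return 0.0
    frac = min(1.0, (step - start_step) / max(1, end_step - start_step))
    return final_sparsity * (1.0 - (1.0 - frac) ** 3)
```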
null | null |
2402.12482
| null | null |
http://arxiv.org/pdf/2402.12482v1
|
2024-02-19T19:38:37Z
|
2024-02-19T19:38:37Z
|
SECP: A Speech Enhancement-Based Curation Pipeline For Scalable
Acquisition Of Clean Speech
|
As more speech technologies rely on a supervised deep learning approach with clean speech as the ground truth, a methodology to onboard said speech at scale is needed. However, this approach needs to minimize the dependency on human listening and annotation, only requiring a human-in-the-loop when needed. In this paper, we address this issue by outlining Speech Enhancement-based Curation Pipeline (SECP) which serves as a framework to onboard clean speech. This clean speech can then train a speech enhancement model, which can further refine the original dataset and thus close the iterative loop. By running two iterative rounds, we observe that enhanced output used as ground truth does not degrade model performance according to $\Delta_{PESQ}$, a metric used in this paper. We also show through comparative mean opinion score (CMOS) based subjective tests that the highest and lowest bound of refined data is perceptually better than the original data.
|
[
"['Adam Sabra' 'Cyprian Wronka' 'Michelle Mao' 'Samer Hijazi']"
] |
null | null |
2402.12490
| null | null |
http://arxiv.org/pdf/2402.12490v1
|
2024-02-19T19:54:03Z
|
2024-02-19T19:54:03Z
|
Towards Cross-Domain Continual Learning
|
Continual learning is a process that involves training learning agents to sequentially master a stream of tasks or classes without revisiting past data. The challenge lies in leveraging previously acquired knowledge to learn new tasks efficiently, while avoiding catastrophic forgetting. Existing methods primarily focus on single domains, restricting their applicability to specific problems. In this work, we introduce a novel approach called Cross-Domain Continual Learning (CDCL) that addresses the limitation of being restricted to single supervised domains. Our method combines inter- and intra-task cross-attention mechanisms within a compact convolutional network. This integration enables the model to maintain alignment with features from previous tasks, thereby delaying the data drift that may occur between tasks, while performing unsupervised cross-domain adaptation (UDA) between related domains. By leveraging an intra-task-specific pseudo-labeling method, we ensure accurate input pairs for both labeled and unlabeled samples, enhancing the learning process. To validate our approach, we conduct extensive experiments on public UDA datasets, showcasing its positive performance on cross-domain continual learning challenges. Additionally, our work introduces incremental ideas that contribute to the advancement of this field. We make our code and models available to encourage further exploration and reproduction of our results: https://github.com/Ivsucram/CDCL
|
[
"['Marcus de Carvalho' 'Mahardhika Pratama' 'Jie Zhang' 'Chua Haoyan'\n 'Edward Yapp']"
] |
null | null |
2402.12498
| null | null |
http://arxiv.org/pdf/2402.12498v1
|
2024-02-19T20:05:41Z
|
2024-02-19T20:05:41Z
|
Feudal Networks for Visual Navigation
|
Visual navigation follows the intuition that humans can navigate without detailed maps. A common approach is interactive exploration while building a topological graph with images at nodes that can be used for planning. Recent variations learn from passive videos and can navigate using complex social and semantic cues. However, a significant number of training videos are needed, large graphs are utilized, and scenes are not unseen since odometry is utilized. We introduce a new approach to visual navigation using feudal learning, which employs a hierarchical structure consisting of a worker agent, a mid-level manager, and a high-level manager. Key to the feudal learning paradigm, agents at each level see a different aspect of the task and operate at different spatial and temporal scales. Two unique modules are developed in this framework. For the high-level manager, we learn a memory proxy map in a self supervised manner to record prior observations in a learned latent space and avoid the use of graphs and odometry. For the mid-level manager, we develop a waypoint network that outputs intermediate subgoals imitating human waypoint selection during local navigation. This waypoint network is pre-trained using a new, small set of teleoperation videos that we make publicly available, with training environments different from testing environments. The resulting feudal navigation network achieves near SOTA performance, while providing a novel no-RL, no-graph, no-odometry, no-metric map approach to the image goal navigation task.
|
[
"['Faith Johnson' 'Bryan Bo Cao' 'Kristin Dana' 'Shubham Jain'\n 'Ashwin Ashok']"
] |
null | null |
2402.12499
| null | null |
http://arxiv.org/pdf/2402.12499v1
|
2024-02-19T20:06:15Z
|
2024-02-19T20:06:15Z
|
Automated Security Response through Online Learning with Adaptive
Conjectures
|
We study automated security response for an IT infrastructure and formulate the interaction between an attacker and a defender as a partially observed, non-stationary game. We relax the standard assumption that the game model is correctly specified and consider that each player has a probabilistic conjecture about the model, which may be misspecified in the sense that the true model has probability 0. This formulation allows us to capture uncertainty about the infrastructure and the intents of the players. To learn effective game strategies online, we design a novel method where a player iteratively adapts its conjecture using Bayesian learning and updates its strategy through rollout. We prove that the conjectures converge to best fits, and we provide a bound on the performance improvement that rollout enables with a conjectured model. To characterize the steady state of the game, we propose a variant of the Berk-Nash equilibrium. We present our method through an advanced persistent threat use case. Simulation studies based on testbed measurements show that our method produces effective security strategies that adapt to a changing environment. We also find that our method enables faster convergence than current reinforcement learning techniques.
|
[
"['Kim Hammar' 'Tao Li' 'Rolf Stadler' 'Quanyan Zhu']"
] |
null | null |
2402.12500
| null | null |
http://arxiv.org/pdf/2402.12500v1
|
2024-02-19T20:08:13Z
|
2024-02-19T20:08:13Z
|
Integrating kNN with Foundation Models for Adaptable and Privacy-Aware
Image Classification
|
Traditional deep learning models implicitly encode knowledge, limiting their transparency and ability to adapt to data changes. Yet, this adaptability is vital for addressing user data privacy concerns. We address this limitation by storing embeddings of the underlying training data independently of the model weights, enabling dynamic data modifications without retraining. Specifically, our approach integrates the $k$-Nearest Neighbor ($k$-NN) classifier with a vision-based foundation model, pre-trained self-supervised on natural images, enhancing interpretability and adaptability. We share open-source implementations of a previously unpublished baseline method as well as our performance-improving contributions. Quantitative experiments confirm improved classification across established benchmark datasets and the method's applicability to distinct medical image classification tasks. Additionally, we assess the method's robustness in continual learning and data removal scenarios. The approach exhibits great promise for bridging the gap between foundation models' performance and challenges tied to data privacy. The source code is available at https://github.com/TobArc/privacy-aware-image-classification-with-kNN.
|
[
"['Sebastian Doerrich' 'Tobias Archut' 'Francesco Di Salvo'\n 'Christian Ledig']"
] |
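A minimal sketch of the approach above: embeddings from a frozen, self-supervised vision backbone are stored outside the model weights and classified with $k$-NN, so samples can be added or removed without retraining. The backbone interface, cosine metric, and removal helper below are illustrative assumptions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

class KNNOverEmbeddings:
    """k-NN classifier over a replaceable store of foundation-model embeddings (sketch)."""

    def __init__(self, k=5):
        self.knn = KNeighborsClassifier(n_neighbors=k, metric="cosine")
        self.X, self.y = None, None

    def fit(self, embeddings, labels):
        self.X, self.y = np.asarray(embeddings), np.asarray(labels)
        self.knn.fit(self.X, self.y)

    def remove(self, keep_mask):
        # Honor a data-removal request by refitting without the removed rows
        self.fit(self.X[keep_mask], self.y[keep_mask])

    def predict(self, embeddings):
        return self.knn.predict(embeddings)
```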
null | null |
2402.12503
| null | null |
http://arxiv.org/pdf/2402.12503v3
|
2024-05-24T13:35:59Z
|
2024-02-19T20:11:46Z
|
PARCv2: Physics-aware Recurrent Convolutional Neural Networks for
Spatiotemporal Dynamics Modeling
|
Modeling unsteady, fast transient, and advection-dominated physics problems is a pressing challenge for physics-aware deep learning (PADL). The physics of complex systems is governed by large systems of partial differential equations (PDEs) and ancillary constitutive models with nonlinear structures, as well as evolving state fields exhibiting sharp gradients and rapidly deforming material interfaces. Here, we investigate an inductive bias approach that is versatile and generalizable to model generic nonlinear field evolution problems. Our study focuses on the recent physics-aware recurrent convolutions (PARC), which incorporates a differentiator-integrator architecture that inductively models the spatiotemporal dynamics of generic physical systems. We extend the capabilities of PARC to simulate unsteady, transient, and advection-dominant systems. The extended model, referred to as PARCv2, is equipped with differential operators to model advection-reaction-diffusion equations, as well as a hybrid integral solver for stable, long-time predictions. PARCv2 is tested on both standard benchmark problems in fluid dynamics, namely Burgers and Navier-Stokes equations, and then applied to more complex shock-induced reaction problems in energetic materials. We evaluate the behavior of PARCv2 in comparison to other physics-informed and learning bias models and demonstrate its potential to model unsteady and advection-dominant dynamics regimes.
|
[
"['Phong C. H. Nguyen' 'Xinlun Cheng' 'Shahab Azarfar' 'Pradeep Seshadri'\n 'Yen T. Nguyen' 'Munho Kim' 'Sanghun Choi' 'H. S. Udaykumar'\n 'Stephen Baek']"
] |
null | null |
2402.12508
| null | null |
http://arxiv.org/pdf/2402.12508v1
|
2024-02-19T20:18:29Z
|
2024-02-19T20:18:29Z
|
SDEs for Minimax Optimization
|
Minimax optimization problems have attracted a lot of attention over the past few years, with applications ranging from economics to machine learning. While advanced optimization methods exist for such problems, characterizing their dynamics in stochastic scenarios remains notably challenging. In this paper, we pioneer the use of stochastic differential equations (SDEs) to analyze and compare Minimax optimizers. Our SDE models for Stochastic Gradient Descent-Ascent, Stochastic Extragradient, and Stochastic Hamiltonian Gradient Descent are provable approximations of their algorithmic counterparts, clearly showcasing the interplay between hyperparameters, implicit regularization, and implicit curvature-induced noise. This perspective also allows for a unified and simplified analysis strategy based on the principles of Itô calculus. Finally, our approach facilitates the derivation of convergence conditions and closed-form solutions for the dynamics in simplified settings, unveiling further insights into the behavior of different optimizers.
|
[
"['Enea Monzio Compagnoni' 'Antonio Orvieto' 'Hans Kersting'\n 'Frank Norbert Proske' 'Aurelien Lucchi']"
] |
null | null |
2402.12513
| null | null |
http://arxiv.org/pdf/2402.12513v1
|
2024-02-19T20:21:09Z
|
2024-02-19T20:21:09Z
|
Induced Model Matching: How Restricted Models Can Help Larger Ones
|
We consider scenarios where a very accurate predictive model using restricted features is available at the time of training of a larger, full-featured, model. This restricted model may be thought of as "side-information", derived either from an auxiliary exhaustive dataset or on the same dataset, by forcing the restriction. How can the restricted model be useful to the full model? We propose an approach for transferring the knowledge of the restricted model to the full model, by aligning the full model's context-restricted performance with that of the restricted model. We call this methodology Induced Model Matching (IMM) and first illustrate its general applicability by using logistic regression as a toy example. We then explore IMM's use in language modeling, the application that initially inspired it, and where it offers an explicit foundation in contrast to the implicit use of restricted models in techniques such as noising. We demonstrate the methodology on both LSTM and transformer full models, using $N$-grams as restricted models. To further illustrate the potential of the principle whenever it is much cheaper to collect restricted rather than full information, we conclude with a simple RL example where POMDP policies can improve learned MDP policies via IMM.
|
[
"['Usama Muneeb' 'Mesrob I. Ohannessian']"
] |
null | null |
2402.12518
| null | null |
http://arxiv.org/pdf/2402.12518v2
|
2024-03-19T15:38:29Z
|
2024-02-19T20:29:34Z
|
Gaussian Process Neural Additive Models
|
Deep neural networks have revolutionized many fields, but their black-box nature also occasionally prevents their wider adoption in fields such as healthcare and finance, where interpretable and explainable models are required. The recent development of Neural Additive Models (NAMs) is a significant step in the direction of interpretable deep learning for tabular datasets. In this paper, we propose a new subclass of NAMs that use a single-layer neural network construction of the Gaussian process via random Fourier features, which we call Gaussian Process Neural Additive Models (GP-NAM). GP-NAMs have the advantage of a convex objective function and a number of trainable parameters that grows linearly with feature dimensionality. They suffer no loss in performance compared to deeper NAM approaches because GPs are well-suited for learning complex non-parametric univariate functions. We demonstrate the performance of GP-NAM on several tabular datasets, showing that it achieves comparable or better performance in both classification and regression tasks with a large reduction in the number of parameters.
|
[
"['Wei Zhang' 'Brian Barr' 'John Paisley']"
] |
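The construction described above, a single-layer random-Fourier-feature approximation of a Gaussian process fitted additively per feature, can be sketched as follows. This is a hedged illustration under assumed hyperparameters (number of random features, lengthscale, ridge penalty), not the authors' released implementation; it does show why the objective is convex and why the parameter count grows linearly with the number of features.

```python
import numpy as np

rng = np.random.default_rng(0)

def rff_features(x, D=32, lengthscale=1.0, seed=0):
    """Random Fourier features approximating an RBF-kernel GP on a single feature.
    D and the lengthscale are arbitrary illustrative choices."""
    r = np.random.default_rng(seed)
    w = r.standard_normal(D) / lengthscale
    b = r.uniform(0, 2 * np.pi, D)
    return np.sqrt(2.0 / D) * np.cos(np.outer(x, w) + b)

# Synthetic tabular data with an additive ground truth.
X = rng.uniform(-3, 3, size=(1000, 4))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + X[:, 2] + 0.1 * rng.standard_normal(1000)

# Additive design matrix: one RFF block per feature -> parameters grow linearly in d.
Phi = np.hstack([rff_features(X[:, j], seed=j) for j in range(X.shape[1])])
Phi = np.hstack([Phi, np.ones((len(X), 1))])           # intercept column

# Convex objective: ridge regression solved in closed form.
lam = 1e-2
beta = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ y)
print(np.mean((Phi @ beta - y) ** 2))                   # training MSE
```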
null | null |
2402.12527
| null | null |
http://arxiv.org/pdf/2402.12527v1
|
2024-02-19T20:38:00Z
|
2024-02-19T20:38:00Z
|
The Edge-of-Reach Problem in Offline Model-Based Reinforcement Learning
|
Offline reinforcement learning aims to enable agents to be trained from pre-collected datasets, however, this comes with the added challenge of estimating the value of behavior not covered in the dataset. Model-based methods offer a solution by allowing agents to collect additional synthetic data via rollouts in a learned dynamics model. The prevailing theoretical understanding is that this can then be viewed as online reinforcement learning in an approximate dynamics model, and any remaining gap is therefore assumed to be due to the imperfect dynamics model. Surprisingly, however, we find that if the learned dynamics model is replaced by the true error-free dynamics, existing model-based methods completely fail. This reveals a major misconception. Our subsequent investigation finds that the general procedure used in model-based algorithms results in the existence of a set of edge-of-reach states which trigger pathological value overestimation and collapse in Bellman-based algorithms. We term this the edge-of-reach problem. Based on this, we fill some gaps in existing theory and also explain how prior model-based methods are inadvertently addressing the true underlying edge-of-reach problem. Finally, we propose Reach-Aware Value Learning (RAVL), a simple and robust method that directly addresses the edge-of-reach problem and achieves strong performance across both proprioceptive and pixel-based benchmarks. Code open-sourced at: https://github.com/anyasims/edge-of-reach.
|
[
"['Anya Sims' 'Cong Lu' 'Yee Whye Teh']"
] |
null | null |
2402.12530
| null | null |
http://arxiv.org/pdf/2402.12530v1
|
2024-02-19T20:40:48Z
|
2024-02-19T20:40:48Z
|
Parallel Structures in Pre-training Data Yield In-Context Learning
|
Pre-trained language models (LMs) are capable of in-context learning (ICL): they can adapt to a task with only a few examples given in the prompt without any parameter update. However, it is unclear where this capability comes from as there is a stark distribution shift between pre-training text and ICL prompts. In this work, we study what patterns of the pre-training data contribute to ICL. We find that LMs' ICL ability depends on $\textit{parallel structures}$ in the pre-training data -- pairs of phrases following similar templates in the same context window. Specifically, we detect parallel structures by checking whether training on one phrase improves prediction of the other, and conduct ablation experiments to study their effect on ICL. We show that removing parallel structures in the pre-training data reduces LMs' ICL accuracy by 51% (vs 2% from random ablation). This drop persists even when excluding common patterns such as n-gram repetitions and long-range dependency, showing the diversity and generality of parallel structures. A closer look at the detected parallel structures indicates that they cover diverse linguistic tasks and span long distances in the data.
|
[
"['Yanda Chen' 'Chen Zhao' 'Zhou Yu' 'Kathleen McKeown' 'He He']"
] |
null | null |
2402.12531
| null | null |
http://arxiv.org/pdf/2402.12531v2
|
2024-02-22T23:49:53Z
|
2024-02-19T20:41:03Z
|
Improving Deep Generative Models on Many-To-One Image-to-Image
Translation
|
Deep generative models have been applied to multiple applications in image-to-image translation. Generative Adversarial Networks and Diffusion Models have presented impressive results, setting new state-of-the-art results on these tasks. Most methods have symmetric setups across the different domains in a dataset. These methods assume that all domains have either multiple modalities or only one modality. However, there are many datasets that have a many-to-one relationship between two domains. In this work, we first introduce a Colorized MNIST dataset and a Color-Recall score that can provide a simple benchmark for evaluating models on many-to-one translation. We then introduce a new asymmetric framework to improve existing deep generative models on many-to-one image-to-image translation. We apply this framework to StarGAN V2 and show that in both unsupervised and semi-supervised settings, the performance of this new model improves on many-to-one image-to-image translation.
|
[
"['Sagar Saxena' 'Mohammad Nayeem Teli']"
] |
null | null |
2402.12535
| null | null |
http://arxiv.org/pdf/2402.12535v2
|
2024-06-05T16:57:00Z
|
2024-02-19T20:48:09Z
|
Locality-Sensitive Hashing-Based Efficient Point Transformer with
Applications in High-Energy Physics
|
This study introduces a novel transformer model optimized for large-scale point cloud processing in scientific domains such as high-energy physics (HEP) and astrophysics. Addressing the limitations of graph neural networks and standard transformers, our model integrates local inductive bias and achieves near-linear complexity with hardware-friendly regular operations. One contribution of this work is the quantitative analysis of the error-complexity tradeoff of various sparsification techniques for building efficient transformers. Our findings highlight the superiority of using locality-sensitive hashing (LSH), especially OR & AND-construction LSH, in kernel approximation for large-scale point cloud data with local inductive bias. Based on this finding, we propose LSH-based Efficient Point Transformer (HEPT), which combines E$^2$LSH with OR & AND constructions and is built upon regular computations. HEPT demonstrates remarkable performance on two critical yet time-consuming HEP tasks, significantly outperforming existing GNNs and transformers in accuracy and computational speed, marking a significant advancement in geometric deep learning and large-scale scientific data processing. Our code is available at https://github.com/Graph-COM/HEPT.
|
[
"['Siqi Miao' 'Zhiyuan Lu' 'Mia Liu' 'Javier Duarte' 'Pan Li']"
] |
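For readers unfamiliar with the OR & AND constructions mentioned above, the sketch below shows plain E$^2$LSH with k hashes concatenated per table (AND) and L independent tables (OR). The bucket width, k, and L are illustrative values; this is background for the kernel-approximation idea, not the HEPT attention mechanism itself.

```python
import numpy as np

rng = np.random.default_rng(0)

class E2LSH:
    """E2LSH with AND (k hashes per table) and OR (L tables) constructions.
    Bucket width r, k, and L are illustrative choices."""
    def __init__(self, dim, k=4, L=8, r=1.0, seed=0):
        g = np.random.default_rng(seed)
        self.a = g.standard_normal((L, k, dim))
        self.b = g.uniform(0, r, (L, k))
        self.r = r

    def hash(self, x):
        # AND-construction: one bucket key per table is the tuple of k quantized projections.
        keys = np.floor((self.a @ x + self.b) / self.r).astype(int)   # shape (L, k)
        return [tuple(row) for row in keys]

    def collide(self, x, y):
        # OR-construction: points are considered near if they share a bucket in any table.
        return any(kx == ky for kx, ky in zip(self.hash(x), self.hash(y)))

lsh = E2LSH(dim=16)
p = rng.standard_normal(16)
print(lsh.collide(p, p + 0.05 * rng.standard_normal(16)))   # near point: likely True
print(lsh.collide(p, rng.standard_normal(16)))              # far point: likely False
```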
null | null |
2402.12537
| null | null |
http://arxiv.org/pdf/2402.12537v2
|
2024-02-25T20:32:50Z
|
2024-02-19T20:53:27Z
|
Hierarchical Bayes Approach to Personalized Federated Unsupervised
Learning
|
Statistical heterogeneity of clients' local data is an important characteristic in federated learning, motivating personalized algorithms tailored to the local data statistics. Though there has been a plethora of algorithms proposed for personalized supervised learning, discovering the structure of local data through personalized unsupervised learning is less explored. We initiate a systematic study of such personalized unsupervised learning by developing algorithms based on optimization criteria inspired by a hierarchical Bayesian statistical framework. We develop adaptive algorithms that discover the balance between using limited local data and collaborative information. We do this in the context of two unsupervised learning tasks: personalized dimensionality reduction and personalized diffusion models. We develop convergence analyses for our adaptive algorithms which illustrate the dependence on problem parameters (e.g., heterogeneity, local sample size). We also develop a theoretical framework for personalized diffusion models, which shows the benefits of collaboration even under heterogeneity. We finally evaluate our proposed algorithms using synthetic and real data, demonstrating the effective sample amplification for personalized tasks, induced through collaboration, despite data heterogeneity.
|
[
"['Kaan Ozkara' 'Bruce Huang' 'Ruida Zhou' 'Suhas Diggavi']"
] |
null | null |
2402.12538
| null | null |
http://arxiv.org/pdf/2402.12538v1
|
2024-02-19T20:55:12Z
|
2024-02-19T20:55:12Z
|
A Machine Learning Ensemble Model for the Detection of Cyberbullying
|
The pervasive use of social media platforms, such as Facebook, Instagram, and X, has significantly amplified our electronic interconnectedness. Moreover, these platforms are now easily accessible from any location at any given time. However, the increased popularity of social media has also led to cyberbullying. It is imperative to address the need for finding, monitoring, and mitigating cyberbullying posts on social media platforms. Motivated by this necessity, this paper contributes to developing an automated system for detecting binary labels of aggressive tweets. Our study demonstrates strong performance compared to previous experiments on the same dataset. We employed the stacking ensemble machine learning method, utilizing four different feature extraction techniques to optimize performance within the stacking ensemble learning framework. Combining five machine learning algorithms, Decision Trees, Random Forest, Linear Support Vector Classification, Logistic Regression, and K-Nearest Neighbors, into an ensemble method, we achieved superior results compared to traditional machine learning classifier models. The stacking classifier achieved a high accuracy rate of 94.00%, outperforming traditional machine learning models and surpassing the results of prior experiments that utilized the same dataset. Our experiments thus achieved an accuracy of 0.94 in classifying tweets as aggressive or non-aggressive.
|
[
"['Abulkarim Faraj Alqahtani' 'Mohammad Ilyas']"
] |
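A minimal sketch of the stacking setup described above, using scikit-learn with the five named base learners and a logistic-regression meta-learner. The toy text corpus, the TF-IDF features, and the choice of final estimator are assumptions for illustration, not the paper's exact feature-extraction pipeline or dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import LinearSVC
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

# Toy stand-in corpus; the real study uses a labeled aggressive/non-aggressive tweet dataset.
texts = ["you are awesome", "I hate you so much", "great job today",
         "nobody likes you loser", "see you at the game", "shut up idiot"] * 50
labels = np.array([0, 1, 0, 1, 0, 1] * 50)
X_train, X_test, y_train, y_test = train_test_split(texts, labels, test_size=0.3, random_state=0)

# Five base learners named in the abstract, stacked under a logistic-regression meta-learner.
stack = make_pipeline(
    TfidfVectorizer(),
    StackingClassifier(
        estimators=[
            ("dt", DecisionTreeClassifier(random_state=0)),
            ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
            ("svc", LinearSVC()),
            ("lr", LogisticRegression(max_iter=1000)),
            ("knn", KNeighborsClassifier()),
        ],
        final_estimator=LogisticRegression(max_iter=1000),
    ),
)
stack.fit(X_train, y_train)
print("accuracy:", stack.score(X_test, y_test))
```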
null | null |
2402.12539
| null | null |
http://arxiv.org/pdf/2402.12539v1
|
2024-02-19T21:01:11Z
|
2024-02-19T21:01:11Z
|
Impact of data usage for forecasting on performance of model predictive
control in buildings with smart energy storage
|
Data is required to develop forecasting models for use in Model Predictive Control (MPC) schemes in building energy systems. However, data usage incurs costs from both its collection and exploitation. Determining cost optimal data usage requires understanding of the forecast accuracy and resulting MPC operational performance it enables. This study investigates the performance of both simple and state-of-the-art machine learning prediction models for MPC in a multi-building energy system simulation using historic building energy data. The impact of data usage on forecast accuracy is quantified for the following data efficiency measures: reuse of prediction models, reduction of training data volumes, reduction of model data features, and online model training. A simple linear multi-layer perceptron model is shown to provide equivalent forecast accuracy to state-of-the-art models, with greater data efficiency and generalisability. The use of more than 2 years of training data for load prediction models provided no significant improvement in forecast accuracy. Forecast accuracy and data efficiency were improved simultaneously by using change-point analysis to screen training data. Reused models and those trained with 3 months of data had on average 10% higher error than baseline, indicating that deploying MPC systems without prior data collection may be economic.
|
[
"['Max Langtry' 'Vijja Wichitwechkarn' 'Rebecca Ward' 'Chaoqun Zhuang'\n 'Monika J. Kreitmair' 'Nikolas Makasis' 'Zack Xuereb Conti'\n 'Ruchi Choudhary']"
] |
null | null |
2402.12550
| null | null |
http://arxiv.org/pdf/2402.12550v2
|
2024-05-31T14:04:05Z
|
2024-02-19T21:20:22Z
|
Multilinear Mixture of Experts: Scalable Expert Specialization through
Factorization
|
The Mixture of Experts (MoE) paradigm provides a powerful way to decompose dense layers into smaller, modular computations often more amenable to human interpretation, debugging, and editability. However, a major challenge lies in the computational cost of scaling the number of experts high enough to achieve fine-grained specialization. In this paper, we propose the Multilinear Mixture of Experts ($\mu$MoE) layer to address this, focusing on vision models. $\mu$MoE layers enable scalable expert specialization by performing an implicit computation on prohibitively large weight tensors entirely in factorized form. Consequently, $\mu$MoEs (1) avoid the restrictively high inference-time costs of 'soft' MoEs, yet (2) do not inherit the training issues of the popular 'sparse' MoEs' discrete (non-differentiable) expert routing. We present both qualitative and quantitative evidence that scaling $\mu$MoE layers when fine-tuning foundation models for vision tasks leads to more specialized experts at the class-level, further enabling manual bias correction in CelebA attribute classification. Finally, we show qualitative results demonstrating the expert specialism achieved when pre-training large GPT2 and MLP-Mixer models with parameter-matched $\mu$MoE blocks at every layer, maintaining comparable accuracy. Our code is available at: https://github.com/james-oldfield/muMoE.
|
[
"['James Oldfield' 'Markos Georgopoulos' 'Grigorios G. Chrysos'\n 'Christos Tzelepis' 'Yannis Panagakis' 'Mihalis A. Nicolaou'\n 'Jiankang Deng' 'Ioannis Patras']"
] |
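One plausible reading of "computation on prohibitively large weight tensors entirely in factorized form" is a CP-factorized expert weight tensor that is contracted with the input and the gating vector without ever being materialized. The sketch below implements that reading for a single linear layer; the factorization, rank, and initialization are assumptions, not the paper's exact $\mu$MoE parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)

class FactorizedMoE:
    """Linear mixture-of-experts layer whose (experts x out x in) weight tensor is kept
    in CP-factorized form, so the full tensor is never materialized. This is one plausible
    reading of a multilinear MoE layer, not the paper's exact parameterization."""
    def __init__(self, n_experts, d_in, d_out, rank):
        s = 1.0 / np.sqrt(rank * d_in)
        self.A = rng.standard_normal((rank, n_experts)) * s   # expert factors
        self.B = rng.standard_normal((rank, d_out)) * s       # output factors
        self.C = rng.standard_normal((rank, d_in)) * s        # input factors

    def __call__(self, x, gate):
        # y = sum_e gate_e * W_e @ x, with W_e = sum_r A[r, e] * outer(B[r], C[r]);
        # contracting factor by factor avoids ever forming the (E, out, in) tensor.
        ga = self.A @ gate            # (rank,)
        cx = self.C @ x               # (rank,)
        return self.B.T @ (ga * cx)   # (d_out,)

layer = FactorizedMoE(n_experts=64, d_in=32, d_out=16, rank=8)
x = rng.standard_normal(32)
gate = np.exp(rng.standard_normal(64)); gate /= gate.sum()    # soft expert weights
print(layer(x, gate).shape)           # (16,)
```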
null | null |
2402.12558
| null | null |
http://arxiv.org/abs/2402.12558v1
|
2024-02-19T21:32:56Z
|
2024-02-19T21:32:56Z
|
Evaluation of Country Dietary Habits Using Machine Learning Techniques
in Relation to Deaths from COVID-19
|
COVID-19 disease has affected almost every country in the world. The large number of infected people and the different mortality rates between countries have given rise to many hypotheses about the key points that make the virus so lethal in some places. In this study, the eating habits of 170 countries were evaluated in order to find correlations between these habits and mortality rates caused by COVID-19 using machine learning techniques that group the countries together according to the different distribution of fat, energy, and protein across 23 different types of food, as well as the amount ingested in kilograms. Results show how obesity and the high consumption of fats appear in countries with the highest death rates, whereas countries with a lower rate have a higher level of cereal consumption accompanied by a lower total average intake of kilocalories.
|
[
"['María Teresa García-Ordás' 'Natalia Arias' 'Carmen Benavides'\n 'Oscar García-Olalla' 'José Alberto Benítez-Andrades']"
] |
null | null |
2402.12562
| null | null |
http://arxiv.org/pdf/2402.12562v1
|
2024-02-19T21:36:54Z
|
2024-02-19T21:36:54Z
|
Dynamic Pricing and Learning with Long-term Reference Effects
|
We consider a dynamic pricing problem where customer response to the current price is impacted by the customer price expectation, aka reference price. We study a simple and novel reference price mechanism where reference price is the average of the past prices offered by the seller. As opposed to the more commonly studied exponential smoothing mechanism, in our reference price mechanism the prices offered by the seller have a longer term effect on the future customer expectations. We show that under this mechanism, a markdown policy is near-optimal irrespective of the parameters of the model. This matches the common intuition that a seller may be better off by starting with a higher price and then decreasing it, as the customers feel like they are getting bargains on items that are ordinarily more expensive. For linear demand models, we also provide a detailed characterization of the near-optimal markdown policy along with an efficient way of computing it. We then consider a more challenging dynamic pricing and learning problem, where the demand model parameters are a priori unknown, and the seller needs to learn them online from the customers' responses to the offered prices while simultaneously optimizing revenue. The objective is to minimize regret, i.e., the $T$-round revenue loss compared to a clairvoyant optimal policy. This task essentially amounts to learning a non-stationary optimal policy in a time-variant Markov Decision Process (MDP). For linear demand models, we provide an efficient learning algorithm with an optimal $\tilde{O}(\sqrt{T})$ regret upper bound.
|
[
"['Shipra Agrawal' 'Wei Tang']"
] |
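The reference-price mechanism above (reference price equal to the average of all past prices, combined with a linear demand model) is easy to simulate. The sketch below compares the revenue of a markdown path against constant and markup paths under assumed demand parameters; the numbers are purely illustrative and do not reproduce the paper's analysis.

```python
import numpy as np

# Linear demand with a reference-price effect (illustrative parameters):
# d_t = a - b * p_t + c * (r_t - p_t), where r_t is the average of all past prices.
a, b, c = 10.0, 1.0, 0.5

def revenue(prices):
    past, total = [], 0.0
    for p in prices:
        r = np.mean(past) if past else p         # reference price = average of past prices
        demand = max(a - b * p + c * (r - p), 0.0)
        total += p * demand
        past.append(p)
    return total

T = 20
markdown = np.linspace(6.0, 4.0, T)              # start high, then decrease
constant = np.full(T, 5.0)
markup = np.linspace(4.0, 6.0, T)
for name, plan in [("markdown", markdown), ("constant", constant), ("markup", markup)]:
    print(name, round(revenue(plan), 2))         # markdown should come out ahead here
```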
null | null |
2402.12566
| null | null |
http://arxiv.org/pdf/2402.12566v2
|
2024-03-16T21:14:16Z
|
2024-02-19T21:45:55Z
|
GenAudit: Fixing Factual Errors in Language Model Outputs with Evidence
|
LLMs can generate factually incorrect statements even when provided access to reference documents. Such errors can be dangerous in high-stakes applications (e.g., document-grounded QA for healthcare or finance). We present GenAudit -- a tool intended to assist fact-checking LLM responses for document-grounded tasks. GenAudit suggests edits to the LLM response by revising or removing claims that are not supported by the reference document, and also presents evidence from the reference for facts that do appear to have support. We train models to execute these tasks, and design an interactive interface to present suggested edits and evidence to users. Comprehensive evaluation by human raters shows that GenAudit can detect errors in 8 different LLM outputs when summarizing documents from diverse domains. To ensure that most errors are flagged by the system, we propose a method that can increase the error recall while minimizing impact on precision. We release our tool (GenAudit) and fact-checking model for public use.
|
[
"['Kundan Krishna' 'Sanjana Ramprasad' 'Prakhar Gupta' 'Byron C. Wallace'\n 'Zachary C. Lipton' 'Jeffrey P. Bigham']"
] |
null | null |
2402.12570
| null | null |
http://arxiv.org/pdf/2402.12570v1
|
2024-02-19T21:52:44Z
|
2024-02-19T21:52:44Z
|
Offline Multi-task Transfer RL with Representational Penalization
|
We study the problem of representation transfer in offline Reinforcement Learning (RL), where a learner has access to episodic data from a number of source tasks collected a priori, and aims to learn a shared representation to be used in finding a good policy for a target task. Unlike in online RL where the agent interacts with the environment while learning a policy, in the offline setting there cannot be such interactions in either the source tasks or the target task; thus multi-task offline RL can suffer from incomplete coverage. We propose an algorithm to compute pointwise uncertainty measures for the learnt representation, and establish a data-dependent upper bound for the suboptimality of the learnt policy for the target task. Our algorithm leverages the collective exploration done by source tasks to mitigate poor coverage at some points by a few tasks, thus overcoming the limitation of needing uniformly good coverage for a meaningful transfer by existing offline algorithms. We complement our theoretical results with empirical evaluation on a rich-observation MDP which requires many samples for complete coverage. Our findings illustrate the benefits of penalizing and quantifying the uncertainty in the learnt representation.
|
[
"['Avinandan Bose' 'Simon Shaolei Du' 'Maryam Fazel']"
] |
null | null |
2402.12572
| null | null |
http://arxiv.org/pdf/2402.12572v1
|
2024-02-19T21:53:43Z
|
2024-02-19T21:53:43Z
|
FairProof : Confidential and Certifiable Fairness for Neural Networks
|
Machine learning models are increasingly used in societal applications, yet legal and privacy concerns demand that they very often be kept confidential. Consequently, there is a growing distrust about the fairness properties of these models in the minds of consumers, who are often at the receiving end of model predictions. To this end, we propose FairProof - a system that uses Zero-Knowledge Proofs (a cryptographic primitive) to publicly verify the fairness of a model, while maintaining confidentiality. We also propose a fairness certification algorithm for fully-connected neural networks which is befitting to ZKPs and is used in this system. We implement FairProof in Gnark and demonstrate empirically that our system is practically feasible.
|
[
"['Chhavi Yadav' 'Amrita Roy Chowdhury' 'Dan Boneh' 'Kamalika Chaudhuri']"
] |
null | null |
2402.12595
| null | null |
http://arxiv.org/pdf/2402.12595v1
|
2024-02-19T23:19:15Z
|
2024-02-19T23:19:15Z
|
Truncated Polynomial Expansion-Based Detection in Massive MIMO: A
Model-Driven Deep Learning Approach
|
In this paper, we propose a deep learning (DL)-based approach for efficiently computing the inverse of Hermitian matrices using truncated polynomial expansion (TPE). Our model-driven approach involves optimizing the coefficients of the TPE during an offline training procedure for a given number of TPE terms. We apply this method to signal detection in uplink massive multiple-input multiple-output (MIMO) systems, where the matrix inverse operation required by linear detectors, such as zero-forcing (ZF) and minimum mean square error (MMSE), is approximated using TPE. Our simulation results demonstrate that the proposed learned TPE-based method outperforms the conventional TPE method with optimal coefficients in terms of asymptotic convergence speed and reduces the computational complexity of the online detection stage, albeit at the expense of the offline training stage. However, the limited number of trainable parameters leads to a swift offline training process.
|
[
"['Kazem Izadinasab' 'Ahmed Wagdy Shaban' 'Oussama Damen']"
] |
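The core approximation above, replacing a Hermitian matrix inverse by a truncated polynomial expansion with tuned coefficients, can be sketched as follows. The paper learns the coefficients with a model-driven deep learning procedure; here a direct least-squares fit on a single Gram matrix is used as a simple stand-in, with the matrix size and expansion order chosen arbitrarily.

```python
import numpy as np

rng = np.random.default_rng(0)

def tpe_matrix(A, coeffs):
    """Truncated polynomial expansion sum_k c_k A^k as an approximation of A^{-1}."""
    out = np.zeros_like(A)
    Ak = np.eye(A.shape[0], dtype=A.dtype)
    for c in coeffs:
        out = out + c * Ak
        Ak = Ak @ A
    return out

def fit_tpe_coeffs(A, K):
    """Fit the K+1 coefficients so that TPE(A) @ A is close to I in least squares.
    Stand-in for the offline, learned-coefficient training stage in the paper."""
    n = A.shape[0]
    basis, Ak = [], A.copy()                  # powers A^1 .. A^(K+1), since TPE(A) @ A shifts by one
    for _ in range(K + 1):
        basis.append(Ak.reshape(-1))
        Ak = Ak @ A
    M = np.stack(basis, axis=1)               # (n*n, K+1)
    target = np.eye(n, dtype=A.dtype).reshape(-1)
    c, *_ = np.linalg.lstsq(M, target, rcond=None)
    return c.real                             # coefficients are real for Hermitian A

# Hermitian Gram matrix of a random 32x8 channel, as appears in uplink MMSE detection.
H = (rng.standard_normal((32, 8)) + 1j * rng.standard_normal((32, 8))) / np.sqrt(2)
A = H.conj().T @ H + 0.1 * np.eye(8)

c = fit_tpe_coeffs(A, K=4)
print("||TPE(A) A - I||_F =", np.linalg.norm(tpe_matrix(A, c) @ A - np.eye(8)))
```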
null | null |
2402.12598
| null | null |
http://arxiv.org/pdf/2402.12598v1
|
2024-02-19T23:22:30Z
|
2024-02-19T23:22:30Z
|
Graph-based Virtual Sensing from Sparse and Partial Multivariate
Observations
|
Virtual sensing techniques allow for inferring signals at new unmonitored locations by exploiting spatio-temporal measurements coming from physical sensors at different locations. However, as the sensor coverage becomes sparse due to costs or other constraints, physical proximity cannot be used to support interpolation. In this paper, we overcome this challenge by leveraging dependencies between the target variable and a set of correlated variables (covariates) that can frequently be associated with each location of interest. From this viewpoint, covariates provide partial observability, and the problem consists of inferring values for unobserved channels by exploiting observations at other locations to learn how such variables can correlate. We introduce a novel graph-based methodology to exploit such relationships and design a graph deep learning architecture, named GgNet, implementing the framework. The proposed approach relies on propagating information over a nested graph structure that is used to learn dependencies between variables as well as locations. GgNet is extensively evaluated under different virtual sensing scenarios, demonstrating higher reconstruction accuracy compared to the state-of-the-art.
|
[
"['Giovanni De Felice' 'Andrea Cini' 'Daniele Zambon' 'Vladimir V. Gusev'\n 'Cesare Alippi']"
] |
null | null |
2402.12613
| null | null |
http://arxiv.org/pdf/2402.12613v1
|
2024-02-20T00:34:58Z
|
2024-02-20T00:34:58Z
|
Analysis of Using Sigmoid Loss for Contrastive Learning
|
Contrastive learning has emerged as a prominent branch of self-supervised learning for several years. Especially, CLIP, which applies contrastive learning to large sets of captioned images, has garnered significant attention. Recently, SigLIP, a variant of CLIP, has been proposed, which uses the sigmoid loss instead of the standard InfoNCE loss. SigLIP achieves the performance comparable to CLIP in a more efficient manner by eliminating the need for a global view. However, theoretical understanding of using the sigmoid loss in contrastive learning is underexplored. In this paper, we provide a theoretical analysis of using the sigmoid loss in contrastive learning, in the perspective of the geometric structure of learned embeddings. First, we propose the double-Constant Embedding Model (CCEM), a framework for parameterizing various well-known embedding structures by a single variable. Interestingly, the proposed CCEM is proven to contain the optimal embedding with respect to the sigmoid loss. Second, we mathematically analyze the optimal embedding minimizing the sigmoid loss for contrastive learning. The optimal embedding ranges from simplex equiangular-tight-frame to antipodal structure, depending on the temperature parameter used in the sigmoid loss. Third, our experimental results on synthetic datasets coincide with the theoretical results on the optimal embedding structures.
|
[
"['Chungpa Lee' 'Joonhwan Chang' 'Jy-yong Sohn']"
] |
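For reference, the sigmoid loss discussed above (as used by SigLIP) treats every pair in the batch as an independent binary classification: matched pairs get label +1 and all other pairs get -1. A minimal NumPy version is below; the temperature and bias values are illustrative, and the embeddings are synthetic.

```python
import numpy as np

def sigmoid_contrastive_loss(U, V, t=10.0, b=-10.0):
    """SigLIP-style sigmoid loss over all pairs in a batch.
    U, V: L2-normalized embeddings of the two views/modalities, shape (n, d).
    Temperature t and bias b are illustrative values."""
    logits = t * U @ V.T + b                   # pairwise similarities
    labels = 2 * np.eye(len(U)) - 1            # +1 on matched pairs, -1 elsewhere
    return np.mean(np.log1p(np.exp(-labels * logits)))

rng = np.random.default_rng(0)
n, d = 8, 32
U = rng.standard_normal((n, d)); U /= np.linalg.norm(U, axis=1, keepdims=True)
V = U + 0.1 * rng.standard_normal((n, d)); V /= np.linalg.norm(V, axis=1, keepdims=True)
W = rng.standard_normal((n, d)); W /= np.linalg.norm(W, axis=1, keepdims=True)

print(sigmoid_contrastive_loss(U, V))          # aligned pairs -> small loss
print(sigmoid_contrastive_loss(U, W))          # random pairs -> larger loss
```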
null | null |
2402.12616
| null | null |
http://arxiv.org/abs/2402.12616v1
|
2024-02-20T00:50:26Z
|
2024-02-20T00:50:26Z
|
Multi-objective Binary Coordinate Search for Feature Selection
|
A supervised feature selection method selects an appropriate but concise set of features to differentiate classes, which is highly expensive for large-scale datasets. Therefore, feature selection should aim at both minimizing the number of selected features and maximizing the accuracy of classification, or any other task. However, this crucial task is computationally highly demanding on many real-world datasets and requires a very efficient algorithm to reach a set of optimal features with a limited number of fitness evaluations. For this purpose, we have proposed the binary multi-objective coordinate search (MOCS) algorithm to solve large-scale feature selection problems. To the best of our knowledge, the proposed algorithm in this paper is the first multi-objective coordinate search algorithm. In this method, we generate new individuals by flipping a variable of the candidate solutions on the Pareto front. This enables us to investigate the effectiveness of each feature in the corresponding subset. In fact, this strategy can play the role of crossover and mutation operators to generate distinct subsets of features. The reported results indicate the significant superiority of our method over NSGA-II, on five real-world large-scale datasets, particularly when the computing budget is limited. Moreover, this simple hyper-parameter-free algorithm can solve feature selection much faster and more efficiently than NSGA-II.
|
[
"['Sevil Zanjani Miyandoab' 'Shahryar Rahnamayan' 'Azam Asilian Bidgoli']"
] |
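A hedged sketch of the core move described above: new candidate feature subsets are generated by flipping a single coordinate of solutions currently on the Pareto front, with a non-dominated archive standing in for the population. The dataset, classifier, evaluation budget, and archive bookkeeping are simplifications for illustration, not the full MOCS algorithm.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)
d = X.shape[1]

def objectives(mask):
    if mask.sum() == 0:
        return (1.0, d)                        # empty subset: worst case
    acc = cross_val_score(KNeighborsClassifier(), X[:, mask], y, cv=3).mean()
    return (1.0 - acc, int(mask.sum()))        # minimize both: error and #features

def dominates(a, b):
    return all(u <= v for u, v in zip(a, b)) and any(u < v for u, v in zip(a, b))

archive = []                                   # non-dominated (mask, objectives) pairs
def add(mask):
    f = objectives(mask)
    if any(dominates(g, f) for _, g in archive):
        return
    archive[:] = [(m, g) for m, g in archive if not dominates(f, g)] + [(mask, f)]

for _ in range(5):                             # seed with random subsets
    add(rng.random(d) < 0.3)

for _ in range(100):                           # coordinate move: flip one bit of a front member
    mask, _ = archive[rng.integers(len(archive))]
    child = mask.copy()
    child[rng.integers(d)] ^= True
    add(child)

for mask, (err, k) in sorted(archive, key=lambda t: t[1][1]):
    print(f"{k:2d} features, error {err:.3f}")
```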
null | null |
2402.12617
| null | null |
http://arxiv.org/pdf/2402.12617v1
|
2024-02-20T00:51:05Z
|
2024-02-20T00:51:05Z
|
Generative AI Security: Challenges and Countermeasures
|
Generative AI's expanding footprint across numerous industries has led to both excitement and increased scrutiny. This paper delves into the unique security challenges posed by Generative AI, and outlines potential research directions for managing these risks.
|
[
"['Banghua Zhu' 'Norman Mu' 'Jiantao Jiao' 'David Wagner']"
] |
null | null |
2402.12621
| null | null |
http://arxiv.org/pdf/2402.12621v2
|
2024-06-06T17:04:41Z
|
2024-02-20T01:04:21Z
|
Reflect-RL: Two-Player Online RL Fine-Tuning for LMs
|
As language models (LMs) demonstrate their capabilities in various fields, their application to tasks requiring multi-round interactions has become increasingly popular. These tasks usually have complex dynamics, so supervised fine-tuning (SFT) on a limited offline dataset does not yield good performance. However, only a few works attempted to directly train the LMs within interactive decision-making environments. We aim to create an effective approach to fine-tune LMs with online reinforcement learning (RL) in these environments. We propose Reflect-RL, a two-player system to fine-tune an LM using SFT and online RL, where a frozen reflection model (player) assists the policy model (player). To generate data for the warm-up SFT stage, we use negative example generation to enhance the error-correction ability of the reflection model. Furthermore, we designed single-prompt action enumeration and applied curriculum learning to allow the policy model to learn more efficiently. Empirically, we verify that Reflect-RL outperforms SFT and online RL without reflection. Testing results indicate GPT-2 XL 1.56B fine-tuned with Reflect-RL outperforms larger open-source LMs, such as Mistral 7B. The benchmarks, dataset, and code involved in this work are publicly available: https://github.com/zhourunlong/Reflect-RL.
|
[
"['Runlong Zhou' 'Simon S. Du' 'Beibin Li']"
] |
null | null |
2402.12625
| null | null |
http://arxiv.org/abs/2402.12625v1
|
2024-02-20T01:10:12Z
|
2024-02-20T01:10:12Z
|
Compact NSGA-II for Multi-objective Feature Selection
|
Feature selection is an expensive challenging task in machine learning and data mining aimed at removing irrelevant and redundant features. This contributes to an improvement in classification accuracy, as well as the budget and memory requirements for classification, or any other post-processing task conducted after feature selection. In this regard, we define feature selection as a multi-objective binary optimization task with the objectives of maximizing classification accuracy and minimizing the number of selected features. In order to select optimal features, we have proposed a binary Compact NSGA-II (CNSGA-II) algorithm. Compactness represents the population as a probability distribution to enhance evolutionary algorithms not only to be more memory-efficient but also to reduce the number of fitness evaluations. Instead of holding two populations during the optimization process, our proposed method uses several Probability Vectors (PVs) to generate new individuals. Each PV efficiently explores a region of the search space to find non-dominated solutions instead of generating candidate solutions from a small population as is the common approach in most evolutionary algorithms. To the best of our knowledge, this is the first compact multi-objective algorithm proposed for feature selection. The reported results for expensive optimization cases with a limited budget on five datasets show that the CNSGA-II performs more efficiently than the well-known NSGA-II method in terms of the hypervolume (HV) performance metric requiring less memory. The proposed method and experimental results are explained and analyzed in detail.
|
[
"['Sevil Zanjani Miyandoab' 'Shahryar Rahnamayan' 'Azam Asilian Bidgoli']"
] |
null | null |
2402.12626
| null | null |
http://arxiv.org/pdf/2402.12626v1
|
2024-02-20T01:12:59Z
|
2024-02-20T01:12:59Z
|
Indiscriminate Data Poisoning Attacks on Pre-trained Feature Extractors
|
Machine learning models have achieved great success in supervised learning tasks for end-to-end training, which requires a large amount of labeled data that is not always feasible. Recently, many practitioners have shifted to self-supervised learning methods that utilize cheap unlabeled data to learn a general feature extractor via pre-training, which can be further applied to personalized downstream tasks by simply training an additional linear layer with limited labeled data. However, such a process may also raise concerns regarding data poisoning attacks. For instance, indiscriminate data poisoning attacks, which aim to decrease model utility by injecting a small number of poisoned data into the training set, pose a security risk to machine learning models, but have only been studied for end-to-end supervised learning. In this paper, we extend the exploration of the threat of indiscriminate attacks on downstream tasks that apply pre-trained feature extractors. Specifically, we propose two types of attacks: (1) the input space attacks, where we modify existing attacks to directly craft poisoned data in the input space. However, due to the difficulty of optimization under constraints, we further propose (2) the feature targeted attacks, where we mitigate the challenge with three stages, firstly acquiring target parameters for the linear head; secondly finding poisoned features by treating the learned feature representations as a dataset; and thirdly inverting the poisoned features back to the input space. Our experiments examine such attacks in popular downstream tasks of fine-tuning on the same dataset and transfer learning that considers domain adaptation. Empirical results reveal that transfer learning is more vulnerable to our attacks. Additionally, input space attacks are a strong threat if no countermeasures are posed, but are otherwise weaker than feature targeted attacks.
|
[
"['Yiwei Lu' 'Matthew Y. R. Yang' 'Gautam Kamath' 'Yaoliang Yu']"
] |
null | null |
2402.12627
| null | null |
http://arxiv.org/pdf/2402.12627v1
|
2024-02-20T01:16:01Z
|
2024-02-20T01:16:01Z
|
A Comprehensive Review of Machine Learning Advances on Data Change: A
Cross-Field Perspective
|
Recent artificial intelligence (AI) technologies show remarkable evolution in various academic fields and industries. However, in the real world, dynamic data lead to principal challenges for deploying AI models. An unexpected data change brings about severe performance degradation in AI models. We identify two major related research fields, domain shift and concept drift according to the setting of the data change. Although these two popular research fields aim to solve distribution shift and non-stationary data stream problems, the underlying properties remain similar which also encourages similar technical approaches. In this review, we regroup domain shift and concept drift into a single research problem, namely the data change problem, with a systematic overview of state-of-the-art methods in the two research fields. We propose a three-phase problem categorization scheme to link the key ideas in the two technical fields. We thus provide a novel scope for researchers to explore contemporary technical strategies, learn industrial applications, and identify future directions for addressing data change challenges.
|
[
"['Jeng-Lin Li' 'Chih-Fan Hsu' 'Ming-Ching Chang' 'Wei-Chao Chen']"
] |
null | null |
2402.12630
| null | null |
http://arxiv.org/pdf/2402.12630v1
|
2024-02-20T01:22:04Z
|
2024-02-20T01:22:04Z
|
FAST: An Optimization Framework for Fast Additive Segmentation in
Transparent ML
|
We present FAST, an optimization framework for fast additive segmentation. FAST segments piecewise constant shape functions for each feature in a dataset to produce transparent additive models. The framework leverages a novel optimization procedure to fit these models $\sim$2 orders of magnitude faster than existing state-of-the-art methods, such as explainable boosting machines \citep{nori2019interpretml}. We also develop new feature selection algorithms in the FAST framework to fit parsimonious models that perform well. Through experiments and case studies, we show that FAST improves the computational efficiency and interpretability of additive models.
|
[
"['Brian Liu' 'Rahul Mazumder']"
] |
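The model class above, one piecewise-constant shape function per feature summed into a transparent additive model, can be illustrated with a plain backfitting fit. This sketch is not the FAST optimization procedure (which is the paper's contribution); the bin count, number of passes, and synthetic data are arbitrary choices.

```python
import numpy as np

def fit_piecewise_constant_additive(X, y, n_bins=16, n_passes=5):
    """Cyclic (backfitting-style) fit of per-feature piecewise-constant shape functions."""
    n, d = X.shape
    edges = [np.quantile(X[:, j], np.linspace(0, 1, n_bins + 1)[1:-1]) for j in range(d)]
    bins = [np.searchsorted(edges[j], X[:, j]) for j in range(d)]
    shape = np.zeros((d, n_bins))
    intercept = y.mean()
    pred = np.full(n, intercept)
    for _ in range(n_passes):
        for j in range(d):
            resid = y - pred + shape[j][bins[j]]       # remove feature j's current contribution
            new = np.array([resid[bins[j] == b].mean() if np.any(bins[j] == b) else 0.0
                            for b in range(n_bins)])
            pred += new[bins[j]] - shape[j][bins[j]]
            shape[j] = new
    return intercept, edges, shape

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(1000, 3))
y = np.sin(X[:, 0]) + (X[:, 1] > 0) + 0.1 * rng.standard_normal(1000)
intercept, edges, shape = fit_piecewise_constant_additive(X, y)
print(shape.shape)      # (3, 16) -- one piecewise-constant shape function per feature
```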
null | null |
2402.12646
| null | null |
http://arxiv.org/abs/2402.12646v1
|
2024-02-20T01:47:25Z
|
2024-02-20T01:47:25Z
|
Training Artificial Neural Networks by Coordinate Search Algorithm
|
Training Artificial Neural Networks poses a challenging and critical problem in machine learning. Despite the effectiveness of gradient-based learning methods, such as Stochastic Gradient Descent (SGD), in training neural networks, they do have several limitations. For instance, they require differentiable activation functions, and cannot optimize a model based on several independent non-differentiable loss functions simultaneously; for example, the F1-score, which is used during testing, can also be used during training when a gradient-free optimization algorithm is utilized. Furthermore, training a DNN can still be possible with only a small training dataset. To address these concerns, we propose an efficient version of the gradient-free Coordinate Search (CS) algorithm, an instance of General Pattern Search methods, for training neural networks. The proposed algorithm can be used with non-differentiable activation functions and tailored to multi-objective/multi-loss problems. Finding the optimal values for weights of ANNs is a large-scale optimization problem. Therefore, instead of finding the optimal value for each variable, which is the common technique in classical CS, we accelerate optimization and convergence by bundling the weights. In fact, this strategy is a form of dimension reduction for optimization problems. Based on the experimental results, the proposed method, in some cases, outperforms the gradient-based approach, particularly in situations with insufficient labeled training data. The performance plots demonstrate a high convergence rate, highlighting the capability of our suggested method to find a reasonable solution with fewer function calls. As of now, the only practical and efficient way of training ANNs with hundreds of thousands of weights is gradient-based algorithms such as SGD or Adam. In this paper we introduce an alternative method for training ANNs.
|
[
"['Ehsan Rokhsatyazdi' 'Shahryar Rahnamayan' 'Sevil Zanjani Miyandoab'\n 'Azam Asilian Bidgoli' 'H. R. Tizhoosh']"
] |
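A hedged sketch of the idea above: weights of a small network are grouped into bundles, and a pattern-search loop perturbs one bundle at a time, accepting moves that improve a non-differentiable metric (here F1) and shrinking the step when nothing improves. The network size, bundling scheme, step schedule, and dataset are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

def predict(w):
    """Tiny one-hidden-layer MLP (10 -> 8 -> 1); the 97 weights come in as a flat vector."""
    W1, b1 = w[:80].reshape(10, 8), w[80:88]
    W2, b2 = w[88:96], w[96]
    return (np.tanh(X @ W1 + b1) @ W2 + b2 > 0).astype(int)

def score(w):
    # F1 is non-differentiable, which is exactly why a gradient-free search can use it directly.
    return f1_score(y, predict(w), zero_division=0)

n_params = 97
w = 0.1 * rng.standard_normal(n_params)
bundles = np.array_split(rng.permutation(n_params), 10)   # bundle weights to shrink the search dimension
best, step = score(w), 0.5

for _ in range(40):
    improved = False
    for idx in bundles:                        # pattern search over bundles, not single weights
        for direction in (+1.0, -1.0):
            cand = w.copy()
            cand[idx] += direction * step
            s = score(cand)
            if s > best:
                w, best, improved = cand, s, True
                break
    if not improved:
        step *= 0.5                            # shrink the step when no bundle move helps
print("train F1:", round(best, 3))
```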