categories (string) | doi (string) | id (string) | year (float64) | venue (string) | link (string) | updated (string) | published (string) | title (string) | abstract (string) | authors (list)
---|---|---|---|---|---|---|---|---|---|---|
null | null | 2405.07316 | null | null | http://arxiv.org/pdf/2405.07316v1 | 2024-05-12T15:55:43Z | 2024-05-12T15:55:43Z | VALID: a Validated Algorithm for Learning in Decentralized Networks with
Possible Adversarial Presence | We introduce the paradigm of validated decentralized learning for undirected networks with heterogeneous data and possible adversarial infiltration. We require (a) convergence to a global empirical loss minimizer when adversaries are absent, and (b) either detection of adversarial presence or convergence to an admissible consensus irrespective of the adversarial configuration. To this end, we propose the VALID protocol which, to the best of our knowledge, is the first to achieve a validated learning guarantee. Moreover, VALID offers an O(1/T) convergence rate (under pertinent regularity assumptions), and computational and communication complexities comparable to non-adversarial distributed stochastic gradient descent. Remarkably, VALID retains optimal performance metrics in adversary-free environments, sidestepping the robustness penalties observed in prior byzantine-robust methods. A distinctive aspect of our study is a heterogeneity metric based on the norms of individual agents' gradients computed at the global empirical loss minimizer. This not only provides a natural statistic for detecting significant byzantine disruptions but also allows us to prove the optimality of VALID in wide generality. Lastly, our numerical results reveal that, in the absence of adversaries, VALID converges faster than state-of-the-art byzantine robust algorithms, while when adversaries are present, VALID terminates with each honest agent either converging to an admissible consensus or declaring adversarial presence in the network. | [
"['Mayank Bakshi' 'Sara Ghasvarianjahromi' 'Yauhen Yakimenka'\n 'Allison Beemer' 'Oliver Kosut' 'Joerg Kliewer']"
]
|
null | null | 2405.07317 | null | null | http://arxiv.org/pdf/2405.07317v1 | 2024-05-12T16:09:01Z | 2024-05-12T16:09:01Z | Machine Unlearning in Contrastive Learning | Machine unlearning is a complex process that requires the model to diminish the influence of the training data while keeping the loss of accuracy to a minimum. Despite the numerous studies on machine unlearning in recent years, the majority of them have primarily focused on supervised learning models, leaving research on contrastive learning models relatively underexplored. With the conviction that self-supervised learning harbors a promising potential, surpassing or rivaling that of supervised learning, we set out to investigate methods for machine unlearning centered around contrastive learning models. In this study, we introduce a novel gradient constraint-based approach for training the model to effectively achieve machine unlearning. Our method only necessitates a minimal number of training epochs and the identification of the data slated for unlearning. Remarkably, our approach demonstrates proficient performance not only on contrastive learning models but also on supervised learning models, showcasing its versatility and adaptability in various learning paradigms. | [
"['Zixin Wang' 'Kongyang Chen']"
]
|
null | null | 2405.07327 | null | null | http://arxiv.org/pdf/2405.07327v1 | 2024-05-12T16:33:48Z | 2024-05-12T16:33:48Z | Liquid Ensemble Selection for Continual Learning | Continual learning aims to enable machine learning models to continually learn from a shifting data distribution without forgetting what has already been learned. Such shifting distributions can be broken into disjoint subsets of related examples; by training each member of an ensemble on a different subset it is possible for the ensemble as a whole to achieve much higher accuracy with less forgetting than a naive model. We address the problem of selecting which models within an ensemble should learn on any given data, and which should predict. By drawing on work from delegative voting we develop an algorithm for using delegation to dynamically select which models in an ensemble are active. We explore a variety of delegation methods and performance metrics, ultimately finding that delegation is able to provide a significant performance boost over naive learning in the face of distribution shifts. | [
"['Carter Blair' 'Ben Armstrong' 'Kate Larson']"
]
|
null | null | 2405.07331 | null | null | http://arxiv.org/pdf/2405.07331v1 | 2024-05-12T16:54:57Z | 2024-05-12T16:54:57Z | Stochastic Bandits with ReLU Neural Networks | We study the stochastic bandit problem with ReLU neural network structure. We show that a $\tilde{O}(\sqrt{T})$ regret guarantee is achievable by considering bandits with one-layer ReLU neural networks; to the best of our knowledge, our work is the first to achieve such a guarantee. In this specific setting, we propose an OFU-ReLU algorithm that can achieve this upper bound. The algorithm first explores randomly until it reaches a linear regime, and then implements a UCB-type linear bandit algorithm to balance exploration and exploitation. Our key insight is that we can exploit the piecewise linear structure of ReLU activations and convert the problem into a linear bandit in a transformed feature space, once we learn the parameters of ReLU relatively accurately during the exploration stage. To remove dependence on model parameters, we design an OFU-ReLU+ algorithm based on a batching strategy, which can provide the same theoretical guarantee. | [
"['Kan Xu' 'Hamsa Bastani' 'Surbhi Goel' 'Osbert Bastani']"
]
|
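The OFU-ReLU abstract above describes an "explore randomly, then run a UCB-type linear bandit" structure. As a rough illustration of the second phase only, here is a minimal LinUCB-style sketch; the feature vectors, exploration weight `alpha`, and ridge regularizer are hypothetical choices, not the paper's algorithm, which operates in a feature space derived from the learned ReLU parameters.

```python
import numpy as np

class LinUCB:
    """Minimal linear-UCB sketch: ridge estimate plus an exploration bonus."""

    def __init__(self, dim, alpha=1.0, reg=1.0):
        self.A = reg * np.eye(dim)   # regularized Gram matrix of seen features
        self.b = np.zeros(dim)       # running sum of reward-weighted features
        self.alpha = alpha           # exploration weight (hypothetical value)

    def select(self, arm_features):
        A_inv = np.linalg.inv(self.A)
        theta = A_inv @ self.b       # ridge-regression parameter estimate
        # UCB score: predicted reward plus confidence-width bonus.
        ucb = [x @ theta + self.alpha * np.sqrt(x @ A_inv @ x)
               for x in arm_features]
        return int(np.argmax(ucb))

    def update(self, x, reward):
        self.A += np.outer(x, x)
        self.b += reward * x
```

After enough observations, the estimate dominates the bonus and the policy settles on the empirically best arm.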
null | null | 2405.07336 | null | null | http://arxiv.org/pdf/2405.07336v1 | 2024-05-12T17:11:50Z | 2024-05-12T17:11:50Z | Data Trading Combination Auction Mechanism based on the Exponential
Mechanism | With the widespread application of machine learning technology in recent years, the demand for training data has increased significantly, leading to the emergence of research areas such as data trading. The work in this field is still in the developmental stage. Different buyers have varying degrees of demand for various types of data, and auctions play a role in such scenarios due to their authenticity and fairness. Recent related work has proposed combination auction mechanisms for different domains. However, such mechanisms have not addressed the privacy concerns of buyers. In this paper, we design a \textit{Data Trading Combination Auction Mechanism based on the exponential mechanism} (DCAE) to protect buyers' bidding privacy from being leaked. We apply the exponential mechanism to select the final settlement price for the auction and generate a probability distribution based on the relationship between the price and the revenue. In the experimental aspect, we consider the selection of different mechanisms under two scenarios, and the experimental results show that this method can ensure high auction revenue and protect buyers' privacy from being violated. | [
"['Kongyang Chen' 'Zeming Xu' 'Bing Mi']"
]
|
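The DCAE abstract above applies the exponential mechanism to pick a settlement price with probability tied to revenue. A generic sketch of that selection step follows; the candidate prices, revenue utility, and sensitivity value are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def em_probabilities(scores, epsilon, sensitivity):
    """Exponential-mechanism weights: Pr[i] proportional to
    exp(epsilon * score_i / (2 * sensitivity))."""
    logits = (epsilon * np.asarray(scores, dtype=float)) / (2.0 * sensitivity)
    logits -= logits.max()  # shift for numerical stability; probs unchanged
    p = np.exp(logits)
    return p / p.sum()

def choose_price(prices, bids, epsilon, sensitivity, rng):
    # Utility of candidate settlement price p: revenue p * #{bids >= p}.
    # (Hypothetical utility; the paper defines its own price-revenue relation.)
    revenue = [p * sum(b >= p for b in bids) for p in prices]
    probs = em_probabilities(revenue, epsilon, sensitivity)
    return prices[rng.choice(len(prices), p=probs)]
```

Higher `epsilon` concentrates mass on the revenue-maximizing price; lower `epsilon` flattens the distribution, giving stronger protection of the bids at some revenue cost.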
null | null | 2405.07343 | null | null | http://arxiv.org/pdf/2405.07343v1 | 2024-05-12T17:40:27Z | 2024-05-12T17:40:27Z | Graph neural networks for power grid operational risk assessment under
evolving grid topology | This article investigates the ability of graph neural networks (GNNs) to identify risky conditions in a power grid over the subsequent few hours, without explicit, high-resolution information regarding future generator on/off status (grid topology) or power dispatch decisions. The GNNs are trained using supervised learning, to predict the power grid's aggregated bus-level (either zonal or system-level) or individual branch-level state under different power supply and demand conditions. The variability of the stochastic grid variables (wind/solar generation and load demand), and their statistical correlations, are rigorously considered while generating the inputs for the training data. The outputs in the training data, obtained by solving numerous mixed-integer linear programming (MILP) optimal power flow problems, correspond to system-level, zonal and transmission line-level quantities of interest (QoIs). The QoIs predicted by the GNNs are used to conduct hours-ahead, sampling-based reliability and risk assessment w.r.t. zonal and system-level (load shedding) as well as branch-level (overloading) failure events. The proposed methodology is demonstrated for three synthetic grids with sizes ranging from 118 to 2848 buses. Our results demonstrate that GNNs are capable of providing fast and accurate prediction of QoIs and can be good proxies for computationally expensive MILP algorithms. The excellent accuracy of GNN-based reliability and risk assessment suggests that GNN models can substantially improve situational awareness by quickly providing rigorous reliability and risk estimates. | [
"['Yadong Zhang' 'Pranav M Karve' 'Sankaran Mahadevan']"
]
|
null | null | 2405.07344 | null | null | http://arxiv.org/pdf/2405.07344v2 | 2024-06-05T16:46:11Z | 2024-05-12T17:40:48Z | TKAN: Temporal Kolmogorov-Arnold Networks | Recurrent Neural Networks (RNNs) have revolutionized many areas of machine learning, particularly in natural language and data sequence processing. Long Short-Term Memory (LSTM) has demonstrated its ability to capture long-term dependencies in sequential data. Inspired by Kolmogorov-Arnold Networks (KANs), a promising alternative to Multi-Layer Perceptrons (MLPs), we propose a new neural network architecture that draws on both KANs and LSTMs: the Temporal Kolmogorov-Arnold Networks (TKANs). TKANs combine the strengths of both networks; they are composed of Recurring Kolmogorov-Arnold Network (RKAN) layers embedding memory management. This innovation enables us to perform multi-step time series forecasting with enhanced accuracy and efficiency. By addressing the limitations of traditional models in handling complex sequential patterns, the TKAN architecture offers significant potential for advancements in fields requiring forecasts more than one step ahead. | [
"['Remi Genet' 'Hugo Inzirillo']"
]
|
null | null | 2405.07348 | null | null | http://arxiv.org/pdf/2405.07348v2 | 2024-05-14T16:44:02Z | 2024-05-12T17:54:50Z | MedConceptsQA: Open Source Medical Concepts QA Benchmark | We present MedConceptsQA, a dedicated open source benchmark for medical concepts question answering. The benchmark comprises questions about various medical concepts across different vocabularies: diagnoses, procedures, and drugs. The questions are categorized into three levels of difficulty: easy, medium, and hard. We conducted evaluations of the benchmark using various Large Language Models. Our findings show that pre-trained clinical Large Language Models achieved accuracy levels close to random guessing on this benchmark, despite being pre-trained on medical data. However, GPT-4 achieves an absolute average improvement of nearly 27%-37% (27% for zero-shot learning and 37% for few-shot learning) when compared to clinical Large Language Models. Our benchmark serves as a valuable resource for evaluating the understanding and reasoning of medical concepts by Large Language Models. Our benchmark is available at https://huggingface.co/datasets/ofir408/MedConceptsQA | [
"['Ofir Ben Shoham' 'Nadav Rappoport']"
]
|
null | null | 2405.07354 | null | null | http://arxiv.org/pdf/2405.07354v1 | 2024-05-12T18:25:38Z | 2024-05-12T18:25:38Z | SoccerNet-Echoes: A Soccer Game Audio Commentary Dataset | The application of Automatic Speech Recognition (ASR) technology in soccer offers numerous opportunities for sports analytics. Specifically, extracting audio commentaries with ASR provides valuable insights into the events of the game, and opens the door to several downstream applications such as automatic highlight generation. This paper presents SoccerNet-Echoes, an augmentation of the SoccerNet dataset with automatically generated transcriptions of audio commentaries from soccer game broadcasts, enhancing video content with rich layers of textual information derived from the game audio using ASR. These textual commentaries, generated using the Whisper model and translated with Google Translate, extend the usefulness of the SoccerNet dataset in diverse applications such as enhanced action spotting, automatic caption generation, and game summarization. By incorporating textual data alongside visual and auditory content, SoccerNet-Echoes aims to serve as a comprehensive resource for the development of algorithms specialized in capturing the dynamics of soccer games. We detail the methods involved in the curation of this dataset and the integration of ASR. We also highlight the implications of a multimodal approach in sports analytics, and how the enriched dataset can support diverse applications, thus broadening the scope of research and development in the field of sports analytics. | [
"['Sushant Gautam' 'Mehdi Houshmand Sarkhoosh' 'Jan Held' 'Cise Midoglu'\n 'Anthony Cioppa' 'Silvio Giancola' 'Vajira Thambawita'\n 'Michael A. Riegler' 'Pål Halvorsen' 'Mubarak Shah']"
]
|
null | null | 2405.07359 | null | null | http://arxiv.org/abs/2405.07359v1 | 2024-05-12T18:45:30Z | 2024-05-12T18:45:30Z | Forecasting with an N-dimensional Langevin Equation and a
Neural-Ordinary Differential Equation | Accurate prediction of electricity day-ahead prices is essential in competitive electricity markets. Although stationary electricity-price forecasting techniques have received considerable attention, research on non-stationary methods is comparatively scarce, despite the common prevalence of non-stationary features in electricity markets. Specifically, existing non-stationary techniques will often aim to address individual non-stationary features in isolation, leaving aside the exploration of concurrent multiple non-stationary effects. Our overarching objective here is the formulation of a framework to systematically model and forecast non-stationary electricity-price time series, encompassing the broader scope of non-stationary behavior. For this purpose we develop a data-driven model that combines an N-dimensional Langevin equation (LE) with a neural-ordinary differential equation (NODE). The LE captures fine-grained details of the electricity-price behavior in stationary regimes but is inadequate for non-stationary conditions. To overcome this inherent limitation, we adopt a NODE approach to learn, and at the same time predict, the difference between the actual electricity-price time series and the simulated price trajectories generated by the LE. By learning this difference, the NODE reconstructs the non-stationary components of the time series that the LE is not able to capture. We exemplify the effectiveness of our framework using the Spanish electricity day-ahead market as a prototypical case study. Our findings reveal that the NODE nicely complements the LE, providing a comprehensive strategy to tackle both stationary and non-stationary electricity-price behavior. The framework's dependability and robustness are demonstrated through different non-stationary scenarios by comparing it against a range of basic naive methods. | [
"['Antonio Malpica-Morales' 'Miguel A. Duran-Olivencia'\n 'Serafim Kalliadasis']"
]
|
null | null | 2405.07369 | null | null | http://arxiv.org/pdf/2405.07369v1 | 2024-05-12T20:02:25Z | 2024-05-12T20:02:25Z | Incorporating Anatomical Awareness for Enhanced Generalizability and
Progression Prediction in Deep Learning-Based Radiographic Sacroiliitis
Detection | Purpose: To examine whether incorporating anatomical awareness into a deep learning model can improve generalizability and enable prediction of disease progression. Methods: This retrospective multicenter study included conventional pelvic radiographs of 4 different patient cohorts focusing on axial spondyloarthritis (axSpA) collected at university and community hospitals. The first cohort, which consisted of 1483 radiographs, was split into training (n=1261) and validation (n=222) sets. The other cohorts comprising 436, 340, and 163 patients, respectively, were used as independent test datasets. For the second cohort, follow-up data of 311 patients was used to examine progression prediction capabilities. Two neural networks were trained, one on images cropped to the bounding box of the sacroiliac joints (anatomy-aware) and the other one on full radiographs. The performance of the models was compared using the area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, and specificity. Results: On the three test datasets, the standard model achieved AUC scores of 0.853, 0.817, and 0.947, with an accuracy of 0.770, 0.724, and 0.850, whereas the anatomy-aware model achieved AUC scores of 0.899, 0.846, and 0.957, with an accuracy of 0.821, 0.744, and 0.906, respectively. The patients who were identified as high risk by the anatomy-aware model had an odds ratio of 2.16 (95% CI: 1.19, 3.86) for having progression of radiographic sacroiliitis within 2 years. Conclusion: Anatomical awareness can improve the generalizability of a deep learning model in detecting radiographic sacroiliitis. The model is published as fully open source alongside this study. | [
"['Felix J. Dorfner' 'Janis L. Vahldiek' 'Leonhard Donle' 'Andrei Zhukov'\n 'Lina Xu' 'Hartmut Häntze' 'Marcus R. Makowski' 'Hugo J. W. L. Aerts'\n 'Fabian Proft' 'Valeria Rios Rodriguez' 'Judith Rademacher'\n 'Mikhail Protopopov' 'Hildrun Haibel' 'Torsten Diekhoff'\n 'Murat Torgutalp' 'Lisa C. Adams' 'Denis Poddubnyy' 'Keno K. Bressem']"
]
|
null | null | 2405.07374 | null | null | http://arxiv.org/pdf/2405.07374v2 | 2024-06-03T03:32:56Z | 2024-05-12T20:27:34Z | Conformalized Survival Distributions: A Generic Post-Process to Increase
Calibration | Discrimination and calibration represent two important properties of survival analysis, with the former assessing the model's ability to accurately rank subjects and the latter evaluating the alignment of predicted outcomes with actual events. Given their distinct natures, it is hard for survival models to simultaneously optimize both of them, especially as many previous results found that improving calibration tends to diminish discrimination performance. This paper introduces a novel approach utilizing conformal regression that can improve a model's calibration without degrading discrimination. We provide theoretical guarantees for the above claim, and rigorously validate the efficiency of our approach across 11 real-world datasets, showcasing its practical applicability and robustness in diverse scenarios. | [
"['Shi-ang Qi' 'Yakun Yu' 'Russell Greiner']"
]
|
null | null | 2405.07387 | null | null | http://arxiv.org/pdf/2405.07387v1 | 2024-05-12T22:18:25Z | 2024-05-12T22:18:25Z | Semantic Loss Functions for Neuro-Symbolic Structured Prediction | Structured output prediction problems are ubiquitous in machine learning. The prominent approach leverages neural networks as powerful feature extractors, otherwise assuming the independence of the outputs. These outputs, however, jointly encode an object, e.g. a path in a graph, and are therefore related through the structure underlying the output space. We discuss the semantic loss, which injects knowledge about such structure, defined symbolically, into training by minimizing the network's violation of such dependencies, steering the network towards predicting distributions satisfying the underlying structure. At the same time, it is agnostic to the arrangement of the symbols, and depends only on the semantics expressed thereby, while also enabling efficient end-to-end training and inference. We also discuss key improvements and applications of the semantic loss. One limitation of the semantic loss is that it does not exploit the association of every data point with certain features certifying its membership in a target class. We should therefore prefer minimum-entropy distributions over valid structures, which we obtain by additionally minimizing the neuro-symbolic entropy. We empirically demonstrate the benefits of this more refined formulation. Moreover, the semantic loss is designed to be modular and can be combined with both discriminative and generative neural models. This is illustrated by integrating it into generative adversarial networks, yielding constrained adversarial networks, a novel class of deep generative models able to efficiently synthesize complex objects obeying the structure of the underlying domain. | [
"['Kareem Ahmed' 'Stefano Teso' 'Paolo Morettin' 'Luca Di Liello'\n 'Pierfrancesco Ardino' 'Jacopo Gobbi' 'Yitao Liang' 'Eric Wang'\n 'Kai-Wei Chang' 'Andrea Passerini' 'Guy Van den Broeck']"
]
|
null | null | 2405.07391 | null | null | http://arxiv.org/pdf/2405.07391v2 | 2024-06-12T03:25:44Z | 2024-05-12T22:51:35Z | AnyRotate: Gravity-Invariant In-Hand Object Rotation with Sim-to-Real
Touch | Human hands are capable of in-hand manipulation in the presence of different hand motions. For a robot hand, harnessing rich tactile information to achieve this level of dexterity still remains a significant challenge. In this paper, we present AnyRotate, a system for gravity-invariant multi-axis in-hand object rotation using dense featured sim-to-real touch. We tackle this problem by training a dense tactile policy in simulation and present a sim-to-real method for rich tactile sensing to achieve zero-shot policy transfer. Our formulation allows the training of a unified policy to rotate unseen objects about arbitrary rotation axes in any hand direction. In our experiments, we highlight the benefit of capturing detailed contact information when handling objects with varying properties. Interestingly, despite not having explicit slip detection, we found rich multi-fingered tactile sensing can implicitly detect object movement within grasp and provide a reactive behavior that improves the robustness of the policy. The project website can be found at https://maxyang27896.github.io/anyrotate/. | [
"['Max Yang' 'Chenghua Lu' 'Alex Church' 'Yijiong Lin' 'Chris Ford'\n 'Haoran Li' 'Efi Psomopoulou' 'David A. W. Barton' 'Nathan F. Lepora']"
]
|
null | null | 2405.07393 | null | null | http://arxiv.org/pdf/2405.07393v1 | 2024-05-12T23:15:21Z | 2024-05-12T23:15:21Z | Intrinsic Fairness-Accuracy Tradeoffs under Equalized Odds | With the growing adoption of machine learning (ML) systems in areas like law enforcement, criminal justice, finance, hiring, and admissions, it is increasingly critical to guarantee the fairness of decisions assisted by ML. In this paper, we study the tradeoff between fairness and accuracy under the statistical notion of equalized odds. We present a new upper bound on the accuracy (that holds for any classifier), as a function of the fairness budget. In addition, our bounds also exhibit dependence on the underlying statistics of the data, labels and the sensitive group attributes. We validate our theoretical upper bounds through empirical analysis on three real-world datasets: COMPAS, Adult, and Law School. Specifically, we compare our upper bound to the tradeoffs that are achieved by various existing fair classifiers in the literature. Our results show that achieving high accuracy subject to low bias could be fundamentally limited based on the statistical disparity across the groups. | [
"['Meiyu Zhong' 'Ravi Tandon']"
]
|
null | null | 2405.07395 | null | null | http://arxiv.org/pdf/2405.07395v1 | 2024-05-12T23:18:14Z | 2024-05-12T23:18:14Z | CaFA: Global Weather Forecasting with Factorized Attention on Sphere | Accurate weather forecasting is crucial in various sectors, impacting decision-making processes and societal events. Data-driven approaches based on machine learning models have recently emerged as a promising alternative to numerical weather prediction models given their potential to capture physics of different scales from historical data and the significantly lower computational cost during the prediction stage. Renowned for its state-of-the-art performance across diverse domains, the Transformer model has also gained popularity in machine learning weather prediction. Yet applying Transformer architectures to weather forecasting, particularly on a global scale, is computationally challenging due to the quadratic complexity of attention and the quadratic increase in spatial points as resolution increases. In this work, we propose a factorized-attention-based model tailored for spherical geometries to mitigate this issue. More specifically, it utilizes multi-dimensional factorized kernels that convolve over different axes where the computational complexity of the kernel is only quadratic to the axial resolution instead of overall resolution. The deterministic forecasting accuracy of the proposed model on $1.5^\circ$ and 0-7 days' lead time is on par with state-of-the-art purely data-driven machine learning weather prediction models. We also showcase the proposed model holds great potential to push forward the Pareto front of accuracy-efficiency for Transformer weather models, where it can achieve better accuracy with less computational cost compared to Transformer based models with standard attention. | [
"['Zijie Li' 'Anthony Zhou' 'Saurabh Patil' 'Amir Barati Farimani']"
]
|
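The axial factorization idea in the CaFA abstract above — attending along one spatial axis at a time, so cost is quadratic only in each axial resolution rather than in the total number of grid points — can be sketched in plain NumPy. This is a generic single-head axial-attention toy with no learned projections, not CaFA's actual factorized kernel:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # stabilize before exponentiating
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def axial_attention(x):
    """x: (H, W, d) grid features -> (H, W, d) after attention over W, then H."""
    H, W, d = x.shape
    # Attend along the W (e.g. longitude) axis: each row independently,
    # cost O(H * W^2) instead of O((H*W)^2) for full attention.
    attn_w = softmax(np.einsum("hwd,hvd->hwv", x, x) / np.sqrt(d), axis=-1)
    x = np.einsum("hwv,hvd->hwd", attn_w, x)
    # Attend along the H (e.g. latitude) axis: each column independently.
    attn_h = softmax(np.einsum("hwd,gwd->hwg", x, x) / np.sqrt(d), axis=-1)
    x = np.einsum("hwg,gwd->hwd", attn_h, x)
    return x
```

Stacking the two axial passes lets information propagate across the whole grid while keeping each attention matrix small.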
null | null | 2405.07404 | null | null | http://arxiv.org/pdf/2405.07404v1 | 2024-05-13T00:51:36Z | 2024-05-13T00:51:36Z | Indoor PM2.5 forecasting and the association with outdoor air pollution:
a modelling study based on sensor data in Australia | Exposure to poor indoor air quality poses significant health risks, necessitating thorough assessment to mitigate associated dangers. This study aims to predict hourly indoor fine particulate matter (PM2.5) concentrations and investigate their correlation with outdoor PM2.5 levels across 24 distinct buildings in Australia. Indoor air quality data were gathered from 91 monitoring sensors in eight Australian cities spanning 2019 to 2022. Employing an innovative three-stage deep ensemble machine learning framework (DEML), comprising three base models (Support Vector Machine, Random Forest, and eXtreme Gradient Boosting) and two meta-models (Random Forest and Generalized Linear Model), hourly indoor PM2.5 concentrations were predicted. The model's accuracy was evaluated using a rolling windows approach, comparing its performance against three benchmark algorithms (SVM, RF, and XGBoost). Additionally, a correlation analysis assessed the relationship between indoor and outdoor PM2.5 concentrations. Results indicate that the DEML model consistently outperformed benchmark models, achieving an R2 ranging from 0.63 to 0.99 and RMSE from 0.01 to 0.663 mg/m3 for most sensors. Notably, outdoor PM2.5 concentrations significantly impacted indoor air quality, particularly evident during events like bushfires. This study underscores the importance of accurate indoor air quality prediction, crucial for developing location-specific early warning systems and informing effective interventions. By promoting protective behaviors, these efforts contribute to enhanced public health outcomes. | [
"['Wenhua Yu' 'Bahareh Nakisa' 'Seng W. Loke' 'Svetlana Stevanovic'\n 'Yuming Guo' 'Mohammad Naim Rastgoo']"
]
|
null | null | 2405.07414 | null | null | http://arxiv.org/pdf/2405.07414v2 | 2024-05-14T01:29:37Z | 2024-05-13T01:23:14Z | Binning as a Pretext Task: Improving Self-Supervised Learning in Tabular
Domains | The ability of deep networks to learn superior representations hinges on leveraging the proper inductive biases, considering the inherent properties of datasets. In tabular domains, it is critical to effectively handle heterogeneous features (both categorical and numerical) in a unified manner and to grasp irregular functions like piecewise constant functions. To address the challenges in the self-supervised learning framework, we propose a novel pretext task based on the classical binning method. The idea is straightforward: reconstructing the bin indices (either orders or classes) rather than the original values. This pretext task provides the encoder with an inductive bias to capture the irregular dependencies, mapping from continuous inputs to discretized bins, and mitigates the feature heterogeneity by setting all features to have category-type targets. Our empirical investigations ascertain several advantages of binning: capturing the irregular function, compatibility with encoder architecture and additional modifications, standardizing all features into equal sets, grouping similar values within a feature, and providing ordering information. Comprehensive evaluations across diverse tabular datasets corroborate that our method consistently improves tabular representation learning performance for a wide range of downstream tasks. The code is available at https://github.com/kyungeun-lee/tabularbinning. | [
"['Kyungeun Lee' 'Ye Seul Sim' 'Hye-Seung Cho' 'Moonjung Eo' 'Suhee Yoon'\n 'Sanghyu Yoon' 'Woohyung Lim']"
]
|
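The binning pretext targets described in the abstract above are easy to sketch: discretize each numerical feature into quantile bins and let the encoder reconstruct the bin index instead of the raw value. The bin count below is an illustrative choice, not the paper's setting:

```python
import numpy as np

def quantile_bin_targets(X, n_bins=10):
    """X: (n_samples, n_features) float array -> integer bin-index targets.

    Each feature is discretized independently using its empirical quantiles,
    so every feature ends up with the same category-type target space.
    """
    targets = np.empty_like(X, dtype=int)
    for j in range(X.shape[1]):
        # Interior bin edges from the quantiles of feature j (n_bins - 1 edges).
        qs = np.linspace(0.0, 1.0, n_bins + 1)[1:-1]
        edges = np.quantile(X[:, j], qs)
        # searchsorted maps each value to its bin index in [0, n_bins - 1].
        targets[:, j] = np.searchsorted(edges, X[:, j], side="right")
    return targets
```

The resulting integer matrix can be used directly as classification (or ordinal) targets for a reconstruction-style pretext objective.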
null | null | 2405.07415 | null | null | http://arxiv.org/pdf/2405.07415v1 | 2024-05-13T01:29:48Z | 2024-05-13T01:29:48Z | Structured Reinforcement Learning for Incentivized Stochastic Covert
Optimization | This paper studies how a stochastic gradient algorithm (SG) can be controlled to hide the estimate of the local stationary point from an eavesdropper. Such problems are of significant interest in distributed optimization settings like federated learning and inventory management. A learner queries a stochastic oracle and incentivizes the oracle to obtain noisy gradient measurements and perform SG. The oracle probabilistically returns either a noisy gradient of the function or a non-informative measurement, depending on the oracle state and incentive. The learner's query and incentive are visible to an eavesdropper who wishes to estimate the stationary point. This paper formulates the problem of the learner performing covert optimization by dynamically incentivizing the stochastic oracle and obfuscating the eavesdropper as a finite-horizon Markov decision process (MDP). Using conditions for interval-dominance on the cost and transition probability structure, we show that the optimal policy for the MDP has a monotone threshold structure. We propose searching for the optimal stationary policy with the threshold structure using a stochastic approximation algorithm and a multi-armed bandit approach. The effectiveness of our methods is numerically demonstrated on a covert federated learning hate-speech classification task. | [
"['Adit Jain' 'Vikram Krishnamurthy']"
]
|
null | null | 2405.07432 | null | null | http://arxiv.org/pdf/2405.07432v1 | 2024-05-13T02:18:49Z | 2024-05-13T02:18:49Z | Compressed Online Learning of Conditional Mean Embedding | The conditional mean embedding (CME) encodes Markovian stochastic kernels through their actions on probability distributions embedded within the reproducing kernel Hilbert spaces (RKHS). The CME plays a key role in several well-known machine learning tasks such as reinforcement learning, analysis of dynamical systems, etc. We present an algorithm to learn the CME incrementally from data via an operator-valued stochastic gradient descent. As is well-known, function learning in RKHS suffers from scalability challenges from large data. We utilize a compression mechanism to counter the scalability challenge. The core contribution of this paper is a finite-sample performance guarantee on the last iterate of the online compressed operator learning algorithm with fast-mixing Markovian samples, when the target CME may not be contained in the hypothesis space. We illustrate the efficacy of our algorithm by applying it to the analysis of an example dynamical system. | [
"['Boya Hou' 'Sina Sanjari' 'Alec Koppel' 'Subhonmesh Bose']"
]
|
null | null | 2405.07436 | null | null | http://arxiv.org/pdf/2405.07436v1 | 2024-05-13T02:31:08Z | 2024-05-13T02:31:08Z | Can Language Models Explain Their Own Classification Behavior? | Large language models (LLMs) perform well at a myriad of tasks, but explaining the processes behind this performance is a challenge. This paper investigates whether LLMs can give faithful high-level explanations of their own internal processes. To explore this, we introduce a dataset, ArticulateRules, of few-shot text-based classification tasks generated by simple rules. Each rule is associated with a simple natural-language explanation. We test whether models that have learned to classify inputs competently (both in- and out-of-distribution) are able to articulate freeform natural language explanations that match their classification behavior. Our dataset can be used for both in-context and finetuning evaluations. We evaluate a range of LLMs, demonstrating that articulation accuracy varies considerably between models, with a particularly sharp increase from GPT-3 to GPT-4. We then investigate whether we can improve GPT-3's articulation accuracy through a range of methods. GPT-3 completely fails to articulate 7/10 rules in our test, even after additional finetuning on correct explanations. We release our dataset, ArticulateRules, which can be used to test self-explanation for LLMs trained either in-context or by finetuning. | [
"['Dane Sherburn' 'Bilal Chughtai' 'Owain Evans']"
]
|
null | null | 2405.07440 | null | null | http://arxiv.org/pdf/2405.07440v1 | 2024-05-13T02:58:59Z | 2024-05-13T02:58:59Z | Maximizing Information Gain in Privacy-Aware Active Learning of Email
Anomalies | Redacted emails satisfy most privacy requirements but they make it more difficult to detect anomalous emails that may be indicative of data exfiltration. In this paper we develop an enhanced method of Active Learning using an information gain maximizing heuristic, and we evaluate its effectiveness in a real-world setting where only redacted versions of email could be labeled by human analysts due to privacy concerns. In the first case study we examined how Active Learning should be carried out. We found that model performance was best when a single highly skilled (in terms of the labelling task) analyst provided the labels. In the second case study we used confidence ratings to estimate the labeling uncertainty of analysts and then prioritized instances for labeling based on the expected information gain (the difference between model uncertainty and analyst uncertainty) that would be provided by labelling each instance. We found that the information gain maximizing heuristic improved model performance over existing sampling methods for Active Learning. Based on the results obtained, we recommend that analysts should be screened, and possibly trained, prior to implementation of Active Learning in cybersecurity applications. We also recommend that the information gain maximizing sampling method (based on expert confidence) should be used in early stages of Active Learning, provided that well-calibrated confidence can be obtained. We also note that the expertise of analysts should be assessed prior to Active Learning, as we found that analysts with lower labelling skill had poorly calibrated (over-) confidence in their labels. | [
"['Mu-Huan Miles Chung' 'Sharon Li' 'Jaturong Kongmanee' 'Lu Wang'\n 'Yuhong Yang' 'Calvin Giang' 'Khilan Jerath' 'Abhay Raman' 'David Lie'\n 'Mark Chignell']"
]
|
null | null | 2405.07441 | null | null | http://arxiv.org/pdf/2405.07441v2 | 2024-05-22T16:36:17Z | 2024-05-13T02:59:50Z | Reducing Spatial Discretization Error on Coarse CFD Simulations Using an
OpenFOAM-Embedded Deep Learning Framework | We propose a method for reducing the spatial discretization error of coarse computational fluid dynamics (CFD) problems by enhancing the quality of low-resolution simulations using a deep learning model fed with high-quality data. We substitute the default differencing scheme for the convection term with a feed-forward neural network that interpolates velocities from cell centers to face values to produce velocities that approximate the fine-mesh data well. The deep learning framework incorporates the open-source CFD code OpenFOAM, resulting in an end-to-end differentiable model. We automatically differentiate the CFD physics using a discrete adjoint version of the code. We present a fast communication method between TensorFlow (Python) and OpenFOAM (C++) that accelerates the training process. We applied the model to the flow past a square cylinder problem, reducing the error to about 50% for simulations outside the training distribution compared to the traditional solver in the x- and y-velocity components using an 8x coarser mesh. The training is affordable in terms of time and data samples since the architecture exploits the local features of the physics while generating stable predictions for mid-term simulations. | [
"['Jesus Gonzalez-Sieiro' 'David Pardo' 'Vincenzo Nava' 'Victor M. Calo'\n 'Markus Towara']"
]
|
null | null | 2405.07452 | null | null | http://arxiv.org/pdf/2405.07452v2 | 2024-05-18T08:55:05Z | 2024-05-13T03:27:02Z | PLA-SGCN: Protein-Ligand Binding Affinity Prediction by Integrating
Similar Pairs and Semi-supervised Graph Convolutional Network | The goal of protein-ligand binding affinity (PLA) prediction is to predict whether or not a ligand can bind to a protein sequence. Recently, deep learning has received much attention in PLA prediction. Deep learning-based approaches involve two steps: a feature extraction step and a task prediction step. Many deep learning-based approaches concentrate on introducing new feature extraction networks or integrating auxiliary knowledge like protein-protein interaction networks or gene ontology knowledge. Then, a task prediction network is designed simply using some fully connected layers. This paper aims to integrate retrieved similar hard protein-ligand pairs in PLA prediction (i.e., the task prediction step) using a semi-supervised graph convolutional network (GCN). Hard protein-ligand pairs are retrieved for each input query sample based on the manifold smoothness constraint. Then, a graph is learned automatically in which each node is a protein-ligand pair, and each edge represents the similarity between pairs. In other words, an end-to-end framework is proposed that simultaneously retrieves hard similar samples, learns a protein-ligand descriptor, learns the graph topology of the input sample with the retrieved similar hard samples (learns the adjacency matrix), and learns a semi-supervised GCN to predict the binding affinity (as the task predictor). The training step adjusts the parameter values, and in the inference step, the learned model is fine-tuned for each input sample. To evaluate the proposed approach, it is applied to the four well-known PDBbind, Davis, KIBA, and BindingDB datasets. The results show that the proposed method performs significantly better than the compared approaches. | [
"['Karim Abbasi' 'Parvin Razzaghi' 'Amin Ghareyazi' 'Hamid R. Rabiee']"
]
|
null | null | 2405.07453 | null | null | http://arxiv.org/pdf/2405.07453v1 | 2024-05-13T03:45:20Z | 2024-05-13T03:45:20Z | An Effectiveness Study Across Baseline and Neural Network-based Force
Estimation Methods on the da Vinci Research Kit Si System | In this study, we further investigate the robustness and generalization ability of a neural network (NN) based force estimation method, using the da Vinci Research Kit Si (dVRK-Si). To evaluate our method's performance, we compare the force estimation accuracy with several baseline methods. We conduct comparative studies between the dVRK Classic and dVRK-Si systems to benchmark the effectiveness of these approaches. We conclude that the NN-based method provides comparable force estimation accuracy across the two systems, as the ratio of the average root mean square error (RMSE) to the average force range is approximately 3.07% for the dVRK Classic, and 5.27% for the dVRK-Si. On the dVRK-Si, the force estimation RMSEs for all the baseline methods are 2 to 4 times larger than those of the NN-based method in all directions. One possible reason is that the baseline methods assume that static forces remain the same or that the dynamics are time-invariant. These assumptions may hold for the dVRK Classic, as it has pre-loaded weight and maintains horizontal self balance. Since the dVRK-Si configuration does not have this property, these assumptions no longer hold, and the NN-based method therefore significantly outperforms the baselines. | [
"['Hao Yang' 'Ayberk Acar' 'Keshuai Xu' 'Anton Deguet' 'Peter Kazanzides'\n 'Jie Ying Wu']"
]
|
null | null | 2405.07456 | null | null | http://arxiv.org/pdf/2405.07456v1 | 2024-05-13T04:12:03Z | 2024-05-13T04:12:03Z | Boosting House Price Estimations with Multi-Head Gated Attention | Evaluating house prices is crucial for various stakeholders, including homeowners, investors, and policymakers. However, traditional spatial interpolation methods have limitations in capturing the complex spatial relationships that affect property values. To address these challenges, we have developed a new method called Multi-Head Gated Attention for spatial interpolation. Our approach builds upon attention-based interpolation models and incorporates multiple attention heads and gating mechanisms to better capture spatial dependencies and contextual information. Importantly, our model produces embeddings that reduce the dimensionality of the data, enabling simpler models like linear regression to outperform complex ensembling models. We conducted extensive experiments to compare our model with baseline methods and the original attention-based interpolation model. The results show a significant improvement in the accuracy of house price predictions, validating the effectiveness of our approach. This research advances the field of spatial interpolation and provides a robust tool for more precise house price evaluation. Our GitHub repository contains the data and code for all datasets, which are available for researchers and practitioners interested in replicating or building upon our work. | [
"['Zakaria Abdellah Sellam' 'Cosimo Distante' 'Abdelmalik Taleb-Ahmed'\n 'Pier Luigi Mazzeo']"
]
|
null | null | 2405.07460 | null | null | http://arxiv.org/pdf/2405.07460v3 | 2024-06-13T16:22:04Z | 2024-05-13T04:35:14Z | HoneyBee: A Scalable Modular Framework for Creating Multimodal Oncology
Datasets with Foundational Embedding Models | Developing accurate machine learning models for oncology requires large-scale, high-quality multimodal datasets. However, creating such datasets remains challenging due to the complexity and heterogeneity of medical data. To address this challenge, we introduce HoneyBee, a scalable modular framework for building multimodal oncology datasets that leverages foundation models to generate representative embeddings. HoneyBee integrates various data modalities, including clinical diagnostic and pathology imaging data, medical notes, reports, records, and molecular data. It employs data preprocessing techniques and foundation models to generate embeddings that capture the essential features and relationships within the raw medical data. The generated embeddings are stored in a structured format using Hugging Face datasets and PyTorch dataloaders for accessibility. Vector databases enable efficient querying and retrieval for machine learning applications. We demonstrate the effectiveness of HoneyBee through experiments assessing the quality and representativeness of these embeddings. The framework is designed to be extensible to other medical domains and aims to accelerate oncology research by providing high-quality, machine learning-ready datasets. HoneyBee is an ongoing open-source effort, and the code, datasets, and models are available at the project repository. | [
"['Aakash Tripathi' 'Asim Waqas' 'Yasin Yilmaz' 'Ghulam Rasool']"
]
|
null | null | 2405.07473 | null | null | http://arxiv.org/pdf/2405.07473v1 | 2024-05-13T05:18:23Z | 2024-05-13T05:18:23Z | Intrinsic Rewards for Exploration without Harm from Observational Noise:
A Simulation Study Based on the Free Energy Principle | In Reinforcement Learning (RL), artificial agents are trained to maximize numerical rewards by performing tasks. Exploration is essential in RL because agents must discover information before exploiting it. Two rewards encouraging efficient exploration are the entropy of the action policy and curiosity for information gain. Entropy is well-established in the literature, promoting randomized action selection. Curiosity is defined in a broad variety of ways in the literature, promoting discovery of novel experiences. One example, prediction error curiosity, rewards agents for discovering observations they cannot accurately predict. However, such agents may be distracted by unpredictable observational noises known as curiosity traps. Based on the Free Energy Principle (FEP), this paper proposes hidden state curiosity, which rewards agents by the KL divergence between the predictive prior and posterior probabilities of latent variables. We trained six types of agents to navigate mazes: baseline agents without rewards for entropy or curiosity, and agents rewarded for entropy and/or either prediction error curiosity or hidden state curiosity. We find that entropy and curiosity result in efficient exploration, especially when both are employed together. Notably, agents with hidden state curiosity demonstrate resilience against curiosity traps, which hinder agents with prediction error curiosity. This suggests that implementing the FEP may enhance the robustness and generalization of RL models, potentially aligning the learning processes of artificial and biological agents. | [
"['Theodore Jerome Tinker' 'Kenji Doya' 'Jun Tani']"
]
|
null | null | 2405.07482 | null | null | http://arxiv.org/pdf/2405.07482v1 | 2024-05-13T05:48:37Z | 2024-05-13T05:48:37Z | Marginal Fairness Sliced Wasserstein Barycenter | The sliced Wasserstein barycenter (SWB) is a widely acknowledged method for efficiently generalizing the averaging operation within probability measure spaces. However, achieving a marginally fair SWB, ensuring approximately equal distances from the barycenter to the marginals, remains unexplored. The uniformly weighted SWB is not necessarily the optimal choice to obtain the desired marginal fairness barycenter due to the heterogeneous structure of marginals and the non-optimality of the optimization. As the first attempt to tackle the problem, we define the marginal fairness sliced Wasserstein barycenter (MFSWB) as a constrained SWB problem. Due to the computational disadvantages of the formal definition, we propose two hyperparameter-free and computationally tractable surrogate MFSWB problems that implicitly minimize the distances to marginals and encourage marginal fairness at the same time. To further improve the efficiency, we perform slicing distribution selection and obtain the third surrogate definition by introducing a new slicing distribution that focuses more on marginally unfair projecting directions. We discuss the relationships among the three proposed problems and their relationship to the sliced multi-marginal Wasserstein distance. Finally, we conduct experiments on 3D point-cloud averaging, color harmonization, and training of a sliced Wasserstein autoencoder with class-fairness representation to show the favorable performance of the proposed surrogate MFSWB problems. | [
"['Khai Nguyen' 'Hai Nguyen' 'Nhat Ho']"
]
|
null | null | 2405.07488 | null | null | http://arxiv.org/pdf/2405.07488v1 | 2024-05-13T06:04:26Z | 2024-05-13T06:04:26Z | Predictive Modeling of Flexible EHD Pumps using Kolmogorov-Arnold
Networks | We present a novel approach to predicting the pressure and flow rate of flexible electrohydrodynamic pumps using the Kolmogorov-Arnold Network (KAN). Inspired by the Kolmogorov-Arnold representation theorem, KAN replaces fixed activation functions with learnable spline-based activation functions, enabling it to approximate complex nonlinear functions more effectively than traditional models like the Multi-Layer Perceptron (MLP) and Random Forest (RF). We evaluated KAN on a dataset of flexible EHD pump parameters and compared its performance against RF and MLP models. KAN achieved superior predictive accuracy, with Mean Squared Errors of 12.186 and 0.001 for pressure and flow rate predictions, respectively. The symbolic formulas extracted from KAN provided insights into the nonlinear relationships between input parameters and pump performance. These findings demonstrate that KAN offers exceptional accuracy and interpretability, making it a promising alternative for predictive modeling in electrohydrodynamic pumping. | [
"['Yanhong Peng' 'Miao He' 'Fangchao Hu' 'Zebing Mao' 'Xia Huang'\n 'Jun Ding']"
]
|
null | null | 2405.07489 | null | null | http://arxiv.org/pdf/2405.07489v1 | 2024-05-13T06:08:09Z | 2024-05-13T06:08:09Z | Sparse Domain Transfer via Elastic Net Regularization | Transportation of samples across different domains is a central task in several machine learning problems. A sensible requirement for domain transfer tasks in computer vision and language domains is the sparsity of the transportation map, i.e., the transfer algorithm aims to modify the least number of input features while transporting samples across the source and target domains. In this work, we propose Elastic Net Optimal Transport (ENOT) to address the sparse distribution transfer problem. The ENOT framework utilizes the $L_1$-norm and $L_2$-norm regularization mechanisms to find a sparse and stable transportation map between the source and target domains. To compute the ENOT transport map, we consider the dual formulation of the ENOT optimization task and prove that the sparsified gradient of the optimal potential function in the ENOT's dual representation provides the ENOT transport map. Furthermore, we demonstrate the application of the ENOT framework to perform feature selection for sparse domain transfer. We present the numerical results of applying ENOT to several domain transfer problems for synthetic Gaussian mixtures and real image and text data. Our empirical results indicate the success of the ENOT framework in identifying a sparse domain transport map. | [
"['Jingwei Zhang' 'Farzan Farnia']"
]
|
null | null | 2405.07497 | null | null | http://arxiv.org/pdf/2405.07497v1 | 2024-05-13T06:33:06Z | 2024-05-13T06:33:06Z | Towards Subgraph Isomorphism Counting with Graph Kernels | Subgraph isomorphism counting is known to be #P-complete and requires exponential time to find the exact solution. Utilizing representation learning has been shown to be a promising direction to represent substructures and approximate the solution. Graph kernels that implicitly capture the correlations among substructures in diverse graphs have exhibited great discriminative power in graph classification, so we are the first to investigate their potential for counting subgraph isomorphisms and further explore the augmentation of kernel capability through various variants, including polynomial and Gaussian kernels. Through comprehensive analysis, we enhance the graph kernels by incorporating neighborhood information. Finally, we present the results of extensive experiments to demonstrate the effectiveness of the enhanced graph kernels and discuss promising directions for future research. | [
"['Xin Liu' 'Weiqi Wang' 'Jiaxin Bai' 'Yangqiu Song']"
]
|
null | null | 2405.07509 | null | null | http://arxiv.org/pdf/2405.07509v1 | 2024-05-13T07:10:35Z | 2024-05-13T07:10:35Z | RESTAD: REconstruction and Similarity based Transformer for time series
Anomaly Detection | Anomaly detection in time series data is crucial across various domains. The scarcity of labeled data for such tasks has increased the attention towards unsupervised learning methods. These approaches, often relying solely on reconstruction error, typically fail to detect subtle anomalies in complex datasets. To address this, we introduce RESTAD, an adaptation of the Transformer model by incorporating a layer of Radial Basis Function (RBF) neurons within its architecture. This layer fits a non-parametric density in the latent representation, such that a high RBF output indicates similarity with predominantly normal training data. RESTAD integrates the RBF similarity scores with the reconstruction errors to increase sensitivity to anomalies. Our empirical evaluations demonstrate that RESTAD outperforms various established baselines across multiple benchmark datasets. | [
"['Ramin Ghorbani' 'Marcel J. T. Reinders' 'David M. J. Tax']"
]
|
null | null | 2405.07510 | null | null | http://arxiv.org/pdf/2405.07510v3 | 2024-05-29T07:39:56Z | 2024-05-13T07:10:53Z | PeRFlow: Piecewise Rectified Flow as Universal Plug-and-Play Accelerator | We present Piecewise Rectified Flow (PeRFlow), a flow-based method for accelerating diffusion models. PeRFlow divides the sampling process of generative flows into several time windows and straightens the trajectories in each interval via the reflow operation, thereby approaching piecewise linear flows. PeRFlow achieves superior performance in a few-step generation. Moreover, through dedicated parameterizations, the PeRFlow models inherit knowledge from the pretrained diffusion models. Thus, the training converges fast and the obtained models show advantageous transfer ability, serving as universal plug-and-play accelerators that are compatible with various workflows based on the pre-trained diffusion models. Codes for training and inference are publicly released. https://github.com/magic-research/piecewise-rectified-flow | [
"['Hanshu Yan' 'Xingchao Liu' 'Jiachun Pan' 'Jun Hao Liew' 'Qiang Liu'\n 'Jiashi Feng']"
]
|
null | null | 2405.07515 | null | null | http://arxiv.org/pdf/2405.07515v1 | 2024-05-13T07:22:50Z | 2024-05-13T07:22:50Z | OpenBot-Fleet: A System for Collective Learning with Real Robots | We introduce OpenBot-Fleet, a comprehensive open-source cloud robotics system for navigation. OpenBot-Fleet uses smartphones for sensing, local compute and communication, Google Firebase for secure cloud storage and off-board compute, and a robust yet low-cost wheeled robot to act in real-world environments. The robots collect task data and upload it to the cloud where navigation policies can be learned either offline or online and can then be sent back to the robot fleet. In our experiments we distribute 72 robots to a crowd of workers who operate them in homes, and show that OpenBot-Fleet can learn robust navigation policies that generalize to unseen homes with >80% success rate. OpenBot-Fleet represents a significant step forward in cloud robotics, making it possible to deploy large continually learning robot fleets in a cost-effective and scalable manner. All materials can be found at https://www.openbot.org. A video is available at https://youtu.be/wiv2oaDgDi8 | [
"['Matthias Müller' 'Samarth Brahmbhatt' 'Ankur Deka' 'Quentin Leboutet'\n 'David Hafner' 'Vladlen Koltun']"
]
|
null | null | 2405.07527 | null | null | http://arxiv.org/pdf/2405.07527v1 | 2024-05-13T07:46:48Z | 2024-05-13T07:46:48Z | Train Faster, Perform Better: Modular Adaptive Training in
Over-Parameterized Models | Despite their prevalence in deep-learning communities, over-parameterized models impose high computational costs for proper training. This work studies the fine-grained, modular-level learning dynamics of over-parameterized models to attain a more efficient and fruitful training strategy. Empirical evidence reveals that when scaling down into network modules, such as heads in self-attention models, we can observe varying learning patterns implicitly associated with each module's trainability. To describe such modular-level learning capabilities, we introduce a novel concept dubbed modular neural tangent kernel (mNTK), and we demonstrate that the quality of a module's learning is tightly associated with its mNTK's principal eigenvalue $\lambda_{max}$. A large $\lambda_{max}$ indicates that the module learns features with better convergence, while those miniature ones may impact generalization negatively. Inspired by the discovery, we propose a novel training strategy termed Modular Adaptive Training (MAT) to selectively update those modules whose $\lambda_{max}$ exceeds a dynamic threshold, concentrating the model on learning common features and ignoring those inconsistent ones. Unlike most existing training schemes with a complete BP cycle across all network modules, MAT can significantly save computations by its partially-updating strategy and can further improve performance. Experiments show that MAT nearly halves the computational cost of model training and outperforms the baselines in accuracy. | [
"['Yubin Shi' 'Yixuan Chen' 'Mingzhi Dong' 'Xiaochen Yang' 'Dongsheng Li'\n 'Yujiang Wang' 'Robert P. Dick' 'Qin Lv' 'Yingying Zhao' 'Fan Yang'\n 'Tun Lu' 'Ning Gu' 'Li Shang']"
]
|
null | null | 2405.07543 | null | null | http://arxiv.org/pdf/2405.07543v1 | 2024-05-13T08:25:45Z | 2024-05-13T08:25:45Z | Accelerating the Evolution of Personalized Automated Lane Change through
Lesson Learning | Personalization is crucial for the widespread adoption of advanced driver assistance systems. To match each user's preferences, online evolution capability is a must. However, conventional evolution methods learn from naturalistic driving data, which requires a lot of computing power and cannot be applied online. To address this challenge, this paper proposes a lesson learning approach: learning from the driver's takeover interventions. By leveraging online takeover data, the driving zone is generated to ensure perceived safety using Gaussian discriminant analysis. Real-time corrections to trajectory planning rewards are enacted through apprenticeship learning. Guided by the objective of optimizing rewards within the constraints of the driving zone, this approach employs model predictive control for trajectory planning. This lesson learning framework is highlighted for its faster evolution capability, adeptness at accumulating experience, assurance of perceived safety, and computational efficiency. Simulation results demonstrate that the proposed system consistently achieves successful customization without further takeover interventions. Accumulated experience yields a 24% enhancement in evolution efficiency. The average number of learning iterations is only 13.8. The average computation time is 0.08 seconds. | [
"['Jia Hu' 'Mingyue Lei' 'Duo Li' 'Zhenning Li' 'Jaehyun' 'So'\n 'Haoran Wang']"
]
|
null | null | 2405.07552 | null | null | http://arxiv.org/pdf/2405.07552v3 | 2024-06-01T14:58:24Z | 2024-05-13T08:32:22Z | Distributed High-Dimensional Quantile Regression: Estimation Efficiency
and Support Recovery | In this paper, we focus on distributed estimation and support recovery for high-dimensional linear quantile regression. Quantile regression is a popular alternative to least squares regression for robustness against outliers and data heterogeneity. However, the non-smoothness of the check loss function poses big challenges to both computation and theory in the distributed setting. To tackle these problems, we transform the original quantile regression into a least-squares optimization. By applying a double-smoothing approach, we extend a previous Newton-type distributed approach without the restrictive assumption of independence between the error term and the covariates. An efficient algorithm is developed, which enjoys high computational and communication efficiency. Theoretically, the proposed distributed estimator achieves a near-oracle convergence rate and high support recovery accuracy after a constant number of iterations. Extensive experiments on synthetic examples and a real data application further demonstrate the effectiveness of the proposed method. | [
"['Caixing Wang' 'Ziliang Shen']"
]
|
null | null | 2405.07560 | null | null | http://arxiv.org/pdf/2405.07560v1 | 2024-05-13T08:50:18Z | 2024-05-13T08:50:18Z | Coding historical causes of death data with Large Language Models | This paper investigates the feasibility of using pre-trained generative Large Language Models (LLMs) to automate the assignment of ICD-10 codes to historical causes of death. Due to the complex narratives often found in historical causes of death, this task has traditionally been manually performed by coding experts. We evaluate the ability of GPT-3.5, GPT-4, and Llama 2 LLMs to accurately assign ICD-10 codes on the HiCaD dataset that contains causes of death recorded in the civil death register entries of 19,361 individuals from Ipswich, Kilmarnock, and the Isle of Skye in the UK between 1861 and 1901. Our findings show that GPT-3.5, GPT-4, and Llama 2 assign the correct code for 69%, 83%, and 40% of causes, respectively. By comparison, standard machine learning techniques achieve a maximum accuracy of 89%. All LLMs performed better for causes of death that contained terms still in use today, compared to archaic terms. They also performed better for short causes (1-2 words) than for longer ones. LLMs therefore do not currently perform well enough for historical ICD-10 code assignment tasks. We suggest further fine-tuning or alternative frameworks to achieve adequate performance. | [
"['Bjørn Pedersen' 'Maisha Islam' 'Doris Tove Kristoffersen'\n 'Lars Ailo Bongo' 'Eilidh Garrett' 'Alice Reid' 'Hilde Sommerseth']"
]
|
null | null | 2405.07562 | null | null | http://arxiv.org/pdf/2405.07562v1 | 2024-05-13T08:52:04Z | 2024-05-13T08:52:04Z | GLiRA: Black-Box Membership Inference Attack via Knowledge Distillation | While Deep Neural Networks (DNNs) have demonstrated remarkable performance in tasks related to perception and control, there are still several unresolved concerns regarding the privacy of their training data, particularly in the context of vulnerability to Membership Inference Attacks (MIAs). In this paper, we explore a connection between the susceptibility to membership inference attacks and the vulnerability to distillation-based functionality stealing attacks. In particular, we propose GLiRA, a distillation-guided approach to membership inference attacks on black-box neural networks. We observe that knowledge distillation significantly improves the efficiency of the likelihood-ratio membership inference attack, especially in the black-box setting, i.e., when the architecture of the target model is unknown to the attacker. We evaluate the proposed method across multiple image classification datasets and models and demonstrate that likelihood-ratio attacks, when guided by knowledge distillation, outperform the current state-of-the-art membership inference attacks in the black-box setting. | [
"['Andrey V. Galichin' 'Mikhail Pautov' 'Alexey Zhavoronkin'\n 'Oleg Y. Rogov' 'Ivan Oseledets']"
]
|
null | null | 2405.07590 | null | null | http://arxiv.org/pdf/2405.07590v1 | 2024-05-13T09:53:25Z | 2024-05-13T09:53:25Z | Evaluating the Explainable AI Method Grad-CAM for Breath Classification
on Newborn Time Series Data | With the digitalization of health care systems, artificial intelligence is becoming more present in medicine. Machine learning in particular shows great potential for complex tasks such as time series classification, usually at the cost of transparency and comprehensibility. This leads to a lack of trust by humans and thus hinders its active usage. Explainable artificial intelligence tries to close this gap by providing insight into the decision-making process; however, the actual usefulness of its different methods is unclear. This paper proposes a user-study-based evaluation of the explanation method Grad-CAM, applied to a neural network for the classification of breaths in time series neonatal ventilation data. We present the perceived usefulness of the explainability method by different stakeholders, exposing the difficulty of achieving actual transparency and the wish of many participants for more in-depth explanations. | [
"['Camelia Oprea' 'Mike Grüne' 'Mateusz Buglowski' 'Lena Olivier'\n 'Thorsten Orlikowsky' 'Stefan Kowalewski' 'Mark Schoberer'\n 'André Stollenwerk']"
]
|
null | null | 2405.07599 | null | null | http://arxiv.org/pdf/2405.07599v1 | 2024-05-13T09:59:59Z | 2024-05-13T09:59:59Z | Transferable Neural Wavefunctions for Solids | Deep-Learning-based Variational Monte Carlo (DL-VMC) has recently emerged as a highly accurate approach for finding approximate solutions to the many-electron Schrödinger equation. Despite its favorable scaling with the number of electrons, $\mathcal{O}(n_\text{el}^{4})$, the practical value of DL-VMC is limited by the high cost of optimizing the neural network weights for every system studied. To mitigate this problem, recent research has proposed optimizing a single neural network across multiple systems, reducing the cost per system. Here we extend this approach to solids, where similar but distinct calculations using different geometries, boundary conditions, and supercell sizes are often required. We show how to optimize a single ansatz across all of these variations, reducing the required number of optimization steps by an order of magnitude. Furthermore, we exploit the transfer capabilities of a pre-trained network. We successfully transfer a network, pre-trained on 2x2x2 supercells of LiH, to 3x3x3 supercells. This reduces the number of optimization steps required to simulate the large system by a factor of 50 compared to previous work. | [
"['Leon Gerard' 'Michael Scherbela' 'Halvard Sutterud' 'Matthew Foulkes'\n 'Philipp Grohs']"
]
|
null | null | 2405.07601 | null | null | http://arxiv.org/abs/2405.07601v2 | 2024-05-15T20:09:26Z | 2024-05-13T10:03:34Z | On-device Online Learning and Semantic Management of TinyML Systems | Recent advances in Tiny Machine Learning (TinyML) empower low-footprint embedded devices for real-time on-device Machine Learning. While many acknowledge the potential benefits of TinyML, its practical implementation presents unique challenges. This study aims to bridge the gap between prototyping single TinyML models and developing reliable TinyML systems in production: (1) Embedded devices operate in dynamically changing conditions. Existing TinyML solutions primarily focus on inference, with models trained offline on powerful machines and deployed as static objects. However, static models may underperform in the real world due to evolving input data distributions. We propose online learning to enable training on constrained devices, adapting local models towards the latest field conditions. (2) Nevertheless, current on-device learning methods struggle with heterogeneous deployment conditions and the scarcity of labeled data when applied across numerous devices. We introduce federated meta-learning incorporating online learning to enhance model generalization, facilitating rapid learning. This approach ensures optimal performance among distributed devices by knowledge sharing. (3) Moreover, TinyML's pivotal advantage is widespread adoption. Embedded devices and TinyML models prioritize extreme efficiency, leading to diverse characteristics ranging from memory and sensors to model architectures. Given their diversity and non-standardized representations, managing these resources becomes challenging as TinyML systems scale up. We present semantic management for the joint management of models and devices at scale. 
We demonstrate our methods through a basic regression example and then assess them in three real-world TinyML applications: handwritten character image classification, keyword audio classification, and smart building presence detection, confirming our approaches' effectiveness. | [
"['Haoyu Ren' 'Xue Li' 'Darko Anicic' 'Thomas A. Runkler']"
]
|
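The on-device online learning the abstract proposes can be illustrated in its simplest possible form: a linear model updated one streamed sample at a time, adapting when the field conditions drift. Everything here (model, drift point, learning rate) is an assumption for the sketch, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros(2)  # tiny on-device model: two weights

def online_step(w, x, y, lr=0.05):
    """One SGD step on squared error for a single streamed sample."""
    err = x @ w - y
    return w - lr * err * x

true_w = np.array([1.0, -2.0])
for t in range(2000):
    if t == 1000:
        true_w = np.array([3.0, 0.5])  # input-output relation drifts mid-stream
    x = rng.standard_normal(2)
    y = x @ true_w
    w = online_step(w, x, y)

# After the drift, the online learner has re-converged to the new regime.
final_error = np.linalg.norm(w - np.array([3.0, 0.5]))
```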
null | null | 2405.07609 | null | null | http://arxiv.org/pdf/2405.07609v1 | 2024-05-13T10:20:31Z | 2024-05-13T10:20:31Z | NoiseBench: Benchmarking the Impact of Real Label Noise on Named Entity
Recognition | Available training data for named entity recognition (NER) often contains a significant percentage of incorrect labels for entity types and entity boundaries. Such label noise poses challenges for supervised learning and may significantly deteriorate model quality. To address this, prior work proposed various noise-robust learning approaches capable of learning from data with partially incorrect labels. These approaches are typically evaluated using simulated noise where the labels in a clean dataset are automatically corrupted. However, as we show in this paper, this leads to unrealistic noise that is far easier to handle than real noise caused by human error or semi-automatic annotation. To enable the study of the impact of various types of real noise, we introduce NoiseBench, an NER benchmark consisting of clean training data corrupted with 6 types of real noise, including expert errors, crowdsourcing errors, automatic annotation errors and LLM errors. We present an analysis that shows that real noise is significantly more challenging than simulated noise, and show that current state-of-the-art models for noise-robust learning fall far short of their theoretically achievable upper bound. We release NoiseBench to the research community. | [
"['Elena Merdjanovska' 'Ansar Aynetdinov' 'Alan Akbik']"
]
|
null | null | 2405.07619 | null | null | http://arxiv.org/pdf/2405.07619v1 | 2024-05-13T10:26:28Z | 2024-05-13T10:26:28Z | Analysis of the rate of convergence of an over-parametrized
convolutional neural network image classifier learned by gradient descent | Image classification based on over-parametrized convolutional neural networks with a global average-pooling layer is considered. The weights of the network are learned by gradient descent. A bound on the rate of convergence of the difference between the misclassification risk of the newly introduced convolutional neural network estimate and the minimal possible value is derived. | [
"['Michael Kohler' 'Adam Krzyzak' 'Benjamin Walter']"
]
|
null | null | 2405.07621 | null | null | http://arxiv.org/pdf/2405.07621v2 | 2024-05-14T06:29:36Z | 2024-05-13T10:27:11Z | Towards Adaptive IMFs -- Generalization of utility functions in
Multi-Agent Frameworks | Intent Management Function (IMF) is an integral part of future-generation networks. In recent years, there has been some work on AI-based IMFs that can handle conflicting intents and prioritize the global objective based on an a priori definition of the utility function and accorded priorities for competing intents. Some of the earlier works use Multi-Agent Reinforcement Learning (MARL) techniques with AdHoc Teaming (AHT) approaches for efficient conflict handling in IMF. However, the success of such frameworks in real-life scenarios requires them to be flexible to business situations. The intent priorities can change and the utility function, which measures the extent of intent fulfilment, may also vary in definition. This paper proposes a novel mechanism whereby the IMF can generalize to different forms of utility functions and changes of intent priorities at run-time without additional training. Such generalization ability, without additional training requirements, would help to deploy IMF in live networks where customer intents and priorities change frequently. Results on the network emulator demonstrate the efficacy of the approach and its scalability to new intents; it outperforms existing techniques that require additional training to achieve the same degree of flexibility, thereby saving cost and increasing efficiency and adaptability. | [
"['Kaushik Dey' 'Satheesh K. Perepu' 'Abir Das' 'Pallab Dasgupta']"
]
|
null | null | 2405.07622 | null | null | http://arxiv.org/pdf/2405.07622v1 | 2024-05-13T10:27:17Z | 2024-05-13T10:27:17Z | De novo antibody design with SE(3) diffusion | We introduce IgDiff, an antibody variable domain diffusion model based on a general protein backbone diffusion framework which was extended to handle multiple chains. Assessing the designability and novelty of the structures generated with our model, we find that IgDiff produces highly designable antibodies that can contain novel binding regions. The backbone dihedral angles of sampled structures show good agreement with a reference antibody distribution. We verify these designed antibodies experimentally and find that all express with high yield. Finally, we compare our model with a state-of-the-art generative backbone diffusion model on a range of antibody design tasks, such as the design of the complementarity determining regions or the pairing of a light chain to an existing heavy chain, and show improved properties and designability. | [
"['Daniel Cutting' 'Frédéric A. Dreyer' 'David Errington'\n 'Constantin Schneider' 'Charlotte M. Deane']"
]
|
null | null | 2405.07626 | null | null | http://arxiv.org/pdf/2405.07626v1 | 2024-05-13T10:37:50Z | 2024-05-13T10:37:50Z | AnomalyLLM: Few-shot Anomaly Edge Detection for Dynamic Graphs using
Large Language Models | Detecting anomaly edges for dynamic graphs aims to identify edges that deviate significantly from the normal pattern and can be applied in various domains, such as cybersecurity, financial transactions, and AIOps. As time evolves, new types of anomaly edges emerge, and labeled anomaly samples are few for each type. Current methods are either designed to detect randomly inserted edges or require sufficient labeled data for model training, which limits their applicability to real-world applications. In this paper, we study this problem by leveraging the rich knowledge encoded in large language models (LLMs) and propose a method, namely AnomalyLLM. To align the dynamic graph with LLMs, AnomalyLLM pre-trains a dynamic-aware encoder to generate the representations of edges and reprograms the edges using the prototypes of word embeddings. Along with the encoder, we design an in-context learning framework that integrates the information of a few labeled samples to achieve few-shot anomaly detection. Experiments on four datasets reveal that AnomalyLLM can not only significantly improve the performance of few-shot anomaly detection, but also achieve superior results on new anomalies without any update of model parameters. | [
"['Shuo Liu' 'Di Yao' 'Lanting Fang' 'Zhetao Li' 'Wenbin Li' 'Kaiyu Feng'\n 'XiaoWen Ji' 'Jingping Bi']"
]
|
null | null | 2405.07637 | null | null | http://arxiv.org/pdf/2405.07637v2 | 2024-05-14T08:10:15Z | 2024-05-13T10:51:01Z | Near-Optimal Regret in Linear MDPs with Aggregate Bandit Feedback | In many real-world applications, it is hard to provide a reward signal in each step of a Reinforcement Learning (RL) process and more natural to give feedback when an episode ends. To this end, we study the recently proposed model of RL with Aggregate Bandit Feedback (RL-ABF), where the agent only observes the sum of rewards at the end of an episode instead of each reward individually. Prior work studied RL-ABF only in tabular settings, where the number of states is assumed to be small. In this paper, we extend ABF to linear function approximation and develop two efficient algorithms with near-optimal regret guarantees: a value-based optimistic algorithm built on a new randomization technique with a Q-functions ensemble, and a policy optimization algorithm that uses a novel hedging scheme over the ensemble. | [
"['Asaf Cassel' 'Haipeng Luo' 'Aviv Rosenberg' 'Dmitry Sotnikov']"
]
|
null | null | 2405.07640 | null | null | http://arxiv.org/pdf/2405.07640v2 | 2024-05-15T08:32:56Z | 2024-05-13T11:00:25Z | Hyperparameter Importance Analysis for Multi-Objective AutoML | Hyperparameter optimization plays a pivotal role in enhancing the predictive performance and generalization capabilities of ML models. However, in many applications, we do not only care about predictive performance but also about objectives such as inference time, memory, or energy consumption. In such MOO scenarios, determining the importance of hyperparameters poses a significant challenge due to the complex interplay between the conflicting objectives. In this paper, we propose the first method for assessing the importance of hyperparameters in the context of multi-objective hyperparameter optimization. Our approach leverages surrogate-based hyperparameter importance (HPI) measures, i.e. fANOVA and ablation paths, to provide insights into the impact of hyperparameters on the optimization objectives. Specifically, we compute the a-priori scalarization of the objectives and determine the importance of the hyperparameters for different objective tradeoffs. Through extensive empirical evaluations on diverse benchmark datasets with three different objectives paired with accuracy, namely time, demographic parity, and energy consumption, we demonstrate the effectiveness and robustness of our proposed method. Our findings not only offer valuable guidance for hyperparameter tuning in MOO tasks but also contribute to advancing the understanding of HPI in complex optimization scenarios. | [
"['Daphne Theodorakopoulos' 'Frederic Stahl' 'Marius Lindauer']"
]
|
null | null | 2405.07649 | null | null | http://arxiv.org/pdf/2405.07649v1 | 2024-05-13T11:13:49Z | 2024-05-13T11:13:49Z | Efficient Matrix Factorization Via Householder Reflections | Motivated by orthogonal dictionary learning problems, we propose a novel method for matrix factorization, where the data matrix $\mathbf{Y}$ is a product of a Householder matrix $\mathbf{H}$ and a binary matrix $\mathbf{X}$. First, we show that the exact recovery of the factors $\mathbf{H}$ and $\mathbf{X}$ from $\mathbf{Y}$ is guaranteed with $\Omega(1)$ columns in $\mathbf{Y}$. Next, we show approximate recovery (in the $\ell_\infty$ sense) can be done in polynomial time ($O(np)$) with $\Omega(\log n)$ columns in $\mathbf{Y}$. We hope the techniques in this work help in developing alternate algorithms for orthogonal dictionary learning. | [
"['Anirudh Dash' 'Aditya Siripuram']"
]
|
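The factorization model in the abstract above, $\mathbf{Y} = \mathbf{H}\mathbf{X}$ with a Householder matrix $\mathbf{H}$ and a binary matrix $\mathbf{X}$, is easy to instantiate numerically. A sketch (the helper name and sizes are invented for illustration); since a Householder reflection is orthogonal and symmetric, it is its own inverse, so knowing $\mathbf{H}$ recovers $\mathbf{X}$ exactly:

```python
import numpy as np

def make_householder(v):
    """Return the Householder reflection H = I - 2 v v^T / (v^T v)."""
    v = np.asarray(v, dtype=float).reshape(-1, 1)
    return np.eye(len(v)) - 2.0 * (v @ v.T) / (v.T @ v)

rng = np.random.default_rng(0)
n, p = 5, 8
H = make_householder(rng.standard_normal(n))
X = rng.integers(0, 2, size=(n, p)).astype(float)  # binary factor
Y = H @ X                                          # observed data matrix

# H is orthogonal and symmetric, so H itself inverts the mixing: X = H @ Y.
recovered = H @ Y
```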
null | null | 2405.07657 | null | null | http://arxiv.org/pdf/2405.07657v1 | 2024-05-13T11:37:50Z | 2024-05-13T11:37:50Z | Beyond traditional Magnetic Resonance processing with Artificial
Intelligence | Smart signal processing approaches using Artificial Intelligence are gaining momentum in NMR applications. In this study, we demonstrate that AI offers new opportunities beyond tasks addressed by traditional techniques. We developed and trained several artificial neural networks in our new toolbox Magnetic Resonance with Artificial intelligence (MR-Ai) to solve three "impossible" problems: quadrature detection using only Echo (or Anti-Echo) modulation from the traditional Echo/Anti-Echo scheme; assessing the uncertainty of signal intensity at each point in a spectrum processed by any given method; and defining a reference-free score for quantitative assessment of NMR spectrum quality. Our findings highlight the potential of AI techniques to revolutionize NMR processing and analysis. | [
"['Amir Jahangiri' 'Vladislav Orekhov']"
]
|
null | null | 2405.07662 | null | null | http://arxiv.org/pdf/2405.07662v1 | 2024-05-13T11:43:38Z | 2024-05-13T11:43:38Z | Squeezing Lemons with Hammers: An Evaluation of AutoML and Tabular Deep
Learning for Data-Scarce Classification Applications | Many industry verticals are confronted with small-sized tabular data. In this low-data regime, it is currently unclear whether the best performance can be expected from simple baselines, or more complex machine learning approaches that leverage meta-learning and ensembling. On 44 tabular classification datasets with sample sizes $\leq 500$, we find that L2-regularized logistic regression performs similarly to state-of-the-art automated machine learning (AutoML) frameworks (AutoPrognosis, AutoGluon) and off-the-shelf deep neural networks (TabPFN, HyperFast) on the majority of the benchmark datasets. We therefore recommend considering logistic regression as the first choice for data-scarce applications with tabular data and provide practitioners with best practices for further method selection. | [
"['Ricardo Knauer' 'Erik Rodner']"
]
|
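The recommended baseline, L2-regularized logistic regression, can be sketched in a few lines of numpy rather than the AutoML frameworks the paper compares against; this illustrative gradient-descent fit on synthetic small-sample data stands in for whatever off-the-shelf estimator the authors used:

```python
import numpy as np

def fit_logreg(X, y, lam=0.01, lr=0.1, steps=500):
    """Fit w for sigmoid(X @ w) with L2 penalty (lam/2) * ||w||^2."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))           # predicted probabilities
        grad = X.T @ (p - y) / len(y) + lam * w    # mean log-loss gradient + L2
        w -= lr * grad
    return w

# "Data-scarce" regime: a couple of hundred samples, near-linear labels.
rng = np.random.default_rng(1)
n = 200
X = rng.standard_normal((n, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = (X @ true_w + 0.1 * rng.standard_normal(n) > 0).astype(float)

w = fit_logreg(X, y)
accuracy = ((X @ w > 0).astype(float) == y).mean()
```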
null | null | 2405.07670 | null | null | http://arxiv.org/pdf/2405.07670v1 | 2024-05-13T11:59:20Z | 2024-05-13T11:59:20Z | Impact of white Gaussian internal noise on analog echo-state neural
networks | In recent years, more and more works have appeared devoted to the analog (hardware) implementation of artificial neural networks, in which neurons and the connections between them are based not on computer calculations but on physical principles. Such networks offer improved energy efficiency and, in some cases, scalability, but may be susceptible to internal noise. This paper studies the influence of noise on the functioning of recurrent networks using the example of trained echo state networks (ESNs). As ESN topologies, we chose the most common reservoir connection matrices: random uniform and band matrices with different connectivity. White Gaussian noise was chosen as the disturbance; depending on how it is introduced, it is additive or multiplicative, as well as correlated or uncorrelated. We show that the propagation of noise in the reservoir is mainly controlled by the statistical properties of the output connection matrix, namely the mean and the mean square. Depending on these values, more correlated or uncorrelated noise accumulates in the network. We also show that there are conditions under which even noise with an intensity of $10^{-20}$ is already enough to completely lose the useful signal. Finally, we show which types of noise are most critical for networks with different activation functions (hyperbolic tangent, sigmoid, and linear) and whether the network is self-closed. | [
"['Nadezhda Semenova']"
]
|
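The noisy reservoir dynamics studied above can be illustrated with a toy echo state network: a tanh reservoir updated with additive white Gaussian noise, compared against a noise-free copy driven by the same input. Sizes, topology (random uniform), and noise intensity here are assumptions for the sketch:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100                                       # reservoir size
W = rng.uniform(-1, 1, (N, N))                # random uniform topology
W *= 0.9 / max(abs(np.linalg.eigvals(W)))     # scale spectral radius below 1
w_in = rng.uniform(-1, 1, N)

def step(x, u, sigma=0.0):
    """One reservoir update: tanh activation plus additive Gaussian noise."""
    return np.tanh(W @ x + w_in * u) + sigma * rng.standard_normal(N)

x_clean = np.zeros(N)
x_noisy = np.zeros(N)
for t in range(50):
    u = np.sin(0.1 * t)                       # toy input signal
    x_clean = step(x_clean, u, sigma=0.0)
    x_noisy = step(x_noisy, u, sigma=1e-3)

# How far internal noise has pushed the state away from the clean trajectory.
deviation = np.linalg.norm(x_noisy - x_clean)
```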
null | null | 2405.07671 | null | null | http://arxiv.org/pdf/2405.07671v1 | 2024-05-13T11:59:24Z | 2024-05-13T11:59:24Z | Constructing a BPE Tokenization DFA | Many natural language processing systems operate over tokenizations of text to address the open-vocabulary problem. In this paper, we give and analyze an algorithm for the efficient construction of deterministic finite automata designed to operate directly on tokenizations produced by the popular byte pair encoding technique. This makes it possible to apply many existing techniques and algorithms to the tokenized case, such as pattern matching, equivalence checking of tokenization dictionaries, and composing tokenized languages in various ways. | [
"['Martin Berglund' 'Willeke Martens' 'Brink van der Merwe']"
]
|
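A toy version of the object constructed in the paper above: a DFA whose alphabet is BPE token ids rather than characters, here hand-built to accept exactly the token sequences that spell one word. The vocabulary and pattern are invented for the example; the paper's algorithm constructs such automata systematically from a tokenization dictionary:

```python
# Tiny BPE-style vocabulary mapping tokens to ids.
vocab = {"lo": 0, "w": 1, "low": 2, "er": 3}

# DFA over token ids accepting token sequences that spell "lower":
# either ["low", "er"] or ["lo", "w", "er"].
# States: 0 = start, 1 = saw "lo", 2 = saw "low"/"lo"+"w", 3 = accept.
delta = {
    (0, vocab["lo"]): 1,
    (0, vocab["low"]): 2,
    (1, vocab["w"]): 2,
    (2, vocab["er"]): 3,
}
ACCEPT = {3}

def accepts(token_ids):
    """Run the DFA on a sequence of token ids."""
    state = 0
    for t in token_ids:
        if (state, t) not in delta:
            return False  # no transition: reject
        state = delta[(state, t)]
    return state in ACCEPT

ok_merged = accepts([vocab["low"], vocab["er"]])
ok_split = accepts([vocab["lo"], vocab["w"], vocab["er"]])
bad = accepts([vocab["w"], vocab["er"]])
```

Because the automaton operates on token ids directly, standard automata algorithms (pattern matching, equivalence checking) apply to tokenized text without detokenizing.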
null | null | 2405.07679 | null | null | http://arxiv.org/pdf/2405.07679v1 | 2024-05-13T12:07:48Z | 2024-05-13T12:07:48Z | Class-wise Activation Unravelling the Enigma of Deep Double Descent | Double descent presents a counter-intuitive aspect within the machine learning domain, and researchers have observed its manifestation in various models and tasks. While some theoretical explanations have been proposed for this phenomenon in specific contexts, an accepted theory for its underlying mechanism in deep learning remains yet to be established. In this study, we revisited the phenomenon of double descent and discussed the conditions of its occurrence. This paper introduces the concept of class-activation matrices and a methodology for estimating the effective complexity of functions, with which we unveil that over-parameterized models exhibit more distinct and simpler class patterns in hidden activations compared to under-parameterized ones. We further looked into the interpolation of noisy labelled data among clean representations and demonstrated overfitting w.r.t. expressive capacity. By comprehensively analysing hypotheses and presenting corresponding empirical evidence that either validates or contradicts them, we aim to provide fresh insights into the phenomena of double descent and benign over-parameterization, and to facilitate future explorations in the field. The source code is available at https://github.com/Yufei-Gu-451/sparse-generalization.git. | [
"['Yufei Gu']"
]
|
null | null | 2405.07680 | null | null | http://arxiv.org/pdf/2405.07680v1 | 2024-05-13T12:10:57Z | 2024-05-13T12:10:57Z | Establishing a Unified Evaluation Framework for Human Motion Generation:
A Comparative Analysis of Metrics | The development of generative artificial intelligence for human motion generation has expanded rapidly, necessitating a unified evaluation framework. This paper presents a detailed review of eight evaluation metrics for human motion generation, highlighting their unique features and shortcomings. We propose standardized practices through a unified evaluation setup to facilitate consistent model comparisons. Additionally, we introduce a novel metric that assesses diversity in temporal distortion by analyzing warping diversity, thereby enhancing the evaluation of temporal data. We also conduct experimental analyses of three generative models using a publicly available dataset, offering insights into the interpretation of each metric in specific case scenarios. Our goal is to offer a clear, user-friendly evaluation framework for newcomers, complemented by publicly accessible code. | [
"['Ali Ismail-Fawaz' 'Maxime Devanne' 'Stefano Berretti' 'Jonathan Weber'\n 'Germain Forestier']"
]
|
null | null | 2405.07702 | null | null | http://arxiv.org/pdf/2405.07702v1 | 2024-05-13T12:39:08Z | 2024-05-13T12:39:08Z | FORESEE: Multimodal and Multi-view Representation Learning for Robust
Prediction of Cancer Survival | Integrating the different data modalities of cancer patients can significantly improve the predictive performance of patient survival. However, most existing methods ignore the simultaneous utilization of rich semantic features at different scales in pathology images. When collecting multimodal data and extracting features, there is a likelihood of encountering intra-modality missing data, introducing noise into the multimodal data. To address these challenges, this paper proposes a new end-to-end framework, FORESEE, for robustly predicting patient survival by mining multimodal information. Specifically, the cross-fusion transformer effectively utilizes features at the cellular level, tissue level, and tumor heterogeneity level to correlate prognosis through a cross-scale feature cross-fusion method. This enhances the ability of pathological image feature representation. Secondly, the hybrid attention encoder (HAE) uses the denoising contextual attention module to obtain the contextual relationship features and local detail features of the molecular data. HAE's channel attention module obtains global features of molecular data. Furthermore, to address the issue of missing information within modalities, we propose an asymmetrically masked triplet masked autoencoder to reconstruct lost information within modalities. Extensive experiments demonstrate the superiority of our method over state-of-the-art methods on four benchmark datasets in both complete and missing settings. | [
"['Liangrui Pan' 'Yijun Peng' 'Yan Li' 'Yiyi Liang' 'Liwen Xu'\n 'Qingchun Liang' 'Shaoliang Peng']"
]
|
null | null | 2405.07708 | null | null | http://arxiv.org/pdf/2405.07708v2 | 2024-05-14T12:37:15Z | 2024-05-13T12:52:58Z | Secure Aggregation Meets Sparsification in Decentralized Learning | Decentralized learning (DL) faces increased vulnerability to privacy breaches due to sophisticated attacks on machine learning (ML) models. Secure aggregation is a computationally efficient cryptographic technique that enables multiple parties to compute an aggregate of their private data while keeping their individual inputs concealed from each other and from any central aggregator. To enhance communication efficiency in DL, sparsification techniques are used, selectively sharing only the most crucial parameters or gradients in a model, thereby maintaining efficiency without notably compromising accuracy. However, applying secure aggregation to sparsified models in DL is challenging due to the transmission of disjoint parameter sets by distinct nodes, which can prevent masks from canceling out effectively. This paper introduces CESAR, a novel secure aggregation protocol for DL designed to be compatible with existing sparsification mechanisms. CESAR provably defends against honest-but-curious adversaries and can be formally adapted to counteract collusion between them. We provide a foundational understanding of the interaction between the sparsification carried out by the nodes and the proportion of the parameters shared under CESAR in both colluding and non-colluding environments, offering analytical insight into the working and applicability of the protocol. Experiments on a network with 48 nodes in a 3-regular topology show that with random subsampling, CESAR is always within 0.5% accuracy of decentralized parallel stochastic gradient descent (D-PSGD), while adding only 11% of data overhead. Moreover, it surpasses the accuracy on TopK by up to 0.3% on independent and identically distributed (IID) data. | [
"['Sayan Biswas' 'Anne-Marie Kermarrec' 'Rafael Pires' 'Rishi Sharma'\n 'Milos Vujasinovic']"
]
|
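The secure-aggregation primitive CESAR builds on can be illustrated with classic pairwise masking: each pair of nodes shares a random mask that one adds and the other subtracts, hiding individual inputs while the masks cancel exactly in the sum. This sketch ignores sparsification, the very complication CESAR addresses (all nodes contribute the same full parameter vector), and the shared randomness is drawn locally rather than from agreed PRG seeds:

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, dim = 4, 6
inputs = rng.standard_normal((n_nodes, dim))   # each node's private vector

masked = inputs.copy()
for i in range(n_nodes):
    for j in range(i + 1, n_nodes):
        mask = rng.standard_normal(dim)        # pairwise shared randomness
        masked[i] += mask                      # node i adds the mask
        masked[j] -= mask                      # node j subtracts it

# Individual masked vectors reveal nothing useful, but masks cancel pairwise,
# so the aggregate equals the true sum of the private inputs.
aggregate = masked.sum(axis=0)
```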
null | null | 2405.07719 | null | null | http://arxiv.org/pdf/2405.07719v5 | 2024-07-02T09:03:26Z | 2024-05-13T13:08:02Z | USP: A Unified Sequence Parallelism Approach for Long Context Generative
AI | Sequence parallelism (SP), which divides the sequence dimension of input tensors across multiple computational devices, is becoming key to unlocking the long-context capabilities of generative AI models. This paper investigates the state-of-the-art SP approaches, i.e. DeepSpeed-Ulysses and Ring-Attention, and proposes a unified SP approach, which is more robust to transformer model architectures and network hardware topology. This paper compares the communication and memory cost of SP and existing parallelism, including data/tensor/zero/pipeline parallelism, and discusses the best practices for designing hybrid 4D parallelism involving SP. We achieved 47% MFU on two 8xA800 nodes using SP for the LLAMA3-8B model training using sequence length 208K. Our code is publicly available at https://github.com/feifeibear/long-context-attention. | [
"['Jiarui Fang' 'Shangchun Zhao']"
]
|
null | null | 2405.07735 | null | null | http://arxiv.org/pdf/2405.07735v2 | 2024-07-04T01:27:00Z | 2024-05-13T13:32:02Z | Federated Hierarchical Tensor Networks: a Collaborative Learning Quantum
AI-Driven Framework for Healthcare | Healthcare industries frequently handle sensitive and proprietary data, and due to strict privacy regulations, they are often reluctant to share data directly. In today's context, Federated Learning (FL) stands out as a crucial remedy, facilitating the rapid advancement of distributed machine learning while effectively managing critical concerns regarding data privacy and governance. The fusion of federated learning and quantum computing represents a groundbreaking interdisciplinary approach with immense potential to revolutionize various industries, from healthcare to finance. In this work, we proposed a federated learning framework based on quantum tensor networks, which leverages the principles of many-body quantum physics. Currently, there are no known classical tensor networks implemented in federated settings. Furthermore, we investigated the effectiveness and feasibility of the proposed framework by conducting a differential privacy analysis to ensure the security of sensitive data across healthcare institutions. Experiments on popular medical image datasets show that the federated quantum tensor network model achieved a mean receiver-operator characteristic area under the curve (ROC-AUC) between 0.91-0.98. Experimental results demonstrate that the quantum federated global model, consisting of highly entangled tensor network structures, showed better generalization and robustness and achieved higher testing accuracy, surpassing the performance of locally trained clients under unbalanced data distributions among healthcare institutions. | [
"['Amandeep Singh Bhatia' 'David E. Bernal Neira']"
]
|
null | null | 2405.07748 | null | null | http://arxiv.org/pdf/2405.07748v1 | 2024-05-13T13:46:02Z | 2024-05-13T13:46:02Z | Neural Network Compression for Reinforcement Learning Tasks | In real applications of Reinforcement Learning (RL), such as robotics, low latency and energy efficient inference is very desired. The use of sparsity and pruning for optimizing Neural Network inference, and particularly to improve energy and latency efficiency, is a standard technique. In this work, we perform a systematic investigation of applying these optimization techniques for different RL algorithms in different RL environments, yielding up to a 400-fold reduction in the size of neural networks. | [
"['Dmitry A. Ivanov' 'Denis A. Larionov' 'Oleg V. Maslennikov'\n 'Vladimir V. Voevodin']"
]
|
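Magnitude pruning, the kind of standard sparsification technique the study above applies to RL networks, can be sketched in a few lines (the sparsity target and layer shape are illustrative):

```python
import numpy as np

def prune_by_magnitude(weights, sparsity=0.9):
    """Zero out the smallest-magnitude fraction `sparsity` of the weights."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    # k-th smallest absolute value serves as the keep threshold.
    threshold = np.partition(flat, k)[k] if k < flat.size else np.inf
    return np.where(np.abs(weights) >= threshold, weights, 0.0)

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))          # one dense layer of a policy net
W_pruned = prune_by_magnitude(W, sparsity=0.9)
achieved_sparsity = (W_pruned == 0).mean()
```

In practice the zeroed weights are stored in a sparse format, which is where the latency and energy savings come from.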
null | null | 2405.07749 | null | null | http://arxiv.org/abs/2405.07749v1 | 2024-05-13T13:47:15Z | 2024-05-13T13:47:15Z | DeepHYDRA: Resource-Efficient Time-Series Anomaly Detection in
Dynamically-Configured Systems | Anomaly detection in distributed systems such as High-Performance Computing (HPC) clusters is vital not only for early fault detection, performance optimisation, security monitoring, and reliability in general, but also for operational insights. Deep Neural Networks have seen successful use in detecting long-term anomalies in multidimensional data, originating for instance from industrial or medical systems, or weather prediction. A downside of such methods is that they require a static input size, or lose data through cropping, sampling, or other dimensionality reduction methods, making deployment on systems with variability in monitored data channels, such as computing clusters, difficult. To address these problems, we present DeepHYDRA (Deep Hybrid DBSCAN/Reduction-Based Anomaly Detection), which combines DBSCAN and learning-based anomaly detection. DBSCAN clustering is used to find point anomalies in time-series data, mitigating the risk of missing outliers through loss of information when reducing input data to a fixed number of channels. A deep learning-based time-series anomaly detection method is then applied to the reduced data in order to identify long-term outliers. This hybrid approach reduces the chances of missing anomalies that might be made indistinguishable from normal data by the reduction process, and likewise enables the algorithm to be scalable and tolerate partial system failures while retaining its detection capabilities. Using a subset of the well-known SMD dataset family, a modified variant of the Eclipse dataset, as well as an in-house dataset with a large variability in active data channels, made publicly available with this work, we furthermore analyse computational intensity, memory footprint, and activation counts. DeepHYDRA is shown to reliably detect different types of anomalies in both large and complex datasets. | [
"['Franz Kevin Stehle' 'Wainer Vandelli' 'Giuseppe Avolio' 'Felix Zahn'\n 'Holger Fröning']"
]
|
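The point-anomaly stage of the hybrid approach can be illustrated with DBSCAN's noise criterion: a point is an outlier if its neighbourhood of radius `eps` contains fewer than `min_pts` members. Here a hand-rolled check on 1-D values stands in for a full DBSCAN implementation, and the series, injected anomalies, and parameters are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 6 * np.pi, 200)) + 0.05 * rng.standard_normal(200)
series[50] = 5.0    # injected point anomalies
series[120] = -4.0

def noise_points(values, eps=0.3, min_pts=5):
    """Indices whose eps-neighbourhood (in value space) is too sparse."""
    dists = np.abs(values[:, None] - values[None, :])
    counts = (dists <= eps).sum(axis=1)   # neighbour count, incl. the point itself
    return np.where(counts < min_pts)[0]

outliers = noise_points(series)
```

In DeepHYDRA's setting this cheap density check catches point outliers before the data are reduced to a fixed number of channels, where such spikes could otherwise be averaged away.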
null | null | 2405.07751 | null | null | http://arxiv.org/pdf/2405.07751v1 | 2024-05-13T13:50:44Z | 2024-05-13T13:50:44Z | Integrating supervised and unsupervised learning approaches to unveil
critical process inputs | This study introduces a machine learning framework tailored to large-scale industrial processes characterized by a plethora of numerical and categorical inputs. The framework aims to (i) discern critical parameters influencing the output and (ii) generate accurate out-of-sample qualitative and quantitative predictions of production outcomes. Specifically, we address the pivotal question of the significance of each input in shaping the process outcome, using an industrial Chemical Vapor Deposition (CVD) process as an example. The initial objective involves merging subject matter expertise and clustering techniques exclusively on the process output, here, coating thickness measurements at various positions in the reactor. This approach identifies groups of production runs that share similar qualitative characteristics, such as film mean thickness and standard deviation. In particular, the differences of the outcomes represented by the different clusters can be attributed to differences in specific inputs, indicating that these inputs are critical for the production outcome. Leveraging this insight, we subsequently implement supervised classification and regression methods using the identified critical process inputs. The proposed methodology proves to be valuable in scenarios with a multitude of inputs and insufficient data for the direct application of deep learning techniques, providing meaningful insights into the underlying processes. | [
"['Paris Papavasileiou' 'Dimitrios G. Giovanis' 'Gabriele Pozzetti'\n 'Martin Kathrein' 'Christoph Czettl' 'Ioannis G. Kevrekidis'\n 'Andreas G. Boudouvis' 'Stéphane P. A. Bordas' 'Eleni D. Koronaki']"
]
|
null | null | 2405.07760 | null | null | http://arxiv.org/pdf/2405.07760v1 | 2024-05-13T14:00:02Z | 2024-05-13T14:00:02Z | CAGES: Cost-Aware Gradient Entropy Search for Efficient Local
Multi-Fidelity Bayesian Optimization | Bayesian optimization (BO) is a popular approach for optimizing expensive-to-evaluate black-box objective functions. An important challenge in BO is its application to high-dimensional search spaces due in large part to the curse of dimensionality. One way to overcome this challenge is to focus on local BO methods that aim to efficiently learn gradients, which have shown strong empirical performance on a variety of high-dimensional problems including policy search in reinforcement learning (RL). However, current local BO methods assume access to only a single high-fidelity information source whereas, in many engineering and control problems, one has access to multiple cheaper approximations of the objective. We propose a novel algorithm, Cost-Aware Gradient Entropy Search (CAGES), for local BO of multi-fidelity black-box functions. CAGES makes no assumption about the relationship between different information sources, making it more flexible than other multi-fidelity methods. It also employs a new type of information-theoretic acquisition function, which enables systematic identification of samples that maximize the information gain about the unknown gradient per cost of the evaluation. We demonstrate CAGES can achieve significant performance improvements compared to other state-of-the-art methods on a variety of synthetic and benchmark RL problems. | [
"['Wei-Ting Tang' 'Joel A. Paulson']"
]
|
null | null | 2405.07761 | null | null | http://arxiv.org/pdf/2405.07761v1 | 2024-05-13T14:03:49Z | 2024-05-13T14:03:49Z | LLM4ED: Large Language Models for Automatic Equation Discovery | Equation discovery is aimed at directly extracting physical laws from data and has emerged as a pivotal research domain. Previous methods based on symbolic mathematics have achieved substantial advancements, but often require the design and implementation of complex algorithms. In this paper, we introduce a new framework that utilizes natural language-based prompts to guide large language models (LLMs) in automatically mining governing equations from data. Specifically, we first utilize the generation capability of LLMs to generate diverse equations in string form, and then evaluate the generated equations based on observations. In the optimization phase, we propose two alternately iterated strategies to optimize generated equations collaboratively. The first strategy is to take LLMs as a black-box optimizer and achieve equation self-improvement based on historical samples and their performance. The second strategy is to instruct LLMs to perform evolutionary operators for global search. Experiments are extensively conducted on both partial differential equations and ordinary differential equations. Results demonstrate that our framework can discover effective equations to reveal the underlying physical laws under various nonlinear dynamic systems. Further comparisons are made with state-of-the-art models, demonstrating good stability and usability. Our framework substantially lowers the barriers to learning and applying equation discovery techniques, demonstrating the application potential of LLMs in the field of knowledge discovery. | [
"['Mengge Du' 'Yuntian Chen' 'Zhongzheng Wang' 'Longfeng Nie'\n 'Dongxiao Zhang']"
]
|
null | null | 2405.07769 | null | null | http://arxiv.org/pdf/2405.07769v1 | 2024-05-13T14:12:33Z | 2024-05-13T14:12:33Z | $α$VIL: Learning to Leverage Auxiliary Tasks for Multitask Learning | Multitask Learning is a Machine Learning paradigm that aims to train a range of (usually related) tasks with the help of a shared model. While the goal is often to improve the joint performance of all training tasks, another approach is to focus on the performance of a specific target task, while treating the remaining ones as auxiliary data from which to possibly leverage positive transfer towards the target during training. In such settings, it becomes important to estimate the positive or negative influence auxiliary tasks will have on the target. While many ways have been proposed to estimate task weights before or during training, they typically rely on heuristics or extensive search of the weighting space. We propose a novel method called $\alpha$-Variable Importance Learning ($\alpha$VIL) that is able to adjust task weights dynamically during model training, by making direct use of task-specific updates of the underlying model's parameters between training epochs. Experiments indicate that $\alpha$VIL is able to outperform other Multitask Learning approaches in a variety of settings. To our knowledge, this is the first attempt at making direct use of model updates for task weight estimation. | [
"['Rafael Kourdis' 'Gabriel Gordon-Hall' 'Philip John Gorinski']"
]
|
null | null | 2405.07770 | null | null | http://arxiv.org/pdf/2405.07770v1 | 2024-05-13T14:14:12Z | 2024-05-13T14:14:12Z | Hype or Heuristic? Quantum Reinforcement Learning for Join Order
Optimisation | Identifying optimal join orders (JOs) stands out as a key challenge in database research and engineering. Owing to the large search space, established classical methods rely on approximations and heuristics. Recent efforts have successfully explored reinforcement learning (RL) for JO. Likewise, quantum versions of RL have received considerable scientific attention. Yet, it is an open question if they can achieve sustainable, overall practical advantages with improved quantum processors. In this paper, we present a novel approach that uses quantum reinforcement learning (QRL) for JO based on a hybrid variational quantum ansatz. It is able to handle general bushy join trees instead of resorting to simpler left-deep variants as compared to approaches based on quantum(-inspired) optimisation, yet requires multiple orders of magnitude fewer qubits, which is a scarce resource even for post-NISQ systems. Despite moderate circuit depth, the ansatz exceeds current NISQ capabilities, which requires an evaluation by numerical simulations. While QRL may not significantly outperform classical approaches in solving the JO problem with respect to result quality (albeit we see parity), we find a drastic reduction in required trainable parameters. This benefits practically relevant aspects ranging from shorter training times compared to classical RL and less involved classical optimisation passes to better use of available training data, and fits data-stream and low-latency processing scenarios. Our comprehensive evaluation and careful discussion delivers a balanced perspective on possible practical quantum advantage, provides insights for future systemic approaches, and allows for quantitatively assessing trade-offs of quantum approaches for one of the most crucial problems of database management systems. | [
"['Maja Franz' 'Tobias Winker' 'Sven Groppe' 'Wolfgang Mauerer']"
]
|
null | null | 2405.07780 | null | null | http://arxiv.org/pdf/2405.07780v1 | 2024-05-13T14:24:56Z | 2024-05-13T14:24:56Z | Harnessing Hierarchical Label Distribution Variations in Test Agnostic
Long-tail Recognition | This paper explores test-agnostic long-tail recognition, a challenging long-tail task where the test label distributions are unknown and arbitrarily imbalanced. We argue that the variation in these distributions can be broken down hierarchically into global and local levels. The global ones reflect a broad range of diversity, while the local ones typically arise from milder changes, often focused on a particular neighbor. Traditional methods predominantly use a Mixture-of-Expert (MoE) approach, targeting a few fixed test label distributions that exhibit substantial global variations. However, the local variations are left unconsidered. To address this issue, we propose a new MoE strategy, $\mathsf{DirMixE}$, which assigns experts to different Dirichlet meta-distributions of the label distribution, each targeting a specific aspect of local variations. Additionally, the diversity among these Dirichlet meta-distributions inherently captures global variations. This dual-level approach also leads to a more stable objective function, allowing us to sample different test distributions better to quantify the mean and variance of performance outcomes. Theoretically, we show that our proposed objective benefits from enhanced generalization by virtue of the variance-based regularization. Comprehensive experiments across multiple benchmarks confirm the effectiveness of $\mathsf{DirMixE}$. The code is available at \url{https://github.com/scongl/DirMixE}. | [
"['Zhiyong Yang' 'Qianqian Xu' 'Zitai Wang' 'Sicong Li' 'Boyu Han'\n 'Shilong Bao' 'Xiaochun Cao' 'Qingming Huang']"
]
|
null | null | 2405.07790 | null | null | http://arxiv.org/pdf/2405.07790v1 | 2024-05-13T14:36:22Z | 2024-05-13T14:36:22Z | Hamiltonian-based Quantum Reinforcement Learning for Neural
Combinatorial Optimization | Advancements in Quantum Computing (QC) and Neural Combinatorial Optimization (NCO) represent promising steps in tackling complex computational challenges. On the one hand, Variational Quantum Algorithms such as QAOA can be used to solve a wide range of combinatorial optimization problems. On the other hand, the same class of problems can be solved by NCO, a method that has shown promising results, particularly since the introduction of Graph Neural Networks. Given recent advances in both research areas, we introduce Hamiltonian-based Quantum Reinforcement Learning (QRL), an approach at the intersection of QC and NCO. We model our ansatzes directly on the combinatorial optimization problem's Hamiltonian formulation, which allows us to apply our approach to a broad class of problems. Our ansatzes show favourable trainability properties when compared to the hardware efficient ansatzes, while also not being limited to graph-based problems, unlike previous works. In this work, we evaluate the performance of Hamiltonian-based QRL on a diverse set of combinatorial optimization problems to demonstrate the broad applicability of our approach and compare it to QAOA. | [
"['Georg Kruse' 'Rodrigo Coehlo' 'Andreas Rosskopf' 'Robert Wille'\n 'Jeanette Miriam Lorenz']"
]
|
null | null | 2405.07791 | null | null | http://arxiv.org/abs/2405.07791v2 | 2024-07-06T03:51:42Z | 2024-05-13T14:37:03Z | Decentralized Kernel Ridge Regression Based on Data-Dependent Random
Feature | Random feature (RF) has been widely used for node consistency in decentralized kernel ridge regression (KRR). Currently, the consistency is guaranteed by imposing constraints on coefficients of features, necessitating that the random features on different nodes are identical. However, in many applications, data on different nodes varies significantly in number or distribution, which calls for adaptive and data-dependent methods that generate different RFs. To tackle the essential difficulty, we propose a new decentralized KRR algorithm that pursues consensus on decision functions, which allows great flexibility and adapts well to the data on each node. The convergence is rigorously established and the effectiveness is numerically verified: by capturing the characteristics of the data on each node, while maintaining the same communication costs as other methods, we achieved an average regression accuracy improvement of 25.5% across six real-world data sets. | [
"['Ruikai Yang' 'Fan He' 'Mingzhen He' 'Jie Yang' 'Xiaolin Huang']"
]
|
null | null | 2405.07792 | null | null | http://arxiv.org/pdf/2405.07792v1 | 2024-05-13T14:38:35Z | 2024-05-13T14:38:35Z | Optimal Matrix Sketching over Sliding Windows | Matrix sketching, aimed at approximating a matrix $\boldsymbol{A} \in \mathbb{R}^{N\times d}$ consisting of vector streams of length $N$ with a smaller sketching matrix $\boldsymbol{B} \in \mathbb{R}^{\ell\times d}, \ell \ll N$, has garnered increasing attention in fields such as large-scale data analytics and machine learning. A well-known deterministic matrix sketching method is the Frequent Directions algorithm, which achieves the optimal $O\left(\frac{d}{\varepsilon}\right)$ space bound and provides a covariance error guarantee of $\varepsilon = \lVert \boldsymbol{A}^\top \boldsymbol{A} - \boldsymbol{B}^\top \boldsymbol{B} \rVert_2/\lVert \boldsymbol{A} \rVert_F^2$. The matrix sketching problem becomes particularly interesting in the context of sliding windows, where the goal is to approximate the matrix $\boldsymbol{A}_W$, formed by input vectors over the most recent $N$ time units. However, despite recent efforts, whether achieving the optimal $O\left(\frac{d}{\varepsilon}\right)$ space bound on sliding windows is possible has remained an open question. In this paper, we introduce the DS-FD algorithm, which achieves the optimal $O\left(\frac{d}{\varepsilon}\right)$ space bound for matrix sketching over row-normalized, sequence-based sliding windows. We also present matching upper and lower space bounds for time-based and unnormalized sliding windows, demonstrating the generality and optimality of DS-FD across various sliding window models. This conclusively answers the open question regarding the optimal space bound for matrix sketching over sliding windows. Furthermore, we conduct extensive experiments with both synthetic and real-world datasets, validating our theoretical claims and thus confirming the correctness and effectiveness of our algorithm, both theoretically and empirically. | [
"['Hanyan Yin' 'Dongxie Wen' 'Jiajun Li' 'Zhewei Wei' 'Xiao Zhang'\n 'Zengfeng Huang' 'Feifei Li']"
]
|
null | null | 2405.07795 | null | null | http://arxiv.org/pdf/2405.07795v1 | 2024-05-13T14:41:28Z | 2024-05-13T14:41:28Z | Improved Bound for Robust Causal Bandits with Linear Models | This paper investigates the robustness of causal bandits (CBs) in the face of temporal model fluctuations. This setting deviates from the existing literature's widely-adopted assumption of constant causal models. The focus is on causal systems with linear structural equation models (SEMs). The SEMs and the time-varying pre- and post-interventional statistical models are all unknown and subject to variations over time. The goal is to design a sequence of interventions that incur the smallest cumulative regret compared to an oracle aware of the entire causal model and its fluctuations. A robust CB algorithm is proposed, and its cumulative regret is analyzed by establishing both upper and lower bounds on the regret. It is shown that in a graph with maximum in-degree $d$, length of the largest causal path $L$, and an aggregate model deviation $C$, the regret is upper bounded by $\tilde{\mathcal{O}}(d^{L-\frac{1}{2}}(\sqrt{T} + C))$ and lower bounded by $\Omega(d^{\frac{L}{2}-2}\max\{\sqrt{T},\, d^2C\})$. The proposed algorithm achieves nearly optimal $\tilde{\mathcal{O}}(\sqrt{T})$ regret when $C$ is $o(\sqrt{T})$, maintaining sub-linear regret for a broad range of $C$. | [
"['Zirui Yan' 'Arpan Mukherjee' 'Burak Varıcı' 'Ali Tajer']"
]
|
null | null | 2405.07800 | null | null | http://arxiv.org/pdf/2405.07800v2 | 2024-07-09T13:54:24Z | 2024-05-13T14:44:02Z | Data Imputation by Pursuing Better Classification: A Supervised
Kernel-Based Method | Data imputation, the process of filling in missing feature elements for incomplete data sets, plays a crucial role in data-driven learning. A fundamental belief is that data imputation is helpful for learning performance, and it follows that the pursuit of better classification can guide the data imputation process. While some works consider using label information to assist in this task, their simplistic utilization of labels lacks flexibility and may rely on strict assumptions. In this paper, we propose a new framework that effectively leverages supervision information to complete missing data in a manner conducive to classification. Specifically, this framework operates in two stages. Firstly, it leverages labels to supervise the optimization of similarity relationships among data, represented by the kernel matrix, with the goal of enhancing classification accuracy. To mitigate overfitting that may occur during this process, a perturbation variable is introduced to improve the robustness of the framework. Secondly, the learned kernel matrix serves as additional supervision information to guide data imputation through regression, utilizing the block coordinate descent method. The superiority of the proposed method is evaluated on four real-world data sets by comparing it with state-of-the-art imputation methods. Remarkably, our algorithm significantly outperforms other methods when the data is missing more than 60% of the features. | [
"['Ruikai Yang' 'Fan He' 'Mingzhen He' 'Kaijie Wang' 'Xiaolin Huang']"
]
|
null | null | 2405.07813 | null | null | http://arxiv.org/pdf/2405.07813v1 | 2024-05-13T14:54:37Z | 2024-05-13T14:54:37Z | Localizing Task Information for Improved Model Merging and Compression | Model merging and task arithmetic have emerged as promising scalable approaches to merge multiple single-task checkpoints to one multi-task model, but their applicability is reduced by significant performance loss. Previous works have linked these drops to interference in the weight space and erasure of important task-specific features. Instead, in this work we show that the information required to solve each task is still preserved after merging as different tasks mostly use non-overlapping sets of weights. We propose TALL-masks, a method to identify these task supports given a collection of task vectors and show that one can retrieve >99% of the single task accuracy by applying our masks to the multi-task vector, effectively compressing the individual checkpoints. We study the statistics of intersections among constructed masks and reveal the existence of selfish and catastrophic weights, i.e., parameters that are important exclusively to one task and irrelevant to all tasks but detrimental to multi-task fusion. For this reason, we propose Consensus Merging, an algorithm that eliminates such weights and improves the general performance of existing model merging approaches. Our experiments in vision and NLP benchmarks with up to 20 tasks, show that Consensus Merging consistently improves existing approaches. Furthermore, our proposed compression scheme reduces storage from 57Gb to 8.2Gb while retaining 99.7% of original performance. | [
"['Ke Wang' 'Nikolaos Dimitriadis' 'Guillermo Ortiz-Jimenez'\n 'François Fleuret' 'Pascal Frossard']"
]
|
null | null | 2405.07816 | null | null | http://arxiv.org/pdf/2405.07816v1 | 2024-05-13T14:58:57Z | 2024-05-13T14:58:57Z | Quick and Accurate Affordance Learning | Infants learn actively in their environments, shaping their own learning curricula. They learn about their environments' affordances, that is, how local circumstances determine how their behavior can affect the environment. Here we model this type of behavior by means of a deep learning architecture. The architecture mediates between global cognitive map exploration and local affordance learning. Inference processes actively move the simulated agent towards regions where they expect affordance-related knowledge gain. We contrast three measures of uncertainty to guide this exploration: predicted uncertainty of a model, standard deviation between the means of several models (SD), and the Jensen-Shannon Divergence (JSD) between several models. We show that the first measure gets fooled by aleatoric uncertainty inherent in the environment, while the two other measures focus learning on epistemic uncertainty. JSD exhibits the most balanced exploration strategy. From a computational perspective, our model suggests three key ingredients for coordinating the active generation of learning curricula: (1) Navigation behavior needs to be coordinated with local motor behavior for enabling active affordance learning. (2) Affordances need to be encoded locally for acquiring generalized knowledge. (3) Effective active affordance learning mechanisms should use density comparison techniques for estimating expected knowledge gain. Future work may seek collaborations with developmental psychology to model active play in children in more realistic scenarios. | [
"['Fedor Scholz' 'Erik Ayari' 'Johannes Bertram' 'Martin V. Butz']"
]
|
null | null | 2405.07822 | null | null | http://arxiv.org/pdf/2405.07822v1 | 2024-05-13T15:07:52Z | 2024-05-13T15:07:52Z | Synthetic Tabular Data Validation: A Divergence-Based Approach | The ever-increasing use of generative models in various fields where tabular data is used highlights the need for robust and standardized validation metrics to assess the similarity between real and synthetic data. Current methods lack a unified framework and rely on diverse and often inconclusive statistical measures. Divergences, which quantify discrepancies between data distributions, offer a promising avenue for validation. However, traditional approaches calculate divergences independently for each feature due to the complexity of joint distribution modeling. This paper addresses this challenge by proposing a novel approach that uses divergence estimation to overcome the limitations of marginal comparisons. Our core contribution lies in applying a divergence estimator to build a validation metric considering the joint distribution of real and synthetic data. We leverage a probabilistic classifier to approximate the density ratio between datasets, allowing the capture of complex relationships. We specifically calculate two divergences: the well-known Kullback-Leibler (KL) divergence and the Jensen-Shannon (JS) divergence. KL divergence offers an established use in the field, while JS divergence is symmetric and bounded, providing a reliable metric. The efficacy of this approach is demonstrated through a series of experiments with varying distribution complexities. The initial phase involves comparing estimated divergences with analytical solutions for simple distributions, setting a benchmark for accuracy. Finally, we validate our method on a real-world dataset and its corresponding synthetic counterpart, showcasing its effectiveness in practical applications. 
This research offers a significant contribution with applicability beyond tabular data and the potential to improve synthetic data validation in various fields. | [
"['Patricia A. Apellániz' 'Ana Jiménez' 'Borja Arroyo Galende'\n 'Juan Parras' 'Santiago Zazo']"
]
|
null | null | 2405.07823 | null | null | http://arxiv.org/pdf/2405.07823v1 | 2024-05-13T15:08:02Z | 2024-05-13T15:08:02Z | Integrating Multi-Physics Simulations and Machine Learning to Define the
Spatter Mechanism and Process Window in Laser Powder Bed Fusion | Laser powder bed fusion (LPBF) has shown promise for a wide range of applications due to its ability to fabricate freeform geometries and generate a controlled microstructure. However, components generated by LPBF still possess sub-optimal mechanical properties due to the defects that are created during laser-material interactions. In this work, we investigate the mechanism of spatter formation, using a high-fidelity modelling tool that was built to simulate the multi-physics phenomena in LPBF. The modelling tool has the capability to capture the 3D resolution of the meltpool and the spatter behavior. To understand spatter behavior and formation, we reveal its properties at ejection and evaluate its variation from the meltpool, the source where it is formed. The dataset of the spatter and the meltpool collected consists of 50 % spatter and 50 % melt pool samples, with features that include position components, velocity components, velocity magnitude, temperature, density and pressure. The relationship between the spatter and the meltpool was evaluated via correlation analysis and machine learning (ML) algorithms for classification tasks. Upon screening different ML algorithms on the dataset, a high accuracy was observed for all the ML models, with ExtraTrees having the highest at 96 % and KNN having the lowest at 94 %. | [
"['Olabode T. Ajenifujah' 'Francis Ogoke' 'Florian Wirth' 'Jack Beuth'\n 'Amir Barati Farimani']"
]
|
null | null | 2405.07836 | null | null | http://arxiv.org/pdf/2405.07836v2 | 2024-05-17T14:26:18Z | 2024-05-13T15:22:15Z | Forecasting with Hyper-Trees | This paper introduces the concept of Hyper-Trees and offers a new direction in applying tree-based models to time series data. Unlike conventional applications of decision trees that forecast time series directly, Hyper-Trees are designed to learn the parameters of a target time series model. Our framework leverages the gradient-based nature of boosted trees, which allows us to extend the concept of Hyper-Networks to Hyper-Trees and to induce a time-series inductive bias to tree models. By relating the parameters of a target time series model to features, Hyper-Trees address the issue of parameter non-stationarity and enable tree-based forecasts to extend beyond their training range. With our research, we aim to explore the effectiveness of Hyper-Trees across various forecasting scenarios and to extend the application of gradient boosted decision trees outside their conventional use in time series modeling. | [
"['Alexander März' 'Kashif Rasul']"
]
|
null | null | 2405.07838 | null | null | http://arxiv.org/pdf/2405.07838v1 | 2024-05-13T15:24:27Z | 2024-05-13T15:24:27Z | Adaptive Exploration for Data-Efficient General Value Function
Evaluations | General Value Functions (GVFs) (Sutton et al., 2011) are an established way to represent predictive knowledge in reinforcement learning. Each GVF computes the expected return for a given policy, based on a unique pseudo-reward. Multiple GVFs can be estimated in parallel using off-policy learning from a single stream of data, often sourced from a fixed behavior policy or pre-collected dataset. This leaves an open question: how can the behavior policy be chosen for data-efficient GVF learning? To address this gap, we propose GVFExplorer, which aims at learning a behavior policy that efficiently gathers data for evaluating multiple GVFs in parallel. This behavior policy selects actions in proportion to the total variance in the return across all GVFs, reducing the number of environmental interactions. To enable accurate variance estimation, we use a recently proposed temporal-difference-style variance estimator. We prove that each behavior policy update reduces the mean squared error in the summed predictions over all GVFs. We empirically demonstrate our method's performance in both tabular representations and nonlinear function approximation. | [
"['Arushi Jain' 'Josiah P. Hanna' 'Doina Precup']"
]
|
null | null | 2405.07839 | null | null | http://arxiv.org/pdf/2405.07839v2 | 2024-06-03T13:48:52Z | 2024-05-13T15:25:03Z | Constrained Exploration via Reflected Replica Exchange Stochastic
Gradient Langevin Dynamics | Replica exchange stochastic gradient Langevin dynamics (reSGLD) is an effective sampler for non-convex learning in large-scale datasets. However, the simulation may encounter stagnation issues when the high-temperature chain delves too deeply into the distribution tails. To tackle this issue, we propose reflected reSGLD (r2SGLD): an algorithm tailored for constrained non-convex exploration by utilizing reflection steps within a bounded domain. Theoretically, we observe that reducing the diameter of the domain enhances mixing rates, exhibiting a $\textit{quadratic}$ behavior. Empirically, we test its performance through extensive experiments, including identifying dynamical systems with physical constraints, simulations of constrained multi-modal distributions, and image classification tasks. The theoretical and empirical findings highlight the crucial role of constrained exploration in improving the simulation efficiency. | [
"['Haoyang Zheng' 'Hengrong Du' 'Qi Feng' 'Wei Deng' 'Guang Lin']"
]
|
null | null | 2405.07841 | null | null | http://arxiv.org/pdf/2405.07841v1 | 2024-05-13T15:30:35Z | 2024-05-13T15:30:35Z | Sample Selection Bias in Machine Learning for Healthcare | While machine learning algorithms hold promise for personalised medicine, their clinical adoption remains limited. One critical factor contributing to this restraint is sample selection bias (SSB) which refers to the study population being less representative of the target population, leading to biased and potentially harmful decisions. Despite being well-known in the literature, SSB remains scarcely studied in machine learning for healthcare. Moreover, the existing techniques try to correct the bias by balancing distributions between the study and the target populations, which may result in a loss of predictive performance. To address these problems, our study illustrates the potential risks associated with SSB by examining SSB's impact on the performance of machine learning algorithms. Most importantly, we propose a new research direction for addressing SSB, based on the target population identification rather than the bias correction. Specifically, we propose two independent networks (T-Net) and a multitasking network (MT-Net) for addressing SSB, where one network/task identifies the target subpopulation which is representative of the study population and the second makes predictions for the identified subpopulation. Our empirical results with synthetic and semi-synthetic datasets highlight that SSB can lead to a large drop in the performance of an algorithm for the target population as compared with the study population, as well as a substantial difference in the performance for the target subpopulations that are representative of the selected and the non-selected patients from the study population. 
Furthermore, our proposed techniques demonstrate robustness across various settings, including different dataset sizes, event rates, and selection rates, outperforming the existing bias correction techniques. | [
"['Vinod Kumar Chauhan' 'Lei Clifton' 'Achille Salaün' 'Huiqi Yvonne Lu'\n 'Kim Branson' 'Patrick Schwab' 'Gaurav Nigam' 'David A. Clifton']"
]
|
null | null | 2405.07863 | null | null | http://arxiv.org/pdf/2405.07863v2 | 2024-06-12T04:40:53Z | 2024-05-13T15:50:39Z | RLHF Workflow: From Reward Modeling to Online RLHF | We present the workflow of Online Iterative Reinforcement Learning from Human Feedback (RLHF) in this technical report, which is widely reported to outperform its offline counterpart by a large margin in the recent large language model (LLM) literature. However, existing open-source RLHF projects are still largely confined to the offline learning setting. In this technical report, we aim to fill in this gap and provide a detailed recipe that is easy to reproduce for online iterative RLHF. In particular, since online human feedback is usually infeasible for open-source communities with limited resources, we start by constructing preference models using a diverse set of open-source datasets and use the constructed proxy preference model to approximate human feedback. Then, we discuss the theoretical insights and algorithmic principles behind online iterative RLHF, followed by a detailed practical implementation. Our trained LLM, LLaMA-3-8B-SFR-Iterative-DPO-R, achieves impressive performance on LLM chatbot benchmarks, including AlpacaEval-2, Arena-Hard, and MT-Bench, as well as other academic benchmarks such as HumanEval and TruthfulQA. We have shown that supervised fine-tuning (SFT) and iterative RLHF can obtain state-of-the-art performance with fully open-source datasets. Further, we have made our models, curated datasets, and comprehensive step-by-step code guidebooks publicly available. Please refer to https://github.com/RLHFlow/RLHF-Reward-Modeling and https://github.com/RLHFlow/Online-RLHF for more detailed information. | [
"['Hanze Dong' 'Wei Xiong' 'Bo Pang' 'Haoxiang Wang' 'Han Zhao'\n 'Yingbo Zhou' 'Nan Jiang' 'Doyen Sahoo' 'Caiming Xiong' 'Tong Zhang']"
]
|
null | null | 2405.07879 | null | null | http://arxiv.org/pdf/2405.07879v1 | 2024-05-13T16:09:29Z | 2024-05-13T16:09:29Z | On the Relation Between Autoencoders and Non-negative Matrix
Factorization, and Their Application for Mutational Signature Extraction | The aim of this study is to provide a foundation to understand the relationship between non-negative matrix factorization (NMF) and non-negative autoencoders enabling proper interpretation and understanding of autoencoder-based alternatives to NMF. Since its introduction, NMF has been a popular tool for extracting interpretable, low-dimensional representations of high-dimensional data. However, recently, several studies have proposed to replace NMF with autoencoders. This increasing popularity of autoencoders warrants an investigation on whether this replacement is in general valid and reasonable. Moreover, the exact relationship between non-negative autoencoders and NMF has not been thoroughly explored. Thus, a main aim of this study is to investigate in detail the relationship between non-negative autoencoders and NMF. We find that the connection between the two models can be established through convex NMF, which is a restricted case of NMF. In particular, convex NMF is a special case of an autoencoder. The performance of NMF and autoencoders is compared within the context of extraction of mutational signatures from cancer genomics data. We find that the reconstructions based on NMF are more accurate compared to autoencoders, while the signatures extracted using both methods show comparable consistencies and values when externally validated. These findings suggest that the non-negative autoencoders investigated in this article do not provide an improvement of NMF in the field of mutational signature extraction. | [
"['Ida Egendal' 'Rasmus Froberg Brøndum' 'Marta Pelizzola' 'Asger Hobolth'\n 'Martin Bøgsted']"
]
|
null | null | 2405.07884 | null | null | http://arxiv.org/pdf/2405.07884v2 | 2024-05-23T19:41:32Z | 2024-05-13T16:17:57Z | Lai Loss: A Novel Loss for Gradient Control | In the field of machine learning, traditional regularization methods tend to directly add regularization terms to the loss function. This paper introduces the "Lai loss", a novel loss design that integrates the regularization terms (specifically, gradients) into the traditional loss function through straightforward geometric concepts. This design penalizes the gradients with the loss itself, allowing for control of the gradients while ensuring maximum accuracy. With this loss, we can effectively control the model's smoothness and sensitivity, potentially offering the dual benefits of improving the model's generalization performance and enhancing its noise resistance on specific features. Additionally, we proposed a training method that successfully addresses the challenges in practical applications. We conducted preliminary experiments using publicly available datasets from Kaggle, demonstrating that the design of Lai loss can control the model's smoothness and sensitivity while maintaining stable model performance. | [
"['YuFei Lai']"
]
|
null | null | 2405.07892 | null | null | http://arxiv.org/pdf/2405.07892v1 | 2024-05-13T16:27:12Z | 2024-05-13T16:27:12Z | All Nodes are created Not Equal: Node-Specific Layer Aggregation and
Filtration for GNN | The ever-growing family of Graph Neural Networks, though opening a promising path for modeling graph-structured data, unfortunately introduces two daunting obstacles to their deployment on devices. (I) Most existing GNNs are shallow, due mostly to the over-smoothing and gradient-vanishing problems they encounter as they go deeper as convolutional architectures. (II) The vast majority of GNNs adhere to the homophily assumption, where the central node and its adjacent nodes share the same label. This assumption often poses challenges for many GNNs working with heterophilic graphs. Addressing the aforementioned issues has become a looming challenge in enhancing the robustness and scalability of GNN applications. In this paper, we take a comprehensive and systematic approach to overcoming the two aforementioned challenges for the first time. We propose a Node-Specific Layer Aggregation and Filtration architecture, termed NoSAF, a framework capable of filtering and processing information from each individual node. NoSAF introduces the concept of "All Nodes are Created Not Equal" into every layer of deep networks, aiming to provide a reliable information filter for each layer's nodes to sieve out information beneficial for the subsequent layer. By incorporating a dynamically updated codebank, NoSAF dynamically optimizes the information passed downwards at each layer. This effectively overcomes heterophilic issues and aids in deepening the network. To compensate for the information loss caused by the continuous filtering in NoSAF, we also propose NoSAF-D (Deep), which incorporates a compensation mechanism that replenishes information in every layer of the model, allowing NoSAF to perform meaningful computations even in very deep layers. | [
"['Shilong Wang' 'Hao Wu' 'Yifan Duan' 'Guibin Zhang' 'Guohao Li'\n 'Yuxuan Liang' 'Shirui Pan' 'Kun Wang' 'Yang Wang']"
]
|
null | null | 2405.07896 | null | null | http://arxiv.org/pdf/2405.07896v2 | 2024-05-14T20:25:23Z | 2024-04-30T22:55:27Z | Almanac Copilot: Towards Autonomous Electronic Health Record Navigation | Clinicians spend large amounts of time on clinical documentation, and inefficiencies impact quality of care and increase clinician burnout. Despite the promise of electronic medical records (EMR), the transition from paper-based records has been negatively associated with clinician wellness, in part due to poor user experience, increased burden of documentation, and alert fatigue. In this study, we present Almanac Copilot, an autonomous agent capable of assisting clinicians with EMR-specific tasks such as information retrieval and order placement. On EHR-QA, a synthetic evaluation dataset of 300 common EHR queries based on real patient data, Almanac Copilot obtains a successful task completion rate of 74% (n = 221 tasks) with a mean score of 2.45 over 3 (95% CI:2.34-2.56). By automating routine tasks and streamlining the documentation process, our findings highlight the significant potential of autonomous agents to mitigate the cognitive load imposed on clinicians by current EMR systems. | [
"['Cyril Zakka' 'Joseph Cho' 'Gracia Fahed' 'Rohan Shad' 'Michael Moor'\n 'Robyn Fong' 'Dhamanpreet Kaur' 'Vishnu Ravi' 'Oliver Aalami'\n 'Roxana Daneshjou' 'Akshay Chaudhari' 'William Hiesinger']"
]
|
null | null | 2405.07914 | null | null | http://arxiv.org/pdf/2405.07914v1 | 2024-05-13T16:47:05Z | 2024-05-13T16:47:05Z | Distribution Learning Meets Graph Structure Sampling | This work establishes a novel link between the problem of PAC-learning high-dimensional graphical models and the task of (efficient) counting and sampling of graph structures, using an online learning framework. We observe that if we apply the exponentially weighted average (EWA) or randomized weighted majority (RWM) forecasters on a sequence of samples from a distribution P using the log loss function, the average regret incurred by the forecaster's predictions can be used to bound the expected KL divergence between P and the predictions. Known regret bounds for EWA and RWM then yield new sample complexity bounds for learning Bayes nets. Moreover, these algorithms can be made computationally efficient for several interesting classes of Bayes nets. Specifically, we give a new sample-optimal and polynomial time learning algorithm with respect to trees of unknown structure and the first polynomial sample and time algorithm for learning with respect to Bayes nets over a given chordal skeleton. | [
"['Arnab Bhattacharyya' 'Sutanu Gayen' 'Philips George John' 'Sayantan Sen'\n 'N. V. Vinodchandran']"
]
|
null | null | 2405.07916 | null | null | http://arxiv.org/pdf/2405.07916v1 | 2024-05-13T16:47:53Z | 2024-05-13T16:47:53Z | IMAFD: An Interpretable Multi-stage Approach to Flood Detection from
time series Multispectral Data | In this paper, we address two critical challenges in the domain of flood detection: the computational expense of large-scale time series change detection and the lack of interpretable decision-making processes in explainable AI (XAI). To overcome these challenges, we propose an interpretable multi-stage approach to flood detection, IMAFD. It provides an automatic, efficient and interpretable solution suitable for large-scale remote sensing tasks and offers insight into the decision-making process. The proposed IMAFD approach combines the analysis of dynamic time series image sequences, to identify images with possible flooding, with static, within-image semantic segmentation. It combines anomaly detection (at both image and pixel level) with semantic segmentation. The flood detection problem is addressed through four stages: (1) at a sequence level: identifying the suspected images; (2) at a multi-image level: detecting change within suspected images; (3) at an image level: semantic segmentation of images into Land, Water or Cloud class; (4) decision making. Our contributions are twofold. First, we efficiently reduced the number of frames to be processed for dense change detection by providing a multi-stage holistic approach to flood detection. Second, the proposed semantic change detection method (stage 3) provides human users with an interpretable decision-making process, while most explainable AI (XAI) methods provide only post hoc explanations. The evaluation of the proposed IMAFD framework was performed on three datasets: WorldFloods, RavAEn and MediaEval. For all the above datasets, the proposed framework demonstrates competitive performance compared to other methods, while also offering interpretability and insight. | [
"['Ziyang Zhang' 'Plamen Angelov' 'Dmitry Kangin' 'Nicolas Longépé']"
]
|
null | null | 2405.07925 | null | null | http://arxiv.org/pdf/2405.07925v1 | 2024-05-13T16:57:48Z | 2024-05-13T16:57:48Z | Stable Diffusion-based Data Augmentation for Federated Learning with
Non-IID Data | The proliferation of edge devices has brought Federated Learning (FL) to the forefront as a promising paradigm for decentralized and collaborative model training while preserving the privacy of clients' data. However, FL struggles with a significant performance reduction and poor convergence when confronted with Non-Independent and Identically Distributed (Non-IID) data distributions among participating clients. While previous efforts, such as client drift mitigation and advanced server-side model fusion techniques, have shown some success in addressing this challenge, they often overlook the root cause of the performance reduction - the absence of identical data accurately mirroring the global data distribution among clients. In this paper, we introduce Gen-FedSD, a novel approach that harnesses the powerful capability of state-of-the-art text-to-image foundation models to bridge the significant Non-IID performance gaps in FL. In Gen-FedSD, each client constructs textual prompts for each class label and leverages an off-the-shelf state-of-the-art pre-trained Stable Diffusion model to synthesize high-quality data samples. The generated synthetic data is tailored to each client's unique local data gaps and distribution disparities, effectively making the final augmented local data IID. Through extensive experimentation, we demonstrate that Gen-FedSD achieves state-of-the-art performance and significant communication cost savings across various datasets and Non-IID settings. | [
"['Mahdi Morafah' 'Matthias Reisser' 'Bill Lin' 'Christos Louizos']"
]
|
null | null | 2405.07930 | null | null | http://arxiv.org/pdf/2405.07930v1 | 2024-05-13T17:01:28Z | 2024-05-13T17:01:28Z | Improving Multimodal Learning with Multi-Loss Gradient Modulation | Learning from multiple modalities, such as audio and video, offers opportunities for leveraging complementary information, enhancing robustness, and improving contextual understanding and performance. However, combining such modalities presents challenges, especially when modalities differ in data structure, predictive contribution, and the complexity of their learning processes. It has been observed that one modality can potentially dominate the learning process, hindering the effective utilization of information from other modalities and leading to sub-optimal model performance. To address this issue the vast majority of previous works suggest to assess the unimodal contributions and dynamically adjust the training to equalize them. We improve upon previous work by introducing a multi-loss objective and further refining the balancing process, allowing it to dynamically adjust the learning pace of each modality in both directions, acceleration and deceleration, with the ability to phase out balancing effects upon convergence. We achieve superior results across three audio-video datasets: on CREMA-D, models with ResNet backbone encoders surpass the previous best by 1.9% to 12.4%, and Conformer backbone models deliver improvements ranging from 2.8% to 14.1% across different fusion methods. On AVE, improvements range from 2.7% to 7.7%, while on UCF101, gains reach up to 6.1%. | [
"['Konstantinos Kontras' 'Christos Chatzichristos' 'Matthew Blaschko'\n 'Maarten De Vos']"
]
|
null | null | 2405.07937 | null | null | http://arxiv.org/pdf/2405.07937v2 | 2024-06-10T15:42:50Z | 2024-05-13T17:13:32Z | Active Learning with Simple Questions | We consider an active learning setting where a learner is presented with a pool S of n unlabeled examples belonging to a domain X and asks queries to find the underlying labeling that agrees with a target concept h^* in H. In contrast to traditional active learning that queries a single example for its label, we study more general region queries that allow the learner to pick a subset of the domain T subset X and a target label y and ask a labeler whether h^*(x) = y for every example in the set T cap S. Such more powerful queries allow us to bypass the limitations of traditional active learning and use significantly fewer rounds of interactions to learn but can potentially lead to a significantly more complex query language. Our main contribution is quantifying the trade-off between the number of queries and the complexity of the query language used by the learner. We measure the complexity of the region queries via the VC dimension of the family of regions. We show that given any hypothesis class H with VC dimension d, one can design a region query family Q with VC dimension O(d) such that for every set of n examples S subset X and every h^* in H, a learner can submit O(d log n) queries from Q to a labeler and perfectly label S. We show a matching lower bound by designing a hypothesis class H with VC dimension d and a dataset S subset X of size n such that any learning algorithm using any query class with VC dimension less than O(d) must make poly(n) queries to label S perfectly. Finally, we focus on well-studied hypothesis classes including unions of intervals, high-dimensional boxes, and d-dimensional halfspaces, and obtain stronger results. 
In particular, we design learning algorithms that (i) are computationally efficient and (ii) work even when the queries are not answered based on the learner's pool of examples S but on some unknown superset L of S. | [
"['Vasilis Kontonis' 'Mingchen Ma' 'Christos Tzamos']"
]
|
null | null | 2405.07943 | null | null | http://arxiv.org/pdf/2405.07943v1 | 2024-05-13T17:18:08Z | 2024-05-13T17:18:08Z | Hierarchical Decision Mamba | Recent advancements in imitation learning have been largely fueled by the integration of sequence models, which provide a structured flow of information to effectively mimic task behaviours. Currently, Decision Transformer (DT) and subsequently, the Hierarchical Decision Transformer (HDT), presented Transformer-based approaches to learn task policies. Recently, the Mamba architecture has shown to outperform Transformers across various task domains. In this work, we introduce two novel methods, Decision Mamba (DM) and Hierarchical Decision Mamba (HDM), aimed at enhancing the performance of the Transformer models. Through extensive experimentation across diverse environments such as OpenAI Gym and D4RL, leveraging varying demonstration data sets, we demonstrate the superiority of Mamba models over their Transformer counterparts in a majority of tasks. Results show that HDM outperforms other methods in most settings. The code can be found at https://github.com/meowatthemoon/HierarchicalDecisionMamba. | [
"['André Correia' 'Luís A. Alexandre']"
]
|
null | null | 2405.07965 | null | null | http://arxiv.org/pdf/2405.07965v2 | 2024-05-20T21:49:14Z | 2024-05-13T17:46:30Z | Fast Computation of Superquantile-Constrained Optimization Through
Implicit Scenario Reduction | Superquantiles have recently gained significant interest as a risk-aware metric for addressing fairness and distribution shifts in statistical learning and decision making problems. This paper introduces a fast, scalable and robust second-order computational framework to solve large-scale optimization problems with superquantile-based constraints. Unlike empirical risk minimization, superquantile-based optimization requires ranking random functions evaluated across all scenarios to compute the tail conditional expectation. While this tail-based feature might seem computationally unfriendly, it provides an advantageous setting for a semismooth-Newton-based augmented Lagrangian method. The superquantile operator effectively reduces the dimensions of the Newton systems since the tail expectation involves considerably fewer scenarios. Notably, the extra cost of obtaining relevant second-order information and performing matrix inversions is often comparable to, and sometimes even less than, the effort required for gradient computation. Our developed solver is particularly effective when the number of scenarios substantially exceeds the number of decision variables. In synthetic problems with linear and convex diagonal quadratic objectives, numerical experiments demonstrate that our method outperforms existing approaches by a large margin: It achieves speeds more than 750 times faster for linear and quadratic objectives than the alternating direction method of multipliers as implemented by OSQP for computing low-accuracy solutions. Additionally, it is up to 25 times faster for linear objectives and 70 times faster for quadratic objectives than the commercial solver Gurobi, and 20 times faster for linear objectives and 30 times faster for quadratic objectives than the Portfolio Safeguard optimization suite for high-accuracy solution computations. | [
"['Jake Roth' 'Ying Cui']"
]
|
null | null | 2405.07971 | null | null | http://arxiv.org/pdf/2405.07971v1 | 2024-05-13T17:47:40Z | 2024-05-13T17:47:40Z | Sensitivity Analysis for Active Sampling, with Applications to the
Simulation of Analog Circuits | We propose an active sampling flow, with the use-case of simulating the impact of combined variations on analog circuits. In such a context, given the large number of parameters, it is difficult to fit a surrogate model and to efficiently explore the space of design features. By combining a drastic dimension reduction using sensitivity analysis and Bayesian surrogate modeling, we obtain a flexible active sampling flow. On synthetic and real datasets, this flow outperforms the usual Monte-Carlo sampling which often forms the foundation of design space exploration. | [
"['Reda Chhaibi' 'Fabrice Gamboa' 'Christophe Oger' 'Vinicius Oliveira'\n 'Clément Pellegrini' 'Damien Remot']"
]
|
null | null | 2405.07976 | null | null | http://arxiv.org/pdf/2405.07976v2 | 2024-06-09T16:23:49Z | 2024-05-13T17:48:45Z | Localized Adaptive Risk Control | Adaptive Risk Control (ARC) is an online calibration strategy based on set prediction that offers worst-case deterministic long-term risk control, as well as statistical marginal coverage guarantees. ARC adjusts the size of the prediction set by varying a single scalar threshold based on feedback from past decisions. In this work, we introduce Localized Adaptive Risk Control (L-ARC), an online calibration scheme that targets statistical localized risk guarantees ranging from conditional risk to marginal risk, while preserving the worst-case performance of ARC. L-ARC updates a threshold function within a reproducing kernel Hilbert space (RKHS), with the kernel determining the level of localization of the statistical risk guarantee. The theoretical results highlight a trade-off between localization of the statistical risk and convergence speed to the long-term risk target. Thanks to localization, L-ARC is demonstrated via experiments to produce prediction sets with risk guarantees across different data subpopulations, significantly improving the fairness of the calibrated model for tasks such as image segmentation and beam selection in wireless networks. | [
"['Matteo Zecchin' 'Osvaldo Simeone']"
]
|
null | null | 2405.07977 | null | null | http://arxiv.org/pdf/2405.07977v1 | 2024-05-13T17:49:20Z | 2024-05-13T17:49:20Z | A Demographic-Conditioned Variational Autoencoder for fMRI Distribution
Sampling and Removal of Confounds | Objective: fMRI and derived measures such as functional connectivity (FC) have been used to predict brain age, general fluid intelligence, psychiatric disease status, and preclinical neurodegenerative disease. However, it is not always clear that all demographic confounds, such as age, sex, and race, have been removed from fMRI data. Additionally, many fMRI datasets are restricted to authorized researchers, making dissemination of these valuable data sources challenging. Methods: We create a variational autoencoder (VAE)-based model, DemoVAE, to decorrelate fMRI features from demographics and generate high-quality synthetic fMRI data based on user-supplied demographics. We train and validate our model using two large, widely used datasets, the Philadelphia Neurodevelopmental Cohort (PNC) and Bipolar and Schizophrenia Network for Intermediate Phenotypes (BSNIP). Results: We find that DemoVAE recapitulates group differences in fMRI data while capturing the full breadth of individual variations. Significantly, we also find that most clinical and computerized battery fields that are correlated with fMRI data are not correlated with DemoVAE latents. Exceptions are several fields related to schizophrenia medication and symptom severity. Conclusion: Our model generates fMRI data that captures the full distribution of FC better than traditional VAE or GAN models. We also find that most prediction using fMRI data is dependent on correlation with, and prediction of, demographics. Significance: Our DemoVAE model allows for generation of high-quality synthetic data conditioned on subject demographics as well as the removal of the confounding effects of demographics. We identify that FC-based prediction tasks are highly influenced by demographic confounds. | [
"['Anton Orlichenko' 'Gang Qu' 'Ziyu Zhou' 'Anqi Liu' 'Hong-Wen Deng'\n 'Zhengming Ding' 'Julia M. Stephen' 'Tony W. Wilson' 'Vince D. Calhoun'\n 'Yu-Ping Wang']"
]
|
null | null | 2405.07987 | null | null | http://arxiv.org/pdf/2405.07987v1 | 2024-05-13T17:58:30Z | 2024-05-13T17:58:30Z | The Platonic Representation Hypothesis | We argue that representations in AI models, particularly deep networks, are converging. First, we survey many examples of convergence in the literature: over time and across multiple domains, the ways by which different neural networks represent data are becoming more aligned. Next, we demonstrate convergence across data modalities: as vision models and language models get larger, they measure distance between datapoints in a more and more alike way. We hypothesize that this convergence is driving toward a shared statistical model of reality, akin to Plato's concept of an ideal reality. We term such a representation the platonic representation and discuss several possible selective pressures toward it. Finally, we discuss the implications of these trends, their limitations, and counterexamples to our analysis. | [
"['Minyoung Huh' 'Brian Cheung' 'Tongzhou Wang' 'Phillip Isola']"
]
|
null | null | 2405.07991 | null | null | http://arxiv.org/pdf/2405.07991v1 | 2024-05-13T17:59:36Z | 2024-05-13T17:59:36Z | SPIN: Simultaneous Perception, Interaction and Navigation | While there has been remarkable progress recently in the fields of manipulation and locomotion, mobile manipulation remains a long-standing challenge. Compared to locomotion or static manipulation, a mobile system must make a diverse range of long-horizon tasks feasible in unstructured and dynamic environments. While the applications are broad and interesting, there are a plethora of challenges in developing these systems such as coordination between the base and arm, reliance on onboard perception for perceiving and interacting with the environment, and most importantly, simultaneously integrating all these parts together. Prior works approach the problem using disentangled modular skills for mobility and manipulation that are trivially tied together. This causes several limitations such as compounding errors, delays in decision-making, and no whole-body coordination. In this work, we present a reactive mobile manipulation framework that uses an active visual system to consciously perceive and react to its environment. Similar to how humans leverage whole-body and hand-eye coordination, we develop a mobile manipulator that exploits its ability to move and see, more specifically -- to move in order to see and to see in order to move. This allows it to not only move around and interact with its environment but also, choose "when" to perceive "what" using an active visual system. We observe that such an agent learns to navigate around complex cluttered scenarios while displaying agile whole-body coordination using only ego-vision without needing to create environment maps. Results visualizations and videos at https://spin-robot.github.io/ | [
"['Shagun Uppal' 'Ananye Agarwal' 'Haoyu Xiong' 'Kenneth Shaw'\n 'Deepak Pathak']"
]
|
null | null | 2405.07992 | null | null | http://arxiv.org/pdf/2405.07992v3 | 2024-05-20T16:36:21Z | 2024-05-13T17:59:56Z | MambaOut: Do We Really Need Mamba for Vision? | Mamba, an architecture with RNN-like token mixer of state space model (SSM), was recently introduced to address the quadratic complexity of the attention mechanism and subsequently applied to vision tasks. Nevertheless, the performance of Mamba for vision is often underwhelming when compared with convolutional and attention-based models. In this paper, we delve into the essence of Mamba, and conceptually conclude that Mamba is ideally suited for tasks with long-sequence and autoregressive characteristics. For vision tasks, as image classification does not align with either characteristic, we hypothesize that Mamba is not necessary for this task; Detection and segmentation tasks are also not autoregressive, yet they adhere to the long-sequence characteristic, so we believe it is still worthwhile to explore Mamba's potential for these tasks. To empirically verify our hypotheses, we construct a series of models named MambaOut through stacking Mamba blocks while removing their core token mixer, SSM. Experimental results strongly support our hypotheses. Specifically, our MambaOut model surpasses all visual Mamba models on ImageNet image classification, indicating that Mamba is indeed unnecessary for this task. As for detection and segmentation, MambaOut cannot match the performance of state-of-the-art visual Mamba models, demonstrating the potential of Mamba for long-sequence visual tasks. The code is available at https://github.com/yuweihao/MambaOut | [
"['Weihao Yu' 'Xinchao Wang']"
]
|