categories: string
doi: string
id: string
year: float64
venue: string
link: string
updated: string
published: string
title: string
abstract: string
authors: sequence
null
null
2406.00135
null
null
http://arxiv.org/pdf/2406.00135v1
2024-05-31T18:55:10Z
2024-05-31T18:55:10Z
Advancing Ear Biometrics: Enhancing Accuracy and Robustness through Deep Learning
Biometric identification is a reliable method to verify individuals based on their unique physical or behavioral traits, offering a secure alternative to traditional methods like passwords or PINs. This study focuses on ear biometric identification, exploiting its distinctive features for enhanced accuracy, reliability, and usability. While past studies typically investigate face recognition and fingerprint analysis, our research demonstrates the effectiveness of ear biometrics in overcoming limitations such as variations in facial expressions and lighting conditions. We utilized two datasets: AMI (700 images from 100 individuals) and EarNV1.0 (28,412 images from 164 individuals). To improve the accuracy and robustness of our ear biometric identification system, we applied various techniques including data preprocessing and augmentation. Our models achieved a testing accuracy of 99.35% on the AMI Dataset and 98.1% on the EarNV1.0 dataset, showcasing the effectiveness of our approach in precisely identifying individuals based on ear biometric characteristics.
[ "['Youssef Mohamed' 'Zeyad Youssef' 'Ahmed Heakl' 'Ahmed Zaky']" ]
null
null
2406.00144
null
null
http://arxiv.org/pdf/2406.00144v1
2024-05-31T19:17:00Z
2024-05-31T19:17:00Z
Query2CAD: Generating CAD models using natural language queries
Computer Aided Design (CAD) engineers typically do not achieve their best prototypes in a single attempt. Instead, they iterate and refine their designs to achieve an optimal solution through multiple revisions. This traditional approach, though effective, is time-consuming and relies heavily on the expertise of skilled engineers. To address these challenges, we introduce Query2CAD, a novel framework to generate CAD designs. The framework uses a large language model to generate executable CAD macros. Additionally, Query2CAD refines the generation of the CAD model with the help of its self-refinement loops. Query2CAD operates without supervised data or additional training, using the LLM as both a generator and a refiner. The refiner leverages feedback generated by the BLIP2 model, and to address false negatives, we have incorporated human-in-the-loop feedback into our system. Additionally, we have developed a dataset that encompasses most operations used in CAD model designing and have evaluated our framework using this dataset. Our findings reveal that when we used GPT-4 Turbo as our language model, the architecture achieved a success rate of 53.6% on the first attempt. With subsequent refinements, the success rate increased by 23.1%. In particular, the most significant improvement in the success rate was observed with the first iteration of the refinement. With subsequent refinements, the accuracy of the correct designs did not improve significantly. We have open-sourced our data, model, and code (github.com/akshay140601/Query2CAD).
[ "['Akshay Badagabettu' 'Sai Sravan Yarlagadda' 'Amir Barati Farimani']" ]
null
null
2406.00146
null
null
http://arxiv.org/pdf/2406.00146v1
2024-05-31T19:20:27Z
2024-05-31T19:20:27Z
A Survey of Deep Learning Audio Generation Methods
This article presents a review of typical techniques used in three distinct aspects of deep learning model development for audio generation. In the first part of the article, we provide an explanation of audio representations, beginning with the fundamental audio waveform. We then progress to the frequency domain, with an emphasis on the attributes of human hearing, and finally introduce a relatively recent development. The main part of the article focuses on explaining basic and extended deep learning architecture variants, along with their practical applications in the field of audio generation. The following architectures are addressed: 1) Autoencoders 2) Generative adversarial networks 3) Normalizing flows 4) Transformer networks 5) Diffusion models. Lastly, we examine four distinct evaluation metrics that are commonly employed in audio generation. This article aims to offer readers who are new to the field a comprehensive understanding of the current state of the art in audio generation methods, as well as relevant studies that can be explored for future research.
[ "['Matej Božić' 'Marko Horvat']" ]
null
null
2406.00147
null
null
http://arxiv.org/pdf/2406.00147v2
2024-06-15T18:23:49Z
2024-05-31T19:26:05Z
Fair Allocation in Dynamic Mechanism Design
We consider a dynamic mechanism design problem where an auctioneer sells an indivisible good to two groups of buyers in every round, for a total of $T$ rounds. The auctioneer aims to maximize their discounted overall revenue while adhering to a fairness constraint that guarantees a minimum average allocation for each group. We begin by studying the static case ($T=1$) and establish that the optimal mechanism involves two types of subsidization: one that increases the overall probability of allocation to all buyers, and another that favors the group which otherwise has a lower probability of winning the item. We then extend our results to the dynamic case by characterizing a set of recursive functions that determine the optimal allocation and payments in each round. Notably, our results establish that in the dynamic case, the seller, on the one hand, commits to a participation reward to incentivize truth-telling, and on the other hand, charges an entry fee for every round. Moreover, the optimal allocation once more involves subsidization in favor of one group, where the extent of subsidization depends on the difference in future utilities for both the seller and buyers when allocating the item to one group versus the other. Finally, we present an approximation scheme to solve the recursive equations and determine an approximately optimal and fair allocation efficiently.
[ "['Alireza Fallah' 'Michael I. Jordan' 'Annie Ulichney']" ]
null
null
2406.00150
null
null
http://arxiv.org/pdf/2406.00150v1
2024-05-31T19:27:03Z
2024-05-31T19:27:03Z
Non-Federated Multi-Task Split Learning for Heterogeneous Sources
With the development of edge networks and mobile computing, the need to serve heterogeneous data sources at the network edge requires the design of new distributed machine learning mechanisms. As a prevalent approach, Federated Learning (FL) employs parameter-sharing and gradient-averaging between clients and a server. Despite its many favorable qualities, such as convergence and data-privacy guarantees, it is well-known that classic FL fails to address the challenge of data heterogeneity and computation heterogeneity across clients. Most existing works that aim to accommodate such sources of heterogeneity stay within the FL operation paradigm, with modifications to overcome the negative effect of heterogeneous data. In this work, as an alternative paradigm, we propose a Multi-Task Split Learning (MTSL) framework, which combines the advantages of Split Learning (SL) with the flexibility of distributed network architectures. In contrast to the FL counterpart, in this paradigm, heterogeneity is not an obstacle to overcome, but a useful property to take advantage of. As such, this work aims to introduce a new architecture and methodology to perform multi-task learning for heterogeneous data sources efficiently, with the hope of encouraging the community to further explore the potential advantages we reveal. To support this promise, we first show through theoretical analysis that MTSL can achieve fast convergence by tuning the learning rate of the server and clients. Then, we compare the performance of MTSL with existing multi-task FL methods numerically on several image classification datasets to show that MTSL has advantages over FL in training speed, communication cost, and robustness to heterogeneous data.
[ "['Yilin Zheng' 'Atilla Eryilmaz']" ]
null
null
2406.00153
null
null
http://arxiv.org/pdf/2406.00153v1
2024-05-31T19:28:47Z
2024-05-31T19:28:47Z
$μ$LO: Compute-Efficient Meta-Generalization of Learned Optimizers
Learned optimizers (LOs) can significantly reduce the wall-clock training time of neural networks, substantially reducing training costs. However, they often suffer from poor meta-generalization, especially when training networks larger than those seen during meta-training. To address this, we use the recently proposed Maximal Update Parametrization ($\mu$P), which allows zero-shot generalization of optimizer hyperparameters from smaller to larger models. We extend $\mu$P theory to learned optimizers, treating the meta-training problem as finding the learned optimizer under $\mu$P. Our evaluation shows that LOs meta-trained with $\mu$P substantially improve meta-generalization compared to LOs trained under standard parametrization (SP). Notably, when applied to large-width models, our best $\mu$LO, trained for 103 GPU-hours, matches or exceeds the performance of VeLO, the largest publicly available learned optimizer, meta-trained with 4000 TPU-months of compute. Moreover, $\mu$LOs demonstrate better generalization than their SP counterparts to deeper networks and to much longer training horizons (25 times longer) than those seen during meta-training.
[ "['Benjamin Thérien' 'Charles-Étienne Joseph' 'Boris Knyazev'\n 'Edouard Oyallon' 'Irina Rish' 'Eugene Belilovsky']" ]
null
null
2406.00177
null
null
http://arxiv.org/pdf/2406.00177v1
2024-05-31T20:14:35Z
2024-05-31T20:14:35Z
Flexible and Efficient Surrogate Gradient Modeling with Forward Gradient Injection
Automatic differentiation is a key feature of present deep learning frameworks. Moreover, these frameworks typically provide various ways to specify custom gradients within the computation graph, which is of particular importance for defining surrogate gradients in the realm of non-differentiable operations such as the Heaviside function in spiking neural networks (SNNs). PyTorch, for example, allows the custom specification of the backward pass of an operation by overriding its backward method. Other frameworks provide comparable options. While these methods are common practice and usually work well, they also have several disadvantages such as limited flexibility, additional source code overhead, poor usability, or a potentially strong negative impact on the effectiveness of automatic model optimization procedures. In this paper, an alternative way to formulate surrogate gradients is presented, namely, forward gradient injection (FGI). FGI applies a simple but effective combination of basic standard operations to inject an arbitrary gradient shape into the computational graph directly within the forward pass. It is demonstrated that using FGI is straightforward and convenient. Moreover, it is shown that FGI can significantly increase the model performance in comparison to custom backward methods in SNNs when using TorchScript. These results are complemented with a general performance study on recurrent SNNs with TorchScript and torch.compile, revealing the potential for a training speedup of more than 7x and an inference speedup of more than 16x in comparison with pure PyTorch.
[ "['Sebastian Otte']" ]
null
null
2406.00183
null
null
http://arxiv.org/pdf/2406.00183v1
2024-05-31T20:28:08Z
2024-05-31T20:28:08Z
Predicting solvation free energies with an implicit solvent machine learning potential
Machine learning (ML) potentials are a powerful tool in molecular modeling, enabling ab initio accuracy at comparatively small computational cost. Nevertheless, all-atom simulations employing best-performing graph neural network architectures are still too expensive for applications requiring extensive sampling, such as free energy computations. Implicit solvent models could provide the necessary speed-up due to reduced degrees of freedom and faster dynamics. Here, we introduce a Solvation Free Energy Path Reweighting (ReSolv) framework to parametrize an implicit solvent ML potential for small organic molecules that accurately predicts the hydration free energy, an essential parameter in drug design and pollutant modeling. With a combination of top-down (experimental hydration free energy data) and bottom-up (ab initio data of molecules in a vacuum) learning, ReSolv bypasses the need for intractable ab initio data of molecules in explicit bulk solvent and does not have to resort to less accurate data-generating models. On the FreeSolv dataset, ReSolv achieves a mean absolute error close to the average experimental uncertainty, significantly outperforming standard explicit solvent force fields. The presented framework paves the way toward deep molecular models that are more accurate yet computationally cheaper than classical atomistic models.
[ "['Sebastien Röcken' 'Anton F. Burnet' 'Julija Zavadlav']" ]
null
null
2406.00192
null
null
http://arxiv.org/pdf/2406.00192v1
2024-05-31T20:54:12Z
2024-05-31T20:54:12Z
Direct Cardiac Segmentation from Undersampled K-space Using Transformers
Prevailing deep learning-based methods for cardiac segmentation operate on reconstructed magnetic resonance (MR) images. The heavy dependency of segmentation approaches on image quality significantly limits the acceleration rate in fast MR reconstruction. Moreover, the practice of treating reconstruction and segmentation as separate sequential processes leads to artifact generation and information loss in the intermediate stage. These issues pose a great risk to achieving high-quality outcomes. To leverage the redundant k-space information overlooked in this dual-step pipeline, we introduce a novel approach to directly deriving segmentations from sparse k-space samples using a transformer (DiSK). DiSK operates by globally extracting latent features from 2D+time k-space data with attention blocks and subsequently predicting the segmentation label of query points. We evaluate our model under various acceleration factors (ranging from 4 to 64) and compare against two image-based segmentation baselines. Our model consistently outperforms the baselines in Dice and Hausdorff distances across foreground classes for all presented sampling rates.
[ "['Yundi Zhang' 'Nil Stolt-Ansó' 'Jiazhen Pan' 'Wenqi Huang'\n 'Kerstin Hammernik' 'Daniel Rueckert']" ]
null
null
2406.00198
null
null
http://arxiv.org/pdf/2406.00198v1
2024-05-31T21:19:41Z
2024-05-31T21:19:41Z
ImplicitSLIM and How it Improves Embedding-based Collaborative Filtering
We present ImplicitSLIM, a novel unsupervised learning approach for sparse high-dimensional data, with applications to collaborative filtering. Sparse linear methods (SLIM) and their variations show outstanding performance, but they are memory-intensive and hard to scale. ImplicitSLIM improves embedding-based models by extracting embeddings from SLIM-like models in a computationally cheap and memory-efficient way, without explicit learning of heavy SLIM-like models. We show that ImplicitSLIM improves performance and speeds up convergence for both state-of-the-art and classical collaborative filtering methods. The source code for ImplicitSLIM, related models, and applications is available at https://github.com/ilya-shenbin/ImplicitSLIM.
[ "['Ilya Shenbin' 'Sergey Nikolenko']" ]
null
null
2406.00209
null
null
http://arxiv.org/pdf/2406.00209v1
2024-05-31T21:46:23Z
2024-05-31T21:46:23Z
Mamba State-Space Models Can Be Strong Downstream Learners
Mamba state-space models (SSMs) have recently outperformed state-of-the-art (SOTA) Transformer large language models (LLMs) in various tasks and have been widely adopted. However, Mamba's downstream learning capabilities remain either unexplored (e.g., mixed-precision fine-tuning (MPFT) and parameter-efficient fine-tuning (PEFT)) or under-evaluated (e.g., in-context learning (ICL)). For the latter, recent works have reported that Mamba's ICL rivals SOTA Transformer LLMs using non-standard benchmarks. In contrast, we show that on standard benchmarks, pretrained Mamba models achieve only 38% of the ICL performance improvements (over zero-shot) of comparable Transformers. Enabling MPFT and PEFT in Mamba architectures is challenging due to recurrent dynamics and highly customized CUDA kernels, respectively. However, we prove that Mamba's recurrent dynamics are robust to small input changes using dynamical systems theory. Empirically, we show that performance changes in Mamba's inference and fine-tuning due to mixed-precision align with Transformer LLMs. Furthermore, we show that targeting key memory buffers in Mamba's customized CUDA kernels for low-rank adaptation regularizes SSM parameters, thus achieving parameter efficiency while retaining speedups. We show that combining MPFT and PEFT enables up to 2.15 times more tokens per second and 65.5% less per-token memory compared to full Mamba fine-tuning, while achieving up to 81.5% of the ICL performance improvements (over zero-shot) of comparably fine-tuned Transformers.
[ "['John T. Halloran' 'Manbir Gulati' 'Paul F. Roysdon']" ]
null
null
2406.00222
null
null
http://arxiv.org/pdf/2406.00222v1
2024-05-31T22:44:48Z
2024-05-31T22:44:48Z
Learning to Clarify: Multi-turn Conversations with Action-Based Contrastive Self-Training
Large language models (LLMs) aligned through reinforcement learning from human feedback (RLHF) have quickly become one of the dominant paradigms for building intelligent conversational assistant agents. However, despite their strong performance across many benchmarks, LLM-based agents still lack conversational skills such as disambiguation: when generalized assistants are faced with ambiguity, they often overhedge or implicitly guess users' ground-truth intents rather than asking clarification questions, and under task-specific settings, high-quality conversation samples are often limited, affecting models' ability to learn optimal dialogue action policies. We propose Action-Based Contrastive Self-Training (henceforth ACT), a quasi-online preference optimization algorithm based on Direct Preference Optimization (DPO) which allows for sample-efficient dialogue policy learning in multi-turn conversation. We demonstrate ACT's efficacy under sample-efficient conditions in three difficult conversational tasks: tabular-grounded question-answering, machine reading comprehension, and AmbigSQL, a novel task for disambiguating information-seeking requests for text-to-SQL generation. Additionally, we propose evaluating LLMs' ability to function as conversational agents by examining whether they can implicitly recognize and reason about ambiguity in conversation. ACT demonstrates substantial conversation modeling improvements over standard approaches to supervised fine-tuning and DPO.
[ "['Maximillian Chen' 'Ruoxi Sun' 'Sercan Ö. Arık' 'Tomas Pfister']" ]
null
null
2406.00234
null
null
http://arxiv.org/pdf/2406.00234v1
2024-05-31T23:38:51Z
2024-05-31T23:38:51Z
Learning to Stabilize Unknown LTI Systems on a Single Trajectory under Stochastic Noise
We study the problem of learning to stabilize unknown noisy Linear Time-Invariant (LTI) systems on a single trajectory. It is well known in the literature that the learn-to-stabilize problem suffers from exponential blow-up in which the state norm blows up in the order of $\Theta(2^n)$ where $n$ is the state space dimension. This blow-up is due to the open-loop instability when exploring the $n$-dimensional state space. To address this issue, we develop a novel algorithm that decouples the unstable subspace of the LTI system from the stable subspace, based on which the algorithm only explores and stabilizes the unstable subspace, the dimension of which can be much smaller than $n$. With a new singular-value-decomposition (SVD)-based analytical framework, we prove that the system is stabilized before the state norm reaches $2^{O(k \log n)}$, where $k$ is the dimension of the unstable subspace. Critically, this bound avoids exponential blow-up in state dimension in the order of $\Theta(2^n)$ as in the previous works, and to the best of our knowledge, this is the first paper to avoid exponential blow-up in dimension for stabilizing LTI systems with noise.
[ "['Ziyi Zhang' 'Yorie Nakahira' 'Guannan Qu']" ]
null
null
2406.00237
null
null
http://arxiv.org/pdf/2406.00237v1
2024-05-31T23:56:42Z
2024-05-31T23:56:42Z
A Comparative Study of CNN, ResNet, and Vision Transformers for Multi-Classification of Chest Diseases
Large language models, notably utilizing Transformer architectures, have emerged as powerful tools due to their scalability and ability to process large amounts of data. Dosovitskiy et al. expanded this architecture to introduce Vision Transformers (ViT), extending its applicability to image processing tasks. Motivated by this advancement, we fine-tuned two variants of ViT models, one pre-trained on ImageNet and another trained from scratch, using the NIH Chest X-ray dataset containing over 100,000 frontal-view X-ray images. Our study evaluates the performance of these models in the multi-label classification of 14 distinct diseases, while using Convolutional Neural Networks (CNNs) and ResNet architectures as baseline models for comparison. Through rigorous assessment based on accuracy metrics, we identify that the pre-trained ViT model surpasses CNNs and ResNet in this multi-label classification task, highlighting its potential for accurate diagnosis of various lung conditions from chest X-ray images.
[ "['Ananya Jain' 'Aviral Bhardwaj' 'Kaushik Murali' 'Isha Surani']" ]
null
null
2406.00238
null
null
http://arxiv.org/pdf/2406.00238v1
2024-06-01T00:02:41Z
2024-06-01T00:02:41Z
Robust Biharmonic Skinning Using Geometric Fields
Skinning is a popular way to rig and deform characters for animation, to compute reduced-order simulations, and to define features for geometry processing. Methods built on skinning rely on weight functions that distribute the influence of each degree of freedom across the mesh. Automatic skinning methods generate these weight functions with minimal user input, usually by solving a variational problem on a mesh whose boundary is the skinned surface. This formulation necessitates tetrahedralizing the volume inside the surface, which brings with it meshing artifacts, the possibility of tetrahedralization failure, and the impossibility of generating weights for surfaces that are not closed. We introduce a mesh-free and robust automatic skinning method that generates high-quality skinning weights comparable to the current state of the art without volumetric meshes. Our method reliably works even on open surfaces and triangle soups where current methods fail. We achieve this through the use of a Lagrangian representation for skinning weights, which circumvents the need for finite elements while optimizing the biharmonic energy.
[ "['Ana Dodik' 'Vincent Sitzmann' 'Justin Solomon' 'Oded Stein']" ]
null
null
2406.00239
null
null
http://arxiv.org/pdf/2406.00239v1
2024-06-01T00:10:05Z
2024-06-01T00:10:05Z
A Review of Pulse-Coupled Neural Network Applications in Computer Vision and Image Processing
Research in neural models inspired by the mammalian visual cortex has led to many spiking neural networks such as pulse-coupled neural networks (PCNNs). These models are oscillating, spatio-temporal models stimulated with images to produce several time-based responses. This paper reviews the state of the art in PCNNs, covering their mathematical formulation, variants, and other simplifications found in the literature. We present several applications in which PCNN architectures have successfully addressed some fundamental image processing and computer vision challenges, including image segmentation, edge detection, medical imaging, image fusion, image compression, object recognition, and remote sensing. Results achieved in these applications suggest that the PCNN architecture generates useful perceptual information relevant to a wide variety of computer vision tasks.
[ "['Nurul Rafi' 'Pablo Rivas']" ]
null
null
2406.00240
null
null
http://arxiv.org/pdf/2406.00240v1
2024-06-01T00:11:09Z
2024-06-01T00:11:09Z
Exploring Vulnerabilities and Protections in Large Language Models: A Survey
As Large Language Models (LLMs) increasingly become key components in various AI applications, understanding their security vulnerabilities and the effectiveness of defense mechanisms is crucial. This survey examines the security challenges of LLMs, focusing on two main areas: Prompt Hacking and Adversarial Attacks, each with specific types of threats. Under Prompt Hacking, we explore Prompt Injection and Jailbreaking Attacks, discussing how they work, their potential impacts, and ways to mitigate them. Similarly, we analyze Adversarial Attacks, breaking them down into Data Poisoning Attacks and Backdoor Attacks. This structured examination helps us understand the relationships between these vulnerabilities and the defense strategies that can be implemented. The survey highlights these security challenges and discusses robust defensive frameworks to protect LLMs against these threats. By detailing these security issues, the survey contributes to the broader discussion on creating resilient AI systems that can resist sophisticated attacks.
[ "['Frank Weizhen Liu' 'Chenhui Hu']" ]
null
null
2406.00249
null
null
http://arxiv.org/pdf/2406.00249v1
2024-06-01T01:10:35Z
2024-06-01T01:10:35Z
Privacy Challenges in Meta-Learning: An Investigation on Model-Agnostic Meta-Learning
Meta-learning involves multiple learners, each dedicated to specific tasks, collaborating in a data-constrained setting. In current meta-learning methods, task learners locally learn models from sensitive data, termed support sets. These task learners subsequently share model-related information, such as gradients or loss values, computed using another part of the data termed the query set, with a meta-learner. The meta-learner employs this information to update its meta-knowledge. Despite the absence of explicit data sharing, privacy concerns persist. This paper examines potential data leakage in a prominent meta-learning algorithm, specifically Model-Agnostic Meta-Learning (MAML). In MAML, gradients are shared between the meta-learner and task learners. The primary objective is to scrutinize the gradient and the information it encompasses about the task dataset. Subsequently, we propose membership inference attacks targeting the task dataset containing support and query sets. Finally, we explore various noise injection methods designed to safeguard the privacy of task data and thwart potential attacks. Experimental results demonstrate the effectiveness of these attacks on MAML and the efficacy of proper noise injection methods in countering them.
[ "['Mina Rafiei' 'Mohammadmahdi Maheri' 'Hamid R. Rabiee']" ]
null
null
2406.00262
null
null
http://arxiv.org/pdf/2406.00262v1
2024-06-01T01:53:51Z
2024-06-01T01:53:51Z
Contrastive Learning Via Equivariant Representation
Invariant-based Contrastive Learning (ICL) methods have achieved impressive performance across various domains. However, the absence of a representation for distortion (augmentation)-related information in the latent space makes ICL sub-optimal regarding training efficiency and robustness in downstream tasks. Recent studies suggest that introducing equivariance into Contrastive Learning (CL) can improve overall performance. In this paper, we rethink the roles of augmentation strategies and equivariance in improving CL efficacy. We propose a novel Equivariant-based Contrastive Learning (ECL) framework, CLeVER (Contrastive Learning Via Equivariant Representation), compatible with augmentation strategies of arbitrary complexity for various mainstream CL methods and model frameworks. Experimental results demonstrate that CLeVER effectively extracts and incorporates equivariant information from data, thereby improving the training efficiency and robustness of baseline models in downstream tasks.
[ "['Sifan Song' 'Jinfeng Wang' 'Qiaochu Zhao' 'Xiang Li' 'Dufan Wu'\n 'Angelos Stefanidis' 'Jionglong Su' 'S. Kevin Zhou' 'Quanzheng Li']" ]
null
null
2406.00275
null
null
http://arxiv.org/pdf/2406.00275v1
2024-06-01T02:41:34Z
2024-06-01T02:41:34Z
StyDeSty: Min-Max Stylization and Destylization for Single Domain Generalization
Single domain generalization (single DG) aims at learning a robust model generalizable to unseen domains from only one training domain, making it a highly ambitious and challenging task. State-of-the-art approaches have mostly relied on data augmentations, such as adversarial perturbation and style enhancement, to synthesize new data and thus increase robustness. Nevertheless, they have largely overlooked the underlying coherence between the augmented domains, which in turn leads to inferior results in real-world scenarios. In this paper, we propose a simple yet effective scheme, termed StyDeSty, to explicitly account for the alignment of the source and pseudo domains in the process of data augmentation, enabling them to interact with each other in a self-consistent manner and further giving rise to a latent domain with strong generalization power. The heart of StyDeSty lies in the interaction between a stylization module for generating novel stylized samples using the source domain, and a destylization module for transferring stylized and source samples to a latent domain to learn content-invariant features. The stylization and destylization modules work adversarially and reinforce each other. During inference, the destylization module transforms the input sample with an arbitrary style shift to the latent domain, in which the downstream tasks are carried out. Specifically, the location of the destylization layer within the backbone network is determined by a dedicated neural architecture search (NAS) strategy. We evaluate StyDeSty on multiple benchmarks and demonstrate that it yields encouraging results, outperforming the state of the art by up to 13.44% on classification accuracy. Codes are available here: https://github.com/Huage001/StyDeSty.
[ "['Songhua Liu' 'Xin Jin' 'Xingyi Yang' 'Jingwen Ye' 'Xinchao Wang']" ]
null
null
2406.00276
null
null
http://arxiv.org/pdf/2406.00276v1
2024-06-01T02:43:41Z
2024-06-01T02:43:41Z
Non-destructive Degradation Pattern Decoupling for Ultra-early Battery Prototype Verification Using Physics-informed Machine Learning
Manufacturing complexities and uncertainties have impeded the transition from material prototypes to commercial batteries, making prototype verification critical to quality assessment. A fundamental challenge involves deciphering intertwined chemical processes to characterize degradation patterns and their quantitative relationship with battery performance. Here we show that a physics-informed machine learning approach can quantify and visualize temporally resolved losses concerning thermodynamics and kinetics using only electric signals. Our method enables non-destructive degradation pattern characterization, expediting temperature-adaptable predictions of entire lifetime trajectories, rather than end-of-life points. Verification is 25 times faster while maintaining 95.1% accuracy across temperatures. Such advances facilitate more sustainable management of defective prototypes before massive production, establishing a 19.76 billion USD scrap material recycling market by 2060 in China. By incorporating stepwise charge acceptance as a measure of the initial manufacturing variability of normally identical batteries, we can immediately identify long-term degradation variations. We attribute the predictive power to interpreting machine learning insights using a material-agnostic featurization taxonomy for degradation pattern decoupling. Our findings offer new possibilities for dynamic system analysis, such as battery prototype degradation, demonstrating that complex pattern evolutions can be accurately predicted in a non-destructive and data-driven fashion by integrating physics-informed machine learning.
[ "['Shengyu Tao' 'Mengtian Zhang' 'Zixi Zhao' 'Haoyang Li' 'Ruifei Ma'\n 'Yunhong Che' 'Xin Sun' 'Lin Su' 'Xiangyu Chen' 'Zihao Zhou' 'Heng Chang'\n 'Tingwei Cao' 'Xiao Xiao' 'Yaojun Liu' 'Wenjun Yu' 'Zhongling Xu'\n 'Yang Li' 'Han Hao' 'Xuan Zhang' 'Xiaosong Hu' 'Guangmin ZHou']" ]
null
null
2406.00281
null
null
http://arxiv.org/pdf/2406.00281v1
2024-06-01T03:24:31Z
2024-06-01T03:24:31Z
Cross-Table Pretraining towards a Universal Function Space for Heterogeneous Tabular Data
Tabular data from different tables exhibit significant diversity due to varied definitions and types of features, as well as complex inter-feature and feature-target relationships. Cross-dataset pretraining, which learns reusable patterns from upstream data to support downstream tasks, has shown notable success in various fields. Yet, when applied to tabular data prediction, this paradigm faces challenges due to the limited reusable patterns among diverse tabular datasets (tables) and the general scarcity of tabular data available for fine-tuning. In this study, we fill this gap by introducing a cross-table pretrained Transformer, XTFormer, for versatile downstream tabular prediction tasks. Our key methodological insight is to pretrain XTFormer to establish a "meta-function" space that encompasses all potential feature-target mappings. In pretraining, a variety of potential mappings are extracted from pretraining tabular datasets and embedded into the "meta-function" space, and suitable mappings are extracted from the "meta-function" space for downstream tasks by a specified coordinate positioning approach. Experiments show that, on 190 downstream tabular prediction tasks, our cross-table pretrained XTFormer beats both XGBoost and CatBoost on 137 (72%) tasks, and surpasses the representative deep learning models FT-Transformer and the tabular pre-training approach XTab on 144 (76%) and 162 (85%) tasks.
[ "['Jintai Chen' 'Zhen Lin' 'Qiyuan Chen' 'Jimeng Sun']" ]
null
null
2406.00288
null
null
http://arxiv.org/pdf/2406.00288v1
2024-06-01T03:34:00Z
2024-06-01T03:34:00Z
Neural Optimal Transport with Lagrangian Costs
We investigate the optimal transport problem between probability measures when the underlying cost function is understood to satisfy a least action principle, also known as a Lagrangian cost. These generalizations are useful when connecting observations from a physical system where the transport dynamics are influenced by the geometry of the system, such as obstacles (e.g., incorporating barrier functions in the Lagrangian), and allow practitioners to incorporate a priori knowledge of the underlying system such as non-Euclidean geometries (e.g., paths must be circular). Our contributions are of computational interest, where we demonstrate the ability to efficiently compute geodesics and amortize spline-based paths, which has not been done before, even in low dimensional problems. Unlike prior work, we also output the resulting Lagrangian optimal transport map without requiring an ODE solver. We demonstrate the effectiveness of our formulation on low-dimensional examples taken from prior work. The source code to reproduce our experiments is available at https://github.com/facebookresearch/lagrangian-ot.
[ "['Aram-Alexandre Pooladian' 'Carles Domingo-Enrich' 'Ricky T. Q. Chen'\n 'Brandon Amos']" ]
null
null
2406.00290
null
null
http://arxiv.org/pdf/2406.00290v1
2024-06-01T03:36:03Z
2024-06-01T03:36:03Z
Phasor-Driven Acceleration for FFT-based CNNs
Recent research in deep learning (DL) has investigated the use of the Fast Fourier Transform (FFT) to accelerate the computations involved in Convolutional Neural Networks (CNNs) by replacing spatial convolution with element-wise multiplications in the spectral domain. These approaches mainly rely on the FFT to reduce the number of operations, which can be further decreased by adopting the Real-Valued FFT. In this paper, we propose using the phasor form, a polar representation of complex numbers, as a more efficient alternative to the traditional approach. The experimental results on CIFAR-10 demonstrate that our method achieves superior speed improvements of up to a factor of 1.376 (average of 1.316) during training and up to 1.390 (average of 1.321) during inference when compared to the traditional rectangular form employed in modern CNN architectures. Similarly, on CIFAR-100, our method achieves speed improvements of up to a factor of 1.375 (average of 1.299) during training and up to 1.387 (average of 1.300) during inference. Most importantly, given the modular aspect of our approach, the proposed method can be applied to any existing convolution-based DL model without design changes.
[ "['Eduardo Reis' 'Thangarajah Akilan' 'Mohammed Khalid']" ]
null
null
2406.00291
null
null
http://arxiv.org/pdf/2406.00291v1
2024-06-01T03:51:34Z
2024-06-01T03:51:34Z
Multi-objective Neural Architecture Search by Learning Search Space Partitions
Deploying deep learning models requires taking into consideration neural network metrics such as model size, inference latency, and #FLOPs, aside from inference accuracy. This results in deep learning model designers leveraging multi-objective optimization to design effective deep neural networks across multiple criteria. However, applying multi-objective optimizations to neural architecture search (NAS) is nontrivial because NAS tasks usually have a huge search space, along with a non-negligible searching cost. This requires effective multi-objective search algorithms to alleviate the GPU costs. In this work, we implement a novel multi-objective optimizer based on a recently proposed meta-algorithm called LaMOO on NAS tasks. In a nutshell, LaMOO speeds up the search process by learning a model from observed samples to partition the search space and then focusing on promising regions likely to contain a subset of the Pareto frontier. Using LaMOO, we observe an improvement of more than 200% in sample efficiency compared to Bayesian optimization and evolutionary-based multi-objective optimizers on different NAS datasets. For example, when combined with LaMOO, qEHVI achieves a 225% improvement in sample efficiency compared to using qEHVI alone on NasBench201. For real-world tasks, LaMOO achieves 97.36% accuracy with only 1.62M #Params on CIFAR10 in only 600 search samples. On ImageNet, our large model reaches 80.4% top-1 accuracy with only 522M #FLOPs.
[ "['Yiyang Zhao' 'Linnan Wang' 'Tian Guo']" ]
null
null
2406.00294
null
null
http://arxiv.org/pdf/2406.00294v1
2024-06-01T04:08:31Z
2024-06-01T04:08:31Z
Creative Text-to-Audio Generation via Synthesizer Programming
Neural audio synthesis methods now allow specifying ideas in natural language. However, these methods produce results that cannot be easily tweaked, as they are based on large latent spaces and up to billions of uninterpretable parameters. We propose a text-to-audio generation method that leverages a virtual modular sound synthesizer with only 78 parameters. Synthesizers have long been used by skilled sound designers for media like music and film due to their flexibility and intuitive controls. Our method, CTAG, iteratively updates a synthesizer's parameters to produce high-quality audio renderings of text prompts that can be easily inspected and tweaked. Sounds produced this way are also more abstract, capturing essential conceptual features over fine-grained acoustic details, akin to how simple sketches can vividly convey visual concepts. Our results show how CTAG produces sounds that are distinctive, perceived as artistic, and yet similarly identifiable to recent neural audio synthesis models, positioning it as a valuable and complementary tool.
[ "['Manuel Cherep' 'Nikhil Singh' 'Jessica Shand']" ]
null
null
2406.00300
null
null
http://arxiv.org/pdf/2406.00300v1
2024-06-01T05:01:25Z
2024-06-01T05:01:25Z
Coded Computing: A Learning-Theoretic Framework
Coded computing has emerged as a promising framework for tackling significant challenges in large-scale distributed computing, including the presence of slow, faulty, or compromised servers. In this approach, each worker node processes a combination of the data, rather than the raw data itself. The final result then is decoded from the collective outputs of the worker nodes. However, there is a significant gap between current coded computing approaches and the broader landscape of general distributed computing, particularly when it comes to machine learning workloads. To bridge this gap, we propose a novel foundation for coded computing, integrating the principles of learning theory, and developing a new framework that seamlessly adapts to machine learning applications. In this framework, the objective is to find the encoder and decoder functions that minimize the loss function, defined as the mean squared error between the estimated and true values. To facilitate the search for the optimal decoding and encoding functions, we show that the loss function can be upper-bounded by the summation of two terms: the generalization error of the decoding function and the training error of the encoding function. Focusing on the second-order Sobolev space, we then derive the optimal encoder and decoder. We show that in the proposed solution, the mean squared error of the estimation decays with the rate of $O(S^4 N^{-3})$ and $O(S^{8/5}N^{-3/5})$ in noiseless and noisy computation settings, respectively, where $N$ is the number of worker nodes with at most $S$ slow servers (stragglers). Finally, we evaluate the proposed scheme on inference tasks for various machine learning models and demonstrate that the proposed framework outperforms the state-of-the-art in terms of accuracy and rate of convergence.
[ "['Parsa Moradi' 'Behrooz Tahmasebi' 'Mohammad Ali Maddah-Ali']" ]
null
null
2406.00302
null
null
http://arxiv.org/pdf/2406.00302v1
2024-06-01T05:14:20Z
2024-06-01T05:14:20Z
FedAST: Federated Asynchronous Simultaneous Training
Federated Learning (FL) enables edge devices or clients to collaboratively train machine learning (ML) models without sharing their private data. Much of the existing work in FL focuses on efficiently learning a model for a single task. In this paper, we study simultaneous training of multiple FL models using a common set of clients. The few existing simultaneous training methods employ synchronous aggregation of client updates, which can cause significant delays because large models and/or slow clients can bottleneck the aggregation. On the other hand, a naive asynchronous aggregation is adversely affected by stale client updates. We propose FedAST, a buffered asynchronous federated simultaneous training algorithm that overcomes bottlenecks from slow models and adaptively allocates client resources across heterogeneous tasks. We provide theoretical convergence guarantees for FedAST for smooth non-convex objective functions. Extensive experiments over multiple real-world datasets demonstrate that our proposed method outperforms existing simultaneous FL approaches, achieving up to 46.0% reduction in time to train multiple tasks to completion.
[ "['Baris Askin' 'Pranay Sharma' 'Carlee Joe-Wong' 'Gauri Joshi']" ]
null
null
2406.00314
null
null
http://arxiv.org/pdf/2406.00314v2
2024-06-16T10:33:34Z
2024-06-01T06:17:32Z
CASE: Efficient Curricular Data Pre-training for Building Assistive Psychology Expert Models
The limited availability of psychologists necessitates efficient identification of individuals requiring urgent mental healthcare. This study explores the use of Natural Language Processing (NLP) pipelines to analyze text data from online mental health forums used for consultations. By analyzing forum posts, these pipelines can flag users who may require immediate professional attention. A crucial challenge in this domain is data privacy and scarcity. To address this, we propose utilizing readily available curricular texts used in institutes specializing in mental health for pre-training the NLP pipelines. This helps us mimic the training process of a psychologist. Our work presents CASE-BERT that flags potential mental health disorders based on forum text. CASE-BERT demonstrates superior performance compared to existing methods, achieving an F1 score of 0.91 for Depression and 0.88 for Anxiety, two of the most commonly reported mental health disorders. Our code is publicly available.
[ "['Sarthak Harne' 'Monjoy Narayan Choudhury' 'Madhav Rao' 'TK Srikanth'\n 'Seema Mehrotra' 'Apoorva Vashisht' 'Aarushi Basu' 'Manjit Sodhi']" ]
null
null
2406.00317
null
null
http://arxiv.org/pdf/2406.00317v1
2024-06-01T06:26:28Z
2024-06-01T06:26:28Z
Combining Experimental and Historical Data for Policy Evaluation
This paper studies policy evaluation with multiple data sources, especially in scenarios that involve one experimental dataset with two arms, complemented by a historical dataset generated under a single control arm. We propose novel data integration methods that linearly integrate base policy value estimators constructed based on the experimental and historical data, with weights optimized to minimize the mean square error (MSE) of the resulting combined estimator. We further apply the pessimistic principle to obtain more robust estimators, and extend these developments to sequential decision making. Theoretically, we establish non-asymptotic error bounds for the MSEs of our proposed estimators, and derive their oracle, efficiency and robustness properties across a broad spectrum of reward shift scenarios. Numerical experiments and real-data-based analyses from a ridesharing company demonstrate the superior performance of the proposed estimators.
[ "['Ting Li' 'Chengchun Shi' 'Qianglin Wen' 'Yang Sui' 'Yongli Qin'\n 'Chunbo Lai' 'Hongtu Zhu']" ]
null
null
2406.00318
null
null
http://arxiv.org/pdf/2406.00318v1
2024-06-01T06:28:41Z
2024-06-01T06:28:41Z
KGLink: A column type annotation method that combines knowledge graph and pre-trained language model
The semantic annotation of tabular data plays a crucial role in various downstream tasks. Previous research has proposed knowledge graph (KG)-based and deep learning-based methods, each with its inherent limitations. KG-based methods encounter difficulties annotating columns when there is no match for column cells in the KG. Moreover, KG-based methods can provide multiple predictions for one column, making it challenging to determine the semantic type with the most suitable granularity for the dataset. This type granularity issue limits their scalability. On the other hand, deep learning-based methods face challenges related to the valuable context missing issue. This occurs when the information within the table is insufficient for determining the correct column type. This paper presents KGLink, a method that combines WikiData KG information with a pre-trained deep learning language model for table column annotation, effectively addressing both type granularity and valuable context missing issues. Through comprehensive experiments on widely used tabular datasets encompassing numeric and string columns with varying type granularity, we showcase the effectiveness and efficiency of KGLink. By leveraging the strengths of KGLink, we successfully surmount challenges related to type granularity and valuable context issues, establishing it as a robust solution for the semantic annotation of tabular data.
[ "['Yubo Wang' 'Hao Xin' 'Lei Chen']" ]
null
null
2406.00324
null
null
http://arxiv.org/pdf/2406.00324v1
2024-06-01T06:56:27Z
2024-06-01T06:56:27Z
Do's and Don'ts: Learning Desirable Skills with Instruction Videos
Unsupervised skill discovery is a learning paradigm that aims to acquire diverse behaviors without explicit rewards. However, it faces challenges in learning complex behaviors and often leads to learning unsafe or undesirable behaviors. For instance, in various continuous control tasks, current unsupervised skill discovery methods succeed in learning basic locomotion like standing but struggle with learning more complex movements such as walking and running. Moreover, they may acquire unsafe behaviors like tripping and rolling or navigate to undesirable locations such as pitfalls or hazardous areas. In response, we present DoDont (Do's and Don'ts), an instruction-based skill discovery algorithm composed of two stages. First, in an instruction learning stage, DoDont leverages action-free instruction videos to train an instruction network to distinguish desirable transitions from undesirable ones. Then, in the skill learning stage, the instruction network adjusts the reward function of the skill discovery algorithm to weight the desired behaviors. Specifically, we integrate the instruction network into a distance-maximizing skill discovery algorithm, where the instruction network serves as the distance function. Empirically, with fewer than 8 instruction videos, DoDont effectively learns desirable behaviors and avoids undesirable ones across complex continuous control tasks. Code and videos are available at https://mynsng.github.io/dodont/
[ "['Hyunseung Kim' 'Byungkun Lee' 'Hojoon Lee' 'Dongyoon Hwang' 'Donghu Kim'\n 'Jaegul Choo']" ]
null
null
2406.00328
null
null
http://arxiv.org/pdf/2406.00328v1
2024-06-01T07:03:40Z
2024-06-01T07:03:40Z
Optimal bounds for $\ell_p$ sensitivity sampling via $\ell_2$ augmentation
Data subsampling is one of the most natural methods to approximate a massively large data set by a small representative proxy. In particular, sensitivity sampling, which samples points proportionally to an individual importance measure called sensitivity, has received a lot of attention. This framework reduces, in very general settings, the size of data to roughly the VC dimension $d$ times the total sensitivity $\mathfrak{S}$ while providing strong $(1\pm\varepsilon)$ guarantees on the quality of approximation. The recent work of Woodruff & Yasuda (2023c) improved substantially over the general $\tilde O(\varepsilon^{-2}\mathfrak{S}d)$ bound for the important problem of $\ell_p$ subspace embeddings to $\tilde O(\varepsilon^{-2}\mathfrak{S}^{2/p})$ for $p\in[1,2]$. Their result was subsumed by an earlier $\tilde O(\varepsilon^{-2}\mathfrak{S}d^{1-p/2})$ bound which was implicitly given in the work of Chen & Derezinski (2021). We show that their result is tight when sampling according to plain $\ell_p$ sensitivities. We observe that by augmenting the $\ell_p$ sensitivities with $\ell_2$ sensitivities, we obtain better bounds improving over the aforementioned results to an optimal linear $\tilde O(\varepsilon^{-2}(\mathfrak{S}+d)) = \tilde O(\varepsilon^{-2}d)$ sampling complexity for all $p \in [1,2]$. In particular, this resolves an open question of Woodruff & Yasuda (2023c) in the affirmative for $p \in [1,2]$ and brings sensitivity subsampling into the regime that was previously only known to be possible using Lewis weights (Cohen & Peng, 2015). As an application of our main result, we also obtain an $\tilde O(\varepsilon^{-2}\mu d)$ sensitivity sampling bound for logistic regression, where $\mu$ is a natural complexity measure for this problem. This improves over the previous $\tilde O(\varepsilon^{-2}\mu^2 d)$ bound of Mai et al. (2021) which was based on Lewis weights subsampling.
[ "['Alexander Munteanu' 'Simon Omlor']" ]
null
null
2406.00329
null
null
http://arxiv.org/pdf/2406.00329v2
2024-06-06T15:27:12Z
2024-06-01T07:08:45Z
Whole Heart 3D+T Representation Learning Through Sparse 2D Cardiac MR Images
Cardiac Magnetic Resonance (CMR) imaging serves as the gold-standard for evaluating cardiac morphology and function. Typically, a multi-view CMR stack, covering short-axis (SA) and 2/3/4-chamber long-axis (LA) views, is acquired for a thorough cardiac assessment. However, efficiently streamlining the complex, high-dimensional 3D+T CMR data and distilling compact, coherent representations remains a challenge. In this work, we introduce a whole-heart self-supervised learning framework that utilizes masked imaging modeling to automatically uncover the correlations between spatial and temporal patches throughout the cardiac stacks. This process facilitates the generation of meaningful and well-clustered heart representations without relying on the traditionally required, and often costly, labeled data. The learned heart representation can be directly used for various downstream tasks. Furthermore, our method demonstrates remarkable robustness, ensuring consistent representations even when certain CMR planes are missing or flawed. We train our model on 14,000 unlabeled CMR scans from the UK Biobank and evaluate it on 1,000 annotated scans. The proposed method demonstrates superior performance to baselines in tasks that demand comprehensive 3D+T cardiac information, e.g., cardiac phenotype (ejection fraction and ventricle volume) prediction and multi-plane/multi-frame CMR segmentation, highlighting its effectiveness in extracting comprehensive cardiac features that are both anatomically and pathologically relevant.
[ "['Yundi Zhang' 'Chen Chen' 'Suprosanna Shit' 'Sophie Starck'\n 'Daniel Rueckert' 'Jiazhen Pan']" ]
null
null
2406.00332
null
null
http://arxiv.org/pdf/2406.00332v1
2024-06-01T07:17:38Z
2024-06-01T07:17:38Z
A Structured Review of Literature on Uncertainty in Machine Learning & Deep Learning
The adoption and use of Machine Learning (ML) in our daily lives have led to concerns about lack of transparency, privacy, reliability, among others. As a result, we are seeing research in niche areas such as interpretability, causality, bias and fairness, and reliability. In this survey paper, we focus on a critical concern for the adoption of ML in risk-sensitive applications, namely understanding and quantifying uncertainty. Our paper approaches this topic in a structured way, providing a review of the literature across the various facets in which uncertainty arises in the ML process. We begin by defining uncertainty and its categories (e.g., aleatoric and epistemic), understanding sources of uncertainty (e.g., data and model), and how uncertainty can be assessed in terms of uncertainty quantification techniques (Ensembles, Bayesian Neural Networks, etc.). As part of our assessment and understanding of uncertainty in the ML realm, we cover metrics for uncertainty quantification for a single sample, dataset, and metrics for accuracy of the uncertainty estimation itself. This is followed by discussions on calibration (model and uncertainty), and decision making under uncertainty. Thus, we provide a more complete treatment of uncertainty: from the sources of uncertainty to the decision-making process. We have focused the review of uncertainty quantification methods on Deep Learning (DL), while providing the necessary background for uncertainty discussion within ML in general. Key contributions in this review are broadening the scope of uncertainty discussion, as well as an updated review of uncertainty quantification methods in DL.
[ "['Fahimeh Fakour' 'Ali Mosleh' 'Ramin Ramezani']" ]
null
null
2406.00335
null
null
http://arxiv.org/pdf/2406.00335v1
2024-06-01T07:23:37Z
2024-06-01T07:23:37Z
Benchmarking for Deep Uplift Modeling in Online Marketing
Online marketing is critical for many industrial platforms and business applications, aiming to increase user engagement and platform revenue by identifying corresponding delivery-sensitive groups for specific incentives, such as coupons and bonuses. As the scale and complexity of features in industrial scenarios increase, deep uplift modeling (DUM) as a promising technique has attracted increased research from academia and industry, resulting in various predictive models. However, current DUM still lacks standardized benchmarks and unified evaluation protocols, which limits the reproducibility of experimental results in existing studies, as well as the practical value and potential impact of work in this direction. In this paper, we provide an open benchmark for DUM and present comparison results of existing models in a reproducible and uniform manner. To this end, we conduct extensive experiments on two representative industrial datasets with different preprocessing settings to re-evaluate 13 existing models. Surprisingly, our experimental results show that the most recent work differs less than expected from traditional work in many cases. In addition, our experiments also reveal the limitations of DUM in generalization, especially for different preprocessing and test distributions. Our benchmarking work not only allows researchers to evaluate the performance of new models quickly, but also provides fair comparison results with existing models. It also gives practitioners valuable insights into often overlooked considerations when deploying DUM. We will make this benchmarking library, evaluation protocol, and experimental setup available on GitHub.
[ "['Dugang Liu' 'Xing Tang' 'Yang Qiao' 'Miao Liu' 'Zexu Sun' 'Xiuqiang He'\n 'Zhong Ming']" ]
null
null
2406.00339
null
null
http://arxiv.org/pdf/2406.00339v1
2024-06-01T07:33:41Z
2024-06-01T07:33:41Z
Turnstile $\ell_p$ leverage score sampling with applications
The turnstile data stream model offers the most flexible framework where data can be manipulated dynamically, i.e., rows, columns, and even single entries of an input matrix can be added, deleted, or updated multiple times in a data stream. We develop a novel algorithm for sampling rows $a_i$ of a matrix $A\in\mathbb{R}^{n\times d}$, proportional to their $\ell_p$ norm, when $A$ is presented in a turnstile data stream. Our algorithm not only returns the set of sampled row indexes, it also returns slightly perturbed rows $\tilde{a}_i \approx a_i$, and approximates their sampling probabilities up to $\varepsilon$ relative error. When combined with preconditioning techniques, our algorithm extends to $\ell_p$ leverage score sampling over turnstile data streams. With these properties in place, it allows us to simulate subsampling constructions of coresets for important regression problems to operate over turnstile data streams with very little overhead compared to their respective off-line subsampling algorithms. For logistic regression, our framework yields the first algorithm that achieves a $(1+\varepsilon)$ approximation and works in a turnstile data stream using polynomial sketch/subsample size, improving over $O(1)$ approximations, or $\exp(1/\varepsilon)$ sketch size of previous work. We compare experimentally to plain oblivious sketching and plain leverage score sampling algorithms for $\ell_p$ and logistic regression.
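To make the sampling distribution concrete, a minimal offline sketch of $\ell_p$-norm row sampling follows; the paper's actual contribution is maintaining this distribution under turnstile updates, which this toy version does not attempt, and all names are illustrative.

```python
import numpy as np

def lp_row_sample(A, k, p=2, rng=None):
    """Sample k row indices of A with probability proportional to the p-th
    power of each row's l_p norm -- the plain offline analogue of the
    sampling distribution the streaming algorithm maintains."""
    rng = np.random.default_rng(rng)
    probs = (np.abs(A) ** p).sum(axis=1)   # ||a_i||_p^p per row
    probs = probs / probs.sum()
    idx = rng.choice(A.shape[0], size=k, replace=True, p=probs)
    return idx, probs[idx]

A = np.random.default_rng(0).normal(size=(1000, 5))
idx, q = lp_row_sample(A, k=50, p=2, rng=1)
# Reweighting sampled rows by 1/(k * q_i) yields unbiased coreset estimates.
```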
[ "['Alexander Munteanu' 'Simon Omlor']" ]
null
null
2406.00345
null
null
http://arxiv.org/pdf/2406.00345v1
2024-06-01T07:46:42Z
2024-06-01T07:46:42Z
DeCoOp: Robust Prompt Tuning with Out-of-Distribution Detection
Vision-language models (VLMs), such as CLIP, have demonstrated impressive zero-shot capabilities for various downstream tasks. Their performance can be further enhanced through few-shot prompt tuning methods. However, current studies evaluate the performance of learned prompts separately on base and new classes. This evaluation lacks practicality for real-world applications since downstream tasks cannot determine whether the data belongs to base or new classes in advance. In this paper, we explore a problem setting called Open-world Prompt Tuning (OPT), which involves tuning prompts on base classes and evaluating on a combination of base and new classes. By introducing Decomposed Prompt Tuning framework (DePT), we theoretically demonstrate that OPT can be solved by incorporating out-of-distribution detection into prompt tuning, thereby enhancing the base-to-new discriminability. Based on DePT, we present a novel prompt tuning approach, namely, Decomposed Context Optimization (DeCoOp), which introduces new-class detectors and sub-classifiers to further enhance the base-class and new-class discriminability. Experimental results on 11 benchmark datasets validate the effectiveness of DePT and demonstrate that DeCoOp outperforms current state-of-the-art methods, providing a significant 2% average accuracy improvement.
[ "['Zhi Zhou' 'Ming Yang' 'Jiang-Xin Shi' 'Lan-Zhe Guo' 'Yu-Feng Li']" ]
null
null
2406.00368
null
null
http://arxiv.org/pdf/2406.00368v1
2024-06-01T09:03:32Z
2024-06-01T09:03:32Z
Modeling Randomly Observed Spatiotemporal Dynamical Systems
Spatiotemporal processes are a fundamental tool for modeling dynamics across various domains, from heat propagation in materials to oceanic and atmospheric flows. However, currently available neural network-based modeling approaches fall short when faced with data collected randomly over time and space, as is often the case with sensor networks in real-world applications like crowdsourced earthquake detection or pollution monitoring. In response, we developed a new spatiotemporal method that effectively handles such randomly sampled data. Our model integrates techniques from amortized variational inference, neural differential equations, neural point processes, and implicit neural representations to predict both the dynamics of the system and the probabilistic locations and timings of future observations. It outperforms existing methods on challenging spatiotemporal datasets by offering substantial improvements in predictive accuracy and computational efficiency, making it a useful tool for modeling and understanding complex dynamical systems observed under realistic, unconstrained conditions.
[ "['Valerii Iakovlev' 'Harri Lähdesmäki']" ]
null
null
2406.00371
null
null
http://arxiv.org/pdf/2406.00371v2
2024-06-04T15:16:28Z
2024-06-01T09:10:57Z
Alternative Methods to SHAP Derived from Properties of Kernels: A Note on Theoretical Analysis
This study first derives a general and analytical expression of AFA (Additive Feature Attribution) in terms of the kernel in LIME (Local Interpretable Model-agnostic Explanations). Then, we propose some new AFAs that have appropriate properties of kernels or that coincide with the LS prenucleolus in cooperative game theory. We also revisit existing AFAs such as SHAP (SHapley Additive exPlanations) and re-examine the properties of their kernels.
[ "['Kazuhiro Hiraki' 'Shinichi Ishihara' 'Junnosuke Shino']" ]
null
null
2406.00389
null
null
http://arxiv.org/pdf/2406.00389v1
2024-06-01T10:04:55Z
2024-06-01T10:04:55Z
Understanding the Convergence in Balanced Resonate-and-Fire Neurons
Resonate-and-Fire (RF) neurons are an interesting complementary model for integrator neurons in spiking neural networks (SNNs). Due to their resonating membrane dynamics they can extract frequency patterns within the time domain. While established RF variants suffer from intrinsic shortcomings, the recently proposed balanced resonate-and-fire (BRF) neuron marked a significant methodological advance in terms of task performance, spiking and parameter efficiency, as well as general stability and robustness, demonstrated for recurrent SNNs in various sequence learning tasks. One of the most intriguing results, however, was an immense improvement in training convergence speed and smoothness, overcoming the typical convergence dilemma in backprop-based SNN training. This paper aims at providing further intuitions about how and why these convergence advantages emerge. We show that BRF neurons, in contrast to well-established ALIF neurons, span a very clean and smooth - almost convex - error landscape. Furthermore, empirical results reveal that the convergence benefits are predominantly coupled with a divergence boundary-aware optimization, a major component of the BRF formulation that addresses the numerical stability of the time-discrete resonator approximation. These results are supported by a formal investigation of the membrane dynamics indicating that the gradient is transferred back through time without loss of magnitude.
[ "['Saya Higuchi' 'Sander M. Bohte' 'Sebastian Otte']" ]
null
null
2406.00394
null
null
http://arxiv.org/pdf/2406.00394v1
2024-06-01T10:42:52Z
2024-06-01T10:42:52Z
Learning Causal Abstractions of Linear Structural Causal Models
The need for modelling causal knowledge at different levels of granularity arises in several settings. Causal Abstraction provides a framework for formalizing this problem by relating two Structural Causal Models at different levels of detail. Despite increasing interest in applying causal abstraction, e.g. in the interpretability of large machine learning models, the graphical and parametrical conditions under which a causal model can abstract another are not known. Furthermore, learning causal abstractions from data is still an open problem. In this work, we tackle both issues for linear causal models with linear abstraction functions. First, we characterize how the low-level coefficients and the abstraction function determine the high-level coefficients and how the high-level model constrains the causal ordering of low-level variables. Then, we apply our theoretical results to learn high-level and low-level causal models and their abstraction function from observational data. In particular, we introduce Abs-LiNGAM, a method that leverages the constraints induced by the learned high-level model and the abstraction function to speedup the recovery of the larger low-level model, under the assumption of non-Gaussian noise terms. In simulated settings, we show the effectiveness of learning causal abstractions from data and the potential of our method in improving scalability of causal discovery.
[ "['Riccardo Massidda' 'Sara Magliacane' 'Davide Bacciu']" ]
null
null
2406.00396
null
null
http://arxiv.org/pdf/2406.00396v1
2024-06-01T10:45:41Z
2024-06-01T10:45:41Z
Stochastic Restarting to Overcome Overfitting in Neural Networks with Noisy Labels
Despite its prevalence, giving up and starting over may seem wasteful in many situations such as searching for a target or training deep neural networks (DNNs). Our study, though, demonstrates that restarting from a checkpoint can significantly improve generalization performance when training DNNs with noisy labels. In the presence of noisy labels, DNNs initially learn the general patterns of the data but then gradually overfit to the noisy labels. To combat this overfitting phenomenon, we developed a method based on stochastic restarting, which has been actively explored in the statistical physics field for finding targets efficiently. By approximating the dynamics of stochastic gradient descent into Langevin dynamics, we theoretically show that restarting can provide great improvements as the batch size and the proportion of corrupted data increase. We then empirically validate our theory, confirming the significant improvements achieved by restarting. An important aspect of our method is its ease of implementation and compatibility with other methods, while still yielding notably improved performance. We envision it as a valuable tool that can complement existing methods for handling noisy labels.
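As a rough illustration of the mechanism, the sketch below resets the parameters to a stored checkpoint with some probability per step; the checkpointing schedule and interfaces here are assumptions for illustration, not the authors' code.

```python
import copy
import random

def train_with_restarts(params, step_fn, n_steps,
                        restart_rate=0.005, warmup=500, seed=0):
    # `params` is a plain dict of arrays; `step_fn(params)` performs one
    # in-place SGD step. Both are hypothetical interfaces.
    rng = random.Random(seed)
    checkpoint = None
    for t in range(n_steps):
        step_fn(params)
        if t == warmup:
            # snapshot taken while the network is still learning general
            # patterns, before it starts memorizing noisy labels
            checkpoint = copy.deepcopy(params)
        elif checkpoint is not None and rng.random() < restart_rate:
            params.clear()
            params.update(copy.deepcopy(checkpoint))  # stochastic restart
    return params
```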
[ "['Youngkyoung Bae' 'Yeongwoo Song' 'Hawoong Jeong']" ]
null
null
2406.00403
null
null
http://arxiv.org/pdf/2406.00403v1
2024-06-01T11:11:49Z
2024-06-01T11:11:49Z
Dual-perspective Cross Contrastive Learning in Graph Transformers
Graph contrastive learning (GCL) is a popular method for learning graph representations by maximizing the consistency of features across augmented views. Traditional GCL methods utilize single-perspective (i.e., data- or model-perspective) augmentation to generate positive samples, restraining the diversity of positive samples. In addition, these positive samples may be unreliable due to uncontrollable augmentation strategies that potentially alter the semantic information. To address these challenges, this paper proposes an innovative framework termed dual-perspective cross graph contrastive learning (DC-GCL), which incorporates three modifications designed to enhance positive sample diversity and reliability: 1) We propose a dual-perspective augmentation strategy that provides the model with more diverse training data, enabling the model to effectively learn feature consistency across different views. 2) From the data perspective, we slightly perturb the original graphs using controllable data augmentation, effectively preserving their semantic information. 3) From the model perspective, we enhance the encoder by utilizing more powerful graph transformers instead of graph neural networks. Based on the model's architecture, we propose three pruning-based strategies to slightly perturb the encoder, providing more reliable positive samples. These modifications collectively form the DC-GCL's foundation and provide more diverse and reliable training inputs, offering significant improvements over traditional GCL methods. Extensive experiments on various benchmarks demonstrate that DC-GCL consistently outperforms different baselines on various datasets and tasks.
[ "['Zelin Yao' 'Chuang Liu' 'Xueqi Ma' 'Mukun Chen' 'Jia Wu' 'Xiantao Cai'\n 'Bo Du' 'Wenbin Hu']" ]
null
null
2406.00409
null
null
http://arxiv.org/pdf/2406.00409v1
2024-06-01T11:43:00Z
2024-06-01T11:43:00Z
Arabic Handwritten Text for Person Biometric Identification: A Deep Learning Approach
This study thoroughly investigates how well deep learning models can recognize Arabic handwritten text for person biometric identification. It compares three advanced architectures -- ResNet50, MobileNetV2, and EfficientNetB7 -- using three widely recognized datasets: AHAWP, Khatt, and LAMIS-MSHD. Results show that EfficientNetB7 outperforms the others, achieving test accuracies of 98.57%, 99.15%, and 99.79% on AHAWP, Khatt, and LAMIS-MSHD datasets, respectively. EfficientNetB7's exceptional performance is credited to its innovative techniques, including compound scaling, depth-wise separable convolutions, and squeeze-and-excitation blocks. These features allow the model to extract more abstract and distinctive features from handwritten text images. The study's findings hold significant implications for enhancing identity verification and authentication systems, highlighting the potential of deep learning in Arabic handwritten text recognition for person biometric identification.
[ "['Mazen Balat' 'Youssef Mohamed' 'Ahmed Heakl' 'Ahmed Zaky']" ]
null
null
2406.00410
null
null
http://arxiv.org/pdf/2406.00410v1
2024-06-01T11:59:49Z
2024-06-01T11:59:49Z
Posterior Label Smoothing for Node Classification
Soft labels can improve the generalization of a neural network classifier in many domains, such as image classification. Despite this success, the current literature has overlooked the efficacy of label smoothing in node classification with graph-structured data. In this work, we propose a simple yet effective label smoothing method for the transductive node classification task. We design the soft label to encapsulate the local context of the target node through the neighborhood label distribution. We apply the smoothing method to seven baseline models to show its effectiveness. The label smoothing method improves the classification accuracy on 10 node classification datasets in most cases. In the following analysis, we find that incorporating global label statistics in posterior computation is the key to the success of label smoothing. Further investigation reveals that the soft labels mitigate overfitting during training, leading to better generalization performance.
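A toy version of the core construction, soft labels built from the neighborhood label distribution, might look as follows; the paper's full posterior additionally folds in global label statistics, and these names are illustrative.

```python
import numpy as np

def neighborhood_soft_labels(adj, labels, n_classes, alpha=0.1):
    # adj: (n, n) adjacency matrix; labels: (n,) integer class ids
    one_hot = np.eye(n_classes)[labels]                # hard labels, (n, C)
    neigh = adj @ one_hot                              # neighbor label counts
    neigh = neigh / np.maximum(neigh.sum(1, keepdims=True), 1e-12)
    return (1 - alpha) * one_hot + alpha * neigh       # smoothed targets

adj = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
labels = np.array([0, 1, 1])
print(neighborhood_soft_labels(adj, labels, n_classes=2))
```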
[ "['Jaeseung Heo' 'Moonjeong Park' 'Dongwoo Kim']" ]
null
null
2406.00416
null
null
http://arxiv.org/pdf/2406.00416v1
2024-06-01T12:24:23Z
2024-06-01T12:24:23Z
Representation and De-interleaving of Mixtures of Hidden Markov Processes
De-interleaving of mixtures of Hidden Markov Processes (HMPs) generally depends on the representation model. Existing representation models consider Markov chain mixtures rather than hidden Markov ones, resulting in a lack of robustness to non-ideal situations such as observation noise or missing observations. Besides, existing de-interleaving methods utilize a search-based strategy, which is time-consuming. To address these issues, this paper proposes a novel representation model and corresponding de-interleaving methods for mixtures of HMPs. First, a generative model for representing mixtures of HMPs is designed, and the de-interleaving process is formulated as posterior inference for this generative model. Second, an exact inference method is developed to maximize the likelihood of the complete data, and two approximate inference methods are developed to maximize the evidence lower bound by creating tractable structures. Then, a theoretical lower bound on the error probability is derived using the likelihood ratio test, and the algorithms are shown to get reasonably close to this bound. Finally, simulation results demonstrate that the proposed methods are highly effective and robust in non-ideal situations, outperforming baseline methods on simulated and real-life data.
[ "['Jiadi Bao' 'Mengtao Zhu' 'Yunjie Li' 'Shafei Wang']" ]
null
null
2406.00418
null
null
http://arxiv.org/pdf/2406.00418v1
2024-06-01T12:31:15Z
2024-06-01T12:31:15Z
GATE: How to Keep Out Intrusive Neighbors
Graph Attention Networks (GATs) are designed to provide flexible neighborhood aggregation that assigns weights to neighbors according to their importance. In practice, however, GATs are often unable to switch off task-irrelevant neighborhood aggregation, as we show experimentally and analytically. To address this challenge, we propose GATE, a GAT extension that holds three major advantages: i) It alleviates over-smoothing by addressing its root cause of unnecessary neighborhood aggregation. ii) Similarly to perceptrons, it benefits from higher depth as it can still utilize additional layers for (non-)linear feature transformations in case of (nearly) switched-off neighborhood aggregation. iii) By down-weighting connections to unrelated neighbors, it often outperforms GATs on real-world heterophilic datasets. To further validate our claims, we construct a synthetic test bed to analyze a model's ability to utilize the appropriate amount of neighborhood aggregation, which could be of independent interest.
[ "['Nimrah Mustafa' 'Rebekka Burkholz']" ]
null
null
2406.00423
null
null
http://arxiv.org/abs/2406.00423v1
2024-06-01T12:41:03Z
2024-06-01T12:41:03Z
Multimodal Metadata Assignment for Cultural Heritage Artifacts
We develop a multimodal classifier for the cultural heritage domain using a late fusion approach and introduce a novel dataset. The three modalities are Image, Text, and Tabular data. We based the image classifier on a ResNet convolutional neural network architecture and the text classifier on a multilingual transformer architecture (XLM-RoBERTa). Both are trained as multitask classifiers and use the focal loss to handle class imbalance. Tabular data and late fusion are handled by Gradient Tree Boosting. We also show how we leveraged specific data models and taxonomy in a Knowledge Graph to create the dataset and to store classification results. All individual classifiers accurately predict missing properties in the digitized silk artifacts, with the multimodal approach providing the best results.
[ "['Luis Rei' 'Dunja Mladenić' 'Mareike Dorozynski' 'Franz Rottensteiner'\n 'Thomas Schleider' 'Raphaël Troncy' 'Jorge Sebastián Lozano'\n 'Mar Gaitán Salvatella']" ]
null
null
2406.00424
null
null
http://arxiv.org/pdf/2406.00424v1
2024-06-01T12:41:50Z
2024-06-01T12:41:50Z
A Batch Sequential Halving Algorithm without Performance Degradation
In this paper, we investigate the problem of pure exploration in the context of multi-armed bandits, with a specific focus on scenarios where arms are pulled in fixed-size batches. Batching has been shown to enhance computational efficiency, but it can potentially lead to performance degradation compared to the original sequential algorithm due to delayed feedback and reduced adaptability. We introduce a simple batch version of the Sequential Halving (SH) algorithm (Karnin et al., 2013) and provide theoretical evidence that batching does not degrade the performance of the original algorithm under practical conditions. Furthermore, we empirically validate our claim through experiments, demonstrating the robust nature of the SH algorithm in fixed-size batch settings.
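For intuition, a small sketch of Sequential Halving in which each arm's per-round budget is spent in fixed-size batches; `pull(arm, n)` is an assumed helper returning the mean of n reward samples, not the authors' API.

```python
import math
import random

def batch_sequential_halving(pull, n_arms, budget, batch_size):
    arms = list(range(n_arms))
    rounds = math.ceil(math.log2(n_arms))
    means = {}
    for _ in range(rounds):
        per_arm = budget // (len(arms) * rounds)   # equal budget per round
        for a in arms:
            pulled, total = 0, 0.0
            while pulled < per_arm:                # spend budget in batches
                n = min(batch_size, per_arm - pulled)
                total += pull(a, n) * n
                pulled += n
            means[a] = total / max(pulled, 1)
        # keep the better half of the surviving arms
        arms = sorted(arms, key=lambda a: -means[a])[: max(1, len(arms) // 2)]
    return arms[0]

true_means = [0.2, 0.5, 0.8, 0.4]
best = batch_sequential_halving(
    lambda a, n: sum(random.gauss(true_means[a], 1) for _ in range(n)) / n,
    n_arms=4, budget=4000, batch_size=32)
print(best)  # 2 with high probability
```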
[ "['Sotetsu Koyamada' 'Soichiro Nishimori' 'Shin Ishii']" ]
null
null
2406.00426
null
null
http://arxiv.org/pdf/2406.00426v3
2024-06-11T12:53:03Z
2024-06-01T12:48:11Z
InterpreTabNet: Distilling Predictive Signals from Tabular Data by Salient Feature Interpretation
Tabular data are omnipresent in various sectors of industries. Neural networks for tabular data such as TabNet have been proposed to make predictions while leveraging the attention mechanism for interpretability. However, the inferred attention masks are often dense, making it challenging to come up with rationales about the predictive signal. To remedy this, we propose InterpreTabNet, a variant of the TabNet model that models the attention mechanism as a latent variable sampled from a Gumbel-Softmax distribution. This enables us to regularize the model to learn distinct concepts in the attention masks via a KL Divergence regularizer. It prevents overlapping feature selection by promoting sparsity which maximizes the model's efficacy and improves interpretability to determine the important features when predicting the outcome. To assist in the interpretation of feature interdependencies from our model, we employ a large language model (GPT-4) and use prompt engineering to map from the learned feature mask onto natural language text describing the learned signal. Through comprehensive experiments on real-world datasets, we demonstrate that InterpreTabNet outperforms previous methods for interpreting tabular data while attaining competitive accuracy.
[ "['Jacob Si' 'Wendy Yusi Cheng' 'Michael Cooper' 'Rahul G. Krishnan']" ]
null
null
2406.00431
null
null
http://arxiv.org/pdf/2406.00431v1
2024-06-01T13:10:35Z
2024-06-01T13:10:35Z
SpaFL: Communication-Efficient Federated Learning with Sparse Models and Low Computational Overhead
The large communication and computation overhead of federated learning (FL) is one of the main challenges facing its practical deployment over resource-constrained clients and systems. In this work, we propose SpaFL, a communication-efficient FL framework that optimizes sparse model structures with low computational overhead. In SpaFL, a trainable threshold is defined for each filter/neuron to prune all of its connected parameters, thereby leading to structured sparsity. To optimize the pruning process itself, only thresholds are communicated between a server and clients instead of parameters, thereby learning how to prune. Further, global thresholds are used to update model parameters by extracting aggregated parameter importance. The generalization bound of SpaFL is also derived, providing key insights into the relation between sparsity and performance. Experimental results show that SpaFL improves accuracy while requiring much less communication and computing resources compared to sparse baselines.
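The threshold-driven structured pruning at the heart of SpaFL can be sketched as a hard forward-pass mask; the paper learns the thresholds, which requires a differentiable relaxation not shown here, and these names are illustrative.

```python
import numpy as np

def threshold_prune(weights, thresholds):
    # weights: (filters, fan_in); thresholds: one trainable scalar per filter
    importance = np.abs(weights).mean(axis=1)          # score per filter
    mask = (importance > thresholds).astype(weights.dtype)
    return weights * mask[:, None], mask               # structured sparsity

W = np.random.randn(8, 16)
pruned, mask = threshold_prune(W, thresholds=np.full(8, 0.8))
print(mask)  # which filters survived; only the 8 thresholds need be shared
```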
[ "['Minsu Kim' 'Walid Saad' 'Merouane Debbah' 'Choong Seon Hong']" ]
null
null
2406.00438
null
null
http://arxiv.org/pdf/2406.00438v2
2024-06-04T09:57:19Z
2024-06-01T13:24:48Z
Stein Random Feature Regression
In large-scale regression problems, random Fourier features (RFFs) have significantly enhanced the computational scalability and flexibility of Gaussian processes (GPs) by defining kernels through their spectral density, from which a finite set of Monte Carlo samples can be used to form an approximate low-rank GP. However, the efficacy of RFFs in kernel approximation and Bayesian kernel learning depends on the ability to tractably sample the kernel spectral measure and the quality of the generated samples. We introduce Stein random features (SRF), leveraging Stein variational gradient descent, which can be used to both generate high-quality RFF samples of known spectral densities as well as flexibly and efficiently approximate traditionally non-analytical spectral measure posteriors. SRFs require only the evaluation of log-probability gradients to perform both kernel approximation and Bayesian kernel learning that results in superior performance over traditional approaches. We empirically validate the effectiveness of SRFs by comparing them to baselines on kernel approximation and well-known GP regression problems.
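For reference, the standard random Fourier feature construction that SRFs build on is sketched below with i.i.d. Gaussian spectral samples; SRFs would replace these draws with particles transported by Stein variational gradient descent.

```python
import numpy as np

def rff_features(X, n_features, lengthscale=1.0, rng=None):
    # Monte Carlo feature map for the RBF kernel k(x, y) =
    # exp(-||x - y||^2 / (2 * lengthscale^2)), whose spectral density is
    # Gaussian with standard deviation 1 / lengthscale.
    rng = np.random.default_rng(rng)
    d = X.shape[1]
    W = rng.normal(0.0, 1.0 / lengthscale, size=(d, n_features))
    b = rng.uniform(0.0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

X = np.random.randn(200, 3)
Phi = rff_features(X, n_features=500, lengthscale=1.5, rng=0)
K_approx = Phi @ Phi.T   # low-rank approximation of the exact kernel matrix
```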
[ "['Houston Warren' 'Rafael Oliveira' 'Fabio Ramos']" ]
null
null
2406.00441
null
null
http://arxiv.org/pdf/2406.00441v1
2024-06-01T13:39:27Z
2024-06-01T13:39:27Z
Neural Polarization: Toward Electron Density for Molecules by Extending Equivariant Networks
Recent SO(3)-equivariant models embed a molecule as a set of single atoms fixed in three-dimensional space, which is analogous to a ball-and-stick view. This perspective provides a concise view of atom arrangements; however, the surrounding electron density cannot be represented, and its polarization effects may be underestimated. To overcome this limitation, we propose \textit{Neural Polarization}, a novel method that extends equivariant networks by embedding each atom as a pair of fixed and moving points. Motivated by density functional theory, Neural Polarization represents molecules as a space-filling view that includes the electron density, in contrast with a ball-and-stick view. Neural Polarization can be flexibly applied to most types of existing equivariant models. We show that Neural Polarization can improve the prediction performance of existing models over a wide range of targets. Finally, we formally verify the expressiveness and equivariance properties of our method.
[ "['Bumju Kwak' 'Jeonghee Jo']" ]
null
null
2406.00447
null
null
http://arxiv.org/pdf/2406.00447v1
2024-06-01T14:06:46Z
2024-06-01T14:06:46Z
DroneVis: Versatile Computer Vision Library for Drones
This paper introduces DroneVis, a novel library designed to automate computer vision algorithms on Parrot drones. DroneVis offers a versatile set of features and provides a diverse range of computer vision tasks along with a variety of models to choose from. Implemented in Python, the library adheres to high-quality code standards, facilitating effortless customization and feature expansion according to user requirements. In addition, comprehensive documentation is provided, encompassing usage guidelines and illustrative use cases. Our documentation, code, and examples are available in https://github.com/ahmedheakl/drone-vis.
[ "['Ahmed Heakl' 'Fatma Youssef' 'Victor Parque' 'Walid Gomaa']" ]
null
null
2406.00452
null
null
http://arxiv.org/pdf/2406.00452v1
2024-06-01T14:30:12Z
2024-06-01T14:30:12Z
Towards a Unified Framework of Clustering-based Anomaly Detection
Unsupervised Anomaly Detection (UAD) plays a crucial role in identifying abnormal patterns within data without labeled examples, holding significant practical implications across various domains. Although the individual contributions of representation learning and clustering to anomaly detection are well-established, their interdependencies remain under-explored due to the absence of a unified theoretical framework. Consequently, their collective potential to enhance anomaly detection performance remains largely untapped. To bridge this gap, in this paper, we propose a novel probabilistic mixture model for anomaly detection to establish a theoretical connection among representation learning, clustering, and anomaly detection. By maximizing a novel anomaly-aware data likelihood, representation learning and clustering can effectively reduce the adverse impact of anomalous data and collaboratively benefit anomaly detection. Meanwhile, a theoretically substantiated anomaly score is naturally derived from this framework. Lastly, drawing inspiration from gravitational analysis in physics, we have devised an improved anomaly score that more effectively harnesses the combined power of representation learning and clustering. Extensive experiments, involving 17 baseline methods across 30 diverse datasets, validate the effectiveness and generalization capability of the proposed method, surpassing state-of-the-art methods.
[ "['Zeyu Fang' 'Ming Gu' 'Sheng Zhou' 'Jiawei Chen' 'Qiaoyu Tan'\n 'Haishuai Wang' 'Jiajun Bu']" ]
null
null
2406.00456
null
null
http://arxiv.org/pdf/2406.00456v1
2024-06-01T14:45:03Z
2024-06-01T14:45:03Z
Mix-of-Granularity: Optimize the Chunking Granularity for Retrieval-Augmented Generation
Integrating information from different reference data sources is a major challenge for Retrieval-Augmented Generation (RAG) systems because each knowledge source adopts a unique data structure and follows different conventions. Retrieving from multiple knowledge sources with one fixed strategy usually leads to under-exploitation of information. To mitigate this drawback, inspired by Mix-of-Experts, we introduce Mix-of-Granularity (MoG), a method that dynamically determines the optimal granularity of a knowledge database based on input queries using a router. The router is efficiently trained with a newly proposed loss function employing soft labels. We further extend MoG to Mix-of-Granularity-Graph (MoGG), where reference documents are pre-processed into graphs, enabling the retrieval of relevant information from distantly situated chunks. Extensive experiments demonstrate that both MoG and MoGG effectively predict optimal granularity levels, significantly enhancing the performance of the RAG system in downstream tasks. The code of both MoG and MoGG will be made public.
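A heavily simplified picture of the router, a softmax over candidate granularities given a query embedding, is sketched below; none of these names come from the MoG codebase, and the actual router is trained with the soft-label loss rather than using fixed scores.

```python
import numpy as np

def granularity_weights(query_emb, granularity_embs, temperature=1.0):
    # One embedding per candidate granularity (e.g., sentence, paragraph,
    # section); returns mixing weights over the granularities for this query.
    logits = granularity_embs @ query_emb / temperature
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

q = np.random.randn(64)
g = np.random.randn(3, 64)          # hypothetical granularity embeddings
print(granularity_weights(q, g))    # weights used to blend retrieved chunks
```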
[ "['Zijie Zhong' 'Hanwen Liu' 'Xiaoya Cui' 'Xiaofan Zhang' 'Zengchang Qin']" ]
null
null
2406.00469
null
null
http://arxiv.org/pdf/2406.00469v1
2024-06-01T15:38:57Z
2024-06-01T15:38:57Z
Learning to Solve Multiresolution Matrix Factorization by Manifold Optimization and Evolutionary Metaheuristics
Multiresolution Matrix Factorization (MMF) is unusual amongst fast matrix factorization algorithms in that it does not make a low rank assumption. This makes MMF especially well suited to modeling certain types of graphs with complex multiscale or hierarchical structure. While MMF promises to yield a useful wavelet basis, finding the factorization itself is hard, and existing greedy methods tend to be brittle. In this paper, we propose a ``learnable'' version of MMF that carefully optimizes the factorization using metaheuristics, specifically evolutionary algorithms and directed evolution, along with Stiefel manifold optimization through backpropagating errors. We show that the resulting wavelet basis far outperforms prior MMF algorithms and gives comparable performance on standard learning tasks on graphs. Furthermore, we construct wavelet neural networks (WNNs) that learn graphs in the spectral domain with the wavelet basis produced by our MMF learning algorithm. Our wavelet networks are competitive against other state-of-the-art methods in molecular graph classification and node classification on citation graphs. We release our implementation at https://github.com/HySonLab/LearnMMF
[ "['Truong Son Hy' 'Thieu Khang' 'Risi Kondor']" ]
null
null
2406.00483
null
null
http://arxiv.org/pdf/2406.00483v1
2024-06-01T16:29:03Z
2024-06-01T16:29:03Z
Exploring the limits of Hierarchical World Models in Reinforcement Learning
Hierarchical model-based reinforcement learning (HMBRL) aims to combine the better sample efficiency of model-based reinforcement learning (MBRL) with the abstraction capability of hierarchical reinforcement learning (HRL) to solve complex tasks efficiently. While HMBRL has great potential, it still lacks wide adoption. In this work we describe a novel HMBRL framework and evaluate it thoroughly. To complement the multi-layered decision making idiom characteristic of HRL, we construct hierarchical world models that simulate environment dynamics at various levels of temporal abstraction. These models are used to train a stack of agents that communicate in a top-down manner by proposing goals to their subordinate agents. A significant focus of this study is the exploration of a static and environment agnostic temporal abstraction, which allows concurrent training of models and agents throughout the hierarchy. Unlike most goal-conditioned H(MB)RL approaches, it also leads to comparatively low dimensional abstract actions. Although our HMBRL approach did not outperform traditional methods in terms of final episode returns, it successfully facilitated decision making across two levels of abstraction using compact, low dimensional abstract actions. A central challenge in enhancing our method's performance, as uncovered through comprehensive experimentation, is model exploitation on the abstract level of our world model stack. We provide an in-depth examination of this issue, discussing its implications for the field and suggesting directions for future research to overcome this challenge. By sharing these findings, we aim to contribute to the broader discourse on refining HMBRL methodologies and to assist in the development of more effective autonomous learning systems for complex decision-making environments.
[ "['Robin Schiewer' 'Anand Subramoney' 'Laurenz Wiskott']" ]
null
null
2406.00487
null
null
http://arxiv.org/pdf/2406.00487v1
2024-06-01T16:36:40Z
2024-06-01T16:36:40Z
Optimistic Rates for Learning from Label Proportions
We consider a weakly supervised learning problem called Learning from Label Proportions (LLP), where examples are grouped into ``bags'' and only the average label within each bag is revealed to the learner. We study various learning rules for LLP that achieve PAC learning guarantees for classification loss. We establish that the classical Empirical Proportional Risk Minimization (EPRM) learning rule (Yu et al., 2014) achieves fast rates under realizability, but EPRM and similar proportion matching learning rules can fail in the agnostic setting. We also show that (1) a debiased proportional square loss, as well as (2) a recently proposed EasyLLP learning rule (Busa-Fekete et al., 2023) both achieve ``optimistic rates'' (Panchenko, 2002); in both the realizable and agnostic settings, their sample complexity is optimal (up to log factors) in terms of $\epsilon, \delta$, and VC dimension.
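As an illustration of proportion-matching objectives in LLP, a sketch of a squared proportional loss for binary labels follows; this is one plausible instantiation, not the exact EPRM rule from the cited works.

```python
import numpy as np

def proportion_matching_loss(scores, bag_ids, bag_proportions):
    # scores: raw model outputs per example; bag_proportions: {bag: p_bag}
    loss = 0.0
    for b, target in bag_proportions.items():
        p = 1.0 / (1.0 + np.exp(-scores[bag_ids == b]))   # sigmoid
        loss += (p.mean() - target) ** 2   # match the revealed bag average
    return loss / len(bag_proportions)

scores = np.random.randn(100)
bag_ids = np.repeat(np.arange(10), 10)     # ten bags of ten examples
props = {b: 0.3 for b in range(10)}
print(proportion_matching_loss(scores, bag_ids, props))
```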
[ "['Gene Li' 'Lin Chen' 'Adel Javanmard' 'Vahab Mirrokni']" ]
null
null
2406.00488
null
null
http://arxiv.org/pdf/2406.00488v1
2024-06-01T16:37:08Z
2024-06-01T16:37:08Z
Federated Model Heterogeneous Matryoshka Representation Learning
Model heterogeneous federated learning (MHeteroFL) enables FL clients to collaboratively train models with heterogeneous structures in a distributed fashion. However, existing MHeteroFL methods rely on training loss to transfer knowledge between the client model and the server model, resulting in limited knowledge exchange. To address this limitation, we propose the Federated model heterogeneous Matryoshka Representation Learning (FedMRL) approach for supervised learning tasks. It adds an auxiliary small homogeneous model shared by clients with heterogeneous local models. (1) The generalized and personalized representations extracted by the two models' feature extractors are fused by a personalized lightweight representation projector. This step enables representation fusion to adapt to local data distribution. (2) The fused representation is then used to construct Matryoshka representations with multi-dimensional and multi-granular embedded representations learned by the global homogeneous model header and the local heterogeneous model header. This step facilitates multi-perspective representation learning and improves model learning capability. Theoretical analysis shows that FedMRL achieves a $O(1/T)$ non-convex convergence rate. Extensive experiments on benchmark datasets demonstrate its superior model accuracy with low communication and computational costs compared to seven state-of-the-art baselines. It achieves up to 8.48% and 24.94% accuracy improvement compared with the state-of-the-art and the best same-category baseline, respectively.
[ "['Liping Yi' 'Han Yu' 'Chao Ren' 'Gang Wang' 'Xiaoguang Liu' 'Xiaoxiao Li']" ]
null
null
2406.00489
null
null
http://arxiv.org/pdf/2406.00489v1
2024-06-01T16:38:43Z
2024-06-01T16:38:43Z
Efficient Sign-Based Optimization: Accelerating Convergence via Variance Reduction
Sign stochastic gradient descent (signSGD) is a communication-efficient method that transmits only the sign of stochastic gradients for parameter updating. Existing literature has demonstrated that signSGD can achieve a convergence rate of $\mathcal{O}(d^{1/2}T^{-1/4})$, where $d$ represents the dimension and $T$ is the iteration number. In this paper, we improve this convergence rate to $\mathcal{O}(d^{1/2}T^{-1/3})$ by introducing the Sign-based Stochastic Variance Reduction (SSVR) method, which employs variance reduction estimators to track gradients and leverages their signs to update. For finite-sum problems, our method can be further enhanced to achieve a convergence rate of $\mathcal{O}(m^{1/4}d^{1/2}T^{-1/2})$, where $m$ denotes the number of component functions. Furthermore, we investigate the heterogeneous majority vote in distributed settings and introduce two novel algorithms that attain improved convergence rates of $\mathcal{O}(d^{1/2}T^{-1/2} + dn^{-1/2})$ and $\mathcal{O}(d^{1/4}T^{-1/4})$ respectively, outperforming the previous results of $\mathcal{O}(dT^{-1/4} + dn^{-1/2})$ and $\mathcal{O}(d^{3/8}T^{-1/8})$, where $n$ represents the number of nodes. Numerical experiments across different tasks validate the effectiveness of our proposed methods.
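The baseline signSGD update the paper improves on is one line; in the sketch below a noisy quadratic stands in for a real loss, and SSVR would replace the raw stochastic gradient with a variance-reduced estimator.

```python
import numpy as np

def signsgd_step(params, grad_fn, lr=1e-2):
    # Transmit/use only the sign of the stochastic gradient.
    return params - lr * np.sign(grad_fn(params))

# Toy objective f(x) = ||x||^2 / 2 with noisy gradient x + noise.
x = np.array([3.0, -2.0, 0.5])
for _ in range(1000):
    x = signsgd_step(x, lambda p: p + 0.01 * np.random.randn(*p.shape))
print(x)   # oscillates within O(lr) of the minimizer at the origin
```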
[ "['Wei Jiang' 'Sifan Yang' 'Wenhao Yang' 'Lijun Zhang']" ]
null
null
2406.00492
null
null
http://arxiv.org/pdf/2406.00492v1
2024-06-01T16:45:33Z
2024-06-01T16:45:33Z
SAM-VMNet: Deep Neural Networks For Coronary Angiography Vessel Segmentation
Coronary artery disease (CAD) is one of the most prevalent diseases in the cardiovascular field and one of the major contributors to death worldwide. Computed Tomography Angiography (CTA) images are regarded as the authoritative standard for the diagnosis of coronary artery disease, and by performing vessel segmentation and stenosis detection on CTA images, physicians are able to diagnose coronary artery disease more accurately. In order to combine the advantages of both the base model and the domain-specific model, and to achieve high-precision and fully-automatic segmentation and detection with a limited number of training samples, we propose a novel architecture, SAM-VMNet, which combines the powerful feature extraction capability of MedSAM with the linear-complexity visual state-space model of VM-UNet, giving it faster inference speed and stronger data processing capability than Vision Transformer-based models, and achieving higher segmentation accuracy and stability for CTA images. Experimental results show that the SAM-VMNet architecture performs excellently on the CTA image segmentation task, with a segmentation accuracy of up to 98.32% and a sensitivity of up to 99.33%, which is significantly better than other existing models and has stronger domain adaptability. Comprehensive evaluation of the CTA image segmentation task shows that SAM-VMNet accurately extracts the vascular trunks and capillaries, demonstrating its great potential and wide range of application scenarios for the vascular segmentation task, and also laying a solid foundation for further stenosis detection.
[ "['Xueying Zeng' 'Baixiang Huang' 'Yu Luo' 'Guangyu Wei' 'Songyan He'\n 'Yushuang Shao']" ]
null
null
2406.00494
null
null
http://arxiv.org/pdf/2406.00494v1
2024-06-01T16:46:46Z
2024-06-01T16:46:46Z
Activation-Descent Regularization for Input Optimization of ReLU Networks
We present a new approach for input optimization of ReLU networks that explicitly takes into account the effect of changes in activation patterns. We analyze local optimization steps in both the input space and the space of activation patterns to propose methods with superior local descent properties. To accomplish this, we convert the discrete space of activation patterns into differentiable representations and propose regularization terms that improve each descent step. Our experiments demonstrate the effectiveness of the proposed input-optimization methods for improving the state-of-the-art in various areas, such as adversarial learning, generative modeling, and reinforcement learning.
[ "['Hongzhan Yu' 'Sicun Gao']" ]
null
null
2406.00499
null
null
http://arxiv.org/pdf/2406.00499v1
2024-06-01T17:01:01Z
2024-06-01T17:01:01Z
Conformal Transformation of Kernels: A Geometric Perspective on Text Classification
In this article we investigate the effects of conformal transformations on kernel functions used in Support Vector Machines. Our focus lies in the task of text document categorization, which involves assigning each document to a particular category. We introduce a new Gaussian Cosine kernel alongside two conformal transformations. Building upon previous studies that demonstrated the efficacy of conformal transformations in increasing class separability on synthetic and low-dimensional datasets, we extend this analysis to the high-dimensional domain of text data. Our experiments, conducted on the Reuters dataset on two types of binary classification tasks, compare the performance of Linear, Gaussian, and Gaussian Cosine kernels against their conformally transformed counterparts. The findings indicate that conformal transformations can significantly improve kernel performance, particularly for sub-optimal kernels. Specifically, improvements were observed in 60% of the tested scenarios for the Linear kernel, 84% for the Gaussian kernel, and 80% for the Gaussian Cosine kernel. In light of these findings, it becomes clear that conformal transformations play a pivotal role in enhancing kernel performance, offering substantial benefits.
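A conformal transformation of a kernel takes the form K~(x, y) = c(x) K(x, y) c(y) for a positive function c, which rescales the induced metric while preserving positive semi-definiteness; a minimal sketch with an illustrative (not the paper's) choice of c:

```python
import numpy as np

def conformal_transform(kernel, c):
    # K~(x, y) = c(x) * K(x, y) * c(y); PSD is preserved since this is a
    # diagonal rescaling D K D with D = diag(c(x_i)) on any finite sample.
    return lambda x, y: c(x) * kernel(x, y) * c(y)

rbf = lambda x, y: np.exp(-np.sum((x - y) ** 2))
c = lambda x: np.exp(-0.5 * np.sum(x ** 2))     # hypothetical scaling factor
k_tilde = conformal_transform(rbf, c)
print(k_tilde(np.zeros(2), np.ones(2)))
```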
[ "['Ioana Rădulescu' 'Alexandra Băicoianu' 'Adela Mihai']" ]
null
null
2406.00501
null
null
http://arxiv.org/pdf/2406.00501v1
2024-06-01T17:09:18Z
2024-06-01T17:09:18Z
Diffusion-based Image Generation for In-distribution Data Augmentation in Surface Defect Detection
In this study, we show that diffusion models can be used in industrial scenarios to improve the data augmentation procedure in the context of surface defect detection. In general, defect detection classifiers are trained on ground-truth data formed by normal samples (negative data) and samples with defects (positive data), where the latter are consistently fewer than normal samples. For these reasons, state-of-the-art data augmentation procedures add synthetic defect data by superimposing artifacts to normal samples. This leads to out-of-distribution augmented data so that the classification system learns what is not a normal sample but does not know what a defect really is. We show that diffusion models overcome this situation, providing more realistic in-distribution defects so that the model can learn the defect's genuine appearance. We propose a novel approach for data augmentation that mixes out-of-distribution with in-distribution samples, which we call In&Out. The approach can deal with two data augmentation setups: i) when no defects are available (zero-shot data augmentation) and ii) when defects are available, which can be in a small number (few-shot) or a large one (full-shot). We focus the experimental part on the most challenging benchmark in the state-of-the-art, i.e., the Kolektor Surface-Defect Dataset 2, achieving a new state-of-the-art classification AP score of .782 under weak supervision. The code is available at https://github.com/intelligolabs/in_and_out.
[ "['Luigi Capogrosso' 'Federico Girella' 'Francesco Taioli'\n 'Michele Dalla Chiara' 'Muhammad Aqeel' 'Franco Fummi' 'Francesco Setti'\n 'Marco Cristani']" ]
null
null
2406.00502
null
null
http://arxiv.org/pdf/2406.00502v1
2024-06-01T17:10:56Z
2024-06-01T17:10:56Z
Non-geodesically-convex optimization in the Wasserstein space
We study a class of optimization problems in the Wasserstein space (the space of probability measures) where the objective function is \emph{nonconvex} along generalized geodesics. When the regularization term is the negative entropy, the optimization problem becomes a sampling problem where it minimizes the Kullback-Leibler divergence between a probability measure (optimization variable) and a target probability measure whose logarithmic probability density is a nonconvex function. We derive multiple convergence insights for a novel \emph{semi Forward-Backward Euler scheme} under several nonconvex (and possibly nonsmooth) regimes. Notably, the semi Forward-Backward Euler is just a slight modification of the Forward-Backward Euler whose convergence is -- to our knowledge -- still unknown in our very general non-geodesically-convex setting.
[ "['Hoang Phuc Hau Luu' 'Hanlin Yu' 'Bernardo Williams' 'Petrus Mikkola'\n 'Marcelo Hartmann' 'Kai Puolamäki' 'Arto Klami']" ]
null
null
2406.00503
null
null
http://arxiv.org/pdf/2406.00503v3
2024-06-16T16:33:43Z
2024-06-01T17:22:00Z
Schrödinger Bridge with Quadratic State Cost is Exactly Solvable
Schrödinger bridge is a diffusion process that steers a given distribution to another in a prescribed time while minimizing the effort to do so. It can be seen as the stochastic dynamical version of the optimal mass transport, and has growing applications in generative diffusion models and stochastic optimal control. In this work, we propose a regularized variant of the Schrödinger bridge with a quadratic state cost-to-go that incentivizes the optimal sample paths to stay close to a nominal level. Unlike the conventional Schrödinger bridge, the regularization induces a state-dependent rate of killing and creation of probability mass, and its solution requires determining the Markov kernel of a reaction-diffusion partial differential equation. We derive this Markov kernel in closed form. Our solution recovers the heat kernel in the vanishing regularization (i.e., diffusion without reaction) limit, thereby recovering the solution of the conventional Schrödinger bridge. Our results enable the use of dynamic Sinkhorn recursion for computing the Schrödinger bridge with a quadratic state cost-to-go, which would otherwise be challenging to use in this setting. We deduce properties of the new kernel and explain its connections with certain exactly solvable models in quantum mechanics.
[ "['Alexis M. H. Teter' 'Wenqing Wang' 'Abhishek Halder']" ]
null
null
2406.00509
null
null
http://arxiv.org/pdf/2406.00509v1
2024-06-01T17:31:06Z
2024-06-01T17:31:06Z
Empirical influence functions to understand the logic of fine-tuning
Understanding the process of learning in neural networks is crucial for improving their performance and interpreting their behavior. This can be approximately understood by asking how a model's output is influenced when we fine-tune on a new training sample. There are desiderata for such influences, such as decreasing influence with semantic distance, sparseness, noise invariance, transitive causality, and logical consistency. Here we use the empirical influence measured using fine-tuning to demonstrate how individual training samples affect outputs. We show that these desiderata are violated both for simple convolutional networks and for a modern LLM. We also illustrate how prompting can partially rescue this failure. Our paper presents an efficient and practical way of quantifying how well neural networks learn from fine-tuning stimuli. Our results suggest that popular models cannot generalize or perform logic in the way they appear to.
[ "['Jordan K. Matelsky' 'Lyle Ungar' 'Konrad P. Kording']" ]
null
null
2406.00518
null
null
http://arxiv.org/pdf/2406.00518v1
2024-06-01T18:00:01Z
2024-06-01T18:00:01Z
Learning to Play Air Hockey with Model-Based Deep Reinforcement Learning
In the context of addressing the Robot Air Hockey Challenge 2023, we investigate the applicability of model-based deep reinforcement learning to acquire a policy capable of autonomously playing air hockey. Our agents learn solely from sparse rewards while incorporating self-play to iteratively refine their behaviour over time. The robotic manipulator is interfaced using continuous high-level actions for position-based control in the Cartesian plane while having partial observability of the environment with stochastic transitions. We demonstrate that agents are prone to overfitting when trained solely against a single playstyle, highlighting the importance of self-play for generalization to novel strategies of unseen opponents. Furthermore, the impact of the imagination horizon is explored in the competitive setting of the highly dynamic game of air hockey, with longer horizons resulting in more stable learning and better overall performance.
[ "['Andrej Orsula']" ]
null
null
2406.00519
null
null
http://arxiv.org/pdf/2406.00519v1
2024-06-01T18:01:03Z
2024-06-01T18:01:03Z
Learning Discrete Concepts in Latent Hierarchical Models
Learning concepts from natural high-dimensional data (e.g., images) holds potential in building human-aligned and interpretable machine learning models. Despite its encouraging prospect, formalization and theoretical insights into this crucial task are still lacking. In this work, we formalize concepts as discrete latent causal variables that are related via a hierarchical causal model that encodes different abstraction levels of concepts embedded in high-dimensional data (e.g., a dog breed and its eye shapes in natural images). We formulate conditions to facilitate the identification of the proposed causal model, which reveals when learning such concepts from unsupervised data is possible. Our conditions permit complex causal hierarchical structures beyond latent trees and multi-level directed acyclic graphs in prior work and can handle high-dimensional, continuous observed variables, which is well-suited for unstructured data modalities such as images. We substantiate our theoretical claims with synthetic data experiments. Further, we discuss our theory's implications for understanding the underlying mechanisms of latent diffusion models and provide corresponding empirical evidence for our theoretical insights.
[ "['Lingjing Kong' 'Guangyi Chen' 'Biwei Huang' 'Eric P. Xing' 'Yuejie Chi'\n 'Kun Zhang']" ]
null
null
2406.00524
null
null
http://arxiv.org/pdf/2406.00524v1
2024-06-01T18:23:58Z
2024-06-01T18:23:58Z
Adaptive boosting with dynamic weight adjustment
Adaptive Boosting with Dynamic Weight Adjustment is an enhancement of traditional Adaptive Boosting, commonly known as AdaBoost, a powerful ensemble learning technique. It improves efficiency and accuracy by dynamically updating the weights of the instances based on prediction error, where weights are updated in proportion to the error rather than uniformly as in traditional AdaBoost. Adaptive Boosting with Dynamic Weight Adjustment performs better than Adaptive Boosting as it can handle more complex data relations, allowing the model to handle imbalance and noise better, leading to more accurate and balanced predictions. The proposed model provides a more flexible and effective approach to boosting, particularly in challenging classification tasks.
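Since the abstract does not spell out the exact update rule, the sketch below shows one plausible error-proportional reweighting layered on standard AdaBoost stumps; treat it as illustrative, not the authors' algorithm.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_dynamic(X, y, n_rounds=20):
    # y must take values 0..C-1 so it indexes predict_proba columns directly.
    n = len(y)
    w = np.full(n, 1.0 / n)
    learners, alphas = [], []
    for _ in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        err = np.clip(w[pred != y].sum(), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        # dynamic step: weight grows with each sample's own error magnitude
        loss = 1.0 - stump.predict_proba(X)[np.arange(n), y]
        w *= np.exp(alpha * loss)          # proportional, not uniform, bump
        w /= w.sum()
        learners.append(stump)
        alphas.append(alpha)
    return learners, alphas                # predict by alpha-weighted vote

X = np.random.randn(300, 2)
y = (X[:, 0] * X[:, 1] > 0).astype(int)
learners, alphas = adaboost_dynamic(X, y)
```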
[ "['Vamsi Sai Ranga Sri Harsha Mangina']" ]
null
null
2406.00529
null
null
http://arxiv.org/pdf/2406.00529v1
2024-06-01T18:43:43Z
2024-06-01T18:43:43Z
On the Use of Anchoring for Training Vision Models
Anchoring is a recent, architecture-agnostic principle for training deep neural networks that has been shown to significantly improve uncertainty estimation, calibration, and extrapolation capabilities. In this paper, we systematically explore anchoring as a general protocol for training vision models, providing fundamental insights into its training and inference processes and their implications for generalization and safety. Despite its promise, we identify a critical problem in anchored training that can lead to an increased risk of learning undesirable shortcuts, thereby limiting its generalization capabilities. To address this, we introduce a new anchored training protocol that employs a simple regularizer to mitigate this issue and significantly enhances generalization. We empirically evaluate our proposed approach across datasets and architectures of varying scales and complexities, demonstrating substantial performance gains in generalization and safety metrics compared to the standard training protocol.
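The basic anchoring reparameterization, feeding the network a (reference, residual) pair instead of the raw input, can be sketched as follows with anchors drawn from the batch; the protocol proposed in the paper adds a regularizer on top of this construction.

```python
import numpy as np

def anchored_input(x, reference):
    # The network consumes [anchor, x - anchor]; at inference, averaging
    # predictions over several anchors also yields uncertainty estimates.
    return np.concatenate([reference, x - reference], axis=-1)

x = np.random.randn(4, 32)                  # a batch of inputs
ref = x[np.random.permutation(len(x))]      # anchors sampled from the batch
net_in = anchored_input(x, ref)             # shape (4, 64)
```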
[ "['Vivek Narayanaswamy' 'Kowshik Thopalli' 'Rushil Anirudh' 'Yamen Mubarka'\n 'Wesam Sakla' 'Jayaraman J. Thiagarajan']" ]
null
null
2406.00532
null
null
http://arxiv.org/pdf/2406.00532v1
2024-06-01T18:50:03Z
2024-06-01T18:50:03Z
Breast Cancer Diagnosis: A Comprehensive Exploration of Explainable Artificial Intelligence (XAI) Techniques
Breast cancer (BC) stands as one of the most common malignancies affecting women worldwide, necessitating advancements in diagnostic methodologies for better clinical outcomes. This article provides a comprehensive exploration of the application of Explainable Artificial Intelligence (XAI) techniques in the detection and diagnosis of breast cancer. As Artificial Intelligence (AI) technologies continue to permeate the healthcare sector, particularly in oncology, the need for transparent and interpretable models becomes imperative to enhance clinical decision-making and patient care. This review discusses the integration of various XAI approaches, such as SHAP, LIME, Grad-CAM, and others, with machine learning and deep learning models utilized in breast cancer detection and classification. By investigating the modalities of breast cancer datasets, including mammograms, ultrasounds and their processing with AI, the paper highlights how XAI can lead to more accurate diagnoses and personalized treatment plans. It also examines the challenges in implementing these techniques and the importance of developing standardized metrics for evaluating XAI's effectiveness in clinical settings. Through detailed analysis and discussion, this article aims to highlight the potential of XAI in bridging the gap between complex AI models and practical healthcare applications, thereby fostering trust and understanding among medical professionals and improving patient outcomes.
[ "['Samita Bai' 'Sidra Nasir' 'Rizwan Ahmed Khan' 'Sheeraz Arif'\n 'Alexandre Meyer' 'Hubert Konik']" ]
null
null
2406.00535
null
null
http://arxiv.org/pdf/2406.00535v2
2024-06-29T14:14:04Z
2024-06-01T19:07:25Z
Causal Contrastive Learning for Counterfactual Regression Over Time
Estimating treatment effects over time holds significance in various domains, including precision medicine, epidemiology, economics, and marketing. This paper introduces a unique approach to counterfactual regression over time, emphasizing long-term predictions. Distinguishing itself from existing models like Causal Transformer, our approach highlights the efficacy of employing RNNs for long-term forecasting, complemented by Contrastive Predictive Coding (CPC) and Information Maximization (InfoMax). Emphasizing efficiency, we avoid the need for computationally expensive transformers. Leveraging CPC, our method captures long-term dependencies in the presence of time-varying confounders. Notably, recent models have disregarded the importance of invertible representation, compromising identification assumptions. To remedy this, we employ the InfoMax principle, maximizing a lower bound of mutual information between sequence data and its representation. Our method achieves state-of-the-art counterfactual estimation results using both synthetic and real-world data, marking the pioneering incorporation of Contrastive Predictive Coding in causal inference.
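For context, the InfoNCE objective underlying Contrastive Predictive Coding can be sketched in a few lines; this is the generic loss, not the paper's full counterfactual architecture.

```python
import numpy as np

def infonce(z, z_pos, temperature=0.1):
    # Each z[i] should score its positive z_pos[i] above all other
    # candidates in the batch (in-batch negatives).
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    z_pos = z_pos / np.linalg.norm(z_pos, axis=1, keepdims=True)
    logits = z @ z_pos.T / temperature
    logits -= logits.max(axis=1, keepdims=True)        # for stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

z = np.random.randn(32, 16)
z_aug = z + 0.1 * np.random.randn(32, 16)              # toy positives
print(infonce(z, z_aug))
```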
[ "['Mouad El Bouchattaoui' 'Myriam Tami' 'Benoit Lepetit'\n 'Paul-Henry Cournède']" ]
null
null
2406.00539
null
null
http://arxiv.org/pdf/2406.00539v1
2024-06-01T19:34:48Z
2024-06-01T19:34:48Z
CONFINE: Conformal Prediction for Interpretable Neural Networks
Deep neural networks exhibit remarkable performance, yet their black-box nature limits their utility in fields like healthcare where interpretability is crucial. Existing explainability approaches often sacrifice accuracy and lack quantifiable measures of prediction uncertainty. In this study, we introduce Conformal Prediction for Interpretable Neural Networks (CONFINE), a versatile framework that generates prediction sets with statistically robust uncertainty estimates instead of point predictions to enhance model transparency and reliability. CONFINE not only provides example-based explanations and confidence estimates for individual predictions but also boosts accuracy by up to 3.6%. We define a new metric, correct efficiency, to evaluate the fraction of prediction sets that contain precisely the correct label and show that CONFINE achieves correct efficiency of up to 3.3% higher than the original accuracy, matching or exceeding prior methods. CONFINE's marginal and class-conditional coverages attest to its validity across tasks spanning medical image classification to language understanding. Being adaptable to any pre-trained classifier, CONFINE marks a significant advance towards transparent and trustworthy deep learning applications in critical domains.
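CONFINE builds on split conformal prediction; the generic calibration step that yields prediction sets with marginal coverage at least 1 - alpha can be sketched as follows (scores and data are placeholders).

```python
import numpy as np

def conformal_prediction_sets(cal_scores, test_scores, alpha=0.1):
    # cal_scores: nonconformity of the true label on a held-out calibration
    # set; test_scores[i, k]: nonconformity of class k for test point i.
    n = len(cal_scores)
    q = np.quantile(cal_scores, np.ceil((n + 1) * (1 - alpha)) / n,
                    method="higher")
    return test_scores <= q            # boolean membership matrix

cal = np.random.rand(500)              # e.g., 1 - softmax prob of true class
test = np.random.rand(10, 5)
sets = conformal_prediction_sets(cal, test, alpha=0.1)
print(sets.sum(axis=1))                # per-example prediction-set sizes
```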
[ "['Linhui Huang' 'Sayeri Lala' 'Niraj K. Jha']" ]
null
null
2406.00544
null
null
http://arxiv.org/pdf/2406.00544v1
2024-06-01T19:51:29Z
2024-06-01T19:51:29Z
Leveraging Knowledge Graphs for Interpretable Feature Generation
The quality of Machine Learning (ML) models strongly depends on the input data; as such, Feature Engineering (FE) is often required in ML. In addition, with the proliferation of ML-powered systems, especially in critical contexts, the need for interpretability and explainability becomes increasingly important. Since manual FE is time-consuming and requires case-specific knowledge, we propose KRAFT, an AutoFE framework that leverages a knowledge graph to guide the generation of interpretable features. Our hybrid AI approach combines a neural generator that transforms raw features through a series of transformations and a knowledge-based reasoner that evaluates feature interpretability using Description Logics (DL). The generator is trained through Deep Reinforcement Learning (DRL) to maximize the prediction accuracy and the interpretability of the generated features. Extensive experiments on real datasets demonstrate that KRAFT significantly improves accuracy while ensuring a high level of interpretability.
[ "['Mohamed Bouadi' 'Arta Alavi' 'Salima Benbernou' 'Mourad Ouziri']" ]
null
null
2406.00548
null
null
http://arxiv.org/pdf/2406.00548v1
2024-06-01T20:12:54Z
2024-06-01T20:12:54Z
LIDAO: Towards Limited Interventions for Debiasing (Large) Language Models
Large language models (LLMs) have achieved impressive performance on various natural language generation tasks. Nonetheless, they suffer from generating negative and harmful content that is biased against certain demographic groups (e.g., female), raising severe fairness concerns. As remedies, prior works intervened in the generation by removing attitude or demographic information, inevitably degrading the generation quality and resulting in notable fairness-fluency trade-offs. However, it is still under-explored to what extent fluency has to be affected in order to achieve a desired level of fairness. In this work, we conduct the first formal study from an information-theoretic perspective. We show that previous approaches are excessive for debiasing and propose LIDAO, a general framework that provably debiases a (L)LM while retaining better fluency. We further robustify LIDAO in adversarial scenarios, where a carefully crafted prompt may stimulate LLMs with instruction-following abilities to generate texts whose fairness issues appear only when the prompt is also taken into account. Experiments on three LMs ranging from 0.7B to 7B parameters demonstrate the superiority of our method.
[ "['Tianci Liu' 'Haoyu Wang' 'Shiyang Wang' 'Yu Cheng' 'Jing Gao']" ]
null
null
2406.00551
null
null
http://arxiv.org/pdf/2406.00551v1
2024-06-01T20:46:40Z
2024-06-01T20:46:40Z
Strategic Linear Contextual Bandits
Motivated by the phenomenon of strategic agents gaming a recommender system to maximize the number of times they are recommended to users, we study a strategic variant of the linear contextual bandit problem, where the arms can strategically misreport their privately observed contexts to the learner. We treat the algorithm design problem as one of mechanism design under uncertainty and propose the Optimistic Grim Trigger Mechanism (OptGTM) that incentivizes the agents (i.e., arms) to report their contexts truthfully while simultaneously minimizing regret. We also show that failing to account for the strategic nature of the agents results in linear regret. However, a trade-off between mechanism design and regret minimization appears to be unavoidable. More broadly, this work aims to provide insight into the intersection of online learning and mechanism design.
[ "['Thomas Kleine Buening' 'Aadirupa Saha' 'Christos Dimitrakakis'\n 'Haifeng Xu']" ]
null
null
2406.00552
null
null
http://arxiv.org/pdf/2406.00552v2
2024-06-08T05:52:08Z
2024-06-01T21:07:24Z
Graph Neural Network Training Systems: A Performance Comparison of Full-Graph and Mini-Batch
Graph Neural Networks (GNNs) have gained significant attention in recent years due to their ability to learn representations of graph-structured data. Two common methods for training GNNs are mini-batch training and full-graph training. Since these two methods require different training pipelines and systems optimizations, two separate categories of GNN training systems emerged, each tailored for one method. Works that introduce systems belonging to a particular category predominantly compare them with other systems within the same category, offering limited or no comparison with systems from the other category. Some prior work also justifies its focus on one specific training method by arguing that it achieves higher accuracy than the alternative. The literature, however, has incomplete and contradictory evidence in this regard. In this paper, we provide a comprehensive empirical comparison of full-graph and mini-batch GNN training systems to get a clearer picture of the state of the art in the field. We find that the mini-batch training systems we consider consistently converge faster than the full-graph training ones across multiple datasets, GNN models, and system configurations, with speedups between 2.4x and 15.2x. We also find that both training techniques converge to similar accuracy values, so comparing systems across the two categories in terms of time-to-accuracy is a sound approach.
[ "['Saurabh Bajaj' 'Hui Guan' 'Marco Serafini']" ]
null
null
2406.00561
null
null
http://arxiv.org/pdf/2406.00561v1
2024-06-01T21:54:01Z
2024-06-01T21:54:01Z
Learning to Approximate Particle Smoothing Trajectories via Diffusion Generative Models
Learning dynamical systems from sparse observations is critical in numerous fields, including biology, finance, and physics. While tackling such problems is standard in general information fusion, it remains challenging for contemporary machine learning models, such as diffusion models. We introduce a method that integrates conditional particle filtering with ancestral sampling and diffusion models, enabling the generation of realistic trajectories that align with observed data. Our approach uses a smoother based on iterating a conditional particle filter with ancestral sampling to first generate plausible trajectories matching observed marginals, and then learns the corresponding diffusion model. This approach provides both a generative method for high-quality, smoothed trajectories under complex constraints, and an efficient approximation of the particle smoothing distribution for classical tracking problems. We demonstrate the approach in time-series generation and interpolation tasks, including vehicle tracking and single-cell RNA sequencing data.
[ "['Ella Tamir' 'Arno Solin']" ]
null
null
2406.00566
null
null
http://arxiv.org/pdf/2406.00566v1
2024-06-01T22:23:51Z
2024-06-01T22:23:51Z
An Unsupervised Approach for Periodic Source Detection in Time Series
Detection of periodic patterns of interest within noisy time series data plays a critical role in various tasks, spanning from health monitoring to behavior analysis. Existing learning techniques often rely on labels or clean versions of signals for detecting the periodicity, and those employing self-supervised learning methods are required to apply proper augmentations, which is already challenging for time series and can result in collapse, where all representations collapse to a single point due to strong augmentations. In this work, we propose a novel method to detect the periodicity in time series without the need for any labels or requiring tailored positive or negative data generation mechanisms with specific augmentations. We mitigate the collapse issue by ensuring the learned representations retain information from the original samples without imposing any random variance constraints on the batch. Our experiments in three time series tasks against state-of-the-art learning methods show that the proposed approach consistently outperforms prior works, achieving performance improvements of more than 45-50%, showing its effectiveness. Code: https://github.com/eth-siplab/Unsupervised_Periodicity_Detection
[ "['Berken Utku Demirel' 'Christian Holz']" ]
null
null
2406.00569
null
null
http://arxiv.org/pdf/2406.00569v1
2024-06-01T22:40:31Z
2024-06-01T22:40:31Z
Redefining Contributions: Shapley-Driven Federated Learning
Federated learning (FL) has emerged as a pivotal approach in machine learning, enabling multiple participants to collaboratively train a global model without sharing raw data. While FL finds applications in various domains such as healthcare and finance, it is challenging to ensure global model convergence when participants do not contribute equally and/or honestly. To overcome this challenge, principled mechanisms are required to evaluate the contributions made by individual participants in the FL setting. Existing solutions for contribution assessment rely on general accuracy evaluation, often failing to capture nuanced dynamics and class-specific influences. This paper proposes a novel contribution assessment method called ShapFed for fine-grained evaluation of participant contributions in FL. Our approach uses Shapley values from cooperative game theory to provide a granular understanding of class-specific influences. Based on ShapFed, we introduce a weighted aggregation method called ShapFed-WA, which outperforms conventional federated averaging, especially in class-imbalanced scenarios. Personalizing participant updates based on their contributions further enhances collaborative fairness by delivering differentiated models commensurate with the participant contributions. Experiments on CIFAR-10, Chest X-Ray, and Fed-ISIC2019 datasets demonstrate the effectiveness of our approach in improving utility, efficiency, and fairness in FL systems. The code can be found at https://github.com/tnurbek/shapfed.
[ "['Nurbek Tastan' 'Samar Fares' 'Toluwani Aremu' 'Samuel Horvath'\n 'Karthik Nandakumar']" ]
null
null
2406.00570
null
null
http://arxiv.org/pdf/2406.00570v1
2024-06-01T22:55:33Z
2024-06-01T22:55:33Z
A Gaussian Process-based Streaming Algorithm for Prediction of Time Series With Regimes and Outliers
Online prediction of time series under regime switching is a widely studied problem in the literature, with many celebrated approaches. Using the non-parametric flexibility of Gaussian processes, the recently proposed INTEL algorithm provides a product of experts approach to online prediction of time series under possible regime switching, including the special case of outliers. This is achieved by adaptively combining several candidate models, each reporting their predictive distribution at time $t$. However, the INTEL algorithm uses a finite context window approximation to the predictive distribution, the computation of which scales cubically with the maximum lag, or otherwise scales quartically with exact predictive distributions. We introduce LINTEL, which uses the exact filtering distribution at time $t$ with constant-time updates, making the time complexity of the streaming algorithm optimal. We additionally note that the weighting mechanism of INTEL is better suited to a mixture of experts approach, and propose a fusion policy based on arithmetic averaging for LINTEL. We show experimentally that our proposed approach is over five times faster than INTEL under reasonable settings with better-quality predictions.
[ "['Daniel Waxman' 'Petar M. Djurić']" ]
null
null
2406.00573
null
null
http://arxiv.org/pdf/2406.00573v1
2024-06-01T23:32:29Z
2024-06-01T23:32:29Z
VOICE: Variance of Induced Contrastive Explanations to quantify Uncertainty in Neural Network Interpretability
In this paper, we visualize and quantify the predictive uncertainty of gradient-based post hoc visual explanations for neural networks. Predictive uncertainty refers to the variability in the network predictions under perturbations to the input. Visual post hoc explainability techniques highlight features within an image to justify a network's prediction. We theoretically show that existing evaluation strategies of visual explanatory techniques partially reduce the predictive uncertainty of neural networks. This analysis allows us to construct a plug-in approach to visualize and quantify the remaining predictive uncertainty of any gradient-based explanatory technique. We show that every image, network, prediction, and explanatory technique has a unique uncertainty. The proposed uncertainty visualization and quantification yields two key observations. Firstly, oftentimes under incorrect predictions, explanatory techniques are uncertain about the same features that they are attributing the predictions to, thereby reducing the trustworthiness of the explanation. Secondly, objective metrics of an explanation's uncertainty empirically behave similarly to epistemic uncertainty. We support these observations on two datasets, four explanatory techniques, and six neural network architectures. The code is available at https://github.com/olivesgatech/VOICE-Uncertainty.
[ "['Mohit Prabhushankar' 'Ghassan AlRegib']" ]
null
null
2406.00578
null
null
http://arxiv.org/pdf/2406.00578v1
2024-06-02T00:00:00Z
2024-06-02T00:00:00Z
ContextFlow++: Generalist-Specialist Flow-based Generative Models with Mixed-Variable Context Encoding
Normalizing flow-based generative models have been widely used in applications where the exact density estimation is of major importance. Recent research proposes numerous methods to improve their expressivity. However, conditioning on a context is a largely overlooked area in bijective flow research. Conventional conditioning with vector concatenation is limited to only a few flow types. More importantly, this approach cannot support a practical setup where a set of context-conditioned (specialist) models are trained with the fixed pretrained general-knowledge (generalist) model. We propose the ContextFlow++ approach to overcome these limitations using additive conditioning with explicit generalist-specialist knowledge decoupling. Furthermore, we support discrete contexts by the proposed mixed-variable architecture with context encoders. Particularly, our context encoder for discrete variables is a surjective flow from which the context-conditioned continuous variables are sampled. Our experiments on rotated MNIST-R, corrupted CIFAR-10C, real-world ATM predictive maintenance and SMAP unsupervised anomaly detection benchmarks show that the proposed ContextFlow++ offers faster stable training and achieves higher performance metrics. Our code is publicly available at https://github.com/gudovskiy/contextflow.
[ "['Denis Gudovskiy' 'Tomoyuki Okuno' 'Yohei Nakata']" ]
null
null
2406.00588
null
null
http://arxiv.org/pdf/2406.00588v1
2024-06-02T01:46:58Z
2024-06-02T01:46:58Z
Generalization Bound and New Algorithm for Clean-Label Backdoor Attack
The generalization bound is a crucial theoretical tool for assessing the generalizability of learning methods, and there is a vast literature on the generalizability of normal learning, adversarial learning, and data poisoning. Unlike other data poisoning attacks, the backdoor attack has the special property that the poisoned triggers are contained in both the training set and the test set, and that the purpose of the attack is two-fold. To our knowledge, the generalization bound for the backdoor attack has not been established. In this paper, we fill this gap by deriving algorithm-independent generalization bounds in the clean-label backdoor attack scenario. Precisely, based on the goals of backdoor attack, we give upper bounds for the clean sample population errors and the poison population errors in terms of the empirical error on the poisoned training dataset. Furthermore, based on the theoretical result, a new clean-label backdoor attack is proposed that computes the poisoning trigger by combining adversarial noise and indiscriminate poison. We show its effectiveness in a variety of settings.
[ "['Lijia Yu' 'Shuang Liu' 'Yibo Miao' 'Xiao-Shan Gao' 'Lijun Zhang']" ]
null
null
2406.00596
null
null
http://arxiv.org/pdf/2406.00596v1
2024-06-02T02:30:10Z
2024-06-02T02:30:10Z
Multi-variable Adversarial Time-Series Forecast Model
Short-term forecasting for industrial enterprise power systems is an important issue for both load control and machine protection. Most research focuses on load forecasting but ignores other valuable electric meters that could guide power system protection. We propose a new framework, the multi-variable adversarial time-series forecasting model, which regularizes Long Short-Term Memory (LSTM) models via an adversarial process. The novel model forecasts all variables in the power system simultaneously, even variables of different types (such as continuous and categorical variables), and helps trade off the forecasting accuracy of single variables against variable-to-variable relations. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples. Predictions of the electricity consumption of industrial enterprises show that the proposed approach achieves better accuracy. We also applied the model to real power system data gathered from several large industrial enterprises via advanced power monitors, and obtained impressive forecasting results.
[ "['Xiaoqiao Chen']" ]
null
null
2406.00599
null
null
http://arxiv.org/pdf/2406.00599v1
2024-06-02T03:11:31Z
2024-06-02T03:11:31Z
Robust Fair Clustering with Group Membership Uncertainty Sets
We study the canonical fair clustering problem where each cluster is constrained to have close to population-level representation of each group. Despite significant attention, the salient issue of having incomplete knowledge about the group membership of each point has been only superficially addressed. In this paper, we consider a setting where errors exist in the assigned group memberships. We introduce a simple and interpretable family of error models that require a small number of parameters to be given by the decision maker. We then present an algorithm for fair clustering with provable robustness guarantees. Our framework enables the decision maker to trade off between the robustness and the clustering quality. Unlike previous work, our algorithms are backed by worst-case theoretical guarantees. Finally, we empirically verify the performance of our algorithm on real world datasets and show its superior performance over existing baselines.
[ "['Sharmila Duppala' 'Juan Luque' 'John P. Dickerson' 'Seyed A. Esmaeili']" ]
null
null
2406.00611
null
null
http://arxiv.org/pdf/2406.00611v1
2024-06-02T04:01:08Z
2024-06-02T04:01:08Z
DISCRET: Synthesizing Faithful Explanations For Treatment Effect Estimation
Designing faithful yet accurate AI models is challenging, particularly in the field of individual treatment effect estimation (ITE). ITE prediction models deployed in critical settings such as healthcare should ideally be (i) accurate, and (ii) provide faithful explanations. However, current solutions are inadequate: state-of-the-art black-box models do not supply explanations, post-hoc explainers for black-box models lack faithfulness guarantees, and self-interpretable models greatly compromise accuracy. To address these issues, we propose DISCRET, a self-interpretable ITE framework that synthesizes faithful, rule-based explanations for each sample. A key insight behind DISCRET is that explanations can serve dually as database queries to identify similar subgroups of samples. We provide a novel RL algorithm to efficiently synthesize these explanations from a large search space. We evaluate DISCRET on diverse tasks involving tabular, image, and text data. DISCRET outperforms the best self-interpretable models and has accuracy comparable to the best black-box models while providing faithful explanations. DISCRET is available at https://github.com/wuyinjun-1993/DISCRET-ICML2024.
[ "['Yinjun Wu' 'Mayank Keoliya' 'Kan Chen' 'Neelay Velingker' 'Ziyang Li'\n 'Emily J Getzen' 'Qi Long' 'Mayur Naik' 'Ravi B Parikh' 'Eric Wong']" ]
null
null
2406.00614
null
null
http://arxiv.org/pdf/2406.00614v1
2024-06-02T04:31:30Z
2024-06-02T04:31:30Z
Efficient Monte Carlo Tree Search via On-the-Fly State-Conditioned Action Abstraction
Monte Carlo Tree Search (MCTS) has showcased its efficacy across a broad spectrum of decision-making problems. However, its performance often degrades under vast combinatorial action spaces, especially where an action is composed of multiple sub-actions. In this work, we propose an action abstraction based on the compositional structure between a state and sub-actions for improving the efficiency of MCTS under a factored action space. Our method learns a latent dynamics model with an auxiliary network that captures sub-actions relevant to the transition on the current state, which we call state-conditioned action abstraction. Notably, it infers such compositional relationships from high-dimensional observations without the known environment model. During the tree traversal, our method constructs the state-conditioned action abstraction for each node on-the-fly, reducing the search space by discarding the exploration of redundant sub-actions. Experimental results demonstrate the superior sample efficiency of our method compared to vanilla MuZero, which suffers from an expansive action space.
[ "['Yunhyeok Kwak' 'Inwoo Hwang' 'Dooyoung Kim' 'Sanghack Lee'\n 'Byoung-Tak Zhang']" ]
null
null
2406.00615
null
null
http://arxiv.org/pdf/2406.00615v1
2024-06-02T04:33:52Z
2024-06-02T04:33:52Z
Making Recommender Systems More Knowledgeable: A Framework to Incorporate Side Information
Session-based recommender systems typically focus on using only the triplet (user_id, timestamp, item_id) to make predictions of users' next actions. In this paper, we aim to utilize side information to help recommender systems catch patterns and signals otherwise undetectable. Specifically, we propose a general framework for incorporating item-specific side information into the recommender system to enhance its performance without much modification on the original model architecture. Experimental results on several models and datasets prove that with side information, our recommender system outperforms state-of-the-art models by a considerable margin and converges much faster. Additionally, we propose a new type of loss to regularize the attention mechanism used by recommender systems and evaluate its influence on model performance. Furthermore, through analysis, we put forward a few insights on potential further improvements.
[ "['Yukun Jiang' 'Leo Guo' 'Xinyi Chen' 'Jing Xi Liu']" ]
null
null
2406.00619
null
null
http://arxiv.org/pdf/2406.00619v1
2024-06-02T05:41:25Z
2024-06-02T05:41:25Z
A Multi-Graph Convolutional Neural Network Model for Short-Term Prediction of Turning Movements at Signalized Intersections
Traffic flow forecasting is a crucial first step in intelligent and proactive traffic management. Traffic flow parameters are volatile and uncertain, making traffic flow forecasting a difficult task if the appropriate forecasting model is not used. Additionally, the non-Euclidean data structure of traffic flow parameters is challenging to analyze from both spatial and temporal perspectives. State-of-the-art deep learning approaches use pure convolution, recurrent neural networks, and hybrid methods to achieve this objective efficiently. However, many of the approaches in the literature rely on complex architectures that can be difficult to train. This complexity also adds to the black-box nature of deep learning. This study introduces a novel deep learning architecture, referred to as the multigraph convolution neural network (MGCNN), for turning movement prediction at intersections. The proposed architecture combines a multigraph structure, built to model temporal variations in traffic data, with a spectral convolution operation to support modeling the spatial variations in traffic data over the graphs. The proposed model was tested using twenty days of flow and traffic control data collected from an arterial in downtown Chattanooga, TN, with ten signalized intersections. The model's ability to perform short-term predictions over 1, 2, 3, 4, and 5 minutes into the future was evaluated against four baseline state-of-the-art models. The results showed that our proposed model is superior to the other baseline models in predicting turning movements with a mean squared error (MSE) of 0.9
[ "['Jewel Rana Palit' 'Osama A Osman']" ]
null
null
2406.00628
null
null
http://arxiv.org/pdf/2406.00628v1
2024-06-02T06:10:31Z
2024-06-02T06:10:31Z
Transforming Computer Security and Public Trust Through the Exploration of Fine-Tuning Large Language Models
Large language models (LLMs) have revolutionized how we interact with machines. However, this technological advancement has been paralleled by the emergence of "Mallas," malicious services operating underground that exploit LLMs for nefarious purposes. Such services create malware, phishing attacks, and deceptive websites, escalating the cybersecurity threat landscape. This paper delves into the proliferation of Mallas by examining the use of various pre-trained language models and their efficiency and vulnerabilities when misused. Building on a dataset from the Common Vulnerabilities and Exposures (CVE) program, it explores fine-tuning methodologies to generate code and explanatory text related to identified vulnerabilities. This research aims to shed light on the operational strategies and exploitation techniques of Mallas, leading to the development of more secure and trustworthy AI applications. The paper concludes by emphasizing the need for further research, enhanced safeguards, and ethical guidelines to mitigate the risks associated with the malicious application of LLMs.
[ "['Garrett Crumrine' 'Izzat Alsmadi' 'Jesus Guerrero' 'Yuvaraj Munian']" ]
null
null
2406.00630
null
null
http://arxiv.org/pdf/2406.00630v1
2024-06-02T06:19:25Z
2024-06-02T06:19:25Z
On Non-asymptotic Theory of Recurrent Neural Networks in Temporal Point Processes
Temporal point process (TPP) is an important tool for modeling and predicting irregularly timed events across various domains. Recently, the recurrent neural network (RNN)-based TPPs have shown practical advantages over traditional parametric TPP models. However, the theoretical understanding of neural TPPs remains nascent in the current literature. In this paper, we establish the excess risk bounds of RNN-TPPs under many well-known TPP settings. We especially show that an RNN-TPP with no more than four layers can achieve vanishing generalization errors. Our technical contributions include the characterization of the complexity of the multi-layer RNN class, the construction of $\tanh$ neural networks for approximating dynamic event intensity functions, and the truncation technique for alleviating the issue of unbounded event sequences. Our results bridge the gap between TPP applications and neural network theory.
[ "['Zhiheng Chen' 'Guanhua Fang' 'Wen Yu']" ]
null
null
2406.00633
null
null
http://arxiv.org/pdf/2406.00633v2
2024-06-16T20:45:19Z
2024-06-02T06:36:46Z
Improving GFlowNets for Text-to-Image Diffusion Alignment
Diffusion models have become the de facto approach for generating visual data, which are trained to match the distribution of the training dataset. In addition, we often want to control generation to fulfill desired properties such as alignment to a text description, which can be specified with a black-box reward function. Prior works fine-tune pretrained diffusion models to achieve this goal through reinforcement learning-based algorithms. Nonetheless, they suffer from issues including slow credit assignment as well as low quality in their generated samples. In this work, we explore techniques that do not directly maximize the reward but rather generate high-reward images with relatively high probability -- a natural scenario for the framework of generative flow networks (GFlowNets). To this end, we propose the Diffusion Alignment with GFlowNet (DAG) algorithm to post-train diffusion models with black-box property functions. Extensive experiments on Stable Diffusion and various reward specifications corroborate that our method could effectively align large-scale text-to-image diffusion models with given reward information.
[ "['Dinghuai Zhang' 'Yizhe Zhang' 'Jiatao Gu' 'Ruixiang Zhang'\n 'Josh Susskind' 'Navdeep Jaitly' 'Shuangfei Zhai']" ]
null
null
2406.00645
null
null
http://arxiv.org/pdf/2406.00645v2
2024-06-05T00:05:23Z
2024-06-02T07:20:08Z
FuRL: Visual-Language Models as Fuzzy Rewards for Reinforcement Learning
In this work, we investigate how to leverage pre-trained visual-language models (VLM) for online Reinforcement Learning (RL). In particular, we focus on sparse reward tasks with pre-defined textual task descriptions. We first identify the problem of reward misalignment when applying VLM as a reward in RL tasks. To address this issue, we introduce a lightweight fine-tuning method, named Fuzzy VLM reward-aided RL (FuRL), based on reward alignment and relay RL. Specifically, we enhance the performance of SAC/DrQ baseline agents on sparse reward tasks by fine-tuning VLM representations and using relay RL to avoid local minima. Extensive experiments on the Meta-world benchmark tasks demonstrate the efficacy of the proposed method. Code is available at: https://github.com/fuyw/FuRL.
[ "['Yuwei Fu' 'Haichao Zhang' 'Di Wu' 'Wei Xu' 'Benoit Boulet']" ]
null
null
2406.00655
null
null
http://arxiv.org/pdf/2406.00655v1
2024-06-02T07:56:30Z
2024-06-02T07:56:30Z
Generalized Exponentiated Gradient Algorithms and Their Application to On-Line Portfolio Selection
This paper introduces a novel family of generalized exponentiated gradient (EG) updates derived from an Alpha-Beta divergence regularization function. Collectively referred to as EGAB, the proposed updates belong to the category of multiplicative gradient algorithms for positive data and demonstrate considerable flexibility by controlling iteration behavior and performance through three hyperparameters: $\alpha$, $\beta$, and the learning rate $\eta$. To enforce a unit $l_1$ norm constraint for nonnegative weight vectors within generalized EGAB algorithms, we develop two slightly distinct approaches. One method exploits scale-invariant loss functions, while the other relies on gradient projections onto the feasible domain. As an illustration of their applicability, we evaluate the proposed updates in addressing the online portfolio selection problem (OLPS) using gradient-based methods. Here, they not only offer a unified perspective on the search directions of various OLPS algorithms (including the standard exponentiated gradient and diverse mean-reversion strategies), but also facilitate smooth interpolation and extension of these updates due to the flexibility in hyperparameter selection. Simulation results confirm that the adaptability of these generalized gradient updates can effectively enhance the performance for some portfolios, particularly in scenarios involving transaction costs.
[ "['Andrzej Cichocki' 'Sergio Cruces' 'Auxiliadora Sarmiento'\n 'Toshihisa Tanaka']" ]
null
null
2406.00661
null
null
http://arxiv.org/pdf/2406.00661v1
2024-06-02T08:11:35Z
2024-06-02T08:11:35Z
Bridging Multicalibration and Out-of-distribution Generalization Beyond Covariate Shift
We establish a new model-agnostic optimization framework for out-of-distribution generalization via multicalibration, a criterion that ensures a predictor is calibrated across a family of overlapping groups. Multicalibration is shown to be associated with robustness of statistical inference under covariate shift. We further establish a link between multicalibration and robustness for prediction tasks both under and beyond covariate shift. We accomplish this by extending multicalibration to incorporate grouping functions that consider covariates and labels jointly. This leads to an equivalence of the extended multicalibration and invariance, an objective for robust learning in the presence of concept shift. We show a linear structure of the grouping function class spanned by density ratios, resulting in a unifying framework for robust learning by designing specific grouping functions. We propose MC-Pseudolabel, a post-processing algorithm to achieve both extended multicalibration and out-of-distribution generalization. The algorithm, with lightweight hyperparameters and optimization through a series of supervised regression steps, achieves superior performance on real-world datasets with distribution shift.
[ "['Jiayun Wu' 'Jiashuo Liu' 'Peng Cui' 'Zhiwei Steven Wu']" ]
null
null
2406.00663
null
null
http://arxiv.org/pdf/2406.00663v1
2024-06-02T08:13:12Z
2024-06-02T08:13:12Z
SimSAM: Zero-shot Medical Image Segmentation via Simulated Interaction
The recently released Segment Anything Model (SAM) has shown powerful zero-shot segmentation capabilities through a semi-automatic annotation setup in which the user can provide a prompt in the form of clicks or bounding boxes. There is growing interest around applying this to medical imaging, where the cost of obtaining expert annotations is high, privacy restrictions may limit sharing of patient data, and model generalisation is often poor. However, there are large amounts of inherent uncertainty in medical images, due to unclear object boundaries, low-contrast media, and differences in expert labelling style. Currently, SAM is known to struggle in a zero-shot setting to adequately annotate the contours of the structure of interest in medical images, where the uncertainty is often greatest, thus requiring significant manual correction. To mitigate this, we introduce Simulated Interaction for Segment Anything Model (SimSAM), an approach that leverages simulated user interaction to generate an arbitrary number of candidate masks, and uses a novel aggregation approach to output the most compatible mask. Crucially, our method can be used during inference directly on top of SAM, without any additional training requirement. Quantitatively, we evaluate our method across three publicly available medical imaging datasets, and find that our approach leads to up to a 15.5% improvement in contour segmentation accuracy compared to zero-shot SAM. Our code is available at https://github.com/BenjaminTowle/SimSAM.
[ "['Benjamin Towle' 'Xin Chen' 'Ke Zhou']" ]