categories | doi | id | year | venue | link | updated | published | title | abstract | authors |
string | string | string | float64 | string | string | string | string | string | string | sequence |
---|---|---|---|---|---|---|---|---|---|---|
null | null | 2406.00667 | null | null | http://arxiv.org/pdf/2406.00667v1 | 2024-06-02T08:29:23Z | 2024-06-02T08:29:23Z | An Early Investigation into the Utility of Multimodal Large Language
Models in Medical Imaging | Recent developments in multimodal large language models (MLLMs) have spurred significant interest in their potential applications across various medical imaging domains. On the one hand, there is a temptation to use these generative models to synthesize realistic-looking medical image data, while on the other hand, the ability to identify synthetic image data in a pool of data is also significantly important. In this study, we explore the potential of the Gemini (\textit{gemini-1.0-pro-vision-latest}) and GPT-4V (gpt-4-vision-preview) models for medical image analysis using two modalities of medical image data. Utilizing synthetic and real imaging data, both Gemini AI and GPT-4V are first used to classify real versus synthetic images, followed by an interpretation and analysis of the input images. Experimental results demonstrate that both Gemini and GPT-4 could perform some interpretation of the input images. In this specific experiment, Gemini was able to perform slightly better than the GPT-4V on the classification task. In contrast, responses associated with GPT-4V were mostly generic in nature. Our early investigation presented in this work provides insights into the potential of MLLMs to assist with the classification and interpretation of retinal fundoscopy and lung X-ray images. We also identify key limitations associated with the early investigation study on MLLMs for specialized tasks in medical image analysis. | [
"['Sulaiman Khan' 'Md. Rafiul Biswas' 'Alina Murad' 'Hazrat Ali'\n 'Zubair Shah']"
] |
null | null | 2406.00681 | null | null | http://arxiv.org/pdf/2406.00681v1 | 2024-06-02T09:32:28Z | 2024-06-02T09:32:28Z | Learning Multimodal Behaviors from Scratch with Diffusion Policy
Gradient | Deep reinforcement learning (RL) algorithms typically parameterize the policy as a deep network that outputs either a deterministic action or a stochastic one modeled as a Gaussian distribution, hence restricting learning to a single behavioral mode. Meanwhile, diffusion models emerged as a powerful framework for multimodal learning. However, the use of diffusion policies in online RL is hindered by the intractability of policy likelihood approximation, as well as the greedy objective of RL methods that can easily skew the policy to a single mode. This paper presents Deep Diffusion Policy Gradient (DDiffPG), a novel actor-critic algorithm that learns from scratch multimodal policies parameterized as diffusion models while discovering and maintaining versatile behaviors. DDiffPG explores and discovers multiple modes through off-the-shelf unsupervised clustering combined with novelty-based intrinsic motivation. DDiffPG forms a multimodal training batch and utilizes mode-specific Q-learning to mitigate the inherent greediness of the RL objective, ensuring the improvement of the diffusion policy across all modes. Our approach further allows the policy to be conditioned on mode-specific embeddings to explicitly control the learned modes. Empirical studies validate DDiffPG's capability to master multimodal behaviors in complex, high-dimensional continuous control tasks with sparse rewards, also showcasing proof-of-concept dynamic online replanning when navigating mazes with unseen obstacles. | [
"['Zechu Li' 'Rickmer Krohn' 'Tao Chen' 'Anurag Ajay' 'Pulkit Agrawal'\n 'Georgia Chalvatzaki']"
] |
null | null | 2406.00685 | null | null | http://arxiv.org/pdf/2406.00685v1 | 2024-06-02T09:43:34Z | 2024-06-02T09:43:34Z | Improving Accuracy-robustness Trade-off via Pixel Reweighted Adversarial
Training | Adversarial training (AT) trains models using adversarial examples (AEs), which are natural images modified with specific perturbations to mislead the model. These perturbations are constrained by a predefined perturbation budget $\epsilon$ and are equally applied to each pixel within an image. However, in this paper, we discover that not all pixels contribute equally to the accuracy on AEs (i.e., robustness) and accuracy on natural images (i.e., accuracy). Motivated by this finding, we propose Pixel-reweighted AdveRsarial Training (PART), a new framework that partially reduces $\epsilon$ for less influential pixels, guiding the model to focus more on key regions that affect its outputs. Specifically, we first use class activation mapping (CAM) methods to identify important pixel regions, then we keep the perturbation budget for these regions while lowering it for the remaining regions when generating AEs. In the end, we use these pixel-reweighted AEs to train a model. PART achieves a notable improvement in accuracy without compromising robustness on CIFAR-10, SVHN and TinyImagenet-200, justifying the necessity to allocate distinct weights to different pixel regions in robust classification. | [
"['Jiacheng Zhang' 'Feng Liu' 'Dawei Zhou' 'Jingfeng Zhang' 'Tongliang Liu']"
] |
null | null | 2406.00695 | null | null | http://arxiv.org/pdf/2406.00695v1 | 2024-06-02T10:17:54Z | 2024-06-02T10:17:54Z | Discovering an interpretable mathematical expression for a full
wind-turbine wake with artificial intelligence enhanced symbolic regression | The rapid expansion of wind power worldwide underscores the critical significance of engineering-focused analytical wake models in both the design and operation of wind farms. These theoretically-derived analytical wake models have limited predictive capabilities, particularly in the near-wake region close to the turbine rotor, due to assumptions that do not hold. Knowledge discovery methods can bridge these gaps by extracting insights, adjusting for theoretical assumptions, and developing accurate models for physical processes. In this study, we introduce a genetic symbolic regression (SR) algorithm to discover an interpretable mathematical expression for the mean velocity deficit throughout the wake, a previously unavailable insight. By incorporating a double Gaussian distribution into the SR algorithm as domain knowledge and designing a hierarchical equation structure, the search space is reduced, thus efficiently finding a concise, physically informed, and robust wake model. The proposed mathematical expression (equation) can predict the wake velocity deficit at any location in the full-wake region with high precision and stability. The model's effectiveness and practicality are validated through experimental data and high-fidelity numerical simulations. | [
"['Ding Wang' 'Yuntian Chen' 'Shiyi Chen']"
] |
null | null | 2406.00704 | null | null | http://arxiv.org/pdf/2406.00704v1 | 2024-06-02T10:52:48Z | 2024-06-02T10:52:48Z | An Optimized Toolbox for Advanced Image Processing with Tsetlin Machine
Composites | The Tsetlin Machine (TM) has achieved competitive results on several image classification benchmarks, including MNIST, K-MNIST, F-MNIST, and CIFAR-2. However, color image classification is arguably still in its infancy for TMs, with CIFAR-10 being a focal point for tracking progress. Over the past few years, TM's CIFAR-10 accuracy has increased from around 61% in 2020 to 75.1% in 2023 with the introduction of Drop Clause. In this paper, we leverage the recently proposed TM Composites architecture and introduce a range of TM Specialists that use various image processing techniques. These include Canny edge detection, Histogram of Oriented Gradients, adaptive mean thresholding, adaptive Gaussian thresholding, Otsu's thresholding, color thermometers, and adaptive color thermometers. In addition, we conduct a rigorous hyperparameter search, where we uncover optimal hyperparameters for several of the TM Specialists. The result is a toolbox that provides new state-of-the-art results on CIFAR-10 for TMs with an accuracy of 82.8%. In conclusion, our toolbox of TM Specialists forms a foundation for new TM applications and a landmark for further research on TM Composites in image analysis. | [
"['Ylva Grønningsæter' 'Halvor S. Smørvik' 'Ole-Christoffer Granmo']"
] |
null | null | 2406.00713 | null | null | http://arxiv.org/pdf/2406.00713v1 | 2024-06-02T11:32:28Z | 2024-06-02T11:32:28Z | Logistic Variational Bayes Revisited | Variational logistic regression is a popular method for approximate Bayesian inference, seeing widespread use in many areas of machine learning, including Bayesian optimization, reinforcement learning and multi-instance learning, to name a few. However, due to the intractability of the Evidence Lower Bound, authors have turned to the use of Monte Carlo, quadrature or bounds to perform inference, methods that are costly or give poor approximations to the true posterior. In this paper we introduce a new bound for the expectation of the softplus function and subsequently show how this can be applied to variational logistic regression and Gaussian process classification. Unlike other bounds, our proposal does not rely on extending the variational family, or introducing additional parameters to ensure the bound is tight. In fact, we show that this bound is tighter than the state-of-the-art, and that the resulting variational posterior achieves state-of-the-art performance, whilst being significantly faster to compute than Monte-Carlo methods. | [
"['Michael Komodromos' 'Marina Evangelou' 'Sarah Filippi']"
] |
null | null | 2406.00734 | null | null | http://arxiv.org/pdf/2406.00734v2 | 2024-07-03T04:30:01Z | 2024-06-02T12:51:48Z | GLADformer: A Mixed Perspective for Graph-level Anomaly Detection | Graph-Level Anomaly Detection (GLAD) aims to distinguish anomalous graphs within a graph dataset. However, current methods are constrained by their receptive fields, struggling to learn global features within the graphs. Moreover, most contemporary methods are based on spatial domain and lack exploration of spectral characteristics. In this paper, we propose a multi-perspective hybrid graph-level anomaly detector namely GLADformer, consisting of two key modules. Specifically, we first design a Graph Transformer module with global spectrum enhancement, which ensures balanced and resilient parameter distributions by fusing global features and spectral distribution characteristics. Furthermore, to uncover local anomalous attributes, we customize a band-pass spectral GNN message passing module that further enhances the model's generalization capability. Through comprehensive experiments on ten real-world datasets from multiple domains, we validate the effectiveness and robustness of GLADformer. This demonstrates that GLADformer outperforms current state-of-the-art models in graph-level anomaly detection, particularly in effectively capturing global anomaly representations and spectral characteristics. | [
"['Fan Xu' 'Nan Wang' 'Hao Wu' 'Xuezhi Wen' 'Dalin Zhang' 'Siyang Lu'\n 'Binyong Li' 'Wei Gong' 'Hai Wan' 'Xibin Zhao']"
] |
null | null | 2406.00735 | null | null | http://arxiv.org/pdf/2406.00735v1 | 2024-06-02T12:59:54Z | 2024-06-02T12:59:54Z | Full-Atom Peptide Design based on Multi-modal Flow Matching | Peptides, short chains of amino acid residues, play a vital role in numerous biological processes by interacting with other target molecules, offering substantial potential in drug discovery. In this work, we present PepFlow, the first multi-modal deep generative model grounded in the flow-matching framework for the design of full-atom peptides that target specific protein receptors. Drawing inspiration from the crucial roles of residue backbone orientations and side-chain dynamics in protein-peptide interactions, we characterize the peptide structure using rigid backbone frames within the $\mathrm{SE}(3)$ manifold and side-chain angles on high-dimensional tori. Furthermore, we represent discrete residue types in the peptide sequence as categorical distributions on the probability simplex. By learning the joint distributions of each modality using derived flows and vector fields on corresponding manifolds, our method excels in the fine-grained design of full-atom peptides. Harnessing the multi-modal paradigm, our approach adeptly tackles various tasks such as fix-backbone sequence design and side-chain packing through partial sampling. Through meticulously crafted experiments, we demonstrate that PepFlow exhibits superior performance in comprehensive benchmarks, highlighting its significant potential in computational peptide design and analysis. | [
"['Jiahan Li' 'Chaoran Cheng' 'Zuofan Wu' 'Ruihan Guo' 'Shitong Luo'\n 'Zhizhou Ren' 'Jian Peng' 'Jianzhu Ma']"
] |
null | null | 2406.00738 | null | null | http://arxiv.org/pdf/2406.00738v2 | 2024-06-07T20:38:51Z | 2024-06-02T13:13:46Z | Global Rewards in Restless Multi-Armed Bandits | Restless multi-armed bandits (RMAB) extend multi-armed bandits so pulling an arm impacts future states. Despite the success of RMABs, a key limiting assumption is the separability of rewards into a sum across arms. We address this deficiency by proposing restless-multi-armed bandit with global rewards (RMAB-G), a generalization of RMABs to global non-separable rewards. To solve RMAB-G, we develop the Linear- and Shapley-Whittle indices, which extend Whittle indices from RMABs to RMAB-Gs. We prove approximation bounds but also point out how these indices could fail when reward functions are highly non-linear. To overcome this, we propose two sets of adaptive policies: the first computes indices iteratively, and the second combines indices with Monte-Carlo Tree Search (MCTS). Empirically, we demonstrate that our proposed policies outperform baselines and index-based policies with synthetic data and real-world data from food rescue. | [
"['Naveen Raman' 'Zheyuan Ryan Shi' 'Fei Fang']"
] |
null | null | 2406.00741 | null | null | http://arxiv.org/pdf/2406.00741v1 | 2024-06-02T13:28:57Z | 2024-06-02T13:28:57Z | Learning to Play 7 Wonders Duel Without Human Supervision | This paper introduces ZeusAI, an artificial intelligence system developed to play the board game 7 Wonders Duel. Inspired by the AlphaZero reinforcement learning algorithm, ZeusAI relies on a combination of Monte Carlo Tree Search and a Transformer Neural Network to learn the game without human supervision. ZeusAI competes at the level of top human players, develops both known and novel strategies, and allows us to test rule variants to improve the game's balance. This work demonstrates how AI can help in understanding and enhancing board games. | [
"['Giovanni Paolini' 'Lorenzo Moreschini' 'Francesco Veneziano'\n 'Alessandro Iraci']"
] |
null | null | 2406.00748 | null | null | http://arxiv.org/pdf/2406.00748v1 | 2024-06-02T14:01:55Z | 2024-06-02T14:01:55Z | Augmenting the FedProx Algorithm by Minimizing Convergence | The Internet of Things has experienced significant growth and has become an integral part of various industries. This expansion has given rise to the Industrial IoT initiative, where industries are utilizing IoT technology to enhance communication and connectivity through innovative solutions such as data analytics and cloud computing. However, this widespread adoption of IoT demands algorithms that provide better efficiency for the same training environment, without speed being a factor. In this paper we present a novel approach called G Federated Proximity. Building upon the existing FedProx technique, our implementation introduces slight modifications to enhance its efficiency and effectiveness. By leveraging FTL, our proposed system aims to improve the accuracy of the model obtained after training on the dataset, with the help of normalization techniques, such that it performs better on real-time devices and heterogeneous networks. Our results indicate a significant increase in throughput, with approximately 90% better convergence compared to existing model performance. | [
"['Anomitra Sarkar' 'Lavanya Vajpayee']"
] |
null | null | 2406.00750 | null | null | http://arxiv.org/pdf/2406.00750v1 | 2024-06-02T14:07:50Z | 2024-06-02T14:07:50Z | Freeplane: Unlocking Free Lunch in Triplane-Based Sparse-View
Reconstruction Models | Creating 3D assets from single-view images is a complex task that demands a deep understanding of the world. Recently, feed-forward 3D generative models have made significant progress by training large reconstruction models on extensive 3D datasets, with triplanes being the preferred 3D geometry representation. However, effectively utilizing the geometric priors of triplanes, while minimizing artifacts caused by generated inconsistent multi-view images, remains a challenge. In this work, we present \textbf{Fre}quency modulat\textbf{e}d tri\textbf{plane} (\textbf{Freeplane}), a simple yet effective method to improve the generation quality of feed-forward models without additional training. We first analyze the role of triplanes in feed-forward methods and find that the inconsistent multi-view images introduce high-frequency artifacts on triplanes, leading to low-quality 3D meshes. Based on this observation, we propose strategically filtering triplane features and combining triplanes before and after filtering to produce high-quality textured meshes. These techniques incur no additional cost and can be seamlessly integrated into pre-trained feed-forward models to enhance their robustness against the inconsistency of generated multi-view images. Both qualitative and quantitative results demonstrate that our method improves the performance of feed-forward models by simply modulating triplanes. All you need is to modulate the triplanes during inference. | [
"['Wenqiang Sun' 'Zhengyi Wang' 'Shuo Chen' 'Yikai Wang' 'Zilong Chen'\n 'Jun Zhu' 'Jun Zhang']"
] |
null | null | 2406.00755 | null | null | http://arxiv.org/pdf/2406.00755v1 | 2024-06-02T14:16:24Z | 2024-06-02T14:16:24Z | Evaluating Mathematical Reasoning of Large Language Models: A Focus on
Error Identification and Correction | The rapid advancement of Large Language Models (LLMs) in the realm of mathematical reasoning necessitates comprehensive evaluations to gauge progress and inspire future directions. Existing assessments predominantly focus on problem-solving from the examinee perspective, overlooking the dual perspective of the examiner regarding error identification and correction. From the examiner perspective, we define four evaluation tasks for error identification and correction along with a new dataset with annotated error types and steps. We also design diverse prompts to thoroughly evaluate eleven representative LLMs. Our principal findings indicate that GPT-4 outperforms all models, while the open-source model LLaMA-2-7B demonstrates comparable abilities to the closed-source models GPT-3.5 and Gemini Pro. Notably, calculation error proves the most challenging error type. Moreover, prompting LLMs with the error types can improve the average correction accuracy by 47.9%. These results reveal potential directions for developing the mathematical reasoning abilities of LLMs. Our code and dataset are available at https://github.com/LittleCirc1e/EIC. | [
"['Xiaoyuan Li' 'Wenjie Wang' 'Moxin Li' 'Junrong Guo' 'Yang Zhang'\n 'Fuli Feng']"
] |
null | null | 2406.00761 | null | null | http://arxiv.org/pdf/2406.00761v1 | 2024-06-02T14:33:49Z | 2024-06-02T14:33:49Z | Shared-unique Features and Task-aware Prioritized Sampling on Multi-task
Reinforcement Learning | We observe that current state-of-the-art (SOTA) methods suffer from the performance imbalance issue when performing multi-task reinforcement learning (MTRL) tasks. While these methods may achieve impressive performance on average, they perform extremely poorly on a few tasks. To address this, we propose a new and effective method called STARS, which consists of two novel strategies: a shared-unique feature extractor and task-aware prioritized sampling. First, the shared-unique feature extractor learns both shared and task-specific features to enable better synergy of knowledge between different tasks. Second, the task-aware sampling strategy is combined with the prioritized experience replay for efficient learning on tasks with poor performance. The effectiveness and stability of our STARS are verified through experiments on the mainstream Meta-World benchmark. From the results, our STARS statistically outperforms current SOTA methods and alleviates the performance imbalance issue. Besides, we visualize the learned features to support our claims and enhance the interpretability of STARS. | [
"['Po-Shao Lin' 'Jia-Fong Yeh' 'Yi-Ting Chen' 'Winston H. Hsu']"
] |
null | null | 2406.00764 | null | null | http://arxiv.org/pdf/2406.00764v1 | 2024-06-02T14:43:56Z | 2024-06-02T14:43:56Z | IENE: Identifying and Extrapolating the Node Environment for
Out-of-Distribution Generalization on Graphs | Due to the performance degradation of graph neural networks (GNNs) under distribution shifts, the work on out-of-distribution (OOD) generalization on graphs has received widespread attention. A novel perspective involves distinguishing potential confounding biases from different environments through environmental identification, enabling the model to escape environmentally-sensitive correlations and maintain stable performance under distribution shifts. However, in graph data, confounding factors not only affect the generation process of node features but also influence the complex interaction between nodes. We observe that neglecting either aspect of them will lead to a decrease in performance. In this paper, we propose IENE, an OOD generalization method on graphs based on node-level environmental identification and extrapolation techniques. It strengthens the model's ability to extract invariance from two granularities simultaneously, leading to improved generalization. Specifically, to identify invariance in features, we utilize the disentangled information bottleneck framework to achieve mutual promotion between node-level environmental estimation and invariant feature learning. Furthermore, we extrapolate topological environments through graph augmentation techniques to identify structural invariance. We implement the conceptual method with specific algorithms and provide theoretical analysis and proofs for our approach. Extensive experimental evaluations on two synthetic and four real-world OOD datasets validate the superiority of IENE, which outperforms existing techniques and provides a flexible framework for enhancing the generalization of GNNs. | [
"['Haoran Yang' 'Xiaobing Pei' 'Kai Yuan']"
] |
null | null | 2406.00766 | null | null | http://arxiv.org/pdf/2406.00766v1 | 2024-06-02T14:57:00Z | 2024-06-02T14:57:00Z | Scaling Tractable Probabilistic Circuits: A Systems Perspective | Probabilistic Circuits (PCs) are a general framework for tractable deep generative models, which support exact and efficient probabilistic inference on their learned distributions. Recent modeling and training advancements have enabled their application to complex real-world tasks. However, the time and memory inefficiency of existing PC implementations hinders further scaling up. This paper proposes PyJuice, a general GPU implementation design for PCs that improves prior art in several regards. Specifically, PyJuice is 1-2 orders of magnitude faster than existing systems (including very recent ones) at training large-scale PCs. Moreover, PyJuice consumes 2-5x less GPU memory, which enables us to train larger models. At the core of our system is a compilation process that converts a PC into a compact representation amenable to efficient block-based parallelization, which significantly reduces IO and makes it possible to leverage Tensor Cores available in modern GPUs. Empirically, PyJuice can be used to improve state-of-the-art PCs trained on image (e.g., ImageNet32) and language (e.g., WikiText, CommonGen) datasets. We further establish a new set of baselines on natural image and language datasets by benchmarking existing PC structures but with much larger sizes and more training epochs, with the hope of incentivizing future research. Code is available at https://github.com/Tractables/pyjuice. | [
"['Anji Liu' 'Kareem Ahmed' 'Guy Van den Broeck']"
] |
null | null | 2406.00773 | null | null | http://arxiv.org/pdf/2406.00773v2 | 2024-06-06T10:08:22Z | 2024-06-02T15:20:59Z | Diffusion Tuning: Transferring Diffusion Models via Chain of Forgetting | Diffusion models have significantly advanced the field of generative modeling. However, training a diffusion model is computationally expensive, creating a pressing need to adapt off-the-shelf diffusion models for downstream generation tasks. Current fine-tuning methods focus on parameter-efficient transfer learning but overlook the fundamental transfer characteristics of diffusion models. In this paper, we investigate the transferability of diffusion models and observe a monotonous chain of forgetting trend of transferability along the reverse process. Based on this observation and novel theoretical insights, we present Diff-Tuning, a frustratingly simple transfer approach that leverages the chain of forgetting tendency. Diff-Tuning encourages the fine-tuned model to retain the pre-trained knowledge at the end of the denoising chain close to the generated data while discarding the other noise side. We conduct comprehensive experiments to evaluate Diff-Tuning, including the transfer of pre-trained Diffusion Transformer models to eight downstream generations and the adaptation of Stable Diffusion to five control conditions with ControlNet. Diff-Tuning achieves a 26% improvement over standard fine-tuning and enhances the convergence speed of ControlNet by 24%. Notably, parameter-efficient transfer learning techniques for diffusion models can also benefit from Diff-Tuning. | [
"['Jincheng Zhong' 'Xingzhuo Guo' 'Jiaxiang Dong' 'Mingsheng Long']"
] |
null | null | 2406.00775 | null | null | http://arxiv.org/pdf/2406.00775v1 | 2024-06-02T15:26:52Z | 2024-06-02T15:26:52Z | Constrained Adaptive Attack: Effective Adversarial Attack Against Deep
Neural Networks for Tabular Data | State-of-the-art deep learning models for tabular data have recently achieved acceptable performance to be deployed in industrial settings. However, the robustness of these models remains scarcely explored. Contrary to computer vision, there are no effective attacks to properly evaluate the adversarial robustness of deep tabular models due to intrinsic properties of tabular data, such as categorical features, immutability, and feature relationship constraints. To fill this gap, we first propose CAPGD, a gradient attack that overcomes the failures of existing gradient attacks with adaptive mechanisms. This new attack does not require parameter tuning and further degrades the accuracy, by up to 81 percentage points compared to the previous gradient attacks. Second, we design CAA, an efficient evasion attack that combines our CAPGD attack and MOEVA, the best search-based attack. We demonstrate the effectiveness of our attacks on five architectures and four critical use cases. Our empirical study demonstrates that CAA outperforms all existing attacks in 17 of the 20 settings, and leads to a drop in accuracy of up to 96.1 and 21.9 percentage points compared to CAPGD and MOEVA respectively, while being up to five times faster than MOEVA. Given the effectiveness and efficiency of our new attacks, we argue that they should become the minimal test for any new defense or robust architectures in tabular machine learning. | [
"['Thibault Simonetto' 'Salah Ghamizi' 'Maxime Cordy']"
] |
null | null | 2406.00778 | null | null | http://arxiv.org/pdf/2406.00778v1 | 2024-06-02T15:35:45Z | 2024-06-02T15:35:45Z | Bayesian Joint Additive Factor Models for Multiview Learning | It is increasingly common in a wide variety of applied settings to collect data of multiple different types on the same set of samples. Our particular focus in this article is on studying relationships between such multiview features and responses. A motivating application arises in the context of precision medicine where multi-omics data are collected to correlate with clinical outcomes. It is of interest to infer dependence within and across views while combining multimodal information to improve the prediction of outcomes. The signal-to-noise ratio can vary substantially across views, motivating more nuanced statistical tools beyond standard late and early fusion. This challenge comes with the need to preserve interpretability, select features, and obtain accurate uncertainty quantification. We propose a joint additive factor regression model (JAFAR) with a structured additive design, accounting for shared and view-specific components. We ensure identifiability via a novel dependent cumulative shrinkage process (D-CUSP) prior. We provide an efficient implementation via a partially collapsed Gibbs sampler and extend our approach to allow flexible feature and outcome distributions. Prediction of time-to-labor onset from immunome, metabolome, and proteome data illustrates performance gains against state-of-the-art competitors. Our open-source software (R package) is available at https://github.com/niccoloanceschi/jafar. | [
"['Niccolo Anceschi' 'Federico Ferrari' 'David B. Dunson' 'Himel Mallick']"
] |
null | null | 2406.00779 | null | null | http://arxiv.org/pdf/2406.00779v1 | 2024-06-02T15:42:03Z | 2024-06-02T15:42:03Z | Differentiation of Multi-objective Data-driven Decision Pipeline | Real-world scenarios frequently involve multi-objective data-driven optimization problems, characterized by unknown problem coefficients and multiple conflicting objectives. Traditional two-stage methods independently apply a machine learning model to estimate problem coefficients, followed by invoking a solver to tackle the predicted optimization problem. The independent use of optimization solvers and prediction models may lead to suboptimal performance due to mismatches between their objectives. Recent efforts have focused on end-to-end training of predictive models that use decision loss derived from the downstream optimization problem. However, these methods have primarily focused on single-objective optimization problems, thus limiting their applicability. We aim to propose a multi-objective decision-focused approach to address this gap. In order to better align with the inherent properties of multi-objective optimization problems, we propose a set of novel loss functions. These loss functions are designed to capture the discrepancies between predicted and true decision problems, considering solution space, objective space, and decision quality, named landscape loss, Pareto set loss, and decision loss, respectively. Our experimental results demonstrate that our proposed method significantly outperforms traditional two-stage methods and most current decision-focused methods. | [
"['Peng Li' 'Lixia Wu' 'Chaoqun Feng' 'Haoyuan Hu' 'Lei Fu' 'Jieping Ye']"
] |
null | null | 2406.00793 | null | null | http://arxiv.org/pdf/2406.00793v1 | 2024-06-02T16:20:30Z | 2024-06-02T16:20:30Z | Is In-Context Learning in Large Language Models Bayesian? A Martingale
Perspective | In-context learning (ICL) has emerged as a particularly remarkable characteristic of Large Language Models (LLM): given a pretrained LLM and an observed dataset, LLMs can make predictions for new data points from the same distribution without fine-tuning. Numerous works have postulated ICL as approximately Bayesian inference, rendering this a natural hypothesis. In this work, we analyse this hypothesis from a new angle through the martingale property, a fundamental requirement of a Bayesian learning system for exchangeable data. We show that the martingale property is a necessary condition for unambiguous predictions in such scenarios, and enables a principled, decomposed notion of uncertainty vital in trustworthy, safety-critical systems. We derive actionable checks with corresponding theory and test statistics which must hold if the martingale property is satisfied. We also examine if uncertainty in LLMs decreases as expected in Bayesian learning when more data is observed. In three experiments, we provide evidence for violations of the martingale property, and deviations from a Bayesian scaling behaviour of uncertainty, falsifying the hypothesis that ICL is Bayesian. | [
"['Fabian Falck' 'Ziyu Wang' 'Chris Holmes']"
] |
null | null | 2406.00800 | null | null | http://arxiv.org/pdf/2406.00800v1 | 2024-06-02T17:00:02Z | 2024-06-02T17:00:02Z | MagR: Weight Magnitude Reduction for Enhancing Post-Training
Quantization | In this paper, we present a simple optimization-based preprocessing technique called Weight Magnitude Reduction (MagR) to improve the performance of post-training quantization. For each linear layer, we adjust the pre-trained floating-point weights by solving an $\ell_\infty$-regularized optimization problem. This process greatly diminishes the maximum magnitude of the weights and smooths out outliers, while preserving the layer's output. The preprocessed weights are centered more towards zero, which facilitates the subsequent quantization process. To implement MagR, we address the $\ell_\infty$-regularization by employing an efficient proximal gradient descent algorithm. Unlike existing preprocessing methods that involve linear transformations and subsequent post-processing steps, which can introduce significant overhead at inference time, MagR functions as a non-linear transformation, eliminating the need for any additional post-processing. This ensures that MagR introduces no overhead whatsoever during inference. Our experiments demonstrate that MagR achieves state-of-the-art performance on the Llama family of models. For example, we achieve a Wikitext2 perplexity of 5.95 on the LLaMA2-70B model for per-channel INT2 weight quantization without incurring any inference overhead. | [
"['Aozhong Zhang' 'Naigang Wang' 'Yanxia Deng' 'Xin Li' 'Zi Yang'\n 'Penghang Yin']"
] |
null | null | 2406.00801 | null | null | http://arxiv.org/pdf/2406.00801v2 | 2024-07-14T08:37:14Z | 2024-06-02T17:01:44Z | Ensemble Deep Random Vector Functional Link Neural Network Based on
Fuzzy Inference System | The ensemble deep random vector functional link (edRVFL) neural network has demonstrated the ability to address the limitations of conventional artificial neural networks. However, since edRVFL generates features for its hidden layers through random projection, it can potentially lose intricate features or fail to capture certain non-linear features in its base models (hidden layers). To enhance the feature learning capabilities of edRVFL, we propose a novel edRVFL based on fuzzy inference system (edRVFL-FIS). The proposed edRVFL-FIS leverages the capabilities of two emerging domains, namely deep learning and ensemble approaches, with the intrinsic IF-THEN properties of fuzzy inference system (FIS) and produces rich feature representation to train the ensemble model. Each base model of the proposed edRVFL-FIS encompasses two key feature augmentation components: a) unsupervised fuzzy layer features and b) supervised defuzzified features. The edRVFL-FIS model incorporates diverse clustering methods (R-means, K-means, Fuzzy C-means) to establish fuzzy layer rules, resulting in three model variations (edRVFL-FIS-R, edRVFL-FIS-K, edRVFL-FIS-C) with distinct fuzzified features and defuzzified features. Within the framework of edRVFL-FIS, each base model utilizes the original, hidden layer and defuzzified features to make predictions. Experimental results, statistical tests, discussions and analyses conducted across UCI and NDC datasets consistently demonstrate the superior performance of all variations of the proposed edRVFL-FIS model over baseline models. The source codes of the proposed models are available at https://github.com/mtanveer1/edRVFL-FIS. | [
"['M. Sajid' 'M. Tanveer' 'P. N. Suganthan']"
] |
null | null | 2406.00805 | null | null | http://arxiv.org/pdf/2406.00805v1 | 2024-06-02T17:08:56Z | 2024-06-02T17:08:56Z | Extrapolability Improvement of Machine Learning-Based Evapotranspiration
Models via Domain-Adversarial Neural Networks | Machine learning-based hydrological prediction models, despite their high accuracy, face limitations in extrapolation capabilities when applied globally due to uneven data distribution. This study integrates Domain-Adversarial Neural Networks (DANN) to improve the geographical adaptability of evapotranspiration (ET) models. By employing DANN, we aim to mitigate distributional discrepancies between different sites, significantly enhancing the model's extrapolation capabilities. Our results show that DANN improves ET prediction accuracy with an average increase in the Kling-Gupta Efficiency (KGE) of 0.2 to 0.3 compared to the traditional Leave-One-Out (LOO) method. DANN is particularly effective for isolated sites and transition zones between biomes, reducing data distribution discrepancies and avoiding low-accuracy predictions. By leveraging information from data-rich areas, DANN enhances the reliability of global-scale ET products, especially in ungauged regions. This study highlights the potential of domain adaptation techniques to improve the extrapolation and generalization capabilities of machine learning models in hydrological studies. | [
"['Haiyang Shi']"
] |
null | null | 2406.00806 | null | null | http://arxiv.org/pdf/2406.00806v1 | 2024-06-02T17:09:48Z | 2024-06-02T17:09:48Z | Envisioning Outlier Exposure by Large Language Models for
Out-of-Distribution Detection | Detecting out-of-distribution (OOD) samples is essential when deploying machine learning models in open-world scenarios. Zero-shot OOD detection, requiring no training on in-distribution (ID) data, has been possible with the advent of vision-language models like CLIP. Existing methods build a text-based classifier with only closed-set labels. However, this largely restricts the inherent capability of CLIP to recognize samples from large and open label space. In this paper, we propose to tackle this constraint by leveraging the expert knowledge and reasoning capability of large language models (LLM) to Envision potential Outlier Exposure, termed EOE, without access to any actual OOD data. Owing to better adaptation to open-world scenarios, EOE can be generalized to different tasks, including far, near, and fine-grained OOD detection. Technically, we design (1) LLM prompts based on visual similarity to generate potential outlier class labels specialized for OOD detection, as well as (2) a new score function based on potential outlier penalty to distinguish hard OOD samples effectively. Empirically, EOE achieves state-of-the-art performance across different OOD tasks and can be effectively scaled to the ImageNet-1K dataset. The code is publicly available at: https://github.com/tmlr-group/EOE. | [
"['Chentao Cao' 'Zhun Zhong' 'Zhanke Zhou' 'Yang Liu' 'Tongliang Liu'\n 'Bo Han']"
] |
null | null | 2406.00809 | null | null | http://arxiv.org/pdf/2406.00809v1 | 2024-06-02T17:18:41Z | 2024-06-02T17:18:41Z | Graph Neural Preconditioners for Iterative Solutions of Sparse Linear
Systems | Preconditioning is at the heart of iterative solutions of large, sparse linear systems of equations in scientific disciplines. Several algebraic approaches, which access no information beyond the matrix itself, are widely studied and used, but ill-conditioned matrices remain very challenging. We take a machine learning approach and propose using graph neural networks as a general-purpose preconditioner. They show attractive performance for ill-conditioned problems, in part because they better approximate the matrix inverse from appropriately generated training data. Empirical evaluation on over 800 matrices suggests that the construction time of these graph neural preconditioners (GNPs) is more predictable than other widely used ones, such as ILU and AMG, while the execution time is faster than using a Krylov method as the preconditioner, such as in inner-outer GMRES. GNPs have a strong potential for solving large-scale, challenging algebraic problems arising from not only partial differential equations, but also economics, statistics, graph, and optimization, to name a few. | [
"['Jie Chen']"
] |
null | null | 2406.00812 | null | null | http://arxiv.org/pdf/2406.00812v2 | 2024-06-08T10:28:48Z | 2024-06-02T17:26:27Z | Covariance-Adaptive Sequential Black-box Optimization for Diffusion
Targeted Generation | Diffusion models have demonstrated great potential in generating high-quality content for images, natural language, protein domains, etc. However, how to perform user-preferred targeted generation via diffusion models with only black-box target scores of users remains challenging. To address this issue, we first formulate the fine-tuning of the targeted reverse-time stochastic differential equation (SDE) associated with a pre-trained diffusion model as a sequential black-box optimization problem. Furthermore, we propose a novel covariance-adaptive sequential optimization algorithm to optimize cumulative black-box scores under unknown transition dynamics. Theoretically, we prove an $O(\frac{d^2}{\sqrt{T}})$ convergence rate for cumulative convex functions without smoothness and strong convexity assumptions. Empirically, experiments on both numerical test problems and target-guided 3D-molecule generation tasks show the superior performance of our method in achieving better target scores. | [
"['Yueming Lyu' 'Kim Yong Tan' 'Yew Soon Ong' 'Ivor W. Tsang']"
] |
null | null | 2406.00814 | null | null | http://arxiv.org/pdf/2406.00814v1 | 2024-06-02T17:29:42Z | 2024-06-02T17:29:42Z | Expected Possession Value of Control and Duel Actions for Soccer
Player's Skills Estimation | Estimation of football players' skills is one of the key tasks in sports analytics. This paper introduces multiple extensions to a widely used model, expected possession value (EPV), to address some key challenges such as the selection problem. First, we assign greater weights to events occurring immediately prior to the shot rather than those preceding them (decay effect). Second, our model incorporates possession risk more accurately by considering the decay effect and effective playing time. Third, we integrate the assessment of individual player ability to win aerial and ground duels. Using the extended EPV model, we predict this metric for various football players for the upcoming season, particularly taking into account the strength of their opponents. | [
"['Andrei Shelopugin']"
] |
null | null | 2406.00816 | null | null | http://arxiv.org/pdf/2406.00816v1 | 2024-06-02T17:43:19Z | 2024-06-02T17:43:19Z | Invisible Backdoor Attacks on Diffusion Models | In recent years, diffusion models have achieved remarkable success in the realm of high-quality image generation, garnering increased attention. This surge in interest is paralleled by a growing concern over the security threats associated with diffusion models, largely attributed to their susceptibility to malicious exploitation. Notably, recent research has brought to light the vulnerability of diffusion models to backdoor attacks, enabling the generation of specific target images through corresponding triggers. However, prevailing backdoor attack methods rely on manually crafted trigger generation functions, often manifesting as discernible patterns incorporated into input noise, thus rendering them susceptible to human detection. In this paper, we present an innovative and versatile optimization framework designed to acquire invisible triggers, enhancing the stealthiness and resilience of inserted backdoors. Our proposed framework is applicable to both unconditional and conditional diffusion models, and notably, we are the pioneers in demonstrating the backdooring of diffusion models within the context of text-guided image editing and inpainting pipelines. Moreover, we also show that the backdoors in the conditional generation can be directly applied to model watermarking for model ownership verification, which further boosts the significance of the proposed framework. Extensive experiments on various commonly used samplers and datasets verify the efficacy and stealthiness of the proposed framework. Our code is publicly available at https://github.com/invisibleTriggerDiffusion/invisible_triggers_for_diffusion. | [
"['Sen Li' 'Junchi Ma' 'Minhao Cheng']"
] |
null | null | 2406.00823 | null | null | http://arxiv.org/pdf/2406.00823v1 | 2024-06-02T18:11:47Z | 2024-06-02T18:11:47Z | Lasso Bandit with Compatibility Condition on Optimal Arm | We consider a stochastic sparse linear bandit problem where only a sparse subset of context features affects the expected reward function, i.e., the unknown reward parameter has sparse structure. In the existing Lasso bandit literature, the compatibility conditions together with additional diversity conditions on the context features are imposed to achieve regret bounds that only depend logarithmically on the ambient dimension $d$. In this paper, we demonstrate that even without the additional diversity assumptions, the compatibility condition only on the optimal arm is sufficient to derive a regret bound that depends logarithmically on $d$, and our assumption is strictly weaker than those used in the Lasso bandit literature under the single parameter setting. We propose an algorithm that adapts the forced-sampling technique and prove that the proposed algorithm achieves $O(\text{poly}\log dT)$ regret under the margin condition. To our knowledge, the proposed algorithm requires the weakest assumptions among Lasso bandit algorithms under a single parameter setting that achieve $O(\text{poly}\log dT)$ regret. Through the numerical experiments, we confirm the superior performance of our proposed algorithm. | [
"['Harin Lee' 'Taehyun Hwang' 'Min-hwan Oh']"
] |
null | null | 2406.00826 | null | null | http://arxiv.org/pdf/2406.00826v1 | 2024-06-02T18:19:19Z | 2024-06-02T18:19:19Z | Learning-Based Verification of Stochastic Dynamical Systems with Neural
Network Policies | We consider the verification of neural network policies for reach-avoid control tasks in stochastic dynamical systems. We use a verification procedure that trains another neural network, which acts as a certificate proving that the policy satisfies the task. For reach-avoid tasks, it suffices to show that this certificate network is a reach-avoid supermartingale (RASM). As our main contribution, we significantly accelerate algorithmic approaches for verifying that a neural network is indeed a RASM. The main bottleneck of these approaches is the discretization of the state space of the dynamical system. The following two key contributions allow us to use a coarser discretization than existing approaches. First, we present a novel and fast method to compute tight upper bounds on Lipschitz constants of neural networks based on weighted norms. We further improve these bounds on Lipschitz constants based on the characteristics of the certificate network. Second, we integrate an efficient local refinement scheme that dynamically refines the state space discretization where necessary. Our empirical evaluation shows the effectiveness of our approach for verifying neural network policies in several benchmarks and trained with different reinforcement learning algorithms. | [
"['Thom Badings' 'Wietze Koops' 'Sebastian Junges' 'Nils Jansen']"
] |
null | null | 2406.00832 | null | null | http://arxiv.org/pdf/2406.00832v2 | 2024-06-05T05:23:40Z | 2024-06-02T18:42:57Z | BoNBoN Alignment for Large Language Models and the Sweetness of
Best-of-n Sampling | This paper concerns the problem of aligning samples from large language models to human preferences using best-of-$n$ sampling, where we draw $n$ samples, rank them, and return the best one. We consider two fundamental problems. First: what is the relationship between best-of-$n$ and approaches to alignment that train LLMs to output samples with a high expected reward (e.g., RLHF or DPO)? To answer this, we embed both the best-of-$n$ distribution and the sampling distributions learned by alignment procedures in a common class of tiltings of the base LLM distribution. We then show that, within this class, best-of-$n$ is essentially optimal in terms of the trade-off between win-rate against the base model versus KL distance from the base model. That is, best-of-$n$ is the best choice of alignment distribution if the goal is to maximize win rate. However, best-of-$n$ requires drawing $n$ samples for each inference, a substantial cost. To avoid this, the second problem we consider is how to fine-tune an LLM to mimic the best-of-$n$ sampling distribution. We derive BoNBoN Alignment to achieve this by exploiting the special structure of the best-of-$n$ distribution. Experiments show that BoNBoN alignment yields substantial improvements in producing a model that is preferred to the base policy while minimally affecting off-target aspects. | [
"['Lin Gui' 'Cristina Gârbacea' 'Victor Veitch']"
] |
null | null | 2406.00843 | null | null | http://arxiv.org/pdf/2406.00843v1 | 2024-06-02T19:35:38Z | 2024-06-02T19:35:38Z | Diffusion-Inspired Quantum Noise Mitigation in Parameterized Quantum
Circuits | Parameterized Quantum Circuits (PQCs) have been acknowledged as a leading strategy to utilize near-term quantum advantages in multiple problems, including machine learning and combinatorial optimization. When applied to specific tasks, the parameters in the quantum circuits are trained to minimize the target function. Although there have been comprehensive studies to improve the performance of the PQCs on practical tasks, the errors caused by the quantum noise downgrade the performance when running on real quantum computers. In particular, when the quantum state is transformed through multiple quantum circuit layers, the effect of the quantum noise accumulates, and the state becomes closer to the maximally mixed state, or complete noise. This paper studies the relationship between the quantum noise and the diffusion model. Then, we propose a novel diffusion-inspired learning approach to mitigate the quantum noise in the PQCs and reduce the error for specific tasks. Through our experiments, we illustrate the efficiency of the learning strategy and achieve state-of-the-art performance on classification tasks in the quantum noise scenarios. | [
"['Hoang-Quan Nguyen' 'Xuan Bac Nguyen' 'Samuel Yen-Chi Chen'\n 'Hugh Churchill' 'Nicholas Borys' 'Samee U. Khan' 'Khoa Luu']"
] |
null | null | 2406.00846 | null | null | http://arxiv.org/pdf/2406.00846v2 | 2024-06-12T19:21:23Z | 2024-06-02T19:50:05Z | Local Methods with Adaptivity via Scaling | The rapid development of machine learning and deep learning has introduced increasingly complex optimization challenges that must be addressed. Indeed, training modern, advanced models has become difficult to implement without leveraging multiple computing nodes in a distributed environment. Distributed optimization is also fundamental to emerging fields such as federated learning. Specifically, there is a need to organize the training process to minimize the time lost due to communication. A widely used and extensively researched technique to mitigate the communication bottleneck involves performing local training before communication. This approach is the focus of our paper. Concurrently, adaptive methods that incorporate scaling, notably led by Adam, have gained significant popularity in recent years. Therefore, this paper aims to merge the local training technique with the adaptive approach to develop efficient distributed learning methods. We consider the classical Local SGD method and enhance it with a scaling feature. A crucial aspect is that the scaling is described generically, allowing us to analyze various approaches, including Adam, RMSProp, and OASIS, in a unified manner. In addition to theoretical analysis, we validate the performance of our methods in practice by training a neural network. | [
"['Savelii Chezhegov' 'Sergey Skorik' 'Nikolas Khachaturov'\n 'Danil Shalagin' 'Aram Avetisyan' 'Aleksandr Beznosikov' 'Martin Takáč'\n 'Yaroslav Kholodov' 'Alexander Gasnikov']"
] |
null | null | 2406.00853 | null | null | http://arxiv.org/pdf/2406.00853v2 | 2024-07-08T16:15:08Z | 2024-06-02T20:18:40Z | A Tutorial on Doubly Robust Learning for Causal Inference | Doubly robust learning offers a robust framework for causal inference from observational data by integrating propensity score and outcome modeling. Despite its theoretical appeal, practical adoption remains limited due to perceived complexity and inaccessible software. This tutorial aims to demystify doubly robust methods and demonstrate their application using the EconML package. We provide an introduction to causal inference, discuss the principles of outcome modeling and propensity scores, and illustrate the doubly robust approach through simulated case studies. By simplifying the methodology and offering practical coding examples, we intend to make doubly robust learning accessible to researchers and practitioners in data science and statistics. | [
"['Hlynur Davíð Hlynsson']"
] |
null | null | 2406.00855 | null | null | http://arxiv.org/pdf/2406.00855v1 | 2024-06-02T20:22:22Z | 2024-06-02T20:22:22Z | LinkLogic: A New Method and Benchmark for Explainable Knowledge Graph
Predictions | While there are a plethora of methods for link prediction in knowledge graphs, state-of-the-art approaches are often black box, obfuscating model reasoning and thereby limiting the ability of users to make informed decisions about model predictions. Recently, methods have emerged to generate prediction explanations for Knowledge Graph Embedding models, a widely-used class of methods for link prediction. The question then becomes, how well do these explanation systems work? To date this has generally been addressed anecdotally, or through time-consuming user research. In this work, we present an in-depth exploration of a simple link prediction explanation method we call LinkLogic, that surfaces and ranks explanatory information used for the prediction. Importantly, we construct the first-ever link prediction explanation benchmark, based on family structures present in the FB13 dataset. We demonstrate the use of this benchmark as a rich evaluation sandbox, probing LinkLogic quantitatively and qualitatively to assess the fidelity, selectivity and relevance of the generated explanations. We hope our work paves the way for more holistic and empirical assessment of knowledge graph prediction explanation methods in the future. | [
"['Niraj Kumar-Singh' 'Gustavo Polleti' 'Saee Paliwal'\n 'Rachel Hodos-Nkhereanye']"
] |
null | null | 2406.00856 | null | null | http://arxiv.org/pdf/2406.00856v1 | 2024-06-02T20:22:38Z | 2024-06-02T20:22:38Z | DistilDIRE: A Small, Fast, Cheap and Lightweight Diffusion Synthesized
Deepfake Detection | A dramatic influx of diffusion-generated images has marked recent years, posing unique challenges to current detection technologies. While the task of identifying these images falls under binary classification, a seemingly straightforward category, the computational load is significant when employing the "reconstruction then compare" technique. This approach, known as DIRE (Diffusion Reconstruction Error), not only identifies diffusion-generated images but also detects those produced by GANs, highlighting the technique's broad applicability. To address the computational challenges and improve efficiency, we propose distilling the knowledge embedded in diffusion models to develop rapid deepfake detection models. Our approach, aimed at creating a small, fast, cheap, and lightweight diffusion synthesized deepfake detector, maintains robust performance while significantly reducing operational demands. Maintaining performance, our experimental results indicate an inference speed 3.2 times faster than the existing DIRE framework. This advance not only enhances the practicality of deploying these systems in real-world settings but also paves the way for future research endeavors that seek to leverage diffusion model knowledge. | [
"['Yewon Lim' 'Changyeon Lee' 'Aerin Kim' 'Oren Etzioni']"
] |
null | null | 2406.00868 | null | null | http://arxiv.org/pdf/2406.00868v1 | 2024-06-02T21:05:23Z | 2024-06-02T21:05:23Z | Dual Policy Reinforcement Learning for Real-time Rebalancing in
Bike-sharing Systems | Bike-sharing systems play a crucial role in easing traffic congestion and promoting healthier lifestyles. However, ensuring their reliability and user acceptance requires effective strategies for rebalancing bikes. This study introduces a novel approach to address the real-time rebalancing problem with a fleet of vehicles. It employs a dual policy reinforcement learning algorithm that decouples inventory and routing decisions, enhancing realism and efficiency compared to previous methods where both decisions were made simultaneously. We first formulate the inventory and routing subproblems as a multi-agent Markov Decision Process within a continuous time framework. Subsequently, we propose a DQN-based dual policy framework to jointly estimate the value functions, minimizing the lost demand. To facilitate learning, a comprehensive simulator is applied to operate under a first-arrive-first-serve rule, which enables the computation of immediate rewards across diverse demand scenarios. We conduct extensive experiments on various datasets generated from historical real-world data, affected by both temporal and weather factors. Our proposed algorithm demonstrates significant performance improvements over previous baseline methods. It offers valuable practical insights for operators and further explores the incorporation of reinforcement learning into real-world dynamic programming problems, paving the way for more intelligent and robust urban mobility solutions. | [
"['Jiaqi Liang' 'Defeng Liu' 'Sanjay Dominik Jena' 'Andrea Lodi'\n 'Thibaut Vidal']"
] |
null | null | 2406.00873 | null | null | http://arxiv.org/pdf/2406.00873v2 | 2024-06-30T12:12:23Z | 2024-06-02T21:40:13Z | Scaffold Splits Overestimate Virtual Screening Performance | Virtual Screening (VS) of vast compound libraries guided by Artificial Intelligence (AI) models is a highly productive approach to early drug discovery. Data splitting is crucial for better benchmarking of such AI models. Traditional random data splits produce similar molecules between training and test sets, conflicting with the reality of VS libraries which mostly contain structurally distinct compounds. Scaffold split, grouping molecules by shared core structure, is widely considered to reflect this real-world scenario. However, here we show that the scaffold split also overestimates VS performance. The reason is that molecules with different chemical scaffolds are often similar, which hence introduces unrealistically high similarities between training molecules and test molecules following a scaffold split. Our study examined three representative AI models on 60 NCI-60 datasets, each with approximately 30,000 to 50,000 molecules tested on a different cancer cell line. Each dataset was split with three methods: scaffold, Butina clustering and the more accurate Uniform Manifold Approximation and Projection (UMAP) clustering. Across the results of the 2100 models trained and evaluated for each algorithm and split, model performance is much worse with UMAP splits, regardless of the model. These robust results demonstrate the need for more realistic data splits to tune, compare, and select models for VS. For the same reason, avoiding the scaffold split is also recommended for other molecular property prediction problems. The code to reproduce these results is available at https://github.com/ScaffoldSplitsOverestimateVS | [
"['Qianrong Guo' 'Saiveth Hernandez-Hernandez' 'Pedro J Ballester']"
] |
null | null | 2406.00877 | null | null | http://arxiv.org/pdf/2406.00877v1 | 2024-06-02T21:57:32Z | 2024-06-02T21:57:32Z | Evidence of Learned Look-Ahead in a Chess-Playing Neural Network | Do neural networks learn to implement algorithms such as look-ahead or search "in the wild"? Or do they rely purely on collections of simple heuristics? We present evidence of learned look-ahead in the policy network of Leela Chess Zero, the currently strongest neural chess engine. We find that Leela internally represents future optimal moves and that these representations are crucial for its final output in certain board states. Concretely, we exploit the fact that Leela is a transformer that treats every chessboard square like a token in language models, and give three lines of evidence: (1) activations on certain squares of future moves are unusually important causally; (2) we find attention heads that move important information "forward and backward in time," e.g., from squares of future moves to squares of earlier ones; and (3) we train a simple probe that can predict the optimal move 2 turns ahead with 92% accuracy (in board states where Leela finds a single best line). These findings are an existence proof of learned look-ahead in neural networks and might be a step towards a better understanding of their capabilities. | [
"['Erik Jenner' 'Shreyas Kapur' 'Vasil Georgiev' 'Cameron Allen'\n 'Scott Emmons' 'Stuart Russell']"
] |
null | null | 2406.00879 | null | null | http://arxiv.org/pdf/2406.00879v1 | 2024-06-02T21:58:54Z | 2024-06-02T21:58:54Z | Quantum Equilibrium Propagation: Gradient-Descent Training of Quantum
Systems | Equilibrium propagation (EP) is a training framework for energy-based systems, i.e. systems whose physics minimizes an energy function. EP has been explored in various classical physical systems such as resistor networks, elastic networks, the classical Ising model and coupled phase oscillators. A key advantage of EP is that it achieves gradient descent on a cost function using the physics of the system to extract the weight gradients, making it a candidate for the development of energy-efficient processors for machine learning. We extend EP to quantum systems, where the energy function that is minimized is the mean energy functional (expectation value of the Hamiltonian), whose minimum is the ground state of the Hamiltonian. As examples, we study the settings of the transverse-field Ising model and the quantum harmonic oscillator network -- quantum analogues of the Ising model and elastic network. | [
"['Benjamin Scellier']"
] |
null | null | 2406.00889 | null | null | http://arxiv.org/pdf/2406.00889v1 | 2024-06-02T23:16:00Z | 2024-06-02T23:16:00Z | Reservoir History Matching of the Norne field with generative exotic
priors and a coupled Mixture of Experts -- Physics Informed Neural Operator
Forward Model | We developed a novel reservoir characterization workflow that addresses reservoir history matching by coupling a physics-informed neural operator (PINO) forward model with a mixture-of-experts approach, termed cluster classify regress (CCR). The inverse modelling is achieved via an adaptive Regularized Ensemble Kalman inversion (aREKI) method, ideal for rapid inverse uncertainty quantification during history matching. We parametrize unknown permeability and porosity fields for non-Gaussian posterior measures using exotic priors from a variational convolutional autoencoder and a denoising diffusion implicit model (DDIM). The CCR works as a supervised model with the PINO surrogate to replicate nonlinear Peaceman well equations. The CCR's flexibility allows any independent machine-learning algorithm for each stage. The PINO reservoir surrogate's loss function is derived from supervised data loss and losses from the initial conditions and residual of the governing black oil PDE. The PINO-CCR surrogate outputs pressure, water, and gas saturations, along with oil, water, and gas production rates. The methodology was compared to a standard numerical black oil simulator for a waterflooding case on the Norne field, showing similar outputs. This PINO-CCR surrogate was then used in the aREKI history matching workflow, successfully recovering the unknown permeability, porosity and fault multiplier, with simulations up to 6000 times faster than conventional methods. Training the PINO-CCR surrogate on an NVIDIA H100 with 80G memory takes about 5 hours for 100 samples of the Norne field. This workflow is suitable for ensemble-based approaches, where posterior density sampling, given an expensive likelihood evaluation, is desirable for uncertainty quantification. | [
"['Clement Etienam' 'Yang Juntao' 'Oleg Ovcharenko' 'Issam Said']"
] |
null | null | 2406.00894 | null | null | http://arxiv.org/pdf/2406.00894v1 | 2024-06-02T23:24:30Z | 2024-06-02T23:24:30Z | Pretrained Hybrids with MAD Skills | While Transformers underpin modern large language models (LMs), there is a growing list of alternative architectures with new capabilities, promises, and tradeoffs. This makes choosing the right LM architecture challenging. Recently-proposed $\textit{hybrid architectures}$ seek a best-of-all-worlds approach that reaps the benefits of all architectures. Hybrid design is difficult for two reasons: it requires manual expert-driven search, and new hybrids must be trained from scratch. We propose $\textbf{Manticore}$, a framework that addresses these challenges. Manticore $\textit{automates the design of hybrid architectures}$ while reusing pretrained models to create $\textit{pretrained}$ hybrids. Our approach augments ideas from differentiable Neural Architecture Search (NAS) by incorporating simple projectors that translate features between pretrained blocks from different architectures. We then fine-tune hybrids that combine pretrained models from different architecture families -- such as the GPT series and Mamba -- end-to-end. With Manticore, we enable LM selection without training multiple models, the construction of pretrained hybrids from existing pretrained models, and the ability to $\textit{program}$ pretrained hybrids to have certain capabilities. Manticore hybrids outperform existing manually-designed hybrids, achieve strong performance on Long Range Arena (LRA) tasks, and can improve on pretrained transformers and state space models. | [
"['Nicholas Roberts' 'Samuel Guo' 'Zhiqi Gao'\n 'Satya Sai Srinath Namburi GNVV' 'Sonia Cromp' 'Chengjun Wu'\n 'Chengyu Duan' 'Frederic Sala']"
] |
null | null | 2406.00901 | null | null | http://arxiv.org/pdf/2406.00901v1 | 2024-06-02T23:51:43Z | 2024-06-02T23:51:43Z | Robust Multi-Modal Speech In-Painting: A Sequence-to-Sequence Approach | The process of reconstructing missing parts of speech audio from context is called speech in-painting. Human perception of speech is inherently multi-modal, involving both audio and visual (AV) cues. In this paper, we introduce and study a sequence-to-sequence (seq2seq) speech in-painting model that incorporates AV features. Our approach extends AV speech in-painting techniques to scenarios where both audio and visual data may be jointly corrupted. To achieve this, we employ a multi-modal training paradigm that boosts the robustness of our model across various conditions involving acoustic and visual distortions. This makes our distortion-aware model a plausible solution for real-world challenging environments. We compare our method with existing transformer-based and recurrent neural network-based models, which attempt to reconstruct missing speech gaps ranging from a few milliseconds to over a second. Our experimental results demonstrate that our novel seq2seq architecture outperforms the state-of-the-art transformer solution by 38.8% in terms of enhancing speech quality and 7.14% in terms of improving speech intelligibility. We exploit a multi-task learning framework that simultaneously performs lip-reading (transcribing video components to text) while reconstructing missing parts of the associated speech. | [
"['Mahsa Kadkhodaei Elyaderani' 'Shahram Shirani']"
] |
null | null | 2406.00907 | null | null | http://arxiv.org/pdf/2406.00907v2 | 2024-06-06T01:46:22Z | 2024-06-03T00:30:23Z | DDA: Dimensionality Driven Augmentation Search for Contrastive Learning
in Laparoscopic Surgery | Self-supervised learning (SSL) has potential for effective representation learning in medical imaging, but the choice of data augmentation is critical and domain-specific. It remains uncertain if general augmentation policies suit surgical applications. In this work, we automate the search for suitable augmentation policies through a new method called Dimensionality Driven Augmentation Search (DDA). DDA leverages the local dimensionality of deep representations as a proxy target, and differentiably searches for suitable data augmentation policies in contrastive learning. We demonstrate the effectiveness and efficiency of DDA in navigating a large search space and successfully identifying an appropriate data augmentation policy for laparoscopic surgery. We systematically evaluate DDA across three laparoscopic image classification and segmentation tasks, where it significantly improves over existing baselines. Furthermore, DDA's optimised set of augmentations provides insight into domain-specific dependencies when applying contrastive learning in medical applications. For example, while hue is an effective augmentation for natural images, it is not advantageous for laparoscopic images. | [
"['Yuning Zhou' 'Henry Badgery' 'Matthew Read' 'James Bailey'\n 'Catherine E. Davey']"
] |
null | null | 2406.00918 | null | null | http://arxiv.org/pdf/2406.00918v1 | 2024-06-03T01:04:50Z | 2024-06-03T01:04:50Z | Assessing the Adversarial Security of Perceptual Hashing Algorithms | Perceptual hashing algorithms (PHAs) are utilized extensively for identifying illegal online content. Given their crucial role in sensitive applications, understanding their security strengths and weaknesses is critical. This paper compares three major PHAs deployed widely in practice: PhotoDNA, PDQ, and NeuralHash, and assesses their robustness against three typical attacks: normal image editing attacks, malicious adversarial attacks, and hash inversion attacks. Contrary to prevailing studies, this paper reveals that these PHAs exhibit resilience to black-box adversarial attacks when realistic constraints regarding the distortion and query budget are applied, attributed to the unique property of random hash variations. Moreover, this paper illustrates that original images can be reconstructed from the hash bits, raising significant privacy concerns. By comprehensively exposing their security vulnerabilities, this paper contributes to the ongoing efforts aimed at enhancing the security of PHAs for effective deployment. | [
"['Jordan Madden' 'Moxanki Bhavsar' 'Lhamo Dorje' 'Xiaohua Li']"
] |
null | null | 2406.00920 | null | null | http://arxiv.org/pdf/2406.00920v1 | 2024-06-03T01:13:19Z | 2024-06-03T01:13:19Z | Demystifying SGD with Doubly Stochastic Gradients | Optimization objectives in the form of a sum of intractable expectations are rising in importance (e.g., diffusion models, variational autoencoders, and many more), a setting also known as "finite sum with infinite data." For these problems, a popular strategy is to employ SGD with doubly stochastic gradients (doubly SGD): the expectations are estimated using the gradient estimator of each component, while the sum is estimated by subsampling over these estimators. Despite its popularity, little is known about the convergence properties of doubly SGD, except under strong assumptions such as bounded variance. In this work, we establish the convergence of doubly SGD with independent minibatching and random reshuffling under general conditions, which encompass dependent component gradient estimators. In particular, for dependent estimators, our analysis allows a fine-grained analysis of the effect of correlations. As a result, under a per-iteration computational budget of $b \times m$, where $b$ is the minibatch size and $m$ is the number of Monte Carlo samples, our analysis suggests where one should invest most of the budget in general. Furthermore, we prove that random reshuffling (RR) improves the complexity dependence on the subsampling noise. | [
"['Kyurae Kim' 'Joohwan Ko' 'Yi-An Ma' 'Jacob R. Gardner']"
] |
null | null | 2406.00924 | null | null | http://arxiv.org/pdf/2406.00924v1 | 2024-06-03T01:34:34Z | 2024-06-03T01:34:34Z | Faster Diffusion-based Sampling with Randomized Midpoints: Sequential
and Parallel | In recent years, there has been a surge of interest in proving discretization bounds for diffusion models. These works show that for essentially any data distribution, one can approximately sample in polynomial time given a sufficiently accurate estimate of its score functions at different noise levels. In this work, we propose a new discretization scheme for diffusion models inspired by Shen and Lee's randomized midpoint method for log-concave sampling~\cite{ShenL19}. We prove that this approach achieves the best known dimension dependence for sampling from arbitrary smooth distributions in total variation distance ($\widetilde O(d^{5/12})$ compared to $\widetilde O(\sqrt{d})$ from prior work). We also show that our algorithm can be parallelized to run in only $\widetilde O(\log^2 d)$ parallel rounds, constituting the first provable guarantees for parallel sampling with diffusion models. As a byproduct of our methods, for the well-studied problem of log-concave sampling in total variation distance, we give an algorithm and simple analysis achieving dimension dependence $\widetilde O(d^{5/12})$ compared to $\widetilde O(\sqrt{d})$ from prior work. | [
"['Shivam Gupta' 'Linda Cai' 'Sitan Chen']"
] |
null | null | 2406.00943 | null | null | http://arxiv.org/pdf/2406.00943v1 | 2024-06-03T02:56:11Z | 2024-06-03T02:56:11Z | State Space Models on Temporal Graphs: A First-Principles Study | Over the past few years, research on deep graph learning has shifted from static graphs to temporal graphs in response to real-world complex systems that exhibit dynamic behaviors. In practice, temporal graphs are formalized as an ordered sequence of static graph snapshots observed at discrete time points. Sequence models such as RNNs or Transformers have long been the predominant backbone networks for modeling such temporal graphs. Yet, despite the promising results, RNNs struggle with long-range dependencies, while transformers are burdened by quadratic computational complexity. Recently, state space models (SSMs), which are framed as discretized representations of an underlying continuous-time linear dynamical system, have garnered substantial attention and achieved breakthrough advancements in independent sequence modeling. In this work, we undertake a principled investigation that extends SSM theory to temporal graphs by integrating structural information into the online approximation objective via the adoption of a Laplacian regularization term. The emergent continuous-time system introduces novel algorithmic challenges, thereby necessitating our development of GraphSSM, a graph state space model for modeling the dynamics of temporal graphs. Extensive experimental results demonstrate the effectiveness of our GraphSSM framework across various temporal graph benchmarks. | [
"['Jintang Li' 'Ruofan Wu' 'Xinzhou Jin' 'Boqun Ma' 'Liang Chen'\n 'Zibin Zheng']"
] |
null | null | 2406.00956 | null | null | http://arxiv.org/pdf/2406.00956v1 | 2024-06-03T03:16:25Z | 2024-06-03T03:16:25Z | Improving Segment Anything on the Fly: Auxiliary Online Learning and
Adaptive Fusion for Medical Image Segmentation | The current variants of the Segment Anything Model (SAM), which include the original SAM and Medical SAM, still lack the capability to produce sufficiently accurate segmentation for medical images. In medical imaging contexts, it is not uncommon for human experts to rectify segmentations of specific test samples after SAM generates its segmentation predictions. These rectifications typically entail manual or semi-manual corrections employing state-of-the-art annotation tools. Motivated by this process, we introduce a novel approach that leverages the advantages of online machine learning to enhance Segment Anything (SA) during test time. We employ rectified annotations to perform online learning, with the aim of improving the segmentation quality of SA on medical images. To improve the effectiveness and efficiency of online learning when integrated with large-scale vision models like SAM, we propose a new method called Auxiliary Online Learning (AuxOL). AuxOL creates and applies a small auxiliary model (specialist) in conjunction with SAM (generalist), and entails adaptive online-batch and adaptive segmentation fusion. Experiments conducted on eight datasets covering four medical imaging modalities validate the effectiveness of the proposed method. Our work proposes and validates a new, practical, and effective approach for enhancing SA on downstream segmentation tasks (e.g., medical image segmentation). | [
"['Tianyu Huang' 'Tao Zhou' 'Weidi Xie' 'Shuo Wang' 'Qi Dou' 'Yizhe Zhang']"
] |
null | null | 2406.00958 | null | null | http://arxiv.org/pdf/2406.00958v1 | 2024-06-03T03:22:18Z | 2024-06-03T03:22:18Z | Navigating Conflicting Views: Harnessing Trust for Learning | Resolving conflicts is essential to make the decisions of multi-view classification more reliable. Much research has been conducted on learning consistent informative representations among different views, assuming that all views are identically important and strictly aligned. However, real-world multi-view data may not always conform to these assumptions, as some views may express distinct information. To address this issue, we develop a computational trust-based discounting method to enhance the existing trustworthy framework in scenarios where conflicts between different views may arise. Its belief fusion process considers the trustworthiness of predictions made by individual views via an instance-wise probability-sensitive trust discounting mechanism. We evaluate our method on six real-world datasets, using Top-1 Accuracy, AUC-ROC for Uncertainty-Aware Prediction, Fleiss' Kappa, and a new metric called Multi-View Agreement with Ground Truth that takes into consideration the ground truth labels. The experimental results show that computational trust can effectively resolve conflicts, paving the way for more reliable multi-view classification models in real-world applications. | [
"['Jueqing Lu' 'Lan Du' 'Wray Buntine' 'Myong Chol Jung' 'Joanna Dipnall'\n 'Belinda Gabbe']"
] |
null | null | 2406.00973 | null | null | http://arxiv.org/pdf/2406.00973v1 | 2024-06-03T04:03:24Z | 2024-06-03T04:03:24Z | Cold-start Recommendation by Personalized Embedding Region Elicitation | Rating elicitation is a success element for recommender systems to perform well at cold-starting, in which the systems need to recommend items to a newly arrived user with no prior knowledge about the user's preference. Existing elicitation methods employ a fixed set of items to learn the user's preference and then infer the users' preferences on the remaining items. Using a fixed seed set can limit the performance of the recommendation system since the seed set is unlikely optimal for all new users with potentially diverse preferences. This paper addresses this challenge using a 2-phase, personalized elicitation scheme. First, the elicitation scheme asks users to rate a small set of popular items in a ``burn-in'' phase. Second, it sequentially asks the user to rate adaptive items to refine the preference and the user's representation. Throughout the process, the system represents the user's embedding value not by a point estimate but by a region estimate. The value of information obtained by asking the user's rating on an item is quantified by the distance from the center of the embedding region that contains, with high confidence, the true embedding value of the user. Finally, the recommendations are successively generated by considering the preference region of the user. We show that each subproblem in the elicitation scheme can be efficiently implemented. Further, we empirically demonstrate the effectiveness of the proposed method against existing rating-elicitation methods on several prominent datasets. | [
"['Hieu Trung Nguyen' 'Duy Nguyen' 'Khoa Doan' 'Viet Anh Nguyen']"
] |
null | null | 2406.00987 | null | null | http://arxiv.org/pdf/2406.00987v1 | 2024-06-03T04:48:45Z | 2024-06-03T04:48:45Z | Enhancing Fairness in Unsupervised Graph Anomaly Detection through
Disentanglement | Graph anomaly detection (GAD) is increasingly crucial in various applications, ranging from financial fraud detection to fake news detection. However, current GAD methods largely overlook the fairness problem, which might result in discriminatory decisions skewed toward certain demographic groups defined on sensitive attributes (e.g., gender, religion, ethnicity, etc.). This greatly limits the applicability of these methods in real-world scenarios in light of societal and ethical restrictions. To address this critical gap, we make the first attempt to integrate fairness with utility in GAD decision-making. Specifically, we devise a novel DisEntangle-based FairnEss-aware aNomaly Detection framework on the attributed graph, named DEFEND. DEFEND first introduces disentanglement in GNNs to capture informative yet sensitive-irrelevant node representations, effectively reducing societal bias inherent in graph representation learning. Besides, to alleviate discriminatory bias in evaluating anomalous nodes, DEFEND adopts a reconstruction-based anomaly detection, which concentrates solely on node attributes without incorporating any graph structure. Additionally, given the inherent association between input and sensitive attributes, DEFEND constrains the correlation between the reconstruction error and the predicted sensitive attributes. Our empirical evaluations on real-world datasets reveal that DEFEND performs effectively in GAD and significantly enhances fairness compared to state-of-the-art baselines. To foster reproducibility, our code is available at https://github.com/AhaChang/DEFEND. | [
"['Wenjing Chang' 'Kay Liu' 'Philip S. Yu' 'Jianjun Yu']"
] |
null | null | 2406.00990 | null | null | http://arxiv.org/pdf/2406.00990v1 | 2024-06-03T04:53:20Z | 2024-06-03T04:53:20Z | Constraint-Aware Diffusion Models for Trajectory Optimization | The diffusion model has shown success in generating high-quality and diverse solutions to trajectory optimization problems. However, diffusion models with neural networks inevitably make prediction errors, which leads to constraint violations such as unmet goals or collisions. This paper presents a novel constraint-aware diffusion model for trajectory optimization. We introduce a novel hybrid loss function for training that minimizes the constraint violation of diffusion samples compared to the groundtruth while recovering the original data distribution. Our model is demonstrated on tabletop manipulation and two-car reach-avoid problems, outperforming traditional diffusion models in minimizing constraint violations while generating samples close to locally optimal solutions. | [
"['Anjian Li' 'Zihan Ding' 'Adji Bousso Dieng' 'Ryne Beeson']"
] |
null | null | 2406.00998 | null | null | http://arxiv.org/pdf/2406.00998v1 | 2024-06-03T05:14:32Z | 2024-06-03T05:14:32Z | Distributional Refinement Network: Distributional Forecasting via Deep
Learning | A key task in actuarial modelling involves modelling the distributional properties of losses. Classic (distributional) regression approaches like Generalized Linear Models (GLMs; Nelder and Wedderburn, 1972) are commonly used, but challenges remain in developing models that can (i) allow covariates to flexibly impact different aspects of the conditional distribution, (ii) integrate developments in machine learning and AI to maximise the predictive power while considering (i), and, (iii) maintain a level of interpretability in the model to enhance trust in the model and its outputs, which is often compromised in efforts pursuing (i) and (ii). We tackle this problem by proposing a Distributional Refinement Network (DRN), which combines an inherently interpretable baseline model (such as GLMs) with a flexible neural network, a modified Deep Distribution Regression (DDR; Li et al., 2019) method. Inspired by the Combined Actuarial Neural Network (CANN; Schelldorfer and Wüthrich, 2019), our approach flexibly refines the entire baseline distribution. As a result, the DRN captures varying effects of features across all quantiles, improving predictive performance while maintaining adequate interpretability. Using both synthetic and real-world data, we demonstrate the DRN's superior distributional forecasting capacity. The DRN has the potential to be a powerful distributional regression model in actuarial science and beyond. | [
"['Benjamin Avanzi' 'Eric Dong' 'Patrick J. Laub' 'Bernard Wong']"
] |
null | null | 2406.00999 | null | null | http://arxiv.org/pdf/2406.00999v1 | 2024-06-03T05:15:04Z | 2024-06-03T05:15:04Z | Seeing the Forest through the Trees: Data Leakage from Partial
Transformer Gradients | Recent studies have shown that distributed machine learning is vulnerable to gradient inversion attacks, where private training data can be reconstructed by analyzing the gradients of the models shared in training. Previous attacks established that such reconstructions are possible using gradients from all parameters in the entire models. However, we hypothesize that most of the involved modules, or even their sub-modules, are at risk of training data leakage, and we validate such vulnerabilities in various intermediate layers of language models. Our extensive experiments reveal that gradients from a single Transformer layer, or even a single linear component with 0.54% parameters, are susceptible to training data leakage. Additionally, we show that applying differential privacy on gradients during training offers limited protection against the novel vulnerability of data disclosure. | [
"['Weijun Li' 'Qiongkai Xu' 'Mark Dras']"
] |
null | null | 2406.01012 | null | null | http://arxiv.org/pdf/2406.01012v1 | 2024-06-03T05:46:52Z | 2024-06-03T05:46:52Z | Attention-based Iterative Decomposition for Tensor Product
Representation | In recent research, Tensor Product Representation (TPR) is applied for the systematic generalization task of deep neural networks by learning the compositional structure of data. However, such prior works show limited performance in discovering and representing the symbolic structure from unseen test data because their decomposition to the structural representations was incomplete. In this work, we propose an Attention-based Iterative Decomposition (AID) module designed to enhance the decomposition operations for the structured representations encoded from the sequential input data with TPR. Our AID can be easily adapted to any TPR-based model and provides enhanced systematic decomposition through a competitive attention mechanism between input features and structured representations. In our experiments, AID shows effectiveness by significantly improving the performance of TPR-based prior works on the series of systematic generalization tasks. Moreover, in the quantitative and qualitative evaluations, AID produces more compositional and well-bound structural representations than other works. | [
"['Taewon Park' 'Inchul Choi' 'Minho Lee']"
] |
null | null | 2406.01013 | null | null | http://arxiv.org/pdf/2406.01013v2 | 2024-06-18T20:53:08Z | 2024-06-03T05:46:53Z | Scalable Ensembling For Mitigating Reward Overoptimisation | Reinforcement Learning from Human Feedback (RLHF) has enabled significant advancements within language modeling for powerful, instruction-following models. However, the alignment of these models remains a pressing challenge as the policy tends to overfit the learned ``proxy" reward model past an inflection point of utility as measured by a ``gold" reward model that is more performant -- a phenomenon known as overoptimisation. Prior work has mitigated this issue by computing a pessimistic statistic over an ensemble of reward models, which is common in Offline Reinforcement Learning but incredibly costly for language models with high memory requirements, making such approaches infeasible for sufficiently large models. To this end, we propose using a shared encoder but separate linear heads. We find this leads to similar performance as the full ensemble while allowing tremendous savings in memory and time required for training for models of similar size. | [
"['Ahmed M. Ahmed' 'Rafael Rafailov' 'Stepan Sharkov' 'Xuechen Li'\n 'Sanmi Koyejo']"
] |
null | null | 2406.01018 | null | null | http://arxiv.org/pdf/2406.01018v1 | 2024-06-03T05:56:02Z | 2024-06-03T05:56:02Z | Accent Conversion in Text-To-Speech Using Multi-Level VAE and
Adversarial Training | With rapid globalization, the need to build inclusive and representative speech technology cannot be overstated. Accent is an important aspect of speech that needs to be taken into consideration while building inclusive speech synthesizers. Inclusive speech technology aims to erase any biases towards specific groups, such as people of certain accent. We note that state-of-the-art Text-to-Speech (TTS) systems may currently not be suitable for all people, regardless of their background, as they are designed to generate high-quality voices without focusing on accent. In this paper, we propose a TTS model that utilizes a Multi-Level Variational Autoencoder with adversarial learning to address accented speech synthesis and conversion in TTS, with a vision for more inclusive systems in the future. We evaluate the performance through both objective metrics and subjective listening tests. The results show an improvement in accent conversion ability compared to the baseline. | [
"['Jan Melechovsky' 'Ambuj Mehrish' 'Berrak Sisman' 'Dorien Herremans']"
] |
null | null | 2406.01027 | null | null | http://arxiv.org/pdf/2406.01027v1 | 2024-06-03T06:21:53Z | 2024-06-03T06:21:53Z | PRICE: A Pretrained Model for Cross-Database Cardinality Estimation | Cardinality estimation (CardEst) is essential for optimizing query execution plans. Recent ML-based CardEst methods achieve high accuracy but face deployment challenges due to high preparation costs and lack of transferability across databases. In this paper, we propose PRICE, a PRetrained multI-table CardEst model, which addresses these limitations. PRICE takes low-level but transferable features w.r.t. data distributions and query information and elegantly applies self-attention models to learn meta-knowledge to compute cardinality in any database. It is generally applicable to any unseen new database to attain high estimation accuracy, while its preparation cost is as little as the basic one-dimensional histogram-based CardEst methods. Moreover, PRICE can be finetuned to further enhance its performance on any specific database. We pretrained PRICE using 30 diverse datasets, completing the process in about 5 hours with a resulting model size of only about 40MB. Evaluations show that PRICE consistently outperforms existing methods, achieving the highest estimation accuracy on several unseen databases and generating faster execution plans with lower overhead. After finetuning with a small volume of database-specific queries, PRICE could even find plans very close to the optimal ones. Meanwhile, PRICE is generally applicable to different settings such as data updates, data scaling, and query workload shifts. We have made all of our data and codes publicly available at https://github.com/StCarmen/PRICE. | [
"['Tianjing Zeng' 'Junwei Lan' 'Jiahong Ma' 'Wenqing Wei' 'Rong Zhu'\n 'Pengfei Li' 'Bolin Ding' 'Defu Lian' 'Zhewei Wei' 'Jingren Zhou']"
] |
null | null | 2406.01032 | null | null | http://arxiv.org/pdf/2406.01032v1 | 2024-06-03T06:33:51Z | 2024-06-03T06:33:51Z | LLM and GNN are Complementary: Distilling LLM for Multimodal Graph
Learning | Recent progress in Graph Neural Networks (GNNs) has greatly enhanced the ability to model complex molecular structures for predicting properties. Nevertheless, molecular data encompasses more than just graph structures, including textual and visual information that GNNs do not handle well. To bridge this gap, we present an innovative framework that utilizes multimodal molecular data to extract insights from Large Language Models (LLMs). We introduce GALLON (Graph Learning from Large Language Model Distillation), a framework that synergizes the capabilities of LLMs and GNNs by distilling multimodal knowledge into a unified Multilayer Perceptron (MLP). This method integrates the rich textual and visual data of molecules with the structural analysis power of GNNs. Extensive experiments reveal that our distilled MLP model notably improves the accuracy and efficiency of molecular property predictions. | [
"['Junjie Xu' 'Zongyu Wu' 'Minhua Lin' 'Xiang Zhang' 'Suhang Wang']"
] |
null | null | 2406.01033 | null | null | http://arxiv.org/pdf/2406.01033v1 | 2024-06-03T06:35:11Z | 2024-06-03T06:35:11Z | Generalized Jersey Number Recognition Using Multi-task Learning With
Orientation-guided Weight Refinement | Jersey number recognition (JNR) has always been an important task in sports analytics. Improving recognition accuracy remains an ongoing challenge because images are subject to blurring, occlusion, deformity, and low resolution. Recent research has addressed these problems using number localization and optical character recognition. Some approaches apply player identification schemes to image sequences, ignoring the impact of human body rotation angles on jersey digit identification. Accurately predicting the number of jersey digits by using a multi-task scheme to recognize each individual digit enables more robust results. Based on the above considerations, this paper proposes a multi-task learning method called the angle-digit refine scheme (ADRS), which combines human body orientation angles and digit number clues to recognize athletic jersey numbers. Based on our experimental results, our approach increases inference information, significantly improving prediction accuracy. Compared to state-of-the-art methods, which can only handle a single type of sport, the proposed method produces a more diverse and practical JNR application. The incorporation of diverse types of team sports such as soccer, football, basketball, volleyball, and baseball into our dataset contributes greatly to generalized JNR in sports analytics. Our accuracy achieves 64.07% on Top-1 and 89.97% on Top-2, with corresponding F1 scores of 67.46% and 90.64%, respectively. | [
"['Yung-Hui Lin' 'Yu-Wen Chang' 'Huang-Chia Shih' 'Takahiro Ogawa']"
] |
null | null | 2406.01047 | null | null | http://arxiv.org/pdf/2406.01047v1 | 2024-06-03T06:55:26Z | 2024-06-03T06:55:26Z | An Advanced Reinforcement Learning Framework for Online Scheduling of
Deferrable Workloads in Cloud Computing | Efficient resource utilization and perfect user experience usually conflict with each other in cloud computing platforms. Great efforts have been invested in increasing resource utilization while trying not to affect users' experience for cloud computing platforms. In order to better utilize the remaining pieces of computing resources spread over the whole platform, deferrable jobs are provided with a discounted price to users. For this type of deferrable jobs, users are allowed to submit jobs that will run for a specific uninterrupted duration in a flexible range of time in the future with a great discount. With these deferrable jobs to be scheduled under the remaining capacity after deploying those on-demand jobs, it remains a challenge to achieve high resource utilization and meanwhile shorten the waiting time for users as much as possible in an online manner. In this paper, we propose an online deferrable job scheduling method called \textit{Online Scheduling for DEferrable jobs in Cloud} (OSDEC), where a deep reinforcement learning model is adopted to learn the scheduling policy, and several auxiliary tasks are utilized to provide better state representations and improve the performance of the model. With the integrated reinforcement learning framework, the proposed method can well plan the deployment schedule and achieve a short waiting time for users while maintaining a high resource utilization for the platform. The proposed method is validated on a public dataset and shows superior performance. | [
"['Hang Dong' 'Liwen Zhu' 'Zhao Shan' 'Bo Qiao' 'Fangkai Yang' 'Si Qin'\n 'Chuan Luo' 'Qingwei Lin' 'Yuwen Yang' 'Gurpreet Virdi'\n 'Saravan Rajmohan' 'Dongmei Zhang' 'Thomas Moscibroda']"
] |
null | null | 2406.01054 | null | null | http://arxiv.org/pdf/2406.01054v1 | 2024-06-03T07:08:27Z | 2024-06-03T07:08:27Z | Confidence-Based Task Prediction in Continual Disease Classification
Using Probability Distribution | Deep learning models are widely recognized for their effectiveness in identifying medical image findings in disease classification. However, their limitations become apparent in the dynamic and ever-changing clinical environment, characterized by the continuous influx of newly annotated medical data from diverse sources. In this context, the need for continual learning becomes particularly paramount, not only to adapt to evolving medical scenarios but also to ensure the privacy of healthcare data. In our research, we emphasize the utilization of a network comprising expert classifiers, where a new expert classifier is added each time a new task is introduced. We present CTP, a task-id predictor that utilizes confidence scores, leveraging the probability distribution (logits) of the classifier to accurately determine the task-id at inference time. Logits are adjusted to ensure that classifiers yield a high-entropy distribution for data associated with tasks other than their own. By defining a noise region in the distribution and computing confidence scores, CTP achieves superior performance when compared to other relevant continual learning methods. Additionally, the performance of CTP can be further improved by providing it with a continuum of data at the time of inference. | [
"['Tanvi Verma' 'Lukas Schwemer' 'Mingrui Tan' 'Fei Gao' 'Yong Liu'\n 'Huazhu Fu']"
] |
null | null | 2406.01056 | null | null | http://arxiv.org/pdf/2406.01056v1 | 2024-06-03T07:10:15Z | 2024-06-03T07:10:15Z | Virtual avatar generation models as world navigators | We introduce SABR-CLIMB, a novel video model simulating human movement in rock climbing environments using a virtual avatar. Our diffusion transformer predicts the sample instead of noise in each diffusion step and ingests entire videos to output complete motion sequences. By leveraging a large proprietary dataset, NAV-22M, and substantial computational resources, we showcase a proof of concept for a system to train general-purpose virtual avatars for complex tasks in robotics, sports, and healthcare. | [
"['Sai Mandava']"
] |
null | null | 2406.01065 | null | null | http://arxiv.org/pdf/2406.01065v1 | 2024-06-03T07:28:57Z | 2024-06-03T07:28:57Z | Causal prompting model-based offline reinforcement learning | Model-based offline Reinforcement Learning (RL) allows agents to fully utilise pre-collected datasets without requiring additional or unethical explorations. However, applying model-based offline RL to online systems presents challenges, primarily due to the highly suboptimal (noise-filled) and diverse nature of datasets generated by online systems. To tackle these issues, we introduce the Causal Prompting Reinforcement Learning (CPRL) framework, designed for highly suboptimal and resource-constrained online scenarios. The initial phase of CPRL involves the introduction of the Hidden-Parameter Block Causal Prompting Dynamic (Hip-BCPD) to model environmental dynamics. This approach utilises invariant causal prompts and aligns hidden parameters to generalise to new and diverse online users. In the subsequent phase, a single policy is trained to address multiple tasks through the amalgamation of reusable skills, circumventing the need for training from scratch. Experiments conducted across datasets with varying levels of noise, including simulation-based and real-world offline datasets from the Dnurse APP, demonstrate that our proposed method can make robust decisions in out-of-distribution and noisy environments, outperforming contemporary algorithms. Additionally, we separately verify the contributions of Hip-BCPDs and the skill-reuse strategy to the robustness of performance. We further analyse the visualised structure of Hip-BCPD and the interpretability of sub-skills. We released our source code and the first ever real-world medical dataset for precise medical decision-making tasks. | [
"['Xuehui Yu' 'Yi Guan' 'Rujia Shen' 'Xin Li' 'Chen Tang' 'Jingchi Jiang']"
] |
null | null | 2406.01066 | null | null | http://arxiv.org/pdf/2406.01066v1 | 2024-06-03T07:32:05Z | 2024-06-03T07:32:05Z | Topology-Aware Dynamic Reweighting for Distribution Shifts on Graph | Graph Neural Networks (GNNs) are widely used for node classification tasks but often fail to generalize when training and test nodes come from different distributions, limiting their practicality. To overcome this, recent approaches adopt invariant learning techniques from the out-of-distribution (OOD) generalization field, which seek to establish stable prediction methods across environments. However, the applicability of these invariant assumptions to graph data remains unverified, and such methods often lack solid theoretical support. In this work, we introduce the Topology-Aware Dynamic Reweighting (TAR) framework, which dynamically adjusts sample weights through gradient flow in the geometric Wasserstein space during training. Instead of relying on strict invariance assumptions, we prove that our method is able to provide distributional robustness, thereby enhancing the out-of-distribution generalization performance on graph data. By leveraging the inherent graph structure, TAR effectively addresses distribution shifts. Our framework's superiority is demonstrated through standard testing on four graph OOD datasets and three class-imbalanced node classification datasets, exhibiting marked improvements over existing methods. | [
"['Weihuang Zheng' 'Jiashuo Liu' 'Jiaxing Li' 'Jiayun Wu' 'Peng Cui'\n 'Youyong Kong']"
] |
null | null | 2406.01071 | null | null | http://arxiv.org/pdf/2406.01071v1 | 2024-06-03T07:44:08Z | 2024-06-03T07:44:08Z | Visual Car Brand Classification by Implementing a Synthetic Image
Dataset Creation Pipeline | Recent advancements in machine learning, particularly in deep learning and object detection, have significantly improved performance in various tasks, including image classification and synthesis. However, challenges persist, particularly in acquiring labeled data that accurately represents specific use cases. In this work, we propose an automatic pipeline for generating synthetic image datasets using Stable Diffusion, an image synthesis model capable of producing highly realistic images. We leverage YOLOv8 for automatic bounding box detection and quality assessment of synthesized images. Our contributions include demonstrating the feasibility of training image classifiers solely on synthetic data, automating the image generation pipeline, and describing the computational requirements for our approach. We evaluate the usability of different modes of Stable Diffusion and achieve a classification accuracy of 75%. | [
"['Jan Lippemeier' 'Stefanie Hittmeyer' 'Oliver Niehörster'\n 'Markus Lange-Hegermann']"
] |
null | null | 2406.01076 | null | null | http://arxiv.org/pdf/2406.01076v1 | 2024-06-03T07:53:38Z | 2024-06-03T07:53:38Z | Estimating Canopy Height at Scale | We propose a framework for global-scale canopy height estimation based on satellite data. Our model leverages advanced data preprocessing techniques, resorts to a novel loss function designed to counter geolocation inaccuracies inherent in the ground-truth height measurements, and employs data from the Shuttle Radar Topography Mission to effectively filter out erroneous labels in mountainous regions, enhancing the reliability of our predictions in those areas. A comparison between predictions and ground-truth labels yields an MAE / RMSE of 2.43 / 4.73 (meters) overall and 4.45 / 6.72 (meters) for trees taller than five meters, which depicts a substantial improvement compared to existing global-scale maps. The resulting height map as well as the underlying framework will facilitate and enhance ecological analyses at a global scale, including, but not limited to, large-scale forest and biomass monitoring. | [
"['Jan Pauls' 'Max Zimmer' 'Una M. Kelly' 'Martin Schwartz'\n 'Sassan Saatchi' 'Philippe Ciais' 'Sebastian Pokutta' 'Martin Brandt'\n 'Fabian Gieseke']"
] |
null | null | 2406.01080 | null | null | http://arxiv.org/pdf/2406.01080v1 | 2024-06-03T07:59:10Z | 2024-06-03T07:59:10Z | No Vandalism: Privacy-Preserving and Byzantine-Robust Federated Learning | Federated learning allows several clients to train one machine learning model jointly without sharing private data, providing privacy protection. However, traditional federated learning is vulnerable to poisoning attacks, which can not only decrease the model performance, but also implant malicious backdoors. In addition, direct submission of local model parameters can also lead to the privacy leakage of the training dataset. In this paper, we aim to build a privacy-preserving and Byzantine-robust federated learning scheme to provide an environment with no vandalism (NoV) against attacks from malicious participants. Specifically, we construct a model filter for poisoned local models, protecting the global model from data and model poisoning attacks. This model filter combines zero-knowledge proofs to provide further privacy protection. Then, we adopt secret sharing to provide verifiable secure aggregation, removing malicious clients that disrupt the aggregation process. Our formal analysis proves that NoV can protect data privacy and weed out Byzantine attackers. Our experiments illustrate that NoV can effectively address data and model poisoning attacks, including PGD, and outperforms other related schemes. | [
"['Zhibo Xing' 'Zijian Zhang' \"Zi'ang Zhang\" 'Jiamou Liu' 'Liehuang Zhu'\n 'Giovanni Russello']"
] |
null | null | 2406.01086 | null | null | http://arxiv.org/pdf/2406.01086v1 | 2024-06-03T08:12:32Z | 2024-06-03T08:12:32Z | Effective Subset Selection Through The Lens of Neural Network Pruning | Having large amounts of annotated data significantly impacts the effectiveness of deep neural networks. However, the annotation task can be very expensive in some domains, such as medical data. Thus, it is important to select the data to be annotated wisely, which is known as the subset selection problem. We investigate the relationship between subset selection and neural network pruning, which is more widely studied, and establish a correspondence between them. Leveraging insights from network pruning, we propose utilizing the norm criterion of neural network features to improve subset selection methods. We empirically validate our proposed strategy on various networks and datasets, demonstrating enhanced accuracy. This shows the potential of employing pruning tools for subset selection. | [
"['Noga Bar' 'Raja Giryes']"
] |
null | null | 2406.01096 | null | null | http://arxiv.org/abs/2406.01096v1 | 2024-06-03T08:31:35Z | 2024-06-03T08:31:35Z | Synergizing Unsupervised and Supervised Learning: A Hybrid Approach for
Accurate Natural Language Task Modeling | While supervised learning models have shown remarkable performance in various natural language processing (NLP) tasks, their success heavily relies on the availability of large-scale labeled datasets, which can be costly and time-consuming to obtain. Conversely, unsupervised learning techniques can leverage abundant unlabeled text data to learn rich representations, but they do not directly optimize for specific NLP tasks. This paper presents a novel hybrid approach that synergizes unsupervised and supervised learning to improve the accuracy of NLP task modeling. While supervised models excel at specific tasks, they rely on large labeled datasets. Unsupervised techniques can learn rich representations from abundant unlabeled text but don't directly optimize for tasks. Our methodology integrates an unsupervised module that learns representations from unlabeled corpora (e.g., language models, word embeddings) and a supervised module that leverages these representations to enhance task-specific models. We evaluate our approach on text classification and named entity recognition (NER), demonstrating consistent performance gains over supervised baselines. For text classification, contextual word embeddings from a language model pretrain a recurrent or transformer-based classifier. For NER, word embeddings initialize a BiLSTM sequence labeler. By synergizing techniques, our hybrid approach achieves SOTA results on benchmark datasets, paving the way for more data-efficient and robust NLP systems. | [
"['Wrick Talukdar' 'Anjanava Biswas']"
] |
null | null | 2406.01098 | null | null | http://arxiv.org/pdf/2406.01098v1 | 2024-06-03T08:33:42Z | 2024-06-03T08:33:42Z | Learning Decision Trees and Forests with Algorithmic Recourse | This paper proposes a new algorithm for learning accurate tree-based models while ensuring the existence of recourse actions. Algorithmic Recourse (AR) aims to provide a recourse action for altering the undesired prediction result given by a model. Typical AR methods provide a reasonable action by solving an optimization task of minimizing the required effort among executable actions. In practice, however, such actions do not always exist for models optimized only for predictive performance. To alleviate this issue, we formulate the task of learning an accurate classification tree under the constraint of ensuring the existence of reasonable actions for as many instances as possible. Then, we propose an efficient top-down greedy algorithm by leveraging the adversarial training techniques. We also show that our proposed algorithm can be applied to the random forest, which is known as a popular framework for learning tree ensembles. Experimental results demonstrated that our method successfully provided reasonable actions to more instances than the baselines without significantly degrading accuracy and computational efficiency. | [
"['Kentaro Kanamori' 'Takuya Takagi' 'Ken Kobayashi' 'Yuichi Ike']"
] |
null | null | 2406.01099 | null | null | http://arxiv.org/pdf/2406.01099v2 | 2024-06-12T06:51:00Z | 2024-06-03T08:34:32Z | Deep reinforcement learning for weakly coupled MDP's with continuous
actions | This paper introduces the Lagrange Policy for Continuous Actions (LPCA), a reinforcement learning algorithm specifically designed for weakly coupled MDP problems with continuous action spaces. LPCA addresses the challenge of resource constraints dependent on continuous actions by introducing a Lagrange relaxation of the weakly coupled MDP problem within a neural network framework for Q-value computation. This approach effectively decouples the MDP, enabling efficient policy learning in resource-constrained environments. We present two variations of LPCA: LPCA-DE, which utilizes differential evolution for global optimization, and LPCA-Greedy, a method that incrementally and greedily selects actions based on Q-value gradients. Comparative analysis against other state-of-the-art techniques across various settings highlights LPCA's robustness and efficiency in managing resource allocation while maximizing rewards. | [
"['Francisco Robledo' 'Urtzi Ayesta' 'Konstantin Avrachenkov']"
] |
null | null | 2406.01103 | null | null | http://arxiv.org/pdf/2406.01103v1 | 2024-06-03T08:39:15Z | 2024-06-03T08:39:15Z | Advancing DRL Agents in Commercial Fighting Games: Training,
Integration, and Agent-Human Alignment | Deep Reinforcement Learning (DRL) agents have demonstrated impressive success in a wide range of game genres. However, existing research primarily focuses on optimizing DRL competence rather than addressing the challenge of prolonged player interaction. In this paper, we propose a practical DRL agent system for fighting games named Shūkai, which has been successfully deployed to Naruto Mobile, a popular fighting game with over 100 million registered users. Shūkai quantifies the state to enhance generalizability, introducing Heterogeneous League Training (HELT) to achieve balanced competence, generalizability, and training efficiency. Furthermore, Shūkai implements specific rewards to align the agent's behavior with human expectations. Shūkai's ability to generalize is demonstrated by its consistent competence across all characters, even though it was trained on only 13% of them. Additionally, HELT exhibits a remarkable 22% improvement in sample efficiency. Shūkai serves as a valuable training partner for players in Naruto Mobile, enabling them to enhance their abilities and skills. | [
"['Chen Zhang' 'Qiang He' 'Zhou Yuan' 'Elvis S. Liu' 'Hong Wang'\n 'Jian Zhao' 'Yang Wang']"
] |
null | null | 2406.01114 | null | null | http://arxiv.org/pdf/2406.01114v1 | 2024-06-03T08:46:17Z | 2024-06-03T08:46:17Z | Globally Interpretable Classifiers via Boolean Formulas with Dynamic
Propositions | Interpretability and explainability are among the most important challenges of modern artificial intelligence, being mentioned even in various legislative sources. In this article, we develop a method for extracting immediately human interpretable classifiers from tabular data. The classifiers are given in the form of short Boolean formulas built with propositions that can either be directly extracted from categorical attributes or dynamically computed from numeric ones. Our method is implemented using Answer Set Programming. We investigate seven datasets and compare our results to ones obtainable by state-of-the-art classifiers for tabular data, namely, XGBoost and random forests. Over all datasets, the accuracies obtainable by our method are similar to the reference methods. The advantage of our classifiers in all cases is that they are very short and immediately human intelligible as opposed to the black-box nature of the reference methods. | [
"['Reijo Jaakkola' 'Tomi Janhunen' 'Antti Kuusisto'\n 'Masood Feyzbakhsh Rankooh' 'Miikka Vilander']"
] |
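The classifiers described in the paper above are short Boolean formulas whose propositions are either read directly off categorical attributes or computed dynamically from numeric ones. The toy sketch below only illustrates that classifier form; the attribute names, the threshold, and the formula are hypothetical, and the Answer Set Programming pipeline used to learn such formulas is not reproduced.

```python
# Minimal illustration (not the paper's ASP pipeline): a classifier given as a
# short Boolean formula whose propositions are either categorical tests or
# thresholds computed from numeric attributes. All attribute names and the
# threshold below are hypothetical.

def proposition_age(row):          # dynamic proposition from a numeric attribute
    return row["age"] >= 45.0      # threshold would be learned from data

def proposition_smoker(row):       # proposition from a categorical attribute
    return row["smoker"] == "yes"

def classify(row):
    # Short, human-readable Boolean formula: (age >= 45 AND smoker) OR bp == "high"
    return (proposition_age(row) and proposition_smoker(row)) or row["bp"] == "high"

rows = [
    {"age": 52, "smoker": "yes", "bp": "normal"},
    {"age": 30, "smoker": "no",  "bp": "high"},
    {"age": 40, "smoker": "yes", "bp": "normal"},
]
print([classify(r) for r in rows])   # [True, True, False]
```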
null | null | 2406.01115 | null | null | http://arxiv.org/pdf/2406.01115v1 | 2024-06-03T08:48:49Z | 2024-06-03T08:48:49Z | Cohort Squeeze: Beyond a Single Communication Round per Cohort in
Cross-Device Federated Learning | Virtually all federated learning (FL) methods, including FedAvg, operate in the following manner: i) an orchestrating server sends the current model parameters to a cohort of clients selected via a certain rule, ii) these clients then independently perform a local training procedure (e.g., via SGD or Adam) using their own training data, and iii) the resulting models are shipped to the server for aggregation. This process is repeated until a model of suitable quality is found. A notable feature of these methods is that each cohort is involved in a single communication round with the server only. In this work we challenge this algorithmic design primitive and investigate whether it is possible to ``squeeze more juice'' out of each cohort than what is possible in a single communication round. Surprisingly, we find that this is indeed the case, and our approach leads to up to 74% reduction in the total communication cost needed to train an FL model in the cross-device setting. Our method is based on a novel variant of the stochastic proximal point method (SPPM-AS), which supports a large collection of client sampling procedures, some of which lead to further gains when compared to classical client selection approaches. | [
"['Kai Yi' 'Timur Kharisov' 'Igor Sokolov' 'Peter Richtárik']"
] |
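The method in the paper above replaces FedAvg's single communication round per cohort with several rounds that approximately solve a proximal subproblem on the sampled cohort, i.e., a stochastic proximal point step. Below is a minimal numpy sketch of that idea, assuming synthetic quadratic client losses and plain gradient steps as the inner solver; it is not the paper's SPPM-AS implementation.

```python
import numpy as np

# Minimal sketch of a stochastic proximal point step (the SPPM family), not the
# paper's SPPM-AS implementation. A sampled cohort of clients jointly solves
#   min_x  avg_i f_i(x) + (1/(2*gamma)) * ||x - x_t||^2
# with several inner rounds instead of a single one.

rng = np.random.default_rng(0)
d, n_clients = 5, 20
A = [rng.normal(size=(30, d)) for _ in range(n_clients)]               # client data
b = [Ai @ rng.normal(size=d) + 0.1 * rng.normal(size=30) for Ai in A]  # client targets

def prox_step(x_t, cohort, gamma=1.0, inner_rounds=10, lr=1e-2):
    """Approximately solve the cohort's proximal subproblem with gradient steps."""
    x = x_t.copy()
    for _ in range(inner_rounds):                                      # "squeezing" the cohort
        grad = np.mean([A[i].T @ (A[i] @ x - b[i]) / len(b[i]) for i in cohort], axis=0)
        grad += (x - x_t) / gamma                                      # proximal term
        x -= lr * grad
    return x

x = np.zeros(d)
for t in range(50):
    cohort = rng.choice(n_clients, size=4, replace=False)              # client sampling
    x = prox_step(x, cohort)
print("final parameter estimate:", x.round(3))
```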
null | null | 2406.01116 | null | null | http://arxiv.org/pdf/2406.01116v1 | 2024-06-03T08:52:06Z | 2024-06-03T08:52:06Z | Accelerating Heterogeneous Federated Learning with Closed-form
Classifiers | Federated Learning (FL) methods often struggle in highly statistically heterogeneous settings. Indeed, non-IID data distributions cause client drift and biased local solutions, particularly pronounced in the final classification layer, negatively impacting convergence speed and accuracy. To address this issue, we introduce Federated Recursive Ridge Regression (Fed3R). Our method fits a Ridge Regression classifier computed in closed form leveraging pre-trained features. Fed3R is immune to statistical heterogeneity and is invariant to the sampling order of the clients. Therefore, it proves particularly effective in cross-device scenarios. Furthermore, it is fast and efficient in terms of communication and computation costs, requiring up to two orders of magnitude fewer resources than the competitors. Finally, we propose to leverage the Fed3R parameters as an initialization for a softmax classifier and subsequently fine-tune the model using any FL algorithm (Fed3R with Fine-Tuning, Fed3R+FT). Our findings also indicate that maintaining a fixed classifier aids in stabilizing the training and learning more discriminative features in cross-device settings. Official website: https://fed-3r.github.io/. | [
"['Eros Fanì' 'Raffaello Camoriano' 'Barbara Caputo' 'Marco Ciccone']"
] |
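Fed3R's key property in the paper above, invariance to heterogeneity and to client sampling order, comes from aggregating the sufficient statistics of a ridge regression fitted on pre-trained features and solving the system once in closed form. The numpy sketch below illustrates that aggregation with random stand-in features; it is not the official implementation.

```python
import numpy as np

# Minimal sketch of the closed-form federated ridge-regression idea behind Fed3R
# (not the official implementation): every client contributes the sufficient
# statistics G_k = Phi_k^T Phi_k and c_k = Phi_k^T Y_k computed on pre-trained
# features; the server aggregates them and solves the ridge system once, so the
# result does not depend on the order in which clients are sampled.

rng = np.random.default_rng(0)
d, n_classes, n_clients = 64, 10, 5

G = np.zeros((d, d))
c = np.zeros((d, n_classes))
for _ in range(n_clients):
    feats = rng.normal(size=(100, d))                      # pre-trained features (stand-in)
    labels = rng.integers(0, n_classes, size=100)
    Y = np.eye(n_classes)[labels]                          # one-hot targets
    G += feats.T @ feats                                   # client statistic G_k
    c += feats.T @ Y                                       # client statistic c_k

lam = 1.0
W = np.linalg.solve(G + lam * np.eye(d), c)                # closed-form ridge classifier
print("classifier weights shape:", W.shape)                # (64, 10)
```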
null | null | 2406.01124 | null | null | http://arxiv.org/pdf/2406.01124v3 | 2024-06-28T07:54:19Z | 2024-06-03T09:10:42Z | Latent Logic Tree Extraction for Event Sequence Explanation from LLMs | Modern high-stakes systems, such as healthcare or robotics, often generate vast streaming event sequences. Our goal is to design an efficient, plug-and-play tool to elicit logic tree-based explanations from Large Language Models (LLMs) to provide customized insights into each observed event sequence. Built on the temporal point process model for events, our method employs the likelihood function as a score to evaluate generated logic trees. We propose an amortized Expectation-Maximization (EM) learning framework and treat the logic trees as latent variables. In the E-step, we evaluate the posterior distribution over the latent logic trees using an LLM prior and the likelihood of the observed event sequences. The LLM provides a high-quality prior for the latent logic trees; however, since the posterior is built over a discrete combinatorial space, we cannot obtain a closed-form solution. We propose to generate logic tree samples from the posterior using a learnable GFlowNet, which is a diversity-seeking generator for structured discrete variables. The M-step employs the generated logic rules to approximate marginalization over the posterior, facilitating the learning of model parameters and refining the tunable LLM prior parameters. In the online setting, our locally built, lightweight model will iteratively extract the most relevant rules from LLMs for each sequence using only a few iterations. Empirical demonstrations showcase the promising performance and adaptability of our framework. | [
"['Zitao Song' 'Chao Yang' 'Chaojie Wang' 'Bo An' 'Shuang Li']"
] |
null | null | 2406.01130 | null | null | http://arxiv.org/pdf/2406.01130v1 | 2024-06-03T09:17:35Z | 2024-06-03T09:17:35Z | SAVA: Scalable Learning-Agnostic Data Valuation | Selecting suitable data for training machine learning models is crucial since large, web-scraped, real datasets contain noisy artifacts that affect the quality and relevance of individual data points. These artifacts will impact the performance and generalization of the model. We formulate this problem as a data valuation task, assigning a value to data points in the training set according to how similar or dissimilar they are to a clean and curated validation set. Recently, LAVA (Just et al. 2023) successfully demonstrated the use of optimal transport (OT) between a large noisy training dataset and a clean validation set, to value training data efficiently, without the dependency on model performance. However, the LAVA algorithm requires the whole dataset as an input, which limits its application to large datasets. Inspired by the scalability of stochastic (gradient) approaches which carry out computations on batches of data points instead of the entire dataset, we analogously propose SAVA, a scalable variant of LAVA with its computation on batches of data points. Intuitively, SAVA follows the same scheme as LAVA, which leverages the hierarchically defined OT for data valuation. However, while LAVA processes the whole dataset, SAVA divides the dataset into batches of data points, and carries out the OT problem computation on those batches. We perform extensive experiments to demonstrate that SAVA can scale to large datasets with millions of data points and doesn't trade off data valuation performance. | [
"['Samuel Kessler' 'Tam Le' 'Vu Nguyen']"
] |
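The core computational move described in the paper above is to run the optimal-transport valuation batch by batch instead of on the whole training set at once. The sketch below illustrates only that batching pattern, using a plain entropic (Sinkhorn) OT solver on synthetic features; the paper's hierarchical OT formulation and its calibrated-gradient data values are not reproduced.

```python
import numpy as np

# Minimal sketch of batch-wise optimal transport between training batches and a
# clean validation set, in the spirit of SAVA's batched computation. This is a
# plain Sinkhorn solver on Euclidean costs, not the paper's hierarchical OT or
# its per-point valuation rule.

def sinkhorn_cost(X, V, reg=0.1, n_iter=200):
    """Entropic-OT cost between a training batch X and the validation set V."""
    C = ((X[:, None, :] - V[None, :, :]) ** 2).sum(-1)     # pairwise squared distances
    C = C / C.max()                                        # normalize costs for stability
    K = np.exp(-C / reg)
    a = np.full(len(X), 1.0 / len(X))
    b = np.full(len(V), 1.0 / len(V))
    u = np.ones_like(a)
    for _ in range(n_iter):                                # Sinkhorn iterations
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]                        # transport plan
    return float((P * C).sum())

rng = np.random.default_rng(0)
validation = rng.normal(size=(50, 8))                      # clean validation features (stand-in)
train = rng.normal(size=(1000, 8))                         # large, possibly noisy training set

# Process the large training set batch by batch instead of all at once.
for start in range(0, len(train), 250):
    batch = train[start:start + 250]
    print(f"batch {start // 250}: entropic OT cost = {sinkhorn_cost(batch, validation):.4f}")
```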
null | null | 2406.01136 | null | null | http://arxiv.org/pdf/2406.01136v2 | 2024-06-04T09:02:14Z | 2024-06-03T09:27:57Z | Towards Practical Single-shot Motion Synthesis | Despite the recent advances in the so-called "cold start" generation from text prompts, such methods' demands for data and computing resources, as well as the ambiguities around intellectual property and privacy, raise concerns about their utility. An interesting and relatively unexplored alternative has been the introduction of unconditional synthesis from a single sample, which has led to interesting generative applications. In this paper we focus on single-shot motion generation and more specifically on accelerating the training time of a Generative Adversarial Network (GAN). In particular, we tackle the challenge of GAN's equilibrium collapse when using mini-batch training by carefully annealing the weights of the loss functions that prevent mode collapse. Additionally, we perform statistical analysis on the generator and discriminator models to identify correlations between training stages and enable transfer learning. Our improved GAN achieves competitive quality and diversity on the Mixamo benchmark when compared to the original GAN architecture and a single-shot diffusion model, while being up to x6.8 faster to train than the former and x1.75 faster than the latter. Finally, we demonstrate the ability of our improved GAN to mix and compose motion with a single forward pass. Project page available at https://moverseai.github.io/single-shot. | [
"['Konstantinos Roditakis' 'Spyridon Thermos' 'Nikolaos Zioulis']"
] |
null | null | 2406.01149 | null | null | http://arxiv.org/pdf/2406.01149v1 | 2024-06-03T09:43:24Z | 2024-06-03T09:43:24Z | Agnostic Learning of Mixed Linear Regressions with EM and AM Algorithms | Mixed linear regression is a well-studied problem in parametric statistics and machine learning. Given a set of samples, tuples of covariates and labels, the task of mixed linear regression is to find a small list of linear relationships that best fit the samples. Usually it is assumed that the label is generated stochastically by randomly selecting one of two or more linear functions, applying this chosen function to the covariates, and potentially introducing noise to the result. In that situation, the objective is to estimate the ground-truth linear functions up to some parameter error. The popular expectation maximization (EM) and alternating minimization (AM) algorithms have been previously analyzed for this. In this paper, we consider the more general problem of agnostic learning of mixed linear regression from samples, without such generative models. In particular, we show that the AM and EM algorithms, under standard conditions of separability and good initialization, lead to agnostic learning in mixed linear regression by converging to the population loss minimizers, for suitably defined loss functions. In some sense, this shows the strength of the AM and EM algorithms, which converge to ``optimal solutions'' even in the absence of realizable generative models. | [
"['Avishek Ghosh' 'Arya Mazumdar']"
] |
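For the alternating minimization (AM) algorithm analyzed in the paper above, one iteration assigns each sample to the better-fitting linear component and then refits each component by least squares. Below is a minimal two-component numpy sketch; the synthetic data and the initialization scheme are assumptions made for illustration, not the paper's setup.

```python
import numpy as np

# Minimal sketch of alternating minimization (AM) for a two-component mixed
# linear regression: alternately assign each sample to the component with the
# smaller residual, then refit each component by least squares.

rng = np.random.default_rng(0)
d, n = 5, 400
true_w = [rng.normal(size=d), rng.normal(size=d)]
X = rng.normal(size=(n, d))
z = rng.integers(0, 2, size=n)                               # latent component labels
y = np.einsum("ij,ij->i", X, np.array(true_w)[z]) + 0.05 * rng.normal(size=n)

# Initialization close to the truth (AM is analyzed under good initialization).
w = [tw + 0.3 * rng.normal(size=d) for tw in true_w]

for _ in range(20):                                          # AM iterations
    resid = np.stack([np.abs(X @ wk - y) for wk in w])       # residual per component
    assign = resid.argmin(axis=0)                            # hard assignment step
    for k in range(2):                                       # refit step
        mask = assign == k
        if mask.sum() >= d:
            w[k], *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)

print("parameter error:", [float(np.linalg.norm(w[k] - true_w[k])) for k in range(2)])
```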
null | null | 2406.01150 | null | null | http://arxiv.org/pdf/2406.01150v1 | 2024-06-03T09:44:10Z | 2024-06-03T09:44:10Z | Looking Backward: Retrospective Backward Synthesis for Goal-Conditioned
GFlowNets | Generative Flow Networks (GFlowNets) are amortized sampling methods for learning a stochastic policy to sequentially generate compositional objects with probabilities proportional to their rewards. GFlowNets exhibit a remarkable ability to generate diverse sets of high-reward objects, in contrast to standard return maximization reinforcement learning approaches, which often converge to a single optimal solution. Recent works have turned to learning goal-conditioned GFlowNets that acquire various useful properties, aiming to train a single GFlowNet capable of achieving different goals as the task specifies. However, training a goal-conditioned GFlowNet poses critical challenges due to extremely sparse rewards, a problem that is further exacerbated in large state spaces. In this work, we propose a novel method named Retrospective Backward Synthesis (RBS) to address these challenges. Specifically, RBS synthesizes a new backward trajectory based on the backward policy in GFlowNets to enrich training trajectories with enhanced quality and diversity, thereby efficiently solving the sparse reward problem. Extensive empirical results show that our method improves sample efficiency by a large margin and outperforms strong baselines on various standard evaluation benchmarks. | [
"['Haoran He' 'Can Chang' 'Huazhe Xu' 'Ling Pan']"
] |
null | null | 2406.01157 | null | null | http://arxiv.org/pdf/2406.01157v1 | 2024-06-03T09:51:25Z | 2024-06-03T09:51:25Z | Quantum consistent neural/tensor networks for photonic circuits with
strongly/weakly entangled states | Modern quantum optical systems such as photonic quantum computers and quantum imaging devices require great precision in their designs and implementations in the hope of realistically exploiting entanglement and reaching a real quantum advantage. The theoretical and experimental explorations and validations of these systems are greatly dependent on the precision of our classical simulations. However, as Hilbert spaces grow, traditional computational methods used to design and optimize these systems encounter hard limitations due to the quantum curse of dimensionality. To address this challenge, we propose an approach based on neural and tensor networks to approximate the exact unitary evolution of closed entangled systems in a precise, efficient and quantum consistent manner. By training the networks with a reasonably small number of examples of quantum dynamics, we enable efficient parameter estimation in larger Hilbert spaces, offering an interesting solution for many quantum metrology problems. | [
"['Nicolas Allegra']"
] |
null | null | 2406.01162 | null | null | http://arxiv.org/pdf/2406.01162v1 | 2024-06-03T09:55:56Z | 2024-06-03T09:55:56Z | Conditional Gumbel-Softmax for constrained feature selection with
application to node selection in wireless sensor networks | In this paper, we introduce Conditional Gumbel-Softmax as a method to perform end-to-end learning of the optimal feature subset for a given task and deep neural network (DNN) model, while adhering to certain pairwise constraints between the features. We do this by conditioning the selection of each feature in the subset on another feature. We demonstrate how this approach can be used to select the task-optimal nodes composing a wireless sensor network (WSN) while ensuring that none of the nodes that require communication between one another have too large of a distance between them, limiting the required power spent on this communication. We validate this approach on an emulated Wireless Electroencephalography (EEG) Sensor Network (WESN) solving a motor execution task. We analyze how the performance of the WESN varies as the constraints are made more stringent and how well the Conditional Gumbel-Softmax performs in comparison with a heuristic, greedy selection method. While the application focus of this paper is on wearable brain-computer interfaces, the proposed methodology is generic and can readily be applied to node deployment in wireless sensor networks and constrained feature selection in other applications as well. | [
"['Thomas Strypsteen' 'Alexander Bertrand']"
] |
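Underlying the method in the paper above is the Gumbel-Softmax relaxation, which lets a discrete feature or node selection be trained end to end with gradients. The numpy sketch below shows a standard (unconditional) Gumbel-Softmax sample at two temperatures; the paper's conditional variant, which conditions each selection on another feature to enforce pairwise distance constraints, is not reproduced here.

```python
import numpy as np

# Minimal numpy sketch of the standard Gumbel-Softmax relaxation that underlies
# end-to-end discrete feature/node selection. The *conditional* variant proposed
# in the paper (selection of each feature conditioned on another feature) is not
# reproduced; the logits below are a stand-in for learnable selection scores.

def gumbel_softmax(logits, temperature=0.5, rng=None):
    rng = rng or np.random.default_rng()
    gumbel = -np.log(-np.log(rng.uniform(size=logits.shape) + 1e-20) + 1e-20)
    y = (logits + gumbel) / temperature
    y = np.exp(y - y.max())                 # numerically stable softmax
    return y / y.sum()

rng = np.random.default_rng(0)
logits = np.array([2.0, 0.5, 0.1, -1.0])    # hypothetical selection scores
for temp in (1.0, 0.1):
    sample = gumbel_softmax(logits, temperature=temp, rng=rng)
    print(f"temperature {temp}: {sample.round(3)}")   # lower temperature -> closer to one-hot
```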
null | null | 2406.01163 | null | null | http://arxiv.org/pdf/2406.01163v2 | 2024-06-04T09:06:20Z | 2024-06-03T09:57:18Z | When to Sense and Control? A Time-adaptive Approach for Continuous-Time
RL | Reinforcement learning (RL) excels in optimizing policies for discrete-time Markov decision processes (MDP). However, various systems are inherently continuous in time, making discrete-time MDPs an inexact modeling choice. In many applications, such as greenhouse control or medical treatments, each interaction (measurement or switching of action) involves manual intervention and thus is inherently costly. Therefore, we generally prefer a time-adaptive approach with fewer interactions with the system. In this work, we formalize an RL framework, Time-adaptive Control & Sensing (TaCoS), that tackles this challenge by optimizing over policies that, in addition to the control, predict the duration of its application. Our formulation results in an extended MDP that any standard RL algorithm can solve. We demonstrate that state-of-the-art RL algorithms trained on TaCoS drastically reduce the number of interactions compared to their discrete-time counterparts while retaining the same or improved performance, and exhibit robustness to the discretization frequency. Finally, we propose OTaCoS, an efficient model-based algorithm for our setting. We show that OTaCoS enjoys sublinear regret for systems with sufficiently smooth dynamics and empirically results in further sample-efficiency gains. | [
"['Lenart Treven' 'Bhavya Sukhija' 'Yarden As' 'Florian Dörfler'\n 'Andreas Krause']"
] |
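The extended MDP described in the paper above augments each action with the duration for which it is applied, so any standard RL algorithm can be run on the wrapped problem. The plain-Python sketch below illustrates that wrapping on a toy one-dimensional system; the class, the environment interface, and the interaction cost are illustrative assumptions, not the paper's code.

```python
# Minimal sketch of the "extended MDP" idea: augment each action with a hold
# duration and let the wrapped environment apply the control for that many base
# steps. All names and the environment interface below are illustrative.

class TimeAdaptiveWrapper:
    def __init__(self, env, max_hold=10, interaction_cost=0.01):
        self.env = env
        self.max_hold = max_hold
        self.interaction_cost = interaction_cost   # cost per measurement/switch

    def reset(self):
        return self.env.reset()

    def step(self, extended_action):
        control, hold = extended_action
        hold = max(1, min(int(hold), self.max_hold))
        total_reward, obs, done = 0.0, None, False
        for _ in range(hold):                      # apply the same control repeatedly
            obs, reward, done = self.env.step(control)
            total_reward += reward
            if done:
                break
        return obs, total_reward - self.interaction_cost, done

class Integrator:
    """Toy 1-D system x' = u, used only to make the sketch runnable."""
    def __init__(self):
        self.x = 1.0
    def reset(self):
        self.x = 1.0
        return self.x
    def step(self, u):
        self.x += 0.1 * u                          # Euler step with dt = 0.1
        reward = -abs(self.x)                      # drive the state to zero
        return self.x, reward, abs(self.x) < 1e-2

env = TimeAdaptiveWrapper(Integrator())
env.reset()
obs, r, done = env.step((-1.0, 5))                 # hold u = -1 for 5 base steps
print(round(obs, 3), round(r, 3), done)
```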
null | null | 2406.01175 | null | null | http://arxiv.org/pdf/2406.01175v2 | 2024-06-04T09:29:27Z | 2024-06-03T10:14:32Z | NeoRL: Efficient Exploration for Nonepisodic RL | We study the problem of nonepisodic reinforcement learning (RL) for nonlinear dynamical systems, where the system dynamics are unknown and the RL agent has to learn from a single trajectory, i.e., without resets. We propose Nonepisodic Optimistic RL (NeoRL), an approach based on the principle of optimism in the face of uncertainty. NeoRL uses well-calibrated probabilistic models and plans optimistically w.r.t. the epistemic uncertainty about the unknown dynamics. Under continuity and bounded energy assumptions on the system, we provide a first-of-its-kind regret bound of $\mathcal{O}(\beta_T \sqrt{T \Gamma_T})$ for general nonlinear systems with Gaussian process dynamics. We compare NeoRL to other baselines on several deep RL environments and empirically demonstrate that NeoRL achieves the optimal average cost while incurring the least regret. | [
"['Bhavya Sukhija' 'Lenart Treven' 'Florian Dörfler' 'Stelian Coros'\n 'Andreas Krause']"
] |
null | null | 2406.01178 | null | null | http://arxiv.org/pdf/2406.01178v1 | 2024-06-03T10:21:00Z | 2024-06-03T10:21:00Z | Deep Reinforcement Learning Behavioral Mode Switching Using Optimal
Control Based on a Latent Space Objective | In this work, we use optimal control to change the behavior of a deep reinforcement learning policy by optimizing directly in the policy's latent space. We hypothesize that distinct behavioral patterns, termed behavioral modes, can be identified within certain regions of a deep reinforcement learning policy's latent space, meaning that specific actions or strategies are preferred within these regions. We identify these behavioral modes using latent space dimensionality reduction with PaCMAP. Using the actions generated by the optimal control procedure, we move the system from one behavioral mode to another. We subsequently utilize these actions as a filter for interpreting the neural network policy. The results show that this approach can impose desired behavioral modes in the policy, demonstrated by showing how a failed episode can be made successful and vice versa using the lunar lander reinforcement learning environment. | [
"['Sindre Benjamin Remman' 'Bjørn Andreas Kristiansen'\n 'Anastasios M. Lekkas']"
] |
null | null | 2406.01183 | null | null | http://arxiv.org/pdf/2406.01183v1 | 2024-06-03T10:39:12Z | 2024-06-03T10:39:12Z | Automatic Input Feature Relevance via Spectral Neural Networks | Working with high-dimensional data is a common practice in the field of machine learning. Identifying relevant input features is thus crucial, so as to obtain a compact dataset more amenable to effective numerical handling. Further, by isolating pivotal elements that form the basis of decision making, one can contribute, ex post, to models' interpretability, which has so far remained rather elusive. Here, we propose a novel method to estimate the relative importance of the input components for a Deep Neural Network. This is achieved by leveraging a spectral re-parametrization of the optimization process. Eigenvalues associated with input nodes in fact provide a robust proxy to gauge the relevance of the supplied entry features. Unlike existing techniques, the spectral feature ranking is carried out automatically, as a byproduct of the network training. The technique is successfully challenged against both synthetic and real data. | [
"['Lorenzo Chicchi' 'Lorenzo Buffoni' 'Diego Febbe' 'Lorenzo Giambagli'\n 'Raffaele Marino' 'Duccio Fanelli']"
] |
null | null | 2406.01187 | null | null | http://arxiv.org/pdf/2406.01187v1 | 2024-06-03T10:49:34Z | 2024-06-03T10:49:34Z | Patch-Based Encoder-Decoder Architecture for Automatic Transmitted Light
to Fluorescence Imaging Transition: Contribution to the LightMyCells
Challenge | Automatic prediction of fluorescently labeled organelles from label-free transmitted light input images is an important, yet difficult task. The traditional way to obtain fluorescence images involves biochemical labeling, which is time-consuming and costly. Therefore, an automatic algorithm to perform the task based on label-free transmitted light microscopy could be strongly beneficial. The importance of the task motivated researchers from France-BioImaging to organize the LightMyCells challenge where the goal is to propose an algorithm that automatically predicts the fluorescently labeled nucleus, mitochondria, tubulin, and actin, based on the input consisting of bright field, phase contrast, or differential interference contrast microscopic images. In this work, we present the contribution of the AGHSSO team based on a carefully prepared and trained encoder-decoder deep neural network that achieves a considerable score in the challenge, being placed among the best-performing teams. | [
"['Marek Wodzinski' 'Henning Müller']"
] |
null | null | 2406.01189 | null | null | http://arxiv.org/pdf/2406.01189v2 | 2024-06-04T07:58:32Z | 2024-06-03T10:51:43Z | MultiMax: Sparse and Multi-Modal Attention Learning | SoftMax is a ubiquitous ingredient of modern machine learning algorithms. It maps an input vector onto a probability simplex and reweights the input by concentrating the probability mass at large entries. Yet, as a smooth approximation to the Argmax function, a significant amount of probability mass is distributed to other, residual entries, leading to poor interpretability and noise. Although sparsity can be achieved by a family of SoftMax variants, they often require an alternative loss function and do not preserve multi-modality. We show that this trade-off between multi-modality and sparsity limits the expressivity of SoftMax as well as its variants. We provide a solution to this tension between objectives by proposing a piece-wise differentiable function, termed MultiMax, which adaptively modulates the output distribution according to input entry range. Through comprehensive analysis and evaluation, we show that MultiMax successfully produces a distribution that suppresses irrelevant entries while preserving multimodality, with benefits in image classification, language modeling and machine translation. The code is available at https://github.com/ZhouYuxuanYX/MultiMax. | [
"['Yuxuan Zhou' 'Mario Fritz' 'Margret Keuper']"
] |
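The trade-off motivating MultiMax in the paper above can be seen numerically: a temperature-scaled SoftMax either leaks probability mass to residual entries or, when sharpened, collapses a bimodal score vector toward a single mode. The snippet below only illustrates that tension; the learned piece-wise MultiMax function itself is not reproduced here.

```python
import numpy as np

# Illustration of the SoftMax trade-off that MultiMax targets: a bimodal score
# vector either leaks mass to irrelevant entries (high temperature) or collapses
# toward a single mode (low temperature). The MultiMax function itself is not
# implemented here.

def softmax(x, temperature=1.0):
    z = np.asarray(x, dtype=float) / temperature
    z = np.exp(z - z.max())                          # numerically stable softmax
    return z / z.sum()

scores = np.array([4.0, 3.5, 0.1, 0.0, -0.2])        # two relevant entries, three residual
for t in (1.0, 0.1):
    p = softmax(scores, temperature=t)
    print(f"T={t}: {p.round(3)}  mass on residual entries = {p[2:].sum():.3f}")
```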
null | null | 2406.01191 | null | null | http://arxiv.org/pdf/2406.01191v1 | 2024-06-03T10:53:45Z | 2024-06-03T10:53:45Z | S-CycleGAN: Semantic Segmentation Enhanced CT-Ultrasound Image-to-Image
Translation for Robotic Ultrasonography | Ultrasound imaging is pivotal in various medical diagnoses due to its non-invasive nature and safety. In clinical practice, the accuracy and precision of ultrasound image analysis are critical. Recent advancements in deep learning are showing great capacity for processing medical images. However, the data-hungry nature of deep learning and the shortage of high-quality ultrasound image training data hinder the development of deep learning-based ultrasound analysis methods. To address these challenges, we introduce an advanced deep learning model, dubbed S-CycleGAN, which generates high-quality synthetic ultrasound images from computed tomography (CT) data. This model incorporates semantic discriminators within a CycleGAN framework to ensure that critical anatomical details are preserved during the style transfer process. The synthetic images produced are used to augment training datasets for semantic segmentation models and robot-assisted ultrasound scanning system development, enhancing their ability to accurately parse real ultrasound imagery. | [
"['Yuhan Song' 'Nak Young Chong']"
] |
null | null | 2406.01192 | null | null | http://arxiv.org/pdf/2406.01192v1 | 2024-06-03T10:54:58Z | 2024-06-03T10:54:58Z | Sparsity-Agnostic Linear Bandits with Adaptive Adversaries | We study stochastic linear bandits where, in each round, the learner receives a set of actions (i.e., feature vectors), from which it chooses an element and obtains a stochastic reward. The expected reward is a fixed but unknown linear function of the chosen action. We study sparse regret bounds that depend on the number $S$ of non-zero coefficients in the linear reward function. Previous works focused on the case where $S$ is known, or the action sets satisfy additional assumptions. In this work, we obtain the first sparse regret bounds that hold when $S$ is unknown and the action sets are adversarially generated. Our techniques combine online-to-confidence-set conversions with a novel randomized model selection approach over a hierarchy of nested confidence sets. When $S$ is known, our analysis recovers state-of-the-art bounds for adversarial action sets. We also show that a variant of our approach, using Exp3 to dynamically select the confidence sets, can be used to improve the empirical performance of stochastic linear bandits while enjoying a regret bound with optimal dependence on the time horizon. | [
"['Tianyuan Jin' 'Kyoungseok Jang' 'Nicolò Cesa-Bianchi']"
] |
null | null | 2406.01203 | null | null | http://arxiv.org/pdf/2406.01203v1 | 2024-06-03T11:13:27Z | 2024-06-03T11:13:27Z | Scaling Up Deep Clustering Methods Beyond ImageNet-1K | Deep image clustering methods are typically evaluated on small-scale balanced classification datasets while feature-based $k$-means has been applied on proprietary billion-scale datasets. In this work, we explore the performance of feature-based deep clustering approaches on large-scale benchmarks whilst disentangling the impact of the following data-related factors: i) class imbalance, ii) class granularity, iii) easy-to-recognize classes, and iv) the ability to capture multiple classes. Consequently, we develop multiple new benchmarks based on ImageNet21K. Our experimental analysis reveals that feature-based $k$-means is often unfairly evaluated on balanced datasets. However, deep clustering methods outperform $k$-means across most large-scale benchmarks. Interestingly, $k$-means underperforms on easy-to-classify benchmarks by large margins. The performance gap, however, diminishes on the highest data regimes such as ImageNet21K. Finally, we find that non-primary cluster predictions capture meaningful classes (i.e. coarser classes). | [
"['Nikolas Adaloglou' 'Felix Michels' 'Kaspar Senft' 'Diana Petrusheva'\n 'Markus Kollmann']"
] |
null | null | 2406.01205 | null | null | http://arxiv.org/pdf/2406.01205v1 | 2024-06-03T11:15:16Z | 2024-06-03T11:15:16Z | ControlSpeech: Towards Simultaneous Zero-shot Speaker Cloning and
Zero-shot Language Style Control With Decoupled Codec | In this paper, we present ControlSpeech, a text-to-speech (TTS) system capable of fully cloning the speaker's voice and enabling arbitrary control and adjustment of speaking style, merely based on a few seconds of audio prompt and a simple textual style description prompt. Prior zero-shot TTS models and controllable TTS models either could only mimic the speaker's voice without further control and adjustment capabilities or were unrelated to speaker-specific voice generation. Therefore, ControlSpeech focuses on a more challenging new task: a TTS system with controllable timbre, content, and style at the same time. ControlSpeech takes speech prompts, content prompts, and style prompts as inputs and utilizes bidirectional attention and mask-based parallel decoding to capture corresponding codec representations in a discrete decoupling codec space. Moreover, we discovered that text style controllability is a many-to-many mapping problem and proposed the Style Mixture Semantic Density (SMSD) model to resolve it. The SMSD module, which is based on Gaussian mixture density networks, is designed to enhance the fine-grained partitioning and sampling capabilities of style semantic information and generate speech with more diverse styles. For experiments, we make available a controllable model toolkit called ControlToolkit, which includes a new style-controllable dataset and several replicated baseline models, and we propose new metrics to evaluate both the control capability and the quality of the audio generated by ControlSpeech. Ablation studies validate that each component of ControlSpeech is necessary. We hope that ControlSpeech can establish the next foundation paradigm of controllable speech synthesis. The relevant code and demo are available at https://github.com/jishengpeng/ControlSpeech . | [
"['Shengpeng Ji' 'Jialong Zuo' 'Minghui Fang' 'Siqi Zheng' 'Qian Chen'\n 'Wen Wang' 'Ziyue Jiang' 'Hai Huang' 'Xize Cheng' 'Rongjie Huang'\n 'Zhou Zhao']"
] |
null | null | 2406.01229 | null | null | http://arxiv.org/pdf/2406.01229v2 | 2024-06-07T15:50:36Z | 2024-06-03T11:50:47Z | AGALE: A Graph-Aware Continual Learning Evaluation Framework | In recent years, continual learning (CL) techniques have made significant progress in learning from streaming data while preserving knowledge across sequential tasks, particularly in the realm of Euclidean data. To foster fair evaluation and recognize challenges in CL settings, several evaluation frameworks have been proposed, focusing mainly on the single- and multi-label classification task on Euclidean data. However, these evaluation frameworks are not trivially applicable when the input data is graph-structured, as they do not consider the topological structure inherent in graphs. Existing continual graph learning (CGL) evaluation frameworks have predominantly focused on single-label scenarios in the node classification (NC) task. This focus has overlooked the complexities of multi-label scenarios, where nodes may exhibit affiliations with multiple labels, simultaneously participating in multiple tasks. We develop a graph-aware evaluation framework (AGALE) that accommodates both single-labeled and multi-labeled nodes, addressing the limitations of previous evaluation frameworks. In particular, we define new incremental settings and devise data partitioning algorithms tailored to CGL datasets. We perform extensive experiments comparing methods from the domains of continual learning, continual graph learning, and dynamic graph learning (DGL). We theoretically analyze AGALE and provide new insights about the role of homophily in the performance of compared methods. We release our framework at https://github.com/Tianqi-py/AGALE. | [
"['Tianqi Zhao' 'Alan Hanjalic' 'Megha Khosla']"
] |
null | null | 2406.01234 | null | null | http://arxiv.org/pdf/2406.01234v1 | 2024-06-03T11:53:44Z | 2024-06-03T11:53:44Z | Achieving Tractable Minimax Optimal Regret in Average Reward MDPs | In recent years, significant attention has been directed towards learning average-reward Markov Decision Processes (MDPs). However, existing algorithms either suffer from sub-optimal regret guarantees or computational inefficiencies. In this paper, we present the first tractable algorithm with minimax optimal regret of $\widetilde{\mathrm{O}}(\sqrt{\mathrm{sp}(h^*) S A T})$, where $\mathrm{sp}(h^*)$ is the span of the optimal bias function $h^*$, $S \times A$ is the size of the state-action space and $T$ the number of learning steps. Remarkably, our algorithm does not require prior information on $\mathrm{sp}(h^*)$. Our algorithm relies on a novel subroutine, Projected Mitigated Extended Value Iteration (PMEVI), to compute bias-constrained optimal policies efficiently. This subroutine can be applied to various previous algorithms to improve regret bounds. | [
"['Victor Boone' 'Zihan Zhang']"
] |
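The quantities in the regret bound above, the optimal gain and the bias function $h^*$ with span $\mathrm{sp}(h^*)$, can be computed for a known MDP with classical relative value iteration, the building block that extended value iteration and the paper's PMEVI subroutine elaborate on. A small numpy sketch on a random MDP is below; PMEVI's projection and bias constraints are not reproduced.

```python
import numpy as np

# Standard relative value iteration for a known average-reward MDP (a classical
# building block; the paper's PMEVI subroutine adds projection and bias
# constraints on top of extended value iteration and is not reproduced here).

rng = np.random.default_rng(0)
S, A = 4, 2
P = rng.dirichlet(np.ones(S), size=(S, A))        # transition kernel P[s, a] over next states
R = rng.uniform(size=(S, A))                      # rewards

h = np.zeros(S)
for _ in range(2000):
    Q = R + P @ h                                 # Q[s, a] = r(s, a) + sum_s' P(s'|s, a) h(s')
    h_new = Q.max(axis=1)
    h_new -= h_new[0]                             # relative VI: pin the bias at a reference state
    if np.max(np.abs(h_new - h)) < 1e-10:
        h = h_new
        break
    h = h_new

gain = (R + P @ h).max(axis=1)[0] - h[0]          # optimal average reward (gain)
print("bias h:", h.round(4), "span sp(h):", round(h.max() - h.min(), 4), "gain:", round(gain, 4))
```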
null | null | 2406.01249 | null | null | http://arxiv.org/pdf/2406.01249v1 | 2024-06-03T12:07:01Z | 2024-06-03T12:07:01Z | Equivariant Machine Learning on Graphs with Nonlinear Spectral Filters | Equivariant machine learning is an approach for designing deep learning models that respect the symmetries of the problem, with the aim of reducing model complexity and improving generalization. In this paper, we focus on an extension of shift equivariance, which is the basis of convolution networks on images, to general graphs. Unlike images, graphs do not have a natural notion of domain translation. Therefore, we consider the graph functional shifts as the symmetry group: the unitary operators that commute with the graph shift operator. Notably, such symmetries operate in the signal space rather than directly in the spatial space. We remark that each linear filter layer of a standard spectral graph neural network (GNN) commutes with graph functional shifts, but the activation function breaks this symmetry. Instead, we propose nonlinear spectral filters (NLSFs) that are fully equivariant to graph functional shifts and show that they have universal approximation properties. The proposed NLSFs are based on a new form of spectral domain that is transferable between graphs. We demonstrate the superior performance of NLSFs over existing spectral GNNs in node and graph classification benchmarks. | [
"['Ya-Wei Eileen Lin' 'Ronen Talmon' 'Ron Levie']"
] |
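The paper above places the nonlinearity in the spectral (signal) domain rather than in the vertex domain. The toy numpy sketch below decomposes a graph signal in the eigenbasis of the graph Laplacian, applies a nonlinearity (soft-thresholding) to the spectral coefficients, and maps back; the paper's actual NLSF parametrization, its equivariance construction, and its transferable spectral domain are not reproduced.

```python
import numpy as np

# Toy sketch of operating on a graph signal in the spectral domain of the graph
# shift operator and applying a nonlinearity to the spectral coefficients. This
# illustrates the general idea only; it is not the paper's NLSF construction.

rng = np.random.default_rng(0)
n = 8
A = rng.integers(0, 2, size=(n, n))
A = np.triu(A, 1); A = A + A.T                      # random undirected adjacency, no self-loops
L = np.diag(A.sum(1)) - A                           # combinatorial graph Laplacian
evals, U = np.linalg.eigh(L)                        # spectral basis of the shift operator

x = rng.normal(size=n)                              # graph signal on the vertices
c = U.T @ x                                         # graph Fourier coefficients

tau = 0.5
c_filtered = np.sign(c) * np.maximum(np.abs(c) - tau, 0.0)   # nonlinearity in spectral space
y = U @ c_filtered                                  # back to the vertex domain

print("input energy:", round(float(x @ x), 3), "output energy:", round(float(y @ y), 3))
```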
null | null | 2406.01250 | null | null | http://arxiv.org/pdf/2406.01250v1 | 2024-06-03T12:07:22Z | 2024-06-03T12:07:22Z | DumpKV: Learning based lifetime aware garbage collection for key value
separation in LSM-tree | Key-value separation is used in LSM-trees to store large values in separate log files and reduce write amplification, but it requires garbage collection to reclaim invalid values. Existing garbage collection techniques in LSM-trees typically adopt static, parameter-based garbage collection to reclaim obsolete values, which struggles to achieve low write amplification, and it is challenging to find proper parameters for triggering garbage collection. In this work we introduce DumpKV, which performs learning-based, lifetime-aware garbage collection with dynamic lifetime adjustment to achieve lower write amplification. DumpKV manages large values using a trained lightweight model, with features suitable for various applications, that predicts a lifetime for each individual key based on its past write-access information, enabling efficient garbage collection. To reduce interference with write throughput, DumpKV collects features during L0-L1 compaction, leveraging the fact that the LSM-tree is small under key-value separation. Experimental results show that DumpKV achieves 38%-73% lower write amplification compared to existing key-value separation garbage collection LSM-tree stores, with small feature storage overhead. | [
"['Zhutao Zhuang' 'Xinqi Zeng' 'Zhiguang Chen']"
] |
null | null | 2406.01255 | null | null | http://arxiv.org/pdf/2406.01255v1 | 2024-06-03T12:11:34Z | 2024-06-03T12:11:34Z | On the Nonlinearity of Layer Normalization | Layer normalization (LN) is a ubiquitous technique in deep learning but our theoretical understanding of it remains elusive. This paper investigates a new theoretical direction for LN, regarding its nonlinearity and representation capacity. We investigate the representation capacity of a network with layerwise composition of linear and LN transformations, referred to as LN-Net. We theoretically show that, given $m$ samples with any label assignment, an LN-Net with only 3 neurons in each layer and $O(m)$ LN layers can correctly classify them. We further derive a lower bound on the VC dimension of an LN-Net. The nonlinearity of LN can be amplified by group partition, which is also theoretically demonstrated under mild assumptions and empirically supported by our experiments. Based on our analyses, we consider designing neural architectures by exploiting and amplifying the nonlinearity of LN, and the effectiveness is supported by our experiments. | [
"['Yunhao Ni' 'Yuxin Guo' 'Junlong Jia' 'Lei Huang']"
] |
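The object analyzed in the paper above, an LN-Net, is a layerwise composition of linear maps and layer normalization with a small fixed width. The numpy sketch below builds such a composition with width 3 and random weights, just to make the architecture concrete; it is not the paper's explicit construction for classifying $m$ arbitrarily labeled points.

```python
import numpy as np

# Tiny sketch of an "LN-Net": a layerwise composition of linear maps and layer
# normalization with 3 neurons per layer, the architecture whose representation
# capacity the paper analyzes. Weights here are random stand-ins.

def layer_norm(x, eps=1e-5):
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)            # per-sample normalization

rng = np.random.default_rng(0)
width, depth = 3, 4
weights = [rng.normal(size=(width, width)) for _ in range(depth)]

x = rng.normal(size=(5, width))                     # 5 samples, 3 features
h = x
for W in weights:
    h = layer_norm(h @ W)                           # linear map followed by LN
print("output after", depth, "linear+LN layers:\n", h.round(3))
```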