categories (string) | doi (string) | id (string) | year (float64) | venue (string) | link (string) | updated (string) | published (string) | title (string) | abstract (string) | authors (sequence) |
---|---|---|---|---|---|---|---|---|---|---|
null | null | 2405.16380 | null | null | http://arxiv.org/pdf/2405.16380v1 | 2024-05-25T23:39:35Z | 2024-05-25T23:39:35Z | Dynamic Inhomogeneous Quantum Resource Scheduling with Reinforcement Learning | A central challenge in quantum information science and technology is achieving real-time estimation and feedforward control of quantum systems. This challenge is compounded by the inherent inhomogeneity of quantum resources, such as qubit properties and controls, and their intrinsically probabilistic nature. This leads to stochastic challenges in error detection and probabilistic outcomes in processes such as heralded remote entanglement. Given these complexities, optimizing the construction of quantum resource states is an NP-hard problem. In this paper, we address the quantum resource scheduling issue by formulating the problem and simulating it within a digitized environment, allowing the exploration and development of agent-based optimization strategies. We employ reinforcement learning agents within this probabilistic setting and introduce a new framework utilizing a Transformer model that emphasizes self-attention mechanisms for pairs of qubits. This approach facilitates dynamic scheduling by providing real-time, next-step guidance. Our method significantly improves the performance of quantum systems, achieving more than a 3$\times$ improvement over rule-based agents, and establishes an innovative framework that improves the joint design of physical and control systems for quantum applications in communication, networking, and computing. | ['Linsen Li', 'Pratyush Anand', 'Kaiming He', 'Dirk Englund'] |
null | null | 2405.16381 | null | null | http://arxiv.org/pdf/2405.16381v1 | 2024-05-25T23:53:07Z | 2024-05-25T23:53:07Z | Trivialized Momentum Facilitates Diffusion Generative Modeling on Lie Groups | The generative modeling of data on manifolds is an important task, for which diffusion models in flat spaces typically need nontrivial adaptations. This article demonstrates how a technique called `trivialization' can transfer the effectiveness of diffusion models in Euclidean spaces to Lie groups. In particular, an auxiliary momentum variable was algorithmically introduced to help transport the position variable between data distribution and a fixed, easy-to-sample distribution. Normally, this would incur further difficulty for manifold data because momentum lives in a space that changes with the position. However, our trivialization technique creates a new momentum variable that stays in a simple $\textbf{fixed vector space}$. This design, together with a manifold preserving integrator, simplifies implementation and avoids inaccuracies created by approximations such as projections to tangent space and manifold, which were typically used in prior work, hence facilitating generation with high fidelity and efficiency. The resulting method achieves state-of-the-art performance on protein and RNA torsion angle generation and sophisticated torus datasets. We also, arguably for the first time, tackle the generation of data on high-dimensional Special Orthogonal and Unitary groups, the latter essential for quantum problems. | ['Yuchen Zhu', 'Tianrong Chen', 'Lingkai Kong', 'Evangelos A. Theodorou', 'Molei Tao'] |
null | null | 2405.16383 | null | null | http://arxiv.org/pdf/2405.16383v1 | 2024-05-26T00:01:29Z | 2024-05-26T00:01:29Z | Rewarded Region Replay (R3) for Policy Learning with Discrete Action Space | We introduce a new on-policy algorithm called Rewarded Region Replay (R3), which significantly improves on PPO in solving environments with discrete action spaces. R3 improves sample efficiency by using a replay buffer which contains past successful trajectories with reward above a certain threshold, which are used to update a PPO agent with importance sampling. Crucially, we discard the importance sampling factors which are above a certain ratio to reduce variance and stabilize training. We found that R3 significantly outperforms PPO in Minigrid environments with sparse rewards and discrete action space, such as DoorKeyEnv and CrossingEnv, and moreover we found that the improvement margin of our method versus baseline PPO increases with the complexity of the environment. We also benchmarked the performance of R3 against DDQN (Double Deep Q-Network), which is a standard baseline in off-policy methods for discrete actions, and found that R3 also outperforms the DDQN agent in DoorKeyEnv. Lastly, we adapt the idea of R3 to the dense reward setting to obtain the Dense R3 algorithm (or DR3) and benchmarked it against PPO on the CartPole-v1 environment. We found that DR3 outperforms PPO significantly on this dense reward environment. Our code can be found at https://github.com/chry-santhemum/R3. | ['Bangzheng Li', 'Ningshan Ma', 'Zifan Wang'] |
null | null | 2405.16386 | null | null | http://arxiv.org/pdf/2405.16386v1 | 2024-05-26T00:24:46Z | 2024-05-26T00:24:46Z | Variational Offline Multi-agent Skill Discovery | Skills are effective temporal abstractions established for sequential decision making tasks, which enable efficient hierarchical learning for long-horizon tasks and facilitate multi-task learning through their transferability. Despite extensive research, research gaps remain in multi-agent scenarios, particularly for automatically extracting subgroup coordination patterns in a multi-agent task. To this end, we propose two novel auto-encoder schemes: VO-MASD-3D and VO-MASD-Hier, to simultaneously capture subgroup- and temporal-level abstractions and form multi-agent skills, which is the first to solve the aforementioned challenge. An essential algorithm component of these schemes is a dynamic grouping function that can automatically detect latent subgroups based on agent interactions in a task. Notably, our method can be applied to offline multi-task data, and the discovered subgroup skills can be transferred across relevant tasks without retraining. Empirical evaluations on StarCraft tasks indicate that our approach significantly outperforms existing methods regarding applying skills in multi-agent reinforcement learning (MARL). Moreover, skills discovered using our method can effectively reduce the learning difficulty in MARL scenarios with delayed and sparse reward signals. | ['Jiayu Chen', 'Bhargav Ganguly', 'Tian Lan', 'Vaneet Aggarwal'] |
null | null | 2405.16387 | null | null | http://arxiv.org/pdf/2405.16387v1 | 2024-05-26T00:26:57Z | 2024-05-26T00:26:57Z | Reverse Transition Kernel: A Flexible Framework to Accelerate Diffusion Inference | To generate data from trained diffusion models, most inference algorithms, such as DDPM, DDIM, and other variants, rely on discretizing the reverse SDEs or their equivalent ODEs. In this paper, we view such approaches as decomposing the entire denoising diffusion process into several segments, each corresponding to a reverse transition kernel (RTK) sampling subproblem. Specifically, DDPM uses a Gaussian approximation for the RTK, resulting in low per-subproblem complexity but requiring a large number of segments (i.e., subproblems), which is conjectured to be inefficient. To address this, we develop a general RTK framework that enables a more balanced subproblem decomposition, resulting in $\tilde O(1)$ subproblems, each with strongly log-concave targets. We then propose leveraging two fast sampling algorithms, the Metropolis-Adjusted Langevin Algorithm (MALA) and Underdamped Langevin Dynamics (ULD), for solving these strongly log-concave subproblems. This gives rise to the RTK-MALA and RTK-ULD algorithms for diffusion inference. In theory, we further develop the convergence guarantees for RTK-MALA and RTK-ULD in total variation (TV) distance: RTK-ULD can achieve $\epsilon$ target error within $\tilde{\mathcal O}(d^{1/2}\epsilon^{-1})$ under mild conditions, and RTK-MALA enjoys a $\mathcal{O}(d^{2}\log(d/\epsilon))$ convergence rate under slightly stricter conditions. These theoretical results surpass the state-of-the-art convergence rates for diffusion inference and are well supported by numerical experiments. | ['Xunpeng Huang', 'Difan Zou', 'Hanze Dong', 'Yi Zhang', 'Yi-An Ma', 'Tong Zhang'] |
null | null | 2405.16388 | null | null | http://arxiv.org/pdf/2405.16388v1 | 2024-05-26T00:29:04Z | 2024-05-26T00:29:04Z | Multi-Reference Preference Optimization for Large Language Models | How can Large Language Models (LLMs) be aligned with human intentions and values? A typical solution is to gather human preferences on model outputs and finetune the LLMs accordingly while ensuring that updates do not deviate too far from a reference model. Recent approaches, such as direct preference optimization (DPO), have eliminated the need for unstable and sluggish reinforcement learning optimization by introducing closed-form supervised losses. However, a significant limitation of the current approach is its design for a single reference model only, neglecting to leverage the collective power of numerous pretrained LLMs. To overcome this limitation, we introduce a novel closed-form formulation for direct preference optimization using multiple reference models. The resulting algorithm, Multi-Reference Preference Optimization (MRPO), leverages broader prior knowledge from diverse reference models, substantially enhancing preference learning capabilities compared to the single-reference DPO. Our experiments demonstrate that LLMs finetuned with MRPO generalize better across various preference data, regardless of data scarcity or abundance. Furthermore, MRPO effectively finetunes LLMs to exhibit superior performance in several downstream natural language processing tasks such as GSM8K and TruthfulQA. | ['Hung Le', 'Quan Tran', 'Dung Nguyen', 'Kien Do', 'Saloni Mittal', 'Kelechi Ogueji', 'Svetha Venkatesh'] |
null | null | 2405.16390 | null | null | http://arxiv.org/pdf/2405.16390v1 | 2024-05-26T00:42:10Z | 2024-05-26T00:42:10Z | Safe and Balanced: A Framework for Constrained Multi-Objective Reinforcement Learning | In numerous reinforcement learning (RL) problems involving safety-critical systems, a key challenge lies in balancing multiple objectives while simultaneously meeting all stringent safety constraints. To tackle this issue, we propose a primal-based framework that orchestrates policy optimization between multi-objective learning and constraint adherence. Our method employs a novel natural policy gradient manipulation method to optimize multiple RL objectives and overcome conflicting gradients between different tasks, since the simple weighted average gradient direction may not be beneficial for specific tasks' performance due to misaligned gradients of different task objectives. When there is a violation of a hard constraint, our algorithm steps in to rectify the policy to minimize this violation. We establish theoretical convergence and constraint violation guarantees in a tabular setting. Empirically, our proposed method also outperforms prior state-of-the-art methods on challenging safe multi-objective reinforcement learning tasks. | ['Shangding Gu', 'Bilgehan Sel', 'Yuhao Ding', 'Lu Wang', 'Qingwei Lin', 'Alois Knoll', 'Ming Jin'] |
null | null | 2405.16391 | null | null | http://arxiv.org/pdf/2405.16391v1 | 2024-05-26T00:50:11Z | 2024-05-26T00:50:11Z | When does compositional structure yield compositional generalization? A kernel theory | Compositional generalization (the ability to respond correctly to novel combinations of familiar components) is thought to be a cornerstone of intelligent behavior. Compositionally structured (e.g. disentangled) representations are essential for this; however, the conditions under which they yield compositional generalization remain unclear. To address this gap, we present a general theory of compositional generalization in kernel models with fixed, potentially nonlinear representations (which also applies to neural networks in the "lazy regime"). We prove that these models are functionally limited to adding up values assigned to conjunctions/combinations of components that have been seen during training ("conjunction-wise additivity"), and identify novel compositionality failure modes that arise from the data and model structure, even for disentangled inputs. For models in the representation learning (or "rich") regime, we show that networks can generalize on an important non-additive task (associative inference), and give a mechanistic explanation for why. Finally, we validate our theory empirically, showing that it captures the behavior of deep neural networks trained on a set of compositional tasks. In sum, our theory characterizes the principles giving rise to compositional generalization in kernel models and shows how representation learning can overcome their limitations. We further provide a formally grounded, novel generalization class for compositional tasks that highlights fundamental differences in the required learning mechanisms (conjunction-wise additivity). | ['Samuel Lippl', 'Kim Stachenfeld'] |
null | null | 2405.16395 | null | null | http://arxiv.org/pdf/2405.16395v1 | 2024-05-26T01:08:28Z | 2024-05-26T01:08:28Z | Daily Physical Activity Monitoring -- Adaptive Learning from Multi-source Motion Sensor Data | In healthcare applications, there is a growing need to develop machine learning models that use data from a single source, such as that from a wrist wearable device, to monitor physical activities, assess health risks, and provide immediate health recommendations or interventions. However, the limitation of using single-source data often compromises the model's accuracy, as it fails to capture the full scope of human activities. While a more comprehensive dataset can be gathered in a lab setting using multiple sensors attached to various body parts, this approach is not practical for everyday use due to the impracticality of wearing multiple sensors. To address this challenge, we introduce a transfer learning framework that optimizes machine learning models for everyday applications by leveraging multi-source data collected in a laboratory setting. We introduce a novel metric to leverage the inherent relationship between these multiple data sources, as they are all paired to capture aspects of the same physical activity. Through numerical experiments, our framework outperforms existing methods in classification accuracy and robustness to noise, offering a promising avenue for the enhancement of daily activity monitoring. | ['Haoting Zhang', 'Donglin Zhan', 'Yunduan Lin', 'Jinghai He', 'Qing Zhu', 'Zuo-Jun Max Shen', 'Zeyu Zheng'] |
null | null | 2405.16396 | null | null | http://arxiv.org/pdf/2405.16396v1 | 2024-05-26T01:12:24Z | 2024-05-26T01:12:24Z | Machine learning in business process management: A systematic literature review | Machine learning (ML) provides algorithms to create computer programs based on data without explicitly programming them. In business process management (BPM), ML applications are used to analyse and improve processes efficiently. Three frequent examples of using ML are providing decision support through predictions, discovering accurate process models, and improving resource allocation. This paper organises the body of knowledge on ML in BPM. We extract BPM tasks from different literature streams, summarise them under the phases of a process's lifecycle, explain how ML helps perform these tasks and identify technical commonalities in ML implementations across tasks. This study is the first exhaustive review of how ML has been used in BPM. We hope that it can open the door for a new era of cumulative research by helping researchers to identify relevant preliminary work and then combine and further develop existing approaches in a focused fashion. Our paper helps managers and consultants to find ML applications that are relevant in the current project phase of a BPM initiative, like redesigning a business process. We also offer - as a synthesis of our review - a research agenda that sets out ten avenues for future research, including applying novel ML concepts like federated learning, addressing less regarded BPM lifecycle phases like process identification, and delivering ML applications with a focus on end-users. | ['Sven Weinzierl', 'Sandra Zilker', 'Sebastian Dunzer', 'Martin Matzner'] |
null | null | 2405.16397 | null | null | http://arxiv.org/pdf/2405.16397v1 | 2024-05-26T01:25:02Z | 2024-05-26T01:25:02Z | AdaFisher: Adaptive Second Order Optimization via Fisher Information | First-order optimization methods are currently the mainstream in training deep neural networks (DNNs). Optimizers like Adam incorporate limited curvature information by employing the diagonal matrix preconditioning of the stochastic gradient during the training. Despite the widespread adoption of first-order methods, second-order optimization algorithms exhibit superior convergence properties compared to their first-order counterparts, e.g. Adam and SGD. However, their practicality in training DNNs is still limited due to increased per-iteration computations and suboptimal accuracy compared to first-order methods. We present AdaFisher, an adaptive second-order optimizer that leverages a block-diagonal approximation to the Fisher information matrix for adaptive gradient preconditioning. AdaFisher aims to bridge the gap between enhanced convergence capabilities and computational efficiency in second-order optimization framework for training DNNs. Despite the slow pace of second-order optimizers, we showcase that AdaFisher can be reliably adopted for image classification, language modelling and stand out for its stability and robustness in hyperparameter tuning. We demonstrate that AdaFisher outperforms the SOTA optimizers in terms of both accuracy and convergence speed. Code is available at https://github.com/AtlasAnalyticsLab/AdaFisher. | ['Damien Martins Gomes', 'Yanlei Zhang', 'Eugene Belilovsky', 'Guy Wolf', 'Mahdi S. Hosseini'] |
null | null | 2405.16401 | null | null | http://arxiv.org/pdf/2405.16401v1 | 2024-05-26T01:46:22Z | 2024-05-26T01:46:22Z | Understanding the Effect of using Semantically Meaningful Tokens for Visual Representation Learning | Vision transformers have established a precedent of patchifying images into uniformly-sized chunks before processing. We hypothesize that this design choice may limit models in learning comprehensive and compositional representations from visual data. This paper explores the notion of providing semantically-meaningful visual tokens to transformer encoders within a vision-language pre-training framework. Leveraging off-the-shelf segmentation and scene-graph models, we extract representations of instance segmentation masks (referred to as tangible tokens) and relationships and actions (referred to as intangible tokens). Subsequently, we pre-train a vision-side transformer by incorporating these newly extracted tokens and aligning the resultant embeddings with caption embeddings from a text-side encoder. To capture the structural and semantic relationships among visual tokens, we introduce additive attention weights, which are used to compute self-attention scores. Our experiments on COCO demonstrate notable improvements over ViTs in learned representation quality across text-to-image (+47%) and image-to-text retrieval (+44%) tasks. Furthermore, we showcase the advantages on compositionality benchmarks such as ARO (+18%) and Winoground (+10%). | ['Neha Kalibhat', 'Priyatham Kattakinda', 'Arman Zarei', 'Nikita Seleznev', 'Samuel Sharpe', 'Senthil Kumar', 'Soheil Feizi'] |
null | null | 2405.16405 | null | null | http://arxiv.org/pdf/2405.16405v1 | 2024-05-26T02:12:02Z | 2024-05-26T02:12:02Z | Intruding with Words: Towards Understanding Graph Injection Attacks at the Text Level | Graph Neural Networks (GNNs) excel across various applications but remain vulnerable to adversarial attacks, particularly Graph Injection Attacks (GIAs), which inject malicious nodes into the original graph and pose realistic threats. Text-attributed graphs (TAGs), where nodes are associated with textual features, are crucial due to their prevalence in real-world applications and are commonly used to evaluate these vulnerabilities. However, existing research only focuses on embedding-level GIAs, which inject node embeddings rather than actual textual content, limiting their applicability and simplifying detection. In this paper, we pioneer the exploration of GIAs at the text level, presenting three novel attack designs that inject textual content into the graph. Through theoretical and empirical analysis, we demonstrate that text interpretability, a factor previously overlooked at the embedding level, plays a crucial role in attack strength. Among the designs we investigate, the Word-frequency-based Text-level GIA (WTGIA) is particularly notable for its balance between performance and interpretability. Despite the success of WTGIA, we discover that defenders can easily enhance their defenses with customized text embedding methods or large language model (LLM)-based predictors. These insights underscore the necessity for further research into the potential and practical significance of text-level GIAs. | ['Runlin Lei', 'Yuwei Hu', 'Yuchen Ren', 'Zhewei Wei'] |
null | null | 2405.16406 | null | null | http://arxiv.org/pdf/2405.16406v2 | 2024-05-28T18:14:15Z | 2024-05-26T02:15:49Z | SpinQuant: LLM quantization with learned rotations | Post-training quantization (PTQ) techniques applied to weights, activations, and the KV cache greatly reduce memory usage, latency, and power consumption of Large Language Models (LLMs), but may lead to large quantization errors when outliers are present. Recent findings suggest that rotating activation or weight matrices helps remove outliers and benefits quantization. In this work, we identify a collection of applicable rotation parameterizations that lead to identical outputs in full-precision Transformer architectures, and find that some random rotations lead to much better quantization than others, with an up to 13 points difference in downstream zero-shot reasoning performance. As a result, we propose SpinQuant that optimizes (or learns) the rotation matrices with Cayley optimization on a small validation set. With 4-bit quantization of weight, activation, and KV-cache, SpinQuant narrows the accuracy gap on zero-shot reasoning tasks with full precision to merely 2.9 points on the LLaMA-2 7B model, surpassing LLM-QAT by 19.1 points and SmoothQuant by 25.0 points. SpinQuant also outperforms concurrent work QuaRot, which applies random rotations to remove outliers. In particular, for LLaMA-2 7B/LLaMA-3 8B models that are hard to quantize, SpinQuant reduces the gap to full precision by 30.2%/34.1% relative to QuaRot. | ['Zechun Liu', 'Changsheng Zhao', 'Igor Fedorov', 'Bilge Soran', 'Dhruv Choudhary', 'Raghuraman Krishnamoorthi', 'Vikas Chandra', 'Yuandong Tian', 'Tijmen Blankevoort'] |
null | null | 2405.16409 | null | null | http://arxiv.org/pdf/2405.16409v1 | 2024-05-26T02:34:26Z | 2024-05-26T02:34:26Z | Network Interdiction Goes Neural | Network interdiction problems are combinatorial optimization (CO) problems involving two players: one aims to solve an optimization problem on a network, while the other seeks to modify the network to thwart the first player's objectives. Such problems typically emerge in an attacker-defender context, encompassing areas such as military operations, disease spread analysis, and communication network management. The primary bottleneck in network interdiction arises from the high time complexity of using conventional exact solvers and the challenges associated with devising efficient heuristic solvers. Graph neural networks (GNNs), recognized as a cutting-edge methodology, have shown significant effectiveness in addressing single-level CO problems on graphs, such as the traveling salesman problem, graph matching, and graph edit distance. Nevertheless, network interdiction presents a bi-level optimization challenge, which current GNNs find difficult to manage. To address this gap, we represent network interdiction problems as Mixed-Integer Linear Programming (MILP) instances, then apply a multipartite GNN with sufficient representational capacity to learn these formulations. This approach ensures that our neural network is more compatible with the mathematical algorithms designed to solve network interdiction problems, resulting in improved generalization. Through two distinct tasks, we demonstrate that our proposed method outperforms theoretical baseline models and provides advantages over traditional exact solvers. | ['Lei Zhang', 'Zhiqian Chen', 'Chang-Tien Lu', 'Liang Zhao'] |
null | null | 2405.16411 | null | null | http://arxiv.org/pdf/2405.16411v1 | 2024-05-26T02:59:13Z | 2024-05-26T02:59:13Z | Tensor Attention Training: Provably Efficient Learning of Higher-order Transformers | Tensor Attention, a multi-view attention that is able to capture high-order correlations among multiple modalities, can overcome the representational limitations of classical matrix attention. However, the $\Omega(n^3)$ time complexity of tensor attention poses a significant obstacle to its practical implementation in transformers, where $n$ is the input sequence length. In this work, we prove that the backward gradient of tensor attention training can be computed in almost linear $n^{1+o(1)}$ time, the same complexity as its forward computation under a bounded entries assumption. We provide a closed-form solution for the gradient and propose a fast computation method utilizing polynomial approximation methods and tensor algebraic tricks. Furthermore, we prove the necessity and tightness of our assumption through hardness analysis, showing that slightly weakening it renders the gradient problem unsolvable in truly subcubic time. Our theoretical results establish the feasibility of efficient higher-order transformer training and may facilitate practical applications of tensor attention architectures. | ['Jiuxiang Gu', 'Yingyu Liang', 'Zhenmei Shi', 'Zhao Song', 'Yufa Zhou'] |
null | null | 2405.16412 | null | null | http://arxiv.org/pdf/2405.16412v2 | 2024-06-04T07:35:32Z | 2024-05-26T03:04:26Z | KG-FIT: Knowledge Graph Fine-Tuning Upon Open-World Knowledge | Knowledge Graph Embedding (KGE) techniques are crucial in learning compact representations of entities and relations within a knowledge graph, facilitating efficient reasoning and knowledge discovery. While existing methods typically focus either on training KGE models solely based on graph structure or fine-tuning pre-trained language models with classification data in KG, KG-FIT leverages LLM-guided refinement to construct a semantically coherent hierarchical structure of entity clusters. By incorporating this hierarchical knowledge along with textual information during the fine-tuning process, KG-FIT effectively captures both global semantics from the LLM and local semantics from the KG. Extensive experiments on the benchmark datasets FB15K-237, YAGO3-10, and PrimeKG demonstrate the superiority of KG-FIT over state-of-the-art pre-trained language model-based methods, achieving improvements of 14.4%, 13.5%, and 11.9% in the Hits@10 metric for the link prediction task, respectively. Furthermore, KG-FIT yields substantial performance gains of 12.6%, 6.7%, and 17.7% compared to the structure-based base models upon which it is built. These results highlight the effectiveness of KG-FIT in incorporating open-world knowledge from LLMs to significantly enhance the expressiveness and informativeness of KG embeddings. | ['Pengcheng Jiang', 'Lang Cao', 'Cao Xiao', 'Parminder Bhatia', 'Jimeng Sun', 'Jiawei Han'] |
null | null | 2405.16413 | null | null | http://arxiv.org/pdf/2405.16413v1 | 2024-05-26T03:05:10Z | 2024-05-26T03:05:10Z | Augmented Risk Prediction for the Onset of Alzheimer's Disease from Electronic Health Records with Large Language Models | Alzheimer's disease (AD) is the fifth-leading cause of death among Americans aged 65 and older. Screening and early detection of AD and related dementias (ADRD) are critical for timely intervention and for identifying clinical trial participants. The widespread adoption of electronic health records (EHRs) offers an important resource for developing ADRD screening tools such as machine learning based predictive models. Recent advancements in large language models (LLMs) demonstrate their unprecedented capability of encoding knowledge and performing reasoning, which offers them strong potential for enhancing risk prediction. This paper proposes a novel pipeline that augments risk prediction by leveraging the few-shot inference power of LLMs to make predictions on cases where traditional supervised learning methods (SLs) may not excel. Specifically, we develop a collaborative pipeline that combines SLs and LLMs via a confidence-driven decision-making mechanism, leveraging the strengths of SLs in clear-cut cases and LLMs in more complex scenarios. We evaluate this pipeline using a real-world EHR data warehouse from Oregon Health & Science University (OHSU) Hospital, encompassing EHRs from over 2.5 million patients and more than 20 million patient encounters. Our results show that our proposed approach effectively combines the power of SLs and LLMs, offering significant improvements in predictive performance. This advancement holds promise for revolutionizing ADRD screening and early detection practices, with potential implications for better strategies of patient management and thus improving healthcare. | ['Jiankun Wang', 'Sumyeong Ahn', 'Taykhoom Dalal', 'Xiaodan Zhang', 'Weishen Pan', 'Qiannan Zhang', 'Bin Chen', 'Hiroko H. Dodge', 'Fei Wang', 'Jiayu Zhou'] |
null | null | 2405.16418 | null | null | http://arxiv.org/pdf/2405.16418v1 | 2024-05-26T03:32:27Z | 2024-05-26T03:32:27Z | Unraveling the Smoothness Properties of Diffusion Models: A Gaussian Mixture Perspective | Diffusion models have made rapid progress in generating high-quality samples across various domains. However, a theoretical understanding of the Lipschitz continuity and second momentum properties of the diffusion process is still lacking. In this paper, we bridge this gap by providing a detailed examination of these smoothness properties for the case where the target data distribution is a mixture of Gaussians, which serves as a universal approximator for smooth densities such as image data. We prove that if the target distribution is a $k$-mixture of Gaussians, the density of the entire diffusion process will also be a $k$-mixture of Gaussians. We then derive tight upper bounds on the Lipschitz constant and second momentum that are independent of the number of mixture components $k$. Finally, we apply our analysis to various diffusion solvers, both SDE and ODE based, to establish concrete error guarantees in terms of the total variation distance and KL divergence between the target and learned distributions. Our results provide deeper theoretical insights into the dynamics of the diffusion process under common data distributions. | ['Jiuxiang Gu', 'Yingyu Liang', 'Zhenmei Shi', 'Zhao Song', 'Yufa Zhou'] |
null | null | 2405.16422 | null | null | http://arxiv.org/pdf/2405.16422v1 | 2024-05-26T04:26:07Z | 2024-05-26T04:26:07Z | AI-Generated Text Detection and Classification Based on BERT Deep Learning Algorithm | AI-generated text detection plays an increasingly important role in various fields. In this study, we developed an efficient AI-generated text detection model based on the BERT algorithm, which provides new ideas and methods for solving related problems. In the data preprocessing stage, a series of steps were taken to process the text, including operations such as converting to lowercase, word splitting, removing stop words, stemming extraction, removing digits, and eliminating redundant spaces, to ensure data quality and accuracy. By dividing the dataset into a training set and a test set in the ratio of 60% and 40%, and observing the changes in the accuracy and loss values during the training process, we found that the model performed well during the training process. The accuracy increases steadily from the initial 94.78% to 99.72%, while the loss value decreases from 0.261 to 0.021 and converges gradually, which indicates that the BERT model is able to detect AI-generated text with high accuracy and the prediction results are gradually approaching the real classification results. Further analysis of the results of the training and test sets reveals that in terms of loss value, the average loss of the training set is 0.0565, while the average loss of the test set is 0.0917, showing a slightly higher loss value. As for the accuracy, the average accuracy of the training set reaches 98.1%, while the average accuracy of the test set is 97.71%, which is not much different from each other, indicating that the model has good generalisation ability. In conclusion, the AI-generated text detection model based on the BERT algorithm proposed in this study shows high accuracy and stability in experiments, providing an effective solution for related fields. | ['Hao Wang', 'Jianwei Li', 'Zhengyu Li'] |
null | null | 2405.16424 | null | null | http://arxiv.org/pdf/2405.16424v1 | 2024-05-26T04:30:17Z | 2024-05-26T04:30:17Z | Improving Health Professionals' Onboarding with AI and XAI for
Trustworthy Human-AI Collaborative Decision Making | With advanced AI/ML, there has been growing research on explainable AI (XAI) and studies on how humans interact with AI and XAI for effective human-AI collaborative decision-making. However, we still have a lack of understanding of how AI systems and XAI should be first presented to users without technical backgrounds. In this paper, we present the findings of semi-structured interviews with health professionals (n=12) and students (n=4) majoring in medicine and health to study how to improve onboarding with AI and XAI. For the interviews, we built upon human-AI interaction guidelines to create onboarding materials of an AI system for stroke rehabilitation assessment and AI explanations and introduce them to the participants. Our findings reveal that beyond presenting traditional performance metrics on AI, participants desired benchmark information, the practical benefits of AI, and interaction trials to better contextualize AI performance, and refine the objectives and performance of AI. Based on these findings, we highlight directions for improving onboarding with AI and XAI and human-AI collaborative decision-making. | [
"['Min Hun Lee' 'Silvana Xin Yi Choo' 'Shamala D/O Thilarajah']"
] |
null | null | 2405.16435 | null | null | http://arxiv.org/pdf/2405.16435v1 | 2024-05-26T05:22:38Z | 2024-05-26T05:22:38Z | Structure-aware Semantic Node Identifiers for Learning on Graphs | We present a novel graph tokenization framework that generates structure-aware, semantic node identifiers (IDs) in the form of a short sequence of discrete codes, serving as symbolic representations of nodes. We employ vector quantization to compress continuous node embeddings from multiple layers of a graph neural network (GNN) into compact, meaningful codes, under both self-supervised and supervised learning paradigms. The resulting node IDs capture a high-level abstraction of graph data, enhancing the efficiency and interpretability of GNNs. Through extensive experiments on 34 datasets, including node classification, graph classification, link prediction, and attributed graph clustering tasks, we demonstrate that our generated node IDs not only improve computational efficiency but also achieve competitive performance compared to current state-of-the-art methods. | [
"['Yuankai Luo' 'Qijiong Liu' 'Lei Shi' 'Xiao-Ming Wu']"
] |
null | null | 2405.16436 | null | null | http://arxiv.org/pdf/2405.16436v1 | 2024-05-26T05:38:50Z | 2024-05-26T05:38:50Z | Provably Mitigating Overoptimization in RLHF: Your SFT Loss is
Implicitly an Adversarial Regularizer | Aligning generative models with human preference via RLHF typically suffers from overoptimization, where an imperfectly learned reward model can misguide the generative model to output undesired responses. We investigate this problem in a principled manner by identifying the source of the misalignment as a form of distributional shift and uncertainty in learning human preferences. To mitigate overoptimization, we first propose a theoretical algorithm that chooses the best policy for an adversarially chosen reward model; one that simultaneously minimizes the maximum likelihood estimation of the loss and a reward penalty term. Here, the reward penalty term is introduced to prevent the policy from choosing actions with spurious high proxy rewards, resulting in provable sample efficiency of the algorithm under a partial coverage style condition. Moving from theory to practice, the proposed algorithm further enjoys an equivalent but surprisingly easy-to-implement reformulation. Using the equivalence between reward models and the corresponding optimal policy, the algorithm features a simple objective that combines: (i) a preference optimization loss that directly aligns the policy with human preference, and (ii) a supervised learning loss that explicitly imitates the policy with a (suitable) baseline distribution. In the context of aligning large language models (LLM), this objective fuses the direct preference optimization (DPO) loss with the supervised fine-tuning (SFT) loss to help mitigate the overoptimization towards undesired responses, for which we name the algorithm Regularized Preference Optimization (RPO). Experiments of aligning LLMs demonstrate the improved performance of RPO compared with DPO baselines. Our work sheds light on the interplay between preference optimization and SFT in tuning LLMs with both theoretical guarantees and empirical evidence. | [
"['Zhihan Liu' 'Miao Lu' 'Shenao Zhang' 'Boyi Liu' 'Hongyi Guo'\n 'Yingxiang Yang' 'Jose Blanchet' 'Zhaoran Wang']"
] |
null | null | 2405.16439 | null | null | http://arxiv.org/pdf/2405.16439v1 | 2024-05-26T05:48:21Z | 2024-05-26T05:48:21Z | Towards Imitation Learning in Real World Unstructured Social Mini-Games
in Pedestrian Crowds | Imitation Learning (IL) strategies are used to generate policies for robot motion planning and navigation by learning from human trajectories. Recently, there has been a lot of excitement in applying IL in social interactions arising in urban environments such as university campuses, restaurants, grocery stores, and hospitals. However, obtaining numerous expert demonstrations in social settings might be expensive, risky, or even impossible. Current approaches, therefore, focus only on simulated social interaction scenarios. This raises the question: how can a robot learn to imitate an expert demonstrator from real world multi-agent social interaction scenarios? It remains unknown which, if any, IL methods perform well and what assumptions they require. We benchmark representative IL methods in real world social interaction scenarios on a motion planning task, using a novel pedestrian intersection dataset collected at the University of Texas at Austin campus. Our evaluation reveals two key findings: first, learning multi-agent cost functions is required for learning the diverse behavior modes of agents in tightly coupled interactions; and second, conditioning the training of IL methods on partial state information or providing global information in simulation can improve imitation learning, especially in real world social interaction scenarios. | [
"['Rohan Chandra' 'Haresh Karnan' 'Negar Mehr' 'Peter Stone'\n 'Joydeep Biswas']"
] |
null | null | 2405.16440 | null | null | http://arxiv.org/pdf/2405.16440v1 | 2024-05-26T05:50:17Z | 2024-05-26T05:50:17Z | MambaTS: Improved Selective State Space Models for Long-term Time Series
Forecasting | In recent years, Transformers have become the de-facto architecture for long-term sequence forecasting (LTSF), but face challenges such as quadratic complexity and permutation invariant bias. A recent model, Mamba, based on selective state space models (SSMs), has emerged as a competitive alternative to Transformer, offering comparable performance with higher throughput and linear complexity related to sequence length. In this study, we analyze the limitations of current Mamba in LTSF and propose four targeted improvements, leading to MambaTS. We first introduce variable scan along time to arrange the historical information of all the variables together. We suggest that causal convolution in Mamba is not necessary for LTSF and propose the Temporal Mamba Block (TMB). We further incorporate a dropout mechanism for selective parameters of TMB to mitigate model overfitting. Moreover, we tackle the issue of variable scan order sensitivity by introducing variable permutation training. We further propose variable-aware scan along time to dynamically discover variable relationships during training and decode the optimal variable scan order by solving the shortest path visiting all nodes problem during inference. Extensive experiments conducted on eight public datasets demonstrate that MambaTS achieves new state-of-the-art performance. | [
"['Xiuding Cai' 'Yaoyao Zhu' 'Xueyao Wang' 'Yu Yao']"
] |
null | null | 2405.16441 | null | null | http://arxiv.org/pdf/2405.16441v1 | 2024-05-26T05:50:39Z | 2024-05-26T05:50:39Z | Categorical Flow Matching on Statistical Manifolds | We introduce Statistical Flow Matching (SFM), a novel and mathematically rigorous flow-matching framework on the manifold of parameterized probability measures inspired by the results from information geometry. We demonstrate the effectiveness of our method on the discrete generation problem by instantiating SFM on the manifold of categorical distributions whose geometric properties remain unexplored in previous discrete generative models. Utilizing the Fisher information metric, we equip the manifold with a Riemannian structure whose intrinsic geometries are effectively leveraged by following the shortest paths of geodesics. We develop an efficient training and sampling algorithm that overcomes numerical stability issues with a diffeomorphism between manifolds. Our distinctive geometric perspective of statistical manifolds allows us to apply optimal transport during training and interpret SFM as following the steepest direction of the natural gradient. Unlike previous models that rely on variational bounds for likelihood estimation, SFM enjoys the exact likelihood calculation for arbitrary probability measures. We manifest that SFM can learn more complex patterns on the statistical manifold where existing models often fail due to strong prior assumptions. Comprehensive experiments on real-world generative tasks ranging from image, text to biological domains further demonstrate that SFM achieves higher sampling quality and likelihood than other discrete diffusion or flow-based models. | [
"['Chaoran Cheng' 'Jiahan Li' 'Jian Peng' 'Ge Liu']"
] |
null | null | 2405.16444 | null | null | http://arxiv.org/pdf/2405.16444v2 | 2024-06-03T10:57:57Z | 2024-05-26T06:00:17Z | CacheBlend: Fast Large Language Model Serving for RAG with Cached
Knowledge Fusion | Large language models (LLMs) often incorporate multiple text chunks in their inputs to provide the necessary contexts. To speed up the prefill of the long LLM inputs, one can pre-compute the KV cache of a text and re-use the KV cache when the context is reused as the prefix of another LLM input. However, the reused text chunks are not always the input prefix, and when they are not, their precomputed KV caches cannot be directly used since they ignore the text's cross-attention with the preceding text in the LLM input. Thus, the benefits of reusing KV caches remain largely unrealized. This paper tackles just one question: when an LLM input contains multiple text chunks, how to quickly combine their precomputed KV caches in order to achieve the same generation quality as the expensive full prefill (i.e., without reusing KV cache)? We present CacheBlend, a scheme that reuses the pre-computed KV caches, regardless of whether they are the prefix or not, and selectively recomputes the KV values of a small subset of tokens to partially update each reused KV cache. In the meantime, the small extra delay for recomputing some tokens can be pipelined with the retrieval of KV caches within the same job, allowing CacheBlend to store KV caches in slower devices with more storage capacity while retrieving them without increasing the inference delay. By comparing CacheBlend with the state-of-the-art KV cache reusing schemes on three open-source LLMs of various sizes and four popular benchmark datasets of different tasks, we show that CacheBlend reduces time-to-first-token (TTFT) by 2.2-3.3X and increases the inference throughput by 2.8-5X, compared with full KV recompute, without compromising generation quality or incurring more storage cost. | [
"['Jiayi Yao' 'Hanchen Li' 'Yuhan Liu' 'Siddhant Ray' 'Yihua Cheng'\n 'Qizheng Zhang' 'Kuntai Du' 'Shan Lu' 'Junchen Jiang']"
] |
null | null | 2405.16447 | null | null | http://arxiv.org/pdf/2405.16447v1 | 2024-05-26T06:29:12Z | 2024-05-26T06:29:12Z | Fast Asymmetric Factorization for Large Scale Multiple Kernel Clustering | Kernel methods are extensively employed for nonlinear data clustering, yet their effectiveness heavily relies on selecting suitable kernels and associated parameters, posing challenges in advance determination. In response, Multiple Kernel Clustering (MKC) has emerged as a solution, allowing the fusion of information from multiple base kernels for clustering. However, both early fusion and late fusion methods for large-scale MKC encounter challenges in memory and time constraints, necessitating simultaneous optimization of both aspects. To address this issue, we propose Efficient Multiple Kernel Concept Factorization (EMKCF), which constructs a new sparse kernel matrix inspired by local regression to achieve memory efficiency. EMKCF learns consensus and individual representations by extending orthogonal concept factorization to handle multiple kernels for time efficiency. Experimental results demonstrate the efficiency and effectiveness of EMKCF on benchmark datasets compared to state-of-the-art methods. The proposed method offers a straightforward, scalable, and effective solution for large-scale MKC tasks. | [
"['Yan Chen' 'Liang Du' 'Lei Duan']"
] |
null | null | 2405.16449 | null | null | http://arxiv.org/pdf/2405.16449v1 | 2024-05-26T06:33:11Z | 2024-05-26T06:33:11Z | Reinforcement Learning for Jump-Diffusions | We study continuous-time reinforcement learning (RL) for stochastic control in which system dynamics are governed by jump-diffusion processes. We formulate an entropy-regularized exploratory control problem with stochastic policies to capture the exploration--exploitation balance essential for RL. Unlike the pure diffusion case initially studied by Wang et al. (2020), the derivation of the exploratory dynamics under jump-diffusions calls for a careful formulation of the jump part. Through a theoretical analysis, we find that one can simply use the same policy evaluation and q-learning algorithms in Jia and Zhou (2022a, 2023), originally developed for controlled diffusions, without needing to check a priori whether the underlying data come from a pure diffusion or a jump-diffusion. However, we show that the presence of jumps ought to affect parameterizations of actors and critics in general. Finally, we investigate as an application the mean-variance portfolio selection problem with stock price modelled as a jump-diffusion, and show that both RL algorithms and parameterizations are invariant with respect to jumps. | [
"['Xuefeng Gao' 'Lingfei Li' 'Xun Yu Zhou']"
] |
null | null | 2405.16450 | null | null | http://arxiv.org/pdf/2405.16450v1 | 2024-05-26T06:33:48Z | 2024-05-26T06:33:48Z | Synthesizing Programmatic Reinforcement Learning Policies with Large
Language Model Guided Search | Programmatic reinforcement learning (PRL) has been explored for representing policies through programs as a means to achieve interpretability and generalization. Despite promising outcomes, current state-of-the-art PRL methods are hindered by sample inefficiency, necessitating tens of millions of program-environment interactions. To tackle this challenge, we introduce a novel LLM-guided search framework (LLM-GS). Our key insight is to leverage the programming expertise and common sense reasoning of LLMs to enhance the efficiency of assumption-free, random-guessing search methods. We address the challenge of LLMs' inability to generate precise and grammatically correct programs in domain-specific languages (DSLs) by proposing a Pythonic-DSL strategy - an LLM is instructed to initially generate Python codes and then convert them into DSL programs. To further optimize the LLM-generated programs, we develop a search algorithm named Scheduled Hill Climbing, designed to efficiently explore the programmatic search space to consistently improve the programs. Experimental results in the Karel domain demonstrate the superior effectiveness and efficiency of our LLM-GS framework. Extensive ablation studies further verify the critical role of our Pythonic-DSL strategy and Scheduled Hill Climbing algorithm. | [
"['Max Liu' 'Chan-Hung Yu' 'Wei-Hsu Lee' 'Cheng-Wei Hung' 'Yen-Chun Chen'\n 'Shao-Hua Sun']"
] |
null | null | 2405.16453 | null | null | http://arxiv.org/pdf/2405.16453v1 | 2024-05-26T06:52:56Z | 2024-05-26T06:52:56Z | A Slices Perspective for Incremental Nonparametric Inference in High
Dimensional State Spaces | We introduce an innovative method for incremental nonparametric probabilistic inference in high-dimensional state spaces. Our approach leverages slices from high-dimensional surfaces to efficiently approximate posterior distributions of any shape. Unlike many existing graph-based methods, our slices perspective eliminates the need for additional intermediate reconstructions, maintaining a more accurate representation of posterior distributions. Additionally, we propose a novel heuristic to balance between accuracy and efficiency, enabling real-time operation in nonparametric scenarios. In empirical evaluations on synthetic and real-world datasets, our slices approach consistently outperforms other state-of-the-art methods. It demonstrates superior accuracy and achieves a significant reduction in computational complexity, often by an order of magnitude. | [
"['Moshe Shienman' 'Ohad Levy-Or' 'Michael Kaess' 'Vadim Indelman']"
] |
null | null | 2405.16455 | null | null | http://arxiv.org/pdf/2405.16455v1 | 2024-05-26T07:00:05Z | 2024-05-26T07:00:05Z | On the Algorithmic Bias of Aligning Large Language Models with RLHF:
Preference Collapse and Matching Regularization | Accurately aligning large language models (LLMs) with human preferences is crucial for informing fair, economically sound, and statistically efficient decision-making processes. However, we argue that reinforcement learning from human feedback (RLHF) -- the predominant approach for aligning LLMs with human preferences through a reward model -- suffers from an inherent algorithmic bias due to its Kullback--Leibler-based regularization in optimization. In extreme cases, this bias could lead to a phenomenon we term preference collapse, where minority preferences are virtually disregarded. To mitigate this algorithmic bias, we introduce preference matching (PM) RLHF, a novel approach that provably aligns LLMs with the preference distribution of the reward model under the Bradley--Terry--Luce/Plackett--Luce model. Central to our approach is a PM regularizer that takes the form of the negative logarithm of the LLM's policy probability distribution over responses, which helps the LLM balance response diversification and reward maximization. Notably, we obtain this regularizer by solving an ordinary differential equation that is necessary for the PM property. For practical implementation, we introduce a conditional variant of PM RLHF that is tailored to natural language generation. Finally, we empirically validate the effectiveness of conditional PM RLHF through experiments on the OPT-1.3B and Llama-2-7B models, demonstrating a 29% to 41% improvement in alignment with human preferences, as measured by a certain metric, compared to standard RLHF. | [
"['Jiancong Xiao' 'Ziniu Li' 'Xingyu Xie' 'Emily Getzen' 'Cong Fang'\n 'Qi Long' 'Weijie J. Su']"
] |
null | null | 2405.16456 | null | null | http://arxiv.org/pdf/2405.16456v1 | 2024-05-26T07:00:12Z | 2024-05-26T07:00:12Z | Dominant Shuffle: A Simple Yet Powerful Data Augmentation for
Time-series Prediction | Recent studies have suggested frequency-domain data augmentation (DA) is effective for time series prediction. Existing frequency-domain augmentations disturb the original data with various full-spectrum noises, leading to excess domain gap between augmented and original data. Although impressive performance has been achieved in certain cases, frequency-domain DA has yet to be generalized to time series prediction datasets. In this paper, we found that frequency-domain augmentations can be significantly improved by two modifications that limit the perturbations. First, we found that limiting the perturbation to only dominant frequencies significantly outperforms full-spectrum perturbations. Dominant frequencies represent the main periodicity and trends of the signal and are more important than other frequencies. Second, we found that simply shuffling the dominant frequency components is superior to sophisticatedly designed random perturbations. Shuffle rearranges the original components (magnitudes and phases) and limits the external noise. With these two modifications, we proposed dominant shuffle, a simple yet effective data augmentation for time series prediction. Our method is very simple yet powerful and can be implemented with just a few lines of code. Extensive experiments with eight datasets and six popular time series models demonstrate that our method consistently improves the baseline performance under various settings and significantly outperforms other DA methods. Code can be accessed at https://kaizhao.net/time-series. | [
"['Kai Zhao' 'Zuojie He' 'Alex Hung' 'Dan Zeng']"
] |
null | null | 2405.16460 | null | null | http://arxiv.org/pdf/2405.16460v1 | 2024-05-26T07:08:13Z | 2024-05-26T07:08:13Z | Probabilistic Contrastive Learning with Explicit Concentration on the
Hypersphere | Self-supervised contrastive learning has predominantly adopted deterministic methods, which are not suited for environments characterized by uncertainty and noise. This paper introduces a new perspective on incorporating uncertainty into contrastive learning by embedding representations within a spherical space, inspired by the von Mises-Fisher distribution (vMF). We introduce an unnormalized form of vMF and leverage the concentration parameter, $\kappa$, as a direct, interpretable measure to quantify uncertainty explicitly. This approach not only provides a probabilistic interpretation of the embedding space but also offers a method to calibrate model confidence against varying levels of data corruption and characteristics. Our empirical results demonstrate that the estimated concentration parameter correlates strongly with the degree of unforeseen data corruption encountered at test time, enables failure analysis, and enhances existing out-of-distribution detection methods. | [
"['Hongwei Bran Li' 'Cheng Ouyang' 'Tamaz Amiranashvili' 'Matthew S. Rosen'\n 'Bjoern Menze' 'Juan Eugenio Iglesias']"
] |
null | null | 2405.16472 | null | null | http://arxiv.org/pdf/2405.16472v1 | 2024-05-26T07:54:53Z | 2024-05-26T07:54:53Z | Multi-Level Additive Modeling for Structured Non-IID Federated Learning | The primary challenge in Federated Learning (FL) is to model non-IID distributions across clients, whose fine-grained structure is important to improve knowledge sharing. For example, some knowledge is globally shared across all clients, some is only transferable within a subgroup of clients, and some are client-specific. To capture and exploit this structure, we train models organized in a multi-level structure, called ``Multi-level Additive Models (MAM)'', for better knowledge-sharing across heterogeneous clients and their personalization. In federated MAM (FeMAM), each client is assigned to at most one model per level and its personalized prediction sums up the outputs of models assigned to it across all levels. For the top level, FeMAM trains one global model shared by all clients as FedAvg. For every mid-level, it learns multiple models each assigned to a subgroup of clients, as clustered FL. Every bottom-level model is trained for one client only. In the training objective, each model aims to minimize the residual of the additive predictions by the other models assigned to each client. To approximate the arbitrary structure of non-IID across clients, FeMAM introduces more flexibility and adaptivity to FL by incrementally adding new models to the prediction of each client and reassigning another if necessary, automatically optimizing the knowledge-sharing structure. Extensive experiments show that FeMAM surpasses existing clustered FL and personalized FL methods in various non-IID settings. Our code is available at https://github.com/shutong043/FeMAM. | [
"['Shutong Chen' 'Tianyi Zhou' 'Guodong Long' 'Jie Ma' 'Jing Jiang'\n 'Chengqi Zhang']"
] |
null | null | 2405.16474 | null | null | http://arxiv.org/pdf/2405.16474v1 | 2024-05-26T07:58:07Z | 2024-05-26T07:58:07Z | Inaccurate Label Distribution Learning with Dependency Noise | In this paper, we introduce the Dependent Noise-based Inaccurate Label Distribution Learning (DN-ILDL) framework to tackle the challenges posed by noise in label distribution learning, which arise from dependencies on instances and labels. We start by modeling the inaccurate label distribution matrix as a combination of the true label distribution and a noise matrix influenced by specific instances and labels. To address this, we develop a linear mapping from instances to their true label distributions, incorporating label correlations, and decompose the noise matrix using feature and label representations, applying group sparsity constraints to accurately capture the noise. Furthermore, we employ graph regularization to align the topological structures of the input and output spaces, ensuring accurate reconstruction of the true label distribution matrix. Utilizing the Alternating Direction Method of Multipliers (ADMM) for efficient optimization, we validate our method's capability to recover true labels accurately and establish a generalization error bound. Extensive experiments demonstrate that DN-ILDL effectively addresses the ILDL problem and outperforms existing LDL methods. | [
"['Zhiqiang Kou' 'Jing Wang' 'Yuheng Jia' 'Xin Geng']"
] |
null | null | 2405.16475 | null | null | http://arxiv.org/pdf/2405.16475v2 | 2024-06-04T11:53:44Z | 2024-05-26T07:58:51Z | Looks Too Good To Be True: An Information-Theoretic Analysis of
Hallucinations in Generative Restoration Models | The pursuit of high perceptual quality in image restoration has driven the development of revolutionary generative models, capable of producing results often visually indistinguishable from real data. However, as their perceptual quality continues to improve, these models also exhibit a growing tendency to generate hallucinations - realistic-looking details that do not exist in the ground truth images. The presence of hallucinations introduces uncertainty regarding the reliability of the models' predictions, raising major concerns about their practical application. In this paper, we employ information-theory tools to investigate this phenomenon, revealing a fundamental tradeoff between uncertainty and perception. We rigorously analyze the relationship between these two factors, proving that the global minimal uncertainty in generative models grows in tandem with perception. In particular, we define the inherent uncertainty of the restoration problem and show that attaining perfect perceptual quality entails at least twice this uncertainty. Additionally, we establish a relation between mean squared-error distortion, uncertainty and perception, through which we prove the aforementioned uncertainty-perception tradeoff induces the well-known perception-distortion tradeoff. This work uncovers fundamental limitations of generative models in achieving both high perceptual quality and reliable predictions for image restoration. We demonstrate our theoretical findings through an analysis of single image super-resolution algorithms. Our work aims to raise awareness among practitioners about this inherent tradeoff, empowering them to make informed decisions and potentially prioritize safety over perceptual performance. | [
"['Regev Cohen' 'Idan Kligvasser' 'Ehud Rivlin' 'Daniel Freedman']"
] |
null | null | 2405.16476 | null | null | http://arxiv.org/pdf/2405.16476v1 | 2024-05-26T08:02:02Z | 2024-05-26T08:02:02Z | KiNETGAN: Enabling Distributed Network Intrusion Detection through
Knowledge-Infused Synthetic Data Generation | In the realm of IoT/CPS systems connected over mobile networks, traditional intrusion detection methods analyze network traffic across multiple devices using anomaly detection techniques to flag potential security threats. However, these methods face significant privacy challenges, particularly with deep packet inspection and network communication analysis. This type of monitoring is highly intrusive, as it involves examining the content of data packets, which can include personal and sensitive information. Such data scrutiny is often governed by stringent laws and regulations, especially in environments like smart homes where data privacy is paramount. Synthetic data offers a promising solution by mimicking real network behavior without revealing sensitive details. Generative models such as Generative Adversarial Networks (GANs) can produce synthetic data, but they often struggle to generate realistic data in specialized domains like network activity. This limitation stems from insufficient training data, which impedes the model's ability to grasp the domain's rules and constraints adequately. Moreover, the scarcity of training data exacerbates the problem of class imbalance in intrusion detection methods. To address these challenges, we propose a Privacy-Driven framework that utilizes a knowledge-infused Generative Adversarial Network for generating synthetic network activity data (KiNETGAN). This approach enhances the resilience of distributed intrusion detection while addressing privacy concerns. Our Knowledge Guided GAN produces realistic representations of network activity, validated through rigorous experimentation. We demonstrate that KiNETGAN maintains minimal accuracy loss in downstream tasks, effectively balancing data privacy and utility. | [
"['Anantaa Kotal' 'Brandon Luton' 'Anupam Joshi']"
] |
null | null | 2405.16489 | null | null | http://arxiv.org/pdf/2405.16489v1 | 2024-05-26T08:55:22Z | 2024-05-26T08:55:22Z | Causal-Aware Graph Neural Architecture Search under Distribution Shifts | Graph NAS has emerged as a promising approach for autonomously designing GNN architectures by leveraging the correlations between graphs and architectures. Existing methods fail to generalize under distribution shifts that are ubiquitous in real-world graph scenarios, mainly because the graph-architecture correlations they exploit might be spurious and varying across distributions. We propose to handle the distribution shifts in the graph architecture search process by discovering and exploiting the causal relationship between graphs and architectures to search for the optimal architectures that can generalize under distribution shifts. The problem remains unexplored with the following challenges: how to discover the causal graph-architecture relationship that has stable predictive abilities across distributions, and how to handle distribution shifts with the discovered causal graph-architecture relationship to search the generalized graph architectures. To address these challenges, we propose Causal-aware Graph Neural Architecture Search (CARNAS), which is able to capture the causal graph-architecture relationship during the architecture search process and discover the generalized graph architecture under distribution shifts. Specifically, we propose Disentangled Causal Subgraph Identification to capture the causal subgraphs that have stable prediction abilities across distributions. Then, we propose Graph Embedding Intervention to intervene on causal subgraphs within the latent space, ensuring that these subgraphs encapsulate essential features for prediction while excluding non-causal elements. Additionally, we propose Invariant Architecture Customization to reinforce the causal invariant nature of the causal subgraphs, which are utilized to tailor generalized graph architectures. Extensive experiments demonstrate that CARNAS achieves advanced out-of-distribution generalization ability. | [
"['Peiwen Li' 'Xin Wang' 'Zeyang Zhang' 'Yijian Qin' 'Ziwei Zhang'\n 'Jialong Wang' 'Yang Li' 'Wenwu Zhu']"
] |
null | null | 2405.16496 | null | null | http://arxiv.org/pdf/2405.16496v1 | 2024-05-26T09:16:34Z | 2024-05-26T09:16:34Z | Exploring a Multimodal Fusion-based Deep Learning Network for Detecting
Facial Palsy | Algorithmic detection of facial palsy offers the potential to improve current practices, which usually involve labor-intensive and subjective assessment by clinicians. In this paper, we present a multimodal fusion-based deep learning model that utilizes unstructured data (i.e. an image frame with facial line segments) and structured data (i.e. features of facial expressions) to detect facial palsy. We then contribute a study analyzing the effect of different data modalities and the benefits of a multimodal fusion-based approach using videos of 21 facial palsy patients. Our experimental results show that among various data modalities (i.e. unstructured data - RGB images and images of facial line segments; structured data - coordinates of facial landmarks and features of facial expressions), the feed-forward neural network using features of facial expressions achieved the highest precision of 76.22, while the ResNet-based model using images of facial line segments achieved the highest recall of 83.47. When we leveraged both images of facial line segments and features of facial expressions, our multimodal fusion-based deep learning model slightly improved the precision score to 77.05 at the expense of a decrease in the recall score. | [
"['Nicole Heng Yim Oo' 'Min Hun Lee' 'Jeong Hoon Lim']"
] |
null | null | 2405.16498 | null | null | http://arxiv.org/pdf/2405.16498v1 | 2024-05-26T09:20:47Z | 2024-05-26T09:20:47Z | On Sequential Loss Approximation for Continual Learning | For continual learning, we introduce Autodiff Quadratic Consolidation (AQC), which approximates the previous loss function with a quadratic function, and Neural Consolidation (NC), which approximates the previous loss function with a neural network. Although they are not scalable to large neural networks, they can be used with a fixed pre-trained feature extractor. We empirically study these methods in class-incremental learning, for which regularization-based methods produce unsatisfactory results, unless combined with replay. We find that for small datasets, quadratic approximation of the previous loss function leads to poor results, even with full Hessian computation, and NC could significantly improve the predictive performance, while for large datasets, when used with a fixed pre-trained feature extractor, AQC provides superior predictive performance. We also find that using tanh-output features can improve the predictive performance of AQC. In particular, in class-incremental Split MNIST, when a Convolutional Neural Network (CNN) with tanh-output features is pre-trained on EMNIST Letters and used as a fixed pre-trained feature extractor, AQC can achieve predictive performance comparable to joint training. | [
"['Menghao Waiyan William Zhu' 'Ercan Engin Kuruoğlu']"
] |
null | null | 2405.16503 | null | null | http://arxiv.org/pdf/2405.16503v1 | 2024-05-26T09:47:17Z | 2024-05-26T09:47:17Z | Integrating GNN and Neural ODEs for Estimating Two-Body Interactions in
Mixed-Species Collective Motion | Analyzing the motion of multiple biological agents, be it cells or individual animals, is pivotal for the understanding of complex collective behaviors. With the advent of advanced microscopy, detailed images of complex tissue formations involving multiple cell types have become more accessible in recent years. However, deciphering the underlying rules that govern cell movements is far from trivial. Here, we present a novel deep learning framework to estimate the underlying equations of motion from observed trajectories, a pivotal step in decoding such complex dynamics. Our framework integrates graph neural networks with neural differential equations, enabling effective prediction of two-body interactions based on the states of the interacting entities. We demonstrate the efficacy of our approach through two numerical experiments. First, we used simulated data from a toy model to tune the hyperparameters. Based on the obtained hyperparameters, we then applied this approach to a more complex model that describes interacting cells of cellular slime molds. Our results show that the proposed method can accurately estimate the function of two-body interactions, thereby precisely replicating both individual and collective behaviors within these systems. | [
"['Masahito Uwamichi' 'Simon K. Schnyder' 'Tetsuya J. Kobayashi'\n 'Satoshi Sawai']"
] |
null | null | 2405.16504 | null | null | http://arxiv.org/pdf/2405.16504v1 | 2024-05-26T09:57:45Z | 2024-05-26T09:57:45Z | A Unified Implicit Attention Formulation for Gated-Linear Recurrent
Sequence Models | Recent advances in efficient sequence modeling have led to attention-free layers, such as Mamba, RWKV, and various gated RNNs, all featuring sub-quadratic complexity in sequence length and excellent scaling properties, enabling the construction of a new type of foundation models. In this paper, we present a unified view of these models, formulating such layers as implicit causal self-attention layers. The formulation includes most of their sub-components and is not limited to a specific part of the architecture. The framework compares the underlying mechanisms on similar grounds for different layers and provides a direct means for applying explainability methods. Our experiments show that our attention matrices and attribution method outperform an alternative and a more limited formulation that was recently proposed for Mamba. For the other architectures for which our method is the first to provide such a view, our method is effective and competitive in the relevant metrics compared to the results obtained by state-of-the-art transformer explainability methods. Our code is publicly available. | [
"['Itamar Zimerman' 'Ameen Ali' 'Lior Wolf']"
] |
null | null | 2405.16506 | null | null | http://arxiv.org/pdf/2405.16506v1 | 2024-05-26T10:11:40Z | 2024-05-26T10:11:40Z | GRAG: Graph Retrieval-Augmented Generation | While Retrieval-Augmented Generation (RAG) enhances the accuracy and relevance of responses by generative language models, it falls short in graph-based contexts where both textual and topological information are important. Naive RAG approaches inherently neglect the structural intricacies of textual graphs, resulting in a critical gap in the generation process. To address this challenge, we introduce $\textbf{Graph Retrieval-Augmented Generation (GRAG)}$, which significantly enhances both the retrieval and generation processes by emphasizing the importance of subgraph structures. Unlike RAG approaches that focus solely on text-based entity retrieval, GRAG maintains an acute awareness of graph topology, which is crucial for generating contextually and factually coherent responses. Our GRAG approach consists of four main stages: indexing of $k$-hop ego-graphs, graph retrieval, soft pruning to mitigate the impact of irrelevant entities, and generation with pruned textual subgraphs. GRAG's core workflow, retrieving textual subgraphs followed by soft pruning, efficiently identifies relevant subgraph structures while avoiding the computational infeasibility typical of exhaustive subgraph searches, which are NP-hard. Moreover, we propose a novel prompting strategy that achieves lossless conversion from textual subgraphs to hierarchical text descriptions. Extensive experiments on graph multi-hop reasoning benchmarks demonstrate that in scenarios requiring multi-hop reasoning on textual graphs, our GRAG approach significantly outperforms current state-of-the-art RAG methods while effectively mitigating hallucinations. | [
"['Yuntong Hu' 'Zhihan Lei' 'Zheng Zhang' 'Bo Pan' 'Chen Ling' 'Liang Zhao']"
] |
null | null | 2405.16507 | null | null | http://arxiv.org/pdf/2405.16507v2 | 2024-05-28T08:31:09Z | 2024-05-26T10:15:20Z | Causal Concept Embedding Models: Beyond Causal Opacity in Deep Learning | Causal opacity denotes the difficulty in understanding the "hidden" causal structure underlying a deep neural network's (DNN) reasoning. This leads to the inability to rely on and verify state-of-the-art DNN-based systems especially in high-stakes scenarios. For this reason, causal opacity represents a key open challenge at the intersection of deep learning, interpretability, and causality. This work addresses this gap by introducing Causal Concept Embedding Models (Causal CEMs), a class of interpretable models whose decision-making process is causally transparent by design. The results of our experiments show that Causal CEMs can: (i) match the generalization performance of causally-opaque models, (ii) support the analysis of interventional and counterfactual scenarios, thereby improving the model's causal interpretability and supporting the effective verification of its reliability and fairness, and (iii) enable human-in-the-loop corrections to mispredicted intermediate reasoning steps, boosting not just downstream accuracy after corrections but also accuracy of the explanation provided for a specific instance. | [
"['Gabriele Dominici' 'Pietro Barbiero' 'Mateo Espinosa Zarlenga'\n 'Alberto Termine' 'Martin Gjoreski' 'Giuseppe Marra' 'Marc Langheinrich']"
] |
null | null | 2405.16508 | null | null | http://arxiv.org/pdf/2405.16508v1 | 2024-05-26T10:19:04Z | 2024-05-26T10:19:04Z | AnyCBMs: How to Turn Any Black Box into a Concept Bottleneck Model | Interpretable deep learning aims at developing neural architectures whose decision-making processes could be understood by their users. Among these techniques, Concept Bottleneck Models enhance the interpretability of neural networks by integrating a layer of human-understandable concepts. These models, however, necessitate training a new model from the beginning, consuming significant resources and failing to utilize already trained large models. To address this issue, we introduce "AnyCBM", a method that transforms any existing trained model into a Concept Bottleneck Model with minimal impact on computational resources. We provide both theoretical and experimental insights showing the effectiveness of AnyCBMs in terms of classification performance and effectiveness of concept-based interventions on downstream tasks. | [
"['Gabriele Dominici' 'Pietro Barbiero' 'Francesco Giannini'\n 'Martin Gjoreski' 'Marc Langhenirich']"
] |
null | null | 2405.16510 | null | null | http://arxiv.org/pdf/2405.16510v3 | 2024-05-30T12:40:06Z | 2024-05-26T10:33:17Z | Meta-Task Planning for Language Agents | The rapid advancement of neural language models has sparked a new surge of intelligent agent research. Unlike traditional agents, large language model-based agents (LLM agents) have emerged as a promising paradigm for achieving artificial general intelligence (AGI) due to their superior reasoning and generalization capabilities. Effective planning is crucial for the success of LLM agents in real-world tasks, making it a highly pursued topic in the community. Current planning methods typically translate tasks into executable action sequences. However, determining a feasible or optimal sequence for complex tasks at fine granularity, which often requires compositing long chains of heterogeneous actions, remains challenging. This paper introduces Meta-Task Planning (MTP), a zero-shot methodology for collaborative LLM-based multi-agent systems that simplifies complex task planning by decomposing it into a hierarchy of subordinate tasks, or meta-tasks. Each meta-task is then mapped into executable actions. MTP was assessed on two rigorous benchmarks, TravelPlanner and API-Bank. Notably, MTP achieved an average $\sim 40\%$ success rate on TravelPlanner, significantly higher than the state-of-the-art (SOTA) baseline ($2.92\%$), and outperforming $LLM_{api}$-4 with ReAct on API-Bank by $\sim 14\%$, showing the immense potential of integrating LLM with multi-agent systems. | [
"['Cong Zhang' 'Derrick Goh Xin Deik' 'Dexun Li' 'Hao Zhang' 'Yong Liu']"
] |
null | null | 2405.16511 | null | null | http://arxiv.org/pdf/2405.16511v1 | 2024-05-26T10:43:16Z | 2024-05-26T10:43:16Z | SE3Set: Harnessing equivariant hypergraph neural networks for molecular
representation learning | In this paper, we develop SE3Set, an SE(3) equivariant hypergraph neural network architecture tailored for advanced molecular representation learning. Hypergraphs are not merely an extension of traditional graphs; they are pivotal for modeling high-order relationships, a capability that conventional equivariant graph-based methods lack due to their inherent limitations in representing intricate many-body interactions. To achieve this, we first construct hypergraphs by proposing a new fragmentation method that considers both the chemical and three-dimensional spatial information of the molecular system. We then design SE3Set, which incorporates equivariance into the hypergraph neural network. This ensures that the learned molecular representations are invariant to spatial transformations, thereby providing robustness essential for accurate prediction of molecular properties. SE3Set has shown performance on par with state-of-the-art (SOTA) models for small molecule datasets like QM9 and MD17. It excels on the MD22 dataset, achieving a notable improvement of approximately 20% in accuracy across all molecules, which highlights the prevalence of complex many-body interactions in larger molecules. This exceptional performance of SE3Set across diverse molecular structures underscores its transformative potential in computational chemistry, offering a route to more accurate and physically nuanced modeling. | [
"['Hongfei Wu' 'Lijun Wu' 'Guoqing Liu' 'Zhirong Liu' 'Bin Shao' 'Zun Wang']"
] |
null | null | 2405.16519 | null | null | http://arxiv.org/pdf/2405.16519v1 | 2024-05-26T11:04:41Z | 2024-05-26T11:04:41Z | Injective Sliced-Wasserstein embedding for weighted sets and point
clouds | We present the $\textit{Sliced Wasserstein Embedding}$, a novel method to embed multisets and distributions over $\mathbb{R}^d$ into Euclidean space. Our embedding is injective and approximately preserves the Sliced Wasserstein distance. Moreover, when restricted to multisets, it is bi-Lipschitz. We also prove that it is $\textit{impossible}$ to embed distributions over $\mathbb{R}^d$ into a Euclidean space in a bi-Lipschitz manner, even under the assumption that their support is bounded and finite. We demonstrate empirically that our embedding offers a practical advantage in learning tasks over existing methods for handling multisets. | [
"['Tal Amir' 'Nadav Dym']"
] |
null | null | 2405.16522 | null | null | http://arxiv.org/pdf/2405.16522v3 | 2024-07-01T03:21:38Z | 2024-05-26T11:17:49Z | Multi-State TD Target for Model-Free Reinforcement Learning | Temporal difference (TD) learning is a fundamental technique in reinforcement learning that updates value estimates for states or state-action pairs using a TD target. This target represents an improved estimate of the true value by incorporating both immediate rewards and the estimated value of subsequent states. Traditionally, TD learning relies on the value of a single subsequent state. We propose an enhanced multi-state TD (MSTD) target that utilizes the estimated values of multiple subsequent states. Building on this new MSTD concept, we develop complete actor-critic algorithms that include management of replay buffers in two modes, and integrate with deep deterministic policy gradient (DDPG) and soft actor-critic (SAC). Experimental results demonstrate that algorithms employing the MSTD target significantly improve learning performance compared to traditional methods. The code is provided on GitHub. | [
"['Wuhao Wang' 'Zhiyong Chen' 'Lepeng Zhang']"
] |
null | null | 2405.16528 | null | null | http://arxiv.org/pdf/2405.16528v1 | 2024-05-26T11:29:57Z | 2024-05-26T11:29:57Z | LoQT: Low Rank Adapters for Quantized Training | Training of large neural networks requires significant computational resources. Despite advances using low-rank adapters and quantization, pretraining of models such as LLMs on consumer hardware has not been possible without model sharding, offloading during training, or per-layer gradient updates. To address these limitations, we propose LoQT, a method for efficiently training quantized models. LoQT uses gradient-based tensor factorization to initialize low-rank trainable weight matrices that are periodically merged into quantized full-rank weight matrices. Our approach is suitable for both pretraining and fine-tuning of models, which we demonstrate experimentally for language modeling and downstream task adaptation. We find that LoQT enables efficient training of models up to 7B parameters on a consumer-grade 24GB GPU. We also demonstrate the feasibility of training a 13B parameter model using per-layer gradient updates on the same hardware. | [
"['Sebastian Loeschcke' 'Mads Toftrup' 'Michael J. Kastoryano'\n 'Serge Belongie' 'Vésteinn Snæbjarnarson']"
] |
null | null | 2405.16541 | null | null | http://arxiv.org/pdf/2405.16541v1 | 2024-05-26T12:25:09Z | 2024-05-26T12:25:09Z | Variance-Reducing Couplings for Random Features: Perspectives from
Optimal Transport | Random features (RFs) are a popular technique to scale up kernel methods in machine learning, replacing exact kernel evaluations with stochastic Monte Carlo estimates. They underpin models as diverse as efficient transformers (by approximating attention) to sparse spectrum Gaussian processes (by approximating the covariance function). Efficiency can be further improved by speeding up the convergence of these estimates: a variance reduction problem. We tackle this through the unifying framework of optimal transport, using theoretical insights and numerical algorithms to develop novel, high-performing RF couplings for kernels defined on Euclidean and discrete input spaces. They enjoy concrete theoretical performance guarantees and sometimes provide strong empirical downstream gains, including for scalable approximate inference on graphs. We reach surprising conclusions about the benefits and limitations of variance reduction as a paradigm. | [
"['Isaac Reid' 'Stratis Markou' 'Krzysztof Choromanski' 'Richard E. Turner'\n 'Adrian Weller']"
] |
null | null | 2405.16557 | null | null | http://arxiv.org/pdf/2405.16557v1 | 2024-05-26T13:06:45Z | 2024-05-26T13:06:45Z | Scalable Numerical Embeddings for Multivariate Time Series: Enhancing
Healthcare Data Representation Learning | Multivariate time series (MTS) data, when sampled irregularly and asynchronously, often present extensive missing values. Conventional methodologies for MTS analysis tend to rely on temporal embeddings based on timestamps that necessitate subsequent imputations, yet these imputed values frequently deviate substantially from their actual counterparts, thereby compromising prediction accuracy. Furthermore, these methods typically fail to provide robust initial embeddings for values infrequently observed or even absent within the training set, posing significant challenges to model generalizability. In response to these challenges, we propose SCAlable Numerical Embedding (SCANE), a novel framework that treats each feature value as an independent token, effectively bypassing the need for imputation. SCANE regularizes the traits of distinct feature embeddings and enhances representational learning through a scalable embedding mechanism. Coupling SCANE with the Transformer Encoder architecture, we develop the Scalable nUMerical eMbeddIng Transformer (SUMMIT), which is engineered to deliver precise predictive outputs for MTS characterized by prevalent missing entries. Our experimental validation, conducted across three disparate electronic health record (EHR) datasets marked by elevated missing value frequencies, confirms the superior performance of SUMMIT over contemporary state-of-the-art approaches addressing similar challenges. These results substantiate the efficacy of SCANE and SUMMIT, underscoring their potential applicability across a broad spectrum of MTS data analytical tasks. | [
"['Chun-Kai Huang' 'Yi-Hsien Hsieh' 'Ta-Jung Chien' 'Li-Cheng Chien'\n 'Shao-Hua Sun' 'Tung-Hung Su' 'Jia-Horng Kao' 'Che Lin']"
] |
null | null | 2405.16560 | null | null | http://arxiv.org/pdf/2405.16560v1 | 2024-05-26T13:11:55Z | 2024-05-26T13:11:55Z | Task Groupings Regularization: Data-Free Meta-Learning with
Heterogeneous Pre-trained Models | Data-Free Meta-Learning (DFML) aims to derive knowledge from a collection of pre-trained models without accessing their original data, enabling the rapid adaptation to new unseen tasks. Current methods often overlook the heterogeneity among pre-trained models, which leads to performance degradation due to task conflicts. In this paper, we empirically and theoretically identify and analyze the model heterogeneity in DFML. We find that model heterogeneity introduces a heterogeneity-homogeneity trade-off, where homogeneous models reduce task conflicts but also increase the overfitting risk. Balancing this trade-off is crucial for learning shared representations across tasks. Based on our findings, we propose Task Groupings Regularization, a novel approach that benefits from model heterogeneity by grouping and aligning conflicting tasks. Specifically, we embed pre-trained models into a task space to compute dissimilarity, and group heterogeneous models together based on this measure. Then, we introduce implicit gradient regularization within each group to mitigate potential conflicts. By encouraging a gradient direction suitable for all tasks, the meta-model captures shared representations that generalize across tasks. Comprehensive experiments showcase the superiority of our approach in multiple benchmarks, effectively tackling the model heterogeneity in challenging multi-domain and multi-architecture scenarios. | [
"['Yongxian Wei' 'Zixuan Hu' 'Li Shen' 'Zhenyi Wang' 'Yu Li' 'Chun Yuan'\n 'Dacheng Tao']"
] |
null | null | 2405.16563 | null | null | http://arxiv.org/pdf/2405.16563v1 | 2024-05-26T13:19:32Z | 2024-05-26T13:19:32Z | Reality Only Happens Once: Single-Path Generalization Bounds for
Transformers | One of the inherent challenges in deploying transformers on time series is that \emph{reality only happens once}; namely, one typically only has access to a single trajectory of the data-generating process comprised of non-i.i.d. observations. We derive non-asymptotic statistical guarantees in this setting through bounds on the \textit{generalization} of a transformer network at a future-time $t$, given that it has been trained using $N \le t$ observations from a single perturbed trajectory of a Markov process. Under the assumption that the Markov process satisfies a log-Sobolev inequality, we obtain a generalization bound which effectively converges at the rate of ${O}(1/\sqrt{N})$. Our bound depends explicitly on the activation function ($\operatorname{Swish}$, $\operatorname{GeLU}$, or $\tanh$ are considered), the number of self-attention heads, depth, width, and norm-bounds defining the transformer architecture. Our bound consists of three components: (I) The first quantifies the gap between the stationary distribution of the data-generating Markov process and its distribution at time $t$; this term converges exponentially to $0$. (II) The next term encodes the complexity of the transformer model and, given enough time, eventually converges to $0$ at the rate ${O}(\log(N)^r/\sqrt{N})$ for any $r>0$. (III) The third term guarantees that the bound holds with probability at least $1-\delta$, and converges at a rate of ${O}(\sqrt{\log(1/\delta)}/\sqrt{N})$. | [
"['Yannick Limmer' 'Anastasis Kratsios' 'Xuwei Yang' 'Raeid Saqur'\n 'Blanka Horvath']"
] |
null | null | 2405.16564 | null | null | http://arxiv.org/pdf/2405.16564v1 | 2024-05-26T13:27:27Z | 2024-05-26T13:27:27Z | Contextual Linear Optimization with Bandit Feedback | Contextual linear optimization (CLO) uses predictive observations to reduce uncertainty in random cost coefficients and thereby improve average-cost performance. An example is a stochastic shortest path with random edge costs (e.g., traffic) and predictive features (e.g., lagged traffic, weather). Existing work on CLO assumes the data has fully observed cost coefficient vectors, but in many applications, we can only see the realized cost of a historical decision, that is, just one projection of the random cost coefficient vector, to which we refer as bandit feedback. We study a class of algorithms for CLO with bandit feedback, which we term induced empirical risk minimization (IERM), where we fit a predictive model to directly optimize the downstream performance of the policy it induces. We show a fast-rate regret bound for IERM that allows for misspecified model classes and flexible choices of the optimization estimate, and we develop computationally tractable surrogate losses. A byproduct of our theory of independent interest is a fast-rate regret bound for IERM with full feedback and a misspecified policy class. We compare the performance of different modeling choices numerically using a stochastic shortest path example and provide practical insights from the empirical results. | [
"['Yichun Hu' 'Nathan Kallus' 'Xiaojie Mao' 'Yanchen Wu']"
] |
null | null | 2405.16577 | null | null | http://arxiv.org/pdf/2405.16577v1 | 2024-05-26T14:09:43Z | 2024-05-26T14:09:43Z | Reflected Flow Matching | Continuous normalizing flows (CNFs) learn an ordinary differential equation to transform prior samples into data. Flow matching (FM) has recently emerged as a simulation-free approach for training CNFs by regressing a velocity model towards the conditional velocity field. However, on constrained domains, the learned velocity model may lead to undesirable flows that result in highly unnatural samples, e.g., oversaturated images, due to both flow matching error and simulation error. To address this, we add a boundary constraint term to CNFs, which leads to reflected CNFs that keep trajectories within the constrained domains. We propose reflected flow matching (RFM) to train the velocity model in reflected CNFs by matching the conditional velocity fields in a simulation-free manner, similar to the vanilla FM. Moreover, the analytical form of conditional velocity fields in RFM avoids potentially biased approximations, making it superior to existing score-based generative models on constrained domains. We demonstrate that RFM achieves comparable or better results on standard image benchmarks and produces high-quality class-conditioned samples under high guidance weight. | [
"['Tianyu Xie' 'Yu Zhu' 'Longlin Yu' 'Tong Yang' 'Ziheng Cheng'\n 'Shiyue Zhang' 'Xiangyu Zhang' 'Cheng Zhang']"
] |
null | null | 2405.16580 | null | null | http://arxiv.org/pdf/2405.16580v1 | 2024-05-26T14:14:35Z | 2024-05-26T14:14:35Z | A Study on Unsupervised Anomaly Detection and Defect Localization using
Generative Model in Ultrasonic Non-Destructive Testing | In recent years, the deterioration of artificial materials used in structures has become a serious social issue, increasing the importance of inspections. Non-destructive testing is gaining increased demand due to its capability to inspect for defects and deterioration in structures while preserving their functionality. Among these, Laser Ultrasonic Visualization Testing (LUVT) stands out because it allows the visualization of ultrasonic propagation. This makes it visually straightforward to detect defects, thereby enhancing inspection efficiency. With the increasing number of deteriorating structures, challenges such as a shortage of inspectors and increased workload in non-destructive testing have become more apparent. Efforts to address these challenges include exploring automated inspection using machine learning. However, the lack of anomalous data with defects poses a barrier to improving the accuracy of automated inspection through machine learning. Therefore, in this study, we propose a method for automated LUVT inspection using an anomaly detection approach with a diffusion model that can be trained solely on negative examples (defect-free data). We experimentally confirmed that our proposed method improves defect detection and localization compared to general object detection algorithms used previously. | [
"['Yusaku Ando' 'Miya Nakajima' 'Takahiro Saitoh' 'Tsuyoshi Kato']"
] |
null | null | 2405.16581 | null | null | http://arxiv.org/pdf/2405.16581v1 | 2024-05-26T14:18:38Z | 2024-05-26T14:18:38Z | On Bits and Bandits: Quantifying the Regret-Information Trade-off | In interactive decision-making tasks, information can be acquired by direct interactions, through receiving indirect feedback, and from external knowledgeable sources. We examine the trade-off between the information an agent accumulates and the regret it suffers. We show that information from external sources, measured in bits, can be traded off for regret, measured in reward. We invoke information-theoretic methods for obtaining regret lower bounds, that also allow us to easily re-derive several known lower bounds. We then generalize a variety of interactive decision-making tasks with external information to a new setting. Using this setting, we introduce the first Bayesian regret lower bounds that depend on the information an agent accumulates. These lower bounds also prove the near-optimality of Thompson sampling for Bayesian problems. Finally, we demonstrate the utility of these bounds in improving the performance of a question-answering task with large language models, allowing us to obtain valuable insights. | [
"['Itai Shufaro' 'Nadav Merlis' 'Nir Weinberger' 'Shie Mannor']"
] |
null | null | 2405.16585 | null | null | http://arxiv.org/pdf/2405.16585v1 | 2024-05-26T14:29:10Z | 2024-05-26T14:29:10Z | Fair Federated Learning under Domain Skew with Local Consistency and
Domain Diversity | Federated learning (FL) has emerged as a new paradigm for privacy-preserving collaborative training. Under domain skew, the current FL approaches are biased and face two fairness problems. 1) Parameter Update Conflict: data disparity among clients leads to varying parameter importance and inconsistent update directions. These two disparities cause important parameters to potentially be overwhelmed by unimportant ones of dominant updates. It consequently results in significant performance decreases for lower-performing clients. 2) Model Aggregation Bias: existing FL approaches introduce unfair weight allocation and neglect domain diversity. It leads to biased model convergence objective and distinct performance among domains. We discover a pronounced directional update consistency in Federated Learning and propose a novel framework to tackle above issues. First, leveraging the discovered characteristic, we selectively discard unimportant parameter updates to prevent updates from clients with lower performance overwhelmed by unimportant parameters, resulting in fairer generalization performance. Second, we propose a fair aggregation objective to prevent global model bias towards some domains, ensuring that the global model continuously aligns with an unbiased model. The proposed method is generic and can be combined with other existing FL methods to enhance fairness. Comprehensive experiments on Digits and Office-Caltech demonstrate the high fairness and performance of our method. | [
"['Yuhang Chen' 'Wenke Huang' 'Mang Ye']"
] |
null | null | 2405.16587 | null | null | http://arxiv.org/pdf/2405.16587v1 | 2024-05-26T14:38:24Z | 2024-05-26T14:38:24Z | Cost-Effective Online Multi-LLM Selection with Versatile Reward Models | With the rapid advancement of large language models (LLMs), the diversity of multi-LLM tasks and the variability in their pricing structures have become increasingly important, as costs can vary greatly between different LLMs. To tackle these challenges, we introduce the \textit{C2MAB-V}, a \underline{C}ost-effective \underline{C}ombinatorial \underline{M}ulti-armed \underline{B}andit with \underline{V}ersatile reward models for optimal LLM selection and usage. This online model differs from traditional static approaches or those reliant on a single LLM without cost consideration. With multiple LLMs deployed on a scheduling cloud and a local server dedicated to handling user queries, \textit{C2MAB-V} facilitates the selection of multiple LLMs over a combinatorial search space, specifically tailored for various collaborative task types with different reward models. Based on our designed online feedback mechanism and confidence bound technique, \textit{C2MAB-V} can effectively address the multi-LLM selection challenge by managing the exploration-exploitation trade-off across different models, while also balancing cost and reward for diverse tasks. The NP-hard integer linear programming problem for selecting multiple LLMs with trade-off dilemmas is addressed by: i) decomposing the integer problem into a relaxed form by the local server, ii) utilizing a discretization rounding scheme that provides optimal LLM combinations by the scheduling cloud, and iii) continual online updates based on feedback. Theoretically, we prove that \textit{C2MAB-V} offers strict guarantees over versatile reward models, matching state-of-the-art results for regret and violations in some degenerate cases. 
Empirically, we show that \textit{C2MAB-V} effectively balances performance and cost-efficiency with nine LLMs for three application scenarios. | [
"['Xiangxiang Dai' 'Jin Li' 'Xutong Liu' 'Anqi Yu' 'John C. S. Lui']"
] |
null | null | 2405.16594 | null | null | http://arxiv.org/pdf/2405.16594v1 | 2024-05-26T15:07:16Z | 2024-05-26T15:07:16Z | Training-Conditional Coverage Bounds under Covariate Shift | Training-conditional coverage guarantees in conformal prediction concern the concentration of the error distribution, conditional on the training data, below some nominal level. The conformal prediction methodology has recently been generalized to the covariate shift setting, namely, the covariate distribution changes between the training and test data. In this paper, we study the training-conditional coverage properties of a range of conformal prediction methods under covariate shift via a weighted version of the Dvoretzky-Kiefer-Wolfowitz (DKW) inequality tailored for distribution change. The result for the split conformal method is almost assumption-free, while the results for the full conformal and jackknife+ methods rely on strong assumptions including the uniform stability of the training algorithm. | [
"['Mehrdad Pournaderi' 'Yu Xiang']"
] |
null | null | 2405.16598 | null | null | http://arxiv.org/pdf/2405.16598v1 | 2024-05-26T15:18:22Z | 2024-05-26T15:18:22Z | Regularized Projection Matrix Approximation with Applications to
Community Detection | This paper introduces a regularized projection matrix approximation framework aimed at recovering cluster information from the affinity matrix. The model is formulated as a projection approximation problem incorporating an entrywise penalty function. We explore three distinct penalty functions addressing bounded, positive, and sparse scenarios, respectively, and derive the Alternating Direction Method of Multipliers (ADMM) algorithm to solve the problem. Then, we provide a theoretical analysis establishing the convergence properties of the proposed algorithm. Extensive numerical experiments on both synthetic and real-world datasets demonstrate that our regularized projection matrix approximation approach significantly outperforms state-of-the-art methods in terms of clustering performance. | [
"['Zheng Zhai' 'Mingxin Wu' 'Xiaohui Li']"
] |
null | null | 2405.16601 | null | null | http://arxiv.org/pdf/2405.16601v1 | 2024-05-26T15:28:42Z | 2024-05-26T15:28:42Z | A CMDP-within-online framework for Meta-Safe Reinforcement Learning | Meta-reinforcement learning has widely been used as a learning-to-learn framework to solve unseen tasks with limited experience. However, the aspect of constraint violations has not been adequately addressed in the existing works, making their application restricted in real-world settings. In this paper, we study the problem of meta-safe reinforcement learning (Meta-SRL) through the CMDP-within-online framework to establish the first provable guarantees in this important setting. We obtain task-averaged regret bounds for the reward maximization (optimality gap) and constraint violations using gradient-based meta-learning and show that the task-averaged optimality gap and constraint satisfaction improve with task-similarity in a static environment or task-relatedness in a dynamic environment. Several technical challenges arise when making this framework practical. To this end, we propose a meta-algorithm that performs inexact online learning on the upper bounds of within-task optimality gap and constraint violations estimated by off-policy stationary distribution corrections. Furthermore, we enable the learning rates to be adapted for every task and extend our approach to settings with a competing dynamically changing oracle. Finally, experiments are conducted to demonstrate the effectiveness of our approach. | [
"['Vanshaj Khattar' 'Yuhao Ding' 'Bilgehan Sel' 'Javad Lavaei' 'Ming Jin']"
] |
null | null | 2405.16608 | null | null | http://arxiv.org/pdf/2405.16608v1 | 2024-05-26T15:37:19Z | 2024-05-26T15:37:19Z | Efficient Probabilistic Modeling of Crystallization at Mesoscopic Scale | Crystallization processes at the mesoscopic scale, where faceted, dendritic growth, and multigrain formation can be observed, are of particular interest within materials science and metallurgy. These processes are highly nonlinear, stochastic, and sensitive to small perturbations of system parameters and initial conditions. Methods for the simulation of these processes have been developed using discrete numerical models, but these are computationally expensive. This work aims to scale crystal growth simulation with a machine learning emulator. Specifically, autoregressive latent variable models are well suited for modeling the joint distribution over system parameters and the crystallization trajectories. However, successfully training such models is challenging due to the stochasticity and sensitivity of the system. Existing approaches consequently fail to produce diverse and faithful crystallization trajectories. In this paper, we introduce the Crystal Growth Neural Emulator (CGNE), a probabilistic model for efficient crystal growth emulation at the mesoscopic scale that overcomes these challenges. We validate CGNE results using the morphological properties of the crystals produced by numerical simulation. CGNE delivers a factor of 11 improvement in inference time and performance gains compared with recent state-of-the-art probabilistic models for dynamical systems. | [
"['Pol Timmer' 'Koen Minartz' 'Vlado Menkovski']"
] |
null | null | 2405.16610 | null | null | http://arxiv.org/pdf/2405.16610v1 | 2024-05-26T15:44:53Z | 2024-05-26T15:44:53Z | The devil is in discretization discrepancy. Robustifying Differentiable
NAS with Single-Stage Searching Protocol | Neural Architecture Search (NAS) has been widely adopted to design neural networks for various computer vision tasks. One of its most promising subdomains is differentiable NAS (DNAS), where the optimal architecture is found in a differentiable manner. However, gradient-based methods suffer from the discretization error, which can severely damage the process of obtaining the final architecture. In our work, we first study the risk of discretization error and show how it affects an unregularized supernet. Then, we present that penalizing high entropy, a common technique of architecture regularization, can hinder the supernet's performance. Therefore, to robustify the DNAS framework, we introduce a novel single-stage searching protocol, which is not reliant on decoding a continuous architecture. Our results demonstrate that this approach outperforms other DNAS methods by achieving 75.3% in the searching stage on the Cityscapes validation dataset and attains performance 1.1% higher than the optimal network of DCNAS on the non-dense search space comprising short connections. The entire training process takes only 5.5 GPU days due to the weight reuse, and yields a computationally efficient architecture. Additionally, we propose a new dataset split procedure, which substantially improves results and prevents architecture degeneration in DARTS. | [
"['Konstanty Subbotko' 'Wojciech Jablonski' 'Piotr Bilinski']"
] |
null | null | 2405.16616 | null | null | http://arxiv.org/pdf/2405.16616v1 | 2024-05-26T16:08:55Z | 2024-05-26T16:08:55Z | DPHGNN: A Dual Perspective Hypergraph Neural Networks | Message passing on hypergraphs has been a standard framework for learning higher-order correlations between hypernodes. Recently-proposed hypergraph neural networks (HGNNs) can be categorized into spatial and spectral methods based on their design choices. In this work, we analyze the impact of change in hypergraph topology on the suboptimal performance of HGNNs and propose DPHGNN, a novel dual-perspective HGNN that introduces equivariant operator learning to capture lower-order semantics by inducing topology-aware spatial and spectral inductive biases. DPHGNN employs a unified framework to dynamically fuse lower-order explicit feature representations from the underlying graph into the super-imposed hypergraph structure. We benchmark DPHGNN over eight benchmark hypergraph datasets for the semi-supervised hypernode classification task and obtain superior performance compared to seven state-of-the-art baselines. We also provide a theoretical framework and a synthetic hypergraph isomorphism test to express the power of spatial HGNNs and quantify the expressivity of DPHGNN beyond the Generalized Weisfeiler Leman (1-GWL) test. Finally, DPHGNN was deployed by our partner e-commerce company for the Return-to-Origin (RTO) prediction task, which shows ~7% higher macro F1-Score than the best baseline. | [
"['Siddhant Saxena' 'Shounak Ghatak' 'Raghu Kolla' 'Debashis Mukherjee'\n 'Tanmoy Chakraborty']"
] |
null | null | 2405.16623 | null | null | http://arxiv.org/pdf/2405.16623v1 | 2024-05-26T16:39:19Z | 2024-05-26T16:39:19Z | Graph neural networks with configuration cross-attention for tensor
compilers | With the recent popularity of neural networks comes the need for efficient serving of inference workloads. A neural network inference workload can be represented as a computational graph with nodes as operators transforming multidimensional tensors. The tensors can be transposed and/or tiled in a combinatorially large number of ways, some configurations leading to accelerated inference. We propose TGraph, a neural graph architecture that allows screening for fast configurations of the target computational graph, thus representing an artificial intelligence (AI) tensor compiler in contrast to the traditional heuristics-based compilers. The proposed solution improves mean Kendall's $\tau$ across layout collections of TpuGraphs from 29.8% for the reliable baseline to 67.4% for TGraph. We estimate the potential CO$_2$ emission reduction associated with our work to be equivalent to over 50% of the total household emissions in the areas hosting AI-oriented data centers. | [
"['Dmitrii Khizbullin' 'Eduardo Rocha de Andrade' 'Thanh Hau Nguyen'\n 'Matheus Pedroza Ferreira' 'David R. Pugh']"
] |
null | null | 2405.16628 | null | null | http://arxiv.org/pdf/2405.16628v1 | 2024-05-26T17:00:17Z | 2024-05-26T17:00:17Z | Competing for pixels: a self-play algorithm for weakly-supervised
segmentation | Weakly-supervised segmentation (WSS) methods, reliant on image-level labels indicating object presence, lack explicit correspondence between labels and regions of interest (ROIs), posing a significant challenge. Despite this, WSS methods have attracted attention due to their much lower annotation costs compared to fully-supervised segmentation. Leveraging reinforcement learning (RL) self-play, we propose a novel WSS method that gamifies image segmentation of a ROI. We formulate segmentation as a competition between two agents that compete to select ROI-containing patches until exhaustion of all such patches. The score at each time-step, used to compute the reward for agent training, represents likelihood of object presence within the selection, determined by an object presence detector pre-trained using only image-level binary classification labels of object presence. Additionally, we propose a game termination condition that can be called by either side upon exhaustion of all ROI-containing patches, followed by the selection of a final patch from each. Upon termination, the agent is incentivised if ROI-containing patches are exhausted or disincentivised if an ROI-containing patch is found by the competitor. This competitive setup ensures minimisation of over- or under-segmentation, a common problem with WSS methods. Extensive experimentation across four datasets demonstrates significant performance improvements over recent state-of-the-art methods. Code: https://github.com/s-sd/spurl/tree/main/wss | [
"['Shaheer U. Saeed' 'Shiqi Huang' 'João Ramalhinho' 'Iani J. M. B. Gayo'\n 'Nina Montaña-Brown' 'Ester Bonmati' 'Stephen P. Pereira'\n 'Brian Davidson' 'Dean C. Barratt' 'Matthew J. Clarkson' 'Yipeng Hu']"
] |
null | null | 2405.16630 | null | null | http://arxiv.org/pdf/2405.16630v1 | 2024-05-26T17:08:04Z | 2024-05-26T17:08:04Z | Bayesian Inference with Deep Weakly Nonlinear Networks | We show at a physics level of rigor that Bayesian inference with a fully connected neural network and a shaped nonlinearity of the form $\phi(t) = t + \psi t^3/L$ is (perturbatively) solvable in the regime where the number of training datapoints $P$, the input dimension $N_0$, the network layer widths $N$, and the network depth $L$ are simultaneously large. Our results hold with weak assumptions on the data; the main constraint is that $P < N_0$. We provide techniques to compute the model evidence and posterior to arbitrary order in $1/N$ and at arbitrary temperature. We report the following results from the first-order computation: 1. When the width $N$ is much larger than the depth $L$ and training set size $P$, neural network Bayesian inference coincides with Bayesian inference using a kernel. The value of $\psi$ determines the curvature of a sphere, hyperbola, or plane into which the training data is implicitly embedded under the feature map. 2. When $LP/N$ is a small constant, neural network Bayesian inference departs from the kernel regime. At zero temperature, neural network Bayesian inference is equivalent to Bayesian inference using a data-dependent kernel, and $LP/N$ serves as an effective depth that controls the extent of feature learning. 3. In the restricted case of deep linear networks ($\psi=0$) and noisy data, we show a simple data model for which evidence and generalization error are optimal at zero temperature. As $LP/N$ increases, both evidence and generalization further improve, demonstrating the benefit of depth in benign overfitting. | [
"['Boris Hanin' 'Alexander Zlokapa']"
] |
null | null | 2405.16639 | null | null | http://arxiv.org/pdf/2405.16639v1 | 2024-05-26T17:30:44Z | 2024-05-26T17:30:44Z | A unified law of robustness for Bregman divergence losses | In contemporary deep learning practice, models are often trained to near zero loss, i.e., to nearly interpolate the training data. However, the number of parameters in the model is usually far more than the number of data points $n$, the theoretical minimum needed for interpolation: a phenomenon referred to as overparameterization. In an interesting piece of work that contributes to the considerable research devoted to understanding overparameterization, Bubeck and Sellke showed that for a broad class of covariate distributions (specifically those satisfying a natural notion of concentration of measure), overparameterization is necessary for robust interpolation, i.e., if the interpolating function is required to be Lipschitz. However, their robustness results were proved only in the setting of regression with square loss. In practice, however, many other kinds of losses are used, e.g., cross-entropy loss for classification. In this work, we generalize Bubeck and Sellke's result to Bregman divergence losses, which form a common generalization of square loss and cross-entropy loss. Our generalization relies on identifying a bias-variance-type decomposition that lies at the heart of the proof of Bubeck and Sellke. | [
"['Santanu Das' 'Jatin Batra' 'Piyush Srivastava']"
] |
null | null | 2405.16642 | null | null | http://arxiv.org/pdf/2405.16642v2 | 2024-07-08T17:00:07Z | 2024-05-26T17:38:44Z | Fast TRAC: A Parameter-Free Optimizer for Lifelong Reinforcement
Learning | A key challenge in lifelong reinforcement learning (RL) is the loss of plasticity, where previous learning progress hinders an agent's adaptation to new tasks. While regularization and resetting can help, they require precise hyperparameter selection at the outset and environment-dependent adjustments. Building on the principled theory of online convex optimization, we present a parameter-free optimizer for lifelong RL, called TRAC, which requires no tuning or prior knowledge about the distribution shifts. Extensive experiments on Procgen, Atari, and Gym Control environments show that TRAC works surprisingly well, mitigating loss of plasticity and rapidly adapting to challenging distribution shifts, despite the underlying optimization problem being nonconvex and nonstationary. | [
"['Aneesh Muppidi' 'Zhiyu Zhang' 'Heng Yang']"
] |
null | null | 2405.16644 | null | null | http://arxiv.org/pdf/2405.16644v1 | 2024-05-26T17:43:30Z | 2024-05-26T17:43:30Z | Gaussian Approximation and Multiplier Bootstrap for Polyak-Ruppert
Averaged Linear Stochastic Approximation with Applications to TD Learning | In this paper, we obtain the Berry-Esseen bound for multivariate normal approximation for the Polyak-Ruppert averaged iterates of the linear stochastic approximation (LSA) algorithm with decreasing step size. Our findings reveal that the fastest rate of normal approximation is achieved when setting the most aggressive step size $\alpha_{k} \asymp k^{-1/2}$. Moreover, we prove the non-asymptotic validity of the confidence intervals for parameter estimation with LSA based on multiplier bootstrap. This procedure updates the LSA estimate together with a set of randomly perturbed LSA estimates upon the arrival of subsequent observations. We illustrate our findings in the setting of temporal difference learning with linear function approximation. | [
"['Sergey Samsonov' 'Eric Moulines' 'Qi-Man Shao' 'Zhuo-Song Zhang'\n 'Alexey Naumov']"
] |
null | null | 2405.16646 | null | null | http://arxiv.org/pdf/2405.16646v3 | 2024-05-30T17:30:42Z | 2024-05-26T17:52:58Z | A Provably Effective Method for Pruning Experts in Fine-tuned Sparse
Mixture-of-Experts | The sparsely gated mixture of experts (MoE) architecture sends different inputs to different subnetworks, i.e., experts, through trainable routers. MoE reduces the training computation significantly for large models, but its deployment can still be memory- or computation-intensive for some downstream tasks. Model pruning is a popular approach to reduce inference computation, but its application in the MoE architecture is largely unexplored. To the best of our knowledge, this paper provides the first provably efficient technique for pruning experts in finetuned MoE models. We theoretically prove that prioritizing the pruning of the experts with a smaller change of the router's $\ell_2$ norm from the pretrained model guarantees the preservation of test accuracy, while significantly reducing the model size and the computational requirements. Although our theoretical analysis is centered on binary classification tasks on a simplified MoE architecture, our expert pruning method is verified on large vision MoE models such as VMoE and E3MoE finetuned on benchmark datasets such as CIFAR10, CIFAR100, and ImageNet. | [
"['Mohammed Nowaz Rabbani Chowdhury' 'Meng Wang' 'Kaoutar El Maghraoui'\n 'Naigang Wang' 'Pin-Yu Chen' 'Christopher Carothers']"
] |
null | null | 2405.16655 | null | null | http://arxiv.org/pdf/2405.16655v1 | 2024-05-26T18:17:46Z | 2024-05-26T18:17:46Z | Predicting Likely-Vulnerable Code Changes: Machine Learning-based
Vulnerability Protections for Android Open Source Project | This paper presents a framework that selectively triggers security reviews for incoming source code changes. Functioning as a review bot within a code review service, the framework can automatically request additional security reviews at pre-submit time, before the code changes are submitted to a source code repository. Because performing such secure code reviews adds cost, the framework employs a classifier trained to identify code changes with a high likelihood of vulnerabilities. The online classifier leverages various types of input features to analyze the review patterns, track the software engineering process, and mine specific text patterns within given code changes. The classifier and its features are meticulously chosen and optimized using data from the submitted code changes and reported vulnerabilities in the Android Open Source Project (AOSP). The evaluation results demonstrate that our Vulnerability Prevention (VP) framework identifies approximately 80% of the vulnerability-inducing code changes in the dataset, with a precision ratio of around 98% and a false positive rate of around 1.7%. We discuss the implications of deploying the VP framework in multi-project settings and future directions for Android security research. This paper explores and validates our approach to code change-granularity vulnerability prediction, offering a preventive technique for software security by preemptively detecting vulnerable code changes before submission. | [
"['Keun Soo Yim']"
] |
null | null | 2405.16658 | null | null | http://arxiv.org/pdf/2405.16658v1 | 2024-05-26T18:29:24Z | 2024-05-26T18:29:24Z | Acceleration of Grokking in Learning Arithmetic Operations via
Kolmogorov-Arnold Representation | We propose novel methodologies aimed at accelerating the grokking phenomenon, which refers to the rapid increase in test accuracy after a long period of overfitting, as reported in~\cite{power2022grokking}. Focusing on the grokking phenomenon that arises in learning arithmetic binary operations via the transformer model, we begin with a discussion on data augmentation in the case of commutative binary operations. To further accelerate, we elucidate arithmetic operations through the lens of the Kolmogorov-Arnold (KA) representation theorem, revealing its correspondence to the transformer architecture: embedding, decoder block, and classifier. Observing the shared structure between KA representations associated with binary operations, we suggest various transfer learning mechanisms that expedite grokking. This interpretation is substantiated through a series of rigorous experiments. In addition, our approach is successful in learning two nonstandard arithmetic tasks: composition of operations and a system of equations. Furthermore, we reveal that the model is capable of learning arithmetic operations using a limited number of tokens under embedding transfer, which is supported by a set of experiments as well. | [
"['Yeachan Park' 'Minseok Kim' 'Yeoneung Kim']"
] |
null | null | 2405.16661 | null | null | http://arxiv.org/pdf/2405.16661v1 | 2024-05-26T18:49:59Z | 2024-05-26T18:49:59Z | RLSF: Reinforcement Learning via Symbolic Feedback | In recent years, large language models (LLMs) have had a dramatic impact on various sub-fields of AI, most notably on natural language understanding tasks. However, there is widespread agreement that the logical reasoning capabilities of contemporary LLMs are, at best, fragmentary (i.e., may work well on some problem instances but fail dramatically on others). While traditional LLM fine-tuning approaches (e.g., those that use human feedback) do address this problem to some degree, they suffer from many issues, including unsound black-box reward models, difficulties in collecting preference data, and sparse scalar reward values. To address these challenges, we propose a new training/fine-tuning paradigm we refer to as Reinforcement Learning via Symbolic Feedback (RLSF), which is aimed at enhancing the reasoning capabilities of LLMs. In the RLSF setting, the LLM that is being trained/fine-tuned is considered as the RL agent, while the environment is allowed access to reasoning or domain knowledge tools (e.g., solvers, algebra systems). Crucially, in RLSF, these reasoning tools can provide feedback to the LLMs via poly-sized certificates (e.g., proofs), that characterize errors in the LLM-generated object with respect to some correctness specification. The ability of RLSF-based training/fine-tuning to leverage certificate-generating symbolic tools enables sound fine-grained (token-level) reward signals to LLMs, and thus addresses the limitations of traditional reward models mentioned above. Via extensive evaluations, we show that our RLSF-based fine-tuning of LLMs outperforms traditional approaches on two different applications, namely, program synthesis from natural language pseudo-code to programming language (C++) and solving the Game of 24. | [
"['Piyush Jha' 'Prithwish Jana' 'Arnav Arora' 'Vijay Ganesh']"
] |
null | null | 2405.16663 | null | null | http://arxiv.org/pdf/2405.16663v2 | 2024-06-03T18:36:15Z | 2024-05-26T18:59:44Z | Private Edge Density Estimation for Random Graphs: Optimal, Efficient
and Robust | We give the first polynomial-time, differentially node-private, and robust algorithm for estimating the edge density of Erd\H{o}s-R\'enyi random graphs and their generalization, inhomogeneous random graphs. We further prove information-theoretic lower bounds, showing that the error rate of our algorithm is optimal up to logarithmic factors. Previous algorithms incur either exponential running time or suboptimal error rates. Two key ingredients of our algorithm are (1) a new sum-of-squares algorithm for robust edge density estimation, and (2) the reduction from privacy to robustness based on sum-of-squares exponential mechanisms due to Hopkins et al. (STOC 2023). | [
"['Hongjie Chen' 'Jingqiu Ding' 'Yiding Hua' 'David Steurer']"
] |
null | null | 2405.16666 | null | null | http://arxiv.org/pdf/2405.16666v1 | 2024-05-26T19:13:51Z | 2024-05-26T19:13:51Z | Comments on Friedman's Method for Class Distribution Estimation | The purpose of class distribution estimation (also known as quantification) is to determine the values of the prior class probabilities in a test dataset without class label observations. A variety of methods to achieve this have been proposed in the literature, most of them based on the assumption that the distributions of the training and test data are related through prior probability shift (also known as label shift). Among these methods, Friedman's method has recently been found to perform relatively well both for binary and multi-class quantification. We discuss the properties of Friedman's method and another approach mentioned by Friedman (called DeBias method in the literature) in the context of a general framework for designing linear equation systems for class distribution estimation. | [
"['Dirk Tasche']"
] |
null | null | 2405.16668 | null | null | http://arxiv.org/pdf/2405.16668v1 | 2024-05-26T19:17:32Z | 2024-05-26T19:17:32Z | Provably Efficient Off-Policy Adversarial Imitation Learning with
Convergence Guarantees | Adversarial Imitation Learning (AIL) faces challenges with sample inefficiency because of its reliance on sufficient on-policy data to evaluate the performance of the current policy during reward function updates. In this work, we study the convergence properties and sample complexity of off-policy AIL algorithms. We show that, even in the absence of importance sampling correction, reusing samples generated by the $o(\sqrt{K})$ most recent policies, where $K$ is the number of iterations of policy updates and reward updates, does not undermine the convergence guarantees of this class of algorithms. Furthermore, our results indicate that the distribution shift error induced by off-policy updates is dominated by the benefits of having more data available. This result provides theoretical support for the sample efficiency of off-policy AIL algorithms. To the best of our knowledge, this is the first work that provides theoretical guarantees for off-policy AIL algorithms. | [
"['Yilei Chen' 'Vittorio Giammarino' 'James Queeney'\n 'Ioannis Ch. Paschalidis']"
] |
null | null | 2405.16671 | null | null | http://arxiv.org/pdf/2405.16671v1 | 2024-05-26T19:25:08Z | 2024-05-26T19:25:08Z | Mixture of Experts Using Tensor Products | In multi-task learning, the conventional approach involves training a model on multiple tasks simultaneously. However, the training signals from different tasks can interfere with one another, potentially leading to \textit{negative transfer}. To mitigate this, we investigate if modular language models can facilitate positive transfer and systematic generalization. Specifically, we propose a novel modular language model (\texttt{TensorPoly}) that balances parameter efficiency with nuanced routing methods. For \textit{modules}, we reparameterize Low-Rank Adaptation (\texttt{LoRA}) by employing an entangled tensor through the use of tensor product operations and name the resulting approach \texttt{TLoRA}. For the \textit{routing function}, we tailor two innovative routing functions according to the granularity: \texttt{TensorPoly-I}, which directs to each rank within the entangled tensor, while \texttt{TensorPoly-II} offers a finer-grained routing approach targeting each order of the entangled tensor. The experimental results from the multi-task T0 benchmark demonstrate that: 1) all modular LMs surpass the corresponding dense approaches, highlighting the potential of modular language models to mitigate negative transfer in multi-task learning and deliver superior outcomes; 2) \texttt{TensorPoly-I} achieves higher parameter efficiency in adaptation and outperforms other modular LMs, which shows the potential of our approach in multi-task transfer learning. | [
"['Zhan Su' 'Fengran Mo' 'Prayag Tiwari' 'Benyou Wang' 'Jian-Yun Nie'\n 'Jakob Grue Simonsen']"
] |
null | null | 2405.16672 | null | null | http://arxiv.org/pdf/2405.16672v1 | 2024-05-26T19:30:14Z | 2024-05-26T19:30:14Z | Transfer Learning Under High-Dimensional Graph Convolutional Regression
Model for Node Classification | Node classification is a fundamental task, but obtaining node classification labels can be challenging and expensive in many real-world scenarios. Transfer learning has emerged as a promising solution to address this challenge by leveraging knowledge from source domains to enhance learning in a target domain. Existing transfer learning methods for node classification primarily focus on integrating Graph Convolutional Networks (GCNs) with various transfer learning techniques. While these approaches have shown promising results, they often suffer from a lack of theoretical guarantees, restrictive conditions, and high sensitivity to hyperparameter choices. To overcome these limitations, we propose a Graph Convolutional Multinomial Logistic Regression (GCR) model and a transfer learning method based on the GCR model, called Trans-GCR. We provide theoretical guarantees of the estimate obtained under GCR model in high-dimensional settings. Moreover, Trans-GCR demonstrates superior empirical performance, has a low computational cost, and requires fewer hyperparameters than existing methods. | [
"['Jiachen Chen' 'Danyang Huang' 'Liyuan Wang' 'Kathryn L. Lunetta'\n 'Debarghya Mukherjee' 'Huimin Cheng']"
] |
null | null | 2405.16674 | null | null | http://arxiv.org/pdf/2405.16674v1 | 2024-05-26T19:33:23Z | 2024-05-26T19:33:23Z | Limits of Deep Learning: Sequence Modeling through the Lens of
Complexity Theory | Deep learning models have achieved significant success across various applications but continue to struggle with tasks requiring complex reasoning over sequences, such as function composition and compositional tasks. Despite advancements, models like Structured State Space Models (SSMs) and Transformers underperform in deep compositionality tasks due to inherent architectural and training limitations. Maintaining accuracy over multiple reasoning steps remains a primary challenge, as current models often rely on shortcuts rather than genuine multi-step reasoning, leading to performance degradation as task complexity increases. Existing research highlights these shortcomings but lacks comprehensive theoretical and empirical analysis for SSMs. Our contributions address this gap by providing a theoretical framework based on complexity theory to explain SSMs' limitations. Moreover, we present extensive empirical evidence demonstrating how these limitations impair function composition and algorithmic task performance. Our experiments reveal significant performance drops as task complexity increases, even with Chain-of-Thought (CoT) prompting. Models frequently resort to shortcuts, leading to errors in multi-step reasoning. This underscores the need for innovative solutions beyond current deep learning paradigms to achieve reliable multi-step reasoning and compositional task-solving in practical applications. | [
"['Nikola Zubić' 'Federico Soldá' 'Aurelio Sulser' 'Davide Scaramuzza']"
] |
null | null | 2405.16682 | null | null | http://arxiv.org/pdf/2405.16682v1 | 2024-05-26T20:20:44Z | 2024-05-26T20:20:44Z | A Systematic Review of Federated Generative Models | Federated Learning (FL) has emerged as a solution for distributed systems that allow clients to train models on their data and only share models instead of local data. Generative Models are designed to learn the distribution of a dataset and generate new data samples that are similar to the original data. Many prior works have tried proposing Federated Generative Models. Using Federated Learning and Generative Models together can be susceptible to attacks, and designing the optimal architecture remains challenging. This survey covers the growing interest in the intersection of FL and Generative Models by comprehensively reviewing research conducted from 2019 to 2024. We systematically compare nearly 100 papers, focusing on their FL and Generative Model methods and privacy considerations. To make this field more accessible to newcomers, we highlight the state-of-the-art advancements and identify unresolved challenges, offering insights for future research in this evolving field. | [
"['Ashkan Vedadi Gargary' 'Emiliano De Cristofaro']"
] |
null | null | 2405.16683 | null | null | http://arxiv.org/pdf/2405.16683v1 | 2024-05-26T20:25:04Z | 2024-05-26T20:25:04Z | Toward Digitalization: A Secure Approach to Find a Missing Person Using
Facial Recognition Technology | Facial recognition is a machine-learning-based technique that can identify a person by analyzing their facial profile, and it is applied to various real-world problems nowadays. In this paper, a common real-world problem, finding a missing person, is solved in a secure and effective way with the help of facial recognition technology. Although a few works exist on solving the problem, the proposed work is unique with respect to its security, design, and feasibility. Preventing intruders from participating in the process and giving importance to both finders and family members of a missing person are two of the major features of this work. Evidence of our system successfully finding a missing person is described in the result section of the paper, and the advantages our system provides over other existing systems can be seen in the comparisons in the result summary section. The work provides a worthy solution for finding a missing person on a digital platform. | [
"['Abid Faisal Ayon' 'S M Maksudul Alam']"
] |
null | null | 2405.16684 | null | null | http://arxiv.org/pdf/2405.16684v1 | 2024-05-26T20:33:08Z | 2024-05-26T20:33:08Z | gzip Predicts Data-dependent Scaling Laws | Past work has established scaling laws that predict the performance of a neural language model (LM) as a function of its parameter count and the number of tokens it's trained on, enabling optimal allocation of a fixed compute budget. Are these scaling laws agnostic to training data as some prior work suggests? We generate training datasets of varying complexities by modulating the syntactic properties of a PCFG, finding that 1) scaling laws are sensitive to differences in data complexity and that 2) gzip, a compression algorithm, is an effective predictor of how data complexity impacts scaling properties. We propose a new data-dependent scaling law for LMs that accounts for the training data's gzip-compressibility; its compute-optimal frontier increases in dataset size preference (over parameter count preference) as training data becomes harder to compress. | [
"['Rohan Pandey']"
] |
null | null | 2405.16697 | null | null | http://arxiv.org/pdf/2405.16697v1 | 2024-05-26T21:12:34Z | 2024-05-26T21:12:34Z | CNN Autoencoder Resizer: A Power-Efficient LoS/NLoS Detector in
MIMO-enabled UAV Networks | Optimizing the design, performance, and resource efficiency of wireless networks (WNs) necessitates the ability to discern Line of Sight (LoS) and Non-Line of Sight (NLoS) scenarios across diverse applications and environments. Unmanned Aerial Vehicles (UAVs) exhibit significant potential in this regard due to their rapid mobility, aerial capabilities, and payload characteristics. Particularly, UAVs can serve as vital non-terrestrial base stations (NTBS) in the event of terrestrial base station (TBS) failures or downtime. In this paper, we propose CNN autoencoder resizer (CAR) as a framework that improves the accuracy of LoS/NLoS detection without demanding extra power consumption. Our proposed method increases the mean accuracy of detecting LoS/NLoS signals from 66% to 86%, while maintaining consistent power consumption levels. In addition, the resolution provided by CAR shows that it can be employed as a preprocessing tool in other methods to enhance the quality of signals. | [
"['Azim Akhtarshenas' 'Navid Ayoobi' 'David Lopez-Perez' 'Ramin Toosi'\n 'Matin Amoozadeh']"
] |
null | null | 2405.16700 | null | null | http://arxiv.org/pdf/2405.16700v1 | 2024-05-26T21:31:59Z | 2024-05-26T21:31:59Z | Implicit Multimodal Alignment: On the Generalization of Frozen LLMs to
Multimodal Inputs | Large Language Models (LLMs) have demonstrated impressive performance on multimodal tasks, without any multimodal finetuning. They are the building block for Large Multimodal Models, yet, we still lack a proper understanding of their success. In this work, we expose frozen LLMs to image, video, audio and text inputs and analyse their internal representation aiming to understand their generalization beyond textual inputs. Findings. Perceptual tokens (1) are easily distinguishable from textual ones inside LLMs, with significantly different representations, and complete translation to textual tokens does not exist. Yet, (2) both perceptual and textual tokens activate similar LLM weights. Despite being different, (3) perceptual and textual tokens are implicitly aligned inside LLMs, we call this the implicit multimodal alignment (IMA), and argue that this is linked to architectural design, helping LLMs to generalize. This provides more evidence to believe that the generalization of LLMs to multimodal inputs is mainly due to their architecture. Implications. (1) We find a positive correlation between the implicit alignment score and the task performance, suggesting that this could act as a proxy metric for model evaluation and selection. (2) A negative correlation exists regarding hallucinations, revealing that this problem is mainly due to misalignment between the internal perceptual and textual representations. (3) Perceptual tokens change slightly throughout the model, thus, we propose different approaches to skip computations (e.g. in FFN layers), and significantly reduce the inference cost. (4) Due to the slowly changing embeddings across layers, and the high overlap between textual and multimodal activated weights, we compress LLMs by keeping only 1 subnetwork that works well across a wide range of multimodal tasks. Paper code: https://github.com/mshukor/ima-lmms. | [
"['Mustafa Shukor' 'Matthieu Cord']"
] |
null | null | 2405.16712 | null | null | http://arxiv.org/pdf/2405.16712v1 | 2024-05-26T22:23:02Z | 2024-05-26T22:23:02Z | Zamba: A Compact 7B SSM Hybrid Model | In this technical report, we present Zamba, a novel 7B SSM-transformer hybrid model which achieves competitive performance against leading open-weight models at a comparable scale. Zamba is trained on 1T tokens from openly available datasets and is the best non-transformer model at this scale. Zamba pioneers a unique architecture combining a Mamba backbone with a single shared attention module, thus obtaining the benefits of attention at minimal parameter cost. Due to its architecture, Zamba is significantly faster at inference than comparable transformer models and requires substantially less memory for generation of long sequences. Zamba is pretrained in two phases: the first phase is based on existing web datasets, while the second one consists of annealing the model over high-quality instruct and synthetic datasets, and is characterized by a rapid learning rate decay. We open-source the weights and all checkpoints for Zamba, through both phase 1 and annealing phases. | [
"['Paolo Glorioso' 'Quentin Anthony' 'Yury Tokpanov' 'James Whittington'\n 'Jonathan Pilault' 'Adam Ibrahim' 'Beren Millidge']"
] |
null | null | 2405.16714 | null | null | http://arxiv.org/pdf/2405.16714v1 | 2024-05-26T22:30:29Z | 2024-05-26T22:30:29Z | Crafting Interpretable Embeddings by Asking LLMs Questions | Large language models (LLMs) have rapidly improved text embeddings for a growing array of natural-language processing tasks. However, their opaqueness and proliferation into scientific domains such as neuroscience have created a growing need for interpretability. Here, we ask whether we can obtain interpretable embeddings through LLM prompting. We introduce question-answering embeddings (QA-Emb), embeddings where each feature represents an answer to a yes/no question asked to an LLM. Training QA-Emb reduces to selecting a set of underlying questions rather than learning model weights. We use QA-Emb to flexibly generate interpretable models for predicting fMRI voxel responses to language stimuli. QA-Emb significantly outperforms an established interpretable baseline, and does so while requiring very few questions. This paves the way towards building flexible feature spaces that can concretize and evaluate our understanding of semantic brain representations. We additionally find that QA-Emb can be effectively approximated with an efficient model, and we explore broader applications in simple NLP tasks. | [
"['Vinamra Benara' 'Chandan Singh' 'John X. Morris' 'Richard Antonello'\n 'Ion Stoica' 'Alexander G. Huth' 'Jianfeng Gao']"
] |
null | null | 2405.16718 | null | null | http://arxiv.org/pdf/2405.16718v1 | 2024-05-26T23:14:37Z | 2024-05-26T23:14:37Z | Amortized Active Causal Induction with Deep Reinforcement Learning | We present Causal Amortized Active Structure Learning (CAASL), an active intervention design policy that can select interventions that are adaptive, real-time and that does not require access to the likelihood. This policy, an amortized network based on the transformer, is trained with reinforcement learning on a simulator of the design environment, and a reward function that measures how close the true causal graph is to a causal graph posterior inferred from the gathered data. On synthetic data and a single-cell gene expression simulator, we demonstrate empirically that the data acquired through our policy results in a better estimate of the underlying causal graph than alternative strategies. Our design policy successfully achieves amortized intervention design on the distribution of the training environment while also generalizing well to distribution shifts in test-time design environments. Further, our policy also demonstrates excellent zero-shot generalization to design environments with dimensionality higher than that during training, and to intervention types that it has not been trained on. | [
"['Yashas Annadani' 'Panagiotis Tigas' 'Stefan Bauer' 'Adam Foster']"
] |
null | null | 2405.16726 | null | null | http://arxiv.org/pdf/2405.16726v1 | 2024-05-26T23:48:30Z | 2024-05-26T23:48:30Z | Exploring Edge Probability Graph Models Beyond Edge Independency:
Concepts, Analyses, and Algorithms | Desirable random graph models (RGMs) should (i) be tractable so that we can compute and control graph statistics, and (ii) generate realistic structures such as high clustering (i.e., high subgraph densities). A popular category of RGMs (e.g., Erdos-Renyi and stochastic Kronecker) outputs edge probabilities, and we need to realize (i.e., sample from) the edge probabilities to generate graphs. Typically, each edge (in)existence is assumed to be determined independently. However, with edge independency, RGMs theoretically cannot produce high subgraph densities unless they "replicate" input graphs. In this work, we explore realization beyond edge independence that can produce more realistic structures while ensuring high tractability. Specifically, we propose edge-dependent realization schemes called binding and derive closed-form tractability results on subgraph (e.g., triangle) densities in graphs generated with binding. We propose algorithms for graph generation with binding and parameter fitting of binding. We empirically validate that binding exhibits high tractability and generates realistic graphs with high clustering, significantly improving upon existing RGMs assuming edge independency. | [
"['Fanchen Bu' 'Ruochen Yang' 'Paul Bogdan' 'Kijung Shin']"
] |
null | null | 2405.16727 | null | null | http://arxiv.org/pdf/2405.16727v1 | 2024-05-26T23:52:51Z | 2024-05-26T23:52:51Z | Disentangling and Integrating Relational and Sensory Information in
Transformer Architectures | The Transformer architecture processes sequences by implementing a form of neural message-passing that consists of iterative information retrieval (attention), followed by local processing (position-wise MLP). Two types of information are essential under this general computational paradigm: "sensory" information about individual objects, and "relational" information describing the relationships between objects. Standard attention naturally encodes the former, but does not explicitly encode the latter. In this paper, we present an extension of Transformers where multi-head attention is augmented with two distinct types of attention heads, each routing information of a different type. The first type is the standard attention mechanism of Transformers, which captures object-level features, while the second type is a novel attention mechanism we propose to explicitly capture relational information. The two types of attention heads each possess different inductive biases, giving the resulting architecture greater efficiency and versatility. The promise of this approach is demonstrated empirically across a range of tasks. | [
"['Awni Altabaa' 'John Lafferty']"
] |
null | null | 2405.16728 | null | null | http://arxiv.org/pdf/2405.16728v1 | 2024-05-26T23:56:45Z | 2024-05-26T23:56:45Z | Towards Multi-Task Multi-Modal Models: A Video Generative Perspective | Advancements in language foundation models have primarily fueled the recent surge in artificial intelligence. In contrast, generative learning of non-textual modalities, especially videos, significantly trails behind language modeling. This thesis chronicles our endeavor to build multi-task models for generating videos and other modalities under diverse conditions, as well as for understanding and compression applications. Given the high dimensionality of visual data, we pursue concise and accurate latent representations. Our video-native spatial-temporal tokenizers preserve high fidelity. We unveil a novel approach to mapping bidirectionally between visual observation and interpretable lexical terms. Furthermore, our scalable visual token representation proves beneficial across generation, compression, and understanding tasks. This achievement marks the first instances of language models surpassing diffusion models in visual synthesis and a video tokenizer outperforming industry-standard codecs. Within these multi-modal latent spaces, we study the design of multi-task generative models. Our masked multi-task transformer excels at the quality, efficiency, and flexibility of video generation. We enable a frozen language model, trained solely on text, to generate visual content. Finally, we build a scalable generative multi-modal transformer trained from scratch, enabling the generation of videos containing high-fidelity motion with the corresponding audio given diverse conditions. Throughout the course, we have shown the effectiveness of integrating multiple tasks, crafting high-fidelity latent representation, and generating multiple modalities. 
This work suggests intriguing potential for future exploration in generating non-textual data and enabling real-time, interactive experiences across various media forms. | [
"['Lijun Yu']"
] |
null | null | 2405.16729 | null | null | http://arxiv.org/pdf/2405.16729v1 | 2024-05-27T00:08:36Z | 2024-05-27T00:08:36Z | Free-Space Optical Channel Turbulence Prediction: A Machine Learning
Approach | Channel turbulence presents a formidable obstacle for free-space optical (FSO) communication. Anticipation of turbulence levels is highly important for mitigating disruptions. We study the application of machine learning (ML) to FSO data streams to rapidly predict channel turbulence levels with no additional sensing hardware. An optical bit stream was transmitted through a controlled channel in the lab under six distinct turbulence levels, and the efficacy of using ML to classify turbulence levels was examined. ML-based turbulence level classification was found to be >98% accurate with multiple ML training parameters, but highly dependent upon the timescale of changes between turbulence levels. | [
"['Md Zobaer Islam' 'Ethan Abele' 'Fahim Ferdous Hossain' 'Arsalan Ahmad'\n 'Sabit Ekin' \"John F. O'Hara\"]"
] |
null | null | 2405.16730 | null | null | http://arxiv.org/pdf/2405.16730v1 | 2024-05-27T00:11:53Z | 2024-05-27T00:11:53Z | Latent Energy-Based Odyssey: Black-Box Optimization via Expanded
Exploration in the Energy-Based Latent Space | Offline Black-Box Optimization (BBO) aims at optimizing a black-box function using the knowledge from a pre-collected offline dataset of function values and corresponding input designs. However, the high-dimensional and highly-multimodal input design space of black-box functions poses inherent challenges for most existing methods that model and operate directly upon input designs. These issues include but are not limited to high sample complexity, which relates to inaccurate approximation of the black-box function; and insufficient coverage and exploration of input design modes, which leads to suboptimal proposal of new input designs. In this work, we consider finding a latent space that serves as a compressed yet accurate representation of the design-value joint space, enabling effective latent exploration of high-value input design modes. To this end, we formulate a learnable energy-based latent space, and propose a Noise-intensified Telescoping density-Ratio Estimation (NTRE) scheme for variational learning of an accurate latent space model without costly Markov Chain Monte Carlo. The optimization process is then the exploration of high-value designs guided by the learned energy-based model in the latent space, formulated as gradient-based sampling from a latent-variable-parameterized inverse model. We show that our particular parameterization encourages expanded exploration around high-value design modes, motivated by inverting a fundamental result on the conditional covariance matrix typically used for variance reduction. We observe that our method, backed by an accurately learned informative latent space and an expanding-exploration model design, yields significant improvements over strong previous methods on both synthetic and real-world datasets such as the design-bench suite. | [
"['Peiyu Yu' 'Dinghuai Zhang' 'Hengzhi He' 'Xiaojian Ma' 'Ruiyao Miao'\n 'Yifan Lu' 'Yasi Zhang' 'Deqian Kong' 'Ruiqi Gao' 'Jianwen Xie'\n 'Guang Cheng' 'Ying Nian Wu']"
] |
null | null | 2405.16731 | null | null | http://arxiv.org/pdf/2405.16731v1 | 2024-05-27T00:12:51Z | 2024-05-27T00:12:51Z | Pretraining with Random Noise for Fast and Robust Learning without
Weight Transport | The brain prepares for learning even before interacting with the environment, by refining and optimizing its structures through spontaneous neural activity that resembles random noise. However, the mechanism of such a process has yet to be thoroughly understood, and it is unclear whether this process can benefit machine learning algorithms. Here, we study this issue using a neural network with a feedback alignment algorithm, demonstrating that pretraining neural networks with random noise increases the learning efficiency as well as generalization abilities without weight transport. First, we found that random noise training modifies forward weights to match backward synaptic feedback, which is necessary for teaching errors by feedback alignment. As a result, a network with pre-aligned weights learns notably faster than a network without random noise training, even reaching a convergence speed comparable to that of a backpropagation algorithm. Sequential training with both random noise and data brings weights closer to synaptic feedback than training solely with data, enabling more precise credit assignment and faster learning. We also found that each readout probability approaches the chance level and that the effective dimensionality of weights decreases in a network pretrained with random noise. This pre-regularization allows the network to learn simple solutions of a low rank, reducing the generalization loss during subsequent training. This also enables the network to generalize robustly to a novel, out-of-distribution dataset. Lastly, we confirmed that random noise pretraining reduces the amount of meta-loss, enhancing the network's ability to adapt to various tasks. Overall, our results suggest that random noise training with feedback alignment offers a straightforward yet effective method of pretraining that facilitates quick and reliable learning without weight transport. | [
"['Jeonghwan Cheon' 'Sang Wan Lee' 'Se-Bum Paik']"
] |
null | null | 2405.16732 | null | null | http://arxiv.org/pdf/2405.16732v1 | 2024-05-27T00:23:42Z | 2024-05-27T00:23:42Z | The Collusion of Memory and Nonlinearity in Stochastic Approximation
With Constant Stepsize | In this work, we investigate stochastic approximation (SA) with Markovian data and nonlinear updates under constant stepsize $\alpha>0$. Existing work has primarily focused on either i.i.d. data or linear update rules. We take a new perspective and carefully examine the simultaneous presence of Markovian dependency of data and nonlinear update rules, delineating how the interplay between these two structures leads to complications that are not captured by prior techniques. By leveraging the smoothness and recurrence properties of the SA updates, we develop a fine-grained analysis of the correlation between the SA iterates $\theta_k$ and Markovian data $x_k$. This enables us to overcome the obstacles in existing analysis and establish for the first time the weak convergence of the joint process $(x_k, \theta_k)_{k\geq 0}$. Furthermore, we present a precise characterization of the asymptotic bias of the SA iterates, given by $\mathbb{E}[\theta_\infty]-\theta^\ast=\alpha(b_\text{m}+b_\text{n}+b_\text{c})+O(\alpha^{3/2})$. Here, $b_\text{m}$ is associated with the Markovian noise, $b_\text{n}$ is tied to the nonlinearity, and notably, $b_\text{c}$ represents a multiplicative interaction between the Markovian noise and nonlinearity, which is absent in previous works. As a by-product of our analysis, we derive finite-time bounds on the higher moment $\mathbb{E}[\|\theta_k-\theta^\ast\|^{2p}]$ and present non-asymptotic geometric convergence rates for the iterates, along with a Central Limit Theorem. | [
"['Dongyan Huo' 'Yixuan Zhang' 'Yudong Chen' 'Qiaomin Xie']"
] |
null | null | 2405.16734 | null | null | http://arxiv.org/pdf/2405.16734v1 | 2024-05-27T00:53:18Z | 2024-05-27T00:53:18Z | Faster Sampling via Stochastic Gradient Proximal Sampler | Stochastic gradients have been widely integrated into Langevin-based methods to improve their scalability and efficiency in solving large-scale sampling problems. However, the proximal sampler, which exhibits much faster convergence than Langevin-based algorithms in the deterministic setting Lee et al. (2021), has yet to be explored in its stochastic variants. In this paper, we study the Stochastic Proximal Samplers (SPS) for sampling from non-log-concave distributions. We first establish a general framework for implementing stochastic proximal samplers and establish the convergence theory accordingly. We show that the convergence to the target distribution can be guaranteed as long as the second moment of the algorithm trajectory is bounded and restricted Gaussian oracles can be well approximated. We then provide two implementable variants based on Stochastic gradient Langevin dynamics (SGLD) and Metropolis-adjusted Langevin algorithm (MALA), giving rise to SPS-SGLD and SPS-MALA. We further show that SPS-SGLD and SPS-MALA can achieve $\epsilon$-sampling error in total variation (TV) distance within $\tilde{\mathcal{O}}(d\epsilon^{-2})$ and $\tilde{\mathcal{O}}(d^{1/2}\epsilon^{-2})$ gradient complexities, which outperform the best-known result by at least an $\tilde{\mathcal{O}}(d^{1/3})$ factor. This enhancement in performance is corroborated by our empirical studies on synthetic data with various dimensions, demonstrating the efficiency of our proposed algorithm. | [
"['Xunpeng Huang' 'Difan Zou' 'Yi-An Ma' 'Hanze Dong' 'Tong Zhang']"
] |
null | null | 2405.16739 | null | null | http://arxiv.org/pdf/2405.16739v1 | 2024-05-27T01:08:23Z | 2024-05-27T01:08:23Z | Oracle-Efficient Reinforcement Learning for Max Value Ensembles | Reinforcement learning (RL) in large or infinite state spaces is notoriously challenging, both theoretically (where worst-case sample and computational complexities must scale with state space cardinality) and experimentally (where function approximation and policy gradient techniques often scale poorly and suffer from instability and high variance). One line of research attempting to address these difficulties makes the natural assumption that we are given a collection of heuristic base or $\textit{constituent}$ policies upon which we would like to improve in a scalable manner. In this work we aim to compete with the $\textit{max-following policy}$, which at each state follows the action of whichever constituent policy has the highest value. The max-following policy is always at least as good as the best constituent policy, and may be considerably better. Our main result is an efficient algorithm that learns to compete with the max-following policy, given only access to the constituent policies (but not their value functions). In contrast to prior work in similar settings, our theoretical results require only the minimal assumption of an ERM oracle for value function approximation for the constituent policies (and not the global optimal policy or the max-following policy itself) on samplable distributions. We illustrate our algorithm's experimental effectiveness and behavior on several robotic simulation testbeds. | [
"['Marcel Hussing' 'Michael Kearns' 'Aaron Roth' 'Sikata Bela Sengupta'\n 'Jessica Sorrell']"
] |