categories (string) | doi (string) | id (string) | year (float64) | venue (string) | link (string) | updated (string) | published (string) | title (string) | abstract (string) | authors (sequence) |
---|---|---|---|---|---|---|---|---|---|---|
null | null | 2406.15661 | null | null | http://arxiv.org/pdf/2406.15661v1 | 2024-06-21T21:36:18Z | 2024-06-21T21:36:18Z | The Stochastic Occupation Kernel Method for System Identification | The method of occupation kernels has been used to learn ordinary differential equations from data in a non-parametric way. We propose a two-step method for learning the drift and diffusion of a stochastic differential equation given snapshots of the process. In the first step, we learn the drift by applying the occupation kernel algorithm to the expected value of the process. In the second step, we learn the diffusion given the drift using a semi-definite program. Specifically, we learn the diffusion squared as a non-negative function in an RKHS associated with the square of a kernel. We present examples and simulations. | [
"['Michael Wells' 'Kamel Lahouel' 'Bruno Jedynak']"
] |
null | null | 2406.15662 | null | null | http://arxiv.org/pdf/2406.15662v1 | 2024-06-21T21:39:34Z | 2024-06-21T21:39:34Z | Matching Problems to Solutions: An Explainable Way of Solving Machine
Learning Problems | Domain experts from all fields are called upon, working with data scientists, to explore the use of ML techniques to solve their problems. Starting from a domain problem/question, ML-based problem-solving typically involves three steps: (1) formulating the business problem (problem domain) as a data analysis problem (solution domain), (2) sketching a high-level ML-based solution pattern, given the domain requirements and the properties of the available data, and (3) designing and refining the different components of the solution pattern. There has to be a substantial body of ML problem solving knowledge that ML researchers agree on, and that ML practitioners routinely apply to solve the most common problems. Our work deals with capturing this body of knowledge, and embodying it in an ML problem solving workbench to help domain specialists who are not ML experts to explore the ML solution space. This paper focuses on: 1) the representation of domain problems, ML problems, and the main ML solution artefacts, and 2) a heuristic matching function that helps identify the ML algorithm family that is most appropriate for the domain problem at hand, given the domain (expert) requirements, and the characteristics of the training data. We review related work and outline our strategy for validating the workbench. | [
"['Lokman Saleh' 'Hafedh Mili' 'Mounir Boukadoum']"
] |
null | null | 2406.15664 | null | null | http://arxiv.org/pdf/2406.15664v1 | 2024-06-21T21:44:27Z | 2024-06-21T21:44:27Z | Flat Posterior Does Matter For Bayesian Transfer Learning | The large-scale pre-trained neural network has achieved notable success in enhancing performance for downstream tasks. Another promising approach for generalization is Bayesian Neural Network (BNN), which integrates Bayesian methods into neural network architectures, offering advantages such as Bayesian Model averaging (BMA) and uncertainty quantification. Despite these benefits, transfer learning for BNNs has not been widely investigated and shows limited improvement. We hypothesize that this issue arises from the inability to find flat minima, which is crucial for generalization performance. To address this, we evaluate the sharpness of BNNs in various settings, revealing their insufficiency in seeking flat minima and the influence of flatness on BMA performance. Therefore, we propose Sharpness-aware Bayesian Model Averaging (SA-BMA), a Bayesian-fitting flat posterior seeking optimizer integrated with Bayesian transfer learning. SA-BMA calculates the divergence between posteriors in the parameter space, aligning with the nature of BNNs, and serves as a generalized version of existing sharpness-aware optimizers. We validate that SA-BMA improves generalization performance in few-shot classification and distribution shift scenarios by ensuring flatness. | [
"['Sungjun Lim' 'Jeyoon Yeom' 'Sooyon Kim' 'Hoyoon Byun' 'Jinho Kang'\n 'Yohan Jung' 'Jiyoung Jung' 'Kyungwoo Song']"
] |
null | null | 2406.15708 | null | null | http://arxiv.org/pdf/2406.15708v1 | 2024-06-22T02:07:10Z | 2024-06-22T02:07:10Z | Teach Better or Show Smarter? On Instructions and Exemplars in Automatic
Prompt Optimization | Large language models have demonstrated remarkable capabilities, but their performance is heavily reliant on effective prompt engineering. Automatic prompt optimization (APO) methods are designed to automate this and can be broadly categorized into those targeting instructions (instruction optimization, IO) vs. those targeting exemplars (exemplar selection, ES). Despite their shared objective, these have evolved rather independently, with IO recently receiving more research attention. This paper seeks to bridge this gap by comprehensively comparing the performance of representative IO and ES techniques, both in isolation and in combination, on a diverse set of challenging tasks. Our findings reveal that intelligently reusing model-generated input-output pairs obtained from evaluating prompts on the validation set as exemplars consistently improves performance over IO methods but is currently under-investigated. We also find that despite the recent focus on IO, how we select exemplars can outweigh how we optimize instructions, with ES strategies as simple as random search outperforming state-of-the-art IO methods with seed instructions without any optimization. Moreover, we observe synergy between ES and IO, with optimal combinations surpassing individual contributions. We conclude that studying exemplar selection as a standalone method and its optimal combination with instruction optimization remains a crucial aspect of APO and deserves greater consideration in future research, even in the era of highly capable instruction-following models. | [
"['Xingchen Wan' 'Ruoxi Sun' 'Hootan Nakhost' 'Sercan O. Arik']"
] |
null | null | 2406.15713 | null | null | http://arxiv.org/pdf/2406.15713v2 | 2024-06-26T15:00:46Z | 2024-06-22T02:37:13Z | Efficient Low-rank Identification via Accelerated Iteratively Reweighted
Nuclear Norm Minimization | This paper considers the problem of minimizing the sum of a smooth function and the Schatten-$p$ norm of the matrix. Our contribution involves proposing accelerated iteratively reweighted nuclear norm methods designed for solving the nonconvex low-rank minimization problem. Two major novelties characterize our approach. Firstly, the proposed method possesses a rank identification property, enabling the provable identification of the "correct" rank of the stationary point within a finite number of iterations. Secondly, we introduce an adaptive updating strategy for smoothing parameters. This strategy automatically fixes parameters associated with zero singular values as constants upon detecting the "correct" rank while quickly driving the rest of the parameters to zero. This adaptive behavior transforms the algorithm into one that effectively solves smooth problems after a few iterations, setting our work apart from existing iteratively reweighted methods for low-rank optimization. We prove the global convergence of the proposed algorithm, guaranteeing that every limit point of the iterates is a critical point. Furthermore, a local convergence rate analysis is provided under the Kurdyka-Łojasiewicz property. We conduct numerical experiments using both synthetic and real data to showcase our algorithm's efficiency and superiority over existing methods. | [
"['Hao Wang' 'Ye Wang' 'Xiangyu Yang']"
] |
null | null | 2406.15736 | null | null | http://arxiv.org/pdf/2406.15736v1 | 2024-06-22T05:04:39Z | 2024-06-22T05:04:39Z | Evaluating Large Vision-and-Language Models on Children's Mathematical
Olympiads | Recent years have seen a significant progress in the general-purpose problem solving abilities of large vision and language models (LVLMs), such as ChatGPT, Gemini, etc.; some of these breakthroughs even seem to enable AI models to outperform human abilities in varied tasks that demand higher-order cognitive skills. Are the current large AI models indeed capable of generalized problem solving as humans do? A systematic analysis of AI capabilities for joint vision and text reasoning, however, is missing in the current scientific literature. In this paper, we make an effort towards filling this gap, by evaluating state-of-the-art LVLMs on their mathematical and algorithmic reasoning abilities using visuo-linguistic problems from children's Olympiads. Specifically, we consider problems from the Mathematical Kangaroo (MK) Olympiad, which is a popular international competition targeted at children from grades 1-12, that tests children's deeper mathematical abilities using puzzles that are appropriately gauged to their age and skills. Using the puzzles from MK, we created a dataset, dubbed SMART-840, consisting of 840 problems from years 2020-2024. With our dataset, we analyze LVLMs power on mathematical reasoning; their responses on our puzzles offer a direct way to compare against that of children. Our results show that modern LVLMs do demonstrate increasingly powerful reasoning skills in solving problems for higher grades, but lack the foundations to correctly answer problems designed for younger children. Further analysis shows that there is no significant correlation between the reasoning capabilities of AI models and that of young children, and their capabilities appear to be based on a different type of reasoning than the cumulative knowledge that underlies children's mathematics and logic skills. | [
"['Anoop Cherian' 'Kuan-Chuan Peng' 'Suhas Lohit' 'Joanna Matthiesen'\n 'Kevin Smith' 'Joshua B. Tenenbaum']"
] |
null | null | 2406.15741 | null | null | http://arxiv.org/pdf/2406.15741v1 | 2024-06-22T05:33:35Z | 2024-06-22T05:33:35Z | Ladder: A Model-Agnostic Framework Boosting LLM-based Machine
Translation to the Next Level | General-purpose Large Language Models (LLMs) like GPT-4 have achieved remarkable advancements in machine translation (MT) by leveraging extensive web content. On the other hand, translation-specific LLMs are built by pre-training on domain-specific monolingual corpora and fine-tuning with human-annotated translation data. Despite the superior performance, these methods either demand an unprecedented scale of computing and data or substantial human editing and annotation efforts. In this paper, we develop Ladder, a novel model-agnostic and cost-effective tool to refine the performance of general LLMs for MT. Ladder is trained on pseudo-refinement triplets which can be easily obtained from existing LLMs without additional human cost. During training, we propose a hierarchical fine-tuning strategy with an easy-to-hard schema, improving Ladder's refining performance progressively. The trained Ladder can be seamlessly integrated with any general-purpose LLMs to boost their translation performance. By utilizing Gemma-2B/7B as the backbone, Ladder-2B can elevate raw translations to the level of top-tier open-source models (e.g., refining BigTranslate-13B with +6.91 BLEU and +3.52 COMET for XX-En), and Ladder-7B can further enhance model performance to be on par with the state-of-the-art GPT-4. Extensive ablation and analysis corroborate the effectiveness of Ladder in diverse settings. Our code is available at https://github.com/fzp0424/Ladder | [
"['Zhaopeng Feng' 'Ruizhe Chen' 'Yan Zhang' 'Zijie Meng' 'Zuozhu Liu']"
] |
null | null | 2406.15742 | null | null | http://arxiv.org/abs/2406.15742v1 | 2024-06-22T05:49:37Z | 2024-06-22T05:49:37Z | Probabilistic Programming with Programmable Variational Inference | Compared to the wide array of advanced Monte Carlo methods supported by modern probabilistic programming languages (PPLs), PPL support for variational inference (VI) is less developed: users are typically limited to a predefined selection of variational objectives and gradient estimators, which are implemented monolithically (and without formal correctness arguments) in PPL backends. In this paper, we propose a more modular approach to supporting variational inference in PPLs, based on compositional program transformation. In our approach, variational objectives are expressed as programs, that may employ first-class constructs for computing densities of and expected values under user-defined models and variational families. We then transform these programs systematically into unbiased gradient estimators for optimizing the objectives they define. Our design enables modular reasoning about many interacting concerns, including automatic differentiation, density accumulation, tracing, and the application of unbiased gradient estimation strategies. Additionally, relative to existing support for VI in PPLs, our design increases expressiveness along three axes: (1) it supports an open-ended set of user-defined variational objectives, rather than a fixed menu of options; (2) it supports a combinatorial space of gradient estimation strategies, many not automated by today's PPLs; and (3) it supports a broader class of models and variational families, because it supports constructs for approximate marginalization and normalization (previously introduced only for Monte Carlo inference). We implement our approach in an extension to the Gen probabilistic programming system (genjax.vi, implemented in JAX), and evaluate on several deep generative modeling tasks, showing minimal performance overhead vs. hand-coded implementations and performance competitive with well-established open-source PPLs. | [
"['McCoy R. Becker' 'Alexander K. Lew' 'Xiaoyan Wang' 'Matin Ghavami'\n 'Mathieu Huot' 'Martin C. Rinard' 'Vikash K. Mansinghka']"
] |
null | null | 2406.15747 | null | null | http://arxiv.org/pdf/2406.15747v1 | 2024-06-22T06:21:44Z | 2024-06-22T06:21:44Z | Modeling Unknown Stochastic Dynamical System Subject to External
Excitation | We present a numerical method for learning an unknown nonautonomous stochastic dynamical system, i.e., a stochastic system subject to time-dependent excitation or control signals. Our basic assumption is that the governing equations for the stochastic system are unavailable. However, short bursts of input/output (I/O) data consisting of certain known excitation signals and their corresponding system responses are available. When a sufficient amount of such I/O data are available, our method is capable of learning the unknown dynamics and producing an accurate predictive model for the stochastic responses of the system subject to arbitrary excitation signals not in the training data. Our method has two key components: (1) a local approximation of the training I/O data to transfer the learning into a parameterized form; and (2) a generative model to approximate the underlying unknown stochastic flow map in distribution. After presenting the method in detail, we present a comprehensive set of numerical examples to demonstrate the performance of the proposed method, especially for long-term system predictions. | [
"['Yuan Chen' 'Dongbin Xiu']"
] |
null | null | 2406.15753 | null | null | http://arxiv.org/pdf/2406.15753v1 | 2024-06-22T06:43:51Z | 2024-06-22T06:43:51Z | The Perils of Optimizing Learned Reward Functions: Low Training Error
Does Not Guarantee Low Regret | In reinforcement learning, specifying reward functions that capture the intended task can be very challenging. Reward learning aims to address this issue by learning the reward function. However, a learned reward model may have a low error on the training distribution, and yet subsequently produce a policy with large regret. We say that such a reward model has an error-regret mismatch. The main source of an error-regret mismatch is the distributional shift that commonly occurs during policy optimization. In this paper, we mathematically show that a sufficiently low expected test error of the reward model guarantees low worst-case regret, but that for any fixed expected test error, there exist realistic data distributions that allow for error-regret mismatch to occur. We then show that similar problems persist even when using policy regularization techniques, commonly employed in methods such as RLHF. Our theoretical results highlight the importance of developing new ways to measure the quality of learned reward models. | [
"['Lukas Fluri' 'Leon Lang' 'Alessandro Abate' 'Patrick Forré'\n 'David Krueger' 'Joar Skalse']"
] |
null | null | 2406.15754 | null | null | http://arxiv.org/pdf/2406.15754v1 | 2024-06-22T06:44:38Z | 2024-06-22T06:44:38Z | Multimodal Segmentation for Vocal Tract Modeling | Accurate modeling of the vocal tract is necessary to construct articulatory representations for interpretable speech processing and linguistics. However, vocal tract modeling is challenging because many internal articulators are occluded from external motion capture technologies. Real-time magnetic resonance imaging (RT-MRI) allows measuring precise movements of internal articulators during speech, but annotated datasets of MRI are limited in size due to time-consuming and computationally expensive labeling methods. We first present a deep labeling strategy for the RT-MRI video using a vision-only segmentation approach. We then introduce a multimodal algorithm using audio to improve segmentation of vocal articulators. Together, we set a new benchmark for vocal tract modeling in MRI video segmentation and use this to release labels for a 75-speaker RT-MRI dataset, increasing the amount of labeled public RT-MRI data of the vocal tract by over a factor of 9. The code and dataset labels can be found at rishiraij.github.io/multimodal-mri-avatar/. | [
"['Rishi Jain' 'Bohan Yu' 'Peter Wu' 'Tejas Prabhune'\n 'Gopala Anumanchipalli']"
] |
null | null | 2406.15758 | null | null | http://arxiv.org/pdf/2406.15758v1 | 2024-06-22T06:51:47Z | 2024-06-22T06:51:47Z | EDGE-LLM: Enabling Efficient Large Language Model Adaptation on Edge
Devices via Layerwise Unified Compression and Adaptive Layer Tuning and
Voting | Efficient adaption of large language models (LLMs) on edge devices is essential for applications requiring continuous and privacy-preserving adaptation and inference. However, existing tuning techniques fall short because of the high computation and memory overheads. To this end, we introduce a computation- and memory-efficient LLM tuning framework, called Edge-LLM, to facilitate affordable and effective LLM adaptation on edge devices. Specifically, Edge-LLM features three core components: (1) a layer-wise unified compression (LUC) technique to reduce the computation overhead by generating layer-wise pruning sparsity and quantization bit-width policies, (2) an adaptive layer tuning and voting scheme to reduce the memory overhead by reducing the backpropagation depth, and (3) a complementary hardware scheduling strategy to handle the irregular computation patterns introduced by LUC and adaptive layer tuning, thereby achieving efficient computation and data movements. Extensive experiments demonstrate that Edge-LLM achieves a 2.92x speed up and a 4x memory overhead reduction as compared to vanilla tuning methods with comparable task accuracy. Our code is available at https://github.com/GATECH-EIC/Edge-LLM | [
"['Zhongzhi Yu' 'Zheng Wang' 'Yuhan Li' 'Haoran You' 'Ruijie Gao'\n 'Xiaoya Zhou' 'Sreenidhi Reedy Bommu' 'Yang Katie Zhao'\n 'Yingyan Celine Lin']"
] |
null | null | 2406.15760 | null | null | http://arxiv.org/pdf/2406.15760v1 | 2024-06-22T06:56:09Z | 2024-06-22T06:56:09Z | ICM Ensemble with Novel Betting Functions for Concept Drift | This study builds upon our previous work by introducing a refined Inductive Conformal Martingale (ICM) approach for addressing Concept Drift (CD). Specifically, we enhance our previously proposed CAUTIOUS betting function to incorporate multiple density estimators for improving detection ability. We also combine this betting function with two base estimators that have not been previously utilized within the ICM framework: the Interpolated Histogram and Nearest Neighbor Density Estimators. We assess these extensions using both a single ICM and an ensemble of ICMs. For the latter, we conduct a comprehensive experimental investigation into the influence of the ensemble size on prediction accuracy and the number of available predictions. Our experimental results on four benchmark datasets demonstrate that the proposed approach surpasses our previous methodology in terms of performance while matching or in many cases exceeding that of three contemporary state-of-the-art techniques. | [
"['Charalambos Eliades' 'Harris Papadopoulos']"
] |
null | null | 2406.15762 | null | null | http://arxiv.org/pdf/2406.15762v1 | 2024-06-22T06:59:32Z | 2024-06-22T06:59:32Z | Rethinking the Diffusion Models for Numerical Tabular Data Imputation
from the Perspective of Wasserstein Gradient Flow | Diffusion models (DMs) have gained attention in Missing Data Imputation (MDI), but there remain two long-neglected issues to be addressed: (1). Inaccurate Imputation, which arises from inherently sample-diversification-pursuing generative process of DMs. (2). Difficult Training, which stems from intricate design required for the mask matrix in model training stage. To address these concerns within the realm of numerical tabular datasets, we introduce a novel principled approach termed Kernelized Negative Entropy-regularized Wasserstein gradient flow Imputation (KnewImp). Specifically, based on Wasserstein gradient flow (WGF) framework, we first prove that issue (1) stems from the cost functionals implicitly maximized in DM-based MDI are equivalent to the MDI's objective plus diversification-promoting non-negative terms. Based on this, we then design a novel cost functional with diversification-discouraging negative entropy and derive our KnewImp approach within WGF framework and reproducing kernel Hilbert space. After that, we prove that the imputation procedure of KnewImp can be derived from another cost functional related to the joint distribution, eliminating the need for the mask matrix and hence naturally addressing issue (2). Extensive experiments demonstrate that our proposed KnewImp approach significantly outperforms existing state-of-the-art methods. | [
"['Zhichao Chen' 'Haoxuan Li' 'Fangyikang Wang' 'Odin Zhang' 'Hu Xu'\n 'Xiaoyu Jiang' 'Zhihuan Song' 'Eric H. Wang']"
] |
null | null | 2406.15763 | null | null | http://arxiv.org/pdf/2406.15763v2 | 2024-07-09T14:35:57Z | 2024-06-22T06:59:52Z | AllMatch: Exploiting All Unlabeled Data for Semi-Supervised Learning | Existing semi-supervised learning algorithms adopt pseudo-labeling and consistency regulation techniques to introduce supervision signals for unlabeled samples. To overcome the inherent limitation of threshold-based pseudo-labeling, prior studies have attempted to align the confidence threshold with the evolving learning status of the model, which is estimated through the predictions made on the unlabeled data. In this paper, we further reveal that classifier weights can reflect the differentiated learning status across categories and consequently propose a class-specific adaptive threshold mechanism. Additionally, considering that even the optimal threshold scheme cannot resolve the problem of discarding unlabeled samples, a binary classification consistency regulation approach is designed to distinguish candidate classes from negative options for all unlabeled samples. By combining the above strategies, we present a novel SSL algorithm named AllMatch, which achieves improved pseudo-label accuracy and a 100% utilization ratio for the unlabeled data. We extensively evaluate our approach on multiple benchmarks, encompassing both balanced and imbalanced settings. The results demonstrate that AllMatch consistently outperforms existing state-of-the-art methods. | [
"['Zhiyu Wu' 'Jinshi Cui']"
] |
null | null | 2406.15765 | null | null | http://arxiv.org/pdf/2406.15765v1 | 2024-06-22T07:00:43Z | 2024-06-22T07:00:43Z | Unveiling and Harnessing Hidden Attention Sinks: Enhancing Large
Language Models without Training through Attention Calibration | Attention is a fundamental component behind the remarkable achievements of large language models (LLMs). However, our current understanding of the attention mechanism, especially regarding how attention distributions are established, remains limited. Inspired by recent studies that explore the presence of attention sink in the initial token, which receives disproportionately large attention scores despite their lack of semantic importance, this work delves deeper into this phenomenon. We aim to provide a more profound understanding of the existence of attention sinks within LLMs and to uncover ways to enhance the achievable accuracy of LLMs by directly optimizing the attention distributions, without the need for weight finetuning. Specifically, this work begins with comprehensive visualizations of the attention distributions in LLMs during inference across various inputs and tasks. Based on these visualizations, to the best of our knowledge, we are the first to discover that (1) attention sinks occur not only at the start of sequences but also within later tokens of the input, and (2) not all attention sinks have a positive impact on the achievable accuracy of LLMs. Building upon our findings, we propose a training-free Attention Calibration Technique (ACT) that automatically optimizes the attention distributions on the fly during inference in an input-adaptive manner. Extensive experiments validate that ACT consistently enhances the accuracy of various LLMs across different applications. Specifically, ACT achieves an average improvement of up to 7.30% in accuracy across different datasets when applied to Llama-30B. Our code is available at https://github.com/GATECH-EIC/ACT. | [
"['Zhongzhi Yu' 'Zheng Wang' 'Yonggan Fu' 'Huihong Shi' 'Khalid Shaikh'\n 'Yingyan Celine Lin']"
] |
null | null | 2406.15766 | null | null | http://arxiv.org/pdf/2406.15766v1 | 2024-06-22T07:02:25Z | 2024-06-22T07:02:25Z | Continual Learning with Diffusion-based Generative Replay for Industrial
Streaming Data | The Industrial Internet of Things (IIoT) integrates interconnected sensors and devices to support industrial applications, but its dynamic environments pose challenges related to data drift. Considering the limited resources and the need to effectively adapt models to new data distributions, this paper introduces a Continual Learning (CL) approach, i.e., Distillation-based Self-Guidance (DSG), to address challenges presented by industrial streaming data via a novel generative replay mechanism. DSG utilizes knowledge distillation to transfer knowledge from the previous diffusion-based generator to the updated one, improving both the stability of the generator and the quality of reproduced data, thereby enhancing the mitigation of catastrophic forgetting. Experimental results on CWRU, DSA, and WISDM datasets demonstrate the effectiveness of DSG. DSG outperforms the state-of-the-art baseline in accuracy, demonstrating improvements ranging from 2.9% to 5.0% on key datasets, showcasing its potential for practical industrial applications. | [
"['Jiayi He' 'Jiao Chen' 'Qianmiao Liu' 'Suyan Dai' 'Jianhua Tang'\n 'Dongpo Liu']"
] |
null | null | 2406.15786 | null | null | http://arxiv.org/pdf/2406.15786v2 | 2024-07-08T00:28:52Z | 2024-06-22T08:41:48Z | What Matters in Transformers? Not All Attention is Needed | Scaling Transformer-based large language models (LLMs) has demonstrated promising performance across various tasks. However, this scaling also introduces redundant structures, posing challenges for real-world deployment. Despite some recognition of redundancy in LLMs, the variability of redundancy across different structures, such as MLP and Attention layers, is under-explored. In this work, we investigate the varying redundancy across different modules within Transformers, including Blocks, MLP, and Attention layers, using a similarity-based metric. This metric operates on the premise that redundant structures produce outputs highly similar to their inputs. Surprisingly, while attention layers are essential for transformers and distinguish them from other mainstream architectures, we found that a large proportion of attention layers exhibit excessively high similarity and can be safely pruned without degrading performance, leading to reduced memory and computation costs. Additionally, we further propose a method that jointly drops Attention and MLP layers, achieving improved performance and dropping ratios. Extensive experiments demonstrate the effectiveness of our methods, e.g., Llama-3-70B maintains comparable performance even after pruning half of the attention layers. Our findings provide valuable insights for future network architecture design. The code will be released at: https://github.com/Shwai-He/LLM-Drop. | [
"['Shwai He' 'Guoheng Sun' 'Zheyu Shen' 'Ang Li']"
] |
null | null | 2406.15788 | null | null | http://arxiv.org/pdf/2406.15788v1 | 2024-06-22T08:51:57Z | 2024-06-22T08:51:57Z | Distributionally Robust Constrained Reinforcement Learning under Strong
Duality | We study the problem of Distributionally Robust Constrained RL (DRC-RL), where the goal is to maximize the expected reward subject to environmental distribution shifts and constraints. This setting captures situations where training and testing environments differ, and policies must satisfy constraints motivated by safety or limited budgets. Despite significant progress toward algorithm design for the separate problems of distributionally robust RL and constrained RL, there do not yet exist algorithms with end-to-end convergence guarantees for DRC-RL. We develop an algorithmic framework based on strong duality that enables the first efficient and provable solution in a class of environmental uncertainties. Further, our framework exposes an inherent structure of DRC-RL that arises from the combination of distributional robustness and constraints, which prevents a popular class of iterative methods from tractably solving DRC-RL, despite such frameworks being applicable for each of distributionally robust RL and constrained RL individually. Finally, we conduct experiments on a car racing benchmark to evaluate the effectiveness of the proposed algorithm. | [
"['Zhengfei Zhang' 'Kishan Panaganti' 'Laixi Shi' 'Yanan Sui'\n 'Adam Wierman' 'Yisong Yue']"
] |
null | null | 2406.15789 | null | null | http://arxiv.org/pdf/2406.15789v1 | 2024-06-22T08:51:58Z | 2024-06-22T08:51:58Z | Privacy Implications of Explainable AI in Data-Driven Systems | Machine learning (ML) models, demonstrably powerful, suffer from a lack of interpretability. The absence of transparency, often referred to as the black box nature of ML models, undermines trust and urges the need for efforts to enhance their explainability. Explainable AI (XAI) techniques address this challenge by providing frameworks and methods to explain the internal decision-making processes of these complex models. Techniques like Counterfactual Explanations (CF) and Feature Importance play a crucial role in achieving this goal. Furthermore, high-quality and diverse data remains the foundational element for robust and trustworthy ML applications. In many applications, the data used to train ML and XAI explainers contain sensitive information. In this context, numerous privacy-preserving techniques can be employed to safeguard sensitive information in the data, such as differential privacy. Subsequently, a conflict between XAI and privacy solutions emerges due to their opposing goals. Since XAI techniques provide reasoning for the model behavior, they reveal information relative to ML models, such as their decision boundaries, the values of features, or the gradients of deep learning models when explanations are exposed to a third entity. Attackers can initiate privacy breaching attacks using these explanations, to perform model extraction, inference, and membership attacks. This dilemma underscores the challenge of finding the right equilibrium between understanding ML decision-making and safeguarding privacy. | [
"['Fatima Ezzeddine']"
] |
null | null | 2406.15797 | null | null | http://arxiv.org/pdf/2406.15797v1 | 2024-06-22T09:40:34Z | 2024-06-22T09:40:34Z | Synergistic Deep Graph Clustering Network | Employing graph neural networks (GNNs) to learn cohesive and discriminative node representations for clustering has shown promising results in deep graph clustering. However, existing methods disregard the reciprocal relationship between representation learning and structure augmentation. This study suggests that enhancing embedding and structure synergistically becomes imperative for GNNs to unleash their potential in deep graph clustering. A reliable structure promotes obtaining more cohesive node representations, while high-quality node representations can guide the augmentation of the structure, enhancing structural reliability in return. Moreover, the generalization ability of existing GNNs-based models is relatively poor. While they perform well on graphs with high homogeneity, they perform poorly on graphs with low homogeneity. To this end, we propose a graph clustering framework named Synergistic Deep Graph Clustering Network (SynC). In our approach, we design a Transform Input Graph Auto-Encoder (TIGAE) to obtain high-quality embeddings for guiding structure augmentation. Then, we re-capture neighborhood representations on the augmented graph to obtain clustering-friendly embeddings and conduct self-supervised clustering. Notably, representation learning and structure augmentation share weights, significantly reducing the number of model parameters. Additionally, we introduce a structure fine-tuning strategy to improve the model's generalization. Extensive experiments on benchmark datasets demonstrate the superiority and effectiveness of our method. The code is released on GitHub and Code Ocean. | [
"['Benyu Wu' 'Shifei Ding' 'Xiao Xu' 'Lili Guo' 'Ling Ding' 'Xindong Wu']"
] |
null | null | 2406.15809 | null | null | http://arxiv.org/pdf/2406.15809v1 | 2024-06-22T10:25:55Z | 2024-06-22T10:25:55Z | LaMSUM: A Novel Framework for Extractive Summarization of User Generated
Content using LLMs | Large Language Models (LLMs) have demonstrated impressive performance across a wide range of NLP tasks, including summarization. Inherently LLMs produce abstractive summaries, and the task of achieving extractive summaries through LLMs still remains largely unexplored. To bridge this gap, in this work, we propose a novel framework LaMSUM to generate extractive summaries through LLMs for large user-generated text by leveraging voting algorithms. Our evaluation on three popular open-source LLMs (Llama 3, Mixtral and Gemini) reveals that LaMSUM outperforms state-of-the-art extractive summarization methods. We further attempt to provide the rationale behind the output summary produced by LLMs. Overall, this is one of the early attempts to achieve extractive summarization for large user-generated text by utilizing LLMs, and is likely to generate further interest in the community. | [
"['Garima Chhikara' 'Anurag Sharma' 'V. Gurucharan' 'Kripabandhu Ghosh'\n 'Abhijnan Chakraborty']"
] |
null | null | 2406.15812 | null | null | http://arxiv.org/pdf/2406.15812v1 | 2024-06-22T10:36:04Z | 2024-06-22T10:36:04Z | Intrinsic Dimension Correlation: uncovering nonlinear connections in
multimodal representations | To gain insight into the mechanisms behind machine learning methods, it is crucial to establish connections among the features describing data points. However, these correlations often exhibit a high-dimensional and strongly nonlinear nature, which makes them challenging to detect using standard methods. This paper exploits the entanglement between intrinsic dimensionality and correlation to propose a metric that quantifies the (potentially nonlinear) correlation between high-dimensional manifolds. We first validate our method on synthetic data in controlled environments, showcasing its advantages and drawbacks compared to existing techniques. Subsequently, we extend our analysis to large-scale applications in neural network representations. Specifically, we focus on latent representations of multimodal data, uncovering clear correlations between paired visual and textual embeddings, whereas existing methods struggle significantly in detecting similarity. Our results indicate the presence of highly nonlinear correlation patterns between latent manifolds. | [
"['Lorenzo Basile' 'Santiago Acevedo' 'Luca Bortolussi' 'Fabio Anselmi'\n 'Alex Rodriguez']"
] |
null | null | 2406.15819 | null | null | http://arxiv.org/pdf/2406.15819v1 | 2024-06-22T11:17:50Z | 2024-06-22T11:17:50Z | Automatic AI Model Selection for Wireless Systems: Online Learning via
Digital Twinning | In modern wireless network architectures, such as O-RAN, artificial intelligence (AI)-based applications are deployed at intelligent controllers to carry out functionalities like scheduling or power control. The AI "apps" are selected on the basis of contextual information such as network conditions, topology, traffic statistics, and design goals. The mapping between context and AI model parameters is ideally done in a zero-shot fashion via an automatic model selection (AMS) mapping that leverages only contextual information without requiring any current data. This paper introduces a general methodology for the online optimization of AMS mappings. Optimizing an AMS mapping is challenging, as it requires exposure to data collected from many different contexts. Therefore, if carried out online, this initial optimization phase would be extremely time consuming. A possible solution is to leverage a digital twin of the physical system to generate synthetic data from multiple simulated contexts. However, given that the simulator at the digital twin is imperfect, a direct use of simulated data for the optimization of the AMS mapping would yield poor performance when tested in the real system. This paper proposes a novel method for the online optimization of AMS mapping that corrects for the bias of the simulator by means of limited real data collected from the physical system. Experimental results for a graph neural network-based power control app demonstrate the significant advantages of the proposed approach. | [
"['Qiushuo Hou' 'Matteo Zecchin' 'Sangwoo Park' 'Yunlong Cai' 'Guanding Yu'\n 'Kaushik Chowdhury' 'Osvaldo Simeone']"
] |
null | null | 2406.15836 | null | null | http://arxiv.org/pdf/2406.15836v1 | 2024-06-22T12:40:03Z | 2024-06-22T12:40:03Z | Decentralized Transformers with Centralized Aggregation are
Sample-Efficient Multi-Agent World Models | Learning a world model for model-free Reinforcement Learning (RL) agents can significantly improve the sample efficiency by learning policies in imagination. However, building a world model for Multi-Agent RL (MARL) can be particularly challenging due to the scalability issue in a centralized architecture arising from a large number of agents, and also the non-stationarity issue in a decentralized architecture stemming from the inter-dependency among agents. To address both challenges, we propose a novel world model for MARL that learns decentralized local dynamics for scalability, combined with a centralized representation aggregation from all agents. We cast the dynamics learning as an auto-regressive sequence modeling problem over discrete tokens by leveraging the expressive Transformer architecture, in order to model complex local dynamics across different agents and provide accurate and consistent long-term imaginations. As the first pioneering Transformer-based world model for multi-agent systems, we introduce a Perceiver Transformer as an effective solution to enable centralized representation aggregation within this context. Results on Starcraft Multi-Agent Challenge (SMAC) show that it outperforms strong model-free approaches and existing model-based methods in both sample efficiency and overall performance. | [
"['Yang Zhang' 'Chenjia Bai' 'Bin Zhao' 'Junchi Yan' 'Xiu Li' 'Xuelong Li']"
] |
null | null | 2406.15839 | null | null | http://arxiv.org/pdf/2406.15839v1 | 2024-06-22T12:59:12Z | 2024-06-22T12:59:12Z | The Effect of Similarity Measures on Accurate Stability Estimates for
Local Surrogate Models in Text-based Explainable AI | Recent work has investigated the vulnerability of local surrogate methods to adversarial perturbations on a machine learning (ML) model's inputs, where the explanation is manipulated while the meaning and structure of the original input remains similar under the complex model. While weaknesses across many methods have been shown to exist, the reasons behind why still remain little explored. Central to the concept of adversarial attacks on explainable AI (XAI) is the similarity measure used to calculate how one explanation differs from another. A poor choice of similarity measure can result in erroneous conclusions on the efficacy of an XAI method. Too sensitive a measure results in exaggerated vulnerability, while too coarse understates its weakness. We investigate a variety of similarity measures designed for text-based ranked lists including Kendall's Tau, Spearman's Footrule and Rank-biased Overlap to determine how substantial changes in the type of measure or threshold of success affect the conclusions generated from common adversarial attack processes. Certain measures are found to be overly sensitive, resulting in erroneous estimates of stability. | [
"['Christopher Burger' 'Charles Walter' 'Thai Le']"
] |
null | null | 2406.15847 | null | null | http://arxiv.org/pdf/2406.15847v1 | 2024-06-22T13:26:14Z | 2024-06-22T13:26:14Z | Enhancing Solar Driver Forecasting with Multivariate Transformers | In this work, we develop a comprehensive framework for F10.7, S10.7, M10.7, and Y10.7 solar driver forecasting with a time series Transformer (PatchTST). To ensure an equal representation of high and low levels of solar activity, we construct a custom loss function to weight samples based on the distance between the solar driver's historical distribution and the training set. The solar driver forecasting framework includes an 18-day lookback window and forecasts 6 days into the future. When benchmarked against the Space Environment Technologies (SET) dataset, our model consistently produces forecasts with a lower standard mean error in nearly all cases, with improved prediction accuracy during periods of high solar activity. All the code is available on Github https://github.com/ARCLab-MIT/sw-driver-forecaster. | [
"['Sergio Sanchez-Hurtado' 'Victor Rodriguez-Fernandez' 'Julia Briden'\n 'Peng Mun Siew' 'Richard Linares']"
] |
null | null | 2406.15850 | null | null | http://arxiv.org/pdf/2406.15850v1 | 2024-06-22T13:41:02Z | 2024-06-22T13:41:02Z | Learning Abstract World Model for Value-preserving Planning with Options | General-purpose agents require fine-grained controls and rich sensory inputs to perform a wide range of tasks. However, this complexity often leads to intractable decision-making. Traditionally, agents are provided with task-specific action and observation spaces to mitigate this challenge, but this reduces autonomy. Instead, agents must be capable of building state-action spaces at the correct abstraction level from their sensorimotor experiences. We leverage the structure of a given set of temporally-extended actions to learn abstract Markov decision processes (MDPs) that operate at a higher level of temporal and state granularity. We characterize state abstractions necessary to ensure that planning with these skills, by simulating trajectories in the abstract MDP, results in policies with bounded value loss in the original MDP. We evaluate our approach in goal-based navigation environments that require continuous abstract states to plan successfully and show that abstract model learning improves the sample efficiency of planning and learning. | [
"['Rafael Rodriguez-Sanchez' 'George Konidaris']"
] |
null | null | 2406.15852 | null | null | http://arxiv.org/pdf/2406.15852v1 | 2024-06-22T13:57:09Z | 2024-06-22T13:57:09Z | Next Level Message-Passing with Hierarchical Support Graphs | Message-Passing Neural Networks (MPNNs) are extensively employed in graph learning tasks but suffer from limitations such as the restricted scope of information exchange, by being confined to neighboring nodes during each round of message passing. Various strategies have been proposed to address these limitations, including incorporating virtual nodes to facilitate global information exchange. In this study, we introduce the Hierarchical Support Graph (HSG), an extension of the virtual node concept created through recursive coarsening of the original graph. This approach provides a flexible framework for enhancing information flow in graphs, independent of the specific MPNN layers utilized. We present a theoretical analysis of HSGs, investigate their empirical performance, and demonstrate that HSGs can surpass other methods augmented with virtual nodes, achieving state-of-the-art results across multiple datasets. | [
"['Carlos Vonessen' 'Florian Grötschla' 'Roger Wattenhofer']"
] |
null | null | 2406.15856 | null | null | http://arxiv.org/pdf/2406.15856v1 | 2024-06-22T14:07:41Z | 2024-06-22T14:07:41Z | Injectivity of ReLU-layers: Perspectives from Frame Theory | Injectivity is the defining property of a mapping that ensures no information is lost and any input can be perfectly reconstructed from its output. By performing hard thresholding, the ReLU function naturally interferes with this property, making the injectivity analysis of ReLU-layers in neural networks a challenging yet intriguing task that has not yet been fully solved. This article establishes a frame theoretic perspective to approach this problem. The main objective is to develop the most general characterization of the injectivity behavior of ReLU-layers in terms of all three involved ingredients: (i) the weights, (ii) the bias, and (iii) the domain where the data is drawn from. Maintaining a focus on practical applications, we limit our attention to bounded domains and present two methods for numerically approximating a maximal bias for given weights and data domains. These methods provide sufficient conditions for the injectivity of a ReLU-layer on those domains and yield a novel practical methodology for studying the information loss in ReLU layers. Finally, we derive explicit reconstruction formulas based on the duality concept from frame theory. | [
"['Daniel Haider' 'Martin Ehler' 'Peter Balazs']"
] |
null | null | 2406.15873 | null | null | http://arxiv.org/pdf/2406.15873v1 | 2024-06-22T15:24:08Z | 2024-06-22T15:24:08Z | NeuralSCF: Neural network self-consistent fields for density functional
theory | Kohn-Sham density functional theory (KS-DFT) has found widespread application in accurate electronic structure calculations. However, it can be computationally demanding especially for large-scale simulations, motivating recent efforts toward its machine-learning (ML) acceleration. We propose a neural network self-consistent fields (NeuralSCF) framework that establishes the Kohn-Sham density map as a deep learning objective, which encodes the mechanics of the Kohn-Sham equations. Modeling this map with an SE(3)-equivariant graph transformer, NeuralSCF emulates the Kohn-Sham self-consistent iterations to obtain electron densities, from which other properties can be derived. NeuralSCF achieves state-of-the-art accuracy in electron density prediction and derived properties, featuring exceptional zero-shot generalization to a remarkable range of out-of-distribution systems. NeuralSCF reveals that learning from KS-DFT's intrinsic mechanics significantly enhances the model's accuracy and transferability, offering a promising stepping stone for accelerating electronic structure calculations through mechanics learning. | [
"['Feitong Song' 'Ji Feng']"
] |
null | null | 2406.15881 | null | null | http://arxiv.org/pdf/2406.15881v1 | 2024-06-22T16:05:34Z | 2024-06-22T16:05:34Z | Fast Tree-Field Integrators: From Low Displacement Rank to Topological
Transformers | We present a new class of fast polylog-linear algorithms based on the theory of structured matrices (in particular low displacement rank) for integrating tensor fields defined on weighted trees. Several applications of the resulting fast tree-field integrators (FTFIs) are presented, including (a) approximation of graph metrics with tree metrics, (b) graph classification, (c) modeling on meshes, and finally (d) Topological Transformers (TTs) (Choromanski et al., 2022) for images. For Topological Transformers, we propose new relative position encoding (RPE) masking mechanisms with as few as three extra learnable parameters per Transformer layer, leading to 1.0-1.5%+ accuracy gains. Importantly, most of FTFIs are exact methods, thus numerically equivalent to their brute-force counterparts. When applied to graphs with thousands of nodes, those exact algorithms provide 5.7-13x speedups. We also provide an extensive theoretical analysis of our methods. | [
"['Krzysztof Choromanski' 'Arijit Sehanobish' 'Somnath Basu Roy Chowdhury'\n 'Han Lin' 'Avinava Dubey' 'Tamas Sarlos' 'Snigdha Chaturvedi']"
] |
null | null | 2406.15888 | null | null | http://arxiv.org/pdf/2406.15888v1 | 2024-06-22T16:37:51Z | 2024-06-22T16:37:51Z | Real-time Speech Summarization for Medical Conversations | In doctor-patient conversations, identifying medically relevant information is crucial, posing the need for conversation summarization. In this work, we propose the first deployable real-time speech summarization system for real-world applications in industry, which generates a local summary after every N speech utterances within a conversation and a global summary after the end of a conversation. Our system could enhance user experience from a business standpoint, while also reducing computational costs from a technical perspective. Secondly, we present VietMed-Sum which, to our knowledge, is the first speech summarization dataset for medical conversations. Thirdly, we are the first to utilize LLM and human annotators collaboratively to create gold standard and synthetic summaries for medical conversation summarization. Finally, we present baseline results of state-of-the-art models on VietMed-Sum. All code, data (English-translated and Vietnamese) and models are available online: https://github.com/leduckhai/MultiMed | [
"['Khai Le-Duc' 'Khai-Nguyen Nguyen' 'Long Vo-Dang' 'Truong-Son Hy']"
] |
null | null | 2406.15890 | null | null | http://arxiv.org/pdf/2406.15890v1 | 2024-06-22T16:55:21Z | 2024-06-22T16:55:21Z | Language Alignment via Nash-learning and Adaptive feedback | Recent research has shown the potential of Nash Learning via Human Feedback for large language model alignment by incorporating the notion of a preference model in a minimax game setup. We take this idea further by casting the alignment as a mirror descent algorithm against the adaptive feedback of an improved opponent, thereby removing the need for learning a preference model or the existence of an annotated dataset altogether. The resulting algorithm, which we refer to as Language Alignment via Nash-learning and Adaptive feedback (LANA), is capable of self-alignment without the need for a human-annotated preference dataset. We support this statement with various experiments and mathematical discussion. | [
"['Ari Azarafrooz' 'Farshid Faal']"
] |
null | null | 2406.15893 | null | null | http://arxiv.org/abs/2406.15893v1 | 2024-06-22T17:04:24Z | 2024-06-22T17:04:24Z | Statistical Models of Top-$k$ Partial Orders | In many contexts involving ranked preferences, agents submit partial orders over available alternatives. Statistical models often treat these as marginal in the space of total orders, but this approach overlooks information contained in the list length itself. In this work, we introduce and taxonomize approaches for jointly modeling distributions over top-$k$ partial orders and list lengths $k$, considering two classes of approaches: composite models that view a partial order as a truncation of a total order, and augmented ranking models that model the construction of the list as a sequence of choice decisions, including the decision to stop. For composite models, we consider three dependency structures for joint modeling of order and truncation length. For augmented ranking models, we consider different assumptions on how the stop-token choice is modeled. Using data consisting of partial rankings from San Francisco school choice and San Francisco ranked choice elections, we evaluate how well the models predict observed data and generate realistic synthetic datasets. We find that composite models, explicitly modeling length as a categorical variable, produce synthetic datasets with accurate length distributions, and an augmented model with position-dependent item utilities jointly models length and preferences in the training data best, as measured by negative log loss. Methods from this work have significant implications on the simulation and evaluation of real-world social systems that solicit ranked preferences. | [
"['Amel Awadelkarim' 'Johan Ugander']"
] |
null | null | 2406.15897 | null | null | http://arxiv.org/pdf/2406.15897v2 | 2024-07-02T12:13:14Z | 2024-06-22T17:19:51Z | Fusing Audio and Metadata Embeddings Improves Language-based Audio
Retrieval | Matching raw audio signals with textual descriptions requires understanding the audio's content and the description's semantics and then drawing connections between the two modalities. This paper investigates a hybrid retrieval system that utilizes audio metadata as an additional clue to understand the content of audio signals before matching them with textual queries. We experimented with metadata often attached to audio recordings, such as keywords and natural-language descriptions, and we investigated late and mid-level fusion strategies to merge audio and metadata. Our hybrid approach with keyword metadata and late fusion improved the retrieval performance over a content-based baseline by 2.36 and 3.69 pp. mAP@10 on the ClothoV2 and AudioCaps benchmarks, respectively. | [
"['Paul Primus' 'Gerhard Widmer']"
] |
null | null | 2406.15898 | null | null | http://arxiv.org/pdf/2406.15898v1 | 2024-06-22T17:29:45Z | 2024-06-22T17:29:45Z | Defection-Free Collaboration between Competitors in a Learning System | We study collaborative learning systems in which the participants are competitors who will defect from the system if they lose revenue by collaborating. As such, we frame the system as a duopoly of competitive firms who are each engaged in training machine-learning models and selling their predictions to a market of consumers. We first examine a fully collaborative scheme in which both firms share their models with each other and show that this leads to a market collapse with the revenues of both firms going to zero. We next show that one-sided collaboration in which only the firm with the lower-quality model shares improves the revenue of both firms. Finally, we propose a more equitable, *defection-free* scheme in which both firms share with each other while losing no revenue, and we show that our algorithm converges to the Nash bargaining solution. | [
"['Mariel Werner' 'Sai Praneeth Karimireddy' 'Michael I. Jordan']"
] |
null | null | 2406.15904 | null | null | http://arxiv.org/pdf/2406.15904v1 | 2024-06-22T17:43:08Z | 2024-06-22T17:43:08Z | Learning When the Concept Shifts: Confounding, Invariance, and Dimension
Reduction | Practitioners often deploy a learned prediction model in a new environment where the joint distribution of covariate and response has shifted. In observational data, the distribution shift is often driven by unobserved confounding factors lurking in the environment, with the underlying mechanism unknown. Confounding can obfuscate the definition of the best prediction model (concept shift) and shift covariates to domains yet unseen (covariate shift). Therefore, a model maximizing prediction accuracy in the source environment could suffer a significant accuracy drop in the target environment. This motivates us to study the domain adaptation problem with observational data: given labeled covariate and response pairs from a source environment, and unlabeled covariates from a target environment, how can one predict the missing target response reliably? We root the adaptation problem in a linear structural causal model to address endogeneity and unobserved confounding. We study the necessity and benefit of leveraging exogenous, invariant covariate representations to cure concept shifts and improve target prediction. This further motivates a new representation learning method for adaptation that optimizes for a lower-dimensional linear subspace and, subsequently, a prediction model confined to that subspace. The procedure operates on a non-convex objective (one that naturally interpolates between predictability and stability/invariance) constrained on the Stiefel manifold. We study the optimization landscape and prove that, when the regularization is sufficient, nearly all local optima align with an invariant linear subspace resilient to both concept and covariate shift. In terms of predictability, we show a model that uses the learned lower-dimensional subspace can incur a nearly ideal gap between target and source risk. Three real-world data sets are investigated to validate our method and theory. | [
"['Kulunu Dharmakeerthi' 'YoonHaeng Hur' 'Tengyuan Liang']"
] |
null | null | 2406.15916 | null | null | http://arxiv.org/pdf/2406.15916v1 | 2024-06-22T18:40:07Z | 2024-06-22T18:40:07Z | Credit Attribution and Stable Compression | Credit attribution is crucial across various fields. In academic research, proper citation acknowledges prior work and establishes original contributions. Similarly, in generative models, such as those trained on existing artworks or music, it is important to ensure that any generated content influenced by these works appropriately credits the original creators. We study credit attribution by machine learning algorithms. We propose new definitions--relaxations of Differential Privacy--that weaken the stability guarantees for a designated subset of $k$ datapoints. These $k$ datapoints can be used non-stably with permission from their owners, potentially in exchange for compensation. Meanwhile, the remaining datapoints are guaranteed to have no significant influence on the algorithm's output. Our framework extends well-studied notions of stability, including Differential Privacy ($k = 0$), differentially private learning with public data (where the $k$ public datapoints are fixed in advance), and stable sample compression (where the $k$ datapoints are selected adaptively by the algorithm). We examine the expressive power of these stability notions within the PAC learning framework, provide a comprehensive characterization of learnability for algorithms adhering to these principles, and propose directions and questions for future research. | [
"['Roi Livni' 'Shay Moran' 'Kobbi Nissim' 'Chirag Pabbaraju']"
] |
null | null | 2406.15927 | null | null | http://arxiv.org/pdf/2406.15927v1 | 2024-06-22T19:46:06Z | 2024-06-22T19:46:06Z | Semantic Entropy Probes: Robust and Cheap Hallucination Detection in
LLMs | We propose semantic entropy probes (SEPs), a cheap and reliable method for uncertainty quantification in Large Language Models (LLMs). Hallucinations, which are plausible-sounding but factually incorrect and arbitrary model generations, present a major challenge to the practical adoption of LLMs. Recent work by Farquhar et al. (2024) proposes semantic entropy (SE), which can detect hallucinations by estimating uncertainty in the space of semantic meaning for a set of model generations. However, the 5-to-10-fold increase in computation cost associated with SE computation hinders practical adoption. To address this, we propose SEPs, which directly approximate SE from the hidden states of a single generation. SEPs are simple to train and do not require sampling multiple model generations at test time, reducing the overhead of semantic uncertainty quantification to almost zero. We show that SEPs retain high performance for hallucination detection and generalize better to out-of-distribution data than previous probing methods that directly predict model accuracy. Our results across models and tasks suggest that model hidden states capture SE, and our ablation studies give further insights into the token positions and model layers for which this is the case. | [
"['Jannik Kossen' 'Jiatong Han' 'Muhammed Razzak' 'Lisa Schut'\n 'Shreshth Malik' 'Yarin Gal']"
] |
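The probing idea in the Semantic Entropy Probes abstract above can be illustrated with a minimal sketch: fit a linear probe that maps a model's hidden states to a precomputed semantic-entropy score, so that a single forward pass suffices at test time. The synthetic `hidden_states` and `semantic_entropy` arrays below are stand-ins for quantities the paper derives from an actual LLM; they are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-ins for real data: hidden states of single generations (n_samples x d_model)
# and semantic-entropy targets computed offline from multiple sampled generations.
hidden_states = rng.normal(size=(2000, 256))
true_direction = rng.normal(size=256)
semantic_entropy = hidden_states @ true_direction + 0.1 * rng.normal(size=2000)

X_train, X_test, y_train, y_test = train_test_split(
    hidden_states, semantic_entropy, test_size=0.2, random_state=0
)

# The "probe" is just a regularized linear model on frozen hidden states.
probe = Ridge(alpha=1.0).fit(X_train, y_train)
print("probe R^2 on held-out generations:", probe.score(X_test, y_test))
```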
null | null | 2406.15931 | null | null | http://arxiv.org/pdf/2406.15931v1 | 2024-06-22T20:14:56Z | 2024-06-22T20:14:56Z | Multistep Criticality Search and Power Shaping in Microreactors with
Reinforcement Learning | Reducing operation and maintenance costs is a key objective for advanced reactors in general and microreactors in particular. To achieve this reduction, developing robust autonomous control algorithms is essential to ensure safe and autonomous reactor operation. Recently, artificial intelligence and machine learning algorithms, specifically reinforcement learning (RL) algorithms, have seen rapidly increasing application to control problems, such as plasma control in fusion tokamaks and building energy management. In this work, we introduce the use of RL for intelligent control in nuclear microreactors. The RL agent is trained using proximal policy optimization (PPO) and advantage actor-critic (A2C), cutting-edge deep RL techniques, based on a high-fidelity simulation of a microreactor design inspired by the Westinghouse eVinci (TM) design. We utilized a Serpent model to generate data on drum positions, core criticality, and core power distribution for training a feedforward neural network surrogate model. This surrogate model was then used to guide the PPO and A2C control policies in determining the optimal drum position across various reactor burnup states, ensuring critical core conditions and symmetrical power distribution across all six core portions. The results demonstrate the excellent performance of PPO in identifying optimal drum positions, achieving a hextant power tilt ratio of approximately 1.002 (within the limit of $<$ 1.02) and maintaining criticality within a 10 pcm range. A2C did not perform as competitively as PPO on these metrics across all burnup steps considered in the cycle. Additionally, the results highlight the capability of well-trained RL control policies to quickly identify control actions, suggesting a promising approach for enabling real-time autonomous control through digital twins. | [
"['Majdi I. Radaideh' 'Leo Tunkle' 'Dean Price' 'Kamal Abdulraheem'\n 'Linyu Lin' 'Moutaz Elias']"
] |
null | null | 2406.15936 | null | null | http://arxiv.org/pdf/2406.15936v1 | 2024-06-22T20:52:17Z | 2024-06-22T20:52:17Z | An Automated SQL Query Grading System Using An Attention-Based
Convolutional Neural Network | Grading SQL queries can be a time-consuming, tedious and challenging task, especially as the number of student submissions increases. Several systems have been introduced in an attempt to mitigate these challenges, but those systems have their own limitations. This paper describes our novel approach to automating the process of grading SQL queries. Unlike previous approaches, we employ a unique convolutional neural network architecture with a parameter-sharing approach across different machine learning tasks, which enables the architecture to induce different knowledge representations of the data and increases its potential for understanding SQL statements. | [
"['Donald R. Schwartz' 'Pablo Rivas']"
] |
null | null | 2406.15938 | null | null | http://arxiv.org/pdf/2406.15938v1 | 2024-06-22T20:57:12Z | 2024-06-22T20:57:12Z | RuleR: Improving LLM Controllability by Rule-based Data Recycling | Large language models (LLMs) still lack delicate controllability over their responses, which is critical to enhancing their performance and the user experience. However, curating supervised fine-tuning (SFT) datasets to improve LLM controllability usually relies on human experts or proprietary LLMs, which requires additional costs. To bridge this gap, we propose Rule-based Data Recycling (RuleR), a data augmentation method incorporating multiple constraints into the original data samples according to predefined rules, which creates new training tasks to consolidate the controllability of LLMs. Instead of creating new data from scratch, RuleR ``recycles'' existing data by simply applying rule-based edits to their responses and appending the rule-instructions in their original instructions. Experimental results demonstrate RuleR's effectiveness in improving LLM controllability while maintaining general instruction-following capabilities. The code will be released on https://github.com/MingLiiii/RuleR. | [
"['Ming Li' 'Han Chen' 'Chenguang Wang' 'Dang Nguyen' 'Dianqi Li'\n 'Tianyi Zhou']"
] |
null | null | 2406.15940 | null | null | http://arxiv.org/pdf/2406.15940v1 | 2024-06-22T21:12:57Z | 2024-06-22T21:12:57Z | Beyond Individual Facts: Investigating Categorical Knowledge Locality of
Taxonomy and Meronomy Concepts in GPT Models | The location of knowledge within Generative Pre-trained Transformer (GPT)-like models has seen extensive recent investigation. However, much of the work is focused towards determining locations of individual facts, with the end goal being the editing of facts that are outdated, erroneous, or otherwise harmful, without the time and expense of retraining the entire model. In this work, we investigate a broader view of knowledge location, that of concepts or clusters of related information, instead of disparate individual facts. To do this, we first curate a novel dataset, called DARC, that includes a total of 34 concepts of ~120K factual statements divided into two types of hierarchical categories, namely taxonomy and meronomy. Next, we utilize existing causal mediation analysis methods developed for determining regions of importance for individual facts and apply them to a series of related categories to provide detailed investigation into whether concepts are associated with distinct regions within these models. We find that related categories exhibit similar areas of importance in contrast to less similar categories. However, fine-grained localization of individual category subsets to specific regions is not apparent. | [
"['Christopher Burger' 'Yifan Hu' 'Thai Le']"
] |
null | null | 2406.15941 | null | null | http://arxiv.org/pdf/2406.15941v1 | 2024-06-22T21:14:24Z | 2024-06-22T21:14:24Z | Towards Exact Computation of Inductive Bias | Much research in machine learning involves finding appropriate inductive biases (e.g. convolutional neural networks, momentum-based optimizers, transformers) to promote generalization on tasks. However, quantification of the amount of inductive bias associated with these architectures and hyperparameters has been limited. We propose a novel method for efficiently computing the inductive bias required for generalization on a task with a fixed training data budget; formally, this corresponds to the amount of information required to specify well-generalizing models within a specific hypothesis space of models. Our approach involves modeling the loss distribution of random hypotheses drawn from a hypothesis space to estimate the required inductive bias for a task relative to these hypotheses. Unlike prior work, our method provides a direct estimate of inductive bias without using bounds and is applicable to diverse hypothesis spaces. Moreover, we derive approximation error bounds for our estimation approach in terms of the number of sampled hypotheses. Consistent with prior results, our empirical results demonstrate that higher dimensional tasks require greater inductive bias. We show that relative to other expressive model classes, neural networks as a model class encode large amounts of inductive bias. Furthermore, our measure quantifies the relative difference in inductive bias between different neural network architectures. Our proposed inductive bias metric provides an information-theoretic interpretation of the benefits of specific model architectures for certain tasks and provides a quantitative guide to developing tasks requiring greater inductive bias, thereby encouraging the development of more powerful inductive biases. | [
"['Akhilan Boopathy' 'William Yue' 'Jaedong Hwang' 'Abhiram Iyer'\n 'Ila Fiete']"
] |
null | null | 2406.15946 | null | null | http://arxiv.org/pdf/2406.15946v1 | 2024-06-22T21:49:12Z | 2024-06-22T21:49:12Z | LaneSegNet Design Study | With the increasing prevalence of autonomous vehicles, it is essential for computer vision algorithms to accurately assess road features in real-time. This study explores the LaneSegNet architecture, a new approach to lane topology prediction which integrates topological information with lane-line data to provide a more contextual understanding of road environments. The LaneSegNet architecture includes a feature extractor, lane encoder, lane decoder, and prediction head, leveraging components from ResNet-50, BEVFormer, and various attention mechanisms. We experimented with optimizations to the LaneSegNet architecture through feature extractor modification and transformer encoder-decoder stack modification. We found that modifying the encoder and decoder stacks offered an interesting tradeoff between training time and prediction accuracy, with certain combinations showing promising results. Our implementation, trained on a single NVIDIA Tesla A100 GPU, found that a 2:4 ratio reduced training time by 22.3% with only a 7.1% drop in mean average precision, while a 4:8 ratio increased training time by only 11.1% but improved mean average precision by a significant 23.7%. These results indicate that strategic hyperparameter tuning can yield substantial improvements depending on the resources of the user. This study provides valuable insights for optimizing LaneSegNet according to available computation power, making it more accessible for users with limited resources and increasing the capabilities for users with more powerful resources. | [
"['William Stevens' 'Vishal Urs' 'Karthik Selvaraj' 'Gabriel Torres'\n 'Gaurish Lakhanpal']"
] |
null | null | 2406.15958 | null | null | http://arxiv.org/pdf/2406.15958v1 | 2024-06-22T23:21:47Z | 2024-06-22T23:21:47Z | Bone Fracture Classification using Transfer Learning | The manual examination of X-ray images for fractures is a time-consuming process that is prone to human error. In this work, we introduce a robust yet simple training loop for the classification of fractures, which significantly outperforms existing methods. Our method achieves superior performance in less than ten epochs and utilizes the latest dataset to deliver the best-performing model for this task. We emphasize the importance of training deep learning models responsibly and efficiently, as well as the critical role of selecting high-quality datasets. | [
"['Shyam Gupta' 'Dhanisha Sharma']"
] |
null | null | 2406.15959 | null | null | http://arxiv.org/pdf/2406.15959v1 | 2024-06-22T23:25:54Z | 2024-06-22T23:25:54Z | A Nonoverlapping Domain Decomposition Method for Extreme Learning
Machines: Elliptic Problems | Extreme learning machine (ELM) is a methodology for solving partial differential equations (PDEs) using a single hidden layer feed-forward neural network. It presets the weight/bias coefficients in the hidden layer with random values, which remain fixed throughout the computation, and uses a linear least squares method for training the parameters of the output layer of the neural network. It is known to be much faster than Physics informed neural networks. However, classical ELM is still computationally expensive when a high level of representation is desired in the solution as this requires solving a large least squares system. In this paper, we propose a nonoverlapping domain decomposition method (DDM) for ELMs that not only reduces the training time of ELMs, but is also suitable for parallel computation. In numerical analysis, DDMs have been widely studied to reduce the time to obtain finite element solutions for elliptic PDEs through parallel computation. Among these approaches, nonoverlapping DDMs are attracting the most attention. Motivated by these methods, we introduce local neural networks, which are valid only at corresponding subdomains, and an auxiliary variable at the interface. We construct a system on the variable and the parameters of local neural networks. A Schur complement system on the interface can be derived by eliminating the parameters of the output layer. The auxiliary variable is then directly obtained by solving the reduced system after which the parameters for each local neural network are solved in parallel. A method for initializing the hidden layer parameters suitable for high approximation quality in large systems is also proposed. Numerical results that verify the acceleration performance of the proposed method with respect to the number of subdomains are presented. | [
"['Chang-Ock Lee' 'Youngkyu Lee' 'Byungeun Ryoo']"
] |
null | null | 2406.15960 | null | null | http://arxiv.org/pdf/2406.15960v1 | 2024-06-22T23:34:53Z | 2024-06-22T23:34:53Z | Fair Clustering: Critique, Caveats, and Future Directions | Clustering is a fundamental problem in machine learning and operations research. Therefore, given the fact that fairness considerations have become of paramount importance in algorithm design, fairness in clustering has received significant attention from the research community. The literature on fair clustering has resulted in a collection of interesting fairness notions and elaborate algorithms. In this paper, we take a critical view of fair clustering, identifying a collection of ignored issues such as the lack of a clear utility characterization and the difficulty in accounting for the downstream effects of a fair clustering algorithm in machine learning settings. In some cases, we demonstrate examples where the application of a fair clustering algorithm can have significant negative impacts on social welfare. We end by identifying a collection of steps that would lead towards more impactful research in fair clustering. | [
"['John Dickerson' 'Seyed A. Esmaeili' 'Jamie Morgenstern'\n 'Claire Jie Zhang']"
] |
null | null | 2406.15962 | null | null | http://arxiv.org/pdf/2406.15962v1 | 2024-06-23T00:01:03Z | 2024-06-23T00:01:03Z | Privacy Preserving Machine Learning for Electronic Health Records using
Federated Learning and Differential Privacy | An Electronic Health Record (EHR) is an electronic database used by healthcare providers to store patients' medical records which may include diagnoses, treatments, costs, and other personal information. Machine learning (ML) algorithms can be used to extract and analyze patient data to improve patient care. Patient records contain highly sensitive information, such as social security numbers (SSNs) and residential addresses, which introduces a need to apply privacy-preserving techniques for these ML models using federated learning and differential privacy. | [
"['Naif A. Ganadily' 'Han J. Xia']"
] |
null | null | 2406.15968 | null | null | http://arxiv.org/pdf/2406.15968v1 | 2024-06-23T00:23:13Z | 2024-06-23T00:23:13Z | ReCaLL: Membership Inference via Relative Conditional Log-Likelihoods | The rapid scaling of large language models (LLMs) has raised concerns about the transparency and fair use of the pretraining data used for training them. Detecting such content is challenging due to the scale of the data and limited exposure of each instance during training. We propose ReCaLL (Relative Conditional Log-Likelihood), a novel membership inference attack (MIA) to detect LLMs' pretraining data by leveraging their conditional language modeling capabilities. ReCaLL examines the relative change in conditional log-likelihoods when prefixing target data points with non-member context. Our empirical findings show that conditioning member data on non-member prefixes induces a larger decrease in log-likelihood compared to non-member data. We conduct comprehensive experiments and show that ReCaLL achieves state-of-the-art performance on the WikiMIA dataset, even with random and synthetic prefixes, and can be further improved using an ensemble approach. Moreover, we conduct an in-depth analysis of LLMs' behavior with different membership contexts, providing insights into how LLMs leverage membership information for effective inference at both the sequence and token level. | [
"['Roy Xie' 'Junlin Wang' 'Ruomin Huang' 'Minxing Zhang' 'Rong Ge'\n 'Jian Pei' 'Neil Zhenqiang Gong' 'Bhuwan Dhingra']"
] |
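As a rough illustration of the quantity the ReCaLL abstract above describes, the sketch below compares the log-likelihood a causal LM assigns to a candidate member text with and without a non-member prefix. The choice of gpt2, the example strings, and the use of a difference rather than a ratio of log-likelihoods are assumptions made for brevity, not details taken from the paper.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Any causal LM works for the sketch; gpt2 keeps the example small.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def conditional_log_likelihood(prefix: str, target: str) -> float:
    """Average log-likelihood of `target` tokens, conditioned on `prefix`."""
    prefix_ids = tokenizer(prefix, return_tensors="pt").input_ids
    target_ids = tokenizer(target, return_tensors="pt").input_ids
    input_ids = torch.cat([prefix_ids, target_ids], dim=1)
    labels = input_ids.clone()
    labels[:, : prefix_ids.shape[1]] = -100  # ignore prefix tokens in the loss
    with torch.no_grad():
        loss = model(input_ids, labels=labels).loss  # mean NLL over target tokens
    return -loss.item()

target = "The quick brown fox jumps over the lazy dog."
nonmember_prefix = "Below is some text unrelated to any training corpus. "

# ReCaLL-style score: change in conditional log-likelihood induced by the prefix.
recall_score = (conditional_log_likelihood(nonmember_prefix, target)
                - conditional_log_likelihood("", target))
print("ReCaLL-style relative score:", recall_score)
```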
null | null | 2406.15972 | null | null | http://arxiv.org/pdf/2406.15972v1 | 2024-06-23T00:32:06Z | 2024-06-23T00:32:06Z | EVCL: Elastic Variational Continual Learning with Weight Consolidation | Continual learning aims to allow models to learn new tasks without forgetting what has been learned before. This work introduces Elastic Variational Continual Learning with Weight Consolidation (EVCL), a novel hybrid model that integrates the variational posterior approximation mechanism of Variational Continual Learning (VCL) with the regularization-based parameter-protection strategy of Elastic Weight Consolidation (EWC). By combining the strengths of both methods, EVCL effectively mitigates catastrophic forgetting and enables better capture of dependencies between model parameters and task-specific data. Evaluated on five discriminative tasks, EVCL consistently outperforms existing baselines in both domain-incremental and task-incremental learning scenarios for deep discriminative models. | [
"['Hunar Batra' 'Ronald Clark']"
] |
null | null | 2406.15982 | null | null | http://arxiv.org/pdf/2406.15982v1 | 2024-06-23T02:21:48Z | 2024-06-23T02:21:48Z | Learning with Noisy Ground Truth: From 2D Classification to 3D
Reconstruction | Deep neural networks have been highly successful in data-intensive computer vision applications, but such success relies heavily on massive, clean data. In real-world scenarios, clean data is sometimes difficult to obtain. For example, in image classification and segmentation tasks, precise annotations of millions of samples are generally very expensive and time-consuming. In the 3D static scene reconstruction task, most NeRF-related methods require the foundational assumption of a static scene (e.g., consistent lighting conditions and persistent object positions), which is often violated in real-world scenarios. To address these problems, learning with noisy ground truth (LNGT) has emerged as an effective learning method and shows great potential. In this short survey, we propose a formal definition to unify the analysis of LNGT in the context of different machine learning tasks (classification and regression). Based on this definition, we propose a novel taxonomy to classify the existing work according to the error decomposition with the fundamental definition of machine learning. Further, we provide an in-depth analysis of the memorization effect and an insightful discussion of potential future research opportunities from 2D classification to 3D reconstruction, in the hope of providing guidance to follow-up research. | [
"['Yangdi Lu' 'Wenbo He']"
] |
null | null | 2406.16000 | null | null | http://arxiv.org/pdf/2406.16000v1 | 2024-06-23T03:26:47Z | 2024-06-23T03:26:47Z | Predicting Individual Depression Symptoms from Acoustic Features During
Speech | Current automatic depression detection systems provide predictions directly without relying on the individual symptoms/items of depression as denoted in the clinical depression rating scales. In contrast, clinicians assess each item in the depression rating scale in a clinical setting, thus implicitly providing a more detailed rationale for a depression diagnosis. In this work, we make a first step towards using the acoustic features of speech to predict individual items of the depression rating scale before obtaining the final depression prediction. For this, we use convolutional (CNN) and recurrent (long short-term memory (LSTM)) neural networks. We consider different approaches to learning the temporal context of speech. Further, we analyze two variants of voting schemes for individual item prediction and depression detection. We also include an animated visualization that shows an example of item prediction over time as the speech progresses. | [
"['Sebastian Rodriguez' 'Sri Harsha Dumpala' 'Katerina Dikaios'\n 'Sheri Rempel' 'Rudolf Uher' 'Sageev Oore']"
] |
null | null | 2406.16006 | null | null | http://arxiv.org/pdf/2406.16006v1 | 2024-06-23T04:23:15Z | 2024-06-23T04:23:15Z | Bounding-Box Inference for Error-Aware Model-Based Reinforcement
Learning | In model-based reinforcement learning, simulated experiences from the learned model are often treated as equivalent to experience from the real environment. However, when the model is inaccurate, it can catastrophically interfere with policy learning. Alternatively, the agent might learn about the model's accuracy and selectively use it only when it can provide reliable predictions. We empirically explore model uncertainty measures for selective planning and show that best results require distribution insensitive inference to estimate the uncertainty over model-based updates. To that end, we propose and evaluate bounding-box inference, which operates on bounding-boxes around sets of possible states and other quantities. We find that bounding-box inference can reliably support effective selective planning. | [
"['Erin J. Talvitie' 'Zilei Shao' 'Huiying Li' 'Jinghan Hu' 'Jacob Boerma'\n 'Rory Zhao' 'Xintong Wang']"
] |
null | null | 2406.16008 | null | null | http://arxiv.org/pdf/2406.16008v2 | 2024-07-03T17:40:00Z | 2024-06-23T04:35:42Z | Found in the Middle: Calibrating Positional Attention Bias Improves Long
Context Utilization | Large language models (LLMs), even when specifically trained to process long input contexts, struggle to capture relevant information located in the middle of their input. This phenomenon has been known as the lost-in-the-middle problem. In this work, we make three contributions. First, we set out to understand the factors that cause this phenomenon. In doing so, we establish a connection between lost-in-the-middle and LLMs' intrinsic attention bias: LLMs exhibit a U-shaped attention bias where the tokens at the beginning and at the end of their input receive higher attention, regardless of their relevance. Second, we mitigate this positional bias through a calibration mechanism, found-in-the-middle, that allows the model to attend to contexts faithfully according to their relevance, even when they are in the middle. Third, we show found-in-the-middle not only achieves better performance in locating relevant information within a long context, but also eventually leads to improved retrieval-augmented generation (RAG) performance across various tasks, outperforming existing methods by up to 15 percentage points. These findings open up future directions in understanding LLM attention bias and its potential consequences. | [
"['Cheng-Yu Hsieh' 'Yung-Sung Chuang' 'Chun-Liang Li' 'Zifeng Wang'\n 'Long T. Le' 'Abhishek Kumar' 'James Glass' 'Alexander Ratner'\n 'Chen-Yu Lee' 'Ranjay Krishna' 'Tomas Pfister']"
] |
null | null | 2406.16026 | null | null | http://arxiv.org/pdf/2406.16026v2 | 2024-06-25T04:28:09Z | 2024-06-23T06:23:12Z | CEST-KAN: Kolmogorov-Arnold Networks for CEST MRI Data Analysis | Purpose: This study aims to propose and investigate the feasibility of using Kolmogorov-Arnold Network (KAN) for CEST MRI data analysis (CEST-KAN). Methods: CEST MRI data were acquired from twelve healthy volunteers at 3T. Data from ten subjects were used for training, while the remaining two were reserved for testing. The performance of multi-layer perceptron (MLP) and KAN models with the same network settings was evaluated and compared to the conventional multi-pool Lorentzian fitting (MPLF) method in generating water and multiple CEST contrasts, including amide, relayed nuclear Overhauser effect (rNOE), and magnetization transfer (MT). Results: The water and CEST maps generated by both MLP and KAN were visually comparable to the MPLF results. However, the KAN model demonstrated higher accuracy in extrapolating the CEST fitting metrics, as evidenced by the smaller validation loss during training and smaller absolute error during testing. Voxel-wise correlation analysis showed that all four CEST fitting metrics generated by KAN consistently exhibited higher Pearson coefficients than the MLP results, indicating superior performance. Moreover, the KAN models consistently outperformed the MLP models across varying numbers of hidden layers, despite longer training times. Conclusion: In this study, we demonstrated for the first time the feasibility of utilizing KAN for CEST MRI data analysis, highlighting its superiority over MLP in this task. The findings suggest that CEST-KAN has the potential to be a robust and reliable post-analysis tool for CEST MRI in clinical settings. | [
"['Jiawen Wang' 'Pei Cai' 'Ziyan Wang' 'Huabin Zhang' 'Jianpan Huang']"
] |
null | null | 2406.16028 | null | null | http://arxiv.org/pdf/2406.16028v2 | 2024-07-15T04:36:30Z | 2024-06-23T06:32:27Z | TimeAutoDiff: Combining Autoencoder and Diffusion model for time series
tabular data synthesizing | In this paper, we leverage the power of latent diffusion models to generate synthetic time series tabular data. Along with the temporal and feature correlations, the heterogeneous nature of the features in the table has been one of the main obstacles in time series tabular data modeling. We tackle this problem by combining the ideas of the variational auto-encoder (VAE) and the denoising diffusion probabilistic model (DDPM). Our model, named TimeAutoDiff, has several key advantages, including (1) Generality: the ability to handle a broad spectrum of time series tabular data from single to multi-sequence datasets; (2) Good fidelity and utility guarantees: numerical experiments on six publicly available datasets demonstrating significant improvements over state-of-the-art models in generating time series tabular data, across four metrics measuring fidelity and utility; (3) Fast sampling speed: entire time series data generation as opposed to the sequential data sampling schemes implemented in the existing diffusion-based models, eventually leading to significant improvements in sampling speed; (4) Entity conditional generation: the first implementation of conditional generation of multi-sequence time series tabular data with heterogeneous features in the literature, enabling scenario exploration across multiple scientific and engineering domains. Codes are in preparation for release to the public, but available upon request. | [
"['Namjoon Suh' 'Yuning Yang' 'Din-Yin Hsieh' 'Qitong Luan' 'Shirong Xu'\n 'Shixiang Zhu' 'Guang Cheng']"
] |
null | null | 2406.16032 | null | null | http://arxiv.org/pdf/2406.16032v1 | 2024-06-23T06:52:33Z | 2024-06-23T06:52:33Z | Effect of Random Learning Rate: Theoretical Analysis of SGD Dynamics in
Non-Convex Optimization via Stationary Distribution | We consider a variant of the stochastic gradient descent (SGD) with a random learning rate and reveal its convergence properties. SGD is a widely used stochastic optimization algorithm in machine learning, especially deep learning. Numerous studies reveal the convergence properties of SGD and its simplified variants. Among these, the analysis of convergence using a stationary distribution of updated parameters provides generalizable results. However, to obtain a stationary distribution, the update direction of the parameters must not degenerate, which limits the applicable variants of SGD. In this study, we consider a novel SGD variant, Poisson SGD, which has degenerated parameter update directions and instead utilizes a random learning rate. Consequently, we demonstrate that a distribution of a parameter updated by Poisson SGD converges to a stationary distribution under weak assumptions on a loss function. Based on this, we further show that Poisson SGD finds global minima in non-convex optimization problems and also evaluate the generalization error using this method. As a proof technique, we approximate the distribution by Poisson SGD with that of the bouncy particle sampler (BPS) and derive its stationary distribution, using the theoretical advance of the piece-wise deterministic Markov process (PDMP). | [
"['Naoki Yoshida' 'Shogo Nakakita' 'Masaaki Imaizumi']"
] |
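The abstract above studies an SGD variant whose learning rate is random at every step. A toy sketch of that idea on a one-dimensional non-convex loss is shown below; the exponential distribution for the step size and the specific loss function are illustrative assumptions, not the construction analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(x):        # a simple non-convex objective
    return np.sin(3 * x) + 0.1 * x ** 2

def grad(x):
    return 3 * np.cos(3 * x) + 0.2 * x

x = 2.5
mean_lr = 0.02
for step in range(5000):
    g = grad(x) + 0.1 * rng.normal()          # noisy (stochastic) gradient
    lr = rng.exponential(mean_lr)             # learning rate redrawn at every step
    x -= lr * g

print(f"final iterate x = {x:.3f}, loss = {loss(x):.3f}")
```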
null | null | 2406.16035 | null | null | http://arxiv.org/pdf/2406.16035v1 | 2024-06-23T06:57:07Z | 2024-06-23T06:57:07Z | Meta-FL: A Novel Meta-Learning Framework for Optimizing Heterogeneous
Model Aggregation in Federated Learning | Federated Learning (FL) enables collaborative model training across diverse entities while safeguarding data privacy. However, FL faces challenges such as data heterogeneity and model diversity. The Meta-Federated Learning (Meta-FL) framework has been introduced to tackle these challenges. Meta-FL employs an optimization-based Meta-Aggregator to navigate the complexities of heterogeneous model updates. The Meta-Aggregator enhances the global model's performance by leveraging meta-features, ensuring a tailored aggregation that accounts for each local model's accuracy. Empirical evaluation across four healthcare-related datasets demonstrates the Meta-FL framework's adaptability, efficiency, scalability, and robustness, outperforming conventional FL approaches. Furthermore, Meta-FL's remarkable efficiency and scalability are evident in its achievement of superior accuracy with fewer communication rounds and its capacity to manage expanding federated networks without compromising performance. | [
"['Zahir Alsulaimawi']"
] |
null | null | 2406.16045 | null | null | http://arxiv.org/pdf/2406.16045v1 | 2024-06-23T08:16:44Z | 2024-06-23T08:16:44Z | Combine and Conquer: A Meta-Analysis on Data Shift and
Out-of-Distribution Detection | This paper introduces a universal approach to seamlessly combine out-of-distribution (OOD) detection scores. These scores encompass a wide range of techniques that leverage the self-confidence of deep learning models and the anomalous behavior of features in the latent space. Not surprisingly, combining such a varied population using simple statistics proves inadequate. To overcome this challenge, we propose a quantile normalization to map these scores into p-values, effectively framing the problem as a multi-variate hypothesis test. Then, we combine these tests using established meta-analysis tools, resulting in a more effective detector with consolidated decision boundaries. Furthermore, we create a probabilistic, interpretable criterion by mapping the final statistics into a distribution with known parameters. Through empirical investigation, we explore different types of shifts, each exerting varying degrees of impact on data. Our results demonstrate that our approach significantly improves overall robustness and performance across diverse OOD detection scenarios. Notably, our framework is easily extensible for future developments in detection scores and stands as the first to combine decision boundaries in this context. The code and artifacts associated with this work are publicly available at https://github.com/edadaltocg/detectors. | [
"['Eduardo Dadalto' 'Florence Alberge' 'Pierre Duhamel' 'Pablo Piantanida']"
] |
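The meta-analysis recipe sketched in the abstract above (per-detector scores mapped to p-values via an empirical quantile map, then a combined test) can be illustrated in a few lines. The empirical-CDF calibration and Fisher's combination rule below are standard tools used as stand-ins; the paper's exact normalization and choice of combination method may differ.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)

# Stand-ins: OOD scores from two detectors on in-distribution calibration data
# and on a single test input (higher score = more anomalous, by convention here).
calib_scores = {"msp": rng.normal(0, 1, 5000), "mahalanobis": rng.gamma(2.0, 1.0, 5000)}
test_scores = {"msp": 2.7, "mahalanobis": 7.5}

def empirical_pvalue(calibration, score):
    """Quantile-normalize a score into a p-value under the in-distribution null."""
    return (np.sum(calibration >= score) + 1) / (len(calibration) + 1)

pvals = np.array([empirical_pvalue(calib_scores[k], test_scores[k]) for k in test_scores])

# Fisher's method: -2 * sum(log p) ~ chi-squared with 2k degrees of freedom under the null.
statistic = -2.0 * np.sum(np.log(pvals))
combined_p = chi2.sf(statistic, df=2 * len(pvals))
print("per-detector p-values:", pvals, "combined p-value:", combined_p)
```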
null | null | 2406.16052 | null | null | http://arxiv.org/abs/2406.16052v1 | 2024-06-23T09:06:52Z | 2024-06-23T09:06:52Z | Pivotal Auto-Encoder via Self-Normalizing ReLU | Sparse auto-encoders are useful for extracting low-dimensional representations from high-dimensional data. However, their performance degrades sharply when the input noise at test time differs from the noise employed during training. This limitation hinders the applicability of auto-encoders in real-world scenarios where the level of noise in the input is unpredictable. In this paper, we formalize single hidden layer sparse auto-encoders as a transform learning problem. Leveraging the transform modeling interpretation, we propose an optimization problem that leads to a predictive model invariant to the noise level at test time. In other words, the same pre-trained model is able to generalize to different noise levels. The proposed optimization algorithm, derived from the square root lasso, is translated into a new, computationally efficient auto-encoding architecture. After proving that our new method is invariant to the noise level, we evaluate our approach by training networks using the proposed architecture for denoising tasks. Our experimental results demonstrate that the trained models yield a significant improvement in stability against varying types of noise compared to commonly used architectures. | [
"['Nelson Goldenstein' 'Jeremias Sulam' 'Yaniv Romano']"
] |
null | null | 2406.16061 | null | null | http://arxiv.org/pdf/2406.16061v1 | 2024-06-23T09:51:06Z | 2024-06-23T09:51:06Z | PORT: Preference Optimization on Reasoning Traces | Preference optimization methods have been successfully applied to improve not only the alignment of large language models (LLMs) with human values, but also specific natural language tasks such as summarization and stylistic continuations. This paper proposes using preference optimization methods on Chain-of-Thought steps in order to improve the reasoning performances of language models. While the chosen answers are obtained from datasets that include reasoning traces, we propose two complementary schemes for generating rejected answers: digit corruption, and weak LLM prompting. Our approach leads to increased accuracy on the GSM8K, AQuA-RAT, and ARC benchmarks for Falcon2-11B and Mistral-7B. For example, the approach can lead to up to a relative 8.47% increase in accuracy on the GSM8K benchmark without any extra annotations. This work suggests that spending resources on creating more datasets of reasoning traces would further boost LLM performances on informal reasoning tasks. | [
"['Salem Lahlou' 'Abdalgader Abubaker' 'Hakim Hacid']"
] |
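One of the two rejected-answer generation schemes named in the PORT abstract above, digit corruption, is simple enough to sketch directly: perturb digits in a chain-of-thought trace so that it becomes a plausible but wrong "rejected" sample for preference optimization. The corruption probability and the example trace below are illustrative assumptions, not values from the paper.

```python
import random

def corrupt_digits(reasoning_trace: str, prob: float = 0.3, seed: int = 0) -> str:
    """Randomly replace digits so the trace becomes a 'rejected' preference sample."""
    rng = random.Random(seed)
    out = []
    for ch in reasoning_trace:
        if ch.isdigit() and rng.random() < prob:
            out.append(rng.choice([d for d in "0123456789" if d != ch]))
        else:
            out.append(ch)
    return "".join(out)

chosen = "Tom has 3 apples and buys 4 more, so he has 3 + 4 = 7 apples."
rejected = corrupt_digits(chosen)
preference_pair = {"chosen": chosen, "rejected": rejected}
print(preference_pair)
```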
null | null | 2406.16077 | null | null | http://arxiv.org/abs/2406.16077v1 | 2024-06-23T11:09:21Z | 2024-06-23T11:09:21Z | Detecting Abnormal Operations in Concentrated Solar Power Plants from
Irregular Sequences of Thermal Images | Concentrated Solar Power (CSP) plants store energy by heating a storage medium with an array of mirrors that focus sunlight onto solar receivers atop a central tower. Operating at high temperatures these receivers face risks such as freezing, deformation, and corrosion, leading to operational failures, downtime, or costly equipment damage. We study the problem of anomaly detection (AD) in sequences of thermal images collected over a year from an operational CSP plant. These images are captured at irregular intervals ranging from one to five minutes throughout the day by infrared cameras mounted on solar receivers. Our goal is to develop a method to extract useful representations from high-dimensional thermal images for AD. It should be able to handle temporal features of the data, which include irregularity, temporal dependency between images and non-stationarity due to a strong daily seasonal pattern. The co-occurrence of low-temperature anomalies that resemble normal images from the start and the end of the operational cycle with high-temperature anomalies poses an additional challenge. We first evaluate state-of-the-art deep image-based AD methods, which have been shown to be effective in deriving meaningful image representations for the detection of anomalies. Then, we introduce a forecasting-based AD method that predicts future thermal images from past sequences and timestamps via a deep sequence model. This method effectively captures specific temporal data features and distinguishes between difficult-to-detect temperature-based anomalies. Our experiments demonstrate the effectiveness of our approach compared to multiple SOTA baselines across multiple evaluation metrics. We have also successfully deployed our solution on five months of unseen data, providing critical insights for the maintenance of the CSP plant. Our code is available at: https://tinyurl.com/ForecastAD | [
"['Sukanya Patra' 'Nicolas Sournac' 'Souhaib Ben Taieb']"
] |
null | null | 2406.16087 | null | null | http://arxiv.org/pdf/2406.16087v2 | 2024-07-07T03:20:26Z | 2024-06-23T12:02:17Z | Imperative Learning: A Self-supervised Neural-Symbolic Learning
Framework for Robot Autonomy | Data-driven methods such as reinforcement and imitation learning have achieved remarkable success in robot autonomy. However, their data-centric nature still hinders them from generalizing well to ever-changing environments. Moreover, collecting large datasets for robotic tasks is often impractical and expensive. To overcome these challenges, we introduce a new self-supervised neural-symbolic (NeSy) computational framework, imperative learning (IL), for robot autonomy, leveraging the generalization abilities of symbolic reasoning. The framework of IL consists of three primary components: a neural module, a reasoning engine, and a memory system. We formulate IL as a special bilevel optimization (BLO), which enables reciprocal learning over the three modules. This overcomes the label-intensive obstacles associated with data-driven approaches and takes advantage of symbolic reasoning concerning logical reasoning, physical principles, geometric analysis, etc. We discuss several optimization techniques for IL and verify their effectiveness in five distinct robot autonomy tasks including path planning, rule induction, optimal control, visual odometry, and multi-robot routing. Through various experiments, we show that IL can significantly enhance robot autonomy capabilities and we anticipate that it will catalyze further research across diverse domains. | [
"['Chen Wang' 'Kaiyi Ji' 'Junyi Geng' 'Zhongqiang Ren' 'Taimeng Fu'\n 'Fan Yang' 'Yifan Guo' 'Haonan He' 'Xiangyu Chen' 'Zitong Zhan'\n 'Qiwei Du' 'Shaoshu Su' 'Bowen Li' 'Yuheng Qiu' 'Yi Du' 'Qihang Li'\n 'Yifan Yang' 'Xiao Lin' 'Zhipeng Zhao']"
] |
null | null | 2406.16093 | null | null | http://arxiv.org/pdf/2406.16093v1 | 2024-06-23T12:14:37Z | 2024-06-23T12:14:37Z | Towards Natural Language-Driven Assembly Using Foundation Models | Large Language Models (LLMs) and strong vision models have enabled rapid research and development in the field of Vision-Language-Action models that enable robotic control. The main objective of these methods is to develop a generalist policy that can control robots with various embodiments. However, in industrial robotic applications such as automated assembly and disassembly, some tasks, such as insertion, demand greater accuracy and involve intricate factors like contact engagement, friction handling, and refined motor skills. Implementing these skills using a generalist policy is challenging because these policies might integrate further sensory data, including force or torque measurements, for enhanced precision. In our method, we present a global control policy based on LLMs that can transfer the control policy to a finite set of skills that are specifically trained to perform high-precision tasks through dynamic context switching. The integration of LLMs into this framework underscores their significance in not only interpreting and processing language inputs but also in enriching the control mechanisms for diverse and intricate robotic operations. | [
"['Omkar Joglekar' 'Tal Lancewicki' 'Shir Kozlovsky' 'Vladimir Tchuiev'\n 'Zohar Feldman' 'Dotan Di Castro']"
] |
null | null | 2406.16121 | null | null | http://arxiv.org/pdf/2406.16121v1 | 2024-06-23T14:24:14Z | 2024-06-23T14:24:14Z | Diffusion Spectral Representation for Reinforcement Learning | Diffusion-based models have achieved notable empirical successes in reinforcement learning (RL) due to their expressiveness in modeling complex distributions. Despite existing methods being promising, the key challenge of extending existing methods for broader real-world applications lies in the computational cost at inference time, i.e., sampling from a diffusion model is considerably slow as it often requires tens to hundreds of iterations to generate even one sample. To circumvent this issue, we propose to leverage the flexibility of diffusion models for RL from a representation learning perspective. In particular, by exploiting the connection between diffusion model and energy-based model, we develop Diffusion Spectral Representation (Diff-SR), a coherent algorithm framework that enables extracting sufficient representations for value functions in Markov decision processes (MDP) and partially observable Markov decision processes (POMDP). We further demonstrate how Diff-SR facilitates efficient policy optimization and practical algorithms while explicitly bypassing the difficulty and inference cost of sampling from the diffusion model. Finally, we provide comprehensive empirical studies to verify the benefits of Diff-SR in delivering robust and advantageous performance across various benchmarks with both fully and partially observable settings. | [
"['Dmitry Shribak' 'Chen-Xiao Gao' 'Yitong Li' 'Chenjun Xiao' 'Bo Dai']"
] |
null | null | 2406.16135 | null | null | http://arxiv.org/pdf/2406.16135v1 | 2024-06-23T15:15:17Z | 2024-06-23T15:15:17Z | Crosslingual Capabilities and Knowledge Barriers in Multilingual Large
Language Models | Large language models (LLMs) are typically multilingual due to pretraining on diverse multilingual corpora. But can these models relate corresponding concepts across languages, effectively being crosslingual? This study evaluates six state-of-the-art LLMs on inherently crosslingual tasks. We observe that while these models show promising surface-level crosslingual abilities on machine translation and embedding space analyses, they struggle with deeper crosslingual knowledge transfer, revealing a crosslingual knowledge barrier in both general (MMLU benchmark) and domain-specific (Harry Potter quiz) contexts. We observe that simple inference-time mitigation methods offer only limited improvement. On the other hand, we propose fine-tuning of LLMs on mixed-language data, which effectively reduces these gaps, even when using out-of-domain datasets like WikiText. Our findings suggest the need for explicit optimization to unlock the full crosslingual potential of LLMs. Our code is publicly available at https://github.com/google-research/crosslingual-knowledge-barriers. | [
"['Lynn Chua' 'Badih Ghazi' 'Yangsibo Huang' 'Pritish Kamath' 'Ravi Kumar'\n 'Pasin Manurangsi' 'Amer Sinha' 'Chulin Xie' 'Chiyuan Zhang']"
] |
null | null | 2406.16145 | null | null | http://arxiv.org/pdf/2406.16145v1 | 2024-06-23T15:52:23Z | 2024-06-23T15:52:23Z | Predefined Prototypes for Intra-Class Separation and Disentanglement | Prototypical Learning is based on the idea that there is a point (which we call a prototype) around which the embeddings of a class are clustered. It has shown promising results in scenarios with little labeled data and in the design of explainable models. Typically, prototypes are either defined as the average of the embeddings of a class or are designed to be trainable. In this work, we propose to predefine prototypes following human-specified criteria, which simplifies the training pipeline and brings several advantages. Specifically, in this work we explore two of these advantages: increasing the inter-class separability of embeddings and disentangling embeddings with respect to different variance factors, which can translate into the possibility of having explainable predictions. Finally, we present several experiments that help to understand our proposal and empirically demonstrate the aforementioned advantages. | [
"['Antonio Almudévar' 'Théo Mariotte' 'Alfonso Ortega' 'Marie Tahon'\n 'Luis Vicente' 'Antonio Miguel' 'Eduardo Lleida']"
] |
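A minimal sketch of the predefined-prototype idea from the abstract above: fix one prototype per class before training (here, mutually orthogonal unit vectors, one possible human-specified criterion) and train the encoder to pull embeddings toward their class prototype. The encoder, the loss, and the prototype construction are illustrative assumptions rather than the paper's exact design.

```python
import torch
import torch.nn.functional as F

num_classes, embed_dim = 10, 64

# Predefined prototypes: mutually orthogonal unit vectors, fixed before training.
prototypes = torch.linalg.qr(torch.randn(embed_dim, num_classes)).Q.T  # shape (C, D)

encoder = torch.nn.Sequential(torch.nn.Linear(32, 128), torch.nn.ReLU(),
                              torch.nn.Linear(128, embed_dim))
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)

x = torch.randn(256, 32)                      # toy inputs
y = torch.randint(0, num_classes, (256,))     # toy labels

for _ in range(200):
    z = F.normalize(encoder(x), dim=1)        # unit-norm embeddings
    logits = z @ prototypes.T / 0.1           # cosine similarity / temperature
    loss = F.cross_entropy(logits, y)         # pulls z toward its class prototype
    optimizer.zero_grad(); loss.backward(); optimizer.step()

print("final loss:", loss.item())
```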
null | null | 2406.16148 | null | null | http://arxiv.org/pdf/2406.16148v1 | 2024-06-23T16:04:26Z | 2024-06-23T16:04:26Z | Towards Open Respiratory Acoustic Foundation Models: Pretraining and
Benchmarking | Respiratory audio, such as coughing and breathing sounds, has predictive power for a wide range of healthcare applications, yet is currently under-explored. The main problem for those applications arises from the difficulty in collecting large labeled task-specific data for model development. Generalizable respiratory acoustic foundation models pretrained with unlabeled data would offer appealing advantages and possibly unlock this impasse. However, given the safety-critical nature of healthcare applications, it is pivotal to also ensure openness and replicability for any proposed foundation model solution. To this end, we introduce OPERA, an OPEn Respiratory Acoustic foundation model pretraining and benchmarking system, as the first approach answering this need. We curate large-scale respiratory audio datasets (~136K samples, 440 hours), pretrain three pioneering foundation models, and build a benchmark consisting of 19 downstream respiratory health tasks for evaluation. Our pretrained models demonstrate superior performance (against existing acoustic models pretrained with general audio on 16 out of 19 tasks) and generalizability (to unseen datasets and new respiratory audio modalities). This highlights the great promise of respiratory acoustic foundation models and encourages more studies using OPERA as an open resource to accelerate research on respiratory audio for health. The system is accessible from https://github.com/evelyn0414/OPERA. | [
"['Yuwei Zhang' 'Tong Xia' 'Jing Han' 'Yu Wu' 'Georgios Rizos' 'Yang Liu'\n 'Mohammed Mosuily' 'Jagmohan Chauhan' 'Cecilia Mascolo']"
] |
null | null | 2406.16151 | null | null | http://arxiv.org/pdf/2406.16151v1 | 2024-06-23T16:22:40Z | 2024-06-23T16:22:40Z | Monte Carlo Planning for Stochastic Control on Constrained Markov
Decision Processes | In the world of stochastic control, especially in economics and engineering, Markov Decision Processes (MDPs) can effectively model various stochastic decision processes, from asset management to transportation optimization. These underlying MDPs, upon closer examination, often reveal a specifically constrained causal structure concerning the transition and reward dynamics. By exploiting this structure, we can obtain a reduction in the causal representation of the problem setting, allowing us to solve for the optimal value function more efficiently. This work defines an MDP framework, the SD-MDP, where we disentangle the causal structure of MDPs' transition and reward dynamics, providing distinct partitions on the temporal causal graph. With this stochastic reduction, the SD-MDP reflects a general class of resource allocation problems. This disentanglement further enables us to derive theoretical guarantees on the estimation error of the value function under an optimal policy by allowing independent value estimation from Monte Carlo sampling. Subsequently, by integrating this estimator into well-known Monte Carlo planning algorithms, such as Monte Carlo Tree Search (MCTS), we derive bounds on the simple regret of the algorithm. Finally, we quantify the policy improvement of MCTS under the SD-MDP framework by demonstrating that the MCTS planning algorithm achieves higher expected reward (lower costs) under a constant simulation budget, on a tangible economic example based on maritime refuelling. | [
"['Larkin Liu' 'Shiqi Liu' 'Matej Jusup']"
] |
null | null | 2406.16166 | null | null | http://arxiv.org/pdf/2406.16166v1 | 2024-06-23T17:01:14Z | 2024-06-23T17:01:14Z | Composite Material Design for Optimized Fracture Toughness Using Machine
Learning | This paper investigates the optimization of 2D and 3D composite structures using machine learning (ML) techniques, focusing on fracture toughness and crack propagation in the Double Cantilever Beam (DCB) test. By exploring the intricate relationship between microstructural arrangements and macroscopic properties of composites, the study demonstrates the potential of ML as a powerful tool to expedite the design optimization process, offering notable advantages over traditional finite element analysis. The research encompasses four distinct cases, examining crack propagation and fracture toughness in both 2D and 3D composite models. Through the application of ML algorithms, the study showcases the capability for rapid and accurate exploration of vast design spaces in composite materials. The findings highlight the efficiency of ML in predicting mechanical behaviors with limited training data, paving the way for broader applications in composite design and optimization. This work contributes to advancing the understanding of ML's role in enhancing the efficiency of composite material design processes. | [
"['Mohammad Naqizadeh Jahromi' 'Mohammad Ravandi']"
] |
null | null | 2406.16168 | null | null | http://arxiv.org/pdf/2406.16168v1 | 2024-06-23T17:19:26Z | 2024-06-23T17:19:26Z | An All-MLP Sequence Modeling Architecture That Excels at Copying | Recent work demonstrated Transformers' ability to efficiently copy strings of exponential sizes, distinguishing them from other architectures. We present the Causal Relation Network (CausalRN), an all-MLP sequence modeling architecture that can match Transformers on the copying task. Extending Relation Networks (RNs), we implemented key innovations to support autoregressive sequence modeling while maintaining computational feasibility. We discovered that exponentially-activated RNs are reducible to linear time complexity, and pre-activation normalization induces an infinitely growing memory pool, similar to a KV cache. In an ablation study, we found both exponential activation and pre-activation normalization are indispensable for Transformer-level copying. Our findings provide new insights into what actually constitutes strong in-context retrieval. | [
"['Chenwei Cui' 'Zehao Yan' 'Gedeon Muhawenayo' 'Hannah Kerner']"
] |
null | null | 2406.16176 | null | null | http://arxiv.org/pdf/2406.16176v1 | 2024-06-23T18:01:56Z | 2024-06-23T18:01:56Z | GraphEval2000: Benchmarking and Improving Large Language Models on Graph
Datasets | Large language models (LLMs) have achieved remarkable success in natural language processing (NLP), demonstrating significant capabilities in processing and understanding text data. However, recent studies have identified limitations in LLMs' ability to reason about graph-structured data. To address this gap, we introduce GraphEval2000, the first comprehensive graph dataset, comprising 40 graph data structure problems along with 2000 test cases. Additionally, we introduce an evaluation framework based on GraphEval2000, designed to assess the graph reasoning abilities of LLMs through coding challenges. Our dataset categorizes test cases into four primary and four sub-categories, ensuring a comprehensive evaluation. We evaluate eight popular LLMs on GraphEval2000, revealing that LLMs exhibit a better understanding of directed graphs compared to undirected ones. While private LLMs consistently outperform open-source models, the performance gap is narrowing. Furthermore, to improve the usability of our evaluation framework, we propose Structured Symbolic Decomposition (SSD), an instruction-based method designed to enhance LLM performance on GraphEval2000. Results show that SSD improves the performance of GPT-3.5, GPT-4, and GPT-4o on complex graph problems, with an increase of 11.11%, 33.37%, and 33.37%, respectively. | [
"['Qiming Wu' 'Zichen Chen' 'Will Corcoran' 'Misha Sra' 'Ambuj K. Singh']"
] |
null | null | 2406.16187 | null | null | http://arxiv.org/pdf/2406.16187v1 | 2024-06-23T18:43:46Z | 2024-06-23T18:43:46Z | Evaluation and Comparison of Emotionally Evocative Image Augmentation
Methods | Experiments in affective computing are based on stimulus datasets that, in the process of standardization, receive metadata describing which emotions each stimulus evokes. In this paper, we explore an approach to creating stimulus datasets for affective computing using generative adversarial networks (GANs). Traditional dataset preparation methods are costly and time consuming, prompting our investigation of alternatives. We conducted experiments with various GAN architectures, including Deep Convolutional GAN, Conditional GAN, Auxiliary Classifier GAN, Progressive Augmentation GAN, and Wasserstein GAN, alongside data augmentation and transfer learning techniques. Our findings highlight promising advances in the generation of emotionally evocative synthetic images, suggesting significant potential for future research and improvements in this domain. | [
"['Jan Ignatowicz' 'Krzysztof Kutt' 'Grzegorz J. Nalepa']"
] |
null | null | 2406.16191 | null | null | http://arxiv.org/pdf/2406.16191v1 | 2024-06-23T18:56:46Z | 2024-06-23T18:56:46Z | Accelerating Matrix Diagonalization through Decision Transformers with
Epsilon-Greedy Optimization | This paper introduces a novel framework for matrix diagonalization, recasting it as a sequential decision-making problem and applying the power of Decision Transformers (DTs). Our approach determines optimal pivot selection during diagonalization with the Jacobi algorithm, leading to significant speedups compared to the traditional max-element Jacobi method. To bolster robustness, we integrate an epsilon-greedy strategy, enabling success in scenarios where deterministic approaches fail. This work demonstrates the effectiveness of DTs in complex computational tasks and highlights the potential of reimagining mathematical operations through a machine learning lens. Furthermore, we establish the generalizability of our method by using transfer learning to diagonalize matrices of smaller sizes than those used during training. | [
"['Kshitij Bhatta' 'Geigh Zollicoffer' 'Manish Bhattarai' 'Phil Romero'\n 'Christian F. A. Negre' 'Anders M. N. Niklasson' 'Adetokunbo Adedoyin']"
] |
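The entry above (2406.16191) combines the Jacobi eigenvalue algorithm with learned, epsilon-greedy pivot selection. Below is a minimal sketch of the epsilon-greedy pivot idea wrapped around a classical Jacobi sweep; the greedy choice here is the plain max-element heuristic rather than the paper's Decision Transformer policy, and the parameter values are illustrative assumptions.

```python
import numpy as np

def jacobi_rotate(A, i, j):
    """Return a copy of symmetric A after one Jacobi rotation zeroing A[i, j]."""
    if A[i, j] == 0.0:
        return A
    theta = 0.5 * np.arctan2(2.0 * A[i, j], A[j, j] - A[i, i])
    c, s = np.cos(theta), np.sin(theta)
    G = np.eye(A.shape[0])
    G[i, i] = G[j, j] = c
    G[i, j], G[j, i] = s, -s
    return G.T @ A @ G

def epsilon_greedy_jacobi(A, eps=0.1, tol=1e-10, max_iter=10_000, rng=None):
    """Jacobi diagonalization with epsilon-greedy pivots: with probability 1-eps
    rotate the largest off-diagonal element, otherwise a random off-diagonal pair."""
    rng = np.random.default_rng() if rng is None else rng
    A = A.copy()
    n = A.shape[0]
    for _ in range(max_iter):
        off = np.abs(A - np.diag(np.diag(A)))
        if off.max() < tol:
            break
        if rng.random() < eps:
            i, j = rng.choice(n, size=2, replace=False)   # exploratory pivot
        else:
            i, j = np.unravel_index(np.argmax(off), off.shape)  # greedy max-element pivot
        A = jacobi_rotate(A, i, j)
    return np.sort(np.diag(A))

M = np.random.randn(5, 5); M = (M + M.T) / 2.0
print(epsilon_greedy_jacobi(M))           # approximate eigenvalues
print(np.sort(np.linalg.eigvalsh(M)))     # reference eigenvalues
```

Any rotation of a nonzero off-diagonal pivot reduces the off-diagonal Frobenius norm, which is why an occasional random pivot does not break convergence.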
null | null | 2406.16193 | null | null | http://arxiv.org/pdf/2406.16193v1 | 2024-06-23T19:14:38Z | 2024-06-23T19:14:38Z | Semi-Variance Reduction for Fair Federated Learning | Ensuring fairness in a Federated Learning (FL) system, i.e., a satisfactory performance for all of the participating diverse clients, is an important and challenging problem. There are multiple fair FL algorithms in the literature, which have been relatively successful in providing fairness. However, these algorithms mostly emphasize the loss functions of the worst-off clients to improve their performance, which often results in the suppression of well-performing ones. As a consequence, they usually sacrifice the system's overall average performance for achieving fairness. Motivated by this and inspired by two well-known risk modeling methods in Finance, Mean-Variance and Mean-Semi-Variance, we propose and study two new fair FL algorithms, Variance Reduction (VRed) and Semi-Variance Reduction (SemiVRed). VRed encourages equality between clients' loss functions by penalizing their variance. In contrast, SemiVRed penalizes the discrepancy of only the worst-off clients' loss functions from the average loss. Through extensive experiments on multiple vision and language datasets, we show that SemiVRed achieves SoTA performance in scenarios with heterogeneous data distributions and improves both fairness and the system's overall average performance. | [
"['Saber Malekmohammadi']"
] |
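To make the two penalties in the entry above (2406.16198) concrete, here is a hedged sketch that computes them from a vector of per-client losses; the penalty weight lambda and the exact way the penalty is combined with the average loss are illustrative assumptions rather than the paper's formulation.

```python
import numpy as np

def vred_objective(client_losses, lam=1.0):
    """Average loss plus a variance penalty encouraging equal client losses (VRed-style)."""
    losses = np.asarray(client_losses, dtype=float)
    return losses.mean() + lam * losses.var()

def semivred_objective(client_losses, lam=1.0):
    """Average loss plus a semi-variance penalty: only clients whose loss exceeds
    the average (the worst-off clients) are penalized (SemiVRed-style)."""
    losses = np.asarray(client_losses, dtype=float)
    shortfall = np.maximum(losses - losses.mean(), 0.0)  # worse-than-average clients only
    return losses.mean() + lam * np.mean(shortfall ** 2)

# Example: one client lags behind; only the semi-variance ignores the well-performing ones
print(vred_objective([0.2, 0.25, 0.9]), semivred_objective([0.2, 0.25, 0.9]))
```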
null | null | 2406.16198 | null | null | http://arxiv.org/pdf/2406.16198v1 | 2024-06-23T19:33:19Z | 2024-06-23T19:33:19Z | Hardware-Aware Neural Dropout Search for Reliable Uncertainty Prediction
on FPGA | The increasing deployment of artificial intelligence (AI) for critical decision-making amplifies the necessity for trustworthy AI, where uncertainty estimation plays a pivotal role in ensuring trustworthiness. Dropout-based Bayesian Neural Networks (BayesNNs) are prominent in this field, offering reliable uncertainty estimates. Despite their effectiveness, existing dropout-based BayesNNs typically employ a uniform dropout design across different layers, leading to suboptimal performance. Moreover, as diverse applications require tailored dropout strategies for optimal performance, manually optimizing dropout configurations for various applications is both error-prone and labor-intensive. To address these challenges, this paper proposes a novel neural dropout search framework that automatically optimizes both the dropout-based BayesNNs and their hardware implementations on FPGA. We leverage one-shot supernet training with an evolutionary algorithm for efficient dropout optimization. A layer-wise dropout search space is introduced to enable the automatic design of dropout-based BayesNNs with heterogeneous dropout configurations. Extensive experiments demonstrate that our proposed framework can effectively find design configurations on the Pareto frontier. Compared to manually-designed dropout-based BayesNNs on GPU, our search approach produces FPGA designs that can achieve up to 33X higher energy efficiency. Compared to state-of-the-art FPGA designs of BayesNN, the solutions from our approach can achieve higher algorithmic performance and energy efficiency. | [
"['Zehuan Zhang' 'Hongxiang Fan' 'Hao Mark Chen' 'Lukasz Dudziak'\n 'Wayne Luk']"
] |
null | null | 2406.16200 | null | null | http://arxiv.org/pdf/2406.16200v1 | 2024-06-23T19:37:13Z | 2024-06-23T19:37:13Z | Towards unlocking the mystery of adversarial fragility of neural
networks | In this paper, we study the adversarial robustness of deep neural networks for classification tasks. We look at the smallest magnitude of possible additive perturbations that can change the output of a classification algorithm. We provide a matrix-theoretic explanation of the adversarial fragility of deep neural networks for classification. In particular, our theoretical results show that a neural network's adversarial robustness can degrade as the input dimension $d$ increases. Analytically, we show that a neural network's adversarial robustness can be only $1/\sqrt{d}$ of the best possible adversarial robustness. Our matrix-theoretic explanation is consistent with an earlier information-theoretic feature-compression-based explanation for the adversarial fragility of neural networks. | [
"['Jingchao Gao' 'Raghu Mudumbai' 'Xiaodong Wu' 'Jirong Yi' 'Catherine Xu'\n 'Hui Xie' 'Weiyu Xu']"
] |
null | null | 2406.16201 | null | null | http://arxiv.org/pdf/2406.16201v1 | 2024-06-23T19:40:11Z | 2024-06-23T19:40:11Z | Blind Baselines Beat Membership Inference Attacks for Foundation Models | Membership inference (MI) attacks try to determine if a data sample was used to train a machine learning model. For foundation models trained on unknown Web data, MI attacks can be used to detect copyrighted training materials, measure test set contamination, or audit machine unlearning. Unfortunately, we find that evaluations of MI attacks for foundation models are flawed, because they sample members and non-members from different distributions. For 8 published MI evaluation datasets, we show that blind attacks -- that distinguish the member and non-member distributions without looking at any trained model -- outperform state-of-the-art MI attacks. Existing evaluations thus tell us nothing about membership leakage of a foundation model's training data. | [
"['Debeshee Das' 'Jie Zhang' 'Florian Tramèr']"
] |
null | null | 2406.16206 | null | null | http://arxiv.org/pdf/2406.16206v1 | 2024-06-23T20:03:55Z | 2024-06-23T20:03:55Z | Zero-Inflated Tweedie Boosted Trees with CatBoost for Insurance Loss
Analytics | In this paper, we explore advanced modifications to the Tweedie regression model in order to address its limitations in modeling aggregate claims for various types of insurance such as automobile, health, and liability. Traditional Tweedie models, while effective in capturing the probability and magnitude of claims, usually fall short in accurately representing the large incidence of zero claims. Our recommended approach involves a refined modeling of the zero-claim process, together with the integration of boosting methods, which leverage an iterative process to enhance predictive accuracy. Despite the inherent slowdown in learning algorithms due to this iteration, several efficient implementations that also support precise parameter tuning, such as XGBoost, LightGBM, and CatBoost, have emerged. Nonetheless, we chose to utilize CatBoost, an efficient boosting approach that effectively handles categorical and other special types of data. The core contribution of our paper is the combination of separate modeling for zero claims with tree-based boosting ensemble methods within a CatBoost framework, assuming that the inflated probability of zero is a function of the mean parameter. The efficacy of our enhanced Tweedie model is demonstrated through an application to an insurance telematics dataset, which presents the additional complexity of compositional feature variables. Our modeling results reveal a marked improvement in model performance, showcasing its potential to deliver more accurate predictions suitable for insurance claim analytics. | [
"['Banghee So' 'Emiliano A. Valdez']"
] |
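A rough sketch in the spirit of the entry above (2406.16206): a two-stage zero-inflated model with a CatBoost classifier for the zero/non-zero split and a CatBoost regressor with a Tweedie objective on the positive claims. It assumes array inputs and a CatBoost build that supports the Tweedie loss; the paper instead ties the zero-inflation probability to the Tweedie mean parameter, which this simple decomposition does not do.

```python
import numpy as np
from catboost import CatBoostClassifier, CatBoostRegressor

def fit_zero_inflated_tweedie(X, y, cat_features=None, variance_power=1.5):
    """Two-stage sketch: P(claim > 0) via a classifier, expected positive
    claim size via a Tweedie-objective regressor fit on the positive records."""
    is_pos = (y > 0).astype(int)
    clf = CatBoostClassifier(iterations=300, verbose=0)
    clf.fit(X, is_pos, cat_features=cat_features)

    reg = CatBoostRegressor(
        iterations=300, verbose=0,
        loss_function=f"Tweedie:variance_power={variance_power}",
    )
    reg.fit(X[y > 0], y[y > 0], cat_features=cat_features)
    return clf, reg

def predict_expected_claim(clf, reg, X):
    # E[claim] ~= P(claim > 0) * E[claim | claim > 0]
    return clf.predict_proba(X)[:, 1] * reg.predict(X)

# Tiny synthetic usage (all-numeric features, ~70% zero claims)
X = np.random.rand(500, 4)
y = np.where(np.random.rand(500) < 0.7, 0.0, np.random.gamma(2.0, 100.0, 500))
clf, reg = fit_zero_inflated_tweedie(X, y)
print(predict_expected_claim(clf, reg, X[:5]))
```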
null | null | 2406.16213 | null | null | http://arxiv.org/pdf/2406.16213v1 | 2024-06-23T20:34:18Z | 2024-06-23T20:34:18Z | Provable Statistical Rates for Consistency Diffusion Models | Diffusion models have revolutionized various application domains, including computer vision and audio generation. Despite the state-of-the-art performance, diffusion models are known for their slow sample generation due to the extensive number of steps involved. In response, consistency models have been developed to merge multiple steps in the sampling process, thereby significantly boosting the speed of sample generation without compromising quality. This paper contributes towards the first statistical theory for consistency models, formulating their training as a distribution discrepancy minimization problem. Our analysis yields statistical estimation rates based on the Wasserstein distance for consistency models, matching those of vanilla diffusion models. Additionally, our results encompass the training of consistency models through both distillation and isolation methods, demystifying their underlying advantage. | [
"['Zehao Dou' 'Minshuo Chen' 'Mengdi Wang' 'Zhuoran Yang']"
] |
null | null | 2406.16218 | null | null | http://arxiv.org/pdf/2406.16218v1 | 2024-06-23T21:05:31Z | 2024-06-23T21:05:31Z | Trace is the New AutoDiff -- Unlocking Efficient Optimization of
Computational Workflows | We study a class of optimization problems motivated by automating the design and update of AI systems like coding assistants, robots, and copilots. We propose an end-to-end optimization framework, Trace, which treats the computational workflow of an AI system as a graph akin to neural networks, based on a generalization of back-propagation. Optimization of computational workflows often involves rich feedback (e.g. console output or user's responses), heterogeneous parameters (e.g. prompts, hyper-parameters, codes), and intricate objectives (beyond maximizing a score). Moreover, its computation graph can change dynamically with the inputs and parameters. We frame a new mathematical setup of iterative optimization, Optimization with Trace Oracle (OPTO), to capture and abstract these properties so as to design optimizers that work across many domains. In OPTO, an optimizer receives an execution trace along with feedback on the computed output and updates parameters iteratively. Trace is the tool to implement OPTO in practice. Trace has a Python interface that efficiently converts a computational workflow into an OPTO instance using a PyTorch-like interface. Using Trace, we develop a general-purpose LLM-based optimizer called OptoPrime that can effectively solve OPTO problems. In empirical studies, we find that OptoPrime is capable of first-order numerical optimization, prompt optimization, hyper-parameter tuning, robot controller design, code debugging, etc., and is often competitive with specialized optimizers for each domain. We believe that Trace, OptoPrime and the OPTO framework will enable the next generation of interactive agents that automatically adapt using various kinds of feedback. Website: https://microsoft.github.io/Trace | [
"['Ching-An Cheng' 'Allen Nie' 'Adith Swaminathan']"
] |
null | null | 2406.16220 | null | null | http://arxiv.org/pdf/2406.16220v1 | 2024-06-23T21:25:06Z | 2024-06-23T21:25:06Z | Learning Run-time Safety Monitors for Machine Learning Components | For machine learning components used as part of autonomous systems (AS) in carrying out critical tasks it is crucial that assurance of the models can be maintained in the face of post-deployment changes (such as changes in the operating environment of the system). A critical part of this is to be able to monitor when the performance of the model at runtime (as a result of changes) poses a safety risk to the system. This is a particularly difficult challenge when ground truth is unavailable at runtime. In this paper we introduce a process for creating safety monitors for ML components through the use of degraded datasets and machine learning. The safety monitor that is created is deployed to the AS in parallel to the ML component to provide a prediction of the safety risk associated with the model output. We demonstrate the viability of our approach through some initial experiments using publicly available speed sign datasets. | [
"['Ozan Vardal' 'Richard Hawkins' 'Colin Paterson' 'Chiara Picardi'\n 'Daniel Omeiza' 'Lars Kunze' 'Ibrahim Habli']"
] |
null | null | 2406.16221 | null | null | http://arxiv.org/pdf/2406.16221v1 | 2024-06-23T21:28:50Z | 2024-06-23T21:28:50Z | F-FOMAML: GNN-Enhanced Meta-Learning for Peak Period Demand Forecasting
with Proxy Data | Demand prediction is a crucial task for e-commerce and physical retail businesses, especially during high-stakes sales events. However, the limited availability of historical data from these peak periods poses a significant challenge for traditional forecasting methods. In this paper, we propose a novel approach that leverages strategically chosen proxy data, reflective of potential sales patterns from similar entities during non-peak periods, enriched by features learned from a graph neural network (GNN)-based forecasting model, to predict demand during peak events. We formulate demand prediction as a meta-learning problem and develop the Feature-based First-Order Model-Agnostic Meta-Learning (F-FOMAML) algorithm, which leverages proxy data from non-peak periods and GNN-generated relational metadata to learn feature-specific layer parameters, thereby adapting to demand forecasts for peak events. Theoretically, we show that by considering domain similarities through task-specific metadata, our model achieves improved generalization, where the excess risk decreases as the number of training tasks increases. Empirical evaluations on large-scale industrial datasets demonstrate the superiority of our approach. Compared to existing state-of-the-art models, our method demonstrates a notable improvement in demand prediction accuracy, reducing the Mean Absolute Error by 26.24% on an internal vending machine dataset and by 1.04% on the publicly accessible JD.com dataset. | [
"['Zexing Xu' 'Linjun Zhang' 'Sitan Yang' 'Rasoul Etesami' 'Hanghang Tong'\n 'Huan Zhang' 'Jiawei Han']"
] |
null | null | 2406.16227 | null | null | http://arxiv.org/pdf/2406.16227v1 | 2024-06-23T21:45:04Z | 2024-06-23T21:45:04Z | VICatMix: variational Bayesian clustering and variable selection for
discrete biomedical data | Effective clustering of biomedical data is crucial in precision medicine, enabling accurate stratification of patients or samples. However, the growth in availability of high-dimensional categorical data, including 'omics data, necessitates computationally efficient clustering algorithms. We present VICatMix, a variational Bayesian finite mixture model designed for the clustering of categorical data. The use of variational inference (VI) in its training allows the model to outperform competitors in terms of efficiency, while maintaining high accuracy. VICatMix furthermore performs variable selection, enhancing its performance on high-dimensional, noisy data. The proposed model incorporates summarisation and model averaging to mitigate poor local optima in VI, allowing for improved estimation of the true number of clusters simultaneously with feature saliency. We demonstrate the performance of VICatMix with both simulated and real-world data, including applications to datasets from The Cancer Genome Atlas (TCGA), showing its use in cancer subtyping and driver gene discovery. We demonstrate VICatMix's utility in integrative cluster analysis with different 'omics datasets, enabling the discovery of novel subtypes. \textbf{Availability:} VICatMix is freely available as an R package, incorporating C++ for faster computation, at \url{https://github.com/j-ackierao/VICatMix}. | [
"['Paul D. W. Kirk' 'Jackie Rao']"
] |
null | null | 2406.16231 | null | null | http://arxiv.org/pdf/2406.16231v1 | 2024-06-23T22:05:52Z | 2024-06-23T22:05:52Z | Gradual Divergence for Seamless Adaptation: A Novel Domain Incremental
Learning Method | Domain incremental learning (DIL) poses a significant challenge in real-world scenarios, as models need to be sequentially trained on diverse domains over time, all the while avoiding catastrophic forgetting. Mitigating representation drift, which refers to the phenomenon of learned representations undergoing changes as the model adapts to new tasks, can help alleviate catastrophic forgetting. In this study, we propose a novel DIL method named DARE, featuring a three-stage training process: Divergence, Adaptation, and REfinement. This process gradually adapts the representations associated with new tasks into the feature space spanned by samples from previous tasks, simultaneously integrating task-specific decision boundaries. Additionally, we introduce a novel strategy for buffer sampling and demonstrate the effectiveness of our proposed method, combined with this sampling strategy, in reducing representation drift within the feature encoder. This contribution effectively alleviates catastrophic forgetting across multiple DIL benchmarks. Furthermore, our approach prevents sudden representation drift at task boundaries, resulting in a well-calibrated DIL model that maintains the performance on previous tasks. | [
"['Kishaan Jeeveswaran' 'Elahe Arani' 'Bahram Zonooz']"
] |
null | null | 2406.16232 | null | null | http://arxiv.org/pdf/2406.16232v1 | 2024-06-23T22:06:25Z | 2024-06-23T22:06:25Z | Jacobian Descent for Multi-Objective Optimization | Many optimization problems are inherently multi-objective. To address them, we formalize Jacobian descent (JD), a direct generalization of gradient descent for vector-valued functions. Each step of this algorithm relies on a Jacobian matrix consisting of one gradient per objective. The aggregator, responsible for reducing this matrix into an update vector, characterizes JD. While the multi-task learning literature already contains a variety of aggregators, they often lack some natural properties. In particular, the update should not conflict with any objective and should scale proportionally to the norm of each gradient. We propose a new aggregator specifically designed to satisfy this. Emphasizing conflict between objectives, we then highlight direct applications for our methods. Most notably, we introduce instance-wise risk minimization (IWRM), a learning paradigm in which the loss of each training example is considered a separate objective. On simple image classification tasks, IWRM exhibits promising results compared to the direct minimization of the average loss. The performance of our aggregator in those experiments also corroborates our theoretical findings. Lastly, as speed is the main limitation of JD, we provide a path towards a more efficient implementation. | [
"['Pierre Quinton' 'Valérian Rey']"
] |
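A minimal sketch of a Jacobian descent step for the entry above (2406.16232): one gradient per objective is stacked into a Jacobian and reduced by an aggregator. The plain mean aggregator shown here is only a placeholder; the paper's aggregator is designed to avoid conflicting with any objective and to scale with each gradient's norm, properties the mean does not guarantee.

```python
import torch

def jacobian_descent_step(params, objectives, aggregator, lr=1e-2):
    """One JD step: stack one gradient per objective into a Jacobian,
    reduce it with `aggregator`, and apply the resulting update."""
    grads = []
    for obj in objectives:
        g = torch.autograd.grad(obj(params), params)[0]
        grads.append(g.flatten())
    jacobian = torch.stack(grads)        # shape: (num_objectives, num_params)
    update = aggregator(jacobian)        # shape: (num_params,)
    with torch.no_grad():
        params -= lr * update.view_as(params)
    return params

def mean_aggregator(jacobian):
    # Naive choice: average the per-objective gradients (may conflict with some objectives).
    return jacobian.mean(dim=0)

# Toy usage: two quadratic objectives pulling one parameter vector toward 0 and toward 1
params = torch.tensor([1.0, -2.0], requires_grad=True)
objs = [lambda p: (p ** 2).sum(), lambda p: ((p - 1.0) ** 2).sum()]
for _ in range(200):
    params = jacobian_descent_step(params, objs, mean_aggregator)
print(params)   # approaches the compromise solution at 0.5 in each coordinate
```

Swapping `mean_aggregator` for a more careful reduction of the Jacobian is exactly the design axis the abstract describes.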
null | null | 2406.16235 | null | null | http://arxiv.org/pdf/2406.16235v1 | 2024-06-23T22:53:47Z | 2024-06-23T22:53:47Z | Preference Tuning For Toxicity Mitigation Generalizes Across Languages | Detoxifying multilingual Large Language Models (LLMs) has become crucial due to their increasing global use. In this work, we explore zero-shot cross-lingual generalization of preference tuning in detoxifying LLMs. Unlike previous studies that show limited cross-lingual generalization for other safety tasks, we demonstrate that Direct Preference Optimization (DPO) training with only English data can significantly reduce toxicity in multilingual open-ended generations. For example, the probability of mGPT-1.3B generating toxic continuations drops from 46.8% to 3.9% across 17 different languages after training. Our results also extend to other multilingual LLMs, such as BLOOM, Llama3, and Aya-23. Using mechanistic interpretability tools like causal intervention and activation analysis, we identified the dual multilinguality property of MLP layers in LLMs, which explains the cross-lingual generalization of DPO. Finally, we show that bilingual sentence retrieval can predict the cross-lingual transferability of DPO preference tuning. | [
"['Xiaochen Li' 'Zheng-Xin Yong' 'Stephen H. Bach']"
] |
null | null | 2406.16241 | null | null | http://arxiv.org/pdf/2406.16241v1 | 2024-06-23T23:36:26Z | 2024-06-23T23:36:26Z | Position: Benchmarking is Limited in Reinforcement Learning Research | Novel reinforcement learning algorithms, or improvements on existing ones, are commonly justified by evaluating their performance on benchmark environments and are compared to an ever-changing set of standard algorithms. However, despite numerous calls for improvements, experimental practices continue to produce misleading or unsupported claims. One reason for the ongoing substandard practices is that conducting rigorous benchmarking experiments requires substantial computational time. This work investigates the sources of increased computation costs in rigorous experiment designs. We show that conducting rigorous performance benchmarks will likely have computational costs that are often prohibitive. As a result, we argue for using an additional experimentation paradigm to overcome the limitations of benchmarking. | [
"['Scott M. Jordan' 'Adam White' 'Bruno Castro da Silva' 'Martha White'\n 'Philip S. Thomas']"
] |
null | null | 2406.16249 | null | null | http://arxiv.org/pdf/2406.16249v1 | 2024-06-24T01:09:33Z | 2024-06-24T01:09:33Z | An Optimal Tightness Bound for the Simulation Lemma | We present a bound for value-prediction error with respect to model misspecification that is tight, including constant factors. This is a direct improvement of the "simulation lemma," a foundational result in reinforcement learning. We demonstrate that existing bounds are quite loose, becoming vacuous for large discount factors, due to the suboptimal treatment of compounding probability errors. By carefully considering this quantity on its own, instead of as a subcomponent of value error, we derive a bound that is sub-linear with respect to transition function misspecification. We then demonstrate broader applicability of this technique, improving a similar bound in the related subfield of hierarchical abstraction. | [
"['Sam Lobel' 'Ronald Parr']"
] |
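For orientation, one textbook form of the simulation lemma that the entry above (2406.16249) tightens is shown below; the constants and choice of norm vary across presentations, and this is not the paper's improved bound.

```latex
% If \hat{M} matches M up to reward error \epsilon_R and transition error
% \epsilon_P (in total variation), then for any policy \pi and discount \gamma:
\[
\bigl| V^{\pi}_{M}(s) - V^{\pi}_{\hat{M}}(s) \bigr|
  \;\le\; \frac{\epsilon_R}{1-\gamma} \;+\; \frac{\gamma\, \epsilon_P\, R_{\max}}{(1-\gamma)^{2}} ,
\]
% where the (1-\gamma)^{-2} factor comes from compounding transition errors --
% the term the abstract argues is treated suboptimally in existing bounds.
```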
null | null | 2406.16252 | null | null | http://arxiv.org/pdf/2406.16252v2 | 2024-06-25T03:17:40Z | 2024-06-24T01:22:54Z | Graph-Augmented LLMs for Personalized Health Insights: A Case Study in
Sleep Analysis | Health monitoring systems have revolutionized modern healthcare by enabling the continuous capture of physiological and behavioral data, essential for preventive measures and early health intervention. While integrating this data with Large Language Models (LLMs) has shown promise in delivering interactive health advice, traditional methods like Retrieval-Augmented Generation (RAG) and fine-tuning often fail to fully utilize the complex, multi-dimensional, and temporally relevant data from wearable devices. These conventional approaches typically provide limited actionable and personalized health insights due to their inadequate capacity to dynamically integrate and interpret diverse health data streams. In response, this paper introduces a graph-augmented LLM framework designed to significantly enhance the personalization and clarity of health insights. Utilizing a hierarchical graph structure, the framework captures inter and intra-patient relationships, enriching LLM prompts with dynamic feature importance scores derived from a Random Forest Model. The effectiveness of this approach is demonstrated through a sleep analysis case study involving 20 college students during the COVID-19 lockdown, highlighting the potential of our model to generate actionable and personalized health insights efficiently. We leverage another LLM to evaluate the insights for relevance, comprehensiveness, actionability, and personalization, addressing the critical need for models that process and interpret complex health data effectively. Our findings show that augmenting prompts with our framework yields significant improvements in all 4 criteria. Through our framework, we can elicit well-crafted, more thoughtful responses tailored to a specific patient. | [
"['Ajan Subramanian' 'Zhongqi Yang' 'Iman Azimi' 'Amir M. Rahmani']"
] |
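One ingredient of the framework in the entry above (2406.16252) is enriching LLM prompts with feature-importance scores from a Random Forest. The sketch below shows that step only, on synthetic stand-in wearable features; the feature names, prompt wording, and model settings are assumptions, and the hierarchical patient graph is not modeled here.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def importance_prompt(X, y, feature_names, top_k=3):
    """Fit a Random Forest and turn its feature importances into a prompt snippet."""
    rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
    ranked = sorted(zip(feature_names, rf.feature_importances_),
                    key=lambda t: t[1], reverse=True)[:top_k]
    lines = [f"- {name}: importance {score:.2f}" for name, score in ranked]
    return "Most informative wearable signals for this user's sleep:\n" + "\n".join(lines)

# Synthetic stand-in for wearable data: [steps, resting_hr, screen_time_min]
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 0.6 * X[:, 1] - 0.3 * X[:, 2] + rng.normal(scale=0.1, size=200)  # toy sleep score
print(importance_prompt(X, y, ["steps", "resting_hr", "screen_time_min"]))
```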
null | null | 2406.16254 | null | null | http://arxiv.org/pdf/2406.16254v1 | 2024-06-24T01:31:03Z | 2024-06-24T01:31:03Z | Confidence Regulation Neurons in Language Models | Despite their widespread use, the mechanisms by which large language models (LLMs) represent and regulate uncertainty in next-token predictions remain largely unexplored. This study investigates two critical components believed to influence this uncertainty: the recently discovered entropy neurons and a new set of components that we term token frequency neurons. Entropy neurons are characterized by an unusually high weight norm and influence the final layer normalization (LayerNorm) scale to effectively scale down the logits. Our work shows that entropy neurons operate by writing onto an unembedding null space, allowing them to impact the residual stream norm with minimal direct effect on the logits themselves. We observe the presence of entropy neurons across a range of models, up to 7 billion parameters. On the other hand, token frequency neurons, which we discover and describe here for the first time, boost or suppress each token's logit proportionally to its log frequency, thereby shifting the output distribution towards or away from the unigram distribution. Finally, we present a detailed case study where entropy neurons actively manage confidence in the setting of induction, i.e. detecting and continuing repeated subsequences. | [
"['Alessandro Stolfo' 'Ben Wu' 'Wes Gurnee' 'Yonatan Belinkov'\n 'Xingyi Song' 'Mrinmaya Sachan' 'Neel Nanda']"
] |
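A tiny numerical illustration of the mechanism described in the entry above (2406.16254): uniformly scaling the logits down (the effect attributed to entropy neurons acting through the final LayerNorm scale) raises the entropy of the next-token distribution without changing its ranking. This is a toy demonstration, not the paper's analysis.

```python
import numpy as np

def softmax_entropy(logits):
    """Entropy of the softmax distribution over the given logits (in nats)."""
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return -(p * np.log(p)).sum()

logits = np.array([4.0, 1.0, 0.5, 0.0])
for scale in (1.0, 0.5, 0.1):          # smaller scale = scaled-down logits
    print(scale, round(softmax_entropy(scale * logits), 3))  # entropy rises as scale shrinks
```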
null | null | 2406.16255 | null | null | http://arxiv.org/pdf/2406.16255v2 | 2024-06-30T03:55:48Z | 2024-06-24T01:37:18Z | Uncertainty-Aware Reward-Free Exploration with General Function
Approximation | Mastering multiple tasks through exploration and learning in an environment poses a significant challenge in reinforcement learning (RL). Unsupervised RL has been introduced to address this challenge by training policies with intrinsic rewards rather than extrinsic rewards. However, current intrinsic reward designs and unsupervised RL algorithms often overlook the heterogeneous nature of collected samples, thereby diminishing their sample efficiency. To overcome this limitation, in this paper, we propose a reward-free RL algorithm called GFA-RFE. The key idea behind our algorithm is an uncertainty-aware intrinsic reward for exploring the environment and an uncertainty-weighted learning process to handle heterogeneous uncertainty in different samples. Theoretically, we show that in order to find an $\epsilon$-optimal policy, GFA-RFE needs to collect $\tilde{O}(H^2 \log N_{\mathcal{F}}(\epsilon) \, \mathrm{dim}(\mathcal{F}) / \epsilon^2)$ episodes, where $\mathcal{F}$ is the value function class with covering number $N_{\mathcal{F}}(\epsilon)$ and generalized eluder dimension $\mathrm{dim}(\mathcal{F})$. Such a result outperforms all existing reward-free RL algorithms. We further implement and evaluate GFA-RFE across various domains and tasks in the DeepMind Control Suite. Experimental results show that GFA-RFE outperforms or is comparable to state-of-the-art unsupervised RL algorithms. | [
"['Junkai Zhang' 'Weitong Zhang' 'Dongruo Zhou' 'Quanquan Gu']"
] |
null | null | 2406.16257 | null | null | http://arxiv.org/pdf/2406.16257v1 | 2024-06-24T01:45:13Z | 2024-06-24T01:45:13Z | Towards Scalable Exact Machine Unlearning Using Parameter-Efficient
Fine-Tuning | Machine unlearning is the process of efficiently removing the influence of a training data instance from a trained machine learning model without retraining it from scratch. A popular subclass of unlearning approaches is exact machine unlearning, which focuses on techniques that explicitly guarantee the removal of the influence of a data instance from a model. Exact unlearning approaches use a machine learning model in which individual components are trained on disjoint subsets of the data. During deletion, exact unlearning approaches only retrain the affected components rather than the entire model. While existing approaches reduce retraining costs, it can still be expensive for an organization to retrain a model component as it requires halting a system in production, which leads to service failure and adversely impacts customers. To address these challenges, we introduce an exact unlearning framework -- Sequence-aware Sharded Sliced Training (S3T), designed to enhance the deletion capabilities of an exact unlearning system while minimizing the impact on model's performance. At the core of S3T, we utilize a lightweight parameter-efficient fine-tuning approach that enables parameter isolation by sequentially training layers with disjoint data slices. This enables efficient unlearning by simply deactivating the layers affected by data deletion. Furthermore, to reduce the retraining cost and improve model performance, we train the model on multiple data sequences, which allows S3T to handle an increased number of deletion requests. Both theoretically and empirically, we demonstrate that S3T attains superior deletion capabilities and enhanced performance compared to baselines across a wide range of settings. | [
"['Somnath Basu Roy Chowdhury' 'Krzysztof Choromanski' 'Arijit Sehanobish'\n 'Avinava Dubey' 'Snigdha Chaturvedi']"
] |
null | null | 2406.16258 | null | null | http://arxiv.org/pdf/2406.16258v1 | 2024-06-24T01:51:09Z | 2024-06-24T01:51:09Z | MEReQ: Max-Ent Residual-Q Inverse RL for Sample-Efficient Alignment from
Intervention | Aligning robot behavior with human preferences is crucial for deploying embodied AI agents in human-centered environments. A promising solution is interactive imitation learning from human intervention, where a human expert observes the policy's execution and provides interventions as feedback. However, existing methods often fail to utilize the prior policy efficiently to facilitate learning, thus hindering sample efficiency. In this work, we introduce MEReQ (Maximum-Entropy Residual-Q Inverse Reinforcement Learning), designed for sample-efficient alignment from human intervention. Instead of inferring the complete human behavior characteristics, MEReQ infers a residual reward function that captures the discrepancy between the human expert's and the prior policy's underlying reward functions. It then employs Residual Q-Learning (RQL) to align the policy with human preferences using this residual reward function. Extensive evaluations on simulated and real-world tasks demonstrate that MEReQ achieves sample-efficient policy alignment from human intervention. | [
"['Yuxin Chen' 'Chen Tang' 'Chenran Li' 'Ran Tian' 'Peter Stone'\n 'Masayoshi Tomizuka' 'Wei Zhan']"
] |
null | null | 2406.16270 | null | null | http://arxiv.org/pdf/2406.16270v1 | 2024-06-24T02:31:00Z | 2024-06-24T02:31:00Z | Learning-Based Heavy Hitters and Flow Frequency Estimation in Streams | Identifying heavy hitters and estimating the frequencies of flows are fundamental tasks in various network domains. Existing approaches to this challenge can broadly be categorized into two groups, hashing-based and competing-counter-based. The Count-Min sketch is a standard example of a hashing-based algorithm, and the Space Saving algorithm is an example of a competing-counter algorithm. Recent works have explored the use of machine learning to enhance algorithms for frequency estimation problems, under the algorithms with prediction framework. However, these works have focused solely on the hashing-based approach, which may not be best for identifying heavy hitters. In this paper, we present the first learned competing-counter-based algorithm, called LSS, for identifying heavy hitters, top k, and flow frequency estimation that utilizes the well-known Space Saving algorithm. We provide theoretical insights into how and to what extent our approach can improve upon Space Saving, backed by experimental results on both synthetic and real-world datasets. Our evaluation demonstrates that LSS can enhance the accuracy and efficiency of Space Saving in identifying heavy hitters, top k, and estimating flow frequencies. | [
"['Rana Shahout' 'Michael Mitzenmacher']"
] |
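For reference, the classic (unlearned) Space Saving algorithm that LSS in the entry above (2406.16270) builds on can be written in a few lines; the learned component that the paper adds on top is not shown here.

```python
def space_saving(stream, k):
    """Classic Space Saving: keep at most k counters; when a new item arrives
    and the table is full, evict the current minimum and inherit its count (+1)."""
    counters = {}
    for x in stream:
        if x in counters:
            counters[x] += 1
        elif len(counters) < k:
            counters[x] = 1
        else:
            victim = min(counters, key=counters.get)   # smallest counter gets evicted
            counters[x] = counters.pop(victim) + 1     # newcomer inherits its count
    return counters  # item -> estimated (over-)count

print(space_saving("aababcaadaeafaag", 3))
```

Because an evicted item's count is inherited, every stored count upper-bounds the item's true frequency, which is the property competing-counter methods exploit for heavy-hitter and top-k detection.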
null | null | 2406.16282 | null | null | http://arxiv.org/pdf/2406.16282v1 | 2024-06-24T03:09:15Z | 2024-06-24T03:09:15Z | Reducing Fine-Tuning Memory Overhead by Approximate and Memory-Sharing
Backpropagation | Fine-tuning pretrained large models on downstream tasks is an important problem, which, however, suffers from huge memory overhead due to the large number of parameters. This work strives to reduce memory overhead in fine-tuning from the perspectives of the activation function and layer normalization. To this end, we propose the Approximate Backpropagation (Approx-BP) theory, which provides the theoretical feasibility of decoupling the forward and backward passes. We apply our Approx-BP theory to backpropagation training and derive memory-efficient alternatives to the GELU and SiLU activation functions, which use the derivative functions of ReLUs in the backward pass while keeping their forward pass unchanged. In addition, we introduce a Memory-Sharing Backpropagation strategy, which enables the activation memory to be shared by two adjacent layers, thereby removing activation memory usage redundancy. Our method neither induces extra computation nor reduces training efficiency. We conduct extensive experiments with pretrained vision and language models, and the results demonstrate that our proposal can reduce up to $\sim 30\%$ of the peak memory usage. Our code is released at https://github.com/yyyyychen/LowMemoryBP. | [
"['Yuchen Yang' 'Yingdong Shi' 'Cheems Wang' 'Xiantong Zhen' 'Yuxuan Shi'\n 'Jun Xu']"
] |
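A minimal PyTorch sketch of the activation-function idea in the entry above (2406.16282): keep the GELU forward pass unchanged but back-propagate through the ReLU derivative, so only a boolean mask of the input (conceptually one bit per element) needs to be stored. This is an illustrative reading of the idea, not the released implementation, and the memory-sharing strategy between adjacent layers is not shown.

```python
import torch

class ApproxGELU(torch.autograd.Function):
    """GELU in the forward pass; ReLU's derivative (a 0/1 mask) in the backward pass."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x > 0)           # only a boolean mask is kept for backprop
        return torch.nn.functional.gelu(x)     # forward output is the exact GELU

    @staticmethod
    def backward(ctx, grad_output):
        (mask,) = ctx.saved_tensors
        return grad_output * mask.to(grad_output.dtype)  # ReLU-style surrogate gradient

x = torch.randn(4, requires_grad=True)
ApproxGELU.apply(x).sum().backward()
print(x.grad)   # 1 where x > 0, else 0, even though the forward values are GELU outputs
```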
null | null | 2406.16295 | null | null | http://arxiv.org/pdf/2406.16295v1 | 2024-06-24T03:37:51Z | 2024-06-24T03:37:51Z | Relaxing Continuous Constraints of Equivariant Graph Neural Networks for
Physical Dynamics Learning | Incorporating Euclidean symmetries (e.g. rotation equivariance) as inductive biases into graph neural networks has improved their generalization ability and data efficiency in unbounded physical dynamics modeling. However, in various scientific and engineering applications, the symmetries of dynamics are frequently discrete due to the boundary conditions. Thus, existing GNNs either overlook necessary symmetry, resulting in suboptimal representation ability, or impose excessive equivariance, which fails to generalize to unobserved symmetric dynamics. In this work, we propose a general Discrete Equivariant Graph Neural Network (DEGNN) that guarantees equivariance to a given discrete point group. Specifically, we show that such discrete equivariant message passing could be constructed by transforming geometric features into permutation-invariant embeddings. Through relaxing continuous equivariant constraints, DEGNN can employ more geometric feature combinations to approximate unobserved physical object interaction functions. Two implementation approaches of DEGNN are proposed based on ranking or pooling permutation-invariant functions. We apply DEGNN to various physical dynamics, ranging from particle, molecular, crowd to vehicle dynamics. In twenty scenarios, DEGNN significantly outperforms existing state-of-the-art approaches. Moreover, we show that DEGNN is data efficient, learning with less data, and can generalize across scenarios such as unobserved orientation. | [
"['Zinan Zheng' 'Yang Liu' 'Jia Li' 'Jianhua Yao' 'Yu Rong']"
] |
null | null | 2406.16300 | null | null | http://arxiv.org/pdf/2406.16300v1 | 2024-06-24T03:53:30Z | 2024-06-24T03:53:30Z | Landscaping Linear Mode Connectivity | The presence of linear paths in parameter space between two different network solutions in certain cases, i.e., linear mode connectivity (LMC), has garnered interest from both theoretical and practical fronts. There has been significant research that either practically designs algorithms catered for connecting networks by adjusting for the permutation symmetries as well as some others that more theoretically construct paths through which networks can be connected. Yet, the core reasons for the occurrence of LMC, when in fact it does occur, in the highly non-convex loss landscapes of neural networks are far from clear. In this work, we take a step towards understanding it by providing a model of how the loss landscape needs to behave topographically for LMC (or the lack thereof) to manifest. Concretely, we present a `mountainside and ridge' perspective that helps to neatly tie together different geometric features that can be spotted in the loss landscape along the training runs. We also complement this perspective by providing a theoretical analysis of the barrier height, for which we provide empirical support, and which additionally extends as a faithful predictor of layer-wise LMC. We close with a toy example that provides further intuition on how barriers arise in the first place, all in all, showcasing the larger aim of the work -- to provide a working model of the landscape and its topography for the occurrence of LMC. | [
"['Sidak Pal Singh' 'Linara Adilova' 'Michael Kamp' 'Asja Fischer'\n 'Bernhard Schölkopf' 'Thomas Hofmann']"
] |