categories | doi | id | year | venue | link | updated | published | title | abstract | authors
string | string | string | float64 | string | string | string | string | string | string | sequence
---|---|---|---|---|---|---|---|---|---|---|
null | null | 2405.17202 | null | null | http://arxiv.org/pdf/2405.17202v2 | 2024-06-07T18:24:13Z | 2024-05-27T14:24:47Z | Efficient multi-prompt evaluation of LLMs | Most popular benchmarks for comparing LLMs rely on a limited set of prompt templates, which may not fully capture the LLMs' abilities and can affect the reproducibility of results on leaderboards. Many recent works empirically verify prompt sensitivity and advocate for changes in LLM evaluation. In this paper, we consider the problem of estimating the performance distribution across many prompt variants instead of finding a single prompt to evaluate with. We introduce PromptEval, a method for estimating performance across a large set of prompts, borrowing strength across prompts and examples to produce accurate estimates under practical evaluation budgets. The resulting distribution can be used to obtain performance quantiles to construct various robust performance metrics (e.g., top 95% quantile or median). We prove that PromptEval consistently estimates the performance distribution and demonstrate its efficacy empirically on three prominent LLM benchmarks: MMLU, BIG-bench Hard, and LMentry. For example, PromptEval can accurately estimate performance quantiles across 100 prompt templates on MMLU with a budget equivalent to two single-prompt evaluations. Our code and data can be found at https://github.com/felipemaiapolo/prompt-eval. | [
"['Felipe Maia Polo' 'Ronald Xu' 'Lucas Weber' 'Mírian Silva'\n 'Onkar Bhardwaj' 'Leshem Choshen' 'Allysson Flavio Melo de Oliveira'\n 'Yuekai Sun' 'Mikhail Yurochkin']"
] |
null | null | 2405.17206 | null | null | http://arxiv.org/pdf/2405.17206v1 | 2024-05-21T16:06:51Z | 2024-05-21T16:06:51Z | A Novel Fusion Architecture for PD Detection Using Semi-Supervised
Speech Embeddings | We present a framework to recognize Parkinson's disease (PD) through an English pangram utterance collected using a web application from diverse recording settings and environments, including participants' homes. Our dataset includes a global cohort of 1306 participants, including 392 diagnosed with PD. Leveraging the diversity of the dataset, spanning various demographic properties (such as age, sex, and ethnicity), we used deep learning embeddings derived from semi-supervised models such as Wav2Vec 2.0, WavLM, and ImageBind representing the speech dynamics associated with PD. Our novel fusion model for PD classification, which aligns different speech embeddings into a cohesive feature space, demonstrated superior performance over standard concatenation-based fusion models and other baselines (including models built on traditional acoustic features). In a randomized data split configuration, the model achieved an Area Under the Receiver Operating Characteristic Curve (AUROC) of 88.94% and an accuracy of 85.65%. Rigorous statistical analysis confirmed that our model performs equitably across various demographic subgroups in terms of sex, ethnicity, and age, and remains robust regardless of disease duration. Furthermore, our model, when tested on two entirely unseen test datasets collected from clinical settings and from a PD care center, maintained AUROC scores of 82.12% and 78.44%, respectively. This affirms the model's robustness and its potential to enhance accessibility and health equity in real-world applications. | [
"['Tariq Adnan' 'Abdelrahman Abdelkader' 'Zipei Liu' 'Ekram Hossain'\n 'Sooyong Park' 'MD Saiful Islam' 'Ehsan Hoque']"
] |
null | null | 2405.17209 | null | null | http://arxiv.org/pdf/2405.17209v1 | 2024-05-23T01:14:22Z | 2024-05-23T01:14:22Z | How Do Transformers "Do" Physics? Investigating the Simple Harmonic
Oscillator | How do transformers model physics? Do transformers model systems with interpretable analytical solutions, or do they create "alien physics" that are difficult for humans to decipher? We take a step in demystifying this larger puzzle by investigating the simple harmonic oscillator (SHO), $\ddot{x}+2\gamma \dot{x}+\omega_0^2x=0$, one of the most fundamental systems in physics. Our goal is to identify the methods transformers use to model the SHO, and to do so we hypothesize and evaluate possible methods by analyzing the encoding of these methods' intermediates. We develop four criteria for the use of a method within the simple testbed of linear regression, where our method is $y = wx$ and our intermediate is $w$: (1) Can the intermediate be predicted from hidden states? (2) Is the intermediate's encoding quality correlated with model performance? (3) Can the majority of variance in hidden states be explained by the intermediate? (4) Can we intervene on hidden states to produce predictable outcomes? Armed with these two correlational (1,2), weak causal (3) and strong causal (4) criteria, we determine that transformers use known numerical methods to model trajectories of the simple harmonic oscillator, specifically the matrix exponential method. Our analysis framework can conveniently extend to high-dimensional linear systems and nonlinear systems, which we hope will help reveal the "world model" hidden in transformers. | [
"['Subhash Kantamneni' 'Ziming Liu' 'Max Tegmark']"
] |
null | null | 2405.17211 | null | null | http://arxiv.org/pdf/2405.17211v1 | 2024-05-27T14:33:06Z | 2024-05-27T14:33:06Z | Spectral-Refiner: Fine-Tuning of Accurate Spatiotemporal Neural Operator
for Turbulent Flows | Recent advancements in operator-type neural networks have shown promising results in approximating the solutions of spatiotemporal Partial Differential Equations (PDEs). However, these neural networks often entail considerable training expenses, and may not always achieve the desired accuracy required in many scientific and engineering disciplines. In this paper, we propose a new Spatiotemporal Fourier Neural Operator (SFNO) that learns maps between Bochner spaces, and a new learning framework to address these issues. This new paradigm leverages wisdom from traditional numerical PDE theory and techniques to refine the pipeline of commonly adopted end-to-end neural operator training and evaluations. Specifically, in the learning problems for the turbulent flow modeling by the Navier-Stokes Equations (NSE), the proposed architecture initiates the training with a few epochs for SFNO, concluding with the freezing of most model parameters. Then, the last linear spectral convolution layer is fine-tuned without the frequency truncation. The optimization uses a negative Sobolev norm for the first time as the loss in operator learning, defined through a reliable functional-type \emph{a posteriori} error estimator whose evaluation is almost exact thanks to the Parseval identity. This design allows the neural operators to effectively tackle low-frequency errors while the relief of the de-aliasing filter addresses high-frequency errors. Numerical experiments on commonly used benchmarks for the 2D NSE demonstrate significant improvements in both computational efficiency and accuracy, compared to end-to-end evaluation and traditional numerical PDE solvers. | [
"['Shuhao Cao' 'Francesco Brarda' 'Ruipeng Li' 'Yuanzhe Xi']"
] |
null | null | 2405.17216 | null | null | http://arxiv.org/pdf/2405.17216v1 | 2024-05-27T14:35:10Z | 2024-05-27T14:35:10Z | Autoformalizing Euclidean Geometry | Autoformalization involves automatically translating informal math into formal theorems and proofs that are machine-verifiable. Euclidean geometry provides an interesting and controllable domain for studying autoformalization. In this paper, we introduce a neuro-symbolic framework for autoformalizing Euclidean geometry, which combines domain knowledge, SMT solvers, and large language models (LLMs). One challenge in Euclidean geometry is that informal proofs rely on diagrams, leaving gaps in texts that are hard to formalize. To address this issue, we use theorem provers to fill in such diagrammatic information automatically, so that the LLM only needs to autoformalize the explicit textual steps, making it easier for the model. We also provide automatic semantic evaluation for autoformalized theorem statements. We construct LeanEuclid, an autoformalization benchmark consisting of problems from Euclid's Elements and the UniGeo dataset formalized in the Lean proof assistant. Experiments with GPT-4 and GPT-4V show the capability and limitations of state-of-the-art LLMs on autoformalizing geometry problems. The data and code are available at https://github.com/loganrjmurphy/LeanEuclid. | [
"['Logan Murphy' 'Kaiyu Yang' 'Jialiang Sun' 'Zhaoyu Li' 'Anima Anandkumar'\n 'Xujie Si']"
] |
null | null | 2405.17222 | null | null | http://arxiv.org/pdf/2405.17222v2 | 2024-05-28T09:24:49Z | 2024-05-27T14:40:03Z | A Retrospective of the Tutorial on Opportunities and Challenges of
Online Deep Learning | Machine learning algorithms have become indispensable in today's world. They support and accelerate the way we make decisions based on the data at hand. This acceleration means that data structures that were valid at one moment may no longer be valid in the future. With these changing data structures, it is necessary to adapt machine learning (ML) systems incrementally to the new data. This is done with the use of online learning or continuous ML technologies. While deep learning technologies have shown exceptional performance on predefined datasets, they have not been widely applied to online, streaming, and continuous learning. In this retrospective of our tutorial titled Opportunities and Challenges of Online Deep Learning held at ECML PKDD 2023, we provide a brief overview of the opportunities but also the potential pitfalls for the application of neural networks in online learning environments using the frameworks River and Deep-River. | [
"['Cedric Kulbach' 'Lucas Cazzonelli' 'Hoang-Anh Ngo'\n 'Minh-Huong Le-Nguyen' 'Albert Bifet']"
] |
null | null | 2405.17233 | null | null | http://arxiv.org/pdf/2405.17233v2 | 2024-06-03T02:46:53Z | 2024-05-27T14:49:39Z | CLAQ: Pushing the Limits of Low-Bit Post-Training Quantization for LLMs | Parameter quantization for Large Language Models (LLMs) has attracted increasing attention recently for reducing memory costs and improving computational efficiency. Early approaches have been widely adopted; however, the existing methods suffer from poor performance in low-bit (such as 2 to 3 bits) scenarios. In this paper, we present a novel and effective Column-Level Adaptive weight Quantization (CLAQ) framework by introducing three different types of adaptive strategies for LLM quantization. Firstly, a K-Means clustering based algorithm is proposed that allows dynamic generation of quantization centroids for each column of a parameter matrix. Secondly, we design an outlier-guided adaptive precision search strategy which can dynamically assign varying bit-widths to different columns. Finally, a dynamic outlier reservation scheme is developed to retain some parameters in their original floating-point precision, in exchange for boosted model performance. Experiments on various mainstream open source LLMs including LLaMA-1, LLaMA-2 and Yi demonstrate that our methods achieve state-of-the-art results across different bit settings, especially in extremely low-bit scenarios. Code is available at https://github.com/fayuge/CLAQ. | [
"['Haoyu Wang' 'Bei Liu' 'Hang Shao' 'Bo Xiao' 'Ke Zeng' 'Guanglu Wan'\n 'Yanmin Qian']"
] |
null | null | 2405.17234 | null | null | http://arxiv.org/pdf/2405.17234v5 | 2024-06-26T07:59:40Z | 2024-05-27T14:50:42Z | Benchmarking General-Purpose In-Context Learning | In-context learning (ICL) empowers generative models to address new tasks effectively and efficiently on the fly, without relying on any artificially crafted optimization techniques. In this paper, we study extending ICL to address a broader range of tasks with an extended learning horizon and higher improvement potential, namely General-Purpose In-Context Learning (GPICL). To this end, we introduce two lightweight benchmarks specifically crafted to train and evaluate GPICL functionalities. Each benchmark encompasses a vast number of tasks characterized by significant task variance, facilitating meta-training that minimizes inductive bias. These tasks are also crafted to promote long-horizon in-context learning through continuous generation and interaction. These characteristics require the models to leverage contexts and history interactions to enhance their capabilities, across domains such as language modeling, decision-making, and world modeling. Our experiments on the baseline models demonstrate that meta-training with minimal inductive bias and ICL from the ground up is feasible across all the domains we've discussed. Additionally, our findings indicate that the scale of parameters alone may not be crucial for ICL or GPICL, suggesting alternative approaches such as increasing the scale of contexts and memory states. | [
"['Fan Wang' 'Chuan Lin' 'Yang Cao' 'Yu Kang']"
] |
null | null | 2405.17243 | null | null | http://arxiv.org/pdf/2405.17243v1 | 2024-05-27T14:58:24Z | 2024-05-27T14:58:24Z | Surprise-Adaptive Intrinsic Motivation for Unsupervised Reinforcement
Learning | Both entropy-minimizing and entropy-maximizing (curiosity) objectives for unsupervised reinforcement learning (RL) have been shown to be effective in different environments, depending on the environment's level of natural entropy. However, neither method alone results in an agent that will consistently learn intelligent behavior across environments. In an effort to find a single entropy-based method that will encourage emergent behaviors in any environment, we propose an agent that can adapt its objective online depending on the entropy conditions, framing the choice as a multi-armed bandit problem. We devise a novel intrinsic feedback signal for the bandit, which captures the agent's ability to control the entropy in its environment. We demonstrate that such agents can learn to control entropy and exhibit emergent behaviors in both high- and low-entropy regimes and can learn skillful behaviors in benchmark tasks. Videos of the trained agents and summarized findings can be found on our project page https://sites.google.com/view/surprise-adaptive-agents | [
"['Adriana Hugessen' 'Roger Creus Castanyer' 'Faisal Mohamed'\n 'Glen Berseth']"
] |
null | null | 2405.17245 | null | null | http://arxiv.org/pdf/2405.17245v1 | 2024-05-27T15:01:04Z | 2024-05-27T15:01:04Z | Galaxy: A Resource-Efficient Collaborative Edge AI System for In-situ
Transformer Inference | Transformer-based models have unlocked a plethora of powerful intelligent applications at the edge, such as voice assistant in smart home. Traditional deployment approaches offload the inference workloads to the remote cloud server, which would induce substantial pressure on the backbone network as well as raise users' privacy concerns. To address that, in-situ inference has been recently recognized for edge intelligence, but it still confronts significant challenges stemming from the conflict between intensive workloads and limited on-device computing resources. In this paper, we leverage our observation that many edge environments usually comprise a rich set of accompanying trusted edge devices with idle resources and propose Galaxy, a collaborative edge AI system that breaks the resource walls across heterogeneous edge devices for efficient Transformer inference acceleration. Galaxy introduces a novel hybrid model parallelism to orchestrate collaborative inference, along with a heterogeneity-aware parallelism planning for fully exploiting the resource potential. Furthermore, Galaxy devises a tile-based fine-grained overlapping of communication and computation to mitigate the impact of tensor synchronizations on inference latency under bandwidth-constrained edge environments. Extensive evaluation based on prototype implementation demonstrates that Galaxy remarkably outperforms state-of-the-art approaches under various edge environment setups, achieving up to 2.5x end-to-end latency reduction. | [
"['Shengyuan Ye' 'Jiangsu Du' 'Liekang Zeng' 'Wenzhong Ou' 'Xiaowen Chu'\n 'Yutong Lu' 'Xu Chen']"
] |
null | null | 2405.17247 | null | null | http://arxiv.org/pdf/2405.17247v1 | 2024-05-27T15:01:23Z | 2024-05-27T15:01:23Z | An Introduction to Vision-Language Modeling | Following the recent popularity of Large Language Models (LLMs), several attempts have been made to extend them to the visual domain. From having a visual assistant that could guide us through unfamiliar environments to generative models that produce images using only a high-level text description, the vision-language model (VLM) applications will significantly impact our relationship with technology. However, there are many challenges that need to be addressed to improve the reliability of those models. While language is discrete, vision evolves in a much higher dimensional space in which concepts cannot always be easily discretized. To better understand the mechanics behind mapping vision to language, we present this introduction to VLMs which we hope will help anyone who would like to enter the field. First, we introduce what VLMs are, how they work, and how to train them. Then, we present and discuss approaches to evaluate VLMs. Although this work primarily focuses on mapping images to language, we also discuss extending VLMs to videos. | [
"['Florian Bordes' 'Richard Yuanzhe Pang' 'Anurag Ajay' 'Alexander C. Li'\n 'Adrien Bardes' 'Suzanne Petryk' 'Oscar Mañas' 'Zhiqiu Lin'\n 'Anas Mahmoud' 'Bargav Jayaraman' 'Mark Ibrahim' 'Melissa Hall'\n 'Yunyang Xiong' 'Jonathan Lebensold' 'Candace Ross' 'Srihari Jayakumar'\n 'Chuan Guo' 'Diane Bouchacourt' 'Haider Al-Tahan' 'Karthik Padthe'\n 'Vasu Sharma' 'Hu Xu' 'Xiaoqing Ellen Tan' 'Megan Richards'\n 'Samuel Lavoie' 'Pietro Astolfi' 'Reyhane Askari Hemmat' 'Jun Chen'\n 'Kushal Tirumala' 'Rim Assouel' 'Mazda Moayeri' 'Arjang Talattof'\n 'Kamalika Chaudhuri' 'Zechun Liu' 'Xilun Chen' 'Quentin Garrido'\n 'Karen Ullrich' 'Aishwarya Agrawal' 'Kate Saenko' 'Asli Celikyilmaz'\n 'Vikas Chandra']"
] |
null | null | 2405.17248 | null | null | http://arxiv.org/pdf/2405.17248v1 | 2024-05-27T15:03:21Z | 2024-05-27T15:03:21Z | Transformer In-Context Learning for Categorical Data | Recent research has sought to understand Transformers through the lens of in-context learning with functional data. We extend that line of work with the goal of moving closer to language models, considering categorical outcomes, nonlinear underlying models, and nonlinear attention. The contextual data are of the form $\textsf{C}=(x_1,c_1,\dots,x_N,c_{N})$ where each $c_i\in\{0,\dots,C-1\}$ is drawn from a categorical distribution that depends on covariates $x_i\in\mathbb{R}^d$. Contextual outcomes in the $m$th set of contextual data, $\textsf{C}_m$, are modeled in terms of latent function $f_m(x)\in\textsf{F}$, where $\textsf{F}$ is a functional class with $(C-1)$-dimensional vector output. The probability of observing class $c\in\{0,\dots,C-1\}$ is modeled in terms of the output components of $f_m(x)$ via the softmax. The Transformer parameters may be trained with $M$ contextual examples, $\{\textsf{C}_m\}_{m=1,M}$, and the trained model is then applied to new contextual data $\textsf{C}_{M+1}$ for new $f_{M+1}(x)\in\textsf{F}$. The goal is for the Transformer to constitute the probability of each category $c\in\{0,\dots,C-1\}$ for a new query $x_{N_{M+1}+1}$. We assume each component of $f_m(x)$ resides in a reproducing kernel Hilbert space (RKHS), specifying $\textsf{F}$. Analysis and an extensive set of experiments suggest that on its forward pass the Transformer (with attention defined by the RKHS kernel) implements a form of gradient descent of the underlying function, connected to the latent vector function associated with the softmax. We present what is believed to be the first real-world demonstration of this few-shot-learning methodology, using the ImageNet dataset. | [
"['Aaron T. Wang' 'Ricardo Henao' 'Lawrence Carin']"
] |
null | null | 2405.17253 | null | null | http://arxiv.org/abs/2405.17253v1 | 2024-05-27T15:07:57Z | 2024-05-27T15:07:57Z | Gaussian Embedding of Temporal Networks | Representing the nodes of continuous-time temporal graphs in a low-dimensional latent space has wide-ranging applications, from prediction to visualization. Yet, analyzing continuous-time relational data with timestamped interactions introduces unique challenges due to its sparsity. Merely embedding nodes as trajectories in the latent space overlooks this sparsity, emphasizing the need to quantify uncertainty around the latent positions. In this paper, we propose TGNE (\textbf{T}emporal \textbf{G}aussian \textbf{N}etwork \textbf{E}mbedding), an innovative method that bridges two distinct strands of literature: the statistical analysis of networks via Latent Space Models (LSM) \cite{Hoff2002} and temporal graph machine learning. TGNE embeds nodes as piece-wise linear trajectories of Gaussian distributions in the latent space, capturing both structural information and uncertainty around the trajectories. We evaluate TGNE's effectiveness in reconstructing the original graph and modelling uncertainty. The results demonstrate that TGNE generates competitive time-varying embedding locations compared to common baselines for reconstructing unobserved edge interactions based on observed edges. Furthermore, the uncertainty estimates align with the time-varying degree distribution in the network, providing valuable insights into the temporal dynamics of the graph. To facilitate reproducibility, we provide an open-source implementation of TGNE at \url{https://github.com/aida-ugent/tgne}. | [
"['Raphaël Romero' 'Jefrey Lijffijt' 'Riccardo Rastelli' 'Marco Corneli'\n 'Tijl De Bie']"
] |
null | null | 2405.17258 | null | null | http://arxiv.org/pdf/2405.17258v1 | 2024-05-27T15:15:08Z | 2024-05-27T15:15:08Z | $\textit{Trans-LoRA}$: towards data-free Transferable Parameter
Efficient Finetuning | Low-rank adapters (LoRA) and their variants are popular parameter-efficient fine-tuning (PEFT) techniques that closely match full model fine-tune performance while requiring only a small number of additional parameters. These additional LoRA parameters are specific to the base model being adapted. When the base model needs to be deprecated and replaced with a new one, all the associated LoRA modules need to be re-trained. Such re-training requires access to the data used to train the LoRA for the original base model. This is especially problematic for commercial cloud applications where the LoRA modules and the base models are hosted by service providers who may not be allowed to host proprietary client task data. To address this challenge, we propose $\textit{Trans-LoRA}$ -- a novel method for lossless, nearly data-free transfer of LoRAs across base models. Our approach relies on synthetic data to transfer LoRA modules. Using large language models, we design a synthetic data generator to approximate the data-generating process of the $\textit{observed}$ task data subset. Training on the resulting synthetic dataset transfers LoRA modules to new models. We show the effectiveness of our approach using both LLama and Gemma model families. Our approach achieves lossless (mostly improved) LoRA transfer between models within and across different base model families, and even between different PEFT methods, on a wide variety of tasks. | [
"['Runqian Wang' 'Soumya Ghosh' 'David Cox' 'Diego Antognini' 'Aude Oliva'\n 'Rogerio Feris' 'Leonid Karlinsky']"
] |
null | null | 2405.17260 | null | null | http://arxiv.org/pdf/2405.17260v1 | 2024-05-27T15:18:12Z | 2024-05-27T15:18:12Z | Accelerating Simulation of Two-Phase Flows with Neural PDE Surrogates | Simulation is a powerful tool to better understand physical systems, but generally requires computationally expensive numerical methods. Downstream applications of such simulations can become computationally infeasible if they require many forward solves, for example in the case of inverse design with many degrees of freedom. In this work, we investigate and extend neural PDE solvers as a tool to aid in scaling simulations for two-phase flow problems, and simulations of oil expulsion from a pore specifically. We extend existing numerical methods for this problem to a more complex setting involving varying geometries of the domain to generate a challenging dataset. Further, we investigate three prominent neural PDE solver methods, namely the UNet, DRN and U-FNO, and extend them for characteristics of the oil-expulsion problem: (1) spatial conditioning on the geometry; (2) periodicity in the boundary; (3) approximate mass conservation. We scale all methods and benchmark their speed-accuracy trade-off, evaluate qualitative properties, and perform an ablation study. We find that the investigated methods can accurately model the droplet dynamics with up to three orders of magnitude speed-up, that our extensions improve performance over the baselines, and that the introduced varying geometries constitute a significantly more challenging setting over the previously considered oil expulsion problem. | [
"['Yoeri Poels' 'Koen Minartz' 'Harshit Bansal' 'Vlado Menkovski']"
] |
null | null | 2405.17264 | null | null | http://arxiv.org/pdf/2405.17264v1 | 2024-05-27T15:22:58Z | 2024-05-27T15:22:58Z | On the Noise Robustness of In-Context Learning for Text Generation | Large language models (LLMs) have shown impressive performance on downstream tasks by in-context learning (ICL), which heavily relies on the quality of demonstrations selected from a large set of annotated examples. Recent works claim that in-context learning is robust to noisy demonstrations in text classification. In this work, we show that, on text generation tasks, noisy annotations significantly hurt the performance of in-context learning. To circumvent the issue, we propose a simple and effective approach called Local Perplexity Ranking (LPR), which replaces the "noisy" candidates with their nearest neighbors that are more likely to be clean. Our method is motivated by analyzing the perplexity deviation caused by noisy labels and decomposing perplexity into inherent perplexity and matching perplexity. Our key idea behind LPR is thus to decouple the matching perplexity by performing the ranking among the neighbors in semantic space. Our approach can prevent the selected demonstrations from including mismatched input-label pairs while preserving the effectiveness of the original selection methods. Extensive experiments demonstrate the effectiveness of LPR, improving the EM score by up to 18.75 on common benchmarks with noisy annotations. | [
"['Hongfu Gao' 'Feipeng Zhang' 'Wenyu Jiang' 'Jun Shu' 'Feng Zheng'\n 'Hongxin Wei']"
] |
null | null | 2405.17267 | null | null | http://arxiv.org/pdf/2405.17267v1 | 2024-05-27T15:25:32Z | 2024-05-27T15:25:32Z | FedHPL: Efficient Heterogeneous Federated Learning with Prompt Tuning
and Logit Distillation | Federated learning (FL) is a popular privacy-preserving paradigm that enables distributed clients to collaboratively train models with a central server while keeping raw data locally. In practice, distinct model architectures, varying data distributions, and limited resources across local clients inevitably cause model performance degradation and a slowdown in convergence speed. However, existing FL methods can only solve some of the above heterogeneous challenges and have obvious performance limitations. Notably, a unified framework has not yet been explored to overcome these challenges. Accordingly, we propose FedHPL, a parameter-efficient unified $\textbf{Fed}$erated learning framework for $\textbf{H}$eterogeneous settings based on $\textbf{P}$rompt tuning and $\textbf{L}$ogit distillation. Specifically, we employ a local prompt tuning scheme that leverages a few learnable visual prompts to efficiently fine-tune the frozen pre-trained foundation model for downstream tasks, thereby accelerating training and improving model performance under limited local resources and data heterogeneity. Moreover, we design a global logit distillation scheme to handle the model heterogeneity and guide the local training. In detail, we leverage logits to implicitly capture local knowledge and design a weighted knowledge aggregation mechanism to generate global client-specific logits. We provide a theoretical guarantee on the generalization error bound for FedHPL. The experiments on various benchmark datasets under diverse settings of models and data demonstrate that our framework outperforms state-of-the-art FL approaches, with less computation overhead and training rounds. | [
"['Yuting Ma' 'Lechao Cheng' 'Yaxiong Wang' 'Zhun Zhong' 'Xiaohua Xu'\n 'Meng Wang']"
] |
null | null | 2405.17272 | null | null | http://arxiv.org/pdf/2405.17272v2 | 2024-06-06T10:30:24Z | 2024-05-27T15:33:16Z | DPN: Decoupling Partition and Navigation for Neural Solvers of Min-max
Vehicle Routing Problems | The min-max vehicle routing problem (min-max VRP) traverses all given customers by assigning several routes and aims to minimize the length of the longest route. Recently, reinforcement learning (RL)-based sequential planning methods have exhibited advantages in solving efficiency and optimality. However, these methods fail to exploit the problem-specific properties in learning representations, resulting in less effective features for decoding optimal routes. This paper considers the sequential planning process of min-max VRPs as two coupled optimization tasks: customer partition for different routes and customer navigation in each route (i.e., partition and navigation). To effectively process min-max VRP instances, we present a novel attention-based Partition-and-Navigation encoder (P&N Encoder) that learns distinct embeddings for partition and navigation. Furthermore, we utilize an inherent symmetry in decoding routes and develop an effective agent-permutation-symmetric (APS) loss function. Experimental results demonstrate that the proposed Decoupling-Partition-Navigation (DPN) method significantly surpasses existing learning-based methods in both single-depot and multi-depot min-max VRPs. Our code is available at | [
"['Zhi Zheng' 'Shunyu Yao' 'Zhenkun Wang' 'Xialiang Tong' 'Mingxuan Yuan'\n 'Ke Tang']"
] |
null | null | 2405.17277 | null | null | http://arxiv.org/pdf/2405.17277v1 | 2024-05-27T15:39:45Z | 2024-05-27T15:39:45Z | Gradients of Functions of Large Matrices | Tuning scientific and probabilistic machine learning models -- for example, partial differential equations, Gaussian processes, or Bayesian neural networks -- often relies on evaluating functions of matrices whose size grows with the data set or the number of parameters. While the state-of-the-art for evaluating these quantities is almost always based on Lanczos and Arnoldi iterations, the present work is the first to explain how to differentiate these workhorses of numerical linear algebra efficiently. To get there, we derive previously unknown adjoint systems for Lanczos and Arnoldi iterations, implement them in JAX, and show that the resulting code can compete with Diffrax when it comes to differentiating PDEs and with GPyTorch for selecting Gaussian process models, and beats standard factorisation methods for calibrating Bayesian neural networks. All this is achieved without any problem-specific code optimisation. Find the code at https://github.com/pnkraemer/experiments-lanczos-adjoints and install the library with pip install matfree. | [
"['Nicholas Krämer' 'Pablo Moreno-Muñoz' 'Hrittik Roy' 'Søren Hauberg']"
] |
null | null | 2405.17282 | null | null | http://arxiv.org/pdf/2405.17282v1 | 2024-05-27T15:46:52Z | 2024-05-27T15:46:52Z | R-ODE: Ricci Curvature Tells When You Will be Informed | Information diffusion prediction is fundamental to understanding the structure and organization of online social networks, and plays a crucial role in blocking rumor spread, influence maximization, political propaganda, etc. So far, most existing solutions primarily predict the next user who will be informed with historical cascades, but ignore an important factor in the diffusion process - the time. Such limitation motivates us to pose the problem of the time-aware personalized information diffusion prediction for the first time, telling the time when the target user will be informed. In this paper, we address this problem from a fresh geometric perspective of Ricci curvature, and propose a novel Ricci-curvature regulated Ordinary Differential Equation (R-ODE). In the diffusion process, R-ODE considers that the inter-correlated users are organized in a dynamic system in the representation space, and the cascades give the observations sampled from the continuous realm. At each infection time, the message diffuses along the largest Ricci curvature, signifying less transportation effort. In the continuous realm, the message triggers users' movement, whose trajectory in the space is parameterized by an ODE with a graph neural network. Consequently, R-ODE predicts the infection time of a target user by the movement trajectory learnt from the observations. Extensive experiments evaluate the personalized time prediction ability of R-ODE, and show R-ODE outperforms the state-of-the-art baselines. | [
"['Li Sun' 'Jingbin Hu' 'Mengjie Li' 'Hao Peng']"
] |
null | null | 2405.17283 | null | null | http://arxiv.org/pdf/2405.17283v2 | 2024-05-28T12:06:28Z | 2024-05-27T15:47:03Z | Recurrent Complex-Weighted Autoencoders for Unsupervised Object
Discovery | Current state-of-the-art synchrony-based models encode object bindings with complex-valued activations and compute with real-valued weights in feedforward architectures. We argue for the computational advantages of a recurrent architecture with complex-valued weights. We propose a fully convolutional autoencoder, SynCx, that performs iterative constraint satisfaction: at each iteration, a hidden layer bottleneck encodes statistically regular configurations of features in particular phase relationships; over iterations, local constraints propagate and the model converges to a globally consistent configuration of phase assignments. Binding is achieved simply by the matrix-vector product operation between complex-valued weights and activations, without the need for additional mechanisms that have been incorporated into current synchrony-based models. SynCx outperforms or is strongly competitive with current models for unsupervised object discovery. SynCx also avoids certain systematic grouping errors of current models, such as the inability to separate similarly colored objects without additional supervision. | [
"['Anand Gopalakrishnan' 'Aleksandar Stanić' 'Jürgen Schmidhuber'\n 'Michael Curtis Mozer']"
] |
null | null | 2405.17287 | null | null | http://arxiv.org/pdf/2405.17287v1 | 2024-05-27T15:52:27Z | 2024-05-27T15:52:27Z | Opinion-Guided Reinforcement Learning | Human guidance is often desired in reinforcement learning to improve the performance of the learning agent. However, human insights are often mere opinions and educated guesses rather than well-formulated arguments. While opinions are subject to uncertainty, e.g., due to partial informedness or ignorance about a problem, they also emerge earlier than hard evidence could be produced. Thus, guiding reinforcement learning agents through opinions offers the potential for more performant learning processes, but comes with the challenge of modeling and managing opinions in a formal way. In this article, we present a method to guide reinforcement learning agents through opinions. To this end, we provide an end-to-end method to model and manage advisors' opinions. To assess the utility of the approach, we evaluate it with synthetic and human advisors, at different levels of uncertainty, and under multiple advice strategies. Our results indicate that opinions, even if uncertain, improve the performance of reinforcement learning agents, resulting in higher rewards, more efficient exploration, and a better reinforced policy. Although we demonstrate our approach in a simplified topological running example, our approach is applicable to complex problems with higher dimensions as well. | [
"['Kyanna Dagenais' 'Istvan David']"
] |
null | null | 2405.17293 | null | null | http://arxiv.org/pdf/2405.17293v1 | 2024-05-27T15:58:34Z | 2024-05-27T15:58:34Z | Efficient Ensembles Improve Training Data Attribution | Training data attribution (TDA) methods aim to quantify the influence of individual training data points on the model predictions, with broad applications in data-centric AI, such as mislabel detection, data selection, and copyright compensation. However, existing methods in this field, which can be categorized as retraining-based and gradient-based, have struggled with the trade-off between computational efficiency and attribution efficacy. Retraining-based methods can accurately attribute complex non-convex models but are computationally prohibitive, while gradient-based methods are efficient but often fail for non-convex models. Recent research has shown that augmenting gradient-based methods with ensembles of multiple independently trained models can achieve significantly better attribution efficacy. However, this approach remains impractical for very large-scale applications. In this work, we discover that expensive, fully independent training is unnecessary for ensembling the gradient-based methods, and we propose two efficient ensemble strategies, DROPOUT ENSEMBLE and LORA ENSEMBLE, as alternatives to the naive independent ensemble. These strategies significantly reduce training time (up to 80%), serving time (up to 60%), and space cost (up to 80%) while maintaining similar attribution efficacy to the naive independent ensemble. Our extensive experimental results demonstrate that the proposed strategies are effective across multiple TDA methods on diverse datasets and models, including generative settings, significantly advancing the Pareto frontier of TDA methods with better computational efficiency and attribution efficacy. | [
"['Junwei Deng' 'Ting-Wei Li' 'Shichang Zhang' 'Jiaqi Ma']"
] |
null | null | 2405.17299 | null | null | http://arxiv.org/pdf/2405.17299v1 | 2024-05-27T16:00:45Z | 2024-05-27T16:00:45Z | Simplicity Bias of Two-Layer Networks beyond Linearly Separable Data | Simplicity bias, the propensity of deep models to over-rely on simple features, has been identified as a potential reason for limited out-of-distribution generalization of neural networks (Shah et al., 2020). Despite the important implications, this phenomenon has been theoretically confirmed and characterized only under strong dataset assumptions, such as linear separability (Lyu et al., 2021). In this work, we characterize simplicity bias for general datasets in the context of two-layer neural networks initialized with small weights and trained with gradient flow. Specifically, we prove that in the early training phases, network features cluster around a few directions that do not depend on the size of the hidden layer. Furthermore, for datasets with an XOR-like pattern, we precisely identify the learned features and demonstrate that simplicity bias intensifies during later training stages. These results indicate that features learned in the middle stages of training may be more useful for OOD transfer. We support this hypothesis with experiments on image data. | [
"['Nikita Tsoy' 'Nikola Konstantinov']"
] |
null | null | 2405.17309 | null | null | http://arxiv.org/pdf/2405.17309v1 | 2024-05-27T16:10:49Z | 2024-05-27T16:10:49Z | Survey of Graph Neural Network for Internet of Things and NextG Networks | The exponential increase in Internet of Things (IoT) devices coupled with 6G pushing towards higher data rates and connected devices has sparked a surge in data. Consequently, harnessing the full potential of data-driven machine learning has become one of the important thrusts. In addition to the advancement in wireless technology, it is important to efficiently use the resources available and meet the users' requirements. Graph Neural Networks (GNNs) have emerged as a promising paradigm for effectively modeling data that inherently exhibits complex network structures and extracting insights from it, owing to their high performance and accuracy, scalability, adaptability, and resource efficiency. There is a lack of a comprehensive survey that focuses on the applications and advances GNNs have made in the context of IoT and Next Generation (NextG) networks. To bridge that gap, this survey starts by providing a detailed description of GNN's terminologies, architecture, and the different types of GNNs. Then we provide a comprehensive survey of the advancements in applying GNNs for IoT from the perspective of data fusion and intrusion detection. Thereafter, we survey the impact GNN has made in improving spectrum awareness. Next, we provide a detailed account of how GNN has been leveraged for networking and tactical systems. Through this survey, we aim to provide a comprehensive resource for researchers to learn more about GNN in the context of wireless networks, and understand its state-of-the-art use cases while contrasting them with other machine learning approaches. Finally, we also discuss the challenges and a wide range of future research directions to further motivate the use of GNN for IoT and NextG Networks. | [
"['Sabarish Krishna Moorthy' 'Jithin Jagannath']"
] |
null | null | 2405.17311 | null | null | http://arxiv.org/pdf/2405.17311v2 | 2024-06-07T12:40:44Z | 2024-05-27T16:11:49Z | Probabilistic Graph Rewiring via Virtual Nodes | Message-passing graph neural networks (MPNNs) have emerged as a powerful paradigm for graph-based machine learning. Despite their effectiveness, MPNNs face challenges such as under-reaching and over-squashing, where limited receptive fields and structural bottlenecks hinder information flow in the graph. While graph transformers hold promise in addressing these issues, their scalability is limited due to quadratic complexity regarding the number of nodes, rendering them impractical for larger graphs. Here, we propose implicitly rewired message-passing neural networks (IPR-MPNNs), a novel approach that integrates implicit probabilistic graph rewiring into MPNNs. By introducing a small number of virtual nodes, i.e., adding additional nodes to a given graph and connecting them to existing nodes, in a differentiable, end-to-end manner, IPR-MPNNs enable long-distance message propagation, circumventing quadratic complexity. Theoretically, we demonstrate that IPR-MPNNs surpass the expressiveness of traditional MPNNs. Empirically, we validate our approach by showcasing its ability to mitigate under-reaching and over-squashing effects, achieving state-of-the-art performance across multiple graph datasets. Notably, IPR-MPNNs outperform graph transformers while maintaining significantly faster computational efficiency. | [
"['Chendi Qian' 'Andrei Manolache' 'Christopher Morris' 'Mathias Niepert']"
] |
null | null | 2405.17324 | null | null | http://arxiv.org/pdf/2405.17324v1 | 2024-05-27T16:23:34Z | 2024-05-27T16:23:34Z | Leveraging Offline Data in Linear Latent Bandits | Sequential decision-making domains such as recommender systems, healthcare and education often have unobserved heterogeneity in the population that can be modeled using latent bandits $-$ a framework where an unobserved latent state determines the model for a trajectory. While the latent bandit framework is compelling, the extent of its generality is unclear. We first address this by establishing a de Finetti theorem for decision processes, and show that $\textit{every}$ exchangeable and coherent stateless decision process is a latent bandit. The latent bandit framework lends itself particularly well to online learning with offline datasets, a problem of growing interest in sequential decision-making. One can leverage offline latent bandit data to learn a complex model for each latent state, so that an agent can simply learn the latent state online to act optimally. We focus on a linear model for a latent bandit with $d_A$-dimensional actions, where the latent states lie in an unknown $d_K$-dimensional subspace for $d_K \ll d_A$. We present SOLD, a novel principled method to learn this subspace from short offline trajectories with guarantees. We then provide two methods to leverage this subspace online: LOCAL-UCB and ProBALL-UCB. We demonstrate that LOCAL-UCB enjoys $\tilde O(\min(d_A\sqrt{T}, d_K\sqrt{T}(1+\sqrt{d_AT/d_KN})))$ regret guarantees, where the effective dimension is lower when the size $N$ of the offline dataset is larger. ProBALL-UCB enjoys a slightly weaker guarantee, but is more practical and computationally efficient. Finally, we establish the efficacy of our methods using experiments on both synthetic data and real-life movie recommendation data from MovieLens. | [
"['Chinmaya Kausik' 'Kevin Tan' 'Ambuj Tewari']"
] |
null | null | 2405.17325 | null | null | http://arxiv.org/pdf/2405.17325v1 | 2024-05-27T16:23:50Z | 2024-05-27T16:23:50Z | Novel Approaches for ML-Assisted Particle Track Reconstruction and Hit
Clustering | Track reconstruction is a vital aspect of High-Energy Physics (HEP) and plays a critical role in major experiments. In this study, we delve into unexplored avenues for particle track reconstruction and hit clustering. Firstly, we enhance the algorithmic design effort by utilising a simplified simulator (REDVID) to generate training data that is specifically composed for simplicity. We demonstrate the effectiveness of this data in guiding the development of optimal network architectures. Additionally, we investigate the application of image segmentation networks for this task, exploring their potential for accurate track reconstruction. Moreover, we approach the task from a different perspective by treating it as a hit sequence to track sequence translation problem. Specifically, we explore the utilisation of Transformer architectures for tracking purposes. Our preliminary findings are covered in detail. By considering this novel approach, we aim to uncover new insights and potential advancements in track reconstruction. This research sheds light on previously unexplored methods and provides valuable insights for the field of particle track reconstruction and hit clustering in HEP. | [
"['Uraz Odyurt' 'Nadezhda Dobreva' 'Zef Wolffs' 'Yue Zhao'\n 'Antonio Ferrer Sánchez' 'Roberto Ruiz de Austri Bazan'\n 'José D. Martín-Guerrero' 'Ana-Lucia Varbanescu' 'Sascha Caron']"
] |
null | null | 2405.17333 | null | null | http://arxiv.org/pdf/2405.17333v1 | 2024-05-27T16:34:18Z | 2024-05-27T16:34:18Z | Conditioning on Time is All You Need for Synthetic Survival Data
Generation | Synthetic data generation holds considerable promise, offering avenues to enhance privacy, fairness, and data accessibility. Despite the availability of various methods for generating synthetic tabular data, challenges persist, particularly in specialized applications such as survival analysis. One significant obstacle in survival data generation is censoring, which manifests as not knowing the precise timing of observed (target) events for certain instances. Existing methods face difficulties in accurately reproducing the real distribution of event times for both observed (uncensored) events and censored events, i.e., the generated event-time distributions do not accurately match the underlying distributions of the real data. So motivated, we propose a simple paradigm to produce synthetic survival data by generating covariates conditioned on event times (and censoring indicators), thus allowing one to reuse existing conditional generative models for tabular data without significant computational overhead, and without making assumptions about the (usually unknown) generation mechanism underlying censoring. We evaluate this method via extensive experiments on real-world datasets. Our methodology outperforms multiple competitive baselines at generating survival data, while improving the performance of downstream survival models trained on it and tested on real data. | [
"['Mohd Ashhad' 'Ricardo Henao']"
] |
null | null | 2405.17339 | null | null | http://arxiv.org/pdf/2405.17339v1 | 2024-05-27T16:42:51Z | 2024-05-27T16:42:51Z | Physics-Informed Real NVP for Satellite Power System Fault Detection | The unique challenges posed by the space environment, characterized by extreme conditions and limited accessibility, raise the need for robust and reliable techniques to identify and prevent satellite faults. Fault detection methods in the space sector are required to ensure mission success and to protect valuable assets. In this context, this paper proposes an Artificial Intelligence (AI) based fault detection methodology and evaluates its performance on ADAPT (Advanced Diagnostics and Prognostics Testbed), an Electrical Power System (EPS) dataset crafted in the laboratory by NASA. Our study focuses on the application of a physics-informed (PI) real-valued non-volume preserving (Real NVP) model for fault detection in space systems. The efficacy of this method is systematically compared against other AI approaches such as Gated Recurrent Unit (GRU) and Autoencoder-based techniques. Results show that our physics-informed approach outperforms existing methods of fault detection, demonstrating its suitability for addressing the unique challenges of satellite EPS sub-system faults. Furthermore, we unveil the competitive advantage of physics-informed loss in AI models to address specific space needs, namely robustness, reliability, and power constraints, crucial for space exploration and satellite missions. | [
"['Carlo Cena' 'Umberto Albertin' 'Mauro Martini' 'Silvia Bucci'\n 'Marcello Chiaberge']"
] |
null | null | 2405.17346 | null | null | http://arxiv.org/pdf/2405.17346v1 | 2024-05-27T16:49:29Z | 2024-05-27T16:49:29Z | Prompt Optimization with Human Feedback | Large language models (LLMs) have demonstrated remarkable performance in various tasks. However, the performance of LLMs heavily depends on the input prompt, which has given rise to a number of recent works on prompt optimization. Yet, previous works often require the availability of a numeric score to assess the quality of every prompt. Unfortunately, when a human user interacts with a black-box LLM, attaining such a score is often infeasible and unreliable. Instead, it is usually significantly easier and more reliable to obtain preference feedback from a human user, i.e., showing the user the responses generated from a pair of prompts and asking the user which one is preferred. Therefore, in this paper, we study the problem of prompt optimization with human feedback (POHF), in which we aim to optimize the prompt for a black-box LLM using only human preference feedback. Drawing inspiration from dueling bandits, we design a theoretically principled strategy to select a pair of prompts to query for preference feedback in every iteration, and hence introduce our algorithm named automated POHF (APOHF). We apply our APOHF algorithm to various tasks, including optimizing user instructions, prompt optimization for text-to-image generative models, and response optimization with human feedback (i.e., further refining the response using a variant of our APOHF). The results demonstrate that our APOHF can efficiently find a good prompt using a small number of preference feedback instances. Our code can be found at \url{https://github.com/xqlin98/APOHF}. | [
"['Xiaoqiang Lin' 'Zhongxiang Dai' 'Arun Verma' 'See-Kiong Ng'\n 'Patrick Jaillet' 'Bryan Kian Hsiang Low']"
] |
null | null | 2405.17352 | null | null | http://arxiv.org/pdf/2405.17352v1 | 2024-05-27T16:55:48Z | 2024-05-27T16:55:48Z | Assessing the significance of longitudinal data in Alzheimer's Disease
forecasting | In this study, we employ a transformer encoder model to characterize the significance of longitudinal patient data for forecasting the progression of Alzheimer's Disease (AD). Our model, Longitudinal Forecasting Model for Alzheimer's Disease (LongForMAD), harnesses the comprehensive temporal information embedded in sequences of patient visits that incorporate multimodal data, providing a deeper understanding of disease progression than can be drawn from single-visit data alone. We present an empirical analysis across two patient groups, Cognitively Normal (CN) and Mild Cognitive Impairment (MCI), over a span of five follow-up years. Our findings reveal that models incorporating more extended patient histories can outperform those relying solely on present information, suggesting a deeper historical context is critical in enhancing predictive accuracy for future AD progression. Our results support the incorporation of longitudinal data in clinical settings to enhance the early detection and monitoring of AD. Our code is available at \url{https://github.com/batuhankmkaraman/LongForMAD}. | [
"['Batuhan K. Karaman' 'Mert R. Sabuncu']"
] |
null | null | 2405.17358 | null | null | http://arxiv.org/pdf/2405.17358v3 | 2024-05-30T07:54:40Z | 2024-05-27T17:02:35Z | Rethinking Transformers in Solving POMDPs | Sequential decision-making algorithms such as reinforcement learning (RL) in real-world scenarios inevitably face environments with partial observability. This paper scrutinizes the effectiveness of a popular architecture, namely Transformers, in Partially Observable Markov Decision Processes (POMDPs) and reveals its theoretical limitations. We establish that regular languages, which Transformers struggle to model, are reducible to POMDPs. This poses a significant challenge for Transformers in learning POMDP-specific inductive biases, due to their lack of inherent recurrence found in other models like RNNs. This paper casts doubt on the prevalent belief in Transformers as sequence models for RL and proposes to introduce a point-wise recurrent structure. The Deep Linear Recurrent Unit (LRU) emerges as a well-suited alternative for Partially Observable RL, with empirical results highlighting the sub-optimal performance of the Transformer and considerable strength of LRU. | [
"['Chenhao Lu' 'Ruizhe Shi' 'Yuyao Liu' 'Kaizhe Hu' 'Simon S. Du'\n 'Huazhe Xu']"
] |
null | null | 2405.17366 | null | null | http://arxiv.org/pdf/2405.17366v1 | 2024-05-27T17:19:02Z | 2024-05-27T17:19:02Z | EM-GANSim: Real-time and Accurate EM Simulation Using Conditional GANs
for 3D Indoor Scenes | We present a novel machine-learning (ML) approach (EM-GANSim) for real-time electromagnetic (EM) propagation that is used for wireless communication simulation in 3D indoor environments. Our approach uses a modified conditional Generative Adversarial Network (GAN) that incorporates encoded geometry and transmitter location while adhering to the electromagnetic propagation theory. The overall physically-inspired learning is able to predict the power distribution in 3D scenes, which is represented using heatmaps. Our overall accuracy is comparable to ray tracing-based EM simulation, as evidenced by lower mean squared error values. Furthermore, our GAN-based method drastically reduces the computation time, achieving a 5X speedup on complex benchmarks. In practice, it can compute the signal strength in a few milliseconds on any location in 3D indoor environments. We also present a large dataset of 3D models and EM ray tracing-simulated heatmaps. To the best of our knowledge, EM-GANSim is the first real-time algorithm for EM simulation in complex 3D indoor environments. We plan to release the code and the dataset. | [
"['Ruichen Wang' 'Dinesh Manocha']"
] |
null | null | 2405.17370 | null | null | http://arxiv.org/pdf/2405.17370v1 | 2024-05-27T17:26:36Z | 2024-05-27T17:26:36Z | Model-Agnostic Zeroth-Order Policy Optimization for Meta-Learning of
Ergodic Linear Quadratic Regulators | Meta-learning has been proposed as a promising machine learning topic in recent years, with important applications to image classification, robotics, computer games, and control systems. In this paper, we study the problem of using meta-learning to deal with uncertainty and heterogeneity in ergodic linear quadratic regulators. We integrate the zeroth-order optimization technique with a typical meta-learning method, proposing an algorithm that omits the estimation of the policy Hessian, which applies to tasks of learning a set of heterogeneous but similar linear dynamic systems. The induced meta-objective function inherits important properties of the original cost function when the set of linear dynamic systems is meta-learnable, allowing the algorithm to optimize over a learnable landscape without projection onto the feasible set. We provide a convergence result for the exact gradient descent process by analyzing the boundedness and smoothness of the gradient for the meta-objective, which justifies the proposed algorithm when the gradient estimation error is small. We also provide a numerical example to corroborate this perspective. | [
"['Yunian Pan' 'Quanyan Zhu']"
] |
null | null | 2405.17372 | null | null | http://arxiv.org/pdf/2405.17372v1 | 2024-05-27T17:28:25Z | 2024-05-27T17:28:25Z | BehaviorGPT: Smart Agent Simulation for Autonomous Driving with
Next-Patch Prediction | Simulating realistic interactions among traffic agents is crucial for efficiently validating the safety of autonomous driving systems. Existing leading simulators primarily use an encoder-decoder structure to encode the historical trajectories for future simulation. However, such a paradigm complicates the model architecture, and the manual separation of history and future trajectories leads to low data utilization. To address these challenges, we propose Behavior Generative Pre-trained Transformers (BehaviorGPT), a decoder-only, autoregressive architecture designed to simulate the sequential motion of multiple agents. Crucially, our approach discards the traditional separation between "history" and "future," treating each time step as the "current" one, resulting in a simpler, more parameter- and data-efficient design that scales seamlessly with data and computation. Additionally, we introduce the Next-Patch Prediction Paradigm (NP3), which enables models to reason at the patch level of trajectories and capture long-range spatial-temporal interactions. BehaviorGPT ranks first across several metrics on the Waymo Sim Agents Benchmark, demonstrating its exceptional performance in multi-agent and agent-map interactions. We outperformed state-of-the-art models with a realism score of 0.741 and improved the minADE metric to 1.540, with an approximately 91.6% reduction in model parameters. | [
"['Zikang Zhou' 'Haibo Hu' 'Xinhong Chen' 'Jianping Wang' 'Nan Guan'\n 'Kui Wu' 'Yung-Hui Li' 'Yu-Kai Huang' 'Chun Jason Xue']"
] |
null | null | 2405.17374 | null | null | http://arxiv.org/pdf/2405.17374v2 | 2024-05-28T04:58:52Z | 2024-05-27T17:31:56Z | Navigating the Safety Landscape: Measuring Risks in Finetuning Large
Language Models | Safety alignment is the key to guiding the behaviors of large language models (LLMs) to be in line with human preferences and to restricting harmful behaviors at inference time, but recent studies show that it can be easily compromised by finetuning with only a few adversarially designed training examples. We aim to measure the risks in finetuning LLMs by navigating the LLM safety landscape. We discover a new phenomenon observed universally in the model parameter space of popular open-source LLMs, termed "safety basin": randomly perturbing model weights maintains the safety level of the original aligned model in its local neighborhood. Our discovery inspires us to propose the new VISAGE safety metric that measures the safety in LLM finetuning by probing its safety landscape. Visualizing the safety landscape of the aligned model enables us to understand how finetuning compromises safety by dragging the model away from the safety basin. The LLM safety landscape also highlights the system prompt's critical role in protecting a model, and that such protection transfers to its perturbed variants within the safety basin. These observations from our safety landscape research provide new insights for future work in the LLM safety community. | [
"['ShengYun Peng' 'Pin-Yu Chen' 'Matthew Hull' 'Duen Horng Chau']"
] |
null | null | 2405.17377 | null | null | http://arxiv.org/pdf/2405.17377v1 | 2024-05-27T17:33:03Z | 2024-05-27T17:33:03Z | How Does Perfect Fitting Affect Representation Learning? On the Training
Dynamics of Representations in Deep Neural Networks | In this paper, we elucidate how representations in deep neural networks (DNNs) evolve during training. We focus on overparameterized learning settings where the training continues much after the trained DNN starts to perfectly fit its training data. We examine the evolution of learned representations along the entire training process, including its perfect fitting regime, and with respect to the epoch-wise double descent phenomenon. We explore the representational similarity of DNN layers, each layer with respect to its own representations throughout the training process. For this, we use two similarity metrics: (1) The centered kernel alignment (CKA) similarity; (2) Similarity of decision regions of linear classifier probes that we train for the DNN layers. Our extensive experiments discover training dynamics patterns that can emerge in layers depending on the relative layer-depth, DNN width, and architecture. We show that representations at the deeper layers evolve much more in the training when an epoch-wise double descent occurs. For Vision Transformer, we show that the perfect fitting threshold creates a transition in the evolution of representations across all the encoder blocks. | [
"['Yuval Sharon' 'Yehuda Dar']"
] |
null | null | 2405.17378 | null | null | http://arxiv.org/pdf/2405.17378v1 | 2024-05-27T17:36:01Z | 2024-05-27T17:36:01Z | RTL-Repo: A Benchmark for Evaluating LLMs on Large-Scale RTL Design
Projects | Large Language Models (LLMs) have demonstrated potential in assisting with Register Transfer Level (RTL) design tasks. Nevertheless, there remains a significant gap in benchmarks that accurately reflect the complexity of real-world RTL projects. To address this, this paper presents RTL-Repo, a benchmark specifically designed to evaluate LLMs on large-scale RTL design projects. RTL-Repo includes a comprehensive dataset of more than 4000 Verilog code samples extracted from public GitHub repositories, with each sample providing the full context of the corresponding repository. We evaluate several state-of-the-art models on the RTL-Repo benchmark, including GPT-4, GPT-3.5, and Starcoder2, alongside Verilog-specific models like VeriGen and RTLCoder, and compare their performance in generating Verilog code for complex projects. The RTL-Repo benchmark provides a valuable resource for the hardware design community to assess and compare LLMs' performance in real-world RTL design scenarios and train LLMs specifically for Verilog code generation in complex, multi-file RTL projects. RTL-Repo is open-source and publicly available on GitHub. | [
"['Ahmed Allam' 'Mohamed Shalan']"
] |
null | null | 2405.17382 | null | null | http://arxiv.org/pdf/2405.17382v1 | 2024-05-27T17:38:33Z | 2024-05-27T17:38:33Z | ReMoDetect: Reward Models Recognize Aligned LLM's Generations | The remarkable capabilities and easy accessibility of large language models (LLMs) have significantly increased societal risks (e.g., fake news generation), necessitating the development of LLM-generated text (LGT) detection methods for safe usage. However, detecting LGTs is challenging due to the vast number of LLMs, making it impractical to account for each LLM individually; hence, it is crucial to identify the common characteristics shared by these models. In this paper, we draw attention to a common feature of recent powerful LLMs, namely alignment training, i.e., training LLMs to generate human-preferable texts. Our key finding is that as these aligned LLMs are trained to maximize human preferences, they generate texts with even higher estimated preferences than human-written texts; thus, such texts are easily detected by using the reward model (i.e., an LLM trained to model human preference distribution). Based on this finding, we propose two training schemes to further improve the detection ability of the reward model, namely (i) continual preference fine-tuning to make the reward model prefer aligned LGTs even further and (ii) reward modeling of Human/LLM mixed texts (texts rephrased from human-written texts using aligned LLMs), which serves as a median preference text corpus between LGTs and human-written texts to learn the decision boundary better. We provide an extensive evaluation by considering six text domains across twelve aligned LLMs, where our method demonstrates state-of-the-art results. Code is available at https://github.com/hyunseoklee-ai/reward_llm_detect. | [
"['Hyunseok Lee' 'Jihoon Tack' 'Jinwoo Shin']"
] |
null | null | 2405.17391 | null | null | http://arxiv.org/pdf/2405.17391v1 | 2024-05-27T17:44:33Z | 2024-05-27T17:44:33Z | Dataset-learning duality and emergent criticality | In artificial neural networks, the activation dynamics of non-trainable variables is strongly coupled to the learning dynamics of trainable variables. During the activation pass, the boundary neurons (e.g., input neurons) are mapped to the bulk neurons (e.g., hidden neurons), and during the learning pass, both bulk and boundary neurons are mapped to changes in trainable variables (e.g., weights and biases). For example, in feed-forward neural networks, forward propagation is the activation pass and backward propagation is the learning pass. We show that a composition of the two maps establishes a duality map between a subspace of non-trainable boundary variables (e.g., dataset) and a tangent subspace of trainable variables (i.e., learning). In general, the dataset-learning duality is a complex non-linear map between high-dimensional spaces, but in a learning equilibrium, the problem can be linearized and reduced to many weakly coupled one-dimensional problems. We use the duality to study the emergence of criticality, or the power-law distributions of fluctuations of the trainable variables. In particular, we show that criticality can emerge in the learning system even from the dataset in a non-critical state, and that the power-law distribution can be modified by changing either the activation function or the loss function. | [
"['Ekaterina Kukleva' 'Vitaly Vanchurin']"
] |
null | null | 2405.17394 | null | null | http://arxiv.org/pdf/2405.17394v2 | 2024-06-02T19:43:55Z | 2024-05-27T17:46:57Z | The Expressive Capacity of State Space Models: A Formal Language
Perspective | Recently, recurrent models based on linear state space models (SSMs) have shown promising performance in language modeling (LM), competitive with transformers. However, there is little understanding of the in-principle abilities of such models, which could provide useful guidance to the search for better LM architectures. We present a comprehensive theoretical study of the capacity of such SSMs as it compares to that of transformers and traditional RNNs. We find that SSMs and transformers have overlapping but distinct strengths. In star-free state tracking, SSMs implement straightforward and exact solutions to problems that transformers struggle to represent exactly. They can also model bounded hierarchical structure with optimal memory even without simulating a stack. On the other hand, we identify a design choice in current SSMs that limits their expressive power. We discuss implications for SSM and LM research, and verify results empirically on a recent SSM, Mamba. | [
"['Yash Sarrof' 'Yana Veitsman' 'Michael Hahn']"
] |
null | null | 2405.17399 | null | null | http://arxiv.org/pdf/2405.17399v1 | 2024-05-27T17:49:18Z | 2024-05-27T17:49:18Z | Transformers Can Do Arithmetic with the Right Embeddings | The poor performance of transformers on arithmetic tasks seems to stem in large part from their inability to keep track of the exact position of each digit inside of a large span of digits. We mend this problem by adding an embedding to each digit that encodes its position relative to the start of the number. In addition to the boost these embeddings provide on their own, we show that this fix enables architectural modifications such as input injection and recurrent layers to improve performance even further. With positions resolved, we can study the logical extrapolation ability of transformers. Can they solve arithmetic problems that are larger and more complex than those in their training data? We find that by training on only 20 digit numbers with a single GPU for one day, we can reach state-of-the-art performance, achieving up to 99% accuracy on 100 digit addition problems. Finally, we show that these gains in numeracy also unlock improvements on other multi-step reasoning tasks including sorting and multiplication. | [
"['Sean McLeish' 'Arpit Bansal' 'Alex Stein' 'Neel Jain'\n 'John Kirchenbauer' 'Brian R. Bartoldson' 'Bhavya Kailkhura'\n 'Abhinav Bhatele' 'Jonas Geiping' 'Avi Schwarzschild' 'Tom Goldstein']"
] |
null | null | 2405.17401 | null | null | http://arxiv.org/pdf/2405.17401v1 | 2024-05-27T17:51:08Z | 2024-05-27T17:51:08Z | RB-Modulation: Training-Free Personalization of Diffusion Models using
Stochastic Optimal Control | We propose Reference-Based Modulation (RB-Modulation), a new plug-and-play solution for training-free personalization of diffusion models. Existing training-free approaches exhibit difficulties in (a) style extraction from reference images in the absence of additional style or content text descriptions, (b) unwanted content leakage from reference style images, and (c) effective composition of style and content. RB-Modulation is built on a novel stochastic optimal controller where a style descriptor encodes the desired attributes through a terminal cost. The resulting drift not only overcomes the difficulties above, but also ensures high fidelity to the reference style and adheres to the given text prompt. We also introduce a cross-attention-based feature aggregation scheme that allows RB-Modulation to decouple content and style from the reference image. With theoretical justification and empirical evidence, our framework demonstrates precise extraction and control of content and style in a training-free manner. Further, our method allows a seamless composition of content and style, which marks a departure from the dependency on external adapters or ControlNets. | [
"['Litu Rout' 'Yujia Chen' 'Nataniel Ruiz' 'Abhishek Kumar'\n 'Constantine Caramanis' 'Sanjay Shakkottai' 'Wen-Sheng Chu']"
] |
null | null | 2405.17403 | null | null | http://arxiv.org/pdf/2405.17403v1 | 2024-05-27T17:51:36Z | 2024-05-27T17:51:36Z | A Closer Look at Time Steps is Worthy of Triple Speed-Up for Diffusion
Model Training | Training diffusion models is always a computation-intensive task. In this paper, we introduce a novel speed-up method for diffusion model training, called SpeeD, which is based on a closer look at time steps. Our key findings are: i) Time steps can be empirically divided into acceleration, deceleration, and convergence areas based on the process increment. ii) These time steps are imbalanced, with many concentrated in the convergence area. iii) The concentrated steps provide limited benefits for diffusion training. To address this, we design an asymmetric sampling strategy that reduces the frequency of steps from the convergence area while increasing the sampling probability for steps from other areas. Additionally, we propose a weighting strategy to emphasize the importance of time steps with rapid-change process increments. As a plug-and-play and architecture-agnostic approach, SpeeD consistently achieves 3-times acceleration across various diffusion architectures, datasets, and tasks. Notably, due to its simple design, our approach significantly reduces the cost of diffusion model training with minimal overhead. Our research enables more researchers to train diffusion models at a lower cost. | [
"['Kai Wang' 'Yukun Zhou' 'Mingjia Shi' 'Zhihang Yuan' 'Yuzhang Shang'\n 'Xiaojiang Peng' 'Hanwang Zhang' 'Yang You']"
] |
null | null | 2405.17404 | null | null | http://arxiv.org/pdf/2405.17404v1 | 2024-05-27T17:52:12Z | 2024-05-27T17:52:12Z | Spectral Greedy Coresets for Graph Neural Networks | The ubiquity of large-scale graphs in node-classification tasks significantly hinders the real-world applications of Graph Neural Networks (GNNs). Node sampling, graph coarsening, and dataset condensation are effective strategies for enhancing data efficiency. However, owing to the interdependence of graph nodes, coreset selection, which selects subsets of the data examples, has not been successfully applied to speed up GNN training on large graphs, warranting special treatment. This paper studies graph coresets for GNNs and avoids the interdependence issue by selecting ego-graphs (i.e., neighborhood subgraphs around a node) based on their spectral embeddings. We decompose the coreset selection problem for GNNs into two phases: a coarse selection of widely spread ego graphs and a refined selection to diversify their topologies. We design a greedy algorithm that approximately optimizes both objectives. Our spectral greedy graph coreset (SGGC) scales to graphs with millions of nodes, obviates the need for model pre-training, and applies to low-homophily graphs. Extensive experiments on ten datasets demonstrate that SGGC outperforms other coreset methods by a wide margin, generalizes well across GNN architectures, and is much faster than graph condensation. | [
"['Mucong Ding' 'Yinhan He' 'Jundong Li' 'Furong Huang']"
] |
null | null | 2405.17406 | null | null | http://arxiv.org/pdf/2405.17406v2 | 2024-06-03T11:32:11Z | 2024-05-27T17:55:05Z | Deep Learning Calabi-Yau four folds with hybrid and recurrent neural
network architectures | In this work, we report the results of applying deep learning based on hybrid convolutional-recurrent and purely recurrent neural network architectures to the dataset of almost one million complete intersection Calabi-Yau four-folds (CICY4) to machine-learn their four Hodge numbers $h^{1,1}, h^{2,1}, h^{3,1}, h^{2,2}$. In particular, we explored and experimented with twelve different neural network models, nine of which are convolutional-recurrent (CNN-RNN) hybrids with the RNN unit being either GRU (Gated Recurrent Unit) or Long Short Term Memory (LSTM). The remaining four models are purely recurrent neural networks based on LSTM. In terms of the $h^{1,1}, h^{2,1}, h^{3,1}, h^{2,2}$ prediction accuracies, at 72% training ratio, our best performing individual model is CNN-LSTM-400, a hybrid CNN-LSTM with the LSTM hidden size of 400, which obtained 99.74%, 98.07%, 95.19%, and 81.01%; our second best performing individual model is LSTM-448, an LSTM-based model with the hidden size of 448, which obtained 99.74%, 97.51%, 94.24%, and 78.63%. These results were improved by forming ensembles of the top two, three, or even four models. Our best ensemble, consisting of the top four models, achieved the accuracies of 99.84%, 98.71%, 96.26%, and 85.03%. At 80% training ratio, the top two performing models, LSTM-448 and LSTM-424, are both LSTM-based with hidden sizes of 448 and 424. Compared with the 72% training ratio, there is a significant improvement of accuracies, which reached 99.85%, 98.66%, 96.26%, and 84.77% for the best individual model and 99.90%, 99.03%, 97.97%, and 87.34% for the best ensemble. | [
"['H. L. Dao']"
] |
null | null | 2405.17412 | null | null | http://arxiv.org/pdf/2405.17412v1 | 2024-05-27T17:57:12Z | 2024-05-27T17:57:12Z | Towards One Model for Classical Dimensionality Reduction: A
Probabilistic Perspective on UMAP and t-SNE | This paper shows that the dimensionality reduction methods, UMAP and t-SNE, can be approximately recast as MAP inference methods corresponding to a generalized Wishart-based model introduced in ProbDR. This interpretation offers deeper theoretical insights into these algorithms, while introducing tools with which similar dimensionality reduction methods can be studied. | [
"['Aditya Ravuri' 'Neil D. Lawrence']"
] |
null | null | 2405.17416 | null | null | http://arxiv.org/pdf/2405.17416v1 | 2024-05-27T17:58:23Z | 2024-05-27T17:58:23Z | A Recipe for Unbounded Data Augmentation in Visual Reinforcement
Learning | $Q$-learning algorithms are appealing for real-world applications due to their data-efficiency, but they are very prone to overfitting and training instabilities when trained from visual observations. Prior work, namely SVEA, finds that selective application of data augmentation can improve the visual generalization of RL agents without destabilizing training. We revisit its recipe for data augmentation, and find an assumption that limits its effectiveness to augmentations of a photometric nature. Addressing these limitations, we propose a generalized recipe, SADA, that works with wider varieties of augmentations. We benchmark its effectiveness on DMC-GB2 -- our proposed extension of the popular DMControl Generalization Benchmark -- as well as tasks from Meta-World and the Distracting Control Suite, and find that our method, SADA, greatly improves training stability and generalization of RL agents across a diverse set of augmentations. Visualizations, code, and benchmark: see https://aalmuzairee.github.io/SADA/ | [
"['Abdulaziz Almuzairee' 'Nicklas Hansen' 'Henrik I. Christensen']"
] |
null | null | 2405.17419 | null | null | http://arxiv.org/pdf/2405.17419v1 | 2024-05-27T17:59:02Z | 2024-05-27T17:59:02Z | MultiOOD: Scaling Out-of-Distribution Detection for Multiple Modalities | Detecting out-of-distribution (OOD) samples is important for deploying machine learning models in safety-critical applications such as autonomous driving and robot-assisted surgery. Existing research has mainly focused on unimodal scenarios on image data. However, real-world applications are inherently multimodal, which makes it essential to leverage information from multiple modalities to enhance the efficacy of OOD detection. To establish a foundation for more realistic Multimodal OOD Detection, we introduce the first-of-its-kind benchmark, MultiOOD, characterized by diverse dataset sizes and varying modality combinations. We first evaluate existing unimodal OOD detection algorithms on MultiOOD, observing that the mere inclusion of additional modalities yields substantial improvements. This underscores the importance of utilizing multiple modalities for OOD detection. Based on the observation of Modality Prediction Discrepancy between in-distribution (ID) and OOD data, and its strong correlation with OOD performance, we propose the Agree-to-Disagree (A2D) algorithm to encourage such discrepancy during training. Moreover, we introduce a novel outlier synthesis method, NP-Mix, which explores broader feature spaces by leveraging the information from nearest neighbor classes and complements A2D to strengthen OOD detection performance. Extensive experiments on MultiOOD demonstrate that training with A2D and NP-Mix improves existing OOD detection algorithms by a large margin. Our source code and MultiOOD benchmark are available at https://github.com/donghao51/MultiOOD. | [
"['Hao Dong' 'Yue Zhao' 'Eleni Chatzi' 'Olga Fink']"
] |
null | null | 2405.17420 | null | null | http://arxiv.org/pdf/2405.17420v1 | 2024-05-27T17:59:04Z | 2024-05-27T17:59:04Z | Survival of the Fittest Representation: A Case Study with Modular
Addition | When a neural network can learn multiple distinct algorithms to solve a task, how does it "choose" between them during training? To approach this question, we take inspiration from ecology: when multiple species coexist, they eventually reach an equilibrium where some survive while others die out. Analogously, we suggest that a neural network at initialization contains many solutions (representations and algorithms), which compete with each other under pressure from resource constraints, with the "fittest" ultimately prevailing. To investigate this Survival of the Fittest hypothesis, we conduct a case study on neural networks performing modular addition, and find that these networks' multiple circular representations at different Fourier frequencies undergo such competitive dynamics, with only a few circles surviving at the end. We find that the frequencies with high initial signals and gradients, the "fittest," are more likely to survive. By increasing the embedding dimension, we also observe more surviving frequencies. Inspired by the Lotka-Volterra equations describing the dynamics between species, we find that the dynamics of the circles can be nicely characterized by a set of linear differential equations. Our results with modular addition show that it is possible to decompose complicated representations into simpler components, along with their basic interactions, to offer insight on the training dynamics of representations. | [
"['Xiaoman Delores Ding' 'Zifan Carl Guo' 'Eric J. Michaud' 'Ziming Liu'\n 'Max Tegmark']"
] |
null | null | 2405.17422 | null | null | http://arxiv.org/pdf/2405.17422v1 | 2024-05-27T17:59:23Z | 2024-05-27T17:59:23Z | Hardness-Aware Scene Synthesis for Semi-Supervised 3D Object Detection | 3D object detection aims to recover the 3D information of concerning objects and serves as the fundamental task of autonomous driving perception. Its performance greatly depends on the scale of labeled training data, yet it is costly to obtain high-quality annotations for point cloud data. While conventional methods focus on generating pseudo-labels for unlabeled samples as supplements for training, the structural nature of 3D point cloud data facilitates the composition of objects and backgrounds to synthesize realistic scenes. Motivated by this, we propose a hardness-aware scene synthesis (HASS) method to generate adaptive synthetic scenes to improve the generalization of the detection models. We obtain pseudo-labels for unlabeled objects and generate diverse scenes with different compositions of objects and backgrounds. As the scene synthesis is sensitive to the quality of pseudo-labels, we further propose a hardness-aware strategy to reduce the effect of low-quality pseudo-labels and maintain a dynamic pseudo-database to ensure the diversity and quality of synthetic scenes. Extensive experimental results on the widely used KITTI and Waymo datasets demonstrate the superiority of the proposed HASS method, which outperforms existing semi-supervised learning methods on 3D object detection. Code: https://github.com/wzzheng/HASS. | [
"['Shuai Zeng' 'Wenzhao Zheng' 'Jiwen Lu' 'Haibin Yan']"
] |
null | null | 2405.17425 | null | null | http://arxiv.org/pdf/2405.17425v1 | 2024-05-27T17:59:35Z | 2024-05-27T17:59:35Z | From Neurons to Neutrons: A Case Study in Interpretability | Mechanistic Interpretability (MI) promises a path toward fully understanding how neural networks make their predictions. Prior work demonstrates that even when trained to perform simple arithmetic, models can implement a variety of algorithms (sometimes concurrently) depending on initialization and hyperparameters. Does this mean neuron-level interpretability techniques have limited applicability? We argue that high-dimensional neural networks can learn low-dimensional representations of their training data that are useful beyond simply making good predictions. Such representations can be understood through the mechanistic interpretability lens and provide insights that are surprisingly faithful to human-derived domain knowledge. This indicates that such approaches to interpretability can be useful for deriving a new understanding of a problem from models trained to solve it. As a case study, we extract nuclear physics concepts by studying models trained to reproduce nuclear data. | [
"['Ouail Kitouni' 'Niklas Nolte' 'Víctor Samuel Pérez-Díaz'\n 'Sokratis Trifinopoulos' 'Mike Williams']"
] |
null | null | 2405.17428 | null | null | http://arxiv.org/pdf/2405.17428v1 | 2024-05-27T17:59:45Z | 2024-05-27T17:59:45Z | NV-Embed: Improved Techniques for Training LLMs as Generalist Embedding
Models | Decoder-only large language model (LLM)-based embedding models are beginning to outperform BERT or T5-based embedding models in general-purpose text embedding tasks, including dense vector-based retrieval. In this work, we introduce the NV-Embed model with a variety of architectural designs and training procedures to significantly enhance the performance of LLM as a versatile embedding model, while maintaining its simplicity and reproducibility. For model architecture, we propose a latent attention layer to obtain pooled embeddings, which consistently improves retrieval and downstream task accuracy compared to mean pooling or using the last <EOS> token embedding from LLMs. To enhance representation learning, we remove the causal attention mask of LLMs during contrastive training. For model training, we introduce a two-stage contrastive instruction-tuning method. It first applies contrastive training with instructions on retrieval datasets, utilizing in-batch negatives and curated hard negative examples. At stage-2, it blends various non-retrieval datasets into instruction tuning, which not only enhances non-retrieval task accuracy but also improves retrieval performance. Combining these techniques, our NV-Embed model, using only publicly available data, has achieved a record-high score of 69.32, ranking No. 1 on the Massive Text Embedding Benchmark (MTEB) (as of May 24, 2024), with 56 tasks, encompassing retrieval, reranking, classification, clustering, and semantic textual similarity tasks. Notably, our model also attains the highest score of 59.36 on 15 retrieval tasks in the MTEB benchmark (also known as BEIR). We will open-source the model at: https://huggingface.co/nvidia/NV-Embed-v1. | [
"['Chankyu Lee' 'Rajarshi Roy' 'Mengyao Xu' 'Jonathan Raiman'\n 'Mohammad Shoeybi' 'Bryan Catanzaro' 'Wei Ping']"
] |
null | null | 2405.17430 | null | null | http://arxiv.org/pdf/2405.17430v1 | 2024-05-27T17:59:56Z | 2024-05-27T17:59:56Z | Matryoshka Multimodal Models | Large Multimodal Models (LMMs) such as LLaVA have shown strong performance in visual-linguistic reasoning. These models first embed images into a fixed large number of visual tokens and then feed them into a Large Language Model (LLM). However, this design causes an excessive number of tokens for dense visual scenarios such as high-resolution images and videos, leading to great inefficiency. While token pruning/merging methods do exist, they produce a single-length output for each image and do not afford flexibility in trading off information density vs. efficiency. Inspired by the concept of Matryoshka Dolls, we propose M3: Matryoshka Multimodal Models, which learns to represent visual content as nested sets of visual tokens that capture information across multiple coarse-to-fine granularities. Our approach offers several unique benefits for LMMs: (1) One can explicitly control the visual granularity per test instance during inference, e.g., adjusting the number of tokens used to represent an image based on the anticipated complexity or simplicity of the content; (2) M3 provides a framework for analyzing the granularity needed for existing datasets, where we find that COCO-style benchmarks only need around 9 visual tokens to obtain accuracy similar to that of using all 576 tokens; (3) Our approach provides a foundation to explore the best trade-off between performance and visual token length at sample level, where our investigation reveals that a large gap exists between the oracle upper bound and current fixed-scale representations. | [
"['Mu Cai' 'Jianwei Yang' 'Jianfeng Gao' 'Yong Jae Lee']"
] |
null | null | 2405.17436 | null | null | http://arxiv.org/pdf/2405.17436v1 | 2024-05-02T01:36:13Z | 2024-05-02T01:36:13Z | Intelligent Hybrid Resource Allocation in MEC-assisted RAN Slicing
Network | In this paper, we aim to maximize the SSR for heterogeneous service demands in the cooperative MEC-assisted RAN slicing system by jointly considering the multi-node computing resources cooperation and allocation, the transmission resource blocks (RBs) allocation, and the time-varying dynamicity of the system. To this end, we abstract the system into a weighted undirected topology graph and then propose a recurrent graph reinforcement learning (RGRL) algorithm to intelligently learn the optimal hybrid RA policy. Therein, a graph convolutional network (GCN) and the deep deterministic policy gradient (DDPG) are combined to effectively extract spatial features from the equivalent topology graph. Furthermore, a novel time recurrent reinforcement learning framework is designed in the proposed RGRL algorithm by incorporating the action output of the policy network at the previous moment into the state input of the policy network at the subsequent moment, so as to cope with the time-varying and contextual network environment. In addition, we explore two use case scenarios to discuss the universal superiority of the proposed RGRL algorithm. Simulation results demonstrate the superiority of the proposed algorithm in terms of the average SSR, the performance stability, and the network complexity. | [
"['Chong Zheng' 'Yongming Huang' 'Cheng Zhang' 'Tony Q. S. Quek']"
] |
null | null | 2405.17438 | null | null | http://arxiv.org/pdf/2405.17438v1 | 2024-05-07T18:55:50Z | 2024-05-07T18:55:50Z | An LLM-Tool Compiler for Fused Parallel Function Calling | State-of-the-art sequential reasoning in Large Language Models (LLMs) has expanded the capabilities of Copilots beyond conversational tasks to complex function calling, managing thousands of API calls. However, the tendency of compositional prompting to segment tasks into multiple steps, each requiring a round-trip to the GPT APIs, leads to increased system latency and costs. Although recent advancements in parallel function calling have improved tool execution per API call, they may necessitate more detailed in-context instructions and task breakdown at the prompt level, resulting in higher engineering and production costs. Inspired by the hardware design principles of multiply-add (MAD) operations, which fuse multiple arithmetic operations into a single task from the compiler's perspective, we propose LLM-Tool Compiler, which selectively fuses similar types of tool operations under a single function at runtime, presenting them as a unified task to the LLM. This selective fusion inherently enhances parallelization and efficiency. Benchmarked on a large-scale Copilot platform, LLM-Tool Compiler achieves up to four times more parallel calls than existing methods, reducing token costs and latency by up to 40% and 12%, respectively. | [
"['Simranjit Singh' 'Andreas Karatzas' 'Michael Fore'\n 'Iraklis Anagnostopoulos' 'Dimitrios Stamoulis']"
] |
null | null | 2405.17439 | null | null | http://arxiv.org/pdf/2405.17439v1 | 2024-05-09T03:07:59Z | 2024-05-09T03:07:59Z | An Overview of Machine Learning-Enabled Optimization for Reconfigurable
Intelligent Surfaces-Aided 6G Networks: From Reinforcement Learning to Large
Language Models | Reconfigurable intelligent surface (RIS) becomes a promising technique for 6G networks by reshaping signal propagation in smart radio environments. However, it also leads to significant complexity for network management due to the large number of elements and dedicated phase-shift optimization. In this work, we provide an overview of machine learning (ML)-enabled optimization for RIS-aided 6G networks. In particular, we focus on various reinforcement learning (RL) techniques, e.g., deep Q-learning, multi-agent reinforcement learning, transfer reinforcement learning, hierarchical reinforcement learning, and offline reinforcement learning. Different from existing studies, this work further discusses how large language models (LLMs) can be combined with RL to handle network optimization problems. It shows that LLM offers new opportunities to enhance the capabilities of RL algorithms in terms of generalization, reward function design, multi-modal information processing, etc. Finally, we identify the future challenges and directions of ML-enabled optimization for RIS-aided 6G networks. | [
"['Hao Zhou' 'Chengming Hu' 'Xue Liu']"
] |
null | null | 2405.17440 | null | null | http://arxiv.org/pdf/2405.17440v1 | 2024-05-13T03:19:47Z | 2024-05-13T03:19:47Z | CataLM: Empowering Catalyst Design Through Large Language Models | The field of catalysis holds paramount importance in shaping the trajectory of sustainable development, prompting intensive research efforts to leverage artificial intelligence (AI) in catalyst design. Presently, the fine-tuning of open-source large language models (LLMs) has yielded significant breakthroughs across various domains such as biology and healthcare. Drawing inspiration from these advancements, we introduce CataLM (Catalytic Language Model), a large language model tailored to the domain of electrocatalytic materials. Our findings demonstrate that CataLM exhibits remarkable potential for facilitating human-AI collaboration in catalyst knowledge exploration and design. To the best of our knowledge, CataLM stands as the pioneering LLM dedicated to the catalyst domain, offering novel avenues for catalyst discovery and development. | [
"['Ludi Wang' 'Xueqing Chen' 'Yi Du' 'Yuanchun Zhou' 'Yang Gao'\n 'Wenjuan Cui']"
] |
null | null | 2405.17442 | null | null | http://arxiv.org/pdf/2405.17442v1 | 2024-05-15T22:34:52Z | 2024-05-15T22:34:52Z | Leveraging Machine Learning for Accurate IoT Device Identification in
Dynamic Wireless Contexts | Identifying IoT devices is crucial for network monitoring, security enforcement, and inventory tracking. However, most existing identification methods rely on deep packet inspection, which raises privacy concerns and adds computational complexity. More importantly, existing works overlook the impact of wireless channel dynamics on the accuracy of layer-2 features, thereby limiting their effectiveness in real-world scenarios. In this work, we define and use the latency of specific probe-response packet exchanges, referred to as "device latency," as the main feature for device identification. Additionally, we reveal the critical impact of wireless channel dynamics on the accuracy of device identification based on device latency. Specifically, this work introduces "accumulation score" as a novel approach to capturing fine-grained channel dynamics and their impact on device latency when training machine learning models. We implement the proposed methods and measure the accuracy and overhead of device identification in real-world scenarios. The results confirm that by incorporating the accumulation score for balanced data collection and training machine learning algorithms, we achieve an F1 score of over 97% for device identification, even amidst wireless channel dynamics, a significant improvement over the 75% F1 score achieved by disregarding the impact of channel dynamics on data collection and device latency. | [
"['Bhagyashri Tushir' 'Vikram K Ramanna' 'Yuhong Liu' 'Behnam Dezfouli']"
] |
null | null | 2405.17444 | null | null | http://arxiv.org/pdf/2405.17444v1 | 2024-05-18T02:36:29Z | 2024-05-18T02:36:29Z | Towards Gradient-based Time-Series Explanations through a SpatioTemporal
Attention Network | In this paper, we explore the feasibility of using a transformer-based, spatiotemporal attention network (STAN) for gradient-based time-series explanations. First, we trained the STAN model for video classification using the global and local views of data and weakly supervised labels on time-series data (i.e., the type of activity). We then leveraged a gradient-based XAI technique (e.g., a saliency map) to identify salient frames of time-series data. According to the experiments using the datasets of four medically relevant activities, the STAN model demonstrated its potential to identify important frames of videos. | [
"['Min Hun Lee']"
] |
null | null | 2405.17445 | null | null | http://arxiv.org/pdf/2405.17445v1 | 2024-05-20T13:26:57Z | 2024-05-20T13:26:57Z | On margin-based generalization prediction in deep neural networks | Understanding generalization in deep neural networks is an active area of research. A promising avenue of exploration has been that of margin measurements: the shortest distance to the decision boundary for a given sample or that sample's representation internal to the network. Margin-based complexity measures have been shown to be correlated with the generalization ability of deep neural networks in some circumstances but not others. The reasons behind the success or failure of these metrics are currently unclear. In this study, we examine margin-based generalization prediction methods in different settings. We motivate why these metrics sometimes fail to accurately predict generalization and how they can be improved. First, we analyze the relationship between margins measured in the input space and sample noise. We find that different types of sample noise can have a very different effect on the overall margin of a network that has modeled noisy data. Following this, we empirically evaluate how robust margins measured at different representational spaces are at predicting generalization. We find that these metrics have several limitations and that a large margin does not exhibit a strong correlation with empirical risk in many cases. Finally, we introduce a new margin-based measure that incorporates an approximation of the underlying data manifold. It is empirically demonstrated that this measure is generally more predictive of generalization than all other margin-based measures. Furthermore, we find that this measurement also outperforms other contemporary complexity measures on a well-known generalization prediction benchmark. In addition, we analyze the utility and limitations of this approach and find that this metric is well aligned with intuitions expressed in prior work. | [
"['Coenraad Mouton']"
] |
null | null | 2405.17447 | null | null | http://arxiv.org/pdf/2405.17447v1 | 2024-05-21T08:36:30Z | 2024-05-21T08:36:30Z | How to train your ViT for OOD Detection | Vision Transformers have been shown to be powerful out-of-distribution detectors for ImageNet-scale settings when finetuned from publicly available checkpoints, often outperforming other model types on popular benchmarks. In this work, we investigate the impact of both the pretraining and finetuning scheme on the performance of ViTs on this task by analyzing a large pool of models. We find that the exact type of pretraining has a strong impact on which method works well and on OOD detection performance in general. We further show that certain training schemes might only be effective for a specific type of out-distribution, but not in general, and identify a best-practice training recipe. | [
"['Maximilian Mueller' 'Matthias Hein']"
] |
null | null | 2405.17449 | null | null | http://arxiv.org/pdf/2405.17449v1 | 2024-05-21T17:20:35Z | 2024-05-21T17:20:35Z | Image Based Character Recognition, Documentation System To Decode
Inscription From Temple | This project undertakes the training and analysis of optical character recognition (OCR) methods applied to 10th-century ancient Tamil inscriptions discovered on the walls of the Brihadeeswarar Temple. The chosen OCR methods include Tesseract, a widely used OCR engine, along with modern ICR techniques to preprocess the raw data and box-editing software to fine-tune our model. The analysis with Tesseract aims to evaluate its effectiveness in accurately deciphering the nuances of the ancient Tamil characters. The performance of our model on the dataset is determined by its accuracy rate, with the evaluated dataset divided into a training set and a testing set. By addressing the unique challenges posed by the script's historical context, this study seeks to contribute valuable insights to the broader field of OCR, facilitating improved preservation and interpretation of ancient inscriptions. | [
"['Velmathi G' 'Shangavelan M' 'Harish D' 'Krithikshun M S']"
] |
null | null | 2405.17450 | null | null | http://arxiv.org/pdf/2405.17450v1 | 2024-05-21T17:55:54Z | 2024-05-21T17:55:54Z | The Power of Next-Frame Prediction for Learning Physical Laws | Next-frame prediction is a useful and powerful method for modelling and understanding the dynamics of video data. Inspired by the empirical success of causal language modelling and next-token prediction in language modelling, we explore the extent to which next-frame prediction serves as a strong foundational learning strategy (analogous to language modelling) for inducing an understanding of the visual world. In order to quantify the specific visual understanding induced by next-frame prediction, we introduce six diagnostic simulation video datasets derived from fundamental physical laws created by varying physical constants such as gravity and mass. We demonstrate that our models trained only on next-frame prediction are capable of predicting the value of these physical constants (e.g. gravity) without having been trained directly to learn these constants via a regression task. We find that the generative training phase alone induces a model state that can predict physical constants significantly better than that of a random model, improving the loss by a factor of between 1.28 to 6.24. We conclude that next-frame prediction shows great promise as a general learning strategy to induce understanding of the many `laws' that govern the visual domain without the need for explicit labelling. | [
"['Thomas Winterbottom' 'G. Thomas Hudson' 'Daniel Kluvanec' 'Dean Slack'\n 'Jamie Sterling' 'Junjie Shentu' 'Chenghao Xiao' 'Zheming Zhou'\n 'Noura Al Moubayed']"
] |
null | null | 2405.17451 | null | null | http://arxiv.org/pdf/2405.17451v1 | 2024-05-21T18:57:43Z | 2024-05-21T18:57:43Z | Green AI in Action: Strategic Model Selection for Ensembles in
Production | Integrating Artificial Intelligence (AI) into software systems has significantly enhanced their capabilities while escalating energy demands. Ensemble learning, combining predictions from multiple models to form a single prediction, intensifies this problem due to cumulative energy consumption. This paper presents a novel approach to model selection that addresses the challenge of balancing the accuracy of AI models with their energy consumption in a live AI ensemble system. We explore how reducing the number of models or improving the efficiency of model usage within an ensemble during inference can reduce energy demands without substantially sacrificing accuracy. This study introduces and evaluates two model selection strategies, Static and Dynamic, for optimizing the performance of ensemble learning systems while minimizing energy usage. Our results demonstrate that the Static strategy improves the F1 score beyond the baseline, reducing average energy usage from 100% for the full ensemble to 62%. The Dynamic strategy further enhances F1 scores, using on average 76% of the energy of the full ensemble. Moreover, we propose an approach that balances accuracy with resource consumption, significantly reducing energy usage without substantially impacting accuracy. This method decreased the average energy usage of the Static strategy from approximately 62% to 14%, and for the Dynamic strategy, from around 76% to 57%. Our field study of Green AI using an operational AI system developed by a large professional services provider shows the practical applicability of adopting energy-conscious model selection strategies in live production environments. | [
"['Nienke Nijkamp' 'June Sallou' 'Niels van der Heijden' 'Luís Cruz']"
] |
null | null | 2405.17455 | null | null | http://arxiv.org/pdf/2405.17455v1 | 2024-05-22T17:43:46Z | 2024-05-22T17:43:46Z | WeatherFormer: A Pretrained Encoder Model for Learning Robust Weather
Representations from Small Datasets | This paper introduces WeatherFormer, a transformer encoder-based model designed to learn robust weather features from minimal observations. It addresses the challenge of modeling complex weather dynamics from small datasets, a bottleneck for many prediction tasks in agriculture, epidemiology, and climate science. WeatherFormer was pretrained on a large pretraining dataset composed of 39 years of satellite measurements across the Americas. With a novel pretraining task and fine-tuning, WeatherFormer achieves state-of-the-art performance in county-level soybean yield prediction and influenza forecasting. Technical innovations include a unique spatiotemporal encoding that captures geographical, annual, and seasonal variations, adapting the transformer architecture to continuous weather data, and a pretraining strategy to learn representations that are robust to missing weather features. This paper for the first time demonstrates the effectiveness of pretraining large transformer encoder models for weather-dependent applications across multiple domains. | [
"['Adib Hasan' 'Mardavij Roozbehani' 'Munther Dahleh']"
] |
null | null | 2405.17456 | null | null | http://arxiv.org/pdf/2405.17456v1 | 2024-05-22T20:38:58Z | 2024-05-22T20:38:58Z | Optimized Linear Measurements for Inverse Problems using Diffusion-Based
Image Generation | We re-examine the problem of reconstructing a high-dimensional signal from a small set of linear measurements, in combination with an image prior from a diffusion probabilistic model. Well-established methods for optimizing such measurements include principal component analysis (PCA), independent component analysis (ICA), and compressed sensing (CS), all of which rely on axis- or subspace-aligned statistical characterization. But many naturally occurring signals, including photographic images, contain richer statistical structure. To exploit such structure, we introduce a general method for obtaining an optimized set of linear measurements, assuming a Bayesian inverse solution that leverages the prior implicit in a neural network trained to perform denoising. We demonstrate that these measurements are distinct from those of PCA and CS, with significant improvements in minimizing squared reconstruction error. In addition, we show that optimizing the measurements for the SSIM perceptual loss leads to perceptually improved reconstruction. Our results highlight the importance of incorporating the specific statistical regularities of natural signals when designing effective linear measurements. | [
"['Ling-Qi Zhang' 'Zahra Kadkhodaie' 'Eero P. Simoncelli'\n 'David H. Brainard']"
] |
null | null | 2405.17457 | null | null | http://arxiv.org/pdf/2405.17457v1 | 2024-05-22T20:59:18Z | 2024-05-22T20:59:18Z | Data-Free Federated Class Incremental Learning with Diffusion-Based
Generative Memory | Federated Class Incremental Learning (FCIL) is a critical yet largely underexplored issue that deals with the dynamic incorporation of new classes within federated learning (FL). Existing methods often employ generative adversarial networks (GANs) to produce synthetic images to address privacy concerns in FL. However, GANs exhibit inherent instability and high sensitivity, compromising the effectiveness of these methods. In this paper, we introduce a novel data-free federated class incremental learning framework with diffusion-based generative memory (DFedDGM) to mitigate catastrophic forgetting by generating stable, high-quality images through diffusion models. We design a new balanced sampler to help train the diffusion models to alleviate the common non-IID problem in FL, and introduce an entropy-based sample filtering technique from an information theory perspective to enhance the quality of generative samples. Finally, we integrate knowledge distillation with a feature-based regularization term for better knowledge transfer. Our framework does not incur additional communication costs compared to the baseline FedAvg method. Extensive experiments across multiple datasets demonstrate that our method significantly outperforms existing baselines, e.g., over a 4% improvement in average accuracy on the Tiny-ImageNet dataset. | [
"['Naibo Wang' 'Yuchen Deng' 'Wenjie Feng' 'Jianwei Yin' 'See-Kiong Ng']"
] |
null | null | 2405.17458 | null | null | http://arxiv.org/pdf/2405.17458v1 | 2024-05-23T01:34:59Z | 2024-05-23T01:34:59Z | Blood Glucose Control Via Pre-trained Counterfactual Invertible Neural
Networks | Type 1 diabetes mellitus (T1D) is characterized by insulin deficiency and blood glucose (BG) control issues. The state-of-the-art solution for continuous BG control is reinforcement learning (RL), where an agent can dynamically adjust exogenous insulin doses in time to maintain BG levels within the target range. However, due to the lack of action guidance, the agent often needs to learn from randomized trials to understand misleading correlations between exogenous insulin doses and BG levels, which can lead to instability and unsafety. To address these challenges, we propose an introspective RL based on Counterfactual Invertible Neural Networks (CINN). We use the pre-trained CINN as a frozen introspective block of the RL agent, which integrates forward prediction and counterfactual inference to guide the policy updates, promoting more stable and safer BG control. Constructed based on interpretable causal order, CINN employs bidirectional encoders with affine coupling layers to ensure invertibility while using orthogonal weight normalization to enhance the trainability, thereby ensuring the bidirectional differentiability of network parameters. We experimentally validate the accuracy and generalization ability of the pre-trained CINN in BG prediction and counterfactual inference for action. Furthermore, our experimental results highlight the effectiveness of pre-trained CINN in guiding RL policy updates for more accurate and safer BG control. | [
"['Jingchi Jiang' 'Rujia Shen' 'Boran Wang' 'Yi Guan']"
] |
null | null | 2405.17459 | null | null | http://arxiv.org/pdf/2405.17459v1 | 2024-05-23T02:22:10Z | 2024-05-23T02:22:10Z | Integrating Medical Imaging and Clinical Reports Using Multimodal Deep
Learning for Advanced Disease Analysis | In this paper, an innovative multi-modal deep learning model is proposed to deeply integrate heterogeneous information from medical images and clinical reports. First, for medical images, convolutional neural networks are used to extract high-dimensional features and capture key visual information such as focal details, texture and spatial distribution. Second, for clinical report text, a bidirectional long short-term memory network combined with an attention mechanism is used for deep semantic understanding, and key statements related to the disease are accurately captured. The two features interact and integrate effectively through the designed multi-modal fusion layer to realize the joint representation learning of image and text. In the empirical study, we selected a large medical image database covering a variety of diseases, combined with corresponding clinical reports for model training and validation. The proposed multimodal deep learning model demonstrated substantial superiority in the realms of disease classification, lesion localization, and clinical description generation, as evidenced by the experimental results. | [
"['Ziyan Yao' 'Fei Lin' 'Sheng Chai' 'Weijie He' 'Lu Dai' 'Xinghui Fei']"
] |
null | null | 2405.17460 | null | null | http://arxiv.org/pdf/2405.17460v1 | 2024-05-23T04:30:41Z | 2024-05-23T04:30:41Z | Investigation of Customized Medical Decision Algorithms Utilizing Graph
Neural Networks | Aiming at the limitations of traditional medical decision systems in processing large-scale heterogeneous medical data and realizing highly personalized recommendations, this paper introduces a personalized medical decision algorithm utilizing a graph neural network (GNN). This research innovatively integrates graph neural network technology into the medical and health field, aiming to build a high-precision representation model of patient health status by mining the complex associations among patients' clinical characteristics, genetic information, and living habits. In this study, medical data is preprocessed to transform it into a graph structure, where nodes represent different data entities (such as patients, diseases, genes, etc.) and edges represent interactions or relationships between entities. The core of the algorithm is to design a novel multi-scale fusion mechanism, combining the historical medical records, physiological indicators and genetic characteristics of patients, to dynamically adjust the attention allocation strategy of the graph neural network, so as to achieve highly customized analysis of individual cases. In the experimental part, this study selected several publicly available medical data sets for validation, and the results showed that compared with traditional machine learning methods and a single graph neural network model, the proposed personalized medical decision algorithm showed significantly superior performance in terms of disease prediction accuracy, treatment effect evaluation and patient risk stratification. | [
"['Yafeng Yan' 'Shuyao He' 'Zhou Yu' 'Jiajie Yuan' 'Ziang Liu' 'Yan Chen']"
] |
null | null | 2405.17461 | null | null | http://arxiv.org/pdf/2405.17461v1 | 2024-05-23T05:25:45Z | 2024-05-23T05:25:45Z | EMR-Merging: Tuning-Free High-Performance Model Merging | The success of the pretrain-finetune paradigm brings about the release of numerous model weights. In this case, merging models finetuned on different tasks to enable a single model with multi-task capabilities is gaining increasing attention for its practicability. Existing model merging methods usually suffer from (1) significant performance degradation or (2) requiring tuning by additional data or training. In this paper, we rethink and analyze the existing model merging paradigm. We discover that using a single model's weights can hardly simulate all the models' performance. To tackle this issue, we propose Elect, Mask & Rescale-Merging (EMR-Merging). We first (a) elect a unified model from all the model weights and then (b) generate extremely lightweight task-specific modulators, including masks and rescalers, to align the direction and magnitude between the unified model and each specific model, respectively. EMR-Merging is tuning-free, thus requiring no data availability or any additional training while showing impressive performance. We find that EMR-Merging shows outstanding performance compared to existing merging methods under different classical and newly-established settings, including merging different numbers of vision models (up to 30), NLP models, PEFT models, and multi-modal models. | [
"['Chenyu Huang' 'Peng Ye' 'Tao Chen' 'Tong He' 'Xiangyu Yue'\n 'Wanli Ouyang']"
] |
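The elect/mask/rescale procedure lends itself to a compact sketch on flattened weight vectors. The version below is one plausible reading of the abstract, assuming sign-based election for step (a); it is not the official EMR-Merging code, and details such as the exact rescaler definition may differ.

```python
import torch

def emr_merge(base, finetuned):
    """Sketch of Elect, Mask & Rescale-Merging on flat weight tensors.
    `base` is the pretrained weight; `finetuned` is a list of task-specific weights."""
    task_vectors = [w - base for w in finetuned]
    stacked = torch.stack(task_vectors)                    # (n_tasks, n_params)
    # (a) Elect a unified task vector: per parameter, keep the dominant sign and
    # the largest magnitude among task vectors agreeing with that sign.
    sign = torch.sign(stacked.sum(dim=0))
    agree = torch.sign(stacked) == sign
    unified = (stacked * agree).abs().max(dim=0).values * sign
    # (b) Per-task lightweight modulators: a mask aligns direction,
    # a scalar rescaler aligns magnitude.
    masks = [torch.sign(tv) == sign for tv in task_vectors]
    rescalers = [tv.abs().mean() / (unified[m].abs().mean() + 1e-12)
                 for tv, m in zip(task_vectors, masks)]
    return unified, masks, rescalers

def task_weights(base, unified, mask, rescale):
    """Recover an (approximate) task-specific model from the unified vector."""
    return base + rescale * unified * mask

base = torch.randn(1000)
finetuned = [base + 0.1 * torch.randn(1000) for _ in range(3)]
unified, masks, rescalers = emr_merge(base, finetuned)
approx = task_weights(base, unified, masks[0], rescalers[0])
print(float(torch.norm(approx - finetuned[0]) / torch.norm(finetuned[0] - base)))
```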
null | null | 2405.17462 | null | null | http://arxiv.org/pdf/2405.17462v2 | 2024-05-29T17:11:04Z | 2024-05-23T07:20:45Z | Ferrari: Federated Feature Unlearning via Optimizing Feature Sensitivity | The advent of Federated Learning (FL) highlights the practical necessity of the 'right to be forgotten' for all clients, allowing them to request data deletion from the machine learning model's service provider. This necessity has spurred a growing demand for Federated Unlearning (FU). Feature unlearning has gained considerable attention due to its applications in unlearning sensitive features, backdoor features, and biased features. Existing methods employ the influence function to achieve feature unlearning, which is impractical for FL as it necessitates the participation of other clients in the unlearning process. Furthermore, current research lacks an evaluation of the effectiveness of feature unlearning. To address these limitations, we define feature sensitivity, based on Lipschitz continuity, as an evaluation metric for feature unlearning. This metric characterizes the rate of change, or sensitivity, of the model output to perturbations in the input feature. We then propose an effective federated feature unlearning framework called Ferrari, which minimizes feature sensitivity. Extensive experimental results and theoretical analysis demonstrate the effectiveness of Ferrari across various feature unlearning scenarios, including sensitive, backdoor, and biased features. | [
"['Hanlin Gu' 'WinKent Ong' 'Chee Seng Chan' 'Lixin Fan']"
] |
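The Lipschitz-style feature sensitivity metric can be approximated empirically by perturbing a single input feature and taking the worst observed ratio of output change to input change. A hedged sketch follows; the model, sampling scheme, and constants are illustrative, and an unlearning procedure would then minimize this quantity.

```python
import torch

def feature_sensitivity(model, x, feature_idx, eps=1e-2, n_samples=32):
    """Empirical sensitivity of the model output to perturbations of one input
    feature, in the spirit of a local Lipschitz constant (illustrative sketch)."""
    x = x.clone()
    base_out = model(x)
    ratios = []
    for _ in range(n_samples):
        x_pert = x.clone()
        delta = eps * torch.randn(x.size(0))
        x_pert[:, feature_idx] += delta
        out = model(x_pert)
        num = (out - base_out).norm(dim=-1)   # change in the output
        den = delta.abs() + 1e-12             # change in the feature
        ratios.append((num / den).mean())
    return torch.stack(ratios).max()          # worst-case ratio ~ sensitivity

model = torch.nn.Sequential(torch.nn.Linear(10, 16), torch.nn.ReLU(), torch.nn.Linear(16, 2))
x = torch.randn(64, 10)
print(float(feature_sensitivity(model, x, feature_idx=3)))
```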
null | null | 2405.17463 | null | null | http://arxiv.org/pdf/2405.17463v1 | 2024-05-23T08:21:48Z | 2024-05-23T08:21:48Z | No Algorithmic Collusion in Two-Player Blindfolded Game with Thompson
Sampling | When two players are engaged in a repeated game with unknown payoff matrices, they may be completely unaware of the existence of each other and use multi-armed bandit algorithms to choose their actions, which is referred to as the ``blindfolded game'' in this paper. We show that when the players use Thompson sampling, the game dynamics converges to the Nash equilibrium under a mild assumption on the payoff matrices. Therefore, algorithmic collusion does not arise in this case, despite the fact that the players do not intentionally deploy competitive strategies. To prove the convergence result, we find that the framework developed in stochastic approximation does not apply, because of the sporadic and infrequent updates of the inferior actions and the lack of Lipschitz continuity. We develop a novel sample-path-wise approach to show the convergence. | [
"['Ningyuan Chen' 'Xuefeng Gao' 'Yi Xiong']"
] |
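The blindfolded game is straightforward to simulate: each player runs an independent Thompson-sampling bandit over its own actions, unaware that its reward depends on the opponent's choice. The toy NumPy experiment below assumes Bernoulli payoffs and Beta posteriors, which are choices of this sketch rather than of the paper's general setting.

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0.8, 0.2], [0.3, 0.6]])   # player 1's (unknown) payoff probabilities
B = np.array([[0.7, 0.4], [0.2, 0.9]])   # player 2's
s1 = np.ones((2, 2))                     # per action: [alpha, beta] Beta parameters
s2 = np.ones((2, 2))

counts = np.zeros((2, 2))
for t in range(20000):
    a = np.argmax(rng.beta(s1[:, 0], s1[:, 1]))   # Thompson sample, player 1
    b = np.argmax(rng.beta(s2[:, 0], s2[:, 1]))   # Thompson sample, player 2
    r1 = rng.random() < A[a, b]
    r2 = rng.random() < B[a, b]
    s1[a, 0] += r1; s1[a, 1] += 1 - r1            # posterior updates for own action only
    s2[b, 0] += r2; s2[b, 1] += 1 - r2
    counts[a, b] += 1

print(counts / counts.sum())   # empirical play; mass concentrates on one action pair
```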
null | null | 2405.17464 | null | null | http://arxiv.org/pdf/2405.17464v1 | 2024-05-23T08:58:08Z | 2024-05-23T08:58:08Z | Data Valuation by Leveraging Global and Local Statistical Information | Data valuation has garnered increasing attention in recent years, given the critical role of high-quality data in various applications, particularly in machine learning tasks. There are diverse technical avenues to quantify the value of data within a corpus. While Shapley value-based methods are among the most widely used techniques in the literature due to their solid theoretical foundation, the accurate calculation of Shapley values is often intractable, leading to the proposal of numerous approximated calculation methods. Despite significant progress, nearly all existing methods overlook the utilization of distribution information of values within a data corpus. In this paper, we demonstrate that both global and local statistical information of value distributions hold significant potential for data valuation within the context of machine learning. Firstly, we explore the characteristics of both global and local value distributions across several simulated and real data corpora. Useful observations and clues are obtained. Secondly, we propose a new data valuation method that estimates Shapley values by incorporating the explored distribution characteristics into an existing method, AME. Thirdly, we present a new path to address the dynamic data valuation problem by formulating an optimization problem that integrates information of both global and local value distributions. Extensive experiments are conducted on Shapley value estimation, value-based data removal/adding, mislabeled data detection, and incremental/decremental data valuation. The results showcase the effectiveness and efficiency of our proposed methodologies, affirming the significant potential of global and local value distributions in data valuation. | [
"['Xiaoling Zhou' 'Ou Wu' 'Michael K. Ng' 'Hao Jiang']"
] |
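For context on the Shapley-based valuation discussed above, the generic permutation-sampling estimator can be written in a few lines. This is the standard Monte Carlo approach, not the AME-based estimator the paper proposes; the toy additive utility is purely illustrative.

```python
import numpy as np

def monte_carlo_shapley(utility, n, n_perms=200, rng=None):
    """Permutation-sampling estimate of data Shapley values.
    `utility(subset)` returns model performance trained on `subset` (index list)."""
    rng = rng or np.random.default_rng(0)
    values = np.zeros(n)
    for _ in range(n_perms):
        perm = rng.permutation(n)
        prev_u = utility([])
        subset = []
        for i in perm:
            subset.append(i)
            u = utility(subset)
            values[i] += u - prev_u   # marginal contribution of i in this permutation
            prev_u = u
    return values / n_perms

# Toy utility: the value of a subset is the sum of per-point qualities.
quality = np.array([0.5, 0.1, 0.9, -0.2, 0.3])
utility = lambda s: float(sum(quality[i] for i in s))
print(monte_carlo_shapley(utility, n=5).round(2))  # recovers ~`quality` for additive utility
```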
null | null | 2405.17465 | null | null | http://arxiv.org/pdf/2405.17465v1 | 2024-05-23T17:53:31Z | 2024-05-23T17:53:31Z | Application of Machine Learning in Agriculture: Recent Trends and Future
Research Avenues | Food production is a vital global concern and the potential for an agritech revolution through artificial intelligence (AI) remains largely unexplored. This paper presents a comprehensive review focused on the application of machine learning (ML) in agriculture, aiming to explore its transformative potential in farming practices and efficiency enhancement. To understand the extent of research activity in this field, statistical data have been gathered, revealing a substantial growth trend in recent years. This indicates that it stands out as one of the most dynamic and vibrant research domains. By introducing the concept of ML and delving into the realm of smart agriculture, including Precision Agriculture, Smart Farming, Digital Agriculture, and Agriculture 4.0, we investigate how AI can optimize crop output and minimize environmental impact. We highlight the capacity of ML to analyze and classify agricultural data, providing examples of improved productivity and profitability on farms. Furthermore, we discuss prominent ML models and their unique features that have shown promising results in agricultural applications. Through a systematic review of the literature, this paper addresses the existing literature gap on AI in agriculture and offers valuable information to newcomers and researchers. By shedding light on unexplored areas within this emerging field, our objective is to facilitate a deeper understanding of the significant contributions and potential of AI in agriculture, ultimately benefiting the research community. | [
"['Aashu' 'Kanchan Rajwar' 'Millie Pant' 'Kusum Deep']"
] |
null | null | 2405.17466 | null | null | http://arxiv.org/pdf/2405.17466v1 | 2024-05-23T21:24:26Z | 2024-05-23T21:24:26Z | Distributed Continual Learning | This work studies the intersection of continual and federated learning, in which independent agents face unique tasks in their environments and incrementally develop and share knowledge. We introduce a mathematical framework capturing the essential aspects of distributed continual learning, including agent model and statistical heterogeneity, continual distribution shift, network topology, and communication constraints. Operating on the thesis that distributed continual learning enhances individual agent performance over single-agent learning, we identify three modes of information exchange: data instances, full model parameters, and modular (partial) model parameters. We develop algorithms for each sharing mode and conduct extensive empirical investigations across various datasets, topology structures, and communication limits. Our findings reveal three key insights: sharing parameters is more efficient than sharing data as tasks become more complex; modular parameter sharing yields the best performance while minimizing communication costs; and combining sharing modes can cumulatively improve performance. | [
"['Long Le' 'Marcel Hussing' 'Eric Eaton']"
] |
null | null | 2405.17467 | null | null | http://arxiv.org/pdf/2405.17467v1 | 2024-05-23T22:05:04Z | 2024-05-23T22:05:04Z | Sports center customer segmentation: a case study | Customer segmentation is a fundamental process to develop effective marketing strategies, personalize customer experience and boost their retention and loyalty. This problem has been widely addressed in the scientific literature, yet no definitive solution for every case is available. A specific case study characterized by several individualizing features is thoroughly analyzed and discussed in this paper. Because of the case properties a robust and innovative approach to both data handling and analytical processes is required. The study led to a sound proposal for customer segmentation. The highlights of the proposal include a convenient data partition to decompose the problem, an adaptive distance function definition and its optimization through genetic algorithms. These comprehensive data handling strategies not only enhance the dataset reliability for segmentation analysis but also support the operational efficiency and marketing strategies of sports centers, ultimately improving the customer experience. | [
"['Juan Soto' 'Ramón Carmenaty' 'Miguel Lastra' 'Juan M. Fernández-Luna'\n 'José M. Benítez']"
] |
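An adaptive distance function tuned by a genetic algorithm, as highlighted in this proposal, might look like the following simplified sketch: selection plus mutation only (crossover omitted), with a toy cluster-separation fitness standing in for the study's actual segmentation objective.

```python
import numpy as np

def weighted_dist(w, a, b):
    """Adaptive distance: per-feature weights are learned rather than fixed."""
    return np.sqrt(np.sum(w * (a - b) ** 2, axis=-1))

def fitness(w, X, labels):
    """Toy objective: between-cluster spread minus within-cluster spread."""
    within = between = 0.0
    for k in np.unique(labels):
        Xk = X[labels == k]
        within += weighted_dist(w, Xk, Xk.mean(0)).mean()
        between += weighted_dist(w, Xk.mean(0), X.mean(0))
    return between - within

def genetic_optimize(X, labels, pop=30, gens=50, rng=None):
    rng = rng or np.random.default_rng(0)
    P = rng.random((pop, X.shape[1]))
    for _ in range(gens):
        scores = np.array([fitness(w, X, labels) for w in P])
        elite = P[np.argsort(scores)[-pop // 2:]]                          # selection
        children = elite[rng.integers(0, len(elite), pop // 2)]            # reproduction
        children = children + 0.05 * rng.standard_normal(children.shape)  # mutation
        P = np.vstack([elite, np.clip(children, 0, None)])
    return P[np.argmax([fitness(w, X, labels) for w in P])]

X = np.vstack([np.random.default_rng(1).normal(m, 1, (40, 4)) for m in (0, 3)])
labels = np.repeat([0, 1], 40)
print(genetic_optimize(X, labels).round(2))   # learned per-feature weights
```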
null | null | 2405.17468 | null | null | http://arxiv.org/pdf/2405.17468v1 | 2024-05-24T02:04:10Z | 2024-05-24T02:04:10Z | Deep Activity Model: A Generative Approach for Human Mobility Pattern
Synthesis | Human mobility significantly impacts various aspects of society, including transportation, urban planning, and public health. The increasing availability of diverse mobility data and advancements in deep learning have revolutionized mobility modeling. Existing deep learning models, however, mainly study spatio-temporal patterns using trajectories and often fall short in capturing the underlying semantic interdependency among activities. Moreover, they are also constrained by the data source. These two factors thereby limit their realism and adaptability, respectively. Meanwhile, traditional activity-based models (ABMs) in transportation modeling rely on rigid assumptions and are costly and time-consuming to calibrate, making them difficult to adapt and scale to new regions, especially those regions with a limited amount of the required conventional travel data. To address these limitations, we develop a novel generative deep learning approach for human mobility modeling and synthesis, using ubiquitous and open-source data. Additionally, the model can be fine-tuned with local data, enabling adaptable and accurate representations of mobility patterns across different regions. The model is evaluated on a nationwide dataset of the United States, where it demonstrates superior performance in generating activity chains that closely follow ground truth distributions. Further tests using state- or city-specific datasets from California, Washington, and Mexico City confirm its transferability. This innovative approach offers substantial potential to advance mobility modeling research, especially in generating human activity chains as input for downstream activity-based mobility simulation models and providing enhanced tools for urban planners and policymakers. | [
"['Xishun Liao' 'Brian Yueshuai He' 'Qinhua Jiang' 'Chenchen Kuai'\n 'Jiaqi Ma']"
] |
null | null | 2405.17469 | null | null | http://arxiv.org/pdf/2405.17469v1 | 2024-05-24T02:59:52Z | 2024-05-24T02:59:52Z | A Dataset for Research on Water Sustainability | Freshwater scarcity is a global problem that requires collective efforts across all industry sectors. Nevertheless, a lack of access to operational water footprint data bars many applications from exploring optimization opportunities hidden within the temporal and spatial variations. To lower this barrier to research in water sustainability, we build a dataset of operational direct water usage in cooling systems and indirect water embedded in electricity generation. Our dataset consists of the hourly water efficiency of major U.S. cities and states from 2019 to 2023. We also offer cooling system models that capture the impact of weather on water efficiency. We present a preliminary analysis of our dataset and discuss three potential applications that can benefit from it. Our dataset is publicly available at the Open Science Framework (OSF). | [
"['Pranjol Sen Gupta' 'Md Rajib Hossen' 'Pengfei Li' 'Shaolei Ren'\n 'Mohammad A. Islam']"
] |
null | null | 2405.17470 | null | null | http://arxiv.org/pdf/2405.17470v1 | 2024-05-24T03:14:29Z | 2024-05-24T03:14:29Z | Athena: Efficient Block-Wise Post-Training Quantization for Large
Language Models Using Second-Order Matrix Derivative Information | Large Language Models (LLMs) have significantly advanced natural language processing tasks such as machine translation, text generation, and sentiment analysis. However, their large size, often consisting of billions of parameters, poses challenges for storage, computation, and deployment, particularly in resource-constrained environments like mobile devices and edge computing platforms. Effective compression and quantization techniques are crucial for addressing these issues, reducing memory footprint and computational requirements without significantly compromising performance. Traditional methods that uniformly map parameters to compressed spaces fail to account for the uneven distribution of parameters, leading to substantial accuracy loss. In this work, we propose Athena, a novel algorithm for efficient block-wise post-training quantization of LLMs. Athena leverages Second-Order Matrix Derivative Information to guide the quantization process using the curvature information of the loss landscape. By grouping parameters by columns or rows and iteratively optimizing the quantization process, Athena updates the model parameters and Hessian matrix to achieve significant compression while maintaining high accuracy. This makes Athena a practical solution for deploying LLMs in various settings. | [
"['Yanshu Wang' 'Wenyang He' 'Tong Yang']"
] |
null | null | 2405.17471 | null | null | http://arxiv.org/pdf/2405.17471v2 | 2024-05-29T01:36:56Z | 2024-05-24T03:23:37Z | Momentum-Based Federated Reinforcement Learning with Interaction and
Communication Efficiency | Federated Reinforcement Learning (FRL) has garnered increasing attention recently. However, due to the intrinsic spatio-temporal non-stationarity of data distributions, the current approaches typically suffer from high interaction and communication costs. In this paper, we introduce a new FRL algorithm, named $\texttt{MFPO}$, that utilizes momentum, importance sampling, and additional server-side adjustment to control the shift of stochastic policy gradients and enhance the efficiency of data utilization. We prove that by proper selection of momentum parameters and interaction frequency, $\texttt{MFPO}$ can achieve $\tilde{\mathcal{O}}(H N^{-1}\epsilon^{-3/2})$ and $\tilde{\mathcal{O}}(\epsilon^{-1})$ interaction and communication complexities ($N$ represents the number of agents), where the interaction complexity achieves linear speedup with the number of agents, and the communication complexity matches the best achievable by existing first-order FL algorithms. Extensive experiments corroborate the substantial performance gains of $\texttt{MFPO}$ over existing methods on a suite of complex and high-dimensional benchmarks. | [
"['Sheng Yue' 'Xingyuan Hua' 'Lili Chen' 'Ju Ren']"
] |
null | null | 2405.17472 | null | null | http://arxiv.org/pdf/2405.17472v1 | 2024-05-24T03:23:51Z | 2024-05-24T03:23:51Z | FreezeAsGuard: Mitigating Illegal Adaptation of Diffusion Models via
Selective Tensor Freezing | Text-to-image diffusion models can be fine-tuned in custom domains to adapt to specific user preferences, but such unconstrained adaptability has also been utilized for illegal purposes, such as forging public figures' portraits and duplicating copyrighted artworks. Most existing work focuses on detecting the illegally generated content, but cannot prevent or mitigate illegal adaptations of diffusion models. Other schemes of model unlearning and reinitialization, similarly, cannot prevent users from relearning the knowledge of illegal model adaptation with custom data. In this paper, we present FreezeAsGuard, a new technique that addresses these limitations and enables irreversible mitigation of illegal adaptations of diffusion models. The basic approach is that the model publisher selectively freezes tensors in pre-trained diffusion models that are critical to illegal model adaptations, to mitigate the fine-tuned model's representation power in illegal domains while minimizing the impact on legal model adaptations in other domains. Such tensor freezing can be enforced via the APIs provided by the model publisher for fine-tuning, and can motivate users' adoption due to its computational savings. Experiment results with datasets in multiple domains show that FreezeAsGuard provides stronger power in mitigating illegal model adaptations of generating fake public figures' portraits, while having minimal impact on model adaptation in other legal domains. The source code is available at: https://github.com/pittisl/FreezeAsGuard/ | [
"['Kai Huang' 'Wei Gao']"
] |
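Selective tensor freezing is mechanically simple in a framework like PyTorch; the paper's contribution lies in choosing which tensors are critical to illegal domains. A minimal sketch, in which the critical-tensor list is simply given rather than selected:

```python
import torch

def freeze_critical_tensors(model, critical_names):
    """Freeze the named tensors so fine-tuning cannot update them.
    Selecting *which* tensors are critical is the paper's contribution;
    here the list is assumed to be provided."""
    for name, param in model.named_parameters():
        param.requires_grad = name not in critical_names

model = torch.nn.Sequential(torch.nn.Linear(8, 8), torch.nn.Linear(8, 2))
freeze_critical_tensors(model, critical_names={"0.weight"})
# Only trainable parameters go to the optimizer, which also saves compute.
optim = torch.optim.AdamW((p for p in model.parameters() if p.requires_grad), lr=1e-4)
print([n for n, p in model.named_parameters() if not p.requires_grad])  # ['0.weight']
```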
null | null | 2405.17473 | null | null | http://arxiv.org/pdf/2405.17473v2 | 2024-06-20T05:23:57Z | 2024-05-24T03:24:29Z | Repeat-Aware Neighbor Sampling for Dynamic Graph Learning | Dynamic graph learning equips the edges with time attributes and allows multiple links between two nodes, which is a crucial technology for understanding evolving data scenarios like traffic prediction and recommendation systems. Existing works capture evolving patterns mainly by relying on the most recent neighbor sequences. However, we argue that whether two nodes will interact with each other in the future is highly correlated with the same interaction having happened in the past. Only considering the recent neighbors overlooks the phenomenon of repeat behavior and fails to accurately capture the temporal evolution of interactions. To fill this gap, this paper presents RepeatMixer, which considers evolving patterns of first- and high-order repeat behavior in the neighbor sampling strategy and temporal information learning. Firstly, we define the first-order repeat-aware nodes of the source node as the destination nodes that have interacted historically and extend this concept to high orders as nodes in the destination node's high-order neighbors. Then, we extract neighbors of the source node that interacted before the appearance of repeat-aware nodes with a sliding window strategy as its neighbor sequence. Next, we leverage both the first- and high-order neighbor sequences of source and destination nodes to learn temporal patterns of interactions via an MLP-based encoder. Furthermore, considering the varying temporal patterns across different orders, we introduce a time-aware aggregation mechanism that adaptively aggregates the temporal representations from different orders based on the significance of their interaction time sequences. Experimental results demonstrate the superiority of RepeatMixer over state-of-the-art models in link prediction tasks, underscoring the effectiveness of the proposed repeat-aware neighbor sampling strategy. | [
"['Tao Zou' 'Yuhao Mao' 'Junchen Ye' 'Bowen Du']"
] |
null | null | 2405.17474 | null | null | http://arxiv.org/pdf/2405.17474v2 | 2024-05-29T01:38:59Z | 2024-05-24T04:24:03Z | Federated Offline Policy Optimization with Dual Regularization | Federated Reinforcement Learning (FRL) has been deemed a promising solution for intelligent decision-making in the era of the Artificial Internet of Things. However, existing FRL approaches often entail repeated interactions with the environment during local updating, which can be prohibitively expensive or even infeasible in many real-world domains. To overcome this challenge, this paper proposes a novel offline federated policy optimization algorithm, named $\texttt{DRPO}$, which enables distributed agents to collaboratively learn a decision policy only from private and static data without further environmental interactions. $\texttt{DRPO}$ leverages dual regularization, incorporating both the local behavioral policy and the global aggregated policy, to judiciously cope with the intrinsic two-tier distributional shifts in offline FRL. Theoretical analysis characterizes the impact of the dual regularization on performance, demonstrating that by achieving the right balance thereof, $\texttt{DRPO}$ can effectively counteract distributional shifts and ensure strict policy improvement in each federated learning round. Extensive experiments validate the significant performance gains of $\texttt{DRPO}$ over baseline methods. | [
"['Sheng Yue' 'Zerui Qin' 'Xingyuan Hua' 'Yongheng Deng' 'Ju Ren']"
] |
null | null | 2405.17475 | null | null | http://arxiv.org/pdf/2405.17475v1 | 2024-05-24T04:45:14Z | 2024-05-24T04:45:14Z | How Culturally Aware are Vision-Language Models? | An image is often said to be worth a thousand words, and certain images can tell rich and insightful stories. Can these stories be told via image captioning? Images from folklore genres, such as mythology, folk dance, cultural signs, and symbols, are vital to every culture. Our research compares the performance of four popular vision-language models (GPT-4V, Gemini Pro Vision, LLaVA, and OpenFlamingo) in identifying culturally specific information in such images and creating accurate and culturally sensitive image captions. We also propose a new evaluation metric, Cultural Awareness Score (CAS), dedicated to measuring the degree of cultural awareness in image captions. We provide a dataset MOSAIC-1.5k, labeled with ground truth for images containing cultural background and context, as well as a labeled dataset with assigned Cultural Awareness Scores that can be used with unseen data. Creating culturally appropriate image captions is valuable for scientific research and can be beneficial for many practical applications. We envision that our work will promote a deeper integration of cultural sensitivity in AI applications worldwide. By making the dataset and Cultural Awareness Score available to the public, we aim to facilitate further research in this area, encouraging the development of more culturally aware AI systems that respect and celebrate global diversity. | [
"['Olena Burda-Lassen' 'Aman Chadha' 'Shashank Goswami' 'Vinija Jain']"
] |
null | null | 2405.17476 | null | null | http://arxiv.org/pdf/2405.17476v3 | 2024-05-30T17:15:09Z | 2024-05-24T04:56:39Z | How to Leverage Diverse Demonstrations in Offline Imitation Learning | Offline Imitation Learning (IL) with imperfect demonstrations has garnered increasing attention owing to the scarcity of expert data in many real-world domains. A fundamental problem in this scenario is how to extract positive behaviors from noisy data. In general, current approaches to the problem select data based on state-action similarity to given expert demonstrations, neglecting precious information in (potentially abundant) $\textit{diverse}$ state-actions that deviate from expert ones. In this paper, we introduce a simple yet effective data selection method that identifies positive behaviors based on their resultant states -- a more informative criterion enabling explicit utilization of dynamics information and effective extraction of both expert and beneficial diverse behaviors. Further, we devise a lightweight behavior cloning algorithm capable of leveraging the expert and selected data correctly. In the experiments, we evaluate our method on a suite of complex and high-dimensional offline IL benchmarks, including continuous-control and vision-based tasks. The results demonstrate that our method achieves state-of-the-art performance, outperforming existing methods on $\textbf{20/21}$ benchmarks, typically by $\textbf{2-5x}$, while maintaining a comparable runtime to Behavior Cloning ($\texttt{BC}$). | [
"['Sheng Yue' 'Jiani Liu' 'Xingyuan Hua' 'Ju Ren' 'Sen Lin' 'Junshan Zhang'\n 'Yaoxue Zhang']"
] |
null | null | 2405.17477 | null | null | http://arxiv.org/pdf/2405.17477v3 | 2024-05-30T17:11:46Z | 2024-05-24T04:57:25Z | OLLIE: Imitation Learning from Offline Pretraining to Online Finetuning | In this paper, we study offline-to-online Imitation Learning (IL) that pretrains an imitation policy from static demonstration data, followed by fast finetuning with minimal environmental interaction. We find the naïve combination of existing offline IL and online IL methods tends to behave poorly in this context, because the initial discriminator (often used in online IL) operates randomly and discordantly against the policy initialization, leading to misguided policy optimization and $\textit{unlearning}$ of pretraining knowledge. To overcome this challenge, we propose a principled offline-to-online IL method, named $\texttt{OLLIE}$, that simultaneously learns a near-expert policy initialization along with an $\textit{aligned discriminator initialization}$, which can be seamlessly integrated into online IL, achieving smooth and fast finetuning. Empirically, $\texttt{OLLIE}$ consistently and significantly outperforms the baseline methods in $\textbf{20}$ challenging tasks, from continuous control to vision-based domains, in terms of performance, demonstration efficiency, and convergence speed. This work may serve as a foundation for further exploration of pretraining and finetuning in the context of IL. | [
"['Sheng Yue' 'Xingyuan Hua' 'Ju Ren' 'Sen Lin' 'Junshan Zhang'\n 'Yaoxue Zhang']"
] |
null | null | 2405.17478 | null | null | http://arxiv.org/pdf/2405.17478v1 | 2024-05-24T06:01:09Z | 2024-05-24T06:01:09Z | ROSE: Register Assisted General Time Series Forecasting with Decomposed
Frequency Learning | With the increasing collection of time series data from various domains, there arises a strong demand for general time series forecasting models pre-trained on a large number of time-series datasets to support a variety of downstream prediction tasks. Enabling general time series forecasting faces two challenges: how to obtain unified representations from multi-domain time series data, and how to capture domain-specific features from time series data across various domains for adaptive transfer in downstream tasks. To address these challenges, we propose a Register Assisted General Time Series Forecasting Model with Decomposed Frequency Learning (ROSE), a novel pre-trained model for time series forecasting. ROSE employs Decomposed Frequency Learning for the pre-training task, which decomposes coupled semantic and periodic information in time series with frequency-based masking and reconstruction to obtain unified representations across domains. We also equip ROSE with a Time Series Register, which learns to generate a register codebook to capture domain-specific representations during pre-training and enhances domain-adaptive transfer by selecting related register tokens on downstream tasks. After pre-training on large-scale time series data, ROSE achieves state-of-the-art forecasting performance on 8 real-world benchmarks. Remarkably, even in few-shot scenarios, it demonstrates competitive or superior performance compared to existing methods trained with full data. | [
"['Yihang Wang' 'Yuying Qiu' 'Peng Chen' 'Kai Zhao' 'Yang Shu'\n 'Zhongwen Rao' 'Lujia Pan' 'Bin Yang' 'Chenjuan Guo']"
] |
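Frequency-based masking and reconstruction can be illustrated with an FFT round trip: mask a random subset of frequency components, invert the transform, and train a model to reconstruct the original series. The sketch below uses a uniform random mask, whereas ROSE's actual decomposition and masking strategy are more elaborate.

```python
import torch

def frequency_mask(series, keep_ratio=0.7, rng=None):
    """Mask random frequency components and reconstruct the series, yielding
    (input, target) pairs for a masked-reconstruction pretraining task.
    A sketch of the idea only, not ROSE's exact masking scheme."""
    rng = rng or torch.Generator().manual_seed(0)
    spec = torch.fft.rfft(series, dim=-1)
    mask = torch.rand(spec.shape, generator=rng) < keep_ratio
    masked = torch.fft.irfft(spec * mask, n=series.size(-1), dim=-1)
    return masked, series   # model input, reconstruction target

t = torch.linspace(0, 8 * torch.pi, 256)
series = torch.sin(t) + 0.3 * torch.sin(5 * t)   # two coupled periodic components
x, y = frequency_mask(series.unsqueeze(0))
print(x.shape, y.shape)  # torch.Size([1, 256]) torch.Size([1, 256])
```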
null | null | 2405.17479 | null | null | http://arxiv.org/pdf/2405.17479v1 | 2024-05-24T06:57:23Z | 2024-05-24T06:57:23Z | A rationale from frequency perspective for grokking in training neural
network | Grokking is the phenomenon where neural networks (NNs) initially fit the training data and only later generalize to the test data during training. In this paper, we empirically provide a frequency perspective to explain the emergence of this phenomenon in NNs. The core insight is that the networks initially learn the less salient frequency components present in the test data. We observe this phenomenon across both synthetic and real datasets, offering a novel viewpoint for elucidating the grokking phenomenon by characterizing it through the lens of frequency dynamics during the training process. Our empirical frequency-based analysis sheds new light on understanding the grokking phenomenon and its underlying mechanisms. | [
"['Zhangchen Zhou' 'Yaoyu Zhang' 'Zhi-Qin John Xu']"
] |
null | null | 2405.17481 | null | null | http://arxiv.org/pdf/2405.17481v1 | 2024-05-24T10:51:51Z | 2024-05-24T10:51:51Z | Improving Simulation Regression Efficiency using a Machine
Learning-based Method in Design Verification | Verification throughput is becoming a major bottleneck, since the complexity and size of SoC designs are still ever increasing. Simply adding more CPU cores and running more tests in parallel will not scale anymore. This paper discusses various methods of improving verification throughput: ranking and the new machine learning (ML) based technology introduced by Cadence, i.e., Xcelium ML. Both methods aim at getting comparable coverage in less CPU time by applying more efficient stimulus. Ranking selects the specific seeds that simply turned out to produce the largest coverage in previous simulations, while Xcelium ML generates optimized patterns as a result of finding correlations between randomization points and the achieved coverage of previous regressions. Quantified results as well as the pros and cons of each approach are discussed in this paper, using three actual industry projects as examples. Both the Xcelium ML and ranking methods consistently gave comparable compression and speedup factors of around 3. However, the optimized ML-based regressions simulated new random scenarios, occasionally producing a coverage regain of more than 100%. Finally, a methodology is proposed to use Xcelium ML efficiently throughout product development. | [
"['Deepak Narayan Gadde' 'Sebastian Simon' 'Djones Lettnin' 'Thomas Ziller']"
] |
null | null | 2405.17483 | null | null | http://arxiv.org/pdf/2405.17483v1 | 2024-05-24T13:36:44Z | 2024-05-24T13:36:44Z | Concept-based Explainable Malignancy Scoring on Pulmonary Nodules in CT
Images | To increase the transparency of modern computer-aided diagnosis (CAD) systems for assessing the malignancy of lung nodules, an interpretable model based on generalized additive models and concept-based learning is proposed. The model detects a set of clinically significant attributes in addition to the final malignancy regression score and learns the association between the lung nodule attributes and a final diagnosis decision, as well as their contributions to the decision. The proposed concept-based learning framework provides human-readable explanations in terms of different concepts (numerical and categorical), their values, and their contribution to the final prediction. Numerical experiments with the LIDC-IDRI dataset demonstrate that the diagnosis results obtained using the proposed model, which explicitly explores internal relationships, are in line with similar patterns observed in clinical practice. Additionally, the proposed model shows competitive classification and nodule-attribute scoring performance, highlighting its potential for effective decision-making in lung nodule diagnosis. | [
"['Rinat I. Dumaev' 'Sergei A. Molodyakov' 'Lev V. Utkin']"
] |
null | null | 2405.17484 | null | null | http://arxiv.org/pdf/2405.17484v1 | 2024-05-24T16:18:16Z | 2024-05-24T16:18:16Z | Bridging The Gap between Low-rank and Orthogonal Adaptation via
Householder Reflection Adaptation | While following different technical routes, both low-rank and orthogonal adaptation techniques can efficiently adapt large-scale pre-training models in specific tasks or domains based on a small piece of trainable parameters. In this study, we bridge the gap between these two techniques, proposing a simple but effective adaptation method based on Householder reflections. Given a pre-trained model, our method fine-tunes its layers by multiplying each frozen weight matrix with an orthogonal matrix constructed by a chain of learnable Householder reflections (HRs). This HR-based orthogonal fine-tuning is equivalent to an adaptive low-rank adaptation. Moreover, we show that the orthogonality of the reflection planes corresponding to the HRs impacts the model capacity and regularity. The analysis motivates us to regularize the orthogonality of the HRs, leading to different implementations of the proposed Householder reflection adaptation (HRA) method. Compared with state-of-the-art methods, HRA achieves superior performance with fewer learnable parameters when adapting large language models and conditional image generators. The code is available at https://github.com/DaShenZi721/HRA | [
"['Shen Yuan' 'Haotian Liu' 'Hongteng Xu']"
] |
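A chain of Householder reflections is orthogonal by construction, which is the core of the method. A minimal sketch follows, with the number of reflections playing the role of the adaptation budget; the regularization of reflection-plane orthogonality discussed in the abstract is omitted.

```python
import torch

def householder_chain(vectors):
    """Product of Householder reflections H_i = I - 2 u_i u_i^T / ||u_i||^2.
    The chain is orthogonal by construction; `vectors` are the learnable parameters."""
    d = vectors.size(1)
    Q = torch.eye(d)
    for u in vectors:
        u = u / u.norm()
        Q = Q - 2.0 * torch.outer(Q @ u, u)   # Q <- Q (I - 2 u u^T)
    return Q

d, r = 16, 4                                # r reflections ~ adaptation budget
W = torch.randn(d, d)                       # frozen pretrained weight matrix
u = torch.randn(r, d, requires_grad=True)   # trainable Householder vectors
W_adapted = W @ householder_chain(u)        # orthogonal fine-tuning of the layer

Q = householder_chain(u)
print(torch.allclose(Q @ Q.T, torch.eye(d), atol=1e-5))  # True: Q is orthogonal
```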
null | null | 2405.17485 | null | null | http://arxiv.org/pdf/2405.17485v1 | 2024-05-24T18:43:00Z | 2024-05-24T18:43:00Z | $\textit{Comet:}$ A $\underline{Com}$munication-$\underline{e}$fficient
and Performant Approxima$\underline{t}$ion for Private Transformer Inference | The prevalent use of Transformer-like models, exemplified by ChatGPT in modern language processing applications, underscores the critical need for enabling private inference essential for many cloud-based services reliant on such models. However, current privacy-preserving frameworks impose a significant communication burden, especially for non-linear computation in Transformer models. In this paper, we introduce a novel plug-in method, Comet, to effectively reduce the communication cost without compromising the inference performance. We further introduce an efficient approximation method to eliminate the heavy communication involved in finding a good initial approximation. We evaluate our Comet on BERT and RoBERTa models with the GLUE benchmark datasets, showing up to 3.9$\times$ less communication and 3.5$\times$ speedups while keeping competitive model performance compared to the prior art. | [
"['Xiangrui Xu' 'Qiao Zhang' 'Rui Ning' 'Chunsheng Xin' 'Hongyi Wu']"
] |
null | null | 2405.17486 | null | null | http://arxiv.org/pdf/2405.17486v1 | 2024-05-24T18:43:05Z | 2024-05-24T18:43:05Z | eQMARL: Entangled Quantum Multi-Agent Reinforcement Learning for
Distributed Cooperation over Quantum Channels | Collaboration is a key challenge in distributed multi-agent reinforcement learning (MARL) environments. Learning frameworks for these decentralized systems must weigh the benefits of explicit player coordination against the communication overhead and computational cost of sharing local observations and environmental data. Quantum computing has sparked a potential synergy between quantum entanglement and cooperation in multi-agent environments, which could enable more efficient distributed collaboration with minimal information sharing. This relationship is largely unexplored, however, as current state-of-the-art quantum MARL (QMARL) implementations rely on classical information sharing rather than entanglement over a quantum channel as a coordination medium. In contrast, in this paper, a novel framework dubbed entangled QMARL (eQMARL) is proposed. The proposed eQMARL is a distributed actor-critic framework that facilitates cooperation over a quantum channel and eliminates local observation sharing via a quantum entangled split critic. Introducing a quantum critic uniquely spread across the agents allows coupling of local observation encoders through entangled input qubits over a quantum channel, which requires no explicit sharing of local observations and reduces classical communication overhead. Further, agent policies are tuned through joint observation-value function estimation via joint quantum measurements, thereby reducing the centralized computational burden. Experimental results show that eQMARL with $\Psi^{+}$ entanglement converges to a cooperative strategy up to $17.8\%$ faster and with a higher overall score compared to split classical and fully centralized classical and quantum baselines. The results also show that eQMARL achieves this performance with a constant factor of $25\times$ fewer centralized parameters compared to the split classical baseline. | [
"['Alexander DeRieux' 'Walid Saad']"
] |
null | null | 2405.17488 | null | null | http://arxiv.org/pdf/2405.17488v1 | 2024-05-24T20:27:45Z | 2024-05-24T20:27:45Z | Pattern-Based Time-Series Risk Scoring for Anomaly Detection and Alert
Filtering -- A Predictive Maintenance Case Study | Fault detection is a key challenge in the management of complex systems. In the context of SparkCognition's efforts towards predictive maintenance in large scale industrial systems, this problem is often framed in terms of anomaly detection: identifying patterns of behavior in the data which deviate from normal. Patterns of normal behavior are not captured simply in the coarse statistics of measured signals. Rather, the multivariate sequential pattern itself can be indicative of normal vs. abnormal behavior. For this reason, normal behavior modeling that relies on snapshots of the data without taking into account temporal relationships as they evolve would be lacking. However, common strategies for dealing with temporal dependence, such as Recurrent Neural Networks or attention mechanisms, are often computationally expensive and difficult to train. In this paper, we propose a fast and efficient approach to anomaly detection and alert filtering based on sequential pattern similarities. In our empirical analysis section, we show how this approach can be leveraged for a variety of purposes involving anomaly detection on a large scale real-world industrial system. Subsequently, we test our approach on a publicly-available dataset in order to establish its general applicability and robustness compared to a state-of-the-art baseline. We also demonstrate an efficient way of optimizing the framework based on an alert recall objective function. | [
"['Elad Liebman']"
] |
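A sequential-pattern-similarity risk score can be prototyped with sliding windows and nearest-neighbor distances. In this sketch the reference library of normal patterns is taken from the start of the signal; that assumption, the z-normalization, and the Euclidean metric are illustrative choices rather than the paper's exact formulation.

```python
import numpy as np

def pattern_risk_scores(signal, window=32, stride=8, n_ref=100):
    """Score each window by its distance to the nearest 'normal' reference pattern.
    Reference windows come from the start of the signal, assumed normal; in
    practice they would come from vetted healthy-operation data."""
    windows = np.lib.stride_tricks.sliding_window_view(signal, window)[::stride]
    # z-normalize each window so shape, not level, drives the similarity
    windows = (windows - windows.mean(1, keepdims=True)) / (windows.std(1, keepdims=True) + 1e-8)
    ref = windows[:n_ref]                                  # library of normal patterns
    d = np.linalg.norm(windows[:, None, :] - ref[None, :, :], axis=-1)
    return d.min(axis=1)                                   # risk = nearest-pattern distance

t = np.linspace(0, 100, 4000)
signal = np.sin(t) + 0.05 * np.random.default_rng(1).standard_normal(t.size)
signal[3000:3200] = np.sin(5 * t[3000:3200])               # injected frequency anomaly
scores = pattern_risk_scores(signal)
print(int(np.argmax(scores)))                              # window index near ~375 (the anomaly)
```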
null | null | 2405.17489 | null | null | http://arxiv.org/pdf/2405.17489v1 | 2024-05-25T03:26:33Z | 2024-05-25T03:26:33Z | On the Inflation of KNN-Shapley Value | Shapley value-based data valuation methods, originating from cooperative game theory, quantify the usefulness of each individual sample by considering its contribution to all possible training subsets. Despite their extensive applications, these methods encounter the challenge of value inflation - while samples with negative Shapley values are detrimental, some with positive values can also be harmful. This challenge prompts two fundamental questions: the suitability of zero as a threshold for distinguishing detrimental from beneficial samples and the determination of an appropriate threshold. To address these questions, we focus on KNN-Shapley and propose Calibrated KNN-Shapley (CKNN-Shapley), which calibrates zero as the threshold to distinguish detrimental samples from beneficial ones by mitigating the negative effects of small-sized training subsets. Through extensive experiments, we demonstrate the effectiveness of CKNN-Shapley in alleviating data valuation inflation, detecting detrimental samples, and assessing data quality. We also extend our approach beyond conventional classification settings, applying it to diverse and practical scenarios such as learning with mislabeled data, online learning with stream data, and active learning for label annotation. | [
"['Ziao Yang' 'Han Yue' 'Jian Chen' 'Hongfu Liu']"
] |
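For reference, the exact KNN-Shapley recursion (in the style of Jia et al.) for a single test point is compact; the calibration step that distinguishes CKNN-Shapley is not reproduced here, so this sketch shows only the base quantity being calibrated.

```python
import numpy as np

def knn_shapley(X_train, y_train, x_test, y_test, K=5):
    """Exact KNN-Shapley values for one test point via the standard recursion."""
    n = len(X_train)
    order = np.argsort(np.linalg.norm(X_train - x_test, axis=1))  # nearest first
    match = (y_train[order] == y_test).astype(float)
    s = np.zeros(n)
    s[n - 1] = match[n - 1] / n
    for i in range(n - 2, -1, -1):
        s[i] = s[i + 1] + (match[i] - match[i + 1]) / K * min(K, i + 1) / (i + 1)
    values = np.zeros(n)
    values[order] = s          # map back from distance order to original indices
    return values

rng = np.random.default_rng(0)
X = rng.normal(0, 1, (50, 2))
y = (X[:, 0] > 0).astype(int)
v = knn_shapley(X, y, x_test=np.array([1.0, 0.0]), y_test=1)
print(v.min().round(3), v.max().round(3))  # note: some positive values may still be harmful
```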
null | null | 2405.17490 | null | null | http://arxiv.org/pdf/2405.17490v1 | 2024-05-25T03:43:36Z | 2024-05-25T03:43:36Z | Revisit, Extend, and Enhance Hessian-Free Influence Functions | Influence functions serve as crucial tools for assessing sample influence in model interpretation, subset training set selection, noisy label detection, and more. By employing the first-order Taylor extension, influence functions can estimate sample influence without the need for expensive model retraining. However, applying influence functions directly to deep models presents challenges, primarily due to the non-convex nature of the loss function and the large size of model parameters. This difficulty not only makes computing the inverse of the Hessian matrix costly but also renders it non-existent in some cases. Various approaches, including matrix decomposition, have been explored to expedite and approximate the inversion of the Hessian matrix, with the aim of making influence functions applicable to deep models. In this paper, we revisit a specific, albeit naive, yet effective approximation method known as TracIn. This method substitutes the inverse of the Hessian matrix with an identity matrix. We provide deeper insights into why this simple approximation method performs well. Furthermore, we extend its applications beyond measuring model utility to include considerations of fairness and robustness. Finally, we enhance TracIn through an ensemble strategy. To validate its effectiveness, we conduct experiments on synthetic data and extensive evaluations on noisy label detection, sample selection for large language model fine-tuning, and defense against adversarial attacks. | [
"['Ziao Yang' 'Han Yue' 'Jian Chen' 'Hongfu Liu']"
] |
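The TracIn approximation discussed above reduces influence estimation to gradient dot products accumulated over training checkpoints, with the identity matrix standing in for the inverse Hessian. A minimal sketch, where the checkpoints and learning rates are stand-ins:

```python
import torch

def loss_fn(model, x, y):
    return torch.nn.functional.cross_entropy(model(x), y)

def tracin_influence(model_checkpoints, loss_fn, z_train, z_test, lrs=None):
    """TracIn influence of one training example on one test example:
    sum over checkpoints of lr_t * <grad loss(train), grad loss(test)>."""
    lrs = lrs or [1.0] * len(model_checkpoints)
    score = 0.0
    for model, lr in zip(model_checkpoints, lrs):
        g_tr = torch.autograd.grad(loss_fn(model, *z_train), list(model.parameters()))
        g_te = torch.autograd.grad(loss_fn(model, *z_test), list(model.parameters()))
        score += lr * sum((a * b).sum() for a, b in zip(g_tr, g_te)).item()
    return score

ckpts = [torch.nn.Linear(4, 3) for _ in range(3)]   # stand-ins for training checkpoints
z_tr = (torch.randn(1, 4), torch.tensor([0]))
z_te = (torch.randn(1, 4), torch.tensor([0]))
print(tracin_influence(ckpts, loss_fn, z_tr, z_te))
```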
null | null | 2405.17493 | null | null | http://arxiv.org/pdf/2405.17493v1 | 2024-05-25T07:17:47Z | 2024-05-25T07:17:47Z | Overcoming Negative Transfer by Online Selection: Distant Domain
Adaptation for Fault Diagnosis | Unsupervised domain adaptation (UDA) has achieved remarkable success in fault diagnosis, bringing significant benefits to diverse industrial applications. While most UDA methods focus on cross-working condition scenarios where the source and target domains are notably similar, real-world applications often grapple with severe domain shifts. We coin the term `distant domain adaptation problem' to describe the challenge of adapting from a labeled source domain to a significantly disparate unlabeled target domain. This problem exhibits the risk of negative transfer, where extraneous knowledge from the source domain adversely affects the target domain performance. Unfortunately, conventional UDA methods often falter in mitigating this negative transfer, leading to suboptimal performance. In response to this challenge, we propose a novel Online Selective Adversarial Alignment (OSAA) approach. Central to OSAA is its ability to dynamically identify and exclude distant source samples via an online gradient masking approach, focusing primarily on source samples that closely resemble the target samples. Furthermore, recognizing the inherent complexities in bridging the source and target domains, we construct an intermediate domain to act as a transitional domain and ease the adaptation process. Lastly, we develop a class-conditional adversarial adaptation that learns domain-invariant representations while accounting for potential label distribution disparities between the domains. Through detailed experiments and ablation studies on two real-world datasets, we validate the superior performance of the OSAA method over state-of-the-art methods, underscoring its significant utility in practical scenarios with severe domain shifts. | [
"['Ziyan Wang' 'Mohamed Ragab' 'Wenmian Yang' 'Min Wu' 'Sinno Jialin Pan'\n 'Jie Zhang' 'Zhenghua Chen']"
] |