categories (string) | doi (string) | id (string) | year (float64) | venue (string) | link (string) | updated (string) | published (string) | title (string) | abstract (string) | authors (sequence) |
---|---|---|---|---|---|---|---|---|---|---|
null | null | 2406.17470 | null | null | http://arxiv.org/pdf/2406.17470v1 | 2024-06-25T11:15:53Z | 2024-06-25T11:15:53Z | Dynamic Scheduling for Vehicle-to-Vehicle Communications Enhanced
Federated Learning | Leveraging the computing and sensing capabilities of vehicles, vehicular federated learning (VFL) has been applied to edge training for connected vehicles. The dynamic and interconnected nature of vehicular networks presents unique opportunities to harness direct vehicle-to-vehicle (V2V) communications, enhancing VFL training efficiency. In this paper, we formulate a stochastic optimization problem to optimize the VFL training performance, considering the energy constraints and mobility of vehicles, and propose a V2V-enhanced dynamic scheduling (VEDS) algorithm to solve it. The model aggregation requirements of VFL and the limited transmission time due to mobility result in a stepwise objective function, which presents challenges in solving the problem. We thus propose a derivative-based drift-plus-penalty method to convert the long-term stochastic optimization problem to an online mixed integer nonlinear programming (MINLP) problem, and provide a theoretical analysis to bound the performance gap between the online solution and the offline optimal solution. Further analysis of the scheduling priority reduces the original problem into a set of convex optimization problems, which are efficiently solved using the interior-point method. Experimental results demonstrate that compared with the state-of-the-art benchmarks, the proposed algorithm enhances the image classification accuracy on the CIFAR-10 dataset by 3.18% and reduces the average displacement errors on the Argoverse trajectory prediction dataset by 10.21%. | [
"['Jintao Yan' 'Tan Chen' 'Yuxuan Sun' 'Zhaojun Nan' 'Sheng Zhou'\n 'Zhisheng Niu']"
] |
null | null | 2406.17475 | null | null | http://arxiv.org/abs/2406.17475v1 | 2024-06-25T11:41:50Z | 2024-06-25T11:41:50Z | Performative Debias with Fair-exposure Optimization Driven by Strategic
Agents in Recommender Systems | Data bias, e.g., popularity bias, impairs the dynamics of two-sided markets within recommender systems. This overshadows the less visible but potentially intriguing long-tail items that could capture user interest. Despite the abundance of research surrounding this issue, it still poses challenges and remains a hot topic in academic circles. Along this line, in this paper, we developed a re-ranking approach in dynamic settings with fair-exposure optimization driven by strategic agents. Designed for the producer side, the execution of agents assumes content creators can modify item features based on strategic incentives to maximize their exposure. This iterative process entails an end-to-end optimization, employing differentiable ranking operators that simultaneously target accuracy and fairness. Joint objectives ensure the performance of recommendations while enhancing the visibility of tail items. We also leveraged the performative nature of predictions to illustrate how strategic learning influences content creators to shift towards fairness efficiently, thereby incentivizing features of tail items. Through comprehensive experiments on both public and industrial datasets, we have substantiated the effectiveness and dominance of the proposed method, especially in unveiling the potential of tail items. | [
"['Zhichen Xiang' 'Hongke Zhao' 'Chuang Zhao' 'Ming He' 'Jianping Fan']"
] |
null | null | 2406.17477 | null | null | http://arxiv.org/pdf/2406.17477v1 | 2024-06-25T11:49:33Z | 2024-06-25T11:49:33Z | Towards Federated Low-Rank Adaptation with Rank-Heterogeneous
Communication | Low-rank adaptation (LoRA) is an attractive alternative to adapting full weights for the federated fine-tuning of large pretrained models, which can significantly reduce the memory and communication burden. In principle, federated LoRA can provide an effective means to allocate different resources to each client by tuning ranks for each client, which can be useful in achieving a better communication-performance tradeoff. We find, however, that the empirical performance of LoRA is highly unstable with respect to such rank-heterogeneity, severely limiting its applicability to scenarios where it is desirable or even required to allocate nonuniform communication bandwidth to each client due to constrained total bandwidth. Our investigation reveals that the root cause of this instability is the zero-padding-based aggregation strategy adopted in conventional federated LoRA frameworks, which causes the information from high-rank clients to get diluted during the aggregation process. To address this issue, we propose a new replication-based padding strategy, which allows us to better leverage the information from clients with high-quality datasets. This method ensures that valuable information from high-rank clients is retained during the aggregation process, accelerating the convergence speed and enhancing the overall prediction quality of the global model. | [
"['Yuji Byun' 'Jaeho Lee']"
] |
null | null | 2406.17490 | null | null | http://arxiv.org/pdf/2406.17490v1 | 2024-06-25T12:17:44Z | 2024-06-25T12:17:44Z | BricksRL: A Platform for Democratizing Robotics and Reinforcement
Learning Research and Education with LEGO | We present BricksRL, a platform designed to democratize access to robotics for reinforcement learning research and education. BricksRL facilitates the creation, design, and training of custom LEGO robots in the real world by interfacing them with the TorchRL library for reinforcement learning agents. The integration of TorchRL with the LEGO hubs, via Bluetooth bidirectional communication, enables state-of-the-art reinforcement learning training on GPUs for a wide variety of LEGO builds. This offers a flexible and cost-efficient approach for scaling and also provides a robust infrastructure for robot-environment-algorithm communication. We present various experiments across tasks and robot configurations, providing built plans and training results. Furthermore, we demonstrate that inexpensive LEGO robots can be trained end-to-end in the real world to achieve simple tasks, with training times typically under 120 minutes on a normal laptop. Moreover, we show how users can extend the capabilities, exemplified by the successful integration of non-LEGO sensors. By enhancing accessibility to both robotics and reinforcement learning, BricksRL establishes a strong foundation for democratized robotic learning in research and educational settings. | [
"['Sebastian Dittert' 'Vincent Moens' 'Gianni De Fabritiis']"
] |
null | null | 2406.17503 | null | null | http://arxiv.org/pdf/2406.17503v2 | 2024-07-15T06:41:13Z | 2024-06-25T12:43:33Z | WAVE: Weight Template for Adaptive Initialization of Variable-sized
Models | The expansion of model parameters underscores the significance of pre-trained models; however, the constraints encountered during model deployment necessitate models of variable sizes. Consequently, the traditional pre-training and fine-tuning paradigm fails to address the initialization problem when target models are incompatible with pre-trained models. We tackle this issue from a multitasking perspective and introduce \textbf{WAVE}, which incorporates a set of shared \textbf{W}eight templates for \textbf{A}daptive initialization of \textbf{V}ariable-siz\textbf{E}d Models. During initialization, target models will initialize the corresponding weight scalers tailored to their model size, which are sufficient to learn the connection rules of weight templates based on the Kronecker product from a limited amount of data. For the construction of the weight templates, WAVE utilizes the \textit{Learngene} framework, which structurally condenses common knowledge from ancestry models into weight templates as the learngenes through knowledge distillation. This process allows the integration of pre-trained models' knowledge into structured knowledge according to the rules of weight templates. We provide a comprehensive benchmark for the learngenes, and extensive experiments demonstrate the efficacy of WAVE. The results show that WAVE achieves state-of-the-art performance when initializing models with various depths and widths, and even outperforms the direct pre-training of $n$ entire models, particularly for smaller models, saving approximately $n\times$ and $5\times$ in computational and storage resources, respectively. WAVE simultaneously achieves the most efficient knowledge transfer across a series of datasets, specifically achieving an average improvement of 1.8% and 1.2% on 7 downstream datasets. | [
"['Fu Feng' 'Yucheng Xie' 'Jing Wang' 'Xin Geng']"
] |
null | null | 2406.17517 | null | null | http://arxiv.org/pdf/2406.17517v1 | 2024-06-25T12:54:35Z | 2024-06-25T12:54:35Z | Preserving Node Distinctness in Graph Autoencoders via Similarity
Distillation | Graph autoencoders (GAEs), as a kind of generative self-supervised learning approach, have shown great potential in recent years. GAEs typically rely on distance-based criteria, such as mean-square-error (MSE), to reconstruct the input graph. However, relying solely on a single reconstruction criterion may lead to a loss of distinctiveness in the reconstructed graph, causing nodes to collapse into similar representations and resulting in sub-optimal performance. To address this issue, we have developed a simple yet effective strategy to preserve the necessary distinctness in the reconstructed graph. Inspired by the knowledge distillation technique, we found that the dual encoder-decoder architecture of GAEs can be viewed as a teacher-student relationship. Therefore, we propose transferring the knowledge of distinctness from the raw graph to the reconstructed graph, achieved through a simple KL constraint. Specifically, we compute pairwise node similarity scores in the raw graph and reconstructed graph. During the training process, the KL constraint is optimized alongside the reconstruction criterion. We conducted extensive experiments across three types of graph tasks, demonstrating the effectiveness and generality of our strategy. This indicates that the proposed approach can be employed as a plug-and-play method to avoid vague reconstructions and enhance overall performance. | [
"['Ge Chen' 'Yulan Hu' 'Sheng Ouyang' 'Yong Liu' 'Cuicui Luo']"
] |
null | null | 2406.17523 | null | null | http://arxiv.org/pdf/2406.17523v2 | 2024-07-02T16:33:26Z | 2024-06-25T13:06:09Z | On the consistency of hyper-parameter selection in value-based deep
reinforcement learning | Deep reinforcement learning (deep RL) has achieved tremendous success on various domains through a combination of algorithmic design and careful selection of hyper-parameters. Algorithmic improvements are often the result of iterative enhancements built upon prior approaches, while hyper-parameter choices are typically inherited from previous methods or fine-tuned specifically for the proposed technique. Despite their crucial impact on performance, hyper-parameter choices are frequently overshadowed by algorithmic advancements. This paper conducts an extensive empirical study focusing on the reliability of hyper-parameter selection for value-based deep reinforcement learning agents, including the introduction of a new score to quantify the consistency and reliability of various hyper-parameters. Our findings not only help establish which hyper-parameters are most critical to tune, but also help clarify which tunings remain consistent across different training regimes. | [
"['Johan Obando-Ceron' 'João G. M. Araújo' 'Aaron Courville'\n 'Pablo Samuel Castro']"
] |
null | null | 2406.17536 | null | null | http://arxiv.org/pdf/2406.17536v2 | 2024-06-26T09:52:47Z | 2024-06-25T13:20:39Z | MedMNIST-C: Comprehensive benchmark and improved classifier robustness
by simulating realistic image corruptions | The integration of neural-network-based systems into clinical practice is limited by challenges related to domain generalization and robustness. The computer vision community established benchmarks such as ImageNet-C as a fundamental prerequisite to measure progress towards those challenges. Similar datasets are largely absent in the medical imaging community, which lacks a comprehensive benchmark that spans across imaging modalities and applications. To address this gap, we create and open-source MedMNIST-C, a benchmark dataset based on the MedMNIST+ collection covering 12 datasets and 9 imaging modalities. We simulate task- and modality-specific image corruptions of varying severity to comprehensively evaluate the robustness of established algorithms against real-world artifacts and distribution shifts. We further provide quantitative evidence that our simple-to-use artificial corruptions allow for highly performant, lightweight data augmentation to enhance model robustness. Unlike traditional, generic augmentation strategies, our approach leverages domain knowledge, exhibiting significantly higher robustness when compared to widely adopted methods. By introducing MedMNIST-C and open-sourcing the corresponding library allowing for targeted data augmentations, we contribute to the development of increasingly robust methods tailored to the challenges of medical imaging. The code is available at https://github.com/francescodisalvo05/medmnistc-api. | [
"['Francesco Di Salvo' 'Sebastian Doerrich' 'Christian Ledig']"
] |
null | null | 2406.17537 | null | null | http://arxiv.org/pdf/2406.17537v1 | 2024-06-25T13:21:01Z | 2024-06-25T13:21:01Z | SincVAE: a New Approach to Improve Anomaly Detection on EEG Data Using
SincNet and Variational Autoencoder | Over the past few decades, electroencephalography (EEG) monitoring has become a pivotal tool for diagnosing neurological disorders, particularly for detecting seizures. Epilepsy, one of the most prevalent neurological diseases worldwide, affects approximately 1% of the population. These patients face significant risks, underscoring the need for reliable, continuous seizure monitoring in daily life. Most of the techniques discussed in the literature rely on supervised Machine Learning (ML) methods. However, the challenge of accurately labeling variations in epileptic EEG waveforms complicates the use of these approaches. Additionally, the rarity of ictal events introduces a high imbalance within the data, which could lead to poor prediction performance in supervised learning approaches. Instead, a semi-supervised approach allows training the model only on data not containing seizures, thus avoiding the issues related to data imbalance. This work proposes a semi-supervised approach for detecting epileptic seizures from EEG data, utilizing a novel Deep Learning-based method called SincVAE. This proposal incorporates the learning of an ad-hoc array of bandpass filters as the first layer of a Variational Autoencoder (VAE), potentially eliminating the preprocessing stage where informative band frequencies are identified and isolated. Results indicate that SincVAE improves seizure detection in EEG data and is capable of identifying early seizures during the preictal stage as well as monitoring patients throughout the postictal stage. | [
"['Andrea Pollastro' 'Francesco Isgrò' 'Roberto Prevete']"
] |
null | null | 2406.17542 | null | null | http://arxiv.org/pdf/2406.17542v2 | 2024-06-26T07:44:42Z | 2024-06-25T13:29:14Z | CDQuant: Accurate Post-training Weight Quantization of Large Pre-trained
Models using Greedy Coordinate Descent | Large language models (LLMs) have recently demonstrated remarkable performance across diverse language tasks. But their deployment is often constrained by their substantial computational and storage requirements. Quantization has emerged as a key technique for addressing this challenge, enabling the compression of large models with minimal impact on performance. The recent GPTQ algorithm, a post-training quantization (PTQ) method, has proven highly effective for compressing LLMs, sparking a wave of research that leverages GPTQ as a core component. Recognizing the pivotal role of GPTQ in the PTQ landscape, we introduce CDQuant, a simple and scalable alternative to GPTQ with improved performance. CDQuant uses coordinate descent to minimize the layer-wise reconstruction loss to achieve high-quality quantized weights. Our algorithm is easy to implement and scales efficiently to models with hundreds of billions of parameters. Through extensive evaluation on the PaLM2 model family, we demonstrate that CDQuant consistently outperforms GPTQ across diverse model sizes and quantization levels. In particular, for INT2 quantization of PaLM2-Otter, CDQuant achieves a 10% reduction in perplexity compared to GPTQ. | [
"['Pranav Ajit Nair' 'Arun Sai Suggala']"
] |
null | null | 2406.17556 | null | null | http://arxiv.org/pdf/2406.17556v1 | 2024-06-25T13:49:56Z | 2024-06-25T13:49:56Z | Modularity Based Community Detection in Hypergraphs | In this paper, we propose a scalable community detection algorithm using hypergraph modularity function, h-Louvain. It is an adaptation of the classical Louvain algorithm in the context of hypergraphs. We observe that a direct application of the Louvain algorithm to optimize the hypergraph modularity function often fails to find meaningful communities. We propose a solution to this issue by adjusting the initial stage of the algorithm via carefully and dynamically tuned linear combination of the graph modularity function of the corresponding two-section graph and the desired hypergraph modularity function. The process is guided by Bayesian optimization of the hyper-parameters of the proposed procedure. Various experiments on synthetic as well as real-world networks are performed showing that this process yields improved results in various regimes. | [
"['Bogumił Kamiński' 'Paweł Misiorek' 'Paweł Prałat' 'François Théberge']"
] |
null | null | 2406.17563 | null | null | http://arxiv.org/pdf/2406.17563v1 | 2024-06-25T14:00:42Z | 2024-06-25T14:00:42Z | Multi-property Steering of Large Language Models with Dynamic Activation
Composition | Activation steering methods were shown to be effective in conditioning language model generation by additively intervening over models' intermediate representations. However, the evaluation of these techniques has so far been limited to single conditioning properties and synthetic settings. In this work, we conduct a comprehensive evaluation of various activation steering strategies, highlighting the property-dependent nature of optimal parameters to ensure a robust effect throughout generation. To address this issue, we propose Dynamic Activation Composition, an information-theoretic approach to modulate the steering intensity of one or more properties throughout generation. Our experiments on multi-property steering show that our method successfully maintains high conditioning while minimizing the impact of conditioning on generation fluency. | [
"['Daniel Scalena' 'Gabriele Sarti' 'Malvina Nissim']"
] |
null | null | 2406.17576 | null | null | http://arxiv.org/pdf/2406.17576v1 | 2024-06-25T14:16:40Z | 2024-06-25T14:16:40Z | Leveraging Reinforcement Learning in Red Teaming for Advanced Ransomware
Attack Simulations | Ransomware presents a significant and increasing threat to individuals and organizations by encrypting their systems and withholding them until a large fee is paid. To bolster preparedness against potential attacks, organizations commonly conduct red teaming exercises, which involve simulated attacks to assess existing security measures. This paper proposes a novel approach utilizing reinforcement learning (RL) to simulate ransomware attacks. By training an RL agent in a simulated environment mirroring real-world networks, effective attack strategies can be learned quickly, significantly streamlining traditional, manual penetration testing processes. The attack pathways revealed by the RL agent can provide valuable insights to the defense team, helping them identify network weak points and develop more resilient defensive measures. Experimental results on a 152-host example network confirm the effectiveness of the proposed approach, demonstrating the RL agent's capability to discover and orchestrate attacks on high-value targets while evading honeyfiles (decoy files strategically placed to detect unauthorized access). | [
"['Cheng Wang' 'Christopher Redino' 'Ryan Clark' 'Abdul Rahman'\n 'Sal Aguinaga' 'Sathvik Murli' 'Dhruv Nandakumar' 'Roland Rao'\n 'Lanxiao Huang' 'Daniel Radke' 'Edward Bowen']"
] |
null | null | 2406.17583 | null | null | http://arxiv.org/pdf/2406.17583v1 | 2024-06-25T14:27:03Z | 2024-06-25T14:27:03Z | Towards Compositional Interpretability for XAI | Artificial intelligence (AI) is currently based largely on black-box machine learning models which lack interpretability. The field of eXplainable AI (XAI) strives to address this major concern, being critical in high-stakes areas such as the finance, legal and health sectors. We present an approach to defining AI models and their interpretability based on category theory. For this we employ the notion of a compositional model, which sees a model in terms of formal string diagrams which capture its abstract structure together with its concrete implementation. This comprehensive view incorporates deterministic, probabilistic and quantum models. We compare a wide range of AI models as compositional models, including linear and rule-based models, (recurrent) neural networks, transformers, VAEs, and causal and DisCoCirc models. Next we give a definition of interpretation of a model in terms of its compositional structure, demonstrating how to analyse the interpretability of a model, and using this to clarify common themes in XAI. We find that what makes the standard 'intrinsically interpretable' models so transparent is brought out most clearly diagrammatically. This leads us to the more general notion of compositionally-interpretable (CI) models, which additionally include, for instance, causal, conceptual space, and DisCoCirc models. We next demonstrate the explainability benefits of CI models. Firstly, their compositional structure may allow the computation of other quantities of interest, and may facilitate inference from the model to the modelled phenomenon by matching its structure. Secondly, they allow for diagrammatic explanations for their behaviour, based on influence constraints, diagram surgery and rewrite explanations. Finally, we discuss many future directions for the approach, raising the question of how to learn such meaningfully structured models in practice. | [
"['Sean Tull' 'Robin Lorenz' 'Stephen Clark' 'Ilyas Khan' 'Bob Coecke']"
] |
null | null | 2406.17585 | null | null | http://arxiv.org/pdf/2406.17585v1 | 2024-06-25T14:28:17Z | 2024-06-25T14:28:17Z | Learning Dynamic Bayesian Networks from Data: Foundations, First
Principles and Numerical Comparisons | In this paper, we present a guide to the foundations of learning Dynamic Bayesian Networks (DBNs) from data in the form of multiple samples of trajectories for some length of time. We present the formalism for a generic as well as a set of common types of DBNs for particular variable distributions. We present the analytical form of the models, with a comprehensive discussion on the interdependence between structure and weights in a DBN model and their implications for learning. Next, we give a broad overview of learning methods and describe and categorize them based on the most important statistical features, and how they treat the interplay between learning structure and weights. We give the analytical form of the likelihood and Bayesian score functions, emphasizing the distinction from the static case. We discuss functions used in optimization to enforce structural requirements. We briefly discuss more complex extensions and representations. Finally we present a set of comparisons in different settings for various distinct but representative algorithms across the variants. | [
"['Vyacheslav Kungurtsev' 'Petr Rysavy' 'Fadwa Idlahcen' 'Pavel Rytir'\n 'Ales Wodecki']"
] |
null | null | 2406.17597 | null | null | http://arxiv.org/pdf/2406.17597v1 | 2024-06-25T14:40:34Z | 2024-06-25T14:40:34Z | Constructing structured tensor priors for Bayesian inverse problems | Specifying a prior distribution is an essential part of solving Bayesian inverse problems. The prior encodes a belief on the nature of the solution and this regularizes the problem. In this article we completely characterize a Gaussian prior that encodes the belief that the solution is a structured tensor. We first define the notion of (A,b)-constrained tensors and show that they describe a large variety of different structures such as Hankel, circulant, triangular, symmetric, and so on. Then we completely characterize the Gaussian probability distribution of such tensors by specifying its mean vector and covariance matrix. Furthermore, explicit expressions are proved for the covariance matrix of tensors whose entries are invariant under a permutation. These results unlock a whole new class of priors for Bayesian inverse problems. We illustrate how new kernel functions can be designed and efficiently computed and apply our results on two particular Bayesian inverse problems: completing a Hankel matrix from a few noisy measurements and learning an image classifier of handwritten digits. The effectiveness of the proposed priors is demonstrated for both problems. All applications have been implemented as reactive Pluto notebooks in Julia. | [
"['Kim Batselier']"
] |
null | null | 2406.17606 | null | null | http://arxiv.org/pdf/2406.17606v1 | 2024-06-25T14:48:28Z | 2024-06-25T14:48:28Z | Diffusion-based Adversarial Purification for Intrusion Detection | The escalating sophistication of cyberattacks has encouraged the integration of machine learning techniques in intrusion detection systems, but the rise of adversarial examples presents a significant challenge. These crafted perturbations mislead ML models, enabling attackers to evade detection or trigger false alerts. In response, adversarial purification has emerged as a compelling solution, particularly with diffusion models showing promising results. However, their purification potential remains unexplored in the context of intrusion detection. This paper demonstrates the effectiveness of diffusion models in purifying adversarial examples in network intrusion detection. Through a comprehensive analysis of the diffusion parameters, we identify optimal configurations maximizing adversarial robustness with minimal impact on normal performance. Importantly, this study reveals insights into the relationship between diffusion noise and diffusion steps, representing a novel contribution to the field. Our experiments are carried out on two datasets and against five adversarial attacks. The implementation code is publicly available. | [
"['Mohamed Amine Merzouk' 'Erwan Beurier' 'Reda Yaich'\n 'Nora Boulahia-Cuppens' 'Frédéric Cuppens']"
] |
null | null | 2406.17611 | null | null | http://arxiv.org/pdf/2406.17611v1 | 2024-06-25T14:57:38Z | 2024-06-25T14:57:38Z | Distributed Training of Large Graph Neural Networks with Variable
Communication Rates | Training Graph Neural Networks (GNNs) on large graphs presents unique challenges due to the large memory and computing requirements. Distributed GNN training, where the graph is partitioned across multiple machines, is a common approach to training GNNs on large graphs. However, as the graph cannot generally be decomposed into small non-interacting components, data communication between the training machines quickly limits training speeds. Compressing the communicated node activations by a fixed amount improves the training speeds, but lowers the accuracy of the trained GNN. In this paper, we introduce a variable compression scheme for reducing the communication volume in distributed GNN training without compromising the accuracy of the learned model. Based on our theoretical analysis, we derive a variable compression method that converges to a solution equivalent to the full communication case, for all graph partitioning schemes. Our empirical results show that our method attains a comparable performance to the one obtained with full communication. We outperform full communication at any fixed compression ratio for any communication budget. | [
"['Juan Cervino' 'Md Asadullah Turja' 'Hesham Mostafa' 'Nageen Himayat'\n 'Alejandro Ribeiro']"
] |
null | null | 2406.17615 | null | null | http://arxiv.org/abs/2406.17615v1 | 2024-06-25T15:01:39Z | 2024-06-25T15:01:39Z | Aligning Programming Language and Natural Language: Exploring Design
Choices in Multi-Modal Transformer-Based Embedding for Bug Localization | Bug localization refers to identifying the source code files, written in a programming language, that are responsible for the unexpected behavior of software, using the bug report, which is written in natural language. As bug localization is labor-intensive, bug localization models are employed to assist software developers. Due to the domain difference between source code files and bug reports, modern bug-localization systems, based on deep learning models, rely heavily on embedding techniques that project bug reports and source code files into a shared vector space. The creation of an embedding involves several design choices, but the impact of these choices on the quality of embedding and the performance of bug localization models remains unexplained in current research. To address this gap, our study evaluated 14 distinct embedding models to gain insights into the effects of various design choices. Subsequently, we developed bug localization models utilizing these embedding models to assess the influence of these choices on the performance of the localization models. Our findings indicate that the pre-training strategies significantly affect the quality of the embedding. Moreover, we discovered that the familiarity of the embedding models with the data has a notable impact on the bug localization model's performance. Notably, when the training and testing data are collected from different projects, the performance of the bug localization models exhibits substantial fluctuations. | [
"['Partha Chakraborty' 'Venkatraman Arumugam' 'Meiyappan Nagappan']"
] |
null | null | 2406.17627 | null | null | http://arxiv.org/pdf/2406.17627v1 | 2024-06-25T15:15:27Z | 2024-06-25T15:15:27Z | Querying Labeled Time Series Data with Scenario Programs | In order to ensure autonomous vehicles are safe for on-road deployment, simulation-based testing has become an integral complement to on-road testing. The rise in simulation testing and validation reflects a growing need to verify that AV behavior is consistent with desired outcomes even in edge case scenarios, which may seldom or never appear in on-road testing data. This raises a critical question: to what extent are AV failures in simulation consistent with data collected from real-world testing? As a result of the gap between simulated and real sensor data (sim-to-real gap), failures in simulation can either be spurious (simulation- or simulator-specific issues) or relevant (safety-critical AV system issues). One possible method for validating if simulated time series failures are consistent with real world time series sensor data could involve retrieving instances of the failure scenario from a real-world time series dataset, in order to understand AV performance in these scenarios. Adopting this strategy, we propose a formal definition of what constitutes a match between a real-world labeled time series data item and a simulated scenario written from a fragment of the Scenic probabilistic programming language for simulation generation. With this definition of a match, we develop a querying algorithm that identifies the subset of a labeled time series dataset matching a given scenario. To allow this approach to be used to verify the safety of other cyber-physical systems (CPS), we present a definition and algorithm for matching that scale beyond the autonomous vehicles domain. Experiments demonstrate the precision and scalability of the algorithm for a set of challenging and uncommon time series scenarios identified from the nuScenes autonomous driving dataset. We include a full system implementation of the querying algorithm freely available for use across a wide range of CPS. | [
"['Devan Shanker']"
] |
null | null | 2406.17630 | null | null | http://arxiv.org/pdf/2406.17630v1 | 2024-06-25T15:17:01Z | 2024-06-25T15:17:01Z | KANQAS: Kolmogorov Arnold Network for Quantum Architecture Search | Quantum architecture search (QAS) is a promising direction for optimization and automated design of quantum circuits towards quantum advantage. Recent techniques in QAS focus on machine learning-based approaches from reinforcement learning, like deep Q-networks. While multi-layer perceptron-based deep Q-networks have been applied for QAS, their interpretability remains challenging due to the high number of parameters. In this work, we evaluate the practicality of Kolmogorov-Arnold Networks (KANs) in quantum architecture search problems, analyzing their efficiency in terms of the probability of success, frequency of optimal solutions and their dependencies on various degrees of freedom of the network. In a noiseless scenario, the probability of success and the number of optimal quantum circuit configurations to generate the multi-qubit maximally entangled states are significantly higher than with MLPs. Moreover, in noisy scenarios, KAN can achieve a better fidelity in approximating maximally entangled states than MLPs, where the performance of the MLP significantly depends on the choice of activation function. Further investigation reveals that KAN requires a very small number of learnable parameters compared to MLPs; however, the average time of executing each episode for KAN is much higher. | [
"['Akash Kundu' 'Aritra Sarkar' 'Abhishek Sadhu']"
] |
null | null | 2406.17633 | null | null | http://arxiv.org/pdf/2406.17633v1 | 2024-06-25T15:20:25Z | 2024-06-25T15:20:25Z | Knowledge Distillation in Automated Annotation: Supervised Text
Classification with LLM-Generated Training Labels | Computational social science (CSS) practitioners often rely on human-labeled data to fine-tune supervised text classifiers. We assess the potential for researchers to augment or replace human-generated training data with surrogate training labels from generative large language models (LLMs). We introduce a recommended workflow and test this LLM application by replicating 14 classification tasks and measuring performance. We employ a novel corpus of English-language text classification data sets from recent CSS articles in high-impact journals. Because these data sets are stored in password-protected archives, our analyses are less prone to issues of contamination. For each task, we compare supervised classifiers fine-tuned using GPT-4 labels against classifiers fine-tuned with human annotations and against labels from GPT-4 and Mistral-7B with few-shot in-context learning. Our findings indicate that supervised classification models fine-tuned on LLM-generated labels perform comparably to models fine-tuned with labels from human annotators. Fine-tuning models using LLM-generated labels can be a fast, efficient and cost-effective method of building supervised text classifiers. | [
"['Nicholas Pangakis' 'Samuel Wolken']"
] |
null | null | 2406.17639 | null | null | http://arxiv.org/pdf/2406.17639v2 | 2024-06-26T10:58:48Z | 2024-06-25T15:24:02Z | Mitigate the Gap: Investigating Approaches for Improving Cross-Modal
Alignment in CLIP | Contrastive Language--Image Pre-training (CLIP) has manifested remarkable improvements in zero-shot classification and cross-modal vision-language tasks. Yet, from a geometrical point of view, the CLIP embedding space has been found to have a pronounced modality gap. This gap renders the embedding space overly sparse and disconnected, with different modalities being densely distributed in distinct subregions of the hypersphere. In this work, we aim at answering two main questions: 1. Does sharing the parameter space between the multi-modal encoders reduce the modality gap? 2. Can the gap be mitigated by pushing apart the uni-modal embeddings via intra-modality separation? We design AlignCLIP, in order to answer these questions and show that answers to both questions are positive. Through extensive experiments, we show that AlignCLIP achieves noticeable enhancements in the cross-modal alignment of the embeddings, and thereby, reduces the modality gap, while maintaining the performance across several downstream evaluations, such as zero-shot image classification, zero-shot multi-modal retrieval and zero-shot semantic text similarity. | [
"['Sedigheh Eslami' 'Gerard de Melo']"
] |
null | null | 2406.17640 | null | null | http://arxiv.org/pdf/2406.17640v1 | 2024-06-25T15:24:06Z | 2024-06-25T15:24:06Z | BayTTA: Uncertainty-aware medical image classification with optimized
test-time augmentation using Bayesian model averaging | Test-time augmentation (TTA) is a well-known technique employed during the testing phase of computer vision tasks. It involves aggregating multiple augmented versions of input data. Combining predictions using a simple average formulation is a common and straightforward approach after performing TTA. This paper introduces a novel framework for optimizing TTA, called BayTTA (Bayesian-based TTA), which is based on Bayesian Model Averaging (BMA). First, we generate a model list associated with different variations of the input data created through TTA. Then, we use BMA to combine model predictions weighted by their respective posterior probabilities. Such an approach allows one to take into account model uncertainty, and thus to enhance the predictive performance of the related machine learning or deep learning model. We evaluate the performance of BayTTA on various public data, including three medical image datasets comprising skin cancer, breast cancer, and chest X-ray images, and two well-known gene editing datasets, CRISPOR and GUIDE-seq. Our experimental results indicate that BayTTA can be effectively integrated into state-of-the-art deep learning models used in medical image analysis as well as into some popular pre-trained CNN models such as VGG-16, MobileNetV2, DenseNet201, ResNet152V2, and InceptionResNetV2, leading to enhanced accuracy and robustness. | [
"['Zeinab Sherkatghanad' 'Moloud Abdar' 'Mohammadreza Bakhtyari'\n 'Vladimir Makarenkov']"
] |
null | null | 2406.17649 | null | null | http://arxiv.org/pdf/2406.17649v1 | 2024-06-25T15:41:26Z | 2024-06-25T15:41:26Z | Privacy Preserving Reinforcement Learning for Population Processes | We consider the problem of privacy protection in Reinforcement Learning (RL) algorithms that operate over population processes, a practical but understudied setting that includes, for example, the control of epidemics in large populations of dynamically interacting individuals. In this setting, the RL algorithm interacts with the population over $T$ time steps by receiving population-level statistics as state and performing actions which can affect the entire population at each time step. An individual's data can be collected across multiple interactions and their privacy must be protected at all times. We clarify the Bayesian semantics of Differential Privacy (DP) in the presence of correlated data in population processes through a Pufferfish Privacy analysis. We then give a meta algorithm that can take any RL algorithm as input and make it differentially private. This is achieved by taking an approach that uses DP mechanisms to privatize the state and reward signal at each time step before the RL algorithm receives them as input. Our main theoretical result shows that the value-function approximation error when applying standard RL algorithms directly to the privatized states shrinks quickly as the population size and privacy budget increase. This highlights that reasonable privacy-utility trade-offs are possible for differentially private RL algorithms in population processes. Our theoretical findings are validated by experiments performed on a simulated epidemic control problem over large population sizes. | [
"['Samuel Yang-Zhao' 'Kee Siong Ng']"
] |
null | null | 2406.17660 | null | null | http://arxiv.org/pdf/2406.17660v1 | 2024-06-25T15:50:32Z | 2024-06-25T15:50:32Z | Grass: Compute Efficient Low-Memory LLM Training with Structured Sparse
Gradients | Large language model (LLM) training and finetuning are often bottlenecked by limited GPU memory. While existing projection-based optimization methods address this by projecting gradients into a lower-dimensional subspace to reduce optimizer state memory, they typically rely on dense projection matrices, which can introduce computational and memory overheads. In this work, we propose Grass (GRAdient Structured Sparsification), a novel approach that leverages sparse projections to transform gradients into structured sparse updates. This design not only significantly reduces memory usage for optimizer states but also minimizes gradient memory footprint, computation, and communication costs, leading to substantial throughput improvements. Extensive experiments on pretraining and finetuning tasks demonstrate that Grass achieves performance competitive with full-rank training and existing projection-based methods. Notably, Grass enables half-precision pretraining of a 13B parameter LLaMA model on a single 40GB A100 GPU--a feat infeasible for previous methods--and yields up to a $2\times$ throughput improvement on an 8-GPU system. Code can be found at https://github.com/aashiqmuhamed/GRASS. | [
"['Aashiq Muhamed' 'Oscar Li' 'David Woodruff' 'Mona Diab' 'Virginia Smith']"
] |
null | null | 2406.17673 | null | null | http://arxiv.org/pdf/2406.17673v1 | 2024-06-25T16:03:50Z | 2024-06-25T16:03:50Z | LaTable: Towards Large Tabular Models | Tabular data is one of the most ubiquitous modalities, yet the literature on tabular generative foundation models is lagging far behind its text and vision counterparts. Creating such a model is hard, due to the heterogeneous feature spaces of different tabular datasets, tabular metadata (e.g. dataset description and feature headers), and tables lacking prior knowledge (e.g. feature order). In this work we propose LaTable: a novel tabular diffusion model that addresses these challenges and can be trained across different datasets. Through extensive experiments we find that LaTable outperforms baselines on in-distribution generation, and that finetuning LaTable can generate out-of-distribution datasets better with fewer samples. On the other hand, we explore the poor zero-shot performance of LaTable, and what it may teach us about building generative tabular foundation models with better zero- and few-shot generation capabilities. | [
"['Boris van Breugel' 'Jonathan Crabbé' 'Rob Davis'\n 'Mihaela van der Schaar']"
] |
null | null | 2406.17692 | null | null | http://arxiv.org/pdf/2406.17692v1 | 2024-06-25T16:32:33Z | 2024-06-25T16:32:33Z | From Distributional to Overton Pluralism: Investigating Large Language
Model Alignment | The alignment process changes several properties of a large language model's (LLM's) output distribution. We analyze two aspects of post-alignment distributional shift of LLM responses. First, we re-examine previously reported reductions in response diversity post-alignment. Our analysis suggests that an apparent drop in the diversity of responses is largely explained by quality control and information aggregation. Alignment suppresses irrelevant and unhelpful content while shifting the output distribution toward longer responses that cover information spanning several responses from the base LLM, essentially presenting diverse information in a single response. Finding little evidence that alignment suppresses useful information, it is natural to ask the opposite question: do aligned models surface information that cannot be recovered from base models? Our second investigation shows this is not the case and the behavior of aligned models is recoverable from base models without fine-tuning. A combination of in-context examples and lower-resolution semantic hints about response content can elicit responses from base LLMs that are as similar to alignment-tuned LLM responses as alignment-tuned LLM responses are to each other. Taken together, these results indicate that current alignment techniques capture but do not extend the useful subset of assistant-like base LLM behavior, providing further evidence for the Superficial Alignment Hypothesis. They also show that in-context alignment can go surprisingly far as a strategy for imitating aligned LLMs without fine-tuning. Our code and data are available at https://github.com/thomlake/investigating-alignment. | [
"['Thom Lake' 'Eunsol Choi' 'Greg Durrett']"
] |
null | null | 2406.17697 | null | null | http://arxiv.org/pdf/2406.17697v1 | 2024-06-25T16:33:33Z | 2024-06-25T16:33:33Z | HGTDP-DTA: Hybrid Graph-Transformer with Dynamic Prompt for Drug-Target
Binding Affinity Prediction | Drug target binding affinity (DTA) is a key criterion for drug screening. Existing experimental methods are time-consuming and rely on limited structural and domain information. While learning-based methods can model sequence and structural information, they struggle to integrate contextual data and often lack comprehensive modeling of drug-target interactions. In this study, we propose a novel DTA prediction method, termed HGTDP-DTA, which utilizes dynamic prompts within a hybrid Graph-Transformer framework. Our method generates context-specific prompts for each drug-target pair, enhancing the model's ability to capture unique interactions. The introduction of prompt tuning further optimizes the prediction process by filtering out irrelevant noise and emphasizing task-relevant information, dynamically adjusting the input features of the molecular graph. The proposed hybrid Graph-Transformer architecture combines structural information from Graph Convolutional Networks (GCNs) with sequence information captured by Transformers, facilitating the interaction between global and local information. Additionally, we adopted the multi-view feature fusion method to project molecular graph views and affinity subgraph views into a common feature space, effectively combining structural and contextual information. Experiments on two widely used public datasets, Davis and KIBA, show that HGTDP-DTA outperforms state-of-the-art DTA prediction methods in both prediction performance and generalization ability. | [
"['Xi Xiao' 'Wentao Wang' 'Jiacheng Xie' 'Lijing Zhu' 'Gaofei Chen'\n 'Zhengji Li' 'Tianyang Wang' 'Min Xu']"
] |
null | null | 2406.17698 | null | null | http://arxiv.org/pdf/2406.17698v1 | 2024-06-25T16:38:27Z | 2024-06-25T16:38:27Z | Identifying Nonstationary Causal Structures with High-Order Markov
Switching Models | Causal discovery in time series is a rapidly evolving field with a wide variety of applications in other areas such as climate science and neuroscience. Traditional approaches assume a stationary causal graph, which can be adapted to nonstationary time series with time-dependent effects or heterogeneous noise. In this work we address nonstationarity via regime-dependent causal structures. We first establish identifiability for high-order Markov Switching Models, which provide the foundations for identifiable regime-dependent causal discovery. Our empirical studies demonstrate the scalability of our proposed approach for high-order regime-dependent structure estimation, and we illustrate its applicability on brain activity data. | [
"['Carles Balsells-Rodas' 'Yixin Wang' 'Pedro A. M. Mediano' 'Yingzhen Li']"
] |
null | null | 2406.17699 | null | null | http://arxiv.org/pdf/2406.17699v1 | 2024-06-25T16:38:53Z | 2024-06-25T16:38:53Z | Can independent Metropolis beat crude Monte Carlo? | Assume that we would like to estimate the expected value of a function $F$ with respect to a density $\pi$. We prove that if $\pi$ is close enough under KL divergence to another density $q$, an independent Metropolis sampler estimator that obtains samples from $\pi$ with proposal density $q$, enriched with a variance reduction computational strategy based on control variates, achieves smaller asymptotic variance than that of the crude Monte Carlo estimator. The control variates construction requires no extra computational effort but assumes that the expected value of $F$ under $q$ is analytically available. We illustrate this result by calculating the marginal likelihood in a linear regression model with prior-likelihood conflict and a non-conjugate prior. Furthermore, we propose an adaptive independent Metropolis algorithm that adapts the proposal density such that its KL divergence with the target is being reduced. We demonstrate its applicability in Bayesian logistic and Gaussian process regression problems and we rigorously justify our asymptotic arguments under easily verifiable and essentially minimal conditions. | [
"['Siran Liu' 'Petros Dellaportas' 'Michalis K. Titsias']"
] |
null | null | 2406.17706 | null | null | http://arxiv.org/pdf/2406.17706v1 | 2024-06-25T16:45:47Z | 2024-06-25T16:45:47Z | FedBiOT: LLM Local Fine-tuning in Federated Learning without Full Model | Large language models (LLMs) show amazing performance on many domain-specific tasks after fine-tuning with some appropriate data. However, many domain-specific data are privately distributed across multiple owners. Thus, this dilemma raises the interest in how to perform LLM fine-tuning in federated learning (FL). However, confronted with limited computation and communication capacities, FL clients struggle to fine-tune an LLM effectively. To this end, we introduce FedBiOT, a resource-efficient LLM fine-tuning approach to FL. Specifically, our method involves the server generating a compressed LLM and aligning its performance with the full model. Subsequently, the clients fine-tune a lightweight yet important part of the compressed model, referred to as an adapter. Notice that as the server has no access to the private data owned by the clients, the data used for alignment by the server has a different distribution from the one used for fine-tuning by clients. We formulate the problem into a bi-level optimization problem to minimize the negative effect of data discrepancy and derive the updating rules for the server and clients. We conduct extensive experiments on LLaMA-2, empirically showing that the adapter has exceptional performance when reintegrated into the global LLM. The results also indicate that the proposed FedBiOT significantly reduces resource consumption compared to existing benchmarks, all while achieving comparable performance levels. | [
"['Feijie Wu' 'Zitao Li' 'Yaliang Li' 'Bolin Ding' 'Jing Gao']"
] |
null | null | 2406.17711 | null | null | http://arxiv.org/pdf/2406.17711v1 | 2024-06-25T16:52:37Z | 2024-06-25T16:52:37Z | Data curation via joint example selection further accelerates multimodal
learning | Data curation is an essential component of large-scale pretraining. In this work, we demonstrate that jointly selecting batches of data is more effective for learning than selecting examples independently. Multimodal contrastive objectives expose the dependencies between data and thus naturally yield criteria for measuring the joint learnability of a batch. We derive a simple and tractable algorithm for selecting such batches, which significantly accelerates training beyond individually-prioritized data points. As performance improves by selecting from larger super-batches, we also leverage recent advances in model approximation to reduce the associated computational overhead. As a result, our approach--multimodal contrastive learning with joint example selection (JEST)--surpasses state-of-the-art models with up to 13$\times$ fewer iterations and 10$\times$ less computation. Essential to the performance of JEST is the ability to steer the data selection process towards the distribution of smaller, well-curated datasets via pretrained reference models, exposing the level of data curation as a new dimension for neural scaling laws. | [
"['Talfan Evans' 'Nikhil Parthasarathy' 'Hamza Merzic' 'Olivier J. Henaff']"
] |
null | null | 2406.17714 | null | null | http://arxiv.org/pdf/2406.17714v1 | 2024-06-25T16:56:17Z | 2024-06-25T16:56:17Z | Compositional Models for Estimating Causal Effects | Many real-world systems can be represented as sets of interacting components. Examples of such systems include computational systems such as query processors, natural systems such as cells, and social systems such as families. Many approaches have been proposed in traditional (associational) machine learning to model such structured systems, including statistical relational models and graph neural networks. Despite this prior work, existing approaches to estimating causal effects typically treat such systems as single units, represent them with a fixed set of variables and assume a homogeneous data-generating process. We study a compositional approach for estimating individual treatment effects (ITE) in structured systems, where each unit is represented by the composition of multiple heterogeneous components. This approach uses a modular architecture to model potential outcomes at each component and aggregates component-level potential outcomes to obtain the unit-level potential outcomes. We discover novel benefits of the compositional approach in causal inference - systematic generalization to estimate counterfactual outcomes of unseen combinations of components and improved overlap guarantees between treatment and control groups compared to the classical methods for causal effect estimation. We also introduce a set of novel environments for empirically evaluating the compositional approach and demonstrate the effectiveness of our approach using both simulated and real-world data. | [
"['Purva Pruthi' 'David Jensen']"
] |
null | null | 2406.17718 | null | null | http://arxiv.org/pdf/2406.17718v1 | 2024-06-25T17:06:57Z | 2024-06-25T17:06:57Z | When does Self-Prediction help? Understanding Auxiliary Tasks in
Reinforcement Learning | We investigate the impact of auxiliary learning tasks such as observation reconstruction and latent self-prediction on the representation learning problem in reinforcement learning. We also study how they interact with distractions and observation functions in the MDP. We provide a theoretical analysis of the learning dynamics of observation reconstruction, latent self-prediction, and TD learning in the presence of distractions and observation functions under linear model assumptions. With this formalization, we are able to explain why latent self-prediction is a helpful \emph{auxiliary task}, while observation reconstruction can provide more useful features when used in isolation. Our empirical analysis shows that the insights obtained from our learning dynamics framework predict the behavior of these loss functions beyond the linear model assumption in non-linear neural networks. This reinforces the usefulness of the linear model framework not only for theoretical analysis, but also for practical benefit in applied problems. | [
"['Claas Voelcker' 'Tyler Kastner' 'Igor Gilitschenski'\n 'Amir-massoud Farahmand']"
] |
null | null | 2406.17729 | null | null | http://arxiv.org/pdf/2406.17729v1 | 2024-06-21T18:27:09Z | 2024-06-21T18:27:09Z | Uncertainty-enabled machine learning for emulation of regional sea-level
change caused by the Antarctic Ice Sheet | Projecting sea-level change in various climate-change scenarios typically involves running forward simulations of the Earth's gravitational, rotational and deformational (GRD) response to ice mass change, which requires high computational cost and time. Here we build neural-network emulators of sea-level change at 27 coastal locations, due to the GRD effects associated with future Antarctic Ice Sheet mass change over the 21st century. The emulators are based on datasets produced using a numerical solver for the static sea-level equation and published ISMIP6-2100 ice-sheet model simulations referenced in the IPCC AR6 report. We show that the neural-network emulators have an accuracy that is competitive with baseline machine learning emulators. In order to quantify uncertainty, we derive well-calibrated prediction intervals for simulated sea-level change via a linear regression postprocessing technique that uses (nonlinear) machine learning model outputs, a technique that has previously been applied to numerical climate models. We also demonstrate substantial gains in computational efficiency: a feedforward neural-network emulator exhibits on the order of 100 times speedup in comparison to the numerical sea-level equation solver that is used for training. | [
"['Myungsoo Yoo' 'Giri Gopalan' 'Matthew J. Hoffman' 'Sophie Coulson'\n 'Holly Kyeore Han' 'Christopher K. Wikle' 'Trevor Hillebrand']"
] |
null | null | 2406.17737 | null | null | http://arxiv.org/pdf/2406.17737v1 | 2024-06-25T17:24:07Z | 2024-06-25T17:24:07Z | LLM Targeted Underperformance Disproportionately Impacts Vulnerable
Users | While state-of-the-art Large Language Models (LLMs) have shown impressive performance on many tasks, there has been extensive research on undesirable model behavior such as hallucinations and bias. In this work, we investigate how the quality of LLM responses changes in terms of information accuracy, truthfulness, and refusals depending on three user traits: English proficiency, education level, and country of origin. We present extensive experimentation on three state-of-the-art LLMs and two different datasets targeting truthfulness and factuality. Our findings suggest that undesirable behaviors in state-of-the-art LLMs occur disproportionately more for users with lower English proficiency, of lower education status, and originating from outside the US, rendering these models unreliable sources of information towards their most vulnerable users. | [
"['Elinor Poole-Dayan' 'Deb Roy' 'Jad Kabbara']"
] |
null | null | 2406.17740 | null | null | http://arxiv.org/pdf/2406.17740v1 | 2024-06-25T17:26:05Z | 2024-06-25T17:26:05Z | Structured Unrestricted-Rank Matrices for Parameter Efficient
Fine-tuning | Recent efforts to scale Transformer models have demonstrated rapid progress across a wide range of tasks (Wei et al., 2022). However, fine-tuning these models for downstream tasks is expensive due to their large parameter counts. Parameter-efficient fine-tuning (PEFT) approaches have emerged as a viable alternative by allowing us to fine-tune models by updating only a small number of parameters. In this work, we propose a general framework for parameter-efficient fine-tuning based on structured unrestricted-rank matrices (SURMs), which can serve as a drop-in replacement for popular approaches such as Adapters and LoRA. Unlike other methods like LoRA, SURMs provide more flexibility in finding the right balance between compactness and expressiveness. This is achieved by using low displacement rank matrices (LDRMs), which have not been used in this context before. SURMs remain competitive with baselines, often providing significant quality improvements while using a smaller parameter budget. SURMs achieve 5-7% accuracy gains on various image classification tasks while replacing low-rank matrices in LoRA. They also yield up to a 12x reduction in the number of parameters in adapters (with virtually no loss in quality) on the GLUE benchmark. | [
"['Arijit Sehanobish' 'Avinava Dubey' 'Krzysztof Choromanski'\n 'Somnath Basu Roy Chowdhury' 'Deepali Jain' 'Vikas Sindhwani'\n 'Snigdha Chaturvedi']"
] |
null | null | 2406.17745 | null | null | http://arxiv.org/pdf/2406.17745v3 | 2024-07-04T17:52:06Z | 2024-06-25T17:31:04Z | Light-weight End-to-End Graph Interest Network for CTR Prediction in
E-commerce Search | Click-through-rate (CTR) prediction has an essential impact on improving user experience and revenue in e-commerce search. With the development of deep learning, graph-based methods have been widely exploited to utilize graph structure extracted from user behaviors and other information to aid embedding learning. However, most previous graph-based methods mainly focus on recommendation scenarios, and therefore their graph structures highly depend on items' sequential information from user behaviors, ignoring the query's sequential signal and query-item correlations. In this paper, we propose a new approach named Light-weight End-to-End Graph Interest Network (EGIN) to effectively mine users' search interests and tackle the previous challenges. (i) EGIN utilizes query-item correlations and sequential information from the search system to build a heterogeneous graph for better CTR prediction in e-commerce search. (ii) EGIN's graph embedding learning shares the same training input and is jointly trained with CTR prediction, making the end-to-end framework effortless to deploy in large-scale search systems. The proposed EGIN is composed of three parts: a query-item heterogeneous graph, light-weight graph sampling, and a multi-interest network. The query-item heterogeneous graph efficiently captures the correlations and sequential information of queries and items via the proposed light-weight graph sampling. The multi-interest network is designed to utilize the graph embeddings to capture various similarity relationships between queries and items to enhance the final CTR prediction. We conduct extensive experiments on both public and industrial datasets to demonstrate the effectiveness of the proposed EGIN. At the same time, the training cost of graph learning is relatively low compared with the main CTR prediction task, ensuring efficiency in practical applications. | [
"['Pipi Peng' 'Yunqing Jia' 'Ziqiang Zhou' 'murmurhash' 'Zichong Xiao']"
] |
null | null | 2406.17747 | null | null | http://arxiv.org/pdf/2406.17747v1 | 2024-06-25T17:34:09Z | 2024-06-25T17:34:09Z | Probing the effects of broken symmetries in machine learning | Symmetry is one of the most central concepts in physics, and it is no surprise that it has also been widely adopted as an inductive bias for machine-learning models applied to the physical sciences. This is especially true for models targeting the properties of matter at the atomic scale. Both established and state-of-the-art approaches, with almost no exceptions, are built to be exactly equivariant to translations, permutations, and rotations of the atoms. Incorporating symmetries -- rotations in particular -- constrains the model design space and implies more complicated architectures that are often also computationally demanding. There are indications that non-symmetric models can easily learn symmetries from data, and that doing so can even be beneficial for the accuracy of the model. We put a model that obeys rotational invariance only approximately to the test, in realistic scenarios involving simulations of gas-phase, liquid, and solid water. We focus specifically on physical observables that are likely to be affected -- directly or indirectly -- by symmetry breaking, finding negligible consequences when the model is used in an interpolative, bulk, regime. Even for extrapolative gas-phase predictions, the model remains very stable, even though symmetry artifacts are noticeable. We also discuss strategies that can be used to systematically reduce the magnitude of symmetry breaking when it occurs, and assess their impact on the convergence of observables. | [
"['Marcel F. Langer' 'Sergey N. Pozdnyakov' 'Michele Ceriotti']"
] |
null | null | 2406.17748 | null | null | http://arxiv.org/pdf/2406.17748v1 | 2024-06-25T17:34:51Z | 2024-06-25T17:34:51Z | A New Perspective on Shampoo's Preconditioner | Shampoo, a second-order optimization algorithm which uses a Kronecker product preconditioner, has recently garnered increasing attention from the machine learning community. The preconditioner used by Shampoo can be viewed either as an approximation of the Gauss--Newton component of the Hessian or the covariance matrix of the gradients maintained by Adagrad. We provide an explicit and novel connection between the $\textit{optimal}$ Kronecker product approximation of these matrices and the approximation made by Shampoo. Our connection highlights a subtle but common misconception about Shampoo's approximation. In particular, the $\textit{square}$ of the approximation used by the Shampoo optimizer is equivalent to a single step of the power iteration algorithm for computing the aforementioned optimal Kronecker product approximation. Across a variety of datasets and architectures we empirically demonstrate that this is close to the optimal Kronecker product approximation. Additionally, for the Hessian approximation viewpoint, we empirically study the impact of various practical tricks to make Shampoo more computationally efficient (such as using the batch gradient and the empirical Fisher) on the quality of Hessian approximation. | [
"['Depen Morwani' 'Itai Shapira' 'Nikhil Vyas' 'Eran Malach' 'Sham Kakade'\n 'Lucas Janson']"
] |
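For readers who want to see the object being analyzed, the Kronecker-factored statistics that Shampoo maintains, and the update direction they induce, fit in a few lines of NumPy. This is a minimal sketch of the standard Shampoo preconditioner the abstract refers to, not the paper's power-iteration analysis or a production implementation; all function names here are our own illustrative choices.

```python
import numpy as np

def shampoo_statistics(grads, eps=1e-4):
    """Accumulate Shampoo's Kronecker factors for an m x n weight matrix:
    L = eps*I + sum_t G_t G_t^T and R = eps*I + sum_t G_t^T G_t."""
    m, n = grads[0].shape
    L, R = eps * np.eye(m), eps * np.eye(n)
    for G in grads:
        L += G @ G.T
        R += G.T @ G
    return L, R

def inverse_root(M, p):
    """M^(-1/p) for a symmetric PSD matrix M, via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return (V * np.clip(w, 1e-12, None) ** (-1.0 / p)) @ V.T

def shampoo_direction(G, L, R):
    """Shampoo's preconditioned update direction L^(-1/4) G R^(-1/4)."""
    return inverse_root(L, 4) @ G @ inverse_root(R, 4)
```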
null | null | 2406.17749 | null | null | http://arxiv.org/pdf/2406.17749v1 | 2024-06-25T17:34:52Z | 2024-06-25T17:34:52Z | Benchmarking Deep Learning Models on NVIDIA Jetson Nano for Real-Time
Systems: An Empirical Investigation | The proliferation of complex deep learning (DL) models has revolutionized various applications, including computer vision-based solutions, prompting their integration into real-time systems. However, the resource-intensive nature of these models poses challenges for deployment on devices with low computational power and memory, such as embedded and edge devices. This work empirically investigates the optimization of such complex DL models to analyze their functionality on an embedded device, particularly on the NVIDIA Jetson Nano. It evaluates the effectiveness of the optimized models in terms of their inference speed for image classification and video action detection. The experimental results reveal that, on average, optimized models exhibit a 16.11% speed improvement over their non-optimized counterparts. This not only emphasizes the critical need to consider hardware constraints and environmental sustainability in model development and deployment but also underscores the pivotal role of model optimization in enabling the widespread deployment of AI-assisted technologies on resource-constrained computational systems. It also provides evidence that prioritizing hardware-specific model optimization leads to efficient and scalable solutions that substantially decrease energy consumption and carbon footprint. | [
"['Tushar Prasanna Swaminathan' 'Christopher Silver' 'Thangarajah Akilan']"
] |
null | null | 2406.17759 | null | null | http://arxiv.org/pdf/2406.17759v1 | 2024-06-25T17:43:13Z | 2024-06-25T17:43:13Z | Interpreting Attention Layer Outputs with Sparse Autoencoders | Decomposing model activations into interpretable components is a key open problem in mechanistic interpretability. Sparse autoencoders (SAEs) are a popular method for decomposing the internal activations of trained transformers into sparse, interpretable features, and have been applied to MLP layers and the residual stream. In this work we train SAEs on attention layer outputs and show that, here too, SAEs find a sparse, interpretable decomposition. We demonstrate this on transformers from several model families and up to 2B parameters. We perform a qualitative study of the features computed by attention layers, and find multiple families: long-range context, short-range context and induction features. We qualitatively study the role of every head in GPT-2 Small, and estimate that at least 90% of the heads are polysemantic, i.e. have multiple unrelated roles. Further, we show that Sparse Autoencoders are a useful tool that enables researchers to explain model behavior in greater detail than prior work. For example, we explore the mystery of why models have so many seemingly redundant induction heads, use SAEs to motivate the hypothesis that some are long-prefix whereas others are short-prefix, and confirm this with more rigorous analysis. We use our SAEs to analyze the computation performed by the Indirect Object Identification circuit (Wang et al.), validating that the SAEs find causally meaningful intermediate variables, and deepening our understanding of the semantics of the circuit. We open-source the trained SAEs and a tool for exploring arbitrary prompts through the lens of Attention Output SAEs. | [
"['Connor Kissane' 'Robert Krzyzanowski' 'Joseph Isaac Bloom'\n 'Arthur Conmy' 'Neel Nanda']"
] |
null | null | 2406.17761 | null | null | http://arxiv.org/pdf/2406.17761v2 | 2024-07-03T16:33:55Z | 2024-06-25T17:45:26Z | CaLMQA: Exploring culturally specific long-form question answering
across 23 languages | Large language models (LLMs) are used for long-form question answering (LFQA), which requires them to generate paragraph-length answers to complex questions. While LFQA has been well-studied in English, this research has not been extended to other languages. To bridge this gap, we introduce CaLMQA, a collection of 1.5K complex culturally specific questions spanning 23 languages and 51 culturally agnostic questions translated from English into 22 other languages. We define culturally specific questions as those uniquely or more likely to be asked by people from cultures associated with the question's language. We collect naturally-occurring questions from community web forums and hire native speakers to write questions to cover under-resourced, rarely-studied languages such as Fijian and Kirundi. Our dataset contains diverse, complex questions that reflect cultural topics (e.g. traditions, laws, news) and the language usage of native speakers. We automatically evaluate a suite of open- and closed-source models on CaLMQA by detecting incorrect language and token repetitions in answers, and observe that the quality of LLM-generated answers degrades significantly for some low-resource languages. Lastly, we perform human evaluation on a subset of models and languages. Manual evaluation reveals that model performance is significantly worse for culturally specific questions than for culturally agnostic questions. Our findings highlight the need for further research in non-English LFQA and provide an evaluation framework. | [
"['Shane Arora' 'Marzena Karpinska' 'Hung-Ting Chen' 'Ipsita Bhattacharjee'\n 'Mohit Iyyer' 'Eunsol Choi']"
] |
null | null | 2406.17762 | null | null | http://arxiv.org/pdf/2406.17762v1 | 2024-06-25T17:47:13Z | 2024-06-25T17:47:13Z | Solving Hard Mizar Problems with Instantiation and Strategy Invention | In this work, we prove over 3000 previously ATP-unproved Mizar/MPTP problems by using several ATP and AI methods, raising the number of ATP-solved Mizar problems from 75% to above 80%. First, we experiment with the cvc5 SMT solver, which uses several instantiation-based heuristics that differ from those of the superposition-based systems previously applied to Mizar, and add many new solutions. Then we use automated strategy invention to develop cvc5 strategies that largely improve cvc5's performance on the hard problems. In particular, the best invented strategy solves over 14% more problems than the best previously available cvc5 strategy. We also show that different clausification methods have a high impact on such instantiation-based methods, again producing many new solutions. In total, the methods solve 3021 (21.3%) of the 14163 previously unsolved hard Mizar problems. This is a new milestone over the Mizar large-theory benchmark and a large strengthening of the hammer methods for Mizar. | [
"['Jan Jakubův' 'Mikoláš Janota' 'Josef Urban']"
] |
null | null | 2406.17763 | null | null | http://arxiv.org/pdf/2406.17763v1 | 2024-06-25T17:48:24Z | 2024-06-25T17:48:24Z | DiffusionPDE: Generative PDE-Solving Under Partial Observation | We introduce a general framework for solving partial differential equations (PDEs) using generative diffusion models. In particular, we focus on the scenarios where we do not have full knowledge of the scene necessary to apply classical solvers. Most existing forward or inverse PDE approaches perform poorly when the observations on the data or the underlying coefficients are incomplete, which is common for real-world measurements. In this work, we propose DiffusionPDE, which can simultaneously fill in the missing information and solve a PDE by modeling the joint distribution of the solution and coefficient spaces. We show that the learned generative priors lead to a versatile framework for accurately solving a wide range of PDEs under partial observation, significantly outperforming the state-of-the-art methods for both forward and inverse directions. | [
"['Jiahe Huang' 'Guandao Yang' 'Zichen Wang' 'Jeong Joon Park']"
] |
null | null | 2406.17768 | null | null | http://arxiv.org/pdf/2406.17768v1 | 2024-06-25T17:50:03Z | 2024-06-25T17:50:03Z | EXTRACT: Efficient Policy Learning by Extracting Transferrable Robot
Skills from Offline Data | Most reinforcement learning (RL) methods focus on learning optimal policies over low-level action spaces. While these methods can perform well in their training environments, they lack the flexibility to transfer to new tasks. Instead, RL agents that can act over useful, temporally extended skills rather than low-level actions can learn new tasks more easily. Prior work in skill-based RL either requires expert supervision to define useful skills, which is hard to scale, or learns a skill-space from offline data with heuristics that limit the adaptability of the skills, making them difficult to transfer during downstream RL. Our approach, EXTRACT, instead utilizes pre-trained vision language models to extract a discrete set of semantically meaningful skills from offline data, each of which is parameterized by continuous arguments, without human supervision. This skill parameterization allows robots to learn new tasks by only needing to learn when to select a specific skill and how to modify its arguments for the specific task. We demonstrate through experiments in sparse-reward, image-based, robot manipulation environments that EXTRACT can more quickly learn new tasks than prior works, with major gains in sample efficiency and performance over prior skill-based RL. Website at https://www.jessezhang.net/projects/extract/. | [
"['Jesse Zhang' 'Minho Heo' 'Zuxin Liu' 'Erdem Biyik' 'Joseph J Lim'\n 'Yao Liu' 'Rasool Fakoor']"
] |
null | null | 2406.17788 | null | null | http://arxiv.org/pdf/2406.17788v1 | 2024-05-27T08:49:47Z | 2024-05-27T08:49:47Z | CNN-based Compressor Mass Flow Estimator in Industrial Aircraft Vapor
Cycle System | In Vapor Cycle Systems, the mass flow sensor plays a key role for different monitoring and control purposes. However, physical sensors can be inaccurate, heavy, cumbersome, expensive or highly sensitive to vibrations, which is especially problematic when embedded into an aircraft. The conception of a virtual sensor, based on other standard sensors, is a good alternative. This paper has two main objectives. Firstly, a data-driven model using a Convolutional Neural Network is proposed to estimate the mass flow of the compressor. We show that it significantly outperforms the standard Polynomial Regression model (thermodynamic maps), in terms of the standard MSE metric and Engineer Performance metrics. Secondly, a semi-automatic segmentation method is proposed to compute the Engineer Performance metrics for real datasets, as the standard MSE metric may pose risks in analyzing the dynamic behavior of Vapor Cycle Systems. | [
"['Justin Reverdi' 'Sixin Zhang' 'Saïd Aoues' 'Fabrice Gamboa'\n 'Serge Gratton' 'Thomas Pellegrini']"
] |
null | null | 2406.17793 | null | null | http://arxiv.org/pdf/2406.17793v1 | 2024-05-30T21:44:15Z | 2024-05-30T21:44:15Z | Deep Learning Approaches for Detecting Adversarial Cyberbullying and
Hate Speech in Social Networks | Cyberbullying is a significant concern intricately linked to technology, yet technology also provides the means to mitigate it. To address growing concerns regarding the adverse impact of cyberbullying on individuals' online experiences, various online platforms and researchers are actively adopting measures to enhance the safety of digital environments. While researchers persist in crafting detection models to counteract or minimize cyberbullying, malicious actors are deploying adversarial techniques to circumvent these detection methods. This paper focuses on detecting cyberbullying in adversarial attack content within social networking site text data, specifically emphasizing hate speech. Utilizing a deep learning-based approach with a correction algorithm, this paper reports significant results. An LSTM model trained for a fixed 100 epochs demonstrated remarkable performance, achieving high accuracy, precision, recall, F1-score, and AUC-ROC scores of 87.57%, 88.73%, 87.57%, 88.15%, and 91%, respectively. Additionally, the LSTM model's performance surpassed that of previous studies. | [
"['Sylvia Worlali Azumah' 'Nelly Elsayed' 'Zag ElSayed' 'Murat Ozer'\n 'Amanda La Guardia']"
] |
null | null | 2406.17797 | null | null | http://arxiv.org/pdf/2406.17797v1 | 2024-06-13T02:50:23Z | 2024-06-13T02:50:23Z | MoleculeCLA: Rethinking Molecular Benchmark via Computational
Ligand-Target Binding Analysis | Molecular representation learning is pivotal for various molecular property prediction tasks related to drug discovery. Robust and accurate benchmarks are essential for refining and validating current methods. Existing molecular property benchmarks derived from wet experiments, however, face limitations such as data volume constraints, unbalanced label distribution, and noisy labels. To address these issues, we construct a large-scale and precise molecular representation dataset of approximately 140,000 small molecules, meticulously designed to capture an extensive array of chemical, physical, and biological properties, derived through a robust computational ligand-target binding analysis pipeline. We conduct extensive experiments on various deep learning models, demonstrating that our dataset offers significant physicochemical interpretability to guide model development and design. Notably, the dataset's properties are linked to binding affinity metrics, providing additional insights into model performance in drug-target interaction tasks. We believe this dataset will serve as a more accurate and reliable benchmark for molecular representation learning, thereby expediting progress in the field of artificial intelligence-driven drug discovery. | [
"['Shikun Feng' 'Jiaxin Zheng' 'Yinjun Jia' 'Yanwen Huang' 'Fengfeng Zhou'\n 'Wei-Ying Ma' 'Yanyan Lan']"
] |
null | null | 2406.17806 | null | null | http://arxiv.org/pdf/2406.17806v1 | 2024-06-22T23:26:07Z | 2024-06-22T23:26:07Z | MOSSBench: Is Your Multimodal Language Model Oversensitive to Safe
Queries? | Humans are prone to cognitive distortions -- biased thinking patterns that lead to exaggerated responses to specific stimuli, albeit in very different contexts. This paper demonstrates that advanced Multimodal Large Language Models (MLLMs) exhibit similar tendencies. While these models are designed to respond to queries under safety mechanisms, they sometimes reject harmless queries in the presence of certain visual stimuli, disregarding the benign nature of their contexts. As the initial step in investigating this behavior, we identify three types of stimuli that trigger the oversensitivity of existing MLLMs: Exaggerated Risk, Negated Harm, and Counterintuitive Interpretation. To systematically evaluate MLLMs' oversensitivity to these stimuli, we propose the Multimodal OverSenSitivity Benchmark (MOSSBench). This toolkit consists of 300 manually collected benign multimodal queries, cross-verified by third-party reviewers (AMT). Empirical studies using MOSSBench on 20 MLLMs reveal several insights: (1) Oversensitivity is prevalent among SOTA MLLMs, with refusal rates reaching up to 76% for harmless queries. (2) Safer models are more oversensitive: increasing safety may inadvertently raise caution and conservatism in the model's responses. (3) Different types of stimuli tend to cause errors at specific stages -- perception, intent reasoning, and safety judgement -- in the response process of MLLMs. These findings highlight the need for refined safety mechanisms that balance caution with contextually appropriate responses, improving the reliability of MLLMs in real-world applications. We make our project available at https://turningpoint-ai.github.io/MOSSBench/. | [
"['Xirui Li' 'Hengguang Zhou' 'Ruochen Wang' 'Tianyi Zhou' 'Minhao Cheng'\n 'Cho-Jui Hsieh']"
] |
null | null | 2406.17808 | null | null | http://arxiv.org/pdf/2406.17808v1 | 2024-06-24T03:59:17Z | 2024-06-24T03:59:17Z | Training-Free Exponential Extension of Sliding Window Context with
Cascading KV Cache | The context window within a transformer provides a form of active memory for the current task, which can be useful for few-shot learning and conditional generation, both of which depend heavily on previous context tokens. However, as the context length grows, the computational cost increases quadratically. Recent works have shown that saving a few initial tokens along with a fixed-size sliding window leads to stable streaming generation with linear complexity in transformer-based Large Language Models (LLMs). However, they make suboptimal use of the fixed window by naively evicting all tokens unconditionally from the key-value (KV) cache once they reach the end of the window, resulting in tokens being forgotten and no longer able to affect subsequent predictions. To overcome this limitation, we propose a novel mechanism for storing longer sliding window contexts with the same total cache size by keeping separate cascading sub-cache buffers whereby each subsequent buffer conditionally accepts a fraction of the relatively more important tokens evicted from the previous buffer. Our method results in a dynamic KV cache that can store tokens from the more distant past than a fixed, static sliding window approach. Our experiments show improvements of 5.6% on long context generation (LongBench), 1.2% in streaming perplexity (PG19), and 0.6% in language understanding (MMLU STEM) using LLMs given the same fixed cache size. Additionally, we provide an efficient implementation that improves the KV cache latency from 1.33ms per caching operation to 0.54ms, a 59% speedup over previous work. | [
"['Jeffrey Willette' 'Heejun Lee' 'Youngwan Lee' 'Myeongjae Jeon'\n 'Sung Ju Hwang']"
] |
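The cascading mechanism described above can be made concrete with a toy sketch. The buffer sizes, the recursive hand-off, and the scalar importance score standing in for attention statistics are all illustrative assumptions on our part, not the paper's implementation:

```python
class CascadingKVCache:
    """Toy cascade of fixed-size sub-caches: when a buffer overflows,
    the evicted (oldest) token is accepted into the next buffer only if
    its importance score falls in the top fraction of that buffer."""

    def __init__(self, num_buffers=3, buffer_size=4, accept_frac=0.5):
        self.buffers = [[] for _ in range(num_buffers)]
        self.buffer_size = buffer_size
        self.accept_frac = accept_frac

    def add(self, token, score, level=0):
        if level == len(self.buffers):
            return  # token finally leaves the cache entirely
        buf = self.buffers[level]
        buf.append((token, score))
        if len(buf) > self.buffer_size:
            old_token, old_score = buf.pop(0)  # evict the oldest token
            # conditional acceptance: keep only relatively important tokens
            cutoff = sorted(s for _, s in buf)[int(len(buf) * (1 - self.accept_frac))]
            if old_score >= cutoff:
                self.add(old_token, old_score, level + 1)
```

Unlike a single sliding window, which forgets unconditionally, important tokens here survive into progressively older buffers, which is the intuition behind the reported long-context gains.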
null | null | 2406.17811 | null | null | http://arxiv.org/pdf/2406.17811v1 | 2024-06-24T20:15:04Z | 2024-06-24T20:15:04Z | CATBench: A Compiler Autotuning Benchmarking Suite for Black-box
Optimization | Bayesian optimization is a powerful method for automating the tuning of compilers. The complex landscape of autotuning provides a myriad of rarely considered structural challenges for black-box optimizers, and the lack of standardized benchmarks has limited the study of Bayesian optimization within the domain. To address this, we present CATBench, a comprehensive benchmarking suite that captures the complexities of compiler autotuning, ranging from discrete, conditional, and permutation parameter types to known and unknown binary constraints, as well as both multi-fidelity and multi-objective evaluations. The benchmarks in CATBench span a range of machine learning-oriented computations, from tensor algebra to image processing and clustering, and use state-of-the-art compilers, such as TACO and RISE/ELEVATE. CATBench offers a unified interface for evaluating Bayesian optimization algorithms, promoting reproducibility and innovation through an easy-to-use, fully containerized setup of both surrogate and real-world compiler optimization tasks. We validate CATBench on several state-of-the-art algorithms, revealing their strengths and weaknesses and demonstrating the suite's potential for advancing both Bayesian optimization and compiler autotuning research. | [
"['Jacob O. Tørring' 'Carl Hvarfner' 'Luigi Nardi' 'Magnus Själander']"
] |
null | null | 2406.17812 | null | null | http://arxiv.org/pdf/2406.17812v1 | 2024-06-24T20:29:29Z | 2024-06-24T20:29:29Z | Scalable Artificial Intelligence for Science: Perspectives, Methods and
Exemplars | In a post-ChatGPT world, this paper explores the potential of leveraging scalable artificial intelligence for scientific discovery. We propose that scaling up artificial intelligence on high-performance computing platforms is essential to address such complex problems. This perspective focuses on scientific use cases like cognitive simulations, large language models for scientific inquiry, medical image analysis, and physics-informed approaches. The study outlines the methodologies needed to address such challenges at scale on supercomputers or the cloud and provides exemplars of such approaches applied to solve a variety of scientific problems. | [
"['Wesley Brewer' 'Aditya Kashi' 'Sajal Dash' 'Aristeidis Tsaris'\n 'Junqi Yin' 'Mallikarjun Shankar' 'Feiyi Wang']"
] |
null | null | 2406.17813 | null | null | http://arxiv.org/pdf/2406.17813v1 | 2024-06-24T23:41:46Z | 2024-06-24T23:41:46Z | Unsupervised Concept Drift Detection from Deep Learning Representations
in Real-time | Concept Drift is a phenomenon in which the underlying data distribution and statistical properties of a target domain change over time, leading to a degradation of the model's performance. Consequently, models deployed in production require continuous monitoring through drift detection techniques. Most drift detection methods to date are supervised, i.e., based on ground-truth labels. However, true labels are usually not available in many real-world scenarios. Although recent efforts have been made to develop unsupervised methods, they often lack the required accuracy, have a complexity that makes real-time implementation in production environments difficult, or are unable to effectively characterize drift. To address these challenges, we propose DriftLens, an unsupervised real-time concept drift detection framework. It works on unstructured data by exploiting the distribution distances of deep learning representations. DriftLens can also provide drift characterization by analyzing each label separately. A comprehensive experimental evaluation is presented with multiple deep learning classifiers for text, image, and speech. Results show that (i) DriftLens performs better than previous methods in detecting drift in $11/13$ use cases; (ii) it runs at least 5 times faster; (iii) its detected drift value is very coherent with the amount of drift (correlation $\geq 0.85$); (iv) it is robust to parameter changes. | [
"['Salvatore Greco' 'Bartolomeo Vacchetti' 'Daniele Apiletti'\n 'Tania Cerquitelli']"
] |
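The abstract does not detail DriftLens's exact distance computation; as one standard option for comparing deep-representation distributions, which we assume purely for illustration, a Frechet (2-Wasserstein) distance between Gaussians fitted to reference and production embeddings looks like this:

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_drift_score(ref_emb, new_emb):
    """Frechet distance between Gaussians fitted to two (n_samples, dim)
    batches of embeddings; values well above a threshold estimated on
    drift-free windows signal concept drift."""
    mu1, mu2 = ref_emb.mean(axis=0), new_emb.mean(axis=0)
    c1 = np.cov(ref_emb, rowvar=False)
    c2 = np.cov(new_emb, rowvar=False)
    covmean = sqrtm(c1 @ c2)
    if np.iscomplexobj(covmean):  # numerical noise can produce imaginary parts
        covmean = covmean.real
    return float(((mu1 - mu2) ** 2).sum() + np.trace(c1 + c2 - 2.0 * covmean))
```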
null | null | 2406.17814 | null | null | http://arxiv.org/pdf/2406.17814v1 | 2024-06-25T05:09:54Z | 2024-06-25T05:09:54Z | Distribution Learnability and Robustness | We examine the relationship between learnability and robust (or agnostic) learnability for the problem of distribution learning. We show that, contrary to other learning settings (e.g., PAC learning of function classes), realizable learnability of a class of probability distributions does not imply its agnostic learnability. We go on to examine what type of data corruption can disrupt the learnability of a distribution class and what such learnability is robust against. We show that realizable learnability of a class of distributions implies its robust learnability with respect to only additive corruption, but not against subtractive corruption. We also explore related implications in the context of compression schemes and differentially private learnability. | [
"['Shai Ben-David' 'Alex Bie' 'Gautam Kamath' 'Tosca Lechner']"
] |
null | null | 2406.17818 | null | null | http://arxiv.org/abs/2406.17818v1 | 2024-06-25T08:07:00Z | 2024-06-25T08:07:00Z | Temporal Prototype-Aware Learning for Active Voltage Control on Power
Distribution Networks | Active Voltage Control (AVC) on the Power Distribution Networks (PDNs) aims to stabilize the voltage levels to ensure efficient and reliable operation of power systems. With the increasing integration of distributed energy resources, recent efforts have explored employing multi-agent reinforcement learning (MARL) techniques to realize effective AVC. Existing methods mainly focus on the acquisition of short-term AVC strategies, i.e., only learning AVC within the short-term training trajectories of a singular diurnal cycle. However, due to the dynamic nature of load demands and renewable energy, the operation states of real-world PDNs may exhibit significant distribution shifts across varying timescales (e.g., daily and seasonal changes). This can render those short-term strategies suboptimal or even obsolete when performing continuous AVC over extended periods. In this paper, we propose a novel temporal prototype-aware learning method, abbreviated as TPA, to learn time-adaptive AVC under short-term training trajectories. At the heart of TPA are two complementary components, namely multi-scale dynamic encoder and temporal prototype-aware policy, that can be readily incorporated into various MARL methods. The former component integrates a stacked transformer network to learn underlying temporal dependencies at different timescales of the PDNs, while the latter implements a learnable prototype matching mechanism to construct a dedicated AVC policy that can dynamically adapt to the evolving operation states. Experimental results on the AVC benchmark with different PDN sizes demonstrate that the proposed TPA surpasses the state-of-the-art counterparts not only in terms of control performance but also by offering model transferability. Our code is available at https://github.com/Canyizl/TPA-for-AVC. | [
"['Feiyang Xu' 'Shunyu Liu' 'Yunpeng Qing' 'Yihe Zhou' 'Yuwen Wang'\n 'Mingli Song']"
] |
null | null | 2406.17819 | null | null | http://arxiv.org/pdf/2406.17819v1 | 2024-06-25T08:29:32Z | 2024-06-25T08:29:32Z | Automatically Adaptive Conformal Risk Control | Science and technology have a growing need for effective mechanisms that ensure reliable, controlled performance from black-box machine learning algorithms. These performance guarantees should ideally hold conditionally on the input -- that is, the performance guarantees should hold, at least approximately, no matter what the input. However, beyond stylized discrete groupings such as ethnicity and gender, the right notion of conditioning can be difficult to define. For example, in problems such as image segmentation, we want the uncertainty to reflect the intrinsic difficulty of the test sample, but this may be difficult to capture via a conditioning event. Building on the recent work of Gibbs et al. [2023], we propose a methodology for achieving approximate conditional control of statistical risks -- the expected value of loss functions -- by adapting to the difficulty of test samples. Our framework goes beyond traditional conditional risk control based on user-provided conditioning events to the algorithmic, data-driven determination of appropriate function classes for conditioning. We apply this framework to various regression and segmentation tasks, enabling finer-grained control over model performance and demonstrating that by continuously monitoring and adjusting these parameters, we can achieve superior precision compared to conventional risk-control methods. | [
"['Vincent Blot' 'Anastasios N Angelopoulos' 'Michael I Jordan'\n 'Nicolas J-B Brunel']"
] |
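As background, the marginal (input-independent) baseline that conditional methods like the one above refine is split conformal prediction, which calibrates a single quantile of held-out nonconformity scores. A minimal sketch assuming absolute-residual scores -- the classical baseline, not the paper's adaptive procedure:

```python
import numpy as np

def split_conformal_halfwidth(y_cal, pred_cal, alpha=0.1):
    """Half-width q such that, marginally over exchangeable data, a new
    label lies in [prediction - q, prediction + q] with probability
    at least 1 - alpha."""
    scores = np.abs(y_cal - pred_cal)  # nonconformity scores on calibration set
    n = len(scores)
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)  # finite-sample correction
    return np.quantile(scores, level, method="higher")
```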
null | null | 2406.17822 | null | null | http://arxiv.org/abs/2406.17822v1 | 2024-06-25T09:22:53Z | 2024-06-25T09:22:53Z | AI for the prediction of early stages of Alzheimer's disease from
neuroimaging biomarkers -- A narrative review of a growing field | Objectives: The objectives of this narrative review are to summarize the current state of AI applications in neuroimaging for early Alzheimer's disease (AD) prediction and to highlight the potential of AI techniques in improving early AD diagnosis, prognosis, and management. Methods: We conducted a narrative review of studies using AI techniques applied to neuroimaging data for early AD prediction. We examined single-modality studies using structural MRI and PET imaging, as well as multi-modality studies integrating multiple neuroimaging techniques and biomarkers. Furthermore, we reviewed longitudinal studies that model AD progression and identify individuals at risk of rapid decline. Results: Single-modality studies using structural MRI and PET imaging have demonstrated high accuracy in classifying AD and predicting progression from mild cognitive impairment (MCI) to AD. Multi-modality studies, integrating multiple neuroimaging techniques and biomarkers, have shown improved performance and robustness compared to single-modality approaches. Longitudinal studies have highlighted the value of AI in modeling AD progression and identifying individuals at risk of rapid decline. However, challenges remain in data standardization, model interpretability, generalizability, clinical integration, and ethical considerations. Conclusion: AI techniques applied to neuroimaging data have the potential to improve early AD diagnosis, prognosis, and management. Addressing challenges related to data standardization, model interpretability, generalizability, clinical integration, and ethical considerations is crucial for realizing the full potential of AI in AD research and clinical practice. Collaborative efforts among researchers, clinicians, and regulatory agencies are needed to develop reliable, robust, and ethical AI tools that can benefit AD patients and society. | [
"['Thorsten Rudroff' 'Oona Rainio' 'Riku Klén']"
] |
null | null | 2406.17826 | null | null | http://arxiv.org/pdf/2406.17826v1 | 2024-06-25T13:23:37Z | 2024-06-25T13:23:37Z | European Space Agency Benchmark for Anomaly Detection in Satellite
Telemetry | Machine learning has vast potential to improve anomaly detection in satellite telemetry which is a crucial task for spacecraft operations. This potential is currently hampered by a lack of comprehensible benchmarks for multivariate time series anomaly detection, especially for the challenging case of satellite telemetry. The European Space Agency Benchmark for Anomaly Detection in Satellite Telemetry (ESA-ADB) aims to address this challenge and establish a new standard in the domain. It is a result of close cooperation between spacecraft operations engineers from the European Space Agency (ESA) and machine learning experts. The newly introduced ESA Anomalies Dataset contains annotated real-life telemetry from three different ESA missions, out of which two are included in ESA-ADB. Results of typical anomaly detection algorithms assessed in our novel hierarchical evaluation pipeline show that new approaches are necessary to address operators' needs. All elements of ESA-ADB are publicly available to ensure its full reproducibility. | [
"['Krzysztof Kotowski' 'Christoph Haskamp' 'Jacek Andrzejewski'\n 'Bogdan Ruszczak' 'Jakub Nalepa' 'Daniel Lakey' 'Peter Collins'\n 'Aybike Kolmas' 'Mauro Bartesaghi' 'Jose Martinez-Heras'\n 'Gabriele De Canio']"
] |
null | null | 2406.17828 | null | null | http://arxiv.org/pdf/2406.17828v1 | 2024-06-25T13:50:00Z | 2024-06-25T13:50:00Z | Extreme Learning Machines for Fast Training of Click-Through Rate
Prediction Models | Extreme Learning Machines (ELMs) provide a fast alternative to traditional gradient-based learning in neural networks, offering rapid training and robust generalization capabilities. Their theoretical basis establishes their universal approximation capability. We explore the application of ELMs to the task of Click-Through Rate (CTR) prediction, a setting that remains largely unexplored with ELMs due to the high dimensionality of the problem. We introduce an ELM-based model enhanced with embedding layers to improve performance on CTR tasks, which is a novel addition to the field. Experimental results on benchmark datasets, including Avazu and Criteo, demonstrate that our proposed ELM with embeddings achieves competitive F1 results while significantly reducing training time compared to state-of-the-art models such as Masknet. Our findings show that ELMs can be useful for CTR prediction, especially when fast training is needed. | [
"['Ergun Biçici']"
] |
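The core ELM recipe referenced above is short enough to show directly: hidden weights are drawn at random and frozen, and only the linear output layer is solved in closed form, which is the source of the fast training. This sketch omits the paper's embedding layers and is a generic ELM, not the proposed CTR model:

```python
import numpy as np

def train_elm(X, y, hidden=256, reg=1e-2, seed=0):
    """Fit an ELM: random fixed hidden layer, ridge-regularised
    least-squares solve for the output weights."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], hidden))
    b = rng.normal(size=hidden)
    H = np.tanh(X @ W + b)  # random nonlinear feature map
    beta = np.linalg.solve(H.T @ H + reg * np.eye(hidden), H.T @ y)
    return W, b, beta

def predict_elm(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```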
null | null | 2406.17830 | null | null | http://arxiv.org/pdf/2406.17830v1 | 2024-06-25T14:00:55Z | 2024-06-25T14:00:55Z | Treatment of Statistical Estimation Problems in Randomized Smoothing for
Adversarial Robustness | Randomized smoothing is a popular certified defense against adversarial attacks. In its essence, we need to solve a problem of statistical estimation, which is usually very time-consuming since we need to perform numerous (usually $10^5$) forward passes of the classifier for every point to be certified. In this paper, we review the statistical estimation problems for randomized smoothing to find out if the computational burden is necessary. In particular, we consider the (standard) task of adversarial robustness where we need to decide if a point is robust at a certain radius or not using as few samples as possible while maintaining statistical guarantees. We present estimation procedures employing confidence sequences that enjoy the same statistical guarantees as the standard methods and attain the optimal sample complexities for the estimation task, and we empirically demonstrate their good performance. Additionally, we provide a randomized version of Clopper-Pearson confidence intervals, resulting in strictly stronger certificates. | [
"['Vaclav Voracek']"
] |
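For context on the estimation problem being studied, the standard certification step bounds the smoothed classifier's top-class probability with a one-sided Clopper-Pearson interval and converts it into an L2 radius. A minimal sketch of that baseline follows; the paper's confidence-sequence and randomized-interval procedures refine this and are not shown here:

```python
from scipy.stats import beta, norm

def certified_radius(k, n, sigma, alpha=0.001):
    """Given k top-class votes out of n noisy forward passes, return the
    certified L2 radius sigma * Phi^{-1}(p_lower), where p_lower is a
    one-sided Clopper-Pearson lower bound on the top-class probability.
    Returns 0.0 (abstain) when the bound does not exceed 1/2."""
    p_lower = beta.ppf(alpha, k, n - k + 1)  # valid for 0 < k <= n
    if p_lower <= 0.5:
        return 0.0
    return sigma * norm.ppf(p_lower)
```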
null | null | 2406.17831 | null | null | http://arxiv.org/pdf/2406.17831v2 | 2024-06-28T19:40:19Z | 2024-06-25T14:34:51Z | Empirical Bayes for Dynamic Bayesian Networks Using Generalized
Variational Inference | In this work, we demonstrate the Empirical Bayes approach to learning a Dynamic Bayesian Network. By starting with several point estimates of structure and weights, we can construct a data-driven prior and subsequently obtain a model that quantifies uncertainty. This approach uses a recent development of Generalized Variational Inference, and indicates the potential of sampling the uncertainty of a mixture of DAG structures as well as a parameter posterior. | [
"['Vyacheslav Kungurtsev' 'Apaar' 'Aarya Khandelwal'\n 'Parth Sandeep Rastogi' 'Bapi Chatterjee' 'Jakub Mareček']"
] |
null | null | 2406.17834 | null | null | http://arxiv.org/pdf/2406.17834v1 | 2024-06-25T15:07:06Z | 2024-06-25T15:07:06Z | Univariate Skeleton Prediction in Multivariate Systems Using
Transformers | Symbolic regression (SR) methods attempt to learn mathematical expressions that approximate the behavior of an observed system. However, when dealing with multivariate systems, they often fail to identify the functional form that explains the relationship between each variable and the system's response. To begin to address this, we propose an explainable neural SR method that generates univariate symbolic skeletons that aim to explain how each variable influences the system's response. By analyzing multiple sets of data generated artificially, where one input variable varies while others are fixed, relationships are modeled separately for each input variable. The response of such artificial data sets is estimated using a regression neural network (NN). Finally, the multiple sets of input-response pairs are processed by a pre-trained Multi-Set Transformer that solves a problem we termed Multi-Set Skeleton Prediction and outputs a univariate symbolic skeleton. Thus, such skeletons represent explanations of the function approximated by the regression NN. Experimental results demonstrate that this method learns skeleton expressions matching the underlying functions and outperforms two GP-based and two neural SR methods. | [
"['Giorgio Morales' 'John W. Sheppard']"
] |
null | null | 2406.17835 | null | null | http://arxiv.org/pdf/2406.17835v1 | 2024-06-25T15:33:01Z | 2024-06-25T15:33:01Z | The Use of AI-Robotic Systems for Scientific Discovery | The process of developing theories and models and testing them with experiments is fundamental to the scientific method. Automating the entire scientific method then requires not only automation of the induction of theories from data, but also experimentation from design to implementation. This is the idea behind a robot scientist -- a coupled system of AI and laboratory robotics that has agency to test hypotheses with real-world experiments. In this chapter we explore some of the fundamentals of robot scientists in the philosophy of science. We also map the activities of a robot scientist to machine learning paradigms, and argue that the scientific method shares an analogy with active learning. We demonstrate these concepts using examples from previous robot scientists, and also from Genesis: a next generation robot scientist designed for research in systems biology, comprising a micro-fluidic system with 1000 computer-controlled micro-bioreactors and interpretable models based in controlled vocabularies and logic. | [
"['Alexander H. Gower' 'Konstantin Korovin' 'Daniel Brunnsåker'\n 'Filip Kronström' 'Gabriel K. Reder' 'Ievgeniia A. Tiukova'\n 'Ronald S. Reiserer' 'John P. Wikswo' 'Ross D. King']"
] |
null | null | 2406.17837 | null | null | http://arxiv.org/pdf/2406.17837v1 | 2024-06-25T16:16:38Z | 2024-06-25T16:16:38Z | Transformer Normalisation Layers and the Independence of Semantic
Subspaces | Recent works have shown that transformers can solve contextual reasoning tasks by internally executing computational graphs called circuits. Circuits often use attention to logically match information from subspaces of the representation, e.g. using position-in-sequence to identify the previous token. In this work, we consider a semantic subspace to be any independent subspace of the latent representation that can fully determine an attention distribution. We show that Pre-Norm, the placement of normalisation layer used by state-of-the-art transformers, violates this ability unless the model learns a strict representation structure of orthogonal spheres. This is because it causes linear subspaces to interfere through their common normalisation factor. Theoretically, we analyse circuit stability by modelling this interference as random noise on the $L_2$-norms of the query/key/value vectors, predicting a phenomenon of circuit collapse when sparse-attention shifts to a different token. Empirically, we investigate the sensitivity of real-world models trained for mathematical addition, observing a 1% rate of circuit collapse when the norms are artificially perturbed by $\lesssim$10%. We contrast Pre-Norm with QKV-Norm, which places normalisation after the attention head's linear operators. Theoretically this relaxes the representational constraints. Empirically we observe comparable in-distribution but worse out-of-distribution performance. | [
"['Stephen Menary' 'Samuel Kaski' 'Andre Freitas']"
] |
null | null | 2406.17838 | null | null | http://arxiv.org/pdf/2406.17838v1 | 2024-06-25T16:56:45Z | 2024-06-25T16:56:45Z | InFiConD: Interactive No-code Fine-tuning with Concept-based Knowledge
Distillation | The emergence of large-scale pre-trained models has heightened their application in various downstream tasks, yet deployment is a challenge in environments with limited computational resources. Knowledge distillation has emerged as a solution in such scenarios, whereby knowledge from large teacher models is transferred into smaller 'student' models, but this is a non-trivial process that traditionally requires technical expertise in AI/ML. To address these challenges, this paper presents InFiConD, a novel framework that leverages visual concepts to implement the knowledge distillation process and enable subsequent no-code fine-tuning of student models. We develop a novel knowledge distillation pipeline based on extracting text-aligned visual concepts from a concept corpus using multimodal models, and construct highly interpretable linear student models based on visual concepts that mimic a teacher model in a response-based manner. InFiConD's interface allows users to interactively fine-tune the student model by manipulating concept influences directly in the user interface. We validate InFiConD via a robust usage scenario and user study. Our findings indicate that InFiConD's human-in-the-loop and visualization-driven approach enables users to effectively create and analyze student models, understand how knowledge is transferred, and efficiently perform fine-tuning operations. We discuss how this work highlights the potential of interactive and visual methods in making knowledge distillation and subsequent no-code fine-tuning more accessible and adaptable to a wider range of users with domain-specific demands. | [
"['Jinbin Huang' 'Wenbin He' 'Liang Gou' 'Liu Ren' 'Chris Bryan']"
] |
null | null | 2406.17876 | null | null | http://arxiv.org/pdf/2406.17876v1 | 2024-06-25T18:35:13Z | 2024-06-25T18:35:13Z | ET tu, CLIP? Addressing Common Object Errors for Unseen Environments | We introduce a simple method that employs pre-trained CLIP encoders to enhance model generalization in the ALFRED task. In contrast to previous literature where CLIP replaces the visual encoder, we suggest using CLIP as an additional module through an auxiliary object detection objective. We validate our method on the recently proposed Episodic Transformer architecture and demonstrate that incorporating CLIP improves task performance on the unseen validation set. Additionally, our analysis results support that CLIP especially helps with leveraging object descriptions, detecting small objects, and interpreting rare words. | [
"['Ye Won Byun' 'Cathy Jiao' 'Shahriar Noroozizadeh' 'Jimin Sun'\n 'Rosa Vitiello']"
] |
null | null | 2406.17885 | null | null | http://arxiv.org/pdf/2406.17885v1 | 2024-06-25T18:47:50Z | 2024-06-25T18:47:50Z | Enabling Regional Explainability by Automatic and Model-agnostic Rule
Extraction | In Explainable AI, rule extraction translates model knowledge into logical rules, such as IF-THEN statements, crucial for understanding patterns learned by black-box models. This could significantly aid in fields like disease diagnosis, disease progression estimation, or drug discovery. However, such application domains often contain imbalanced data, with the class of interest underrepresented. Existing methods inevitably compromise the performance of rules for the minority class to maximise the overall performance. As the first attempt in this field, we propose a model-agnostic approach for extracting rules from specific subgroups of data, featuring automatic rule generation for numerical features. This method enhances the regional explainability of machine learning models and offers wider applicability compared to existing methods. We additionally introduce a new method for selecting features to compose rules, reducing computational costs in high-dimensional spaces. Experiments across various datasets and models demonstrate the effectiveness of our methods. | [
"['Yu Chen' 'Tianyu Cui' 'Alexander Capstick' 'Nan Fletcher-Loyd'\n 'Payam Barnaghi']"
] |
null | null | 2406.17887 | null | null | http://arxiv.org/pdf/2406.17887v1 | 2024-06-25T18:51:08Z | 2024-06-25T18:51:08Z | Federated Dynamical Low-Rank Training with Global Loss Convergence
Guarantees | In this work, we propose a federated dynamical low-rank training (FeDLRT) scheme to reduce client compute and communication costs - two significant performance bottlenecks in horizontal federated learning. Our method builds upon dynamical low-rank splitting schemes for manifold-constrained optimization to create a global low-rank basis of network weights, which enables client training on a small coefficient matrix. A consistent global low-rank basis allows us to incorporate a variance correction scheme and prove global loss descent and convergence to a stationary point. Dynamic augmentation and truncation of the low-rank bases automatically optimizes computing and communication resource utilization. We demonstrate the efficiency of FeDLRT in an array of computer vision benchmarks and show a reduction of client compute and communication costs by up to an order of magnitude with minimal impacts on global accuracy. | [
"['Steffen Schotthöfer' 'M. Paul Laiu']"
] |
null | null | 2406.17888 | null | null | http://arxiv.org/pdf/2406.17888v1 | 2024-06-25T18:52:48Z | 2024-06-25T18:52:48Z | CTBench: A Comprehensive Benchmark for Evaluating Language Model
Capabilities in Clinical Trial Design | CTBench is introduced as a benchmark to assess language models (LMs) in aiding clinical study design. Given study-specific metadata, CTBench evaluates AI models' ability to determine the baseline features of a clinical trial (CT), which include demographic and relevant features collected at the trial's start from all participants. These baseline features, typically presented in CT publications (often as Table 1), are crucial for characterizing study cohorts and validating results. Baseline features, including confounders and covariates, are also necessary for accurate treatment effect estimation in studies involving observational data. CTBench consists of two datasets: "CT-Repo," containing baseline features from 1,690 clinical trials sourced from clinicaltrials.gov, and "CT-Pub," a subset of 100 trials with more comprehensive baseline features gathered from relevant publications. Two LM-based evaluation methods are developed to compare the actual baseline feature lists against LM-generated responses. "ListMatch-LM" and "ListMatch-BERT" use GPT-4o and BERT scores (at various thresholds), respectively, for evaluation. To establish baseline results, advanced prompt engineering techniques using LLaMa3-70B-Instruct and GPT-4o in zero-shot and three-shot learning settings are applied to generate potential baseline features. The performance of GPT-4o as an evaluator is validated through human-in-the-loop evaluations on the CT-Pub dataset, where clinical experts confirm matches between actual and LM-generated features. The results highlight a promising direction with significant potential for improvement, positioning CTBench as a useful tool for advancing research on AI in CT design and potentially enhancing the efficacy and robustness of CTs. | [
"['Nafis Neehal' 'Bowen Wang' 'Shayom Debopadhaya' 'Soham Dan'\n 'Keerthiram Murugesan' 'Vibha Anand' 'Kristin P. Bennett']"
] |
null | null | 2406.17890 | null | null | http://arxiv.org/pdf/2406.17890v1 | 2024-06-25T18:58:39Z | 2024-06-25T18:58:39Z | SigKAN: Signature-Weighted Kolmogorov-Arnold Networks for Time Series | We propose a novel approach that enhances multivariate function approximation using learnable path signatures and Kolmogorov-Arnold networks (KANs). We enhance the learning capabilities of these networks by weighting the values obtained by KANs using learnable path signatures, which capture important geometric features of paths. This combination allows for a more comprehensive and flexible representation of sequential and temporal data. We demonstrate through studies that our SigKANs with learnable path signatures perform better than conventional methods across a range of function approximation challenges. By leveraging path signatures in neural networks, this method offers intriguing opportunities to enhance performance in time series analysis and time series forecasting, among other fields. | [
"['Hugo Inzirillo' 'Remi Genet']"
] |
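The path-signature features that weight the KAN outputs above can be computed directly for piecewise-linear paths using Chen's relation. The depth-2 truncation below is an illustrative sketch of what a signature is, not the paper's learnable-signature layer; the depth and variable names are our own choices:

```python
import numpy as np

def signature_depth2(path):
    """Depth-2 signature of a path given as a (T, d) array: level 1 is
    the vector of total increments, level 2 holds the iterated integrals
    S_ij = int_{s<t} dX_i(s) dX_j(t), built segment by segment with
    Chen's relation."""
    dX = np.diff(path, axis=0)        # per-segment increments, (T-1, d)
    d = path.shape[1]
    level1 = dX.sum(axis=0)
    level2 = np.zeros((d, d))
    running = np.zeros(d)             # accumulated increment before each segment
    for inc in dX:
        level2 += np.outer(running, inc)    # cross terms with the past
        level2 += 0.5 * np.outer(inc, inc)  # within-segment contribution
        running += inc
    return level1, level2
```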
null | null | 2406.17894 | null | null | http://arxiv.org/pdf/2406.17894v1 | 2024-06-25T19:07:21Z | 2024-06-25T19:07:21Z | Efficient and Effective Implicit Dynamic Graph Neural Network | Implicit graph neural networks have gained popularity in recent years as they capture long-range dependencies while improving predictive performance in static graphs. Despite the tussle between performance degradation due to the oversmoothing of learned embeddings and long-range dependency being more pronounced in dynamic graphs, as features are aggregated both across neighborhood and time, no prior work has proposed an implicit graph neural model in a dynamic setting. In this paper, we present Implicit Dynamic Graph Neural Network (IDGNN) a novel implicit neural network for dynamic graphs which is the first of its kind. A key characteristic of IDGNN is that it demonstrably is well-posed, i.e., it is theoretically guaranteed to have a fixed-point representation. We then demonstrate that the standard iterative algorithm often used to train implicit models is computationally expensive in our dynamic setting as it involves computing gradients, which themselves have to be estimated in an iterative manner. To overcome this, we pose an equivalent bilevel optimization problem and propose an efficient single-loop training algorithm that avoids iterative computation by maintaining moving averages of key components of the gradients. We conduct extensive experiments on real-world datasets on both classification and regression tasks to demonstrate the superiority of our approach over the state-of-the-art baselines. We also demonstrate that our bi-level optimization framework maintains the performance of the expensive iterative algorithm while obtaining up to textbf{1600x} speed-up. | [
"['Yongjian Zhong' 'Hieu Vu' 'Tianbao Yang' 'Bijaya Adhikari']"
] |
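The IDGNN abstract above hinges on embeddings defined as a fixed point rather than a finite stack of layers. Below is a minimal sketch of that general idea, not the authors' IDGNN: embeddings solve Z = tanh(A Z W + X B) by naive fixed-point iteration, with well-posedness encouraged by rescaling W so its Frobenius norm (an upper bound on the spectral norm) stays below 1; the contraction argument assumes a normalized adjacency with norm at most 1.

```python
import torch

class ImplicitGraphLayer(torch.nn.Module):
    """Embeddings are the fixed point of Z = tanh(A @ Z @ W + X @ B)."""

    def __init__(self, in_dim, hid_dim, kappa=0.9):
        super().__init__()
        self.W = torch.nn.Parameter(torch.randn(hid_dim, hid_dim) * 0.1)
        self.B = torch.nn.Parameter(torch.randn(in_dim, hid_dim) * 0.1)
        self.kappa = kappa  # target bound on ||W||, keeps the map contractive

    def forward(self, A, X, n_iters=50, tol=1e-5):
        # Rescale W so ||W||_F <= kappa < 1 (illustrative well-posedness device).
        W = self.kappa * self.W / self.W.norm().clamp(min=self.kappa)
        Z = torch.zeros(X.shape[0], W.shape[0])
        for _ in range(n_iters):
            Z_new = torch.tanh(A @ Z @ W + X @ self.B)
            if (Z_new - Z).norm() < tol:
                return Z_new
            Z = Z_new
        return Z

A = torch.eye(5) * 0.5          # stand-in for a normalized adjacency matrix
X = torch.randn(5, 8)           # node features
Z = ImplicitGraphLayer(in_dim=8, hid_dim=16)(A, X)
```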
null | null | 2406.17899 | null | null | http://arxiv.org/pdf/2406.17899v1 | 2024-06-25T19:20:10Z | 2024-06-25T19:20:10Z | Entity Augmentation for Efficient Classification of Vertically
Partitioned Data with Limited Overlap | Vertical Federated Learning (VFL) is a machine learning paradigm for learning from vertically partitioned data (i.e. features for each input are distributed across multiple "guest" clients and an aggregating "host" server owns labels) without communicating raw data. Traditionally, VFL involves an "entity resolution" phase where the host identifies and serializes the unique entities known to all guests. This is followed by private set intersection to find common entities, and an "entity alignment" step to ensure all guests are always processing the same entity's data. However, using only data of entities from the intersection means guests discard potentially useful data. Besides, the effect on privacy is dubious and these operations are computationally expensive. We propose a novel approach that eliminates the need for set intersection and entity alignment in categorical tasks. Our Entity Augmentation technique generates meaningful labels for activations sent to the host, regardless of their originating entity, enabling efficient VFL without explicit entity alignment. With limited overlap between training data, this approach performs substantially better (e.g. with 5% overlap, 48.1% vs 69.48% test accuracy on CIFAR-10). In fact, thanks to the regularizing effect, our model performs marginally better even with 100% overlap. | [
"['Avi Amalanshu' 'Viswesh Nagaswamy' 'G. V. S. S. Prudhvi' 'Yash Sirvi'\n 'Debashish Chakravarty']"
] |
null | null | 2406.17902 | null | null | http://arxiv.org/pdf/2406.17902v1 | 2024-06-25T19:26:39Z | 2024-06-25T19:26:39Z | Domain Adaptation of Echocardiography Segmentation Via Reinforcement
Learning | The performance of deep learning segmentation models is significantly challenged by limited transferability across different medical imaging domains, particularly when aiming to adapt these models to a target domain with insufficient annotated data for effective fine-tuning. While existing domain adaptation (DA) methods propose strategies to alleviate this problem, these methods do not explicitly incorporate human-verified segmentation priors, compromising the potential of a model to produce anatomically plausible segmentations. We introduce RL4Seg, an innovative reinforcement learning framework that reduces the need to otherwise incorporate large expertly annotated datasets in the target domain, and eliminates the need for lengthy manual human review. Using a target dataset of 10,000 unannotated 2D echocardiographic images, RL4Seg not only outperforms existing state-of-the-art DA methods in accuracy but also achieves 99% anatomical validity on a subset of 220 expert-validated subjects from the target domain. Furthermore, our framework's reward network offers uncertainty estimates comparable to those of dedicated state-of-the-art uncertainty methods, demonstrating the utility and effectiveness of RL4Seg in overcoming domain adaptation challenges in medical image segmentation. | [
"['Arnaud Judge' 'Thierry Judge' 'Nicolas Duchateau' 'Roman A. Sandler'\n 'Joseph Z. Sokol' 'Olivier Bernard' 'Pierre-Marc Jodoin']"
] |
null | null | 2406.17916 | null | null | http://arxiv.org/pdf/2406.17916v1 | 2024-06-25T19:56:21Z | 2024-06-25T19:56:21Z | Camera Model Identification Using Audio and Visual Content from Videos | The identification of device brands and models plays a pivotal role in the realm of multimedia forensic applications. This paper presents a framework capable of identifying devices using audio, visual content, or a fusion of them. The fusion of visual and audio content occurs later by applying two fundamental fusion rules: the product and the sum. The device identification problem is tackled as a classification one by leveraging Convolutional Neural Networks. Experimental evaluation illustrates that the proposed framework exhibits promising classification performance when independently using audio or visual content. Furthermore, although the fusion results don't consistently surpass both individual modalities, they demonstrate promising potential for enhancing classification performance. Future research could refine the fusion process to improve classification performance in both modalities consistently. Finally, a statistical significance test is performed for a more in-depth study of the classification results. | [
"['Ioannis Tsingalis' 'Christos Korgialas' 'Constantine Kotropoulos']"
] |
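The late fusion described above combines per-class posteriors from the audio and visual branches with two classical rules, the sum and the product. A toy illustration with placeholder probabilities:

```python
import numpy as np

p_audio = np.array([0.6, 0.3, 0.1])   # P(device class | audio content)
p_visual = np.array([0.5, 0.1, 0.4])  # P(device class | visual content)

p_sum = (p_audio + p_visual) / 2      # sum rule: average the posteriors

p_prod = p_audio * p_visual           # product rule: multiply, renormalize
p_prod /= p_prod.sum()

print("sum rule prediction:", p_sum.argmax())
print("product rule prediction:", p_prod.argmax())
```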
null | null | 2406.17918 | null | null | http://arxiv.org/pdf/2406.17918v2 | 2024-07-02T20:24:13Z | 2024-06-25T20:00:32Z | GraphSnapShot: Graph Machine Learning Acceleration with Fast Storage and
Retrieval | In our recent research, we have developed a framework called GraphSnapShot, which has proven a useful tool for graph learning acceleration. GraphSnapShot is a framework for fast caching, storage, retrieval, and computation for graph learning. It can quickly store and update the local topology of a graph structure and allows us to track patterns in the structure of graph networks, much like taking snapshots of the graphs. In experiments, GraphSnapShot shows its efficiency: it can achieve up to 30% training acceleration and 73% memory reduction for lossless graph ML training compared to current baselines such as DGL. This technique is particularly useful for large dynamic graph learning tasks such as social media analysis and recommendation systems, which must process complex relationships between entities. | [
"['Dong Liu' 'Roger Waleffe' 'Meng Jiang' 'Shivaram Venkataraman']"
] |
null | null | 2406.17931 | null | null | http://arxiv.org/abs/2406.17931v2 | 2024-06-27T03:01:29Z | 2024-06-25T20:43:15Z | CAT: Interpretable Concept-based Taylor Additive Models | As an emerging interpretable technique, Generalized Additive Models (GAMs) adopt neural networks to individually learn non-linear functions for each feature, which are then combined through a linear model for final predictions. Although GAMs can explain deep neural networks (DNNs) at the feature level, they require large numbers of model parameters and are prone to overfitting, making them hard to train and scale. Additionally, in real-world datasets with many features, the interpretability of feature-based explanations diminishes for humans. To tackle these issues, recent research has shifted towards concept-based interpretable methods. These approaches try to integrate concept learning as an intermediate step before making predictions, explaining the predictions in terms of human-understandable concepts. However, these methods require domain experts to extensively label concepts with relevant names and their ground-truth values. In response, we propose CAT, a novel interpretable Concept-bAsed Taylor additive model to simplify this process. CAT does not require domain experts to annotate concepts and their ground-truth values. Instead, it only requires users to simply categorize input features into broad groups, which can be easily accomplished through a quick metadata review. Specifically, CAT first embeds each group of input features into a one-dimensional high-level concept representation, and then feeds the concept representations into a new white-box Taylor Neural Network (TaylorNet). The TaylorNet aims to learn the non-linear relationship between the inputs and outputs using polynomials. Evaluation results across multiple benchmarks demonstrate that CAT can outperform or compete with the baselines while reducing the need for extensive model parameters. Importantly, it can explain model predictions through high-level concepts that humans can understand. | [
"['Viet Duong' 'Qiong Wu' 'Zhengyi Zhou' 'Hongjue Zhao' 'Chenxiang Luo'\n 'Eric Zavesky' 'Huaxiu Yao' 'Huajie Shao']"
] |
null | null | 2406.17936 | null | null | http://arxiv.org/pdf/2406.17936v1 | 2024-06-25T20:56:41Z | 2024-06-25T20:56:41Z | Hot-Distance: Combining One-Hot and Signed Distance Embeddings for
Segmentation | Machine learning models are only as good as the data to which they are fit. As such, it is always preferable to use as much data as possible in training models. What data can be used for fitting a model depends a lot on the formulation of the task. We introduce Hot-Distance, a novel segmentation target that incorporates the strength of signed boundary distance prediction with the flexibility of one-hot encoding, to increase the amount of usable training data for segmentation of subcellular structures in focused ion beam scanning electron microscopy (FIB-SEM). | [
"['Marwan Zouinkhi' 'Jeff L. Rhoades' 'Aubrey V. Weigel']"
] |
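The abstract above combines a one-hot segmentation target with a signed boundary distance. A plausible construction for a binary mask is sketched below; the tanh squashing and its scale are assumptions of this sketch, not details from the paper.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def hot_distance_target(mask, scale=5.0):
    """Stack a 2-channel one-hot encoding with a bounded signed distance."""
    mask = mask.astype(bool)
    one_hot = np.stack([~mask, mask]).astype(np.float32)   # [2, H, W]
    inside = distance_transform_edt(mask)                  # distance to background
    outside = distance_transform_edt(~mask)                # distance to object
    signed = (inside - outside).astype(np.float32)         # + inside, - outside
    signed_bounded = np.tanh(signed / scale)[None]         # squash to (-1, 1)
    return np.concatenate([one_hot, signed_bounded], axis=0)

mask = np.zeros((64, 64), dtype=np.uint8)
mask[20:40, 20:40] = 1
target = hot_distance_target(mask)    # shape (3, 64, 64)
```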
null | null | 2406.17949 | null | null | http://arxiv.org/pdf/2406.17949v1 | 2024-06-25T21:51:43Z | 2024-06-25T21:51:43Z | The Overcooked Generalisation Challenge | We introduce the Overcooked Generalisation Challenge (OGC) - the first benchmark to study agents' zero-shot cooperation abilities when faced with novel partners and levels in the Overcooked-AI environment. This perspective starkly contrasts a large body of previous work that has trained and evaluated cooperating agents only on the same level, failing to capture generalisation abilities required for real-world human-AI cooperation. Our challenge interfaces with state-of-the-art dual curriculum design (DCD) methods to generate auto-curricula for training general agents in Overcooked. It is the first cooperative multi-agent environment specially designed for DCD methods and, consequently, the first benchmarked with state-of-the-art methods. It is fully GPU-accelerated, built on the DCD benchmark suite minimax, and freely available under an open-source license: https://git.hcics.simtech.uni-stuttgart.de/public-projects/OGC. We show that current DCD algorithms struggle to produce useful policies in this novel challenge, even if combined with recent network architectures that were designed for scalability and generalisability. The OGC pushes the boundaries of real-world human-AI cooperation by enabling the research community to study the impact of generalisation on cooperating agents. | [
"['Constantin Ruhdorfer' 'Matteo Bortoletto' 'Anna Penzkofer'\n 'Andreas Bulling']"
] |
null | null | 2406.17951 | null | null | http://arxiv.org/pdf/2406.17951v1 | 2024-06-25T21:57:26Z | 2024-06-25T21:57:26Z | Navigating High-Degree Heterogeneity: Federated Learning in Aerial and
Space Networks | Federated learning offers a compelling solution to the challenges of networking and data privacy within aerial and space networks by utilizing vast private edge data and computing capabilities accessible through drones, balloons, and satellites. While current research has focused on optimizing the learning process, computing efficiency, and minimizing communication overhead, the issue of heterogeneity and class imbalance remains a significant barrier to rapid model convergence. In our study, we explore the influence of heterogeneity on class imbalance, which diminishes performance in ASN-based federated learning. We illustrate the correlation between heterogeneity and class imbalance within grouped data and show how constraints such as battery life exacerbate the class imbalance challenge. Our findings indicate that ASN-based FL faces heightened class imbalance issues even with similar levels of heterogeneity compared to other scenarios. Finally, we analyze the impact of varying degrees of heterogeneity on FL training and evaluate the efficacy of current state-of-the-art algorithms under these conditions. Our results reveal that the heterogeneity challenge is more pronounced in ASN-based federated learning and that prevailing algorithms often fail to effectively address high levels of heterogeneity. | [
"['Fan Dong' 'Henry Leung' 'Steve Drew']"
] |
null | null | 2406.17952 | null | null | http://arxiv.org/pdf/2406.17952v1 | 2024-06-25T21:58:37Z | 2024-06-25T21:58:37Z | LINSCAN -- A Linearity Based Clustering Algorithm | DBSCAN and OPTICS are powerful algorithms for identifying clusters of points in domains where few assumptions can be made about the structure of the data. In this paper, we leverage these strengths and introduce a new algorithm, LINSCAN, designed to seek lineated clusters that are difficult to find and isolate with existing methods. In particular, by embedding points as normal distributions approximating their local neighborhoods and leveraging a distance function derived from the Kullback Leibler Divergence, LINSCAN can detect and distinguish lineated clusters that are spatially close but have orthogonal covariances. We demonstrate how LINSCAN can be applied to seismic data to identify active faults, including intersecting faults, and determine their orientation. Finally, we discuss the properties a generalization of DBSCAN and OPTICS must have in order to retain the stability benefits of these algorithms. | [
"['Andrew Dennehy' 'Xiaoyu Zou' 'Shabnam J. Semnani' 'Yuri Fialko'\n 'Alexander Cloninger']"
] |
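The distance at the heart of the LINSCAN abstract above embeds each point as a Gaussian fitted to its local neighborhood and compares embeddings via a KL-derived distance. A sketch using a symmetrized Kullback-Leibler divergence follows; the neighborhood size, regularization, and use of the plain symmetrized KL (rather than the paper's exact derived distance) are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def local_gaussian(points, tree, idx, k=10, eps=1e-6):
    """Fit a Gaussian (mean, covariance) to the k-NN neighborhood of a point."""
    _, nn = tree.query(points[idx], k=k)
    nbhd = points[nn]
    return nbhd.mean(axis=0), np.cov(nbhd.T) + eps * np.eye(points.shape[1])

def kl_gauss(mu0, cov0, mu1, cov1):
    """KL(N(mu0, cov0) || N(mu1, cov1)) in closed form."""
    d, inv1, diff = mu0.shape[0], np.linalg.inv(cov1), mu1 - mu0
    return 0.5 * (np.trace(inv1 @ cov0) + diff @ inv1 @ diff - d
                  + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))

def sym_kl_distance(points, i, j, k=10):
    tree = cKDTree(points)
    gi = local_gaussian(points, tree, i, k)
    gj = local_gaussian(points, tree, j, k)
    return kl_gauss(*gi, *gj) + kl_gauss(*gj, *gi)

pts = np.random.default_rng(0).normal(size=(200, 2))
print(sym_kl_distance(pts, 0, 1))
```

Note that two Gaussians with nearby means but orthogonal elongated covariances receive a large distance under this measure, which is what lets the method separate lineated clusters that are spatially close.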
null | null | 2406.17954 | null | null | http://arxiv.org/pdf/2406.17954v1 | 2024-06-25T22:06:40Z | 2024-06-25T22:06:40Z | Why Line Search when you can Plane Search? SO-Friendly Neural Networks
allow Per-Iteration Optimization of Learning and Momentum Rates for Every
Layer | We introduce the class of SO-friendly neural networks, which includes several models used in practice, including networks with 2 layers of hidden weights where the number of inputs is larger than the number of outputs. SO-friendly networks have the property that performing a precise line search to set the step size on each iteration has the same asymptotic cost during full-batch training as using a fixed learning rate. Further, for the same cost a plane search can be used to set both the learning and momentum rate on each step. Even further, SO-friendly networks also allow us to use subspace optimization to set a learning rate and momentum rate for each layer on each iteration. We explore augmenting gradient descent as well as quasi-Newton methods and Adam with line optimization and subspace optimization, and our experiments indicate that this gives fast and reliable ways to train these networks that are insensitive to hyper-parameters. | [
"['Betty Shea' 'Mark Schmidt']"
] |
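The plane search mentioned above sets the learning rate and momentum rate jointly on every iteration. A brute-force toy version on a quadratic, minimizing over a small grid of (alpha, beta) pairs, conveys the idea; the paper's point is that for SO-friendly networks this two-dimensional subproblem can be solved efficiently, not by the grid search used here.

```python
import numpy as np

H = np.diag([1.0, 10.0])            # ill-conditioned toy quadratic
loss = lambda w: 0.5 * w @ H @ w
grad = lambda w: H @ w

w = np.array([5.0, 5.0])
m = np.zeros_like(w)                # previous step (momentum direction)
alphas = np.geomspace(1e-3, 1.0, 8)
betas = np.linspace(0.0, 0.9, 8)

for _ in range(50):
    g = grad(w)
    # Plane search: pick (alpha, beta) minimizing the loss over the 2-D
    # family of candidate updates w - alpha * g + beta * m.
    _, a, b = min((loss(w - a * g + b * m), a, b)
                  for a in alphas for b in betas)
    m = -a * g + b * m
    w = w + m

print("final loss:", loss(w))
```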
null | null | 2406.17963 | null | null | http://arxiv.org/pdf/2406.17963v2 | 2024-06-28T06:44:45Z | 2024-06-25T22:44:53Z | Empowering Interdisciplinary Insights with Dynamic Graph Embedding
Trajectories | We developed DyGETViz, a novel framework for effectively visualizing dynamic graphs (DGs) that are ubiquitous across diverse real-world systems. This framework leverages recent advancements in discrete-time dynamic graph (DTDG) models to adeptly handle the temporal dynamics inherent in dynamic graphs. DyGETViz effectively captures both micro- and macro-level structural shifts within these graphs, offering a robust method for representing complex and massive dynamic graphs. The application of DyGETViz extends to a diverse array of domains, including ethology, epidemiology, finance, genetics, linguistics, communication studies, social studies, and international relations. Through its implementation, DyGETViz has revealed or confirmed various critical insights. These include the diversity of content sharing patterns and the degree of specialization within online communities, the chronological evolution of lexicons across decades, and the distinct trajectories exhibited by aging-related and non-related genes. Importantly, DyGETViz enhances the accessibility of scientific findings to non-domain experts by simplifying the complexities of dynamic graphs. Our framework is released as an open-source Python package for use across diverse disciplines. Our work not only addresses the ongoing challenges in visualizing and analyzing DTDG models but also establishes a foundational framework for future investigations into dynamic graph representation and analysis across various disciplines. | [
"['Yiqiao Jin' 'Andrew Zhao' 'Yeon-Chang Lee' 'Meng Ye' 'Ajay Divakaran'\n 'Srijan Kumar']"
] |
null | null | 2406.17968 | null | null | http://arxiv.org/pdf/2406.17968v1 | 2024-06-25T22:50:48Z | 2024-06-25T22:50:48Z | Efficient Document Ranking with Learnable Late Interactions | Cross-Encoder (CE) and Dual-Encoder (DE) models are two fundamental approaches for query-document relevance in information retrieval. To predict relevance, CE models use joint query-document embeddings, while DE models maintain factorized query and document embeddings; usually, the former has higher quality while the latter benefits from lower latency. Recently, late-interaction models have been proposed to realize more favorable latency-quality tradeoffs, by using a DE structure followed by a lightweight scorer based on query and document token embeddings. However, these lightweight scorers are often hand-crafted, and there is no understanding of their approximation power; further, such scorers require access to individual document token embeddings, which imposes an increased latency and storage burden. In this paper, we propose novel learnable late-interaction models (LITE) that resolve these issues. Theoretically, we prove that LITE is a universal approximator of continuous scoring functions, even for relatively small embedding dimension. Empirically, LITE outperforms previous late-interaction models such as ColBERT on both in-domain and zero-shot re-ranking tasks. For instance, experiments on MS MARCO passage re-ranking show that LITE not only yields a model with better generalization, but also lowers latency and requires 0.25x storage compared to ColBERT. | [
"['Ziwei Ji' 'Himanshu Jain' 'Andreas Veit' 'Sashank J. Reddi'\n 'Sadeep Jayasumana' 'Ankit Singh Rawat' 'Aditya Krishna Menon' 'Felix Yu'\n 'Sanjiv Kumar']"
] |
null | null | 2406.17972 | null | null | http://arxiv.org/pdf/2406.17972v1 | 2024-06-25T23:07:18Z | 2024-06-25T23:07:18Z | LABOR-LLM: Language-Based Occupational Representations with Large
Language Models | Many empirical studies of labor market questions rely on estimating relatively simple predictive models using small, carefully constructed longitudinal survey datasets based on hand-engineered features. Large Language Models (LLMs), trained on massive datasets, encode vast quantities of world knowledge and can be used for the next job prediction problem. However, while an off-the-shelf LLM produces plausible career trajectories when prompted, the probability with which an LLM predicts a particular job transition conditional on career history will not, in general, align with the true conditional probability in a given population. Recently, Vafa et al. (2024) introduced a transformer-based "foundation model", CAREER, trained using a large, unrepresentative resume dataset, that predicts transitions between jobs; it further demonstrated how transfer learning techniques can be used to leverage the foundation model to build better predictive models of both transitions and wages that reflect conditional transition probabilities found in nationally representative survey datasets. This paper considers an alternative where the fine-tuning of the CAREER foundation model is replaced by fine-tuning LLMs. For the task of next job prediction, we demonstrate that models trained with our approach outperform several alternatives in terms of predictive performance on the survey data, including traditional econometric models, CAREER, and LLMs with in-context learning, even though the LLM can in principle predict job titles that are not allowed in the survey data. Further, we show that our fine-tuned LLM-based models' predictions are more representative of the career trajectories of various workforce subpopulations than off-the-shelf LLM models and CAREER. We conduct experiments and analyses that highlight the sources of the gains in the performance of our models for representative predictions. | [
"['Tianyu Du' 'Ayush Kanodia' 'Herman Brunborg' 'Keyon Vafa' 'Susan Athey']"
] |
null | null | 2406.17975 | null | null | http://arxiv.org/pdf/2406.17975v1 | 2024-06-25T23:12:07Z | 2024-06-25T23:12:07Z | Inherent Challenges of Post-Hoc Membership Inference for Large Language
Models | Large Language Models (LLMs) are often trained on vast amounts of undisclosed data, motivating the development of post-hoc Membership Inference Attacks (MIAs) to gain insight into their training data composition. However, in this paper, we identify inherent challenges in post-hoc MIA evaluation due to potential distribution shifts between collected member and non-member datasets. Using a simple bag-of-words classifier, we demonstrate that datasets used in recent post-hoc MIAs suffer from significant distribution shifts, in some cases achieving near-perfect distinction between members and non-members. This implies that previously reported high MIA performance may be largely attributable to these shifts rather than model memorization. We confirm that randomized, controlled setups eliminate such shifts and thus enable the development and fair evaluation of new MIAs. However, we note that such randomized setups are rarely available for the latest LLMs, making post-hoc data collection still required to infer membership for real-world LLMs. As a potential solution, we propose a Regression Discontinuity Design (RDD) approach for post-hoc data collection, which substantially mitigates distribution shifts. Evaluating various MIA methods on this RDD setup yields performance barely above random guessing, in stark contrast to previously reported results. Overall, our findings highlight the challenges in accurately measuring LLM memorization and the need for careful experimental design in (post-hoc) membership inference tasks. | [
"['Matthieu Meeus' 'Shubham Jain' 'Marek Rei' 'Yves-Alexandre de Montjoye']"
] |
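The diagnostic above is simple to reproduce in spirit: if a bag-of-words classifier separates "member" from "non-member" texts well above chance, the two sets differ in distribution and MIA scores on them are confounded. A sketch with placeholder data:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

members = ["example member document one", "another member text"]
non_members = ["a non-member document", "fresh text from after the cutoff"]
texts = members + non_members
labels = [1] * len(members) + [0] * len(non_members)

clf = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))
auc = cross_val_score(clf, texts, labels, cv=2, scoring="roc_auc").mean()
# AUC near 0.5: no detectable shift; AUC near 1.0: strong distribution
# shift between member and non-member sets, confounding MIA evaluation.
print("bag-of-words AUC:", auc)
```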
null | null | 2406.17989 | null | null | http://arxiv.org/pdf/2406.17989v1 | 2024-06-26T00:11:13Z | 2024-06-26T00:11:13Z | Learning Neural Networks with Sparse Activations | A core component present in many successful neural network architectures is an MLP block of two fully connected layers with a non-linear activation in between. An intriguing phenomenon observed empirically, including in transformer architectures, is that, after training, the activations in the hidden layer of this MLP block tend to be extremely sparse on any given input. Unlike traditional forms of sparsity, where there are neurons/weights which can be deleted from the network, this form of dynamic activation sparsity appears to be harder to exploit to get more efficient networks. Motivated by this, we initiate a formal study of PAC learnability of MLP layers that exhibit activation sparsity. We present a variety of results showing that such classes of functions do lead to provable computational and statistical advantages over their non-sparse counterparts. Our hope is that a better theoretical understanding of sparsely activated networks would lead to methods that can exploit activation sparsity in practice. | [
"['Pranjal Awasthi' 'Nishanth Dikkala' 'Pritish Kamath' 'Raghu Meka']"
] |
null | null | 2406.17990 | null | null | http://arxiv.org/pdf/2406.17990v1 | 2024-06-26T00:12:08Z | 2024-06-26T00:12:08Z | Explicit Diversity Conditions for Effective Question Answer Generation
with Large Language Models | Question Answer Generation (QAG) is an effective data augmentation technique to improve the accuracy of question answering systems, especially in low-resource domains. While recent pretrained and large language model-based QAG methods have made substantial progress, they face the critical issue of redundant QA pair generation, affecting downstream QA systems. Implicit diversity techniques such as sampling and diverse beam search are proven effective solutions but often yield lower diversity. We present explicit diversity conditions for QAG, focusing on spatial aspects, question types, and entities, substantially increasing diversity in QA generation. Our work emphasizes the need for explicit diversity conditions for generating diverse synthetic question-answer data by showing significant improvements in downstream QA tasks over existing widely adopted implicit diversity techniques. In particular, QA pairs generated under explicit diversity conditions, when used to train the downstream QA model, result in an average 4.1% exact match and 4.5% F1 improvement over QAG from implicit sampling techniques on SQuADDU. The need for explicit diversity conditions is even more pronounced in low-resource datasets (SubjQA), where average downstream QA performance improvements are around 12% EM. | [
"['Vikas Yadav' 'Hyuk Joon Kwon' 'Vijay Srinivasan' 'Hongxia Jin']"
] |
null | null | 2406.18020 | null | null | http://arxiv.org/pdf/2406.18020v1 | 2024-06-26T02:26:50Z | 2024-06-26T02:26:50Z | MolFusion: Multimodal Fusion Learning for Molecular Representations via
Multi-granularity Views | Artificial Intelligence predicts drug properties by encoding drug molecules, aiding in the rapid screening of candidates. Different molecular representations, such as SMILES and molecule graphs, contain complementary information for molecular encoding. Thus, exploiting complementary information from different molecular representations is one of the research priorities in molecular encoding. Most existing methods for combining molecular multi-modalities only use molecular-level information, making it hard to encode intra-molecular alignment information between different modalities. To address this issue, we propose MolFusion, a multi-granularity fusion method. The proposed MolFusion consists of two key components: (1) MolSim, a molecular-level encoding component that achieves molecular-level alignment between different molecular representations; and (2) AtomAlign, an atomic-level encoding component that achieves atomic-level alignment between different molecular representations. Experimental results show that MolFusion effectively utilizes complementary multimodal information, leading to significant improvements in performance across various classification and regression tasks. | [
"['Muzhen Cai' 'Sendong Zhao' 'Haochun Wang' 'Yanrui Du' 'Zewen Qiang'\n 'Bing Qin' 'Ting Liu']"
] |
null | null | 2406.18021 | null | null | http://arxiv.org/pdf/2406.18021v1 | 2024-06-26T02:32:59Z | 2024-06-26T02:32:59Z | SC-MoE: Switch Conformer Mixture of Experts for Unified Streaming and
Non-streaming Code-Switching ASR | In this work, we propose a Switch-Conformer-based MoE system named SC-MoE for unified streaming and non-streaming code-switching (CS) automatic speech recognition (ASR), where we design a streaming MoE layer consisting of three language experts, which correspond to Mandarin, English, and blank, respectively, and equip it with a language identification (LID) network with a Connectionist Temporal Classification (CTC) loss as a router in the encoder of SC-MoE to achieve a real-time streaming CS ASR system. To further utilize the language information embedded in text, we also incorporate MoE layers into the decoder of SC-MoE. In addition, we introduce routers into every MoE layer of the encoder and the decoder and achieve better recognition performance. Experimental results show that SC-MoE significantly improves CS ASR performance over the baseline with comparable computational efficiency. | [
"['Shuaishuai Ye' 'Shunfei Chen' 'Xinhui Hu' 'Xinkang Xu']"
] |
null | null | 2406.18022 | null | null | http://arxiv.org/pdf/2406.18022v1 | 2024-06-26T02:34:48Z | 2024-06-26T02:34:48Z | AutoOPE: Automated Off-Policy Estimator Selection | The Off-Policy Evaluation (OPE) problem consists of evaluating the performance of counterfactual policies with data collected by another one. This problem is of utmost importance for various application domains, e.g., recommendation systems, medical treatments, and many others. To solve the OPE problem, we resort to estimators, which aim to estimate in the most accurate way possible the performance that the counterfactual policies would have had if they were deployed in place of the logging policy. In the literature, several estimators have been developed, all with different characteristics and theoretical guarantees. Therefore, there is no dominant estimator, and each estimator may be the best one for different OPE problems, depending on the characteristics of the dataset at hand. While the selection of the estimator is a crucial choice for an accurate OPE, this problem has been widely overlooked in the literature. We propose an automated data-driven OPE estimator selection method based on machine learning. In particular, the core idea we propose in this paper is to create several synthetic OPE tasks and use a machine learning model trained to predict the best estimator for those synthetic tasks. We empirically show how our method is able to generalize to unseen tasks and make a better estimator selection compared to a baseline method on several real-world datasets, with a computational cost significantly lower than the one of the baseline. | [
"['Nicolò Felicioni' 'Michael Benigni' 'Maurizio Ferrari Dacrema']"
] |
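The core idea above is a meta-model trained on synthetic OPE tasks to predict the best estimator for a new task. A sketch of that selection loop follows; the task features, candidate estimator names, and simulated labels are all invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
ESTIMATORS = ["IPS", "DM", "DR"]        # illustrative candidate estimators

# Features summarizing each synthetic task (e.g. size, overlap, variance).
X_tasks = rng.normal(size=(500, 3))
# In a real pipeline the label is the estimator with the lowest measured
# error on the synthetic task; here it is simulated from the features.
y_best = np.where(X_tasks[:, 1] > 0, 0, np.where(X_tasks[:, 2] > 0, 1, 2))

meta_model = RandomForestClassifier(n_estimators=100, random_state=0)
meta_model.fit(X_tasks, y_best)

new_task = rng.normal(size=(1, 3))
print("recommended estimator:", ESTIMATORS[int(meta_model.predict(new_task)[0])])
```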
null | null | 2406.18033 | null | null | http://arxiv.org/pdf/2406.18033v1 | 2024-06-26T03:02:22Z | 2024-06-26T03:02:22Z | Boosting Soft Q-Learning by Bounding | An agent's ability to leverage past experience is critical for efficiently solving new tasks. Prior work has focused on using value function estimates to obtain zero-shot approximations for solutions to a new task. In soft Q-learning, we show how any value function estimate can also be used to derive double-sided bounds on the optimal value function. The derived bounds lead to new approaches for boosting training performance which we validate experimentally. Notably, we find that the proposed framework suggests an alternative method for updating the Q-function, leading to boosted performance. | [
"['Jacob Adamczyk' 'Volodymyr Makarenko' 'Stas Tiomkin' 'Rahul V. Kulkarni']"
] |
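The abstract above derives double-sided bounds on the optimal soft value function from any value estimate and uses them during training. The sketch below clips bootstrapped soft Q-targets into one standard double-sided bound, [TQ + gamma*min(TQ-Q)/(1-gamma), TQ + gamma*max(TQ-Q)/(1-gamma)], which holds for monotone gamma-contractions with the constant-shift property; whether this matches the paper's exact construction is an assumption of the sketch.

```python
import numpy as np

def soft_bellman(Q, R, P, gamma, beta):
    """TQ(s,a) = R(s,a) + gamma * E_{s'}[V(s')], with soft (log-sum-exp) V."""
    V = np.log(np.exp(beta * Q).sum(axis=1)) / beta
    return R + gamma * np.einsum("sat,t->sa", P, V)

def value_bounds(Q, R, P, gamma, beta):
    TQ = soft_bellman(Q, R, P, gamma, beta)
    delta = TQ - Q
    return (TQ + gamma * delta.min() / (1 - gamma),
            TQ + gamma * delta.max() / (1 - gamma))

rng = np.random.default_rng(0)
S, A, gamma, beta = 3, 2, 0.9, 5.0
R = rng.uniform(-1, 1, size=(S, A))
P = rng.dirichlet(np.ones(S), size=(S, A))   # P[s, a] distributes over s'
Q = np.zeros((S, A))

for _ in range(200):
    lower, upper = value_bounds(Q, R, P, gamma, beta)
    # Boosted update: clip the bootstrapped target into the derived bounds.
    Q = np.clip(soft_bellman(Q, R, P, gamma, beta), lower, upper)
```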
null | null | 2406.18035 | null | null | http://arxiv.org/pdf/2406.18035v1 | 2024-06-26T03:08:24Z | 2024-06-26T03:08:24Z | Local Linear Recovery Guarantee of Deep Neural Networks at
Overparameterization | Determining whether deep neural network (DNN) models can reliably recover target functions at overparameterization is a critical yet complex issue in the theory of deep learning. To advance understanding in this area, we introduce a concept we term "local linear recovery" (LLR), a weaker form of target function recovery that renders the problem more amenable to theoretical analysis. In the sense of LLR, we prove that functions expressible by narrower DNNs are guaranteed to be recoverable from fewer samples than model parameters. Specifically, we establish upper limits on the optimistic sample sizes, defined as the smallest sample size necessary to guarantee LLR, for functions in the space of a given DNN. Furthermore, we prove that these upper bounds are achieved in the case of two-layer tanh neural networks. Our research lays a solid groundwork for future investigations into the recovery capabilities of DNNs in overparameterized scenarios. | [
"['Yaoyu Zhang' 'Leyang Zhang' 'Zhongwang Zhang' 'Zhiwei Bai']"
] |
null | null | 2406.18038 | null | null | http://arxiv.org/pdf/2406.18038v1 | 2024-06-26T03:12:07Z | 2024-06-26T03:12:07Z | MT2ST: Adaptive Multi-Task to Single-Task Learning | Conventional training approaches often face challenges in balancing the breadth of multi-task learning (MTL) with the depth of single-task learning (STL). To address this issue, we introduce the Multi-Task to Single-Task (MT2ST) framework, a groundbreaking approach that combines the generalizability of MTL with the precision of STL. Our work includes two strategies: 'Diminish' and 'Switch'. The 'Diminish' strategy gradually reduces the influence of auxiliary tasks, while the 'Switch' strategy shifts from multi-tasking to single-tasking at a specific point in the training process. The MT2ST framework significantly enhances the efficiency and accuracy of word embedding training while concurrently addressing prevalent issues such as overfitting. Our empirical studies demonstrate that MT2ST can reduce training time by 67% when contrasted with single-task learning approaches, and by 13% compared to traditional multi-task learning methods. These findings underscore MT2ST's potential to be a powerful tool for word embedding training acceleration. | [
"['Dong Liu' 'Meng Jiang']"
] |
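The two schedules named above are easy to state concretely. A toy sketch, with the decay rate and switch point as illustrative assumptions:

```python
def aux_weight(epoch, total_epochs, strategy="diminish",
               decay=0.9, switch_epoch=None):
    """Weight on auxiliary-task losses under the two MT2ST-style schedules."""
    if strategy == "diminish":
        return decay ** epoch                   # gradually fades out
    if strategy == "switch":
        cut = switch_epoch if switch_epoch is not None else total_epochs // 2
        return 1.0 if epoch < cut else 0.0      # hard multi-to-single cutover
    raise ValueError(strategy)

# Inside a training loop the combined objective would be:
#   loss = main_task_loss + aux_weight(epoch, E, strategy) * aux_task_loss
for epoch in range(10):
    print(epoch, aux_weight(epoch, 10, "diminish"), aux_weight(epoch, 10, "switch"))
```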
null | null | 2406.18043 | null | null | http://arxiv.org/pdf/2406.18043v1 | 2024-06-26T03:41:48Z | 2024-06-26T03:41:48Z | Multimodal foundation world models for generalist embodied agents | Learning generalist embodied agents, able to solve a multitude of tasks in different domains, is a long-standing problem. Reinforcement learning (RL) is hard to scale up as it requires a complex reward design for each task. In contrast, language can specify tasks in a more natural way. Current foundation vision-language models (VLMs) generally require fine-tuning or other adaptations to be functional, due to the significant domain gap. However, the lack of multimodal data in such domains represents an obstacle toward developing foundation models for embodied applications. In this work, we overcome these problems by presenting multimodal foundation world models, able to connect and align the representation of foundation VLMs with the latent space of generative world models for RL, without any language annotations. The resulting agent learning framework, GenRL, allows one to specify tasks through vision and/or language prompts, ground them in the embodied domain's dynamics, and learn the corresponding behaviors in imagination. As assessed through large-scale multi-task benchmarking, GenRL exhibits strong multi-task generalization performance in several locomotion and manipulation domains. Furthermore, by introducing a data-free RL strategy, it lays the groundwork for foundation model-based RL for generalist embodied agents. | [
"['Pietro Mazzaglia' 'Tim Verbelen' 'Bart Dhoedt' 'Aaron Courville'\n 'Sai Rajeswar']"
] |
null | null | 2406.18053 | null | null | http://arxiv.org/pdf/2406.18053v1 | 2024-06-26T04:05:04Z | 2024-06-26T04:05:04Z | Bidirectional-Reachable Hierarchical Reinforcement Learning with
Mutually Responsive Policies | Hierarchical reinforcement learning (HRL) addresses complex long-horizon tasks by skillfully decomposing them into subgoals. Therefore, the effectiveness of HRL is greatly influenced by subgoal reachability. Typical HRL methods only consider subgoal reachability from the unilateral level, where a dominant level enforces compliance on the subordinate level. However, we observe that when the dominant level becomes trapped in local exploration or generates unattainable subgoals, the subordinate level is negatively affected and cannot follow the dominant level's actions. This can potentially leave both levels stuck in local optima, ultimately hindering subsequent subgoal reachability. Allowing real-time bilateral information sharing and error correction would be a natural cure for this issue, which motivates us to propose a mutual response mechanism. Based on this, we propose the Bidirectional-reachable Hierarchical Policy Optimization (BrHPO), a simple yet effective algorithm that also enjoys computational efficiency. Experimental results on a variety of long-horizon tasks showcase that BrHPO outperforms other state-of-the-art HRL baselines, coupled with a significantly higher exploration efficiency and robustness. | [
"['Yu Luo' 'Fuchun Sun' 'Tianying Ji' 'Xianyuan Zhan']"
] |
null | null | 2406.18060 | null | null | http://arxiv.org/pdf/2406.18060v1 | 2024-06-26T04:33:13Z | 2024-06-26T04:33:13Z | AdaZeta: Adaptive Zeroth-Order Tensor-Train Adaption for
Memory-Efficient Large Language Models Fine-Tuning | Fine-tuning large language models (LLMs) has achieved remarkable performance across various natural language processing tasks, yet it demands more and more memory as model sizes keep growing. To address this issue, the recently proposed Memory-efficient Zeroth-order (MeZO) methods attempt to fine-tune LLMs using only forward passes, thereby avoiding the need for a backpropagation graph. However, significant performance drops and a high risk of divergence have limited their widespread adoption. In this paper, we propose the Adaptive Zeroth-order Tensor-Train Adaption (AdaZeta) framework, specifically designed to improve the performance and convergence of the ZO methods. To enhance dimension-dependent ZO estimation accuracy, we introduce a fast-forward, low-parameter tensorized adapter. To tackle the frequently observed divergence issue in large-scale ZO fine-tuning tasks, we propose an adaptive query number schedule that guarantees convergence. Detailed theoretical analysis and extensive experimental results on Roberta-Large and Llama-2-7B models substantiate the efficacy of our AdaZeta framework in terms of accuracy, memory efficiency, and convergence speed. | [
"['Yifan Yang' 'Kai Zhen' 'Ershad Banijamal' 'Athanasios Mouchtaris'\n 'Zheng Zhang']"
] |
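AdaZeta builds on memory-efficient zeroth-order (MeZO-style) fine-tuning, which needs only forward passes. The sketch below shows that underlying two-point gradient estimator on a toy problem; it does not include AdaZeta's tensorized adapters or adaptive query schedule.

```python
import torch

def zo_step(params, loss_fn, lr=1e-3, eps=1e-3, seed=0):
    """One SPSA-style update: two forward passes along a random direction."""
    g = torch.Generator().manual_seed(seed)
    zs = [torch.randn(p.shape, generator=g) for p in params]
    with torch.no_grad():
        for p, z in zip(params, zs):
            p.add_(eps * z)                     # theta + eps * z
        loss_plus = loss_fn()
        for p, z in zip(params, zs):
            p.sub_(2 * eps * z)                 # theta - eps * z
        loss_minus = loss_fn()
        for p, z in zip(params, zs):
            p.add_(eps * z)                     # restore theta
        proj_grad = (loss_plus - loss_minus) / (2 * eps)
        for p, z in zip(params, zs):
            p.sub_(lr * proj_grad * z)          # descend along z
    return loss_plus

w = torch.zeros(4)                              # fit w ~ 1 without backprop
for step in range(500):
    zo_step([w], lambda: ((w - 1.0) ** 2).sum(), seed=step)
print(w)
```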
null | null | 2406.18062 | null | null | http://arxiv.org/pdf/2406.18062v1 | 2024-06-26T04:49:03Z | 2024-06-26T04:49:03Z | Breaking the Barrier: Enhanced Utility and Robustness in Smoothed DRL
Agents | Robustness remains a paramount concern in deep reinforcement learning (DRL), with randomized smoothing emerging as a key technique for enhancing this attribute. However, a notable gap exists in the performance of current smoothed DRL agents, often characterized by significantly low clean rewards and weak robustness. In response to this challenge, our study introduces innovative algorithms aimed at training effective smoothed robust DRL agents. We propose S-DQN and S-PPO, novel approaches that demonstrate remarkable improvements in clean rewards, empirical robustness, and robustness guarantee across standard RL benchmarks. Notably, our S-DQN and S-PPO agents not only significantly outperform existing smoothed agents by an average factor of 2.16x under the strongest attack, but also surpass previous robustly-trained agents by an average factor of 2.13x. This represents a significant leap forward in the field. Furthermore, we introduce Smoothed Attack, which is 1.89x more effective in decreasing the rewards of smoothed agents than existing adversarial attacks. | [
"['Chung-En Sun' 'Sicun Gao' 'Tsui-Wei Weng']"
] |
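Randomized smoothing, the technique the abstract above builds on, evaluates the agent on noise-perturbed observations and aggregates. Below is a sketch of a smoothed DQN-style action choice; the noise level, sample count, and untrained network are placeholders, and this shows the smoothing mechanism rather than the paper's S-DQN training procedure.

```python
import torch

def smoothed_action(q_net, obs, sigma=0.1, n_samples=64):
    """Act greedily on Q-values averaged over Gaussian observation noise."""
    noise = sigma * torch.randn(n_samples, obs.shape[0])
    q_values = q_net(obs.unsqueeze(0) + noise)   # [n_samples, n_actions]
    return int(q_values.mean(dim=0).argmax())

q_net = torch.nn.Sequential(torch.nn.Linear(4, 32), torch.nn.ReLU(),
                            torch.nn.Linear(32, 2))
obs = torch.randn(4)
print("smoothed greedy action:", smoothed_action(q_net, obs))
```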
null | null | 2406.18066 | null | null | http://arxiv.org/pdf/2406.18066v1 | 2024-06-26T04:51:14Z | 2024-06-26T04:51:14Z | Learning Optimal Filters Using Variational Inference | Filtering, the task of estimating the conditional distribution of states of a dynamical system given partial, noisy observations, is important in many areas of science and engineering, including weather and climate prediction. However, the filtering distribution is generally intractable to obtain for high-dimensional, nonlinear systems. Filters used in practice, such as the ensemble Kalman filter (EnKF), are biased for nonlinear systems and have numerous tuning parameters. Here, we present a framework for learning a parameterized analysis map, the map that takes a forecast distribution and observations to the filtering distribution, using variational inference. We show that this methodology can be used to learn gain matrices for filtering linear and nonlinear dynamical systems, as well as inflation and localization parameters for an EnKF. Future work will apply this framework to learn new filtering algorithms. | [
"['Enoch Luk' 'Eviatar Bach' 'Ricardo Baptista' 'Andrew Stuart']"
] |