categories (string) | doi (string) | id (string) | year (float64) | venue (string) | link (string) | updated (string) | published (string) | title (string) | abstract (string) | authors (sequence)
---|---|---|---|---|---|---|---|---|---|---|
null | null | 2406.14909 | null | null | http://arxiv.org/pdf/2406.14909v1 | 2024-06-21T06:58:37Z | 2024-06-21T06:58:37Z | MoA: Mixture of Sparse Attention for Automatic Large Language Model
Compression | Sparse attention can effectively mitigate the significant memory and throughput demands of Large Language Models (LLMs) in long contexts. Existing methods typically employ a uniform sparse attention mask, applying the same sparse pattern across different attention heads and input lengths. However, this uniform approach fails to capture the diverse attention patterns inherent in LLMs, ignoring their distinct accuracy-latency trade-offs. To address this challenge, we propose the Mixture of Attention (MoA), which automatically tailors distinct sparse attention configurations to different heads and layers. MoA constructs and navigates a search space of various attention patterns and their scaling rules relative to input sequence lengths. It profiles the model, evaluates potential configurations, and pinpoints the optimal sparse attention compression plan. MoA adapts to varying input sizes, revealing that some attention heads expand their focus to accommodate longer sequences, while other heads consistently concentrate on fixed-length local contexts. Experiments show that MoA increases the effective context length by $3.9\times$ with the same average attention span, boosting retrieval accuracy by $1.5-7.1\times$ over the uniform-attention baseline across Vicuna-7B, Vicuna-13B, and Llama3-8B models. Moreover, MoA narrows the capability gaps between sparse and dense models, reducing the maximum relative performance drop from $9\%-36\%$ to within $5\%$ across two long-context understanding benchmarks. MoA achieves a $1.2-1.4\times$ GPU memory reduction and boosts decode throughput by $5.5-6.7\times$ for 7B and 13B dense models on a single GPU, with minimal impact on performance. | [
'Tianyu Fu', 'Haofeng Huang', 'Xuefei Ning', 'Genghan Zhang', 'Boju Chen', 'Tianqi Wu', 'Hongyi Wang', 'Zixiao Huang', 'Shiyao Li', 'Shengen Yan', 'Guohao Dai', 'Huazhong Yang', 'Yu Wang'
] |
null | null | 2406.14910 | null | null | http://arxiv.org/pdf/2406.14910v1 | 2024-06-21T07:01:23Z | 2024-06-21T07:01:23Z | Towards Dynamic Resource Allocation and Client Scheduling in
Hierarchical Federated Learning: A Two-Phase Deep Reinforcement Learning
Approach | Federated learning (FL) is a viable technique to train a shared machine learning model without sharing data. The hierarchical FL (HFL) system has yet to be studied regarding its multiple levels of energy, computation, communication, and client scheduling, especially when it comes to clients relying on energy harvesting to power their operations. This paper presents a new two-phase deep deterministic policy gradient (DDPG) framework, referred to as ``TP-DDPG'', to balance, online, the learning delay and model accuracy of an FL process in an energy harvesting-powered HFL system. The key idea is that we divide optimization decisions into two groups, and employ DDPG to learn one group in the first phase, while interpreting the other group as part of the environment to provide rewards for training the DDPG in the second phase. Specifically, the DDPG learns the selection of participating clients, their CPU configurations, and their transmission powers. A new straggler-aware client association and bandwidth allocation (SCABA) algorithm efficiently optimizes the other decisions and evaluates the reward for the DDPG. Experiments demonstrate that with a substantially reduced number of learnable parameters, the TP-DDPG can quickly converge to effective policies that can shorten the training time of HFL by 39.4% compared to its benchmarks, when the required test accuracy of HFL is 0.9. | [
'Xiaojing Chen', 'Zhenyuan Li', 'Wei Ni', 'Xin Wang', 'Shunqing Zhang', 'Yanzan Sun', 'Shugong Xu', 'Qingqi Pei'
] |
null | null | 2406.14916 | null | null | http://arxiv.org/pdf/2406.14916v1 | 2024-06-21T07:20:34Z | 2024-06-21T07:20:34Z | Demonstrating the Efficacy of Kolmogorov-Arnold Networks in Vision Tasks | In the realm of deep learning, the Kolmogorov-Arnold Network (KAN) has emerged as a potential alternative to multilayer perceptrons (MLPs). However, its applicability to vision tasks has not been extensively validated. In our study, we demonstrated the effectiveness of KAN for vision tasks through multiple trials on the MNIST, CIFAR10, and CIFAR100 datasets, using a training batch size of 32. Our results showed that while KAN outperformed the original MLP-Mixer on CIFAR10 and CIFAR100, it performed slightly worse than the state-of-the-art ResNet-18. These findings suggest that KAN holds significant promise for vision tasks, and further modifications could enhance its performance in future evaluations. Our contributions are threefold: first, we showcase the efficiency of KAN-based algorithms for visual tasks; second, we provide extensive empirical assessments across various vision benchmarks, comparing KAN's performance with MLP-Mixer, CNNs, and Vision Transformers (ViT); and third, we pioneer the use of natural KAN layers in visual tasks, addressing a gap in previous research. This paper lays the foundation for future studies on KANs, highlighting their potential as a reliable alternative for image classification tasks. | [
'Minjong Cheon'
] |
null | null | 2406.14917 | null | null | http://arxiv.org/pdf/2406.14917v1 | 2024-06-21T07:20:51Z | 2024-06-21T07:20:51Z | LLM2FEA: Discover Novel Designs with Generative Evolutionary
Multitasking | The rapid research and development of generative artificial intelligence has enabled the generation of high-quality images, text, and 3D models from text prompts. This advancement impels an inquiry into whether these models can be leveraged to create digital artifacts for both creative and engineering applications. Drawing on innovative designs from other domains may be one answer to this question, much like the historical practice of ``bionics'', where humans have sought inspiration from nature's exemplary designs. This raises the intriguing possibility of using generative models to simultaneously tackle design tasks across multiple domains, facilitating cross-domain learning and resulting in a series of innovative design solutions. In this paper, we propose LLM2FEA as the first attempt to discover novel designs in generative models by transferring knowledge across multiple domains. By utilizing a multi-factorial evolutionary algorithm (MFEA) to drive a large language model, LLM2FEA integrates knowledge from various fields to generate prompts that guide the generative model in discovering novel and practical objects. Experimental results in the context of 3D aerodynamic design verify the discovery capabilities of the proposed LLM2FEA. The designs generated by LLM2FEA not only satisfy practicality requirements to a certain degree but also feature novel and aesthetically pleasing shapes, demonstrating the potential applications of LLM2FEA in discovery tasks. | [
'Melvin Wong', 'Jiao Liu', 'Thiago Rios', 'Stefan Menzel', 'Yew Soon Ong'
] |
null | null | 2406.14929 | null | null | http://arxiv.org/pdf/2406.14929v1 | 2024-06-21T07:37:28Z | 2024-06-21T07:37:28Z | Efficient Graph Similarity Computation with Alignment Regularization | We consider the graph similarity computation (GSC) task based on graph edit distance (GED) estimation. State-of-the-art methods treat GSC as a learning-based prediction task using Graph Neural Networks (GNNs). To capture fine-grained interactions between pair-wise graphs, these methods mostly contain a node-level matching module in the end-to-end learning pipeline, which causes high computational costs in both the training and inference stages. We show that the expensive node-to-node matching module is not necessary for GSC, and high-quality learning can be attained with a simple yet powerful regularization technique, which we call the Alignment Regularization (AReg). In the training stage, the AReg term imposes a node-graph correspondence constraint on the GNN encoder. In the inference stage, the graph-level representations learned by the GNN encoder are directly used to compute the similarity score without using AReg again to speed up inference. We further propose a multi-scale GED discriminator to enhance the expressive ability of the learned representations. Extensive experiments on real-world datasets demonstrate the effectiveness, efficiency and transferability of our approach. | [
'Wei Zhuo', 'Guang Tan'
] |
null | null | 2406.14936 | null | null | http://arxiv.org/pdf/2406.14936v1 | 2024-06-21T07:45:28Z | 2024-06-21T07:45:28Z | On the growth of the parameters of approximating ReLU neural networks | This work focuses on the analysis of fully connected feedforward ReLU neural networks as they approximate a given, smooth function. In contrast to conventionally studied universal approximation properties under increasing architectures, e.g., in terms of width or depth of the networks, we are concerned with the asymptotic growth of the parameters of approximating networks. Such results are of interest, e.g., for error analysis or consistency results for neural network training. The main result of our work is that, for a ReLU architecture with state-of-the-art approximation error, the realizing parameters grow at most polynomially. The obtained rate with respect to a normalized network size is compared to existing results and is shown to be superior in most cases, in particular for high dimensional input. | [
'Erion Morina', 'Martin Holler'
] |
null | null | 2406.14951 | null | null | http://arxiv.org/pdf/2406.14951v1 | 2024-06-21T08:03:25Z | 2024-06-21T08:03:25Z | An Idiosyncrasy of Time-discretization in Reinforcement Learning | Many reinforcement learning algorithms are built on an assumption that an agent interacts with an environment over fixed-duration, discrete time steps. However, physical systems are continuous in time, requiring a choice of time-discretization granularity when digitally controlling them. Furthermore, such systems do not wait for decisions to be made before advancing the environment state, necessitating the study of how the choice of discretization may affect a reinforcement learning algorithm. In this work, we consider the relationship between the definitions of the continuous-time and discrete-time returns. Specifically, we acknowledge an idiosyncrasy with naively applying a discrete-time algorithm to a discretized continuous-time environment, and note how a simple modification can better align the return definitions. This observation is of practical consideration when dealing with environments where time-discretization granularity is a choice, or situations where such granularity is inherently stochastic. | [
'Kris De Asis', 'Richard S. Sutton'
] |
null | null | 2406.14953 | null | null | http://arxiv.org/pdf/2406.14953v2 | 2024-07-02T11:22:36Z | 2024-06-21T08:04:12Z | Deep Imbalanced Regression to Estimate Vascular Age from PPG Data: a
Novel Digital Biomarker for Cardiovascular Health | Photoplethysmography (PPG) is emerging as a crucial tool for monitoring human hemodynamics, with recent studies highlighting its potential in assessing vascular aging through deep learning. However, real-world age distributions are often imbalanced, posing significant challenges for deep learning models. In this paper, we introduce a novel, simple, and effective loss function named the Dist Loss to address deep imbalanced regression tasks. We trained a one-dimensional convolutional neural network (Net1D) incorporating the Dist Loss on the extensive UK Biobank dataset (n=502,389) to estimate vascular age from PPG signals and validate its efficacy in characterizing cardiovascular health. The model's performance was validated on a 40% held-out test set, achieving state-of-the-art results, especially in regions with small sample sizes. Furthermore, we divided the population into three subgroups based on the difference between predicted vascular age and chronological age: less than -10 years, between -10 and 10 years, and greater than 10 years. We analyzed the relationship between predicted vascular age and several cardiovascular events over a follow-up period of up to 10 years, including death, coronary heart disease, and heart failure. Our results indicate that the predicted vascular age has significant potential to reflect an individual's cardiovascular health status. Our code will be available at https://github.com/Ngk03/AI-vascular-age. | [
'Guangkun Nie', 'Qinghao Zhao', 'Gongzheng Tang', 'Jun Li', 'Shenda Hong'
] |
null | null | 2406.14956 | null | null | http://arxiv.org/pdf/2406.14956v1 | 2024-06-21T08:10:03Z | 2024-06-21T08:10:03Z | Unlocking the Global Synergies in Low-Rank Adapters | Low-rank Adaptation (LoRA) has been the de-facto parameter-efficient fine-tuning technique for large language models. We present HeteroLoRA, a light-weight search algorithm that leverages zero-cost proxies to allocate the limited LoRA trainable parameters across the model for better fine-tuned performance. In addition to the allocation for the standard LoRA-adapted models, we also demonstrate the efficacy of HeteroLoRA by performing the allocation in a more challenging search space that includes LoRA modules and LoRA-adapted shortcut connections. Experiments show that HeteroLoRA enables improvements in model performance given the same parameter budget. For example, on MRPC, we see an improvement of 1.6% in accuracy with a similar training parameter budget. We will open-source our algorithm once the paper is accepted. | [
'Zixi Zhang', 'Cheng Zhang', 'Xitong Gao', 'Robert D. Mullins', 'George A. Constantinides', 'Yiren Zhao'
] |
null | null | 2406.14963 | null | null | http://arxiv.org/pdf/2406.14963v1 | 2024-06-21T08:20:06Z | 2024-06-21T08:20:06Z | Optimised Grouped-Query Attention Mechanism for Transformers | Grouped-query attention (GQA) has been widely adopted in LLMs to mitigate the complexity of multi-head attention (MHA). To transform an MHA to a GQA, neighbour queries in MHA are evenly split into groups where each group shares the value and key layers. In this work, we propose AsymGQA, an activation-informed approach to asymmetrically grouping an MHA to a GQA for better model performance. Our AsymGQA outperforms the GQA within the same model size budget. For example, AsymGQA LLaMA-2-7B has an accuracy increase of 7.5% on MMLU compared to neighbour grouping. Our approach addresses the GQA's trade-off problem between model performance and hardware efficiency. | [
'Yuang Chen', 'Cheng Zhang', 'Xitong Gao', 'Robert D. Mullins', 'George A. Constantinides', 'Yiren Zhao'
] |
null | null | 2406.14969 | null | null | http://arxiv.org/pdf/2406.14969v2 | 2024-07-01T09:08:44Z | 2024-06-21T08:28:54Z | Uni-Mol2: Exploring Molecular Pretraining Model at Scale | In recent years, pretraining models have made significant advancements in the fields of natural language processing (NLP), computer vision (CV), and life sciences. The significant advancements in NLP and CV are predominantly driven by the expansion of model parameters and data size, a phenomenon now recognized as the scaling laws. However, the scaling law in molecular pretraining models remains unexplored. In this work, we present Uni-Mol2, an innovative molecular pretraining model that leverages a two-track transformer to effectively integrate features at the atomic level, graph level, and geometry structure level. Along with this, we systematically investigate the scaling law within molecular pretraining models, characterizing the power-law correlations between validation loss and model size, dataset size, and computational resources. Consequently, we successfully scale Uni-Mol2 to 1.1 billion parameters through pretraining on 800 million conformations, making it the largest molecular pretraining model to date. Extensive experiments show consistent improvement in the downstream tasks as the model size grows. The Uni-Mol2 with 1.1B parameters also outperforms existing methods, achieving an average 27% improvement on the QM9 dataset and 14% on the COMPAS-1D dataset. | [
'Xiaohong Ji', 'Zhen Wang', 'Zhifeng Gao', 'Hang Zheng', 'Linfeng Zhang', 'Guolin Ke', 'Weinan E'
] |
null | null | 2406.14971 | null | null | http://arxiv.org/pdf/2406.14971v1 | 2024-06-21T08:29:31Z | 2024-06-21T08:29:31Z | Domain Adaptation of Llama3-70B-Instruct through Continual Pre-Training
and Model Merging: A Comprehensive Evaluation | We conducted extensive experiments on domain adaptation of the Meta-Llama-3-70B-Instruct model on SEC data, exploring its performance on both general and domain-specific benchmarks. Our focus included continual pre-training (CPT) and model merging, aiming to enhance the model's domain-specific capabilities while mitigating catastrophic forgetting. Through this study, we evaluated the impact of integrating financial regulatory data into a robust language model and examined the effectiveness of our model merging techniques in preserving and improving the model's instructive abilities. The model is accessible on Hugging Face at https://huggingface.co/arcee-ai/Llama-3-SEC-Base. This is an intermediate checkpoint of our final model, which has seen 20B tokens so far. The full model is still in the process of training. This is a preprint technical report with thorough evaluations to understand the entire process. | [
'Shamane Siriwardhana', 'Mark McQuade', 'Thomas Gauthier', 'Lucas Atkins', 'Fernando Fernandes Neto', 'Luke Meyers', 'Anneketh Vij', 'Tyler Odenthal', 'Charles Goddard', 'Mary MacCarthy', 'Jacob Solawetz'
] |
null | null | 2406.14983 | null | null | http://arxiv.org/pdf/2406.14983v1 | 2024-06-21T08:48:57Z | 2024-06-21T08:48:57Z | Hierarchical thematic classification of major conference proceedings | In this paper, we develop a decision support system for hierarchical text classification. We consider text collections with a fixed hierarchical structure of topics given by experts in the form of a tree. The system sorts the topics by relevance to a given document. The experts choose one of the most relevant topics to finish the classification. We propose a weighted hierarchical similarity function to calculate topic relevance. The function calculates the similarity of a document and a tree branch. The weights in this function determine word importance. We use the entropy of words to estimate the weights. The proposed hierarchical similarity function formulates a joint hierarchical thematic classification probability model of the document topics, parameters, and hyperparameters. The variational Bayesian inference gives a closed-form EM algorithm. The EM algorithm estimates the parameters and calculates the probability of a topic for a given document. Compared to hierarchical multiclass SVM, hierarchical PLSA with adaptive regularization, and hierarchical naive Bayes, the weighted hierarchical similarity function yields better ranking accuracy on an abstract collection of the major conference EURO and on a website collection of industrial companies. | [
'Arsentii Kuzmin', 'Alexander Aduenko', 'Vadim Strijov'
] |
null | null | 2406.14990 | null | null | http://arxiv.org/pdf/2406.14990v1 | 2024-06-21T09:03:37Z | 2024-06-21T09:03:37Z | Learning Variable Compliance Control From a Few Demonstrations for
Bimanual Robot with Haptic Feedback Teleoperation System | Automating dexterous, contact-rich manipulation tasks using rigid robots is a significant challenge in robotics. Rigid robots, defined by their actuation through position commands, face issues of excessive contact forces due to their inability to adapt to contact with the environment, potentially causing damage. While compliance control schemes have been introduced to mitigate these issues by controlling forces via external sensors, they are hampered by the need for fine-tuning task-specific controller parameters. Learning from Demonstrations (LfD) offers an intuitive alternative, allowing robots to learn manipulations through observed actions. In this work, we introduce a novel system to enhance the teaching of dexterous, contact-rich manipulations to rigid robots. Our system is twofold: firstly, it incorporates a teleoperation interface utilizing Virtual Reality (VR) controllers, designed to provide an intuitive and cost-effective method for task demonstration with haptic feedback. Secondly, we present Comp-ACT (Compliance Control via Action Chunking with Transformers), a method that leverages the demonstrations to learn variable compliance control from a few demonstrations. Our methods have been validated across various complex contact-rich manipulation tasks using single-arm and bimanual robot setups in simulated and real-world environments, demonstrating the effectiveness of our system in teaching robots dexterous manipulations with enhanced adaptability and safety. | [
'Tatsuya Kamijo', 'Cristian C. Beltran-Hernandez', 'Masashi Hamaya'
] |
null | null | 2406.14995 | null | null | http://arxiv.org/pdf/2406.14995v1 | 2024-06-21T09:14:11Z | 2024-06-21T09:14:11Z | Probabilistic and Differentiable Wireless Simulation with Geometric
Transformers | Modelling the propagation of electromagnetic signals is critical for designing modern communication systems. While there are precise simulators based on ray tracing, they do not lend themselves to solving inverse problems or the integration in an automated design loop. We propose to address these challenges through differentiable neural surrogates that exploit the geometric aspects of the problem. We first introduce the Wireless Geometric Algebra Transformer (Wi-GATr), a generic backbone architecture for simulating wireless propagation in a 3D environment. It uses versatile representations based on geometric algebra and is equivariant with respect to E(3), the symmetry group of the underlying physics. Second, we study two algorithmic approaches to signal prediction and inverse problems based on differentiable predictive modelling and diffusion models. We show how these let us predict received power, localize receivers, and reconstruct the 3D environment from the received signal. Finally, we introduce two large, geometry-focused datasets of wireless signal propagation in indoor scenes. In experiments, we show that our geometry-forward approach achieves higher-fidelity predictions with less data than various baselines. | [
'Thomas Hehn', 'Markus Peschl', 'Tribhuvanesh Orekondy', 'Arash Behboodi', 'Johann Brehmer'
] |
null | null | 2406.15004 | null | null | http://arxiv.org/pdf/2406.15004v1 | 2024-06-21T09:32:09Z | 2024-06-21T09:32:09Z | Dislocation cartography: Representations and unsupervised classification
of dislocation networks with unique fingerprints | Detecting structure in data is the first step to arrive at meaningful representations for systems. This is particularly challenging for dislocation networks evolving as a consequence of plastic deformation of crystalline systems. Our study employs Isomap, a manifold learning technique, to unveil the intrinsic structure of high-dimensional density field data of dislocation structures from different compression axes. The resulting maps provide a systematic framework for quantitatively comparing dislocation structures, offering unique fingerprints based on density fields. Our novel, unbiased approach contributes to the quantitative classification of dislocation structures and can be systematically extended. | [
'Benjamin Udofia', 'Tushar Jogi', 'Markus Stricker'
] |
null | null | 2406.15025 | null | null | http://arxiv.org/pdf/2406.15025v1 | 2024-06-21T10:03:14Z | 2024-06-21T10:03:14Z | SiT: Symmetry-Invariant Transformers for Generalisation in Reinforcement
Learning | An open challenge in reinforcement learning (RL) is the effective deployment of a trained policy to new or slightly different situations as well as semantically-similar environments. We introduce the Symmetry-Invariant Transformer (SiT), a scalable vision transformer (ViT) that leverages both local and global data patterns in a self-supervised manner to improve generalisation. Central to our approach is Graph Symmetric Attention, which refines the traditional self-attention mechanism to preserve graph symmetries, resulting in invariant and equivariant latent representations. We showcase SiT's superior generalisation over ViTs on MiniGrid and Procgen RL benchmarks, and its sample efficiency on Atari 100k and CIFAR10. | [
'Matthias Weissenbacher', 'Rishabh Agarwal', 'Yoshinobu Kawahara'
] |
null | null | 2406.15027 | null | null | http://arxiv.org/pdf/2406.15027v1 | 2024-06-21T10:09:42Z | 2024-06-21T10:09:42Z | Using Neural Networks for Data Cleaning in Weather Datasets | In climate science, we often want to compare across different datasets. Difficulties can arise in doing this due to inevitable mismatches that arise between observational and reanalysis data, or even between different reanalyses. This misalignment can raise problems for any work that seeks to make inferences about one dataset from another. We considered tropical cyclone location as an example task with one dataset providing atmospheric conditions (ERA5) and another providing storm tracks (IBTrACS). We found that while the examples often aligned well, there were a considerable proportion (around 25%) which were not well aligned. We trained a neural network to map from the wind field to the storm location; in this setting misalignment in the datasets appears as "label noise" (i.e. the labelled storm location does not correspond to the underlying wind field). We found that this neural network trained only on the often noisy labels from IBTrACS had a denoising effect, and performed better than the IBTrACS labels themselves, as measured by human preferences. Remarkably, this even held true for training points, on which we might have expected the network to overfit to the IBTrACS predictions. | [
'Jack R. P. Hanslope', 'Laurence Aitchison'
] |
null | null | 2406.15038 | null | null | http://arxiv.org/abs/2406.15038v1 | 2024-06-21T10:35:46Z | 2024-06-21T10:35:46Z | Online detection and infographic explanation of spam reviews with data
drift adaptation | Spam reviews are a pervasive problem on online platforms due to their significant impact on reputation. However, research into spam detection in data streams is scarce. A further concern is the need for transparency. Consequently, this paper addresses those problems by proposing an online solution for identifying and explaining spam reviews, incorporating data drift adaptation. It integrates (i) incremental profiling, (ii) data drift detection & adaptation, and (iii) identification of spam reviews employing Machine Learning. The explainable mechanism displays a visual and textual prediction explanation in a dashboard. The best results obtained reached up to 87% spam F-measure. | [
'Francisco de Arriba-Pérez', 'Silvia García-Méndez', 'Fátima Leal', 'Benedita Malheiro', 'J. C. Burguillo'
] |
null | null | 2406.15042 | null | null | http://arxiv.org/pdf/2406.15042v1 | 2024-06-21T10:45:43Z | 2024-06-21T10:45:43Z | Behaviour Distillation | Dataset distillation aims to condense large datasets into a small number of synthetic examples that can be used as drop-in replacements when training new models. It has applications to interpretability, neural architecture search, privacy, and continual learning. Despite strong successes in supervised domains, such methods have not yet been extended to reinforcement learning, where the lack of a fixed dataset renders most distillation methods unusable. Filling the gap, we formalize behaviour distillation, a setting that aims to discover and then condense the information required for training an expert policy into a synthetic dataset of state-action pairs, without access to expert data. We then introduce Hallucinating Datasets with Evolution Strategies (HaDES), a method for behaviour distillation that can discover datasets of just four state-action pairs which, under supervised learning, train agents to competitive performance levels in continuous control tasks. We show that these datasets generalize out of distribution to training policies with a wide range of architectures and hyperparameters. We also demonstrate application to a downstream task, namely training multi-task agents in a zero-shot fashion. Beyond behaviour distillation, HaDES provides significant improvements in neuroevolution for RL over previous approaches and achieves SoTA results on one standard supervised dataset distillation task. Finally, we show that visualizing the synthetic datasets can provide human-interpretable task insights. | [
'Andrei Lupu', 'Chris Lu', 'Jarek Liesen', 'Robert Tjarko Lange', 'Jakob Foerster'
] |
null | null | 2406.15043 | null | null | http://arxiv.org/pdf/2406.15043v1 | 2024-06-21T10:47:06Z | 2024-06-21T10:47:06Z | Discovering Common Information in Multi-view Data | We introduce an innovative and mathematically rigorous definition for computing common information from multi-view data, drawing inspiration from Gács-Körner common information in information theory. Leveraging this definition, we develop a novel supervised multi-view learning framework to capture both common and unique information. By explicitly minimizing a total correlation term, the extracted common information and the unique information from each view are forced to be independent of each other, which, in turn, theoretically guarantees the effectiveness of our framework. To estimate information-theoretic quantities, our framework employs the matrix-based Rényi's $\alpha$-order entropy functional, which forgoes the need for variational approximation and distributional estimation in high-dimensional space. Theoretical proof is provided that our framework can faithfully discover both common and unique information from multi-view data. Experiments on synthetic and seven benchmark real-world datasets demonstrate the superior performance of our proposed framework over state-of-the-art approaches. | [
'Qi Zhang', 'Mingfei Lu', 'Shujian Yu', 'Jingmin Xin', 'Badong Chen'
] |
null | null | 2406.15044 | null | null | http://arxiv.org/pdf/2406.15044v1 | 2024-06-21T10:47:26Z | 2024-06-21T10:47:26Z | From Overfitting to Robustness: Quantity, Quality, and Variety Oriented
Negative Sample Selection in Graph Contrastive Learning | Graph contrastive learning (GCL) aims to contrast positive-negative counterparts to learn the node embeddings, whereas graph data augmentation methods are employed to generate these positive-negative samples. The variation, quantity, and quality of negative samples compared to positive samples play crucial roles in learning meaningful embeddings for node classification downstream tasks. Less variation, excessive quantity, and low-quality negative samples cause the model to be overfitted for particular nodes, resulting in less robust models. To solve the overfitting problem in the GCL paradigm, this study proposes a novel Cumulative Sample Selection (CSS) algorithm by comprehensively considering negative samples' quality, variation, and quantity. Initially, three negative sample pools are constructed: easy, medium, and hard negative samples, which contain 25%, 50%, and 25% of the total available negative samples, respectively. Then, 10% of the negative samples are selected from each of these three pools for training the model. After that, a decision agent module evaluates the model training results and decides whether to explore more negative samples from the three pools by increasing the sampling ratio or to keep exploiting the current ratio. The proposed algorithm is integrated into a proposed graph contrastive learning framework named NegAmplify. NegAmplify is compared with SOTA methods on nine graph node classification datasets, achieving better node classification accuracy on seven of them, with up to a 2.86% improvement. | [
"['Adnan Ali' 'Jinlong Li' 'Huanhuan Chen' 'Ali Kashif Bashir']"
] |
null | null | 2406.15050 | null | null | http://arxiv.org/pdf/2406.15050v1 | 2024-06-21T10:50:55Z | 2024-06-21T10:50:55Z | Tri-VQA: Triangular Reasoning Medical Visual Question Answering for
Multi-Attribute Analysis | The intersection of medical Visual Question Answering (Med-VQA) is a challenging research topic with advantages including patient engagement and clinical expert involvement for second opinions. However, existing Med-VQA methods based on joint embedding fail to explain whether their provided results are based on correct reasoning or coincidental answers, which undermines the credibility of VQA answers. In this paper, we investigate the construction of a more cohesive and stable Med-VQA structure. Motivated by causal effect, we propose a novel Triangular Reasoning VQA (Tri-VQA) framework, which constructs reverse causal questions from the perspective of "Why this answer?" to elucidate the source of the answer and stimulate more reasonable forward reasoning processes. We evaluate our method on the Endoscopic Ultrasound (EUS) multi-attribute annotated dataset from five centers, and test it on medical VQA datasets. Experimental results demonstrate the superiority of our approach over existing methods. Our codes and pre-trained models are available at https://anonymous.4open.science/r/Tri_VQA. | [
"['Lin Fan' 'Xun Gong' 'Cenyang Zheng' 'Yafei Ou']"
] |
null | null | 2406.15057 | null | null | http://arxiv.org/pdf/2406.15057v1 | 2024-06-21T11:11:46Z | 2024-06-21T11:11:46Z | Latent Space Translation via Inverse Relative Projection | The emergence of similar representations between independently trained neural models has sparked significant interest in the representation learning community, leading to the development of various methods to obtain communication between latent spaces. "Latent space communication" can be achieved in two ways: i) by independently mapping the original spaces to a shared or relative one; ii) by directly estimating a transformation from a source latent space to a target one. In this work, we combine the two into a novel method to obtain latent space translation through the relative space. By formalizing the invertibility of angle-preserving relative representations and assuming the scale invariance of decoder modules in neural models, we can effectively use the relative space as an intermediary, independently projecting onto and from other semantically similar spaces. Extensive experiments over various architectures and datasets validate our scale invariance assumption and demonstrate the high accuracy of our method in latent space translation. We also apply our method to zero-shot stitching between arbitrary pre-trained text and image encoders and their classifiers, even across modalities. Our method has significant potential for facilitating the reuse of models in a practical manner via compositionality. | [
"['Valentino Maiorca' 'Luca Moschella' 'Marco Fumero' 'Francesco Locatello'\n 'Emanuele Rodolà']"
] |
null | null | 2406.15070 | null | null | http://arxiv.org/pdf/2406.15070v2 | 2024-06-24T07:39:38Z | 2024-06-21T11:40:01Z | Tempora-Fusion: Time-Lock Puzzle with Efficient Verifiable Homomorphic
Linear Combination | To securely transmit sensitive information into the future, Time-Lock Puzzles (TLPs) have been developed. Their applications include scheduled payments, timed commitments, e-voting, and sealed-bid auctions. Homomorphic TLP is a key variant of TLP that enables computation on puzzles from different clients. This allows a solver/server to tackle only a single puzzle encoding the computation's result. However, existing homomorphic TLPs lack support for verifying the correctness of the computation results. We address this limitation by introducing Tempora-Fusion, a TLP that allows a server to perform homomorphic linear combinations of puzzles from different clients while ensuring verification of computation correctness. This scheme avoids asymmetric-key cryptography for verification, thus paving the way for efficient implementations. We discuss our scheme's application in various domains, such as federated learning, scheduled payments in online banking, and e-voting. | [
"['Aydin Abadi']"
] |
null | null | 2406.15076 | null | null | http://arxiv.org/pdf/2406.15076v1 | 2024-06-21T11:42:55Z | 2024-06-21T11:42:55Z | Neural Incremental Data Assimilation | Data assimilation is a central problem in many geophysical applications, such as weather forecasting. It aims to estimate the state of a potentially large system, such as the atmosphere, from sparse observations, supplemented by prior physical knowledge. The size of the systems involved and the complexity of the underlying physical equations make it a challenging task from a computational point of view. Neural networks represent a promising method of emulating the physics at low cost, and therefore have the potential to considerably improve and accelerate data assimilation. In this work, we introduce a deep learning approach where the physical system is modeled as a sequence of coarse-to-fine Gaussian prior distributions parametrized by a neural network. This allows us to define an assimilation operator, which is trained in an end-to-end fashion to minimize the reconstruction error on a dataset with different observation processes. We illustrate our approach on chaotic dynamical physical systems with sparse observations, and compare it to traditional variational data assimilation methods. | [
"['Matthieu Blanke' 'Ronan Fablet' 'Marc Lelarge']"
] |
null | null | 2406.15079 | null | null | http://arxiv.org/pdf/2406.15079v1 | 2024-06-21T11:55:20Z | 2024-06-21T11:55:20Z | GOAL: A Generalist Combinatorial Optimization Agent Learner | Machine Learning-based heuristics have recently shown impressive performance in solving a variety of hard combinatorial optimization problems (COPs). However they generally rely on a separate neural model, specialized and trained for each single problem. Any variation of a problem requires adjustment of its model and re-training from scratch. In this paper, we propose GOAL (for Generalist combinatorial Optimization Agent Learning), a generalist model capable of efficiently solving multiple COPs and which can be fine-tuned to solve new COPs. GOAL consists of a single backbone plus light-weight problem-specific adapters, mostly for input and output processing. The backbone is based on a new form of mixed-attention blocks which allows to handle problems defined on graphs with arbitrary combinations of node, edge and instance-level features. Additionally, problems which involve heterogeneous nodes or edges, such as in multi-partite graphs, are handled through a novel multi-type transformer architecture, where the attention blocks are duplicated to attend only the relevant combination of types while relying on the same shared parameters. We train GOAL on a set of routing, scheduling and classic graph problems and show that it is only slightly inferior to the specialized baselines while being the first multi-task model that solves a variety of COPs. Finally, we showcase the strong transfer learning capacity of GOAL by fine-tuning or learning the adapters for new problems, with only few shots and little data. | [
"['Darko Drakulic' 'Sofia Michel' 'Jean-Marc Andreoli']"
] |
null | null | 2406.15096 | null | null | http://arxiv.org/pdf/2406.15096v1 | 2024-06-21T12:24:36Z | 2024-06-21T12:24:36Z | Towards General Negotiation Strategies with End-to-End Reinforcement
Learning | The research field of automated negotiation has a long history of designing agents that can negotiate with other agents. Such negotiation strategies are traditionally based on manual design and heuristics. More recently, reinforcement learning approaches have also been used to train agents to negotiate. However, negotiation problems are diverse, causing observation and action dimensions to change, which cannot be handled by default linear policy networks. Previous work on this topic has circumvented this issue either by fixing the negotiation problem, causing policies to be non-transferable between negotiation problems or by abstracting the observations and actions into fixed-size representations, causing loss of information and expressiveness due to feature design. We developed an end-to-end reinforcement learning method for diverse negotiation problems by representing observations and actions as a graph and applying graph neural networks in the policy. With empirical evaluations, we show that our method is effective and that we can learn to negotiate with other agents on never-before-seen negotiation problems. Our result opens up new opportunities for reinforcement learning in negotiation agents. | [
"['Bram M. Renting' 'Thomas M. Moerland' 'Holger H. Hoos'\n 'Catholijn M. Jonker']"
] |
null | null | 2406.15098 | null | null | http://arxiv.org/pdf/2406.15098v1 | 2024-06-21T12:26:48Z | 2024-06-21T12:26:48Z | How Intermodal Interaction Affects the Performance of Deep Multimodal
Fusion for Mixed-Type Time Series | Mixed-type time series (MTTS) is a bimodal data type that is common in many domains, such as healthcare, finance, environmental monitoring, and social media. It consists of regularly sampled continuous time series and irregularly sampled categorical event sequences. The integration of both modalities through multimodal fusion is a promising approach for processing MTTS. However, the question of how to effectively fuse both modalities remains open. In this paper, we present a comprehensive evaluation of several deep multimodal fusion approaches for MTTS forecasting. Our comparison includes three fusion types (early, intermediate, and late) and five fusion methods (concatenation, weighted mean, weighted mean with correlation, gating, and feature sharing). We evaluate these fusion approaches on three distinct datasets, one of which was generated using a novel framework. This framework allows for the control of key data properties, such as the strength and direction of intermodal interactions, modality imbalance, and the degree of randomness in each modality, providing a more controlled environment for testing fusion approaches. Our findings show that the performance of different fusion approaches can be substantially influenced by the direction and strength of intermodal interactions. The study reveals that early and intermediate fusion approaches excel at capturing fine-grained and coarse-grained cross-modal features, respectively. These findings underscore the crucial role of intermodal interactions in determining the most effective fusion strategy for MTTS forecasting. | [
"['Simon Dietz' 'Thomas Altstidl' 'Dario Zanca' 'Björn Eskofier'\n 'An Nguyen']"
] |
null | null | 2406.15102 | null | null | http://arxiv.org/pdf/2406.15102v1 | 2024-06-21T12:41:41Z | 2024-06-21T12:41:41Z | HLQ: Fast and Efficient Backpropagation via Hadamard Low-rank
Quantization | With the rapid increase in model size and the growing importance of various fine-tuning applications, lightweight training has become crucial. Since the backward pass is twice as expensive as the forward pass, optimizing backpropagation is particularly important. However, modifications to this process can lead to suboptimal convergence, so training optimization should minimize perturbations, which is a highly challenging task. In this study, we introduce a novel optimization strategy called Hadamard Low-rank Quantization (HLQ), focusing on reducing the cost of backpropagation in convolutional and linear layers. We first analyze the sensitivity of gradient computation with respect to activation and weight, and judiciously design the HLQ pipeline to apply 4-bit Hadamard quantization to the activation gradient and Hadamard low-rank approximation to the weight gradient. This combination was found to be the best for maximizing benefits, and our extensive experiments demonstrate the outstanding performance of HLQ in both training from scratch and fine-tuning, achieving significant memory savings and acceleration on real GPUs with negligible quality degradation. | [
"['Seonggon Kim' 'Eunhyeok Park']"
] |
null | null | 2406.15109 | null | null | http://arxiv.org/pdf/2406.15109v1 | 2024-06-21T12:54:03Z | 2024-06-21T12:54:03Z | Brain-Like Language Processing via a Shallow Untrained Multihead
Attention Network | Large Language Models (LLMs) have been shown to be effective models of the human language system, with some models predicting most explainable variance of brain activity in current datasets. Even in untrained models, the representations induced by architectural priors can exhibit reasonable alignment to brain data. In this work, we investigate the key architectural components driving the surprising alignment of untrained models. To estimate LLM-to-brain similarity, we first select language-selective units within an LLM, similar to how neuroscientists identify the language network in the human brain. We then benchmark the brain alignment of these LLM units across five different brain recording datasets. By isolating critical components of the Transformer architecture, we identify tokenization strategy and multihead attention as the two major components driving brain alignment. A simple form of recurrence further improves alignment. We further demonstrate this quantitative brain alignment of our model by reproducing landmark studies in the language neuroscience field, showing that localized model units -- just like language voxels measured empirically in the human brain -- discriminate more reliably between lexical than syntactic differences, and exhibit similar response profiles under the same experimental conditions. Finally, we demonstrate the utility of our model's representations for language modeling, achieving improved sample and parameter efficiency over comparable architectures. Our model's estimates of surprisal set a new state-of-the-art in behavioral alignment to human reading times. Taken together, we propose a highly brain- and behaviorally-aligned model that conceptualizes the human language system as an untrained shallow feature encoder, with structural priors, combined with a trained decoder to achieve efficient and performant language processing. | [
"['Badr AlKhamissi' 'Greta Tuckute' 'Antoine Bosselut' 'Martin Schrimpf']"
] |
null | null | 2406.15124 | null | null | http://arxiv.org/pdf/2406.15124v1 | 2024-06-21T13:17:33Z | 2024-06-21T13:17:33Z | A Provably Efficient Option-Based Algorithm for both High-Level and
Low-Level Learning | Hierarchical Reinforcement Learning (HRL) approaches have shown successful results in solving a large variety of complex, structured, long-horizon problems. Nevertheless, a full theoretical understanding of this empirical evidence is currently missing. In the context of the \emph{option} framework, prior research has devised efficient algorithms for scenarios where options are fixed, and the high-level policy selecting among options only has to be learned. However, the fully realistic scenario in which both the high-level and the low-level policies are learned is surprisingly disregarded from a theoretical perspective. This work makes a step towards the understanding of this latter scenario. Focusing on the finite-horizon problem, we present a meta-algorithm alternating between regret minimization algorithms instanced at different (high and low) temporal abstractions. At the higher level, we treat the problem as a Semi-Markov Decision Process (SMDP), with fixed low-level policies, while at a lower level, inner option policies are learned with a fixed high-level policy. The bounds derived are compared with the lower bound for non-hierarchical finite-horizon problems, allowing us to characterize when a hierarchical approach is provably preferable, even without pre-trained options. | [
"['Gianluca Drappo' 'Alberto Maria Metelli' 'Marcello Restelli']"
] |
null | null | 2406.15125 | null | null | http://arxiv.org/abs/2406.15125v1 | 2024-06-21T13:19:29Z | 2024-06-21T13:19:29Z | Embracing Federated Learning: Enabling Weak Client Participation via
Partial Model Training | In Federated Learning (FL), clients may have weak devices that cannot train the full model or even hold it in their memory space. Thus, to implement large-scale FL applications, it is crucial to develop a distributed learning method that enables the participation of such weak clients. We propose EmbracingFL, a general FL framework that allows all available clients to join the distributed training regardless of their system resource capacity. The framework is built upon a novel form of partial model training method in which each client trains as many consecutive output-side layers as its system resources allow. Our study demonstrates that EmbracingFL encourages each layer to have similar data representations across clients, improving FL efficiency. The proposed partial model training method guarantees convergence to a neighbor of stationary points for non-convex and smooth problems. We evaluate the efficacy of EmbracingFL under a variety of settings with a mixed number of strong, moderate (~40% memory), and weak (~15% memory) clients, datasets (CIFAR-10, FEMNIST, and IMDB), and models (ResNet20, CNN, and LSTM). Our empirical study shows that EmbracingFL consistently achieves high accuracy as if all clients were strong, outperforming the state-of-the-art width reduction methods (i.e., HeteroFL and FjORD). | [
"['Sunwoo Lee' 'Tuo Zhang' 'Saurav Prakash' 'Yue Niu' 'Salman Avestimehr']"
] |
null | null | 2406.15131 | null | null | http://arxiv.org/pdf/2406.15131v1 | 2024-06-21T13:27:36Z | 2024-06-21T13:27:36Z | KalMamba: Towards Efficient Probabilistic State Space Models for RL
under Uncertainty | Probabilistic State Space Models (SSMs) are essential for Reinforcement Learning (RL) from high-dimensional, partial information as they provide concise representations for control. Yet, they lack the computational efficiency of their recent deterministic counterparts such as S4 or Mamba. We propose KalMamba, an efficient architecture to learn representations for RL that combines the strengths of probabilistic SSMs with the scalability of deterministic SSMs. KalMamba leverages Mamba to learn the dynamics parameters of a linear Gaussian SSM in a latent space. Inference in this latent space amounts to standard Kalman filtering and smoothing. We realize these operations using parallel associative scanning, similar to Mamba, to obtain a principled, highly efficient, and scalable probabilistic SSM. Our experiments show that KalMamba competes with state-of-the-art SSM approaches in RL while significantly improving computational efficiency, especially on longer interaction sequences. | [
"['Philipp Becker' 'Niklas Freymuth' 'Gerhard Neumann']"
] |
null | null | 2406.15132 | null | null | http://arxiv.org/pdf/2406.15132v1 | 2024-06-20T03:14:56Z | 2024-06-20T03:14:56Z | Younger: The First Dataset for Artificial Intelligence-Generated Neural
Network Architecture | Designing and optimizing neural network architectures typically requires extensive expertise, starting with handcrafted designs and then manual or automated refinement. This dependency presents a significant barrier to rapid innovation. Recognizing the complexity of automatically generating neural network architecture from scratch, we introduce Younger, a pioneering dataset to advance this ambitious goal. Derived from over 174K real-world models across more than 30 tasks from various public model hubs, Younger includes 7,629 unique architectures, and each is represented as a directed acyclic graph with detailed operator-level information. The dataset facilitates two primary design paradigms: global, for creating complete architectures from scratch, and local, for detailed architecture component refinement. By establishing these capabilities, Younger contributes to a new frontier, Artificial Intelligence-Generated Neural Network Architecture (AIGNNA). Our experiments explore the potential and effectiveness of Younger for automated architecture generation and, as a secondary benefit, demonstrate that Younger can serve as a benchmark dataset, advancing the development of graph neural networks. We release the dataset and code publicly to lower the entry barriers and encourage further research in this challenging area. | [
"['Zhengxin Yang' 'Wanling Gao' 'Luzhou Peng' 'Yunyou Huang' 'Fei Tang'\n 'Jianfeng Zhan']"
] |
null | null | 2406.15152 | null | null | http://arxiv.org/pdf/2406.15152v1 | 2024-06-21T13:55:34Z | 2024-06-21T13:55:34Z | Generative Topological Networks | Generative models have seen significant advancements in recent years, yet often remain challenging and costly to train and use. We introduce Generative Topological Networks (GTNs) -- a new class of generative models that addresses these shortcomings. GTNs are trained deterministically using a simple supervised learning approach grounded in topology theory. GTNs are fast to train, and require only a single forward pass in a standard feedforward neural network to generate samples. We demonstrate the strengths of GTNs in several datasets, including MNIST, celebA and the Hands and Palm Images dataset. Finally, the theory behind GTNs offers insights into how to train generative models for improved performance. | [
"['Alona Levy-Jurgenson' 'Zohar Yakhini']"
] |
null | null | 2406.15156 | null | null | http://arxiv.org/pdf/2406.15156v1 | 2024-06-21T14:01:23Z | 2024-06-21T14:01:23Z | Perks and Pitfalls of Faithfulness in Regular, Self-Explainable and
Domain Invariant GNNs | As Graph Neural Networks (GNNs) become more pervasive, it becomes paramount to build robust tools for computing explanations of their predictions. A key desideratum is that these explanations are faithful, i.e., that they portray an accurate picture of the GNN's reasoning process. A number of different faithfulness metrics exist, begging the question of what faithfulness is exactly, and what its properties are. We begin by showing that existing metrics are not interchangeable -- i.e., explanations attaining high faithfulness according to one metric may be unfaithful according to others -- and can be systematically insensitive to important properties of the explanation, and suggest how to address these issues. We proceed to show that, surprisingly, optimizing for faithfulness is not always a sensible design goal. Specifically, we show that for injective regular GNN architectures, perfectly faithful explanations are completely uninformative. The situation is different for modular GNNs, such as self-explainable and domain-invariant architectures, where optimizing faithfulness does not compromise informativeness, and is also unexpectedly tied to out-of-distribution generalization. | [
"['Steve Azzolin' 'Antonio Longa' 'Stefano Teso' 'Andrea Passerini']"
] |
null | null | 2406.15189 | null | null | http://arxiv.org/pdf/2406.15189v1 | 2024-06-21T14:31:45Z | 2024-06-21T14:31:45Z | Causal Learning in Biomedical Applications | We present a benchmark for methods in causal learning. Specifically, we consider training a rich class of causal models from time-series data, and we suggest the use of the Krebs cycle and models of metabolism more broadly. | [
"['Petr Ryšavý' 'Xiaoyu He' 'Jakub Mareček']"
] |
null | null | 2406.15213 | null | null | http://arxiv.org/pdf/2406.15213v1 | 2024-06-21T14:53:19Z | 2024-06-21T14:53:19Z | Injecting Bias in Text-To-Image Models via Composite-Trigger Backdoors | Recent advances in large text-conditional image generative models such as Stable Diffusion, Midjourney, and DALL-E 3 have revolutionized the field of image generation, allowing users to produce high-quality, realistic images from textual prompts. While these developments have enhanced artistic creation and visual communication, they also present an underexplored attack opportunity: the possibility of inducing biases by an adversary into the generated images for malicious intentions, e.g., to influence society and spread propaganda. In this paper, we demonstrate the possibility of such a bias injection threat by an adversary who backdoors such models with a small number of malicious data samples; the implemented backdoor is activated when special triggers exist in the input prompt of the backdoored models. On the other hand, the model's utility is preserved in the absence of the triggers, making the attack highly undetectable. We present a novel framework that enables efficient generation of poisoning samples with composite (multi-word) triggers for such an attack. Our extensive experiments using over 1 million generated images and against hundreds of fine-tuned models demonstrate the feasibility of the presented backdoor attack. We illustrate how these biases can bypass conventional detection mechanisms, highlighting the challenges in proving the existence of biases within operational constraints. Our cost analysis confirms the low financial barrier to executing such attacks, underscoring the need for robust defensive strategies against such vulnerabilities in text-to-image generation models. | [
"['Ali Naseh' 'Jaechul Roh' 'Eugene Bagdasaryan' 'Amir Houmansadr']"
] |
null | null | 2406.15229 | null | null | http://arxiv.org/pdf/2406.15229v1 | 2024-06-21T15:15:38Z | 2024-06-21T15:15:38Z | ExDAG: Exact learning of DAGs | There has been a growing interest in causal learning in recent years. Commonly used representations of causal structures, including Bayesian networks and structural equation models (SEM), take the form of directed acyclic graphs (DAGs). We provide a novel mixed-integer quadratic programming formulation and associated algorithm that identifies DAGs on up to 50 vertices, where these are identifiable. We call this method ExDAG, which stands for Exact learning of DAGs. Although there is a superexponential number of constraints that prevent the formation of cycles, the algorithm adds constraints violated by solutions found, rather than imposing all constraints in each continuous-valued relaxation. Our empirical results show that ExDAG outperforms local state-of-the-art solvers in terms of precision and outperforms state-of-the-art global solvers with respect to scaling, when considering Gaussian noise. We also provide validation with respect to other noise distributions. | [
"['Pavel Rytíř' 'Aleš Wodecki' 'Jakub Mareček']"
] |
null | null | 2406.15231 | null | null | http://arxiv.org/pdf/2406.15231v1 | 2024-06-21T15:19:21Z | 2024-06-21T15:19:21Z | Detecting Synthetic Lyrics with Few-Shot Inference | In recent years, generated content in music has gained significant popularity, with large language models being effectively utilized to produce human-like lyrics in various styles, themes, and linguistic structures. This technological advancement supports artists in their creative processes but also raises issues of authorship infringement, consumer satisfaction and content spamming. To address these challenges, methods for detecting generated lyrics are necessary. However, existing works have not yet focused on this specific modality or on creative text in general regarding machine-generated content detection methods and datasets. In response, we have curated the first dataset of high-quality synthetic lyrics and conducted a comprehensive quantitative evaluation of various few-shot content detection approaches, testing their generalization capabilities and complementing this with a human evaluation. Our best few-shot detector, based on LLM2Vec, surpasses stylistic and statistical methods, which are shown competitive in other domains at distinguishing human-written from machine-generated content. It also shows good generalization capabilities to new artists and models, and effectively detects post-generation paraphrasing. This study emphasizes the need for further research on creative content detection, particularly in terms of generalization and scalability with larger song catalogs. All datasets, pre-processing scripts, and code are available publicly on GitHub and Hugging Face under the Apache 2.0 license. | [
"['Yanis Labrak' 'Gabriel Meseguer-Brocal' 'Elena V. Epure']"
] |
null | null | 2406.15244 | null | null | http://arxiv.org/pdf/2406.15244v1 | 2024-06-21T15:29:31Z | 2024-06-21T15:29:31Z | Large Batch Analysis for Adagrad Under Anisotropic Smoothness | Adaptive gradient algorithms have been widely adopted in training large-scale deep neural networks, especially large foundation models. Despite their huge success in practice, their theoretical advantages over stochastic gradient descent (SGD) have not been fully understood, especially in the large batch-size setting commonly used in practice. This is because the only theoretical result that can demonstrate the benefit of Adagrad over SGD was obtained in the original paper of Adagrad for nonsmooth objective functions. However, for nonsmooth objective functions, there can be a linear slowdown of convergence when batch size increases, and thus a convergence analysis based on nonsmooth assumption cannot be used for large batch algorithms. In this work, we resolve this gap between theory and practice by providing a new analysis of Adagrad on both convex and nonconvex smooth objectives suitable for the large batch setting. It is shown that under the anisotropic smoothness and noise conditions, increased batch size does not slow down convergence for Adagrad, and thus it can still achieve a faster convergence guarantee over SGD even in the large batch setting. We present detailed comparisons between SGD and Adagrad to provide a better understanding of the benefits of adaptive gradient methods. Experiments in logistic regression and instruction following fine-tuning tasks provide strong evidence to support our theoretical analysis. | [
"['Yuxing Liu' 'Rui Pan' 'Tong Zhang']"
] |
null | null | 2406.15245 | null | null | http://arxiv.org/pdf/2406.15245v1 | 2024-06-21T15:35:49Z | 2024-06-21T15:35:49Z | Unsupervised Morphological Tree Tokenizer | As a cornerstone in language modeling, tokenization involves segmenting text inputs into pre-defined atomic units. Conventional statistical tokenizers often disrupt constituent boundaries within words, thereby corrupting semantic information. To address this drawback, we introduce morphological structure guidance to tokenization and propose a deep model to induce character-level structures of words. Specifically, the deep model jointly encodes internal structures and representations of words with a mechanism named $\textit{MorphOverriding}$ to ensure the indecomposability of morphemes. By training the model with self-supervised objectives, our method is capable of inducing character-level structures that align with morphological rules without annotated training data. Based on the induced structures, our algorithm tokenizes words through vocabulary matching in a top-down manner. Empirical results indicate that the proposed method effectively retains complete morphemes and outperforms widely adopted methods such as BPE and WordPiece on both morphological segmentation tasks and language modeling tasks. The code will be released later. | [
"['Qingyang Zhu' 'Xiang Hu' 'Pengyu Ji' 'Wei Wu' 'Kewei Tu']"
] |
null | null | 2406.15249 | null | null | http://arxiv.org/pdf/2406.15249v1 | 2024-06-20T03:48:15Z | 2024-06-20T03:48:15Z | Machine Learning Techniques in Automatic Music Transcription: A
Systematic Survey | In the domain of Music Information Retrieval (MIR), Automatic Music Transcription (AMT) emerges as a central challenge, aiming to convert audio signals into symbolic notations like musical notes or sheet music. This systematic review accentuates the pivotal role of AMT in music signal analysis, emphasizing its importance due to the intricate and overlapping spectral structure of musical harmonies. Through a thorough examination of existing machine learning techniques utilized in AMT, we explore the progress and constraints of current models and methodologies. Despite notable advancements, AMT systems have yet to match the accuracy of human experts, largely due to the complexities of musical harmonies and the need for nuanced interpretation. This review critically evaluates both fully automatic and semi-automatic AMT systems, emphasizing the importance of minimal user intervention and examining various methodologies proposed to date. By addressing the limitations of prior techniques and suggesting avenues for improvement, our objective is to steer future research towards fully automated AMT systems capable of accurately and efficiently translating intricate audio signals into precise symbolic representations. This study not only synthesizes the latest advancements but also lays out a road-map for overcoming existing challenges in AMT, providing valuable insights for researchers aiming to narrow the gap between current systems and human-level transcription accuracy. | [
"['Fatemeh Jamshidi' 'Gary Pike' 'Amit Das' 'Richard Chapman']"
] |
null | null | 2406.15250 | null | null | http://arxiv.org/pdf/2406.15250v1 | 2024-06-21T15:43:02Z | 2024-06-21T15:43:02Z | Open Problem: Order Optimal Regret Bounds for Kernel-Based Reinforcement
Learning | Reinforcement Learning (RL) has shown great empirical success in various application domains. The theoretical aspects of the problem have been extensively studied over past decades, particularly under tabular and linear Markov Decision Process structures. Recently, non-linear function approximation using kernel-based prediction has gained traction. This approach is particularly interesting as it naturally extends the linear structure, and helps explain the behavior of neural-network-based models at their infinite width limit. The analytical results however do not adequately address the performance guarantees for this case. We will highlight this open problem, overview existing partial results, and discuss related challenges. | [
"['Sattar Vakili']"
] |
null | null | 2406.15283 | null | null | http://arxiv.org/pdf/2406.15283v2 | 2024-06-24T15:24:15Z | 2024-06-21T16:27:17Z | FT-AED: Benchmark Dataset for Early Freeway Traffic Anomalous Event
Detection | Early and accurate detection of anomalous events on the freeway, such as accidents, can improve emergency response and clearance. However, existing delays and errors in event identification and reporting make it a difficult problem to solve. Current large-scale freeway traffic datasets are not designed for anomaly detection and ignore these challenges. In this paper, we introduce the first large-scale lane-level freeway traffic dataset for anomaly detection. Our dataset consists of a month of weekday radar detection sensor data collected in 4 lanes along an 18-mile stretch of Interstate 24 heading toward Nashville, TN, comprising over 3.7 million sensor measurements. We also collect official crash reports from the Nashville Traffic Management Center and manually label all other potential anomalies in the dataset. To show the potential for our dataset to be used in future machine learning and traffic research, we benchmark numerous deep learning anomaly detection models on our dataset. We find that unsupervised graph neural network autoencoders are a promising solution for this problem and that ignoring spatial relationships leads to decreased performance. We demonstrate that our methods can reduce reporting delays by over 10 minutes on average while detecting 75% of crashes. Our dataset and all preprocessing code needed to get started are publicly released at https://vu.edu/ft-aed/ to facilitate future research. | [
"['Austin Coursey' 'Junyi Ji' 'Marcos Quinones-Grueiro' 'William Barbour'\n 'Yuhang Zhang' 'Tyler Derr' 'Gautam Biswas' 'Daniel B. Work']"
] |
null | null | 2406.15291 | null | null | http://arxiv.org/pdf/2406.15291v1 | 2024-06-21T16:35:27Z | 2024-06-21T16:35:27Z | Pessimistic asynchronous sampling in high-cost Bayesian optimization | Asynchronous Bayesian optimization is a recently implemented technique that allows for parallel operation of experimental systems and disjointed workflows. Contrasting with serial Bayesian optimization which individually selects experiments one at a time after conducting a measurement for each experiment, asynchronous policies sequentially assign multiple experiments before measurements can be taken and evaluate new measurements continuously as they are made available. This technique allows for faster data generation and therefore faster optimization of an experimental space. This work extends the capabilities of asynchronous optimization methods beyond prior studies by evaluating four additional policies that incorporate pessimistic predictions in the training data set. Combined with a conventional greedy policy, the five total policies were evaluated in a simulated environment and benchmarked with serial sampling. Under some conditions and parameter space dimensionalities, the pessimistic asynchronous policy reached optimum experimental conditions in significantly fewer experiments than equivalent serial policies and proved to be less susceptible to convergence onto local optima at higher dimensions. Without accounting for the faster sampling rate, the pessimistic asynchronous algorithm presented in this work could result in more efficient algorithm driven optimization of high-cost experimental spaces. Accounting for sampling rate, the presented asynchronous algorithm could allow for faster optimization in experimental spaces where multiple experiments can be run before results are collected. | [
"['Amanda A. Volk' 'Kristofer G. Reyes' 'Jeffrey G. Ethier'\n 'Luke A. Baldwin']"
] |
null | null | 2406.15299 | null | null | http://arxiv.org/pdf/2406.15299v1 | 2024-06-21T16:41:02Z | 2024-06-21T16:41:02Z | Learning Spatio-Temporal Patterns of Polar Ice Layers With
Physics-Informed Graph Neural Network | Learning spatio-temporal patterns of polar ice layers is crucial for monitoring the change in ice sheet balance and evaluating ice dynamic processes. While a few researchers focus on learning ice layer patterns from echogram images captured by airborne snow radar sensors via different convolutional neural networks, the noise in the echogram images proves to be a major obstacle. Instead, we focus on geometric deep learning based on graph neural networks to learn the spatio-temporal patterns from thickness information of shallow ice layers and make predictions for deep layers. In this paper, we propose a physics-informed hybrid graph neural network that combines the GraphSAGE framework for graph feature learning with the long short-term memory (LSTM) structure for learning temporal changes, and introduce measurements of physical ice properties from Model Atmospheric Regional (MAR) weather model as physical node features. We found that our proposed network can consistently outperform the current non-inductive or non-physical model in predicting deep ice layer thickness. | [
"['Zesheng Liu' 'Maryam Rahnemoonfar']"
] |
null | null | 2406.15306 | null | null | http://arxiv.org/pdf/2406.15306v1 | 2024-06-13T08:32:24Z | 2024-06-13T08:32:24Z | Advanced Multimodal Deep Learning Architecture for Image-Text Matching | Image-text matching is a key multimodal task that aims to model the semantic association between images and text as a matching relationship. With the advent of the multimedia information age, image, and text data show explosive growth, and how to accurately realize the efficient and accurate semantic correspondence between them has become the core issue of common concern in academia and industry. In this study, we delve into the limitations of current multimodal deep learning models in processing image-text pairing tasks. Therefore, we innovatively design an advanced multimodal deep learning architecture, which combines the high-level abstract representation ability of deep neural networks for visual information with the advantages of natural language processing models for text semantic understanding. By introducing a novel cross-modal attention mechanism and hierarchical feature fusion strategy, the model achieves deep fusion and two-way interaction between image and text feature space. In addition, we also optimize the training objectives and loss functions to ensure that the model can better map the potential association structure between images and text during the learning process. Experiments show that compared with existing image-text matching models, the optimized new model has significantly improved performance on a series of benchmark data sets. In addition, the new model also shows excellent generalization and robustness on large and diverse open scenario datasets and can maintain high matching performance even in the face of previously unseen complex situations. | [
"['Jinyin Wang' 'Haijing Zhang' 'Yihao Zhong' 'Yingbin Liang' 'Rongwei Ji'\n 'Yiru Cang']"
] |
null | null | 2406.15327 | null | null | http://arxiv.org/pdf/2406.15327v1 | 2024-06-21T17:40:46Z | 2024-06-21T17:40:46Z | Fine-grained Attention in Hierarchical Transformers for Tabular
Time-series | Tabular data is ubiquitous in many real-life systems. In particular, time-dependent tabular data, where rows are chronologically related, is typically used for recording historical events, e.g., financial transactions, healthcare records, or stock history. Recently, hierarchical variants of the attention mechanism of transformer architectures have been used to model tabular time-series data. At first, rows (or columns) are encoded separately by computing attention between their fields. Subsequently, encoded rows (or columns) are attended to one another to model the entire tabular time-series. While efficient, this approach constrains the attention granularity and limits its ability to learn patterns at the field-level across separate rows, or columns. We take a first step to address this gap by proposing Fieldy, a fine-grained hierarchical model that contextualizes fields at both the row and column levels. We compare our proposal against state of the art models on regression and classification tasks using public tabular time-series datasets. Our results show that combining row-wise and column-wise attention improves performance without increasing model size. Code and data are available at https://github.com/raphaaal/fieldy. | [
"['Raphael Azorin' 'Zied Ben Houidi' 'Massimo Gallo' 'Alessandro Finamore'\n 'Pietro Michiardi']"
] |
null | null | 2406.15331 | null | null | http://arxiv.org/pdf/2406.15331v1 | 2024-06-21T17:45:37Z | 2024-06-21T17:45:37Z | Masked Extended Attention for Zero-Shot Virtual Try-On In The Wild | Virtual Try-On (VTON) is a highly active line of research, with increasing demand. It aims to replace a piece of garment in an image with one from another, while preserving person and garment characteristics as well as image fidelity. Current literature takes a supervised approach for the task, impairing generalization and imposing heavy computation. In this paper, we present a novel zero-shot training-free method for inpainting a clothing garment by reference. Our approach employs the prior of a diffusion model with no additional training, fully leveraging its native generalization capabilities. The method employs extended attention to transfer image information from reference to target images, overcoming two significant challenges. We first warp the reference garment over the target human using deep features, alleviating "texture sticking". We then leverage the extended attention mechanism with careful masking, eliminating leakage of reference background and unwanted influence. Through a user study, qualitative, and quantitative comparison to state-of-the-art approaches, we demonstrate superior image quality and garment preservation, even for unseen clothing pieces or human figures. | [
"['Nadav Orzech' 'Yotam Nitzan' 'Ulysse Mizrahi' 'Dov Danon'\n 'Amit H. Bermano']"
] |
null | null | 2406.15334 | null | null | http://arxiv.org/pdf/2406.15334v1 | 2024-06-21T17:50:02Z | 2024-06-21T17:50:02Z | Multimodal Task Vectors Enable Many-Shot Multimodal In-Context Learning | The recent success of interleaved Large Multimodal Models (LMMs) in few-shot learning suggests that in-context learning (ICL) with many examples can be promising for learning new tasks. However, this many-shot multimodal ICL setting has one crucial problem: it is fundamentally limited by the model's context length set at pretraining. The problem is especially prominent in the multimodal domain, which processes both text and images, requiring additional tokens. This motivates the need for a multimodal method to compress many shots into fewer tokens without finetuning. In this work, we enable LMMs to perform multimodal, many-shot in-context learning by leveraging Multimodal Task Vectors (MTV)--compact implicit representations of in-context examples compressed in the model's attention heads. Specifically, we first demonstrate the existence of such MTV in LMMs and then leverage these extracted MTV to enable many-shot in-context learning for various vision-and-language tasks. Our experiments suggest that MTV can scale in performance with the number of compressed shots and generalize to similar out-of-domain tasks without additional context length for inference. | [
"['Brandon Huang' 'Chancharik Mitra' 'Assaf Arbelle' 'Leonid Karlinsky'\n 'Trevor Darrell' 'Roei Herzig']"
] |
null | null | 2406.15341 | null | null | http://arxiv.org/pdf/2406.15341v1 | 2024-06-21T17:55:24Z | 2024-06-21T17:55:24Z | GenoTEX: A Benchmark for Evaluating LLM-Based Exploration of Gene
Expression Data in Alignment with Bioinformaticians | Recent advancements in machine learning have significantly improved the identification of disease-associated genes from gene expression datasets. However, these processes often require extensive expertise and manual effort, limiting their scalability. Large Language Model (LLM)-based agents have shown promise in automating these tasks due to their increasing problem-solving abilities. To support the evaluation and development of such methods, we introduce GenoTEX, a benchmark dataset for the automatic exploration of gene expression data, involving the tasks of dataset selection, preprocessing, and statistical analysis. GenoTEX provides annotated code and results for solving a wide range of gene identification problems, in a full analysis pipeline that follows the standard of computational genomics. These annotations are curated by human bioinformaticians who carefully analyze the datasets to ensure accuracy and reliability. To provide baselines for these tasks, we present GenoAgents, a team of LLM-based agents designed with context-aware planning, iterative correction, and domain expert consultation to collaboratively explore gene datasets. Our experiments with GenoAgents demonstrate the potential of LLM-based approaches in genomics data analysis, while error analysis highlights the challenges and areas for future improvement. We propose GenoTEX as a promising resource for benchmarking and enhancing AI-driven methods for genomics data analysis. We make our benchmark publicly available at \url{https://github.com/Liu-Hy/GenoTex}. | [
"['Haoyang Liu' 'Haohan Wang']"
] |
null | null | 2406.15346 | null | null | http://arxiv.org/pdf/2406.15346v1 | 2024-06-21T17:57:39Z | 2024-06-21T17:57:39Z | Privacy Preserved Blood Glucose Level Cross-Prediction: An Asynchronous
Decentralized Federated Learning Approach | Newly diagnosed Type 1 Diabetes (T1D) patients often struggle to obtain effective Blood Glucose (BG) prediction models due to the lack of sufficient BG data from Continuous Glucose Monitoring (CGM), presenting a significant "cold start" problem in patient care. Utilizing population models to address this challenge is a potential solution, but collecting patient data for training population models in a privacy-conscious manner is challenging, especially given that such data is often stored on personal devices. Considering the privacy protection and addressing the "cold start" problem in diabetes care, we propose "GluADFL", blood Glucose prediction by Asynchronous Decentralized Federated Learning. We compared GluADFL with eight baseline methods using four distinct T1D datasets, comprising 298 participants, which demonstrated its superior performance in accurately predicting BG levels for cross-patient analysis. Furthermore, patients' data might be stored and shared across various communication networks in GluADFL, ranging from highly interconnected (e.g., random, performs the best among others) to more structured topologies (e.g., cluster and ring), suitable for various social networks. The asynchronous training framework supports flexible participation. By adjusting the ratios of inactive participants, we found it remains stable if less than 70% are inactive. Our results confirm that GluADFL offers a practical, privacy-preserving solution for BG prediction in T1D, significantly enhancing the quality of diabetes management. | [
"['Chengzhe Piao' 'Taiyu Zhu' 'Yu Wang' 'Stephanie E Baldeweg'\n 'Paul Taylor' 'Pantelis Georgiou' 'Jiahao Sun' 'Jun Wang' 'Kezhi Li']"
] |
null | null | 2406.15349 | null | null | http://arxiv.org/pdf/2406.15349v1 | 2024-06-21T17:59:02Z | 2024-06-21T17:59:02Z | NAVSIM: Data-Driven Non-Reactive Autonomous Vehicle Simulation and
Benchmarking | Benchmarking vision-based driving policies is challenging. On one hand, open-loop evaluation with real data is easy, but these results do not reflect closed-loop performance. On the other, closed-loop evaluation is possible in simulation, but is hard to scale due to its significant computational demands. Further, the simulators available today exhibit a large domain gap to real data. This has resulted in an inability to draw clear conclusions from the rapidly growing body of research on end-to-end autonomous driving. In this paper, we present NAVSIM, a middle ground between these evaluation paradigms, where we use large datasets in combination with a non-reactive simulator to enable large-scale real-world benchmarking. Specifically, we gather simulation-based metrics, such as progress and time to collision, by unrolling bird's eye view abstractions of the test scenes for a short simulation horizon. Our simulation is non-reactive, i.e., the evaluated policy and environment do not influence each other. As we demonstrate empirically, this decoupling allows open-loop metric computation while being better aligned with closed-loop evaluations than traditional displacement errors. NAVSIM enabled a new competition held at CVPR 2024, where 143 teams submitted 463 entries, resulting in several new insights. On a large set of challenging scenarios, we observe that simple methods with moderate compute requirements such as TransFuser can match recent large-scale end-to-end driving architectures such as UniAD. Our modular framework can potentially be extended with new datasets, data curation strategies, and metrics, and will be continually maintained to host future challenges. Our code is available at https://github.com/autonomousvision/navsim. | [
"['Daniel Dauner' 'Marcel Hallgarten' 'Tianyu Li' 'Xinshuo Weng'\n 'Zhiyu Huang' 'Zetong Yang' 'Hongyang Li' 'Igor Gilitschenski'\n 'Boris Ivanovic' 'Marco Pavone' 'Andreas Geiger' 'Kashyap Chitta']"
] |
null | null | 2406.15377 | null | null | http://arxiv.org/pdf/2406.15377v1 | 2024-04-17T12:21:06Z | 2024-04-17T12:21:06Z | Model Callers for Transforming Predictive and Generative AI Applications | We introduce a novel software abstraction termed "model caller," acting as an intermediary for AI and ML model calling, advocating its transformative utility beyond existing model-serving frameworks. This abstraction offers multiple advantages: enhanced accuracy and reduced latency in model predictions, superior monitoring and observability of models, more streamlined AI system architectures, simplified AI development and management processes, and improved collaboration and accountability across AI/ML/Data Science, software, data, and operations teams. Model callers are valuable for both creators and users of models within both predictive and generative AI applications. Additionally, we have developed and released a prototype Python library for model callers, accessible for installation via pip or for download from GitHub. | [
"['Mukesh Dalal']"
] |
null | null | 2406.15396 | null | null | http://arxiv.org/pdf/2406.15396v1 | 2024-04-30T16:45:51Z | 2024-04-30T16:45:51Z | Feature Purified Transformer With Cross-level Feature Guiding Decoder
For Multi-class OOD and Anomaly Detection | Reconstruction networks are prevalently used in unsupervised anomaly and Out-of-Distribution (OOD) detection due to their independence from labeled anomaly data. However, in multi-class datasets, the effectiveness of anomaly detection is often compromised by the models' generalized reconstruction capabilities, which allow anomalies to blend within the expanded boundaries of normality resulting from the added categories, thereby reducing detection accuracy. We introduce the FUTUREG framework, which incorporates two innovative modules: the Feature Purification Module (FPM) and the CFG Decoder. The FPM constrains the normality boundary within the latent space to effectively filter out anomalous features, while the CFG Decoder uses layer-wise encoder representations to guide the reconstruction of filtered features, preserving fine-grained details. Together, these modules enhance the reconstruction error for anomalies, ensuring high-quality reconstructions for normal samples. Our results demonstrate that FUTUREG achieves state-of-the-art performance in multi-class OOD settings and remains competitive in industrial anomaly detection scenarios. | [
"['Jerry Chun-Wei Lin' 'Pi-Wei Chen' 'Chao-Chun Chen']"
] |
null | null | 2406.15451 | null | null | http://arxiv.org/pdf/2406.15451v1 | 2024-06-06T19:54:34Z | 2024-06-06T19:54:34Z | Deep Vision-Based Framework for Coastal Flood Prediction Under Climate
Change Impacts and Shoreline Adaptations | In light of growing threats posed by climate change in general and sea level rise (SLR) in particular, the necessity for computationally efficient means to estimate and analyze potential coastal flood hazards has become increasingly pressing. Data-driven supervised learning methods serve as promising candidates that can dramatically expedite the process, thereby eliminating the computational bottleneck associated with traditional physics-based hydrodynamic simulators. Yet, the development of accurate and reliable coastal flood prediction models, especially those based on Deep Learning (DL) techniques, has been plagued with two major issues: (1) the scarcity of training data and (2) the high-dimensional output required for detailed inundation mapping. To remove this barrier, we present a systematic framework for training high-fidelity Deep Vision-based coastal flood prediction models in low-data settings. We test the proposed workflow on different existing vision models, including a fully transformer-based architecture and a Convolutional Neural Network (CNN) with additive attention gates. Additionally, we introduce a deep CNN architecture tailored specifically to the coastal flood prediction problem at hand. The model was designed with a particular focus on its compactness so as to cater to resource-constrained scenarios and accessibility aspects. The performance of the developed DL models is validated against commonly adopted geostatistical regression methods and traditional Machine Learning (ML) approaches, demonstrating substantial improvement in prediction quality. Lastly, we round up the contributions by providing a meticulously curated dataset of synthetic flood inundation maps of Abu Dhabi's coast produced with a physics-based hydrodynamic simulator, which can serve as a benchmark for evaluating future coastal flood prediction models. | [
"['Areg Karapetyan' 'Aaron Chung Hin Chow' 'Samer Madanat']"
] |
null | null | 2406.15459 | null | null | http://arxiv.org/pdf/2406.15459v1 | 2024-06-11T03:36:00Z | 2024-06-11T03:36:00Z | Large-Scale Contextual Market Equilibrium Computation through Deep
Learning | Market equilibrium is one of the most fundamental solution concepts in economics and social optimization analysis. Existing works on market equilibrium computation primarily focus on settings with a relatively small number of buyers. Motivated by this, our paper investigates the computation of market equilibrium in scenarios with a large-scale buyer population, where buyers and goods are represented by their contexts. Building on this realistic and generalized contextual market model, we introduce MarketFCNet, a deep learning-based method for approximating market equilibrium. We start by parameterizing the allocation of each good to each buyer using a neural network, which depends solely on the context of the buyer and the good. Next, we propose an efficient method to estimate the loss function of the training algorithm unbiasedly, enabling us to optimize the network parameters through gradient descent. To evaluate the approximated solution, we introduce a metric called Nash Gap, which quantifies the deviation of the given allocation and price pair from the market equilibrium. Experimental results indicate that MarketFCNet delivers competitive performance and significantly lower running times compared to existing methods as the market scale expands, demonstrating the potential of deep learning-based methods to accelerate the approximation of large-scale contextual market equilibrium. | [
"['Yunxuan Ma' 'Yide Bian' 'Hao Xu' 'Weitao Yang' 'Jingshu Zhao'\n 'Zhijian Duan' 'Feng Wang' 'Xiaotie Deng']"
] |
null | null | 2406.15468 | null | null | http://arxiv.org/pdf/2406.15468v1 | 2024-06-15T05:35:47Z | 2024-06-15T05:35:47Z | Reasoning or Simply Next Token Prediction? A Benchmark for
Stress-Testing Large Language Models | We propose MMLU-SR, a novel dataset designed to measure the true comprehension abilities of Large Language Models (LLMs) by challenging their performance in question-answering tasks with modified terms. We reasoned that an agent that ``truly'' understands a concept can still evaluate it when key terms are replaced by suitably defined alternate terms, and sought to differentiate such comprehension from mere text replacement. In our study, we modified standardized test questions by replacing a key term with a dummy word along with its definition. The key term could be in the context of questions, answers, or both questions and answers. Notwithstanding the high scores achieved by recent popular LLMs on the MMLU leaderboard, we found a substantial reduction in model performance after such replacement, suggesting poor comprehension. This dataset provides a rigorous benchmark for testing true model comprehension, and poses a challenge to the broader scientific community. | [
"['Wentian Wang' 'Paul Kantor' 'Jacob Feldman' 'Lazaros Gallos' 'Hao Wang']"
] |
null | null | 2406.15471 | null | null | http://arxiv.org/pdf/2406.15471v1 | 2024-06-15T14:44:43Z | 2024-06-15T14:44:43Z | Improving Large Models with Small models: Lower Costs and Better
Performance | Pretrained large models (PLMs), such as ChatGPT, have demonstrated remarkable performance across diverse tasks. However, the significant computational requirements of PLMs have discouraged most product teams from running or fine-tuning them. In such cases, to harness the exceptional performance of PLMs, one must rely on expensive APIs, thereby exacerbating the economic burden. Despite the overall inferior performance of small models, in specific distributions, they can achieve comparable or even superior results. Consequently, some input can be processed exclusively by small models. On the other hand, certain tasks can be broken down into multiple subtasks, some of which can be completed without powerful capabilities. Under these circumstances, small models can handle the simple subtasks, allowing large models to focus on challenging subtasks, thus improving the performance. We propose Data Shunt$^+$ (DS$^+$), a general paradigm for collaboration of small and large models. DS$^+$ not only substantially reduces the cost associated with querying large models but also effectively improves large models' performance. For instance, ChatGPT achieves an accuracy of $94.43\%$ on Amazon Product sentiment analysis, and DS$^+$ achieves an accuracy of $95.64\%$, while the cost has been reduced to only $31.18\%$. Besides, experiments also prove that the proposed collaborative-based paradigm can better inject specific task knowledge into PLMs compared to fine-tuning. | [
"['Dong Chen' 'Shuo Zhang' 'Yueting Zhuang' 'Siliang Tang' 'Qidong Liu'\n 'Hua Wang' 'Mingliang Xu']"
] |
null | null | 2406.15472 | null | null | http://arxiv.org/pdf/2406.15472v1 | 2024-06-15T15:39:43Z | 2024-06-15T15:39:43Z | Hyperbolic sentence representations for solving Textual Entailment | Hyperbolic spaces have proven to be suitable for modeling data of hierarchical nature. As such we use the Poincare ball to embed sentences with the goal of proving how hyperbolic spaces can be used for solving Textual Entailment. To this end, apart from the standard datasets used for evaluating textual entailment, we developed two additional datasets. We evaluate against baselines of various backgrounds, including LSTMs, Order Embeddings and Euclidean Averaging, which comes as a natural counterpart to representing sentences into the Euclidean space. We consistently outperform the baselines on the SICK dataset and are second only to Order Embeddings on the SNLI dataset, for the binary classification version of the entailment task. | [
"['Igor Petrovski']"
] |
null | null | 2406.15479 | null | null | http://arxiv.org/pdf/2406.15479v1 | 2024-06-17T02:31:55Z | 2024-06-17T02:31:55Z | Twin-Merging: Dynamic Integration of Modular Expertise in Model Merging | In the era of large language models, model merging is a promising way to combine multiple task-specific models into a single multitask model without extra training. However, two challenges remain: (a) interference between different models and (b) heterogeneous data during testing. Traditional model merging methods often show significant performance gaps compared to fine-tuned models due to these issues. Additionally, a one-size-fits-all model lacks flexibility for diverse test data, leading to performance degradation. We show that both shared and exclusive task-specific knowledge are crucial for merging performance, but directly merging exclusive knowledge hinders overall performance. In view of this, we propose Twin-Merging, a method that encompasses two principal stages: (1) modularizing knowledge into shared and exclusive components, with compression to reduce redundancy and enhance efficiency; (2) dynamically merging shared and task-specific knowledge based on the input. This approach narrows the performance gap between merged and fine-tuned models and improves adaptability to heterogeneous data. Extensive experiments on $12$ datasets for both discriminative and generative tasks demonstrate the effectiveness of our method, showing an average improvement of $28.34\%$ in absolute normalized score for discriminative tasks and even surpassing the fine-tuned upper bound on the generative tasks. (Our implementation is available in https://github.com/LZY-the-boys/Twin-Mergin.) | [
"['Zhenyi Lu' 'Chenghao Fan' 'Wei Wei' 'Xiaoye Qu' 'Dangyang Chen'\n 'Yu Cheng']"
] |
null | null | 2406.15480 | null | null | http://arxiv.org/pdf/2406.15480v1 | 2024-06-17T03:07:41Z | 2024-06-17T03:07:41Z | On Giant's Shoulders: Effortless Weak to Strong by Dynamic Logits Fusion | Efficient fine-tuning of large language models for task-specific applications is imperative, yet the vast number of parameters in these models makes their training increasingly challenging. Despite numerous proposals for effective methods, a substantial memory overhead remains for gradient computations during updates. \textit{Can we fine-tune a series of task-specific small models and transfer their knowledge directly to a much larger model without additional training?} In this paper, we explore weak-to-strong specialization using logit arithmetic, facilitating a direct answer to this question. Existing weak-to-strong methods often employ a static knowledge transfer ratio and a single small model for transferring complex knowledge, which leads to suboptimal performance. To surmount these limitations, we propose a dynamic logit fusion approach that works with a series of task-specific small models, each specialized in a different task. This method adaptively allocates weights among these models at each decoding step, learning the weights through Kullback-Leibler divergence constrained optimization problems. We conduct extensive experiments across various benchmarks in both single-task and multi-task settings, achieving leading results. By transferring expertise from the 7B model to the 13B model, our method closes the performance gap by 96.4% in single-task scenarios and by 86.3% in multi-task scenarios compared to full fine-tuning of the 13B model. Notably, we achieve surpassing performance on unseen tasks. Moreover, we further demonstrate that our method can effortlessly integrate in-context learning for single tasks and task arithmetic for multi-task scenarios. (Our implementation is available in https://github.com/Facico/Dynamic-Logit-Fusion.) | [
"['Chenghao Fan' 'Zhenyi Lu' 'Wei Wei' 'Jie Tian' 'Xiaoye Qu'\n 'Dangyang Chen' 'Yu Cheng']"
] |
null | null | 2406.15483 | null | null | http://arxiv.org/pdf/2406.15483v1 | 2024-06-17T06:42:13Z | 2024-06-17T06:42:13Z | Duplicate Detection with GenAI | Customer data is often stored as records in Customer Relationship Management systems (CRMs). Data which is manually entered into such systems by one or more users over time leads to data replication, partial duplication, or fuzzy duplication. This in turn means that there is no longer a single source of truth for customers, contacts, accounts, etc. Downstream business processes become increasingly complex and contrived without a unique mapping between a record in a CRM and the target customer. Current methods to detect and de-duplicate records use traditional Natural Language Processing techniques known as Entity Matching. In this paper we show how using the latest advancements in Large Language Models and Generative AI can vastly improve the identification and repair of duplicated records. On common benchmark datasets we find an improvement in the accuracy of data de-duplication rates from 30 percent using NLP techniques to almost 60 percent using our proposed method. | [
"['Ian Ormesher']"
] |
null | null | 2406.15486 | null | null | http://arxiv.org/pdf/2406.15486v2 | 2024-06-28T08:55:17Z | 2024-06-17T11:05:15Z | SampleAttention: Near-Lossless Acceleration of Long Context LLM
Inference with Adaptive Structured Sparse Attention | Large language models (LLMs) now support extremely long context windows, but the quadratic complexity of vanilla attention results in significantly long Time-to-First-Token (TTFT) latency. Existing approaches to address this complexity require additional pretraining or finetuning, and often sacrifice model accuracy. In this paper, we first provide both theoretical and empirical foundations for near-lossless sparse attention. We find that dynamically capturing head-specific sparse patterns at runtime with low overhead is crucial. To address this, we propose SampleAttention, an adaptive, structured, and near-lossless sparse attention mechanism. Leveraging observed significant sparse patterns, SampleAttention attends to a fixed percentage of adjacent tokens to capture local window patterns, and employs a two-stage query-guided key-value filtering approach, which adaptively selects a minimum set of key-values with low overhead, to capture column stripe patterns. Comprehensive evaluations show that SampleAttention can seamlessly replace vanilla attention in off-the-shelf LLMs with nearly no accuracy loss, and reduces TTFT by up to $2.42\times$ compared with FlashAttention. | [
"['Qianchao Zhu' 'Jiangfei Duan' 'Chang Chen' 'Siran Liu' 'Xiuhong Li'\n 'Guanyu Feng' 'Xin Lv' 'Huanqi Cao' 'Xiao Chuanfu' 'Xingcheng Zhang'\n 'Dahua Lin' 'Chao Yang']"
] |
null | null | 2406.15487 | null | null | http://arxiv.org/pdf/2406.15487v2 | 2024-07-08T20:15:33Z | 2024-06-18T00:02:15Z | Improving Text-To-Audio Models with Synthetic Captions | It is an open challenge to obtain high-quality training data, especially captions, for text-to-audio models. Although prior methods have leveraged text-only language models to augment and improve captions, such methods have limitations related to scale and coherence between audio and captions. In this work, we propose an audio captioning pipeline that uses an audio language model to synthesize accurate and diverse captions for audio at scale. We leverage this pipeline to produce a dataset of synthetic captions for AudioSet, named AF-AudioSet, and then evaluate the benefit of pre-training text-to-audio models on these synthetic captions. Through systematic evaluations on AudioCaps and MusicCaps, we find leveraging our pipeline and synthetic captions leads to significant improvements on audio generation quality, achieving a new state-of-the-art. | [
"['Zhifeng Kong' 'Sang-gil Lee' 'Deepanway Ghosal' 'Navonil Majumder'\n 'Ambuj Mehrish' 'Rafael Valle' 'Soujanya Poria' 'Bryan Catanzaro']"
] |
null | null | 2406.15490 | null | null | http://arxiv.org/pdf/2406.15490v1 | 2024-06-18T13:01:30Z | 2024-06-18T13:01:30Z | Causal Discovery Inspired Unsupervised Domain Adaptation for
Emotion-Cause Pair Extraction | This paper tackles the task of emotion-cause pair extraction in the unsupervised domain adaptation setting. The problem is challenging because the distributions of the events causing emotions in target domains are dramatically different from those in source domains, even though the distributions of emotional expressions overlap across domains. Inspired by causal discovery, we propose a novel deep latent model in the variational autoencoder (VAE) framework, which not only captures the underlying latent structures of data but also utilizes the easily transferable knowledge of emotions as the bridge to link the distributions of events in different domains. To facilitate knowledge transfer across domains, we also propose a novel variational posterior regularization technique to disentangle the latent representations of emotions from those of events in order to mitigate the damage caused by the spurious correlations related to the events in source domains. Through extensive experiments, we demonstrate that our model outperforms the strongest baseline by approximately 11.05% on a Chinese benchmark and 2.45% on an English benchmark in terms of weighted-average F1 score. The source code will be publicly available upon acceptance. | [
"['Yuncheng Hua' 'Yujin Huang' 'Shuo Huang' 'Tao Feng' 'Lizhen Qu'\n 'Chris Bain' 'Richard Bassed' 'Gholamreza Haffari']"
] |
null | null | 2406.15492 | null | null | http://arxiv.org/pdf/2406.15492v1 | 2024-06-18T18:37:23Z | 2024-06-18T18:37:23Z | On the Principles behind Opinion Dynamics in Multi-Agent Systems of
Large Language Models | We study the evolution of opinions inside a population of interacting large language models (LLMs). Every LLM needs to decide how much funding to allocate to an item with three initial possibilities: full, partial, or no funding. We identify biases that drive the exchange of opinions based on the LLM's tendency to (i) find consensus with the other LLM's opinion, (ii) display caution when specifying funding, and (iii) consider ethical concerns in its opinion. We find these biases are affected by the perceived absence of compelling reasons for opinion change, the perceived willingness to engage in discussion, and the distribution of allocation values. Moreover, tensions among biases can lead to the survival of funding for items with negative connotations. We also find that the final distribution of full, partial, and no funding opinions is more diverse when an LLM freely forms its opinion after an interaction than when its opinion is a multiple-choice selection among the three allocation options. In the latter case, consensus or polarization is generally attained. When agents are aware of past opinions, they seek to maintain consistency with them, and more diverse updating rules emerge. Our study is performed using a Llama 3 LLM. | [
"['Pedro Cisneros-Velarde']"
] |
null | null | 2406.15500 | null | null | http://arxiv.org/pdf/2406.15500v1 | 2024-06-19T12:07:22Z | 2024-06-19T12:07:22Z | Hidden Variables unseen by Random Forests | Random Forests are widely claimed to capture interactions well. However, some simple examples suggest that they perform poorly in the presence of certain pure interactions that the conventional CART criterion struggles to capture during tree construction. We argue that simple alternative partitioning schemes used in the tree growing procedure can enhance identification of these interactions. In a simulation study we compare these variants to conventional Random Forests and Extremely Randomized trees. Our results validate that the modifications considered enhance the model's fitting ability in scenarios where pure interactions play a crucial role. | [
"['Ricardo Blum' 'Munir Hiabu' 'Enno Mammen' 'Joseph Theo Meyer']"
] |
null | null | 2406.15504 | null | null | http://arxiv.org/pdf/2406.15504v1 | 2024-06-19T16:43:56Z | 2024-06-19T16:43:56Z | Dr.E Bridges Graphs with Large Language Models through Words | Significant efforts have been directed toward integrating powerful Large Language Models (LLMs) with diverse modalities, particularly focusing on the fusion of vision, language, and audio data. However, graph-structured data, inherently rich in structural and domain-specific knowledge, has not yet been gracefully adapted to LLMs. Existing methods either describe the graph with raw text, suffering the loss of graph structural information, or feed Graph Neural Network (GNN) embeddings directly into the LLM at the cost of losing semantic representation. To bridge this gap, we introduce an innovative, end-to-end modality-aligning framework, equipped with a pretrained Dual-Residual Vector Quantized-Variational AutoEncoder (Dr.E). This framework is specifically designed to facilitate token-level alignment with LLMs, enabling an effective translation of the intrinsic `language' of graphs into comprehensible natural language. Our experimental evaluations on standard GNN node classification tasks demonstrate competitive performance against other state-of-the-art approaches. Additionally, our framework ensures interpretability, efficiency, and robustness, with its effectiveness further validated under both fine-tuning and few-shot settings. This study marks the first successful endeavor to achieve token-level alignment between GNNs and LLMs. | [
"['Zipeng Liu' 'Likang Wu' 'Ming He' 'Zhong Guan' 'Hongke Zhao' 'Nan Feng']"
] |
null | null | 2406.15507 | null | null | http://arxiv.org/pdf/2406.15507v1 | 2024-06-19T21:40:35Z | 2024-06-19T21:40:35Z | Few-shot Knowledge Graph Relational Reasoning via Subgraph Adaptation | Few-shot Knowledge Graph (KG) Relational Reasoning aims to predict unseen triplets (i.e., query triplets) for rare relations in KGs, given only several triplets of these relations as references (i.e., support triplets). This task has gained significant traction due to the widespread use of knowledge graphs in various natural language processing applications. Previous approaches have utilized meta-training methods and manually constructed meta-relation sets to tackle this task. Recent efforts have focused on edge-mask-based methods, which exploit the structure of the contextualized graphs of target triplets (i.e., a subgraph containing relevant triplets in the KG). However, existing edge-mask-based methods are limited in extracting sufficient information from the KG and are highly influenced by spurious information in the KG. To overcome these challenges, we propose SAFER (Subgraph Adaptation for Few-shot Relational Reasoning), a novel approach that effectively adapts the information in contextualized graphs to various subgraphs generated from support and query triplets to perform the prediction. Specifically, SAFER enables the extraction of more comprehensive information from support triplets while minimizing the impact of spurious information when predicting query triplets. Experimental results on three prevalent datasets demonstrate the superiority of our proposed framework SAFER. | [
"['Haochen Liu' 'Song Wang' 'Chen Chen' 'Jundong Li']"
] |
null | null | 2406.15508 | null | null | http://arxiv.org/pdf/2406.15508v1 | 2024-06-20T00:17:28Z | 2024-06-20T00:17:28Z | What Teaches Robots to Walk, Teaches Them to Trade too -- Regime
Adaptive Execution using Informed Data and LLMs | Machine learning techniques applied to the problem of financial market forecasting struggle with dynamic regime switching, or underlying correlation and covariance shifts in true (hidden) market variables. Drawing inspiration from the success of reinforcement learning in robotics, particularly in agile locomotion adaptation of quadruped robots to unseen terrains, we introduce an innovative approach that leverages the world knowledge of pretrained LLMs (a.k.a. 'privileged information' in robotics) and dynamically adapts them using intrinsic, natural market rewards via an LLM alignment technique we dub "Reinforcement Learning from Market Feedback" (RLMF). Strong empirical results demonstrate the efficacy of our method in adapting to regime shifts in financial markets, a challenge that has long plagued predictive models in this domain. The proposed algorithmic framework outperforms the best-performing SOTA LLMs on the existing FLARE benchmark stock-movement (SM) tasks by more than 15% in accuracy. On the recently proposed NIFTY SM task, our adaptive policy outperforms best-performing trillion-parameter SOTA models such as GPT-4. The paper details the dual-phase, teacher-student architecture and implementation of our model, the empirical results obtained, and an analysis of the role of language embeddings in terms of Information Gain. | [
"['Raeid Saqur']"
] |
null | null | 2406.15509 | null | null | http://arxiv.org/pdf/2406.15509v1 | 2024-06-20T00:47:01Z | 2024-06-20T00:47:01Z | Machine Learning Visualization Tool for Exploring Parameterized
Hydrodynamics | We are interested in the computational study of shock hydrodynamics, i.e., problems involving compressible solids, liquids, and gases that undergo large deformation. These problems are dynamic and nonlinear and can exhibit complex instabilities. Due to advances in high performance computing it is possible to parameterize a hydrodynamic problem and perform a computational study yielding $\mathcal{O}(\mathrm{TB})$ of simulation state data. We present an interactive machine learning tool that can be used to compress, browse, and interpolate these large simulation datasets. This tool allows computational scientists and researchers to quickly visualize "what-if" situations, perform sensitivity analyses, and optimize complex hydrodynamic experiments. | [
"['C. F. Jekel' 'D. M. Sterbentz' 'T. M. Stitt' 'P. Mocz' 'R. N. Rieben'\n 'D. A. White' 'J. L. Belof']"
] |
null | null | 2406.15515 | null | null | http://arxiv.org/pdf/2406.15515v1 | 2024-06-20T19:08:54Z | 2024-06-20T19:08:54Z | Machine Learning Models for Accurately Predicting Properties of CsPbCl3
Perovskite Quantum Dots | Perovskite Quantum Dots (PQDs) have a promising future for several applications due to their unique properties. This study investigates the effectiveness of Machine Learning (ML) in predicting the size, absorbance (1S abs), and photoluminescence (PL) properties of $\mathrm{CsPbCl}_3$ PQDs using synthesis features as the input dataset. The study employed the ML models of Support Vector Regression (SVR), Nearest Neighbour Distance (NND), Random Forest (RF), Gradient Boosting Machine (GBM), Decision Tree (DT), and Deep Learning (DL). Although all models produced highly accurate results, SVR and NND demonstrated the most accurate property predictions, achieving excellent performance on the test and training datasets with high $\mathrm{R}^2$ and low Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE) values. Given that ML is becoming increasingly capable, its ability to understand the QD field could prove invaluable in shaping the future of nanomaterial design. | [
"['Mehmet Sıddık Çadırcı' 'Musa Çadırcı']"
] |
null | null | 2406.15518 | null | null | http://arxiv.org/pdf/2406.15518v1 | 2024-06-21T01:37:39Z | 2024-06-21T01:37:39Z | Steering Without Side Effects: Improving Post-Deployment Control of
Language Models | Language models (LMs) have been shown to behave unexpectedly post-deployment. For example, new jailbreaks continually arise, allowing model misuse, despite extensive red-teaming and adversarial training from developers. Given that most model queries are unproblematic and frequent retraining results in an unstable user experience, methods for mitigating worst-case behavior should be targeted. One such method is classifying inputs as potentially problematic, then selectively applying steering vectors on these problematic inputs, i.e., adding particular vectors to model hidden states. However, steering vectors can also negatively affect model performance, which is an issue in cases where the classifier is incorrect. We present KL-then-steer (KTS), a technique that decreases the side effects of steering while retaining its benefits, by first training a model to minimize Kullback-Leibler (KL) divergence between a steered and unsteered model on benign inputs, then steering the model that has undergone this training. Our best method prevents 44% of jailbreak attacks compared to the original Llama-2-chat-7B model while maintaining helpfulness (as measured by MT-Bench) on benign requests almost on par with the original LM. To demonstrate the generality and transferability of our method beyond jailbreaks, we show that our KTS model can be steered to reduce bias towards user-suggested answers on TruthfulQA. Code is available: https://github.com/AsaCooperStickland/kl-then-steer. | [
"['Asa Cooper Stickland' 'Alexander Lyzhov' 'Jacob Pfau' 'Salsabila Mahdi'\n 'Samuel R. Bowman']"
] |
null | null | 2406.15523 | null | null | http://arxiv.org/pdf/2406.15523v1 | 2024-06-21T04:07:43Z | 2024-06-21T04:07:43Z | Unifying Unsupervised Graph-Level Anomaly Detection and
Out-of-Distribution Detection: A Benchmark | To build safe and reliable graph machine learning systems, unsupervised graph-level anomaly detection (GLAD) and unsupervised graph-level out-of-distribution (OOD) detection (GLOD) have received significant attention in recent years. Though those two lines of research indeed share the same objective, they have been studied independently in the community due to distinct evaluation setups, creating a gap that hinders the application and evaluation of methods from one to the other. To bridge the gap, in this work, we present a Unified Benchmark for unsupervised Graph-level OOD and anomaly Detection (our method), a comprehensive evaluation framework that unifies GLAD and GLOD under the concept of generalized graph-level OOD detection. Our benchmark encompasses 35 datasets spanning four practical anomaly and OOD detection scenarios, facilitating the comparison of 16 representative GLAD/GLOD methods. We conduct multi-dimensional analyses to explore the effectiveness, generalizability, robustness, and efficiency of existing methods, shedding light on their strengths and limitations. Furthermore, we provide an open-source codebase (https://github.com/UB-GOLD/UB-GOLD) of our method to foster reproducible research and outline potential directions for future investigations based on our insights. | [
"['Yili Wang' 'Yixin Liu' 'Xu Shen' 'Chenyu Li' 'Kaize Ding' 'Rui Miao'\n 'Ying Wang' 'Shirui Pan' 'Xin Wang']"
] |
null | null | 2406.15524 | null | null | http://arxiv.org/pdf/2406.15524v1 | 2024-06-21T05:13:34Z | 2024-06-21T05:13:34Z | Rethinking Pruning Large Language Models: Benefits and Pitfalls of
Reconstruction Error Minimization | This work suggests fundamentally rethinking the current practice of pruning large language models (LLMs). The way it is done is by divide and conquer: split the model into submodels, sequentially prune them, and reconstruct predictions of the dense counterparts on small calibration data one at a time; the final model is obtained simply by putting the resulting sparse submodels together. While this approach enables pruning under memory constraints, it generates high reconstruction errors. In this work, we first present an array of reconstruction techniques that can significantly reduce this error by more than $90\%$. Unwittingly, however, we discover that minimizing reconstruction error is not always ideal and can overfit the given calibration data, resulting in increased language perplexity and poor performance at downstream tasks. We find that a strategy of self-generating calibration data can mitigate this trade-off between reconstruction and generalization, suggesting new directions in the presence of both benefits and pitfalls of reconstruction for pruning LLMs. | [
"['Sungbin Shin' 'Wonpyo Park' 'Jaeho Lee' 'Namhoon Lee']"
] |
null | null | 2406.15527 | null | null | http://arxiv.org/pdf/2406.15527v1 | 2024-06-21T07:38:55Z | 2024-06-21T07:38:55Z | Data Efficient Evaluation of Large Language Models and Text-to-Image
Models via Adaptive Sampling | Evaluating LLMs and text-to-image models is a computationally intensive task that is often overlooked. Efficient evaluation is crucial for understanding the diverse capabilities of these models and enabling comparisons across a growing number of new models and benchmarks. To address this, we introduce SubLIME, a data-efficient evaluation framework that employs adaptive sampling techniques, such as clustering and quality-based methods, to create representative subsets of benchmarks. Our approach ensures statistically aligned model rankings compared to full datasets, evidenced by high Pearson correlation coefficients. Empirical analysis across six NLP benchmarks reveals that: (1) quality-based sampling methods such as Quality SE and Quality CPD consistently achieve strong correlations (0.85 to 0.95) with full datasets at a 10% sampling rate; (2) clustering methods excel in specific benchmarks such as MMLU; and (3) no single method universally outperforms others across all metrics. Extending this framework, we leverage the HEIM leaderboard to cover 25 text-to-image models on 17 different benchmarks. SubLIME dynamically selects the optimal technique for each benchmark, significantly reducing evaluation costs while preserving ranking integrity and score distribution. Notably, a minimal sampling rate of 1% proves effective for benchmarks like MMLU. Additionally, we demonstrate that employing difficulty-based sampling to target more challenging benchmark segments enhances model differentiation with broader score distributions. We also combine semantic search, tool use, and GPT-4 review to identify redundancy across benchmarks within specific LLM categories, such as coding benchmarks. This allows us to further reduce the number of samples needed to maintain targeted rank preservation. Overall, SubLIME offers a versatile and cost-effective solution for the robust evaluation of LLMs and text-to-image models. | [
"['Cong Xu' 'Gayathri Saranathan' 'Mahammad Parwez Alam' 'Arpit Shah'\n 'James Lim' 'Soon Yee Wong' 'Foltin Martin' 'Suparna Bhattacharya']"
] |
null | null | 2406.15529 | null | null | http://arxiv.org/pdf/2406.15529v1 | 2024-06-21T11:50:57Z | 2024-06-21T11:50:57Z | Supersonic OT: Fast Unconditionally Secure Oblivious Transfer | Oblivious Transfer (OT) is a fundamental cryptographic protocol with applications in secure Multi-Party Computation, Federated Learning, and Private Set Intersection. With the advent of quantum computing, it is crucial to develop unconditionally secure core primitives like OT to ensure their continued security in the post-quantum era. Despite over four decades since OT's introduction, the literature has predominantly relied on computational assumptions, except in cases using unconventional methods like noisy channels or a fully trusted party. Introducing "Supersonic OT", a highly efficient and unconditionally secure OT scheme that avoids public-key-based primitives, we offer an alternative to traditional approaches. Supersonic OT enables a receiver to obtain a response of size O(1). Its simple (yet non-trivial) design facilitates easy security analysis and implementation. The protocol employs a basic secret-sharing scheme, controlled swaps, the one-time pad, and a third-party helper who may be corrupted by a semi-honest adversary. Our implementation and runtime analysis indicate that a single instance of Supersonic OT completes in 0.35 milliseconds, making it up to 2000 times faster than the state-of-the-art base OT. | [
"['Aydin Abadi' 'Yvo Desmedt']"
] |
null | null | 2406.15534 | null | null | http://arxiv.org/pdf/2406.15534v1 | 2024-06-21T14:19:10Z | 2024-06-21T14:19:10Z | Geneverse: A collection of Open-source Multimodal Large Language Models
for Genomic and Proteomic Research | The applications of large language models (LLMs) are promising for biomedical and healthcare research. Despite the availability of open-source LLMs trained using a wide range of biomedical data, current research on the applications of LLMs to genomics and proteomics is still limited. To fill this gap, we propose a collection of finetuned LLMs and multimodal LLMs (MLLMs), known as Geneverse, for three novel tasks in genomic and proteomic research. The models in Geneverse are trained and evaluated based on domain-specific datasets, and we use advanced parameter-efficient finetuning techniques to adapt the models to tasks including the generation of descriptions of gene functions, protein function inference from structure, and marker gene selection from spatial transcriptomic data. We demonstrate that adapted LLMs and MLLMs perform well for these tasks and may outperform closed-source large-scale models based on our evaluations focusing on both truthfulness and structural correctness. All of the training strategies and base models we used are freely accessible. | [
"['Tianyu Liu' 'Yijia Xiao' 'Xiao Luo' 'Hua Xu' 'W. Jim Zheng'\n 'Hongyu Zhao']"
] |
null | null | 2406.15540 | null | null | http://arxiv.org/pdf/2406.15540v1 | 2024-06-21T17:39:57Z | 2024-06-21T17:39:57Z | Specify What? Enhancing Neural Specification Synthesis by Symbolic
Methods | We investigate how combinations of Large Language Models (LLMs) and symbolic analyses can be used to synthesise specifications of C programs. The LLM prompts are augmented with outputs from two formal methods tools in the Frama-C ecosystem, Pathcrawler and EVA, to produce C program annotations in the specification language ACSL. We demonstrate how the addition of symbolic analysis to the workflow impacts the quality of annotations: information about input/output examples from Pathcrawler produces more context-aware annotations, while the inclusion of EVA reports yields annotations more attuned to runtime errors. In addition, we show that the method infers the program's intent rather than its behaviour, by generating specifications for buggy programs and observing the robustness of the results against bugs. | [
"['George Granberry' 'Wolfgang Ahrendt' 'Moa Johansson']"
] |
null | null | 2406.15565 | null | null | http://arxiv.org/pdf/2406.15565v1 | 2024-06-21T18:04:13Z | 2024-06-21T18:04:13Z | Unseen Object Reasoning with Shared Appearance Cues | This paper introduces an innovative approach to open world recognition (OWR), where we leverage knowledge acquired from known objects to address the recognition of previously unseen objects. The traditional method of object modeling relies on supervised learning with strict closed-set assumptions, presupposing that objects encountered during inference are already known at the training phase. However, this assumption proves inadequate for real-world scenarios due to the impracticality of accounting for the immense diversity of objects. Our hypothesis posits that object appearances can be represented as collections of "shareable" mid-level features, arranged in constellations to form object instances. By adopting this framework, we can efficiently dissect and represent both known and unknown objects in terms of their appearance cues. Our paper introduces a straightforward yet elegant method for modeling novel or unseen objects, utilizing established appearance cues and accounting for inherent uncertainties. This representation not only enables the detection of out-of-distribution objects or novel categories among unseen objects but also facilitates a deeper level of reasoning, empowering the identification of the superclass to which an unknown instance belongs. This novel approach holds promise for advancing open world recognition in diverse applications. | [
"['Paridhi Singh' 'Arun Kumar']"
] |
null | null | 2406.15567 | null | null | http://arxiv.org/pdf/2406.15567v1 | 2024-06-21T18:05:35Z | 2024-06-21T18:05:35Z | SAIL: Self-Improving Efficient Online Alignment of Large Language Models | Reinforcement Learning from Human Feedback (RLHF) is a key method for aligning large language models (LLMs) with human preferences. However, current offline alignment approaches like DPO, IPO, and SLiC rely heavily on fixed preference datasets, which can lead to sub-optimal performance. On the other hand, recent literature has focused on designing online RLHF methods but still lacks a unified conceptual formulation and suffers from distribution shift issues. To address this, we establish that online LLM alignment is underpinned by bilevel optimization. By reducing this formulation to an efficient single-level first-order method (using the reward-policy equivalence), our approach generates new samples and iteratively refines model alignment by exploring responses and regulating preference labels. In doing so, we permit alignment methods to operate in an online and self-improving manner, as well as generalize prior online RLHF methods as special cases. Compared to state-of-the-art iterative RLHF methods, our approach significantly improves alignment performance on open-sourced datasets with minimal computational overhead. | [
"['Mucong Ding' 'Souradip Chakraborty' 'Vibhu Agrawal' 'Zora Che'\n 'Alec Koppel' 'Mengdi Wang' 'Amrit Bedi' 'Furong Huang']"
] |
null | null | 2406.15568 | null | null | http://arxiv.org/pdf/2406.15568v2 | 2024-07-09T12:04:03Z | 2024-06-21T18:06:30Z | Robust Reinforcement Learning from Corrupted Human Feedback | Reinforcement learning from human feedback (RLHF) provides a principled framework for aligning AI systems with human preference data. For various reasons, e.g., personal bias, context ambiguity, lack of training, etc., human annotators may give incorrect or inconsistent preference labels. To tackle this challenge, we propose a robust RLHF approach -- $R^3M$, which models potentially corrupted preference labels as sparse outliers. Accordingly, we formulate the robust reward learning as an $\ell_1$-regularized maximum likelihood estimation problem. Computationally, we develop an efficient alternating optimization algorithm, which only incurs negligible computational overhead compared with the standard RLHF approach. Theoretically, we prove that under proper regularity conditions, $R^3M$ can consistently learn the underlying reward and identify outliers, provided that the number of outlier labels scales sublinearly with the preference sample size. Furthermore, we remark that $R^3M$ is versatile and can be extended to various preference optimization methods, including direct preference optimization (DPO). Our experiments on robotic control and natural language generation with large language models (LLMs) show that $R^3M$ improves robustness of the reward against several types of perturbations to the preference data. | [
"['Alexander Bukharin' 'Ilgee Hong' 'Haoming Jiang' 'Zichong Li'\n 'Qingru Zhang' 'Zixuan Zhang' 'Tuo Zhao']"
] |
null | null | 2406.15570 | null | null | http://arxiv.org/pdf/2406.15570v1 | 2024-06-21T18:07:46Z | 2024-06-21T18:07:46Z | DEM: Distribution Edited Model for Training with Mixed Data
Distributions | Training with mixed data distributions is a common and important part of creating multi-task and instruction-following models. The diversity of the data distributions and the cost of joint training make the optimization procedure extremely challenging. Data mixing methods partially address this problem, albeit with sub-optimal performance across data sources, and they require multiple expensive training runs. In this paper, we propose a simple and efficient alternative for better optimization of the data sources by combining models individually trained on each data source with the base model using basic element-wise vector operations. The resulting model, namely the Distribution Edited Model (DEM), is 11x cheaper than standard data mixing and outperforms strong baselines on a variety of benchmarks, yielding up to 6.2% improvement on MMLU, 11.5% on BBH, 16.1% on DROP, and 9.3% on HELM with models of size 3B to 13B. Notably, DEM does not require full re-training when modifying a single data source, thus making it very flexible and scalable for training with diverse data sources. | [
"['Dhananjay Ram' 'Aditya Rawal' 'Momchil Hardalov' 'Nikolaos Pappas'\n 'Sheng Zha']"
] |
null | null | 2406.15575 | null | null | http://arxiv.org/pdf/2406.15575v1 | 2024-06-21T18:22:11Z | 2024-06-21T18:22:11Z | Sketch-GNN: Scalable Graph Neural Networks with Sublinear Training
Complexity | Graph Neural Networks (GNNs) are widely applied to graph learning problems such as node classification. When scaling up the underlying graphs of GNNs to a larger size, we are forced to either train on the complete graph and keep the full graph adjacency and node embeddings in memory (which is often infeasible) or mini-batch sample the graph (which results in exponentially growing computational complexities with respect to the number of GNN layers). Various sampling-based and historical-embedding-based methods are proposed to avoid this exponential growth of complexities. However, none of these solutions eliminates the linear dependence on graph size. This paper proposes a sketch-based algorithm whose training time and memory grow sublinearly with respect to graph size by training GNNs atop a few compact sketches of graph adjacency and node embeddings. Based on polynomial tensor-sketch (PTS) theory, our framework provides a novel protocol for sketching non-linear activations and graph convolution matrices in GNNs, as opposed to existing methods that sketch linear weights or gradients in neural networks. In addition, we develop a locality-sensitive hashing (LSH) technique that can be trained to improve the quality of sketches. Experiments on large-graph benchmarks demonstrate the scalability and competitive performance of our Sketch-GNNs versus their full-size GNN counterparts. | [
"['Mucong Ding' 'Tahseen Rabbani' 'Bang An' 'Evan Z Wang' 'Furong Huang']"
] |
null | null | 2406.15594 | null | null | http://arxiv.org/pdf/2406.15594v1 | 2024-06-21T18:52:03Z | 2024-06-21T18:52:03Z | Detecting and Classifying Flares in High-Resolution Solar Spectra with
Supervised Machine Learning | Flares are a well-studied aspect of the Sun's magnetic activity. Detecting and classifying solar flares can inform the analysis of contamination caused by stellar flares in exoplanet transmission spectra. In this paper, we present a standardized procedure to classify solar flares with the aid of supervised machine learning. Using flare data from the RHESSI mission and solar spectra from the HARPS-N instrument, we trained several supervised machine learning models, and found that the best performing algorithm is a C-Support Vector Machine (SVC) with non-linear kernels, specifically Radial Basis Functions (RBF). The best-trained model, SVC with RBF kernels, achieves an average aggregate accuracy score of 0.65, and categorical accuracy scores of over 0.70 for both the no-flare and weak-flare classes. In comparison, a blind classification algorithm would have an accuracy score of 0.33. Testing showed that the model is able to detect and classify solar flares in entirely new data with different characteristics and distributions from those of the training set. Future efforts could focus on enhancing classification accuracy, investigating the efficacy of alternative models, particularly deep learning models, and incorporating more datasets to extend the application of this framework to stars that host exoplanets. | [
"['Nicole Hao' 'Laura Flagg' 'Ray Jayawardhana']"
] |
null | null | 2406.15599 | null | null | http://arxiv.org/pdf/2406.15599v1 | 2024-06-21T18:57:38Z | 2024-06-21T18:57:38Z | Pareto-Optimal Learning from Preferences with Hidden Context | Ensuring AI models align with human values is essential for their safety and functionality. Reinforcement learning from human feedback (RLHF) uses human preferences to achieve this alignment. However, preferences sourced from diverse populations can result in point estimates of human values that may be sub-optimal or unfair to specific groups. We propose Pareto Optimal Preference Learning (POPL), which frames discrepant group preferences as objectives with potential trade-offs, aiming for policies that are Pareto-optimal on the preference dataset. POPL utilizes Lexicase selection, an iterative process to select diverse and Pareto-optimal solutions. Our empirical evaluations demonstrate that POPL surpasses baseline methods in learning sets of reward functions, effectively catering to distinct groups without access to group numbers or membership labels. Furthermore, we illustrate that POPL can serve as a foundation for techniques optimizing specific notions of group fairness, ensuring inclusive and equitable AI model alignment. | [
"['Ryan Boldi' 'Li Ding' 'Lee Spector' 'Scott Niekum']"
] |
null | null | 2406.15612 | null | null | http://arxiv.org/pdf/2406.15612v2 | 2024-06-28T14:23:49Z | 2024-06-21T19:27:46Z | Catastrophic-risk-aware reinforcement learning with
extreme-value-theory-based policy gradients | This paper tackles the problem of mitigating catastrophic risk (which is risk with very low frequency but very high severity) in the context of a sequential decision making process. This problem is particularly challenging due to the scarcity of observations in the far tail of the distribution of cumulative costs (negative rewards). A policy gradient algorithm, which we call POTPG, is developed based on approximations of the tail risk derived from extreme value theory. Numerical experiments highlight the outperformance of our method over common benchmarks relying on the empirical distribution. An application to financial risk management, more precisely to the dynamic hedging of a financial option, is presented. | [
"['Parisa Davar' 'Frédéric Godin' 'Jose Garrido']"
] |
null | null | 2406.15613 | null | null | http://arxiv.org/pdf/2406.15613v1 | 2024-06-21T19:28:50Z | 2024-06-21T19:28:50Z | MOUNTAINEER: Topology-Driven Visual Analytics for Comparing Local
Explanations | With the increasing use of black-box Machine Learning (ML) techniques in critical applications, there is a growing demand for methods that can provide transparency and accountability for model predictions. As a result, a large number of local explainability methods for black-box models have been developed and popularized. However, machine learning explanations are still hard to evaluate and compare due to the high dimensionality, heterogeneous representations, varying scales, and stochastic nature of some of these methods. Topological Data Analysis (TDA) can be an effective method in this domain since it can be used to transform attributions into uniform graph representations, providing a common ground for comparison across different explanation methods. We present a novel topology-driven visual analytics tool, Mountaineer, that allows ML practitioners to interactively analyze and compare these representations by linking the topological graphs back to the original data distribution, model predictions, and feature attributions. Mountaineer facilitates rapid and iterative exploration of ML explanations, enabling experts to gain deeper insights into the explanation techniques, understand the underlying data distributions, and thus reach well-founded conclusions about model behavior. Furthermore, we demonstrate the utility of Mountaineer through two case studies using real-world data. In the first, we show how Mountaineer enabled us to compare black-box ML explanations and discern the regions and causes of disagreement between different explanations. In the second, we demonstrate how the tool can be used to compare and understand ML models themselves. Finally, we conducted interviews with three industry experts to help us evaluate our work. | [
"['Parikshit Solunke' 'Vitoria Guardieiro' 'Joao Rulff' 'Peter Xenopoulos'\n 'Gromit Yeuk-Yin Chan' 'Brian Barr' 'Luis Gustavo Nonato' 'Claudio Silva']"
] |
null | null | 2406.15617 | null | null | http://arxiv.org/pdf/2406.15617v1 | 2024-06-21T19:40:30Z | 2024-06-21T19:40:30Z | BrowNNe: Brownian Nonlocal Neurons & Activation Functions | It is generally thought that the use of stochastic activation functions in deep learning architectures yields models with superior generalization abilities. However, a sufficiently rigorous statement and theoretical proof of this heuristic is lacking in the literature. In this paper, we provide several novel contributions to the literature in this regard. First, defining a new notion of nonlocal directional derivative, we analyze its theoretical properties (existence and convergence). Second, using a probabilistic reformulation, we show that nonlocal derivatives are epsilon-subgradients, and derive sample complexity results for convergence of stochastic gradient descent-like methods using nonlocal derivatives. Finally, using our analysis of the nonlocal gradient of Hölder continuous functions, we observe that sample paths of Brownian motion admit nonlocal directional derivatives, and the nonlocal derivatives of Brownian motion are seen to be Gaussian processes with computable mean and standard deviation. Using the theory of nonlocal directional derivatives, we solve a highly nondifferentiable and nonconvex model problem of parameter estimation on image articulation manifolds. Using Brownian motion infused ReLU activation functions with the nonlocal gradient in place of the usual gradient during backpropagation, we also perform experiments on multiple well-studied deep learning architectures. Our experiments indicate the superior generalization capabilities of Brownian neural activation functions in low-training-data regimes, where the use of stochastic neurons beats the deterministic ReLU counterpart. | [
"['Sriram Nagaraj' 'Truman Hickok']"
] |
null | null | 2406.15619 | null | null | http://arxiv.org/pdf/2406.15619v1 | 2024-06-21T19:55:34Z | 2024-06-21T19:55:34Z | Physics Informed Machine Learning (PIML) methods for estimating the
remaining useful lifetime (RUL) of aircraft engines | This paper is aimed at using the newly developing field of physics informed machine learning (PIML) to develop models for predicting the remaining useful lifetime (RUL) of aircraft engines. We consider the well-known benchmark NASA Commercial Modular Aero-Propulsion System Simulation (C-MAPSS) data as the main data for this paper, which consists of sensor outputs in a variety of different operating modes. C-MAPSS is a well-studied dataset with much existing work in the literature that addresses RUL prediction with classical and deep learning methods. In the absence of published empirical physical laws governing the C-MAPSS data, our approach first uses stochastic methods to estimate the governing physics models from the noisy time series data. In our approach, we model the various sensor readings as being governed by stochastic differential equations, and we estimate the corresponding transition density mean and variance functions of the underlying processes. We then augment LSTM (long-short term memory) models with the learned mean and variance functions during training and inference. Our PIML based approach is different from previous methods, and we use the data to first learn the physics. Our results indicate that PIML discovery and solution methods are well suited for this problem and outperform previous data-only deep learning methods for this data set and task. Moreover, the framework developed herein is flexible, and can be adapted to other situations (other sensor modalities or combined multi-physics environments), including cases where the underlying physics is only partially observed or known. | [
"['Sriram Nagaraj' 'Truman Hickok']"
] |
null | null | 2406.15625 | null | null | http://arxiv.org/pdf/2406.15625v1 | 2024-06-21T20:02:22Z | 2024-06-21T20:02:22Z | Shortcomings of LLMs for Low-Resource Translation: Retrieval and
Understanding are Both the Problem | This work investigates the in-context learning abilities of pretrained large language models (LLMs) when instructed to translate text from a low-resource language into a high-resource language as part of an automated machine translation pipeline. We conduct a set of experiments translating Southern Quechua to Spanish and examine the informativity of various types of information retrieved from a constrained database of digitized pedagogical materials (dictionaries and grammar lessons) and parallel corpora. Using both automatic and human evaluation of model output, we conduct ablation studies that manipulate (1) context type (morpheme translations, grammar descriptions, and corpus examples), (2) retrieval methods (automated vs. manual), and (3) model type. Our results suggest that even relatively small LLMs are capable of utilizing prompt context for zero-shot low-resource translation when provided a minimally sufficient amount of relevant linguistic information. However, the variable effects of prompt type, retrieval method, model type, and language-specific factors highlight the limitations of using even the best LLMs as translation systems for the majority of the world's 7,000+ languages and their speakers. | [
"['Sara Court' 'Micha Elsner']"
] |
null | null | 2406.15627 | null | null | http://arxiv.org/pdf/2406.15627v1 | 2024-06-21T20:06:31Z | 2024-06-21T20:06:31Z | Benchmarking Uncertainty Quantification Methods for Large Language
Models with LM-Polygraph | Uncertainty quantification (UQ) is becoming increasingly recognized as a critical component of applications that rely on machine learning (ML). The rapid proliferation of large language models (LLMs) has stimulated researchers to seek efficient and effective approaches to UQ in text generation tasks, as in addition to their emerging capabilities, these models have introduced new challenges for building safe applications. As with other ML models, LLMs are prone to make incorrect predictions, ``hallucinate'' by fabricating claims, or simply generate low-quality output for a given input. UQ is a key element in dealing with these challenges. However, research to date on UQ methods for LLMs has been fragmented, with disparate evaluation methods. In this work, we tackle this issue by introducing a novel benchmark that implements a collection of state-of-the-art UQ baselines, and provides an environment for controllable and consistent evaluation of novel techniques by researchers in various text generation tasks. Our benchmark also supports the assessment of confidence normalization methods in terms of their ability to provide interpretable scores. Using our benchmark, we conduct a large-scale empirical investigation of UQ and normalization techniques across nine tasks and shed light on the most promising approaches. | [
"['Roman Vashurin' 'Ekaterina Fadeeva' 'Artem Vazhentsev' 'Akim Tsvigun'\n 'Daniil Vasilev' 'Rui Xing' 'Abdelrahman Boda Sadallah'\n 'Lyudmila Rvanova' 'Sergey Petrakov' 'Alexander Panchenko'\n 'Timothy Baldwin' 'Preslav Nakov' 'Maxim Panov' 'Artem Shelmanov']"
] |
null | null | 2406.15635 | null | null | http://arxiv.org/pdf/2406.15635v1 | 2024-06-21T20:24:03Z | 2024-06-21T20:24:03Z | DataFreeShield: Defending Adversarial Attacks without Training Data | Recent advances in adversarial robustness rely on an abundant set of training data, where using external or additional datasets has become a common setting. However, in real life, the training data is often kept private for security and privacy reasons, while only the pretrained weights are available to the public. In such scenarios, existing methods that assume accessibility to the original data become inapplicable. Thus we investigate the pivotal problem of data-free adversarial robustness, where we try to achieve adversarial robustness without accessing any real data. Through a preliminary study, we highlight the severity of the problem by showing that robustness without the original dataset is difficult to achieve, even with similar domain datasets. To address this issue, we propose DataFreeShield, which tackles the problem from two perspectives: surrogate dataset generation and adversarial training using the generated data. Through extensive validation, we show that DataFreeShield outperforms baselines, demonstrating that the proposed method constitutes the first entirely data-free solution to the adversarial robustness problem. | [
"['Hyeyoon Lee' 'Kanghyun Choi' 'Dain Kwon' 'Sunjong Park'\n 'Mayoore Selvarasa Jaiswal' 'Noseong Park' 'Jonghyun Choi' 'Jinho Lee']"
] |
null | null | 2406.15638 | null | null | http://arxiv.org/pdf/2406.15638v1 | 2024-06-21T20:34:08Z | 2024-06-21T20:34:08Z | Root Cause Analysis of Anomalies in 5G RAN Using Graph Neural Network
and Transformer | The emergence of 5G technology marks a significant milestone in developing telecommunication networks, enabling exciting new applications such as augmented reality and self-driving vehicles. However, these improvements bring an increased management complexity and a special concern in dealing with failures, as the applications 5G intends to support heavily rely on high network performance and low latency. Thus, automatic self-healing solutions have become effective in dealing with this requirement, allowing a learning-based system to automatically detect anomalies and perform Root Cause Analysis (RCA). However, there are inherent challenges to the implementation of such intelligent systems. First, there is a lack of suitable data for anomaly detection and RCA, as labelled data for failure scenarios is uncommon. Secondly, current intelligent solutions are tailored to LTE networks and do not fully capture the spatio-temporal characteristics present in the data. Considering this, we utilize a calibrated simulator, Simu5G, and generate open-source data for normal and failure scenarios. Using this data, we propose Simba, a state-of-the-art approach for anomaly detection and root cause analysis in 5G Radio Access Networks (RANs). We leverage Graph Neural Networks to capture spatial relationships while a Transformer model is used to learn the temporal dependencies of the data. We implement a prototype of Simba and evaluate it over multiple failures. The outcomes are compared against existing solutions to confirm the superiority of Simba. | [
"['Antor Hasan' 'Conrado Boeira' 'Khaleda Papry' 'Yue Ju' 'Zhongwen Zhu'\n 'Israat Haque']"
] |
null | null | 2406.15647 | null | null | http://arxiv.org/pdf/2406.15647v2 | 2024-06-25T18:26:07Z | 2024-06-21T20:56:12Z | Generating Music with Structure Using Self-Similarity as Attention | Despite the innovations in deep learning and generative AI, creating long-term structure as well as the layers of repeated structure common in musical works remains an open challenge in music generation. We propose an attention layer that uses a novel approach applying user-supplied self-similarity matrices to previous time steps, and demonstrate it in our Similarity Incentivized Neural Generator (SING) system, a deep learning autonomous music generation system with two layers. The first is a vanilla Long Short Term Memory layer, and the second is the proposed attention layer. During generation, this attention mechanism imposes a suggested structure from a template piece on the generated music. We train SING on the MAESTRO dataset using a novel variable batching method, and compare its performance to the same model without the attention mechanism. The addition of our proposed attention mechanism significantly improves the network's ability to replicate specific structures, and it performs better on an unseen test set than a model without the attention mechanism. | [
"['Sophia Hager' 'Kathleen Hablutzel' 'Katherine M. Kinnaird']"
] |
null | null | 2406.15648 | null | null | http://arxiv.org/pdf/2406.15648v1 | 2024-06-21T20:56:35Z | 2024-06-21T20:56:35Z | Testing the Feasibility of Linear Programs with Bandit Feedback | While the recent literature has seen a surge in the study of constrained bandit problems, all existing methods for these begin by assuming the feasibility of the underlying problem. We initiate the study of testing such feasibility assumptions, and in particular address the problem in the linear bandit setting, thus characterising the costs of feasibility testing for an unknown linear program using bandit feedback. Concretely, we test if $\exists x: Ax \ge 0$ for an unknown $A \in \mathbb{R}^{m \times d}$, by playing a sequence of actions $x_t \in \mathbb{R}^d$, and observing $Ax_t + \mathrm{noise}$ in response. By identifying the hypothesis as determining the sign of the value of a minimax game, we construct a novel test based on low-regret algorithms and a nonasymptotic law of iterated logarithms. We prove that this test is reliable, and adapts to the `signal level' $\Gamma$ of any instance, with mean sample costs scaling as $\widetilde{O}(d^2/\Gamma^2)$. We complement this by a minimax lower bound of $\Omega(d/\Gamma^2)$ for sample costs of reliable tests, dominating prior asymptotic lower bounds by capturing the dependence on $d$, and thus elucidating a basic insight missing in the extant literature on such problems. | [
"['Aditya Gangrade' 'Aditya Gopalan' 'Venkatesh Saligrama' 'Clayton Scott']"
] |
null | null | 2406.15659 | null | null | http://arxiv.org/pdf/2406.15659v1 | 2024-06-21T21:33:51Z | 2024-06-21T21:33:51Z | Contextual Sprint Classification in Soccer Based on Deep Learning | The analysis of high-intensity runs (or sprints) in soccer has long been a topic of interest for sports science researchers and practitioners. In particular, recent studies suggested contextualizing sprints based on their tactical purposes to better understand the physical-tactical requirements of modern match-play. However, they have a limitation in scalability, as human experts have to manually classify hundreds of sprints for every match. To address this challenge, this paper proposes a deep learning framework for automatically classifying sprints in soccer into contextual categories. The proposed model covers the permutation-invariant and sequential nature of multi-agent trajectories in soccer by deploying Set Transformers and a bidirectional GRU. We train the model with category labels made through the collaboration of human annotators and a rule-based classifier. Experimental results show that our model classifies sprints in the test dataset into 15 categories with the accuracy of 77.65%, implying the potential of the proposed framework for facilitating the integrated analysis of soccer sprints at scale. | [
"['Hyunsung Kim' 'Gun-Hee Joe' 'Jinsung Yoon' 'Sang-Ki Ko']"
] |