categories | doi | id | year | venue | link | updated | published | title | abstract | authors
string | string | string | float64 | string | string | string | string | string | string | sequence
---|---|---|---|---|---|---|---|---|---|---
null | null | 2406.03061 | null | null | http://arxiv.org/pdf/2406.03061v1 | 2024-06-05T08:39:10Z | 2024-06-05T08:39:10Z | Predicting unobserved climate time series data at distant areas via
spatial correlation using reservoir computing | Collecting time series data spatially distributed in many locations is often important for analyzing climate change and its impacts on ecosystems. However, comprehensive spatial data collection is not always feasible, requiring us to predict climate variables at some locations. This study focuses on the prediction of climatic elements, specifically near-surface temperature and pressure, at a target location apart from a data observation point. Our approach uses two prediction methods: reservoir computing (RC), known as a machine learning framework with low computational requirements, and vector autoregression models (VAR), recognized as a statistical method for analyzing time series data. Our results show that the accuracy of the predictions degrades with the distance between the observation and target locations. We quantitatively estimate the distance within which effective predictions are possible. We also find that in the context of climate data, geographical distance is associated with data correlation, and a strong data correlation significantly improves the prediction accuracy with RC. In particular, RC outperforms VAR in predicting highly correlated data within the predictive range. These findings suggest that machine learning-based methods can be used more effectively to predict climatic elements in remote locations by assessing the distance to them from the data observation point in advance. Our study on low-cost and accurate prediction of climate variables has significant value for climate change strategies. | [
"['Shihori Koyama' 'Daisuke Inoue' 'Hiroaki Yoshida' 'Kazuyuki Aihara'\n 'Gouhei Tanaka']"
] |
null | null | 2406.03064 | null | null | http://arxiv.org/pdf/2406.03064v1 | 2024-06-05T08:47:30Z | 2024-06-05T08:47:30Z | Path-Specific Causal Reasoning for Fairness-aware Cognitive Diagnosis | Cognitive Diagnosis (CD), which leverages student and exercise data to predict students' proficiency levels on different knowledge concepts, is one of the fundamental components of Intelligent Education. Due to the scarcity of student-exercise interaction data, most existing methods focus on making the best use of available data, such as exercise content and student information (e.g., educational context). Despite the great progress, the abuse of students' sensitive information has not received enough attention. Given the important position of CD in Intelligent Education, employing sensitive information when making diagnosis predictions will cause serious social issues. Moreover, data-driven neural networks are easily misled by shortcuts between input data and output predictions, exacerbating this problem. Therefore, it is crucial to eliminate the negative impact of sensitive information in CD models. In response, we argue that sensitive attributes of students can also provide useful information, and that only the shortcuts directly related to the sensitive information should be eliminated from the diagnosis process. Thus, we employ causal reasoning and design a novel Path-Specific Causal Reasoning Framework (PSCRF) to achieve this goal. Specifically, we first leverage an encoder to extract features and generate embeddings for the general and sensitive information of students. Then, we design a novel attribute-oriented predictor to decouple the sensitive attributes, in which fairness-related sensitive features are eliminated and other useful information is retained. Finally, we design a multi-factor constraint to ensure fairness and diagnosis performance simultaneously. 
Extensive experiments over real-world datasets (e.g., PISA dataset) demonstrate the effectiveness of our proposed PSCRF. | [
"['Dacao Zhang' 'Kun Zhang' 'Le Wu' 'Mi Tian' 'Richang Hong' 'Meng Wang']"
] |
null | null | 2406.03065 | null | null | http://arxiv.org/pdf/2406.03065v1 | 2024-06-05T08:49:51Z | 2024-06-05T08:49:51Z | Decision Boundary-aware Knowledge Consolidation Generates Better
Instance-Incremental Learner | Instance-incremental learning (IIL) focuses on learning continually with data of the same classes. Compared to class-incremental learning (CIL), IIL is seldom explored because it suffers less from catastrophic forgetting (CF). However, besides retaining knowledge, in real-world deployment scenarios where the class space is always predefined, continual and cost-effective model promotion with the potential unavailability of previous data is a more essential demand. Therefore, we first define a new and more practical IIL setting as promoting the model's performance, besides resisting CF, with only new observations. Two issues have to be tackled in the new IIL setting: 1) the notorious catastrophic forgetting due to the lack of access to old data, and 2) broadening the existing decision boundary to new observations because of concept drift. To tackle these problems, our key insight is to moderately broaden the decision boundary to failure cases while retaining the old boundary. Hence, we propose a novel decision boundary-aware distillation method that consolidates knowledge into the teacher to ease the student's learning of new knowledge. We also establish benchmarks on the existing datasets CIFAR-100 and ImageNet. Notably, extensive experiments demonstrate that the teacher model can be a better incremental learner than the student model, which overturns previous knowledge distillation-based methods that treat the student as the main role. | [
"['Qiang Nie' 'Weifu Fu' 'Yuhuan Lin' 'Jialin Li' 'Yifeng Zhou' 'Yong Liu'\n 'Lei Zhu' 'Chengjie Wang']"
] |
null | null | 2406.03068 | null | null | http://arxiv.org/pdf/2406.03068v1 | 2024-06-05T08:51:08Z | 2024-06-05T08:51:08Z | How Truncating Weights Improves Reasoning in Language Models | In addition to the ability to generate fluent text in various languages, large language models have been successful at tasks that involve basic forms of logical "reasoning" over their context. Recent work found that selectively removing certain components from weight matrices in pre-trained models can improve such reasoning capabilities. We investigate this phenomenon further by carefully studying how certain global associations tend to be stored in specific weight components or Transformer blocks, in particular feed-forward layers. Such associations may hurt predictions in reasoning tasks, and removing the corresponding components may then improve performance. We analyze how this arises during training, both empirically and theoretically, on a two-layer Transformer trained on a basic reasoning task with noise, a toy associative memory model, and on the Pythia family of pre-trained models tested on simple reasoning tasks. | [
"['Lei Chen' 'Joan Bruna' 'Alberto Bietti']"
] |
null | null | 2406.03072 | null | null | http://arxiv.org/pdf/2406.03072v2 | 2024-06-27T15:05:17Z | 2024-06-05T08:57:41Z | Local to Global: Learning Dynamics and Effect of Initialization for
Transformers | In recent years, transformer-based models have revolutionized deep learning, particularly in sequence modeling. To better understand this phenomenon, there is a growing interest in using Markov input processes to study transformers. However, our current understanding in this regard remains limited with many fundamental questions about how transformers learn Markov chains still unanswered. In this paper, we address this by focusing on first-order Markov chains and single-layer transformers, providing a comprehensive characterization of the learning dynamics in this context. Specifically, we prove that transformer parameters trained on next-token prediction loss can either converge to global or local minima, contingent on the initialization and the Markovian data properties, and we characterize the precise conditions under which this occurs. To the best of our knowledge, this is the first result of its kind highlighting the role of initialization. We further demonstrate that our theoretical findings are corroborated by empirical evidence. Based on these insights, we provide guidelines for the initialization of transformer parameters and demonstrate their effectiveness. Finally, we outline several open problems in this arena. Code is available at: https://github.com/Bond1995/Markov. | [
"['Ashok Vardhan Makkuva' 'Marco Bondaschi' 'Chanakya Ekbote'\n 'Adway Girish' 'Alliot Nagle' 'Hyeji Kim' 'Michael Gastpar']"
] |
null | null | 2406.03078 | null | null | http://arxiv.org/pdf/2406.03078v1 | 2024-06-05T09:05:55Z | 2024-06-05T09:05:55Z | Towards Federated Domain Unlearning: Verification Methodologies and
Challenges | Federated Learning (FL) has evolved as a powerful tool for collaborative model training across multiple entities, ensuring data privacy in sensitive sectors such as healthcare and finance. However, the introduction of the Right to Be Forgotten (RTBF) poses new challenges, necessitating federated unlearning to delete data without full model retraining. Traditional FL unlearning methods, not originally designed with domain specificity in mind, inadequately address the complexities of multi-domain scenarios, often affecting the accuracy of models in non-targeted domains or leading to uniform forgetting across all domains. Our work presents the first comprehensive empirical study on Federated Domain Unlearning, analyzing the characteristics and challenges of current techniques in multi-domain contexts. We uncover that these methods falter, particularly because they neglect the nuanced influences of domain-specific data, which can lead to significant performance degradation and inaccurate model behavior. Our findings reveal that unlearning disproportionately affects the model's deeper layers, erasing critical representational subspaces acquired during earlier training phases. In response, we propose novel evaluation methodologies tailored for Federated Domain Unlearning, aiming to accurately assess and verify domain-specific data erasure without compromising the model's overall integrity and performance. This investigation not only highlights the urgent need for domain-centric unlearning strategies in FL but also sets a new precedent for evaluating and implementing these techniques effectively. | [
"['Kahou Tam' 'Kewei Xu' 'Li Li' 'Huazhu Fu']"
] |
null | null | 2406.03082 | null | null | http://arxiv.org/pdf/2406.03082v1 | 2024-06-05T09:11:46Z | 2024-06-05T09:11:46Z | Learning Solutions of Stochastic Optimization Problems with Bayesian
Neural Networks | Mathematical solvers use parametrized Optimization Problems (OPs) as inputs to yield optimal decisions. In many real-world settings, some of these parameters are unknown or uncertain. Recent research focuses on predicting the value of these unknown parameters using available contextual features, aiming to decrease decision regret by adopting end-to-end learning approaches. However, these approaches disregard prediction uncertainty and therefore make the mathematical solver susceptible to providing erroneous decisions in the case of low-confidence predictions. We propose a novel framework that models prediction uncertainty with Bayesian Neural Networks (BNNs) and propagates this uncertainty into the mathematical solver with a Stochastic Programming technique. The differentiable nature of BNNs and differentiable mathematical solvers allows for two different learning approaches: in the Decoupled learning approach, we update the BNN weights to increase the quality of the predicted distributions of the OP parameters, while in the Combined learning approach, we update the weights aiming to directly minimize the expected value of the OP's cost function in a stochastic end-to-end fashion. We perform an extensive evaluation using synthetic data with various noise properties and a real dataset, showing that decision regret is generally lower (better) with both proposed methods. | [
"['Alan A. Lahoud' 'Erik Schaffernicht' 'Johannes A. Stork']"
] |
null | null | 2406.03085 | null | null | http://arxiv.org/pdf/2406.03085v1 | 2024-06-05T09:19:54Z | 2024-06-05T09:19:54Z | Exploring User Retrieval Integration towards Large Language Models for
Cross-Domain Sequential Recommendation | Cross-Domain Sequential Recommendation (CDSR) aims to mine and transfer users' sequential preferences across different domains to alleviate the long-standing cold-start issue. Traditional CDSR models capture collaborative information through user and item modeling while overlooking valuable semantic information. Recently, Large Language Models (LLMs) have demonstrated powerful semantic reasoning capabilities, motivating us to introduce them to better capture semantic information. However, introducing LLMs to CDSR is non-trivial due to two crucial issues: seamless information integration and domain-specific generation. To this end, we propose a novel framework named URLLM, which aims to improve CDSR performance by exploring the User Retrieval approach and domain grounding on LLMs simultaneously. Specifically, we first present a novel dual-graph sequential model to capture the diverse information, along with an alignment and contrastive learning method to facilitate domain knowledge transfer. Subsequently, a user retrieve-generation model is adopted to seamlessly integrate the structural information into the LLM, fully harnessing its emergent inferencing ability. Furthermore, we propose a domain-specific strategy and a refinement module to prevent out-of-domain generation. Extensive experiments on Amazon datasets demonstrate the information integration and domain-specific generation ability of URLLM in comparison to state-of-the-art baselines. Our code is available at https://github.com/TingJShen/URLLM | [
"['Tingjia Shen' 'Hao Wang' 'Jiaqing Zhang' 'Sirui Zhao' 'Liangyue Li'\n 'Zulong Chen' 'Defu Lian' 'Enhong Chen']"
] |
null | null | 2406.03086 | null | null | http://arxiv.org/pdf/2406.03086v1 | 2024-06-05T09:22:19Z | 2024-06-05T09:22:19Z | Task-Oriented Wireless Communications for Collaborative Perception in
Intelligent Unmanned Systems | Collaborative Perception (CP) has shown great potential to achieve more holistic and reliable environmental perception in intelligent unmanned systems (IUSs). However, implementing CP still faces key challenges due to the characteristics of the CP task and the dynamics of wireless channels. In this article, a task-oriented wireless communication framework is proposed to jointly optimize the communication scheme and the CP procedure. We first propose channel-adaptive compression and robust fusion approaches to extract and exploit the most valuable semantic information under wireless communication constraints. We then propose a task-oriented distributed scheduling algorithm to identify the best collaborators for CP under dynamic environments. The main idea is learning while scheduling, where the collaboration utility is effectively learned with low computation and communication overhead. Case studies are carried out in connected autonomous driving scenarios to verify the proposed framework. Finally, we identify several future research directions. | [
"['Sheng Zhou' 'Yukuan Jia' 'Ruiqing Mao' 'Zhaojun Nan' 'Yuxuan Sun'\n 'Zhisheng Niu']"
] |
null | null | 2406.03087 | null | null | http://arxiv.org/pdf/2406.03087v1 | 2024-06-05T09:24:10Z | 2024-06-05T09:24:10Z | Lossless Image Compression Using Multi-level Dictionaries: Binary Images | Lossless image compression is required in various applications to reduce storage or transmission costs of images, while requiring the reconstructed images to have zero information loss compared to the original. Existing lossless image compression methods either have simple design but poor compression performance, or complex design, better performance, but with no performance guarantees. In our endeavor to develop a lossless image compression method with low complexity and guaranteed performance, we argue that compressibility of a color image is essentially derived from the patterns in its spatial structure, intensity variations, and color variations. Thus, we divide the overall design of a lossless image compression scheme into three parts that exploit corresponding redundancies. We further argue that the binarized version of an image captures its fundamental spatial structure and in this work, we propose a scheme for lossless compression of binary images. The proposed scheme first learns dictionaries of $16\times16$, $8\times8$, $4\times4$, and $2\times2$ square pixel patterns from various datasets of binary images. It then uses these dictionaries to encode binary images. These dictionaries have various interesting properties that are further exploited to construct an efficient scheme. Our preliminary results show that the proposed scheme consistently outperforms existing conventional and learning-based lossless compression approaches, and provides, on average, as much as $1.5\times$ better performance than a common general-purpose lossless compression scheme (WebP), more than $3\times$ better performance than a state-of-the-art learning-based scheme, and better performance than a specialized scheme for binary image compression (JBIG2). | [
"['Samar Agnihotri' 'Renu Rameshan' 'Ritwik Ghosal']"
] |
null | null | 2406.03088 | null | null | http://arxiv.org/pdf/2406.03088v1 | 2024-06-05T09:25:18Z | 2024-06-05T09:25:18Z | HASS: Hardware-Aware Sparsity Search for Dataflow DNN Accelerator | Deep Neural Networks (DNNs) excel in learning hierarchical representations from raw data, such as images, audio, and text. To compute these DNN models with high performance and energy efficiency, these models are usually deployed onto customized hardware accelerators. Among various accelerator designs, dataflow architecture has shown promising performance due to its layer-pipelined structure and its scalability in data parallelism. Exploiting weights and activations sparsity can further enhance memory storage and computation efficiency. However, existing approaches focus on exploiting sparsity in non-dataflow accelerators, which cannot be applied to dataflow accelerators because of the large hardware design space introduced. As such, this could miss opportunities to find an optimal combination of sparsity features and hardware designs. In this paper, we propose a novel approach to exploit unstructured weights and activations sparsity for dataflow accelerators, using software and hardware co-optimization. We propose a Hardware-Aware Sparsity Search (HASS) to systematically determine an efficient sparsity solution for dataflow accelerators. Over a set of models, we achieve an efficiency improvement ranging from 1.3$\times$ to 4.2$\times$ compared to existing sparse designs, which are either non-dataflow or non-hardware-aware. Particularly, the throughput of MobileNetV3 can be optimized to 4895 images per second. HASS is open-source: \url{https://github.com/Yu-Zhewen/HASS} | [
"['Zhewen Yu' 'Sudarshan Sreeram' 'Krish Agrawal' 'Junyi Wu'\n 'Alexander Montgomerie-Corcoran' 'Cheng Zhang' 'Jianyi Cheng'\n 'Christos-Savvas Bouganis' 'Yiren Zhao']"
] |
null | null | 2406.03095 | null | null | http://arxiv.org/pdf/2406.03095v2 | 2024-06-06T05:28:27Z | 2024-06-05T09:36:15Z | EgoSurgery-Tool: A Dataset of Surgical Tool and Hand Detection from
Egocentric Open Surgery Videos | Surgical tool detection is a fundamental task for understanding egocentric open surgery videos. However, detecting surgical tools presents significant challenges due to their highly imbalanced class distribution, similar shapes and similar textures, and heavy occlusion. The lack of a comprehensive large-scale dataset compounds these challenges. In this paper, we introduce EgoSurgery-Tool, an extension of the existing EgoSurgery-Phase dataset, which contains real open surgery videos captured using an egocentric camera attached to the surgeon's head, along with phase annotations. EgoSurgery-Tool has been densely annotated with surgical tools and comprises over 49K surgical tool bounding boxes across 15 categories, constituting a large-scale surgical tool detection dataset. EgoSurgery-Tool also provides annotations for hand detection with over 46K hand-bounding boxes, capturing hand-object interactions that are crucial for understanding activities in egocentric open surgery. EgoSurgery-Tool is superior to existing datasets due to its larger scale, greater variety of surgical tools, more annotations, and denser scenes. We conduct a comprehensive analysis of EgoSurgery-Tool using nine popular object detectors to assess their effectiveness in both surgical tool and hand detection. The dataset will be released at https://github.com/Fujiry0/EgoSurgery. | [
"['Ryo Fujii' 'Hideo Saito' 'Hiroki Kajita']"
] |
null | null | 2406.03097 | null | null | http://arxiv.org/pdf/2406.03097v1 | 2024-06-05T09:40:08Z | 2024-06-05T09:40:08Z | Enhancing the Resilience of Graph Neural Networks to Topological
Perturbations in Sparse Graphs | Graph neural networks (GNNs) have been extensively employed in node classification. Nevertheless, recent studies indicate that GNNs are vulnerable to topological perturbations, such as adversarial attacks and edge disruptions. Considerable efforts have been devoted to mitigating these challenges. For example, pioneering Bayesian methodologies, including GraphSS and LlnDT, incorporate Bayesian label transitions and topology-based label sampling to strengthen the robustness of GNNs. However, GraphSS is hindered by slow convergence, while LlnDT faces challenges in sparse graphs. To overcome these limitations, we propose a novel label inference framework, TraTopo, which combines topology-driven label propagation, Bayesian label transitions, and link analysis via random walks. TraTopo significantly surpasses its predecessors on sparse graphs by utilizing random walk sampling, specifically targeting isolated nodes for link prediction, thus enhancing its effectiveness in topological sampling contexts. Additionally, TraTopo employs a shortest-path strategy to refine link prediction, thereby reducing predictive overhead and improving label inference accuracy. Empirical evaluations highlight TraTopo's superiority in node classification, significantly exceeding contemporary GCN models in accuracy. | [
"['Shuqi He' 'Jun Zhuang' 'Ding Wang' 'Luyao Peng' 'Jun Song']"
] |
null | null | 2406.03099 | null | null | http://arxiv.org/pdf/2406.03099v2 | 2024-06-06T07:46:26Z | 2024-06-05T09:42:43Z | Graph Convolutional Branch and Bound | This article demonstrates the effectiveness of employing a deep learning model in an optimization pipeline. Specifically, in a generic exact algorithm for an NP-hard problem, multiple heuristic criteria are usually used to guide the search for the optimum within the set of all feasible solutions. In this context, neural networks can be leveraged to rapidly acquire valuable information, enabling the identification of a more expedient path through this vast space. After introducing the tackled traveling salesman problem, we describe the branch and bound implemented for its classical resolution. This algorithm is then compared with its hybrid version, termed "graph convolutional branch and bound", which integrates the previous branch and bound with a graph convolutional neural network. The empirical results highlight the efficacy of this approach, leading to conclusive findings and suggesting potential directions for future research. | [
"['Lorenzo Sciandra' 'Roberto Esposito' 'Andrea Cesare Grosso'\n 'Laura Sacerdote' 'Cristina Zucca']"
] |
null | null | 2406.03102 | null | null | http://arxiv.org/pdf/2406.03102v1 | 2024-06-05T09:45:26Z | 2024-06-05T09:45:26Z | DEER: A Delay-Resilient Framework for Reinforcement Learning with
Variable Delays | Classic reinforcement learning (RL) frequently confronts challenges in tasks involving delays, which cause a mismatch between received observations and subsequent actions, thereby deviating from the Markov assumption. Existing methods usually tackle this issue with end-to-end solutions using state augmentation. However, these black-box approaches often involve incomprehensible processes and redundant information in the information states, causing instability and potentially undermining the overall performance. To alleviate the delay challenges in RL, we propose \textbf{DEER (Delay-resilient Encoder-Enhanced RL)}, a framework designed to effectively enhance interpretability and address random delay issues. DEER employs a pretrained encoder, trained on delay-free environment datasets, to map delayed states, along with their variable-length past action sequences resulting from different delays, into hidden states. In a variety of delayed scenarios, the trained encoder can seamlessly integrate with standard RL algorithms without requiring additional modifications and enhances the delay-solving capability by simply adapting the input dimension of the original algorithms. We evaluate DEER through extensive experiments on Gym and MuJoCo environments. The results confirm that DEER is superior to state-of-the-art RL algorithms in both constant and random delay settings. | [
"['Bo Xia' 'Yilun Kong' 'Yongzhe Chang' 'Bo Yuan' 'Zhiheng Li'\n 'Xueqian Wang' 'Bin Liang']"
] |
null | null | 2406.03120 | null | null | http://arxiv.org/pdf/2406.03120v1 | 2024-06-05T10:13:55Z | 2024-06-05T10:13:55Z | RevRIR: Joint Reverberant Speech and Room Impulse Response Embedding
using Contrastive Learning with Application to Room Shape Classification | This paper focuses on room fingerprinting, a task involving the analysis of an audio recording to determine the specific volume and shape of the room in which it was captured. While it is relatively straightforward to determine the basic room parameters from the Room Impulse Responses (RIR), doing so from a speech signal is a cumbersome task. To address this challenge, we introduce a dual-encoder architecture that facilitates the estimation of room parameters directly from speech utterances. During pre-training, one encoder receives the RIR while the other processes the reverberant speech signal. A contrastive loss function is employed to embed the speech and the acoustic response jointly. In the fine-tuning stage, the specific classification task is trained. In the test phase, only the reverberant utterance is available, and its embedding is used for the task of room shape classification. The proposed scheme is extensively evaluated using simulated acoustic environments. | [
"['Jacob Bitterman' 'Daniel Levi' 'Hilel Hagai Diamandi' 'Sharon Gannot'\n 'Tal Rosenwein']"
] |
null | null | 2406.03121 | null | null | http://arxiv.org/pdf/2406.03121v1 | 2024-06-05T10:15:16Z | 2024-06-05T10:15:16Z | MESS: Modern Electronic Structure Simulations | Electronic structure simulation (ESS) has been used for decades to provide quantitative scientific insights on an atomistic scale, enabling advances in chemistry, biology, and materials science, among other disciplines. Following standard practice in scientific computing, the software packages driving these studies have been implemented in compiled languages such as FORTRAN and C. However, the recent introduction of machine learning (ML) into these domains has meant that ML models must be coded in these languages, or that complex software bridges have to be built between ML models in Python and these large compiled software systems. This is in contrast with recent progress in modern ML frameworks, which aim to optimise both ease of use and high performance by harnessing hardware acceleration of tensor programs defined in Python. We introduce MESS: a modern electronic structure simulation package implemented in JAX, porting the ESS code to the ML world. We outline the costs and benefits of following the software development practices used in ML for this important scientific workload. MESS shows significant speedups on widely available hardware accelerators and simultaneously opens a clear pathway towards combining ESS with ML. MESS is available at https://github.com/graphcore-research/mess. | [
"['Hatem Helal' 'Andrew Fitzgibbon']"
] |
null | null | 2406.03136 | null | null | http://arxiv.org/pdf/2406.03136v1 | 2024-06-05T10:44:08Z | 2024-06-05T10:44:08Z | Computational Limits of Low-Rank Adaptation (LoRA) for Transformer-Based
Models | We study the computational limits of Low-Rank Adaptation (LoRA) update for finetuning transformer-based models using fine-grained complexity theory. Our key observation is that the existence of low-rank decompositions within the gradient computation of LoRA adaptation leads to possible algorithmic speedup. This allows us to (i) identify a phase transition behavior and (ii) prove the existence of nearly linear algorithms by controlling the LoRA update computation term by term, assuming the Strong Exponential Time Hypothesis (SETH). For the former, we identify a sharp transition in the efficiency of all possible rank-$r$ LoRA update algorithms for transformers, based on specific norms resulting from the multiplications of the input sequence $\mathbf{X}$, pretrained weights $\mathbf{W^\star}$, and adapter matrices $\alpha \mathbf{B} \mathbf{A} / r$. Specifically, we derive a shared upper bound threshold for such norms and show that efficient (sub-quadratic) approximation algorithms of LoRA exist only below this threshold. For the latter, we prove the existence of nearly linear approximation algorithms for LoRA adaptation by utilizing the hierarchical low-rank structures of LoRA gradients and approximating the gradients with a series of chained low-rank approximations. To showcase our theory, we consider two practical scenarios: partial (e.g., only $\mathbf{W}_V$ and $\mathbf{W}_Q$) and full adaptations (e.g., $\mathbf{W}_Q$, $\mathbf{W}_V$, and $\mathbf{W}_K$) of weights in attention heads. | [
"['Jerry Yao-Chieh Hu' 'Maojiang Su' 'En-Jui Kuo' 'Zhao Song' 'Han Liu']"
] |
null | null | 2406.03140 | null | null | http://arxiv.org/pdf/2406.03140v1 | 2024-06-05T10:51:17Z | 2024-06-05T10:51:17Z | Continual Traffic Forecasting via Mixture of Experts | Real-world traffic networks undergo expansion through the installation of new sensors, implying that traffic patterns continually evolve over time. Incrementally training a model on the newly added sensors would make the model forget past knowledge, i.e., catastrophic forgetting, while retraining the model on the entire network to capture these changes is highly inefficient. To address these challenges, we propose a novel Traffic Forecasting Mixture of Experts (TFMoE) for traffic forecasting under evolving networks. The main idea is to segment the traffic flow into multiple homogeneous groups and assign an expert model responsible for a specific group. This allows each expert model to concentrate on learning and adapting to a specific set of patterns, while minimizing interference between the experts during training, thereby preventing the dilution or replacement of prior knowledge, which is a major cause of catastrophic forgetting. Through extensive experiments on a real-world long-term streaming network dataset, PEMSD3-Stream, we demonstrate the effectiveness and efficiency of TFMoE. Our results showcase superior performance and resilience in the face of catastrophic forgetting, underscoring the effectiveness of our approach in dealing with continual learning for traffic flow forecasting in long-term streaming networks. | [
"['Sanghyun Lee' 'Chanyoung Park']"
] |
null | null | 2406.03141 | null | null | http://arxiv.org/pdf/2406.03141v1 | 2024-06-05T10:54:18Z | 2024-06-05T10:54:18Z | Floating Anchor Diffusion Model for Multi-motif Scaffolding | Motif scaffolding seeks to design scaffold structures for constructing proteins with functions derived from the desired motif, which is crucial for the design of vaccines and enzymes. Previous works approach the problem by inpainting or conditional generation. Both of them can only scaffold motifs with fixed positions, and conditional generation cannot guarantee the presence of motifs. However, prior knowledge of the relative motif positions in a protein is not readily available, and constructing a protein with multiple functions is more general and significant because of the synergies between functions. We propose a Floating Anchor Diffusion (FADiff) model. FADiff allows motifs to float rigidly and independently during the diffusion process, which guarantees the presence of motifs and automates motif position design. Our experiments demonstrate the efficacy of FADiff with high success rates and designable novel scaffolds. To the best of our knowledge, FADiff is the first work to tackle the challenge of scaffolding multiple motifs without relying on expertise about relative motif positions in the protein. Code is available at https://github.com/aim-uofa/FADiff. | [
"['Ke Liu' 'Weian Mao' 'Shuaike Shen' 'Xiaoran Jiao' 'Zheng Sun' 'Hao Chen'\n 'Chunhua Shen']"
] |
null | null | 2406.03142 | null | null | http://arxiv.org/pdf/2406.03142v1 | 2024-06-05T10:55:11Z | 2024-06-05T10:55:11Z | On the Power of Randomization in Fair Classification and Representation | Fair classification and fair representation learning are two important problems in supervised and unsupervised fair machine learning, respectively. Fair classification asks for a classifier that maximizes accuracy on a given data distribution subject to fairness constraints. Fair representation maps a given data distribution over the original feature space to a distribution over a new representation space such that all classifiers over the representation satisfy fairness. In this paper, we examine the power of randomization in both these problems to minimize the loss of accuracy that results when we impose fairness constraints. Previous work on fair classification has characterized the optimal fair classifiers on a given data distribution that maximize accuracy subject to fairness constraints, e.g., Demographic Parity (DP), Equal Opportunity (EO), and Predictive Equality (PE). We refine these characterizations to demonstrate when the optimal randomized fair classifiers can surpass their deterministic counterparts in accuracy. We also show how the optimal randomized fair classifier that we characterize can be obtained as a solution to a convex optimization problem. Recent work has provided techniques to construct fair representations for a given data distribution such that any classifier over this representation satisfies DP. However, the classifiers on these fair representations either come with no or weak accuracy guarantees when compared to the optimal fair classifier on the original data distribution. 
Extending our ideas for randomized fair classification, we improve on these works, and construct DP-fair, EO-fair, and PE-fair representations that have provably optimal accuracy and suffer no accuracy loss compared to the optimal DP-fair, EO-fair, and PE-fair classifiers respectively on the original data distribution. | [
"['Sushant Agarwal' 'Amit Deshpande']"
] |
null | null | 2406.03144 | null | null | http://arxiv.org/pdf/2406.03144v1 | 2024-06-05T11:00:03Z | 2024-06-05T11:00:03Z | A Combination Model for Time Series Prediction using LSTM via Extracting
Dynamic Features Based on Spatial Smoothing and Sequential General
Variational Mode Decomposition | To address the difficulty of extracting effective features and the low accuracy of sales volume prediction caused by complex relationships in market time series, we propose a time series prediction method for market sales volume based on a combination model of Sequential General VMD and a spatial-smoothing Long Short-Term Memory neural network (SS-LSTM). First, a spatial smoothing algorithm is used to decompose and process sample data from related industry sectors affected by market-sector linkage effects, extracting informative modal features of the overall market and of specific price trends via Sequential General VMD. Then, for each market dataset, an LSTM network is used to model and predict prices from the fundamental data and the modal features. Experimental results on data with seasonal and periodic trends show that, compared with traditional prediction methods, this approach achieves higher price prediction accuracy in specific market contexts and more accurately describes changes in market sales volume. | [
"['Jianyu Liu' 'Wei Chen' 'Yong Zhang' 'Zhenfeng Chen' 'Bin Wan'\n 'Jinwei Hu']"
] |
null | null | 2406.03145 | null | null | http://arxiv.org/pdf/2406.03145v2 | 2024-06-06T15:12:55Z | 2024-06-05T11:00:27Z | E(n) Equivariant Message Passing Cellular Networks | This paper introduces E(n) Equivariant Message Passing Cellular Networks (EMPCNs), an extension of E(n) Equivariant Graph Neural Networks to CW-complexes. Our approach addresses two aspects of geometric message passing networks: 1) enhancing their expressiveness by incorporating arbitrary cells, and 2) achieving this in a computationally efficient way with a decoupled EMPCNs technique. We demonstrate that EMPCNs achieve close to state-of-the-art performance on multiple tasks without the need for steerability, including many-body predictions and motion capture. Moreover, ablation studies confirm that decoupled EMPCNs exhibit stronger generalization capabilities than their non-topologically informed counterparts. These findings show that EMPCNs can be used as a scalable and expressive framework for higher-order message passing in geometric and topological graphs. | [
"['Veljko Kovač' 'Erik J. Bekkers' 'Pietro Liò' 'Floor Eijkelboom']"
] |
null | null | 2406.03146 | null | null | http://arxiv.org/pdf/2406.03146v1 | 2024-06-05T11:01:42Z | 2024-06-05T11:01:42Z | Tiny models from tiny data: Textual and null-text inversion for few-shot
distillation | Few-shot image classification involves classifying images using very few training examples. Recent vision foundation models show excellent few-shot transfer abilities, but are large and slow at inference. Using knowledge distillation, the capabilities of high-performing but slow models can be transferred to tiny, efficient models. However, common distillation methods require a large set of unlabeled data, which is not available in the few-shot setting. To overcome this lack of data, there has been a recent interest in using synthetic data. We expand on this work by presenting a novel diffusion model inversion technique (TINT) combining the diversity of textual inversion with the specificity of null-text inversion. Using this method in a few-shot distillation pipeline leads to state-of-the-art accuracy among small student models on popular benchmarks, while being significantly faster than prior work. This allows us to push even tiny models to high accuracy using only a tiny application-specific dataset, albeit relying on extra data for pre-training. Popular few-shot benchmarks involve evaluation over a large number of episodes, which is computationally cumbersome for methods involving synthetic data generation. Therefore, we also present a theoretical analysis on how the variance of the accuracy estimator depends on the number of episodes and query examples, and use these results to lower the computational effort required for method evaluation. In addition, to further motivate the use of generative models in few-shot distillation, we demonstrate that our method performs better compared to training on real data mined from the dataset used to train the diffusion model. Source code will be made available at https://github.com/pixwse/tiny2. | [
"['Erik Landolsi' 'Fredrik Kahl']"
] |
null | null | 2406.03148 | null | null | http://arxiv.org/pdf/2406.03148v1 | 2024-06-05T11:06:33Z | 2024-06-05T11:06:33Z | Aligning Transformers with Weisfeiler-Leman | Graph neural network architectures aligned with the $k$-dimensional Weisfeiler--Leman ($k$-WL) hierarchy offer theoretically well-understood expressive power. However, these architectures often fail to deliver state-of-the-art predictive performance on real-world graphs, limiting their practical utility. While recent works aligning graph transformer architectures with the $k$-WL hierarchy have shown promising empirical results, employing transformers for higher orders of $k$ remains challenging due to a prohibitive runtime and memory complexity of self-attention as well as impractical architectural assumptions, such as an infeasible number of attention heads. Here, we advance the alignment of transformers with the $k$-WL hierarchy, showing stronger expressivity results for each $k$, making them more feasible in practice. In addition, we develop a theoretical framework that allows the study of established positional encodings such as Laplacian PEs and SPE. We evaluate our transformers on the large-scale PCQM4Mv2 dataset, showing competitive predictive performance with the state-of-the-art and demonstrating strong downstream performance when fine-tuning them on small-scale molecular datasets. Our code is available at https://github.com/luis-mueller/wl-transformers. | [
"['Luis Müller' 'Christopher Morris']"
] |
null | null | 2406.03150 | null | null | http://arxiv.org/pdf/2406.03150v1 | 2024-06-05T11:15:43Z | 2024-06-05T11:15:43Z | Sample-specific Masks for Visual Reprogramming-based Prompting | Visual reprogramming (VR) is a prompting technique that aims to re-purpose a pre-trained model (e.g., a classifier on ImageNet) to target tasks (e.g., medical data prediction) by learning a small-scale pattern added into input images instead of tuning considerable parameters within the model. The location of the pattern within input samples is usually determined by a pre-defined mask shared across all samples. In this paper, we show that the shared mask potentially limits VR's generalization and increases its approximation error due to the lack of sample-level adaptation. Motivated by this finding, we design a new framework for VR called sample-specific multi-channel masks (SMM). Specifically, SMM employs a lightweight ConvNet and patch-wise interpolation to generate sample-specific three-channel masks instead of a shared and pre-defined mask. Since we generate different masks for individual samples, SMM is theoretically shown to reduce approximation error for the target tasks compared with existing state-of-the-art VR methods. We also empirically demonstrate its performance gain on both ResNet and ViT. The success of SMM further highlights the broader applicability of VR in leveraging the latent knowledge of pre-trained models for various target tasks. Our code is available at https://github.com/tmlr-group/SMM. | [
"['Chengyi Cai' 'Zesheng Ye' 'Lei Feng' 'Jianzhong Qi' 'Feng Liu']"
] |
null | null | 2406.03151 | null | null | http://arxiv.org/pdf/2406.03151v2 | 2024-06-06T09:30:11Z | 2024-06-05T11:15:45Z | Which Side Are You On? A Multi-task Dataset for End-to-End Argument
Summarisation and Evaluation | With the recent advances in large language models (LLMs), it is no longer infeasible to build an automated debate system that helps people to synthesise persuasive arguments. Previous work attempted this task by integrating multiple components. In our work, we introduce an argument mining dataset that captures the end-to-end process of preparing an argumentative essay for a debate, which covers the tasks of claim and evidence identification (Task 1 ED), evidence convincingness ranking (Task 2 ECR), argumentative essay summarisation and human preference ranking (Task 3 ASR) and metric learning for automated evaluation of resulting essays, based on human feedback along argument quality dimensions (Task 4 SQE). Our dataset contains 14k examples of claims that are fully annotated with the various properties supporting the aforementioned tasks. We evaluate multiple generative baselines for each of these tasks, including representative LLMs. We find that, while they show promising results on individual tasks in our benchmark, their end-to-end performance on all four tasks in succession deteriorates significantly, both in automated measures and in human-centred evaluation. This challenge presented by our proposed dataset motivates future research on end-to-end argument mining and summarisation. The repository of this project is available at https://github.com/HarrywillDr/ArgSum-Datatset | [
"['Hao Li' 'Yuping Wu' 'Viktor Schlegel' 'Riza Batista-Navarro'\n 'Tharindu Madusanka' 'Iqra Zahid' 'Jiayan Zeng' 'Xiaochi Wang'\n 'Xinran He' 'Yizhi Li' 'Goran Nenadic']"
] |
null | null | 2406.03152 | null | null | http://arxiv.org/pdf/2406.03152v1 | 2024-06-05T11:16:55Z | 2024-06-05T11:16:55Z | Dynamic Spectral Clustering with Provable Approximation Guarantee | This paper studies clustering algorithms for dynamically evolving graphs $\{G_t\}$, in which new edges (and potentially new vertices) are added to a graph, and the underlying cluster structure of the graph can gradually change. The paper proves that, under a mild condition on the cluster structure, the clusters of the final graph $G_T$ of $n_T$ vertices at time $T$ can be well approximated by a dynamic variant of the spectral clustering algorithm. The algorithm runs in amortised update time $O(1)$ and query time $o(n_T)$. Experimental studies on both synthetic and real-world datasets further confirm the practicality of our designed algorithm. | [
"['Steinar Laenen' 'He Sun']"
] |
null | null | 2406.03154 | null | null | http://arxiv.org/pdf/2406.03154v2 | 2024-06-06T12:58:17Z | 2024-06-05T11:30:16Z | Detecting Model Misspecification in Amortized Bayesian Inference with
Neural Networks: An Extended Investigation | Recent advances in probabilistic deep learning enable efficient amortized Bayesian inference in settings where the likelihood function is only implicitly defined by a simulation program (simulation-based inference; SBI). But how faithful is such inference if the simulation represents reality somewhat inaccurately, that is, if the true system behavior at test time deviates from the one seen during training? We conceptualize the types of such model misspecification arising in SBI and systematically investigate how the performance of neural posterior approximators gradually deteriorates as a consequence, making inference results less and less trustworthy. To notify users about this problem, we propose a new misspecification measure that can be trained in an unsupervised fashion (i.e., without training data from the true distribution) and reliably detects model misspecification at test time. Our experiments clearly demonstrate the utility of our new measure both on toy examples with an analytical ground-truth and on representative scientific tasks in cell biology, cognitive decision making, disease outbreak dynamics, and computer vision. We show how the proposed misspecification test warns users about suspicious outputs, raises an alarm when predictions are not trustworthy, and guides model designers in their search for better simulators. | [
"['Marvin Schmitt' 'Paul-Christian Bürkner' 'Ullrich Köthe'\n 'Stefan T. Radev']"
] |
null | null | 2406.03157 | null | null | http://arxiv.org/pdf/2406.03157v2 | 2024-06-07T08:43:24Z | 2024-06-05T11:35:38Z | A Combination Model Based on Sequential General Variational Mode
Decomposition Method for Time Series Prediction | Accurate prediction of financial time series is a key concern for decision-makers in the market economy and for investors. This article selects online store sales and Australian beer sales as representatives of non-stationary, trending, and seasonal financial time series, and constructs a new SGVMD-ARIMA combination model, combined in a non-linear way, to predict these series. The ARIMA model, the LSTM model, and other classic decomposition-based prediction models are used as controls to compare the accuracy of the different approaches. The empirical results indicate that the proposed combination model consistently outperforms both the single prediction models and the linear combination prediction models of the control group. Within the prediction interval, the proposed model also improves on the traditional decomposition-based prediction models of the control group. | [
"['Wei Chen' 'Yuanyuan Yang' 'Jianyu Liu']"
] |
null | null | 2406.03161 | null | null | http://arxiv.org/pdf/2406.03161v1 | 2024-06-05T11:42:46Z | 2024-06-05T11:42:46Z | Ethical considerations of use of hold-out sets in clinical prediction
model management | Clinical prediction models are statistical or machine learning models used to quantify the risk of a certain health outcome using patient data. These can then inform potential interventions on patients, causing an effect called performative prediction: predictions inform interventions which influence the outcome they were trying to predict, leading to a potential underestimation of risk in some patients if a model is updated on this data. One suggested resolution to this is the use of hold-out sets, in which a set of patients do not receive model-derived risk scores, such that a model can be safely retrained. We present an overview of clinical and research ethics regarding the potential implementation of hold-out sets for clinical prediction models in health settings. We focus on the ethical principles of beneficence, non-maleficence, autonomy and justice. We also discuss informed consent, clinical equipoise, and truth-telling. We present illustrative cases of potential hold-out set implementations and discuss statistical issues arising from different hold-out set sampling methods. We also discuss differences between hold-out sets and randomised controlled trials, in terms of ethics and statistical issues. Finally, we give practical recommendations for researchers interested in the use of hold-out sets for clinical prediction models. | [
"['Louis Chislett' 'Louis JM Aslett' 'Alisha R Davies'\n 'Catalina A Vallejos' 'James Liley']"
] |
null | null | 2406.03164 | null | null | http://arxiv.org/pdf/2406.03164v1 | 2024-06-05T11:56:54Z | 2024-06-05T11:56:54Z | Topological Neural Networks go Persistent, Equivariant, and Continuous | Topological Neural Networks (TNNs) incorporate higher-order relational information beyond pairwise interactions, enabling richer representations than Graph Neural Networks (GNNs). Concurrently, topological descriptors based on persistent homology (PH) are being increasingly employed to augment the GNNs. We investigate the benefits of integrating these two paradigms. Specifically, we introduce TopNets as a broad framework that subsumes and unifies various methods in the intersection of GNNs/TNNs and PH such as (generalizations of) RePHINE and TOGL. TopNets can also be readily adapted to handle (symmetries in) geometric complexes, extending the scope of TNNs and PH to spatial settings. Theoretically, we show that PH descriptors can provably enhance the expressivity of simplicial message-passing networks. Empirically, (continuous and E(n)-equivariant extensions of) TopNets achieve strong performance across diverse tasks, including antibody design, molecular dynamics simulation, and drug property prediction. | [
"['Yogesh Verma' 'Amauri H Souza' 'Vikas Garg']"
] |
null | null | 2406.03171 | null | null | http://arxiv.org/pdf/2406.03171v1 | 2024-06-05T12:03:27Z | 2024-06-05T12:03:27Z | High-Dimensional Kernel Methods under Covariate Shift: Data-Dependent
Implicit Regularization | This paper studies kernel ridge regression in high dimensions under covariate shifts and analyzes the role of importance re-weighting. We first derive the asymptotic expansion of high dimensional kernels under covariate shifts. By a bias-variance decomposition, we theoretically demonstrate that the re-weighting strategy allows for decreasing the variance. For bias, we analyze the regularization of the arbitrary or well-chosen scale, showing that the bias can behave very differently under different regularization scales. In our analysis, the bias and variance can be characterized by the spectral decay of a data-dependent regularized kernel: the original kernel matrix associated with an additional re-weighting matrix, and thus the re-weighting strategy can be regarded as a data-dependent regularization for better understanding. Besides, our analysis provides asymptotic expansion of kernel functions/vectors under covariate shift, which has its own interest. | [
"['Yihang Chen' 'Fanghui Liu' 'Taiji Suzuki' 'Volkan Cevher']"
] |
null | null | 2406.03172 | null | null | http://arxiv.org/pdf/2406.03172v1 | 2024-06-05T12:03:45Z | 2024-06-05T12:03:45Z | Initialization-enhanced Physics-Informed Neural Network with Domain
Decomposition (IDPINN) | We propose a new physics-informed neural network framework, IDPINN, based on enhanced initialization and domain decomposition to improve prediction accuracy. We train a PINN using a small dataset to obtain an initial network structure, including the weight matrices and biases, which initializes the PINN for each subdomain. Moreover, we leverage a smoothness condition on the interface to enhance prediction performance. We numerically evaluate IDPINN on several forward problems and demonstrate its benefits in terms of accuracy. | [
"['Chenhao Si' 'Ming Yan']"
] |
null | null | 2406.03193 | null | null | http://arxiv.org/pdf/2406.03193v1 | 2024-06-05T12:23:02Z | 2024-06-05T12:23:02Z | Graph Neural Network Explanations are Fragile | Explainable Graph Neural Networks (GNNs) have emerged recently to foster trust in the use of GNNs. Existing GNN explainers are developed from various perspectives to enhance explanation performance. We take the first step in studying GNN explainers under adversarial attack: we find that an adversary slightly perturbing the graph structure can ensure that the GNN model still makes correct predictions, while the GNN explainer yields a drastically different explanation on the perturbed graph. Specifically, we first formulate the attack problem under a practical threat model (i.e., the adversary has limited knowledge about the GNN explainer and a restricted perturbation budget). We then design two methods (i.e., one loss-based and the other deduction-based) to realize the attack. We evaluate our attacks on various GNN explainers and the results show these explainers are fragile. | [
"['Jiate Li' 'Meng Pang' 'Yun Dong' 'Jinyuan Jia' 'Binghui Wang']"
] |
null | null | 2406.03198 | null | null | http://arxiv.org/pdf/2406.03198v1 | 2024-05-28T04:36:15Z | 2024-05-28T04:36:15Z | The Impossibility of Fair LLMs | The need for fair AI is increasingly clear in the era of general-purpose systems such as ChatGPT, Gemini, and other large language models (LLMs). However, the increasing complexity of human-AI interaction and its social impacts have raised questions of how fairness standards could be applied. Here, we review the technical frameworks that machine learning researchers have used to evaluate fairness, such as group fairness and fair representations, and find that their application to LLMs faces inherent limitations. We show that each framework either does not logically extend to LLMs or presents a notion of fairness that is intractable for LLMs, primarily due to the multitudes of populations affected, sensitive attributes, and use cases. To address these challenges, we develop guidelines for the more realistic goal of achieving fairness in particular use cases: the criticality of context, the responsibility of LLM developers, and the need for stakeholder participation in an iterative process of design and evaluation. Moreover, it may eventually be possible and even necessary to use the general-purpose capabilities of AI systems to address fairness challenges as a form of scalable AI-assisted alignment. | [
"['Jacy Anthis' 'Kristian Lum' 'Michael Ekstrand' 'Avi Feller'\n \"Alexander D'Amour\" 'Chenhao Tan']"
] |
null | null | 2406.03199 | null | null | http://arxiv.org/pdf/2406.03199v1 | 2024-05-24T13:33:11Z | 2024-05-24T13:33:11Z | Bayesian WeakS-to-Strong from Text Classification to Generation | Advances in large language models raise the question of how alignment techniques will adapt as models become increasingly complex and humans are only able to supervise them weakly. Weak-to-Strong mimics such a scenario, where weak model supervision attempts to harness the full capabilities of a much stronger model. This work extends Weak-to-Strong to WeakS-to-Strong by exploring an ensemble of weak models that simulates the variability in human opinions. Confidence scores are estimated using a Bayesian approach to guide the WeakS-to-Strong generalization. Furthermore, we extend the application of WeakS-to-Strong from text classification tasks to text generation tasks, where more advanced strategies are investigated for supervision. Moreover, direct preference optimization is applied to advance the student model's preference learning, beyond the basic learning framework of teacher forcing. Results demonstrate the effectiveness of the proposed approach for the reliability of a strong student model, showing potential for superalignment. | [
"['Ziyun Cui' 'Ziyang Zhang' 'Wen Wu' 'Guangzhi Sun' 'Chao Zhang']"
] |
null | null | 2406.03209 | null | null | http://arxiv.org/pdf/2406.03209v1 | 2024-06-05T12:45:23Z | 2024-06-05T12:45:23Z | Challenges and Considerations in the Evaluation of Bayesian Causal
Discovery | Representing uncertainty in causal discovery is a crucial component for experimental design, and more broadly, for safe and reliable causal decision making. Bayesian Causal Discovery (BCD) offers a principled approach to encapsulating this uncertainty. Unlike non-Bayesian causal discovery, which relies on a single estimated causal graph and model parameters for assessment, evaluating BCD presents challenges due to the nature of its inferred quantity - the posterior distribution. As a result, the research community has proposed various metrics to assess the quality of the approximate posterior. However, there is, to date, no consensus on the most suitable metric(s) for evaluation. In this work, we reexamine this question by dissecting various metrics and understanding their limitations. Through extensive empirical evaluation, we find that many existing metrics fail to exhibit a strong correlation with the quality of approximation to the true posterior, especially in scenarios with low sample sizes where BCD is most desirable. We highlight the suitability (or lack thereof) of these metrics under two distinct factors: the identifiability of the underlying causal model and the quantity of available data. Both factors affect the entropy of the true posterior, indicating that the current metrics are less fitting in settings of higher entropy. Our findings underline the importance of a more nuanced evaluation of new methods by taking into account the nature of the true posterior, as well as guide and motivate the development of new evaluation procedures for this challenge. | [
"['Amir Mohammad Karimi Mamaghan' 'Panagiotis Tigas'\n 'Karl Henrik Johansson' 'Yarin Gal' 'Yashas Annadani' 'Stefan Bauer']"
] |
null | null | 2406.03212 | null | null | http://arxiv.org/pdf/2406.03212v1 | 2024-06-05T12:51:20Z | 2024-06-05T12:51:20Z | Inferring the time-varying coupling of dynamical systems with temporal
convolutional autoencoders | Most approaches for assessing causality in complex dynamical systems fail when the interactions between variables are inherently non-linear and non-stationary. Here we introduce Temporal Autoencoders for Causal Inference (TACI), a methodology that combines a new surrogate data metric for assessing causal interactions with a novel two-headed machine learning architecture to identify and measure the direction and strength of time-varying causal interactions. Through tests on both synthetic and real-world datasets, we demonstrate TACI's ability to accurately quantify dynamic causal interactions across a variety of systems. Our findings display the method's effectiveness compared to existing approaches and also highlight our approach's potential to build a deeper understanding of the mechanisms that underlie time-varying interactions in physical and biological systems. | [
"['Josuan Calderon' 'Gordon J. Berman']"
] |
null | null | 2406.03216 | null | null | http://arxiv.org/pdf/2406.03216v1 | 2024-06-05T12:53:37Z | 2024-06-05T12:53:37Z | Choice of PEFT Technique in Continual Learning: Prompt Tuning is Not All
You Need | Recent Continual Learning (CL) methods have combined pretrained Transformers with prompt tuning, a parameter-efficient fine-tuning (PEFT) technique. We argue that the choice of prompt tuning in prior works was an undefended and unablated decision, which has been uncritically adopted by subsequent research, but warrants further research to understand its implications. In this paper, we conduct this research and find that the choice of prompt tuning as a PEFT method hurts the overall performance of the CL system. To illustrate this, we replace prompt tuning with LoRA in two state-of-the-art continual learning methods: Learning to Prompt and S-Prompts. These variants consistently achieve higher accuracy across a wide range of domain-incremental and class-incremental benchmarks, while being competitive in inference speed. Our work highlights a crucial argument: unexamined choices can hinder progress in the field, and rigorous ablations, such as the PEFT method, are required to drive meaningful adoption of CL techniques in real-world applications. | [
"['Martin Wistuba' 'Prabhu Teja Sivaprasad' 'Lukas Balles'\n 'Giovanni Zappella']"
] |
null | null | 2406.03229 | null | null | http://arxiv.org/pdf/2406.03229v4 | 2024-07-09T10:23:53Z | 2024-06-05T13:06:17Z | Global Clipper: Enhancing Safety and Reliability of Transformer-based
Object Detection Models | As transformer-based object detection models progress, their impact in critical sectors like autonomous vehicles and aviation is expected to grow. Soft errors causing bit flips during inference have significantly impacted DNN performance, altering predictions. Traditional range restriction solutions for CNNs fall short for transformers. This study introduces the Global Clipper and Global Hybrid Clipper, effective mitigation strategies specifically designed for transformer-based models. These strategies significantly enhance resilience to soft errors and reduce faulty inferences to ~0%. We also detail extensive testing across over 64 scenarios involving two transformer models (DINO-DETR and Lite-DETR) and two CNN models (YOLOv3 and SSD) using three datasets, totalling approximately 3.3 million inferences, to assess model robustness comprehensively. Moreover, the paper explores unique aspects of attention blocks in transformers and their operational differences from CNNs. | [
"['Qutub Syed Sha' 'Michael Paulitsch' 'Karthik Pattabiraman'\n 'Korbinian Hagn' 'Fabian Oboril' 'Cornelius Buerkle' 'Kay-Ulrich Scholl'\n 'Gereon Hinz' 'Alois Knoll']"
] |
null | null | 2406.03230 | null | null | http://arxiv.org/pdf/2406.03230v3 | 2024-07-09T04:39:46Z | 2024-06-05T13:06:33Z | Defending Large Language Models Against Attacks With Residual Stream
Activation Analysis | The widespread adoption of Large Language Models (LLMs), exemplified by OpenAI's ChatGPT, brings to the forefront the imperative to defend against adversarial threats on these models. These attacks, which manipulate an LLM's output by introducing malicious inputs, undermine the model's integrity and the trust users place in its outputs. In response to this challenge, our paper presents an innovative defensive strategy, given white box access to an LLM, that harnesses residual activation analysis between transformer layers of the LLM. We apply a novel methodology for analyzing distinctive activation patterns in the residual streams for attack prompt classification. We curate multiple datasets to demonstrate how this method of classification has high accuracy across multiple types of attack scenarios, including our newly-created attack dataset. Furthermore, we enhance the model's resilience by integrating safety fine-tuning techniques for LLMs in order to measure its effect on our capability to detect attacks. The results underscore the effectiveness of our approach in enhancing the detection and mitigation of adversarial inputs, advancing the security framework within which LLMs operate. | [
"['Amelia Kawasaki' 'Andrew Davis' 'Houssam Abbas']"
] |
null | null | 2406.03231 | null | null | http://arxiv.org/pdf/2406.03231v1 | 2024-06-05T13:06:52Z | 2024-06-05T13:06:52Z | CommonPower: Supercharging Machine Learning for Smart Grids | The growing complexity of power system management has led to an increased interest in the use of reinforcement learning (RL). However, no tool for comprehensive and realistic benchmarking of RL in smart grids exists. One prerequisite for such a comparison is a safeguarding mechanism, since vanilla RL controllers cannot guarantee the satisfaction of system constraints. Other central requirements include flexible modeling of benchmarking scenarios, credible baselines, and the possibility to investigate the impact of forecast uncertainties. Our Python tool CommonPower is the first modular framework addressing these needs. CommonPower offers a unified interface for single-agent and multi-agent RL training algorithms and includes a built-in model predictive control approach based on a symbolic representation of the system equations. This makes it possible to combine model predictive controllers with RL controllers in the same system. Leveraging the symbolic system model, CommonPower facilitates the study of safeguarding strategies via the flexible formulation of safety layers. Furthermore, equipped with a generic forecasting interface, CommonPower constitutes a versatile tool that significantly augments the exploration of safe RL controllers in smart grids along several dimensions. | [
"['Michael Eichelbeck' 'Hannah Markgraf' 'Matthias Althoff']"
] |
null | null | 2406.03234 | null | null | http://arxiv.org/pdf/2406.03234v1 | 2024-06-05T13:13:58Z | 2024-06-05T13:13:58Z | Fine-Grained Causal Dynamics Learning with Quantization for Improving
Robustness in Reinforcement Learning | Causal dynamics learning has recently emerged as a promising approach to enhancing robustness in reinforcement learning (RL). Typically, the goal is to build a dynamics model that makes predictions based on the causal relationships among the entities. Despite the fact that causal connections often manifest only under certain contexts, existing approaches overlook such fine-grained relationships and lack a detailed understanding of the dynamics. In this work, we propose a novel dynamics model that infers fine-grained causal structures and employs them for prediction, leading to improved robustness in RL. The key idea is to jointly learn the dynamics model with a discrete latent variable that quantizes the state-action space into subgroups. This leads to recognizing meaningful context that displays sparse dependencies, where causal structures are learned for each subgroup throughout the training. Experimental results demonstrate the robustness of our method to unseen states and locally spurious correlations in downstream tasks where fine-grained causal reasoning is crucial. We further illustrate the effectiveness of our subgroup-based approach with quantization in discovering fine-grained causal relationships compared to prior methods. | [
"['Inwoo Hwang' 'Yunhyeok Kwak' 'Suhyung Choi' 'Byoung-Tak Zhang'\n 'Sanghack Lee']"
] |
null | null | 2406.03242 | null | null | http://arxiv.org/pdf/2406.03242v1 | 2024-06-05T13:18:55Z | 2024-06-05T13:18:55Z | Variational Pseudo Marginal Methods for Jet Reconstruction in Particle
Physics | Reconstructing jets, which provide vital insights into the properties and histories of subatomic particles produced in high-energy collisions, is a main problem in data analyses in collider physics. This intricate task deals with estimating the latent structure of a jet (binary tree) and involves parameters such as particle energy, momentum, and types. While Bayesian methods offer a natural approach for handling uncertainty and leveraging prior knowledge, they face significant challenges due to the super-exponential growth of potential jet topologies as the number of observed particles increases. To address this, we introduce a Combinatorial Sequential Monte Carlo approach for inferring jet latent structures. As a second contribution, we leverage the resulting estimator to develop a variational inference algorithm for parameter learning. Building on this, we introduce a variational family using a pseudo-marginal framework for a fully Bayesian treatment of all variables, unifying the generative model with the inference process. We illustrate our method's effectiveness through experiments using data generated with a collider physics generative model, highlighting superior speed and accuracy across a range of tasks. | [
"['Hanming Yang' 'Antonio Khalil Moretti' 'Sebastian Macaluso'\n 'Philippe Chlenski' 'Christian A. Naesseth' \"Itsik Pe'er\"]"
] |
null | null | 2406.03243 | null | null | http://arxiv.org/pdf/2406.03243v1 | 2024-06-05T13:20:18Z | 2024-06-05T13:20:18Z | Llumnix: Dynamic Scheduling for Large Language Model Serving | Inference serving for large language models (LLMs) is the key to unleashing their potential in people's daily lives. However, efficient LLM serving remains challenging today because the requests are inherently heterogeneous and unpredictable in terms of resource and latency requirements, as a result of the diverse applications and the dynamic execution nature of LLMs. Existing systems are fundamentally limited in handling these characteristics and cause problems such as severe queuing delays, poor tail latencies, and SLO violations. We introduce Llumnix, an LLM serving system that reacts to such heterogeneous and unpredictable requests by runtime rescheduling across multiple model instances. Similar to context switching across CPU cores in modern operating systems, Llumnix reschedules requests to improve load balancing and isolation, mitigate resource fragmentation, and differentiate request priorities and SLOs. Llumnix implements the rescheduling with an efficient and scalable live migration mechanism for requests and their in-memory states, and exploits it in a dynamic scheduling policy that unifies the multiple rescheduling scenarios elegantly. Our evaluations show that Llumnix improves tail latencies by an order of magnitude, accelerates high-priority requests by up to 1.5x, and delivers up to 36% cost savings while achieving similar tail latencies, compared against state-of-the-art LLM serving systems. Llumnix is publicly available at https://github.com/AlibabaPAI/llumnix. | [
"['Biao Sun' 'Ziming Huang' 'Hanyu Zhao' 'Wencong Xiao' 'Xinyi Zhang'\n 'Yong Li' 'Wei Lin']"
] |
null | null | 2406.03249 | null | null | http://arxiv.org/pdf/2406.03249v1 | 2024-06-05T13:26:25Z | 2024-06-05T13:26:25Z | Near-field Beamforming for Extremely Large-scale MIMO Based on
Unsupervised Deep Learning | Extremely Large-scale Array (ELAA) is considered a frontier technology for future communication systems, pivotal in improving wireless systems' rate and spectral efficiency. However, as ELAA employs a multitude of antennas operating at higher frequencies, users are typically situated in the near-field region where the spherical wavefront propagates. This inevitably leads to a significant increase in the overhead of beam training, requiring complex two-dimensional beam searching in both the angle domain and the distance domain. To address this problem, we propose a near-field beamforming method based on unsupervised deep learning. Our convolutional neural network efficiently extracts complex channel state information features by strategically selecting padding and kernel size. We optimize the beamformers to maximize achievable rates in a multi-user network without relying on predefined custom codebooks. Upon deployment, the model requires solely the input of pre-estimated channel state information to derive the optimal beamforming vector. Simulation results show that our proposed scheme can obtain stable beamforming gain compared with the baseline scheme. Furthermore, owing to the inherent traits of deep learning methodologies, this approach substantially diminishes the beam training costs in near-field regions. | [
"['Jiali Nie' 'Yuanhao Cui' 'Zhaohui Yang' 'Weijie Yuan' 'Xiaojun Jing']"
] |
null | null | 2406.03253 | null | null | http://arxiv.org/pdf/2406.03253v2 | 2024-06-06T01:42:52Z | 2024-06-05T13:31:30Z | Generating Explanations for Cellular Neural Networks | Recent advancements in graph learning have contributed to explaining predictions generated by Graph Neural Networks. However, existing methodologies often fall short when applied to real-world datasets. We introduce HOGE, a framework to capture higher-order structures using cell complexes, which excel at modeling higher-order relationships. In the real world, higher-order structures are ubiquitous, as in molecules and social networks; thus, our work significantly enhances the practical applicability of graph explanations. HOGE produces clearer and more accurate explanations compared to prior methods. Our method can be integrated with all existing graph explainers, ensuring seamless integration into current frameworks. When evaluated on GraphXAI benchmark datasets, HOGE achieves improved or comparable performance with minimal computational overhead. Ablation studies show that the observed performance gain can be attributed to the higher-order structures that come from introducing cell complexes. | [
"['Akshit Sinha' 'Sreeram Vennam' 'Charu Sharma' 'Ponnurangam Kumaraguru']"
] |
null | null | 2406.03255 | null | null | http://arxiv.org/pdf/2406.03255v1 | 2024-06-05T13:35:48Z | 2024-06-05T13:35:48Z | On the Maximal Local Disparity of Fairness-Aware Classifiers | Fairness has become a crucial aspect in the development of trustworthy machine learning algorithms. Current fairness metrics to measure the violation of demographic parity have the following drawbacks: (i) the average difference of model predictions on two groups cannot reflect their distribution disparity, and (ii) the overall calculation along all possible predictions conceals the extreme local disparity at or around certain predictions. In this work, we propose a novel fairness metric called Maximal Cumulative ratio Disparity along varying Predictions' neighborhood (MCDP), for measuring the maximal local disparity of the fairness-aware classifiers. To accurately and efficiently calculate the MCDP, we develop a provably exact and an approximate calculation algorithm that greatly reduces the computational complexity with low estimation error. We further propose a bi-level optimization algorithm using a differentiable approximation of the MCDP for improving the algorithmic fairness. Extensive experiments on both tabular and image datasets validate that our fair training algorithm can achieve superior fairness-accuracy trade-offs. | [
"['Jinqiu Jin' 'Haoxuan Li' 'Fuli Feng']"
] |
null | null | 2406.03258 | null | null | http://arxiv.org/pdf/2406.03258v1 | 2024-06-05T13:36:38Z | 2024-06-05T13:36:38Z | Relaxed Quantile Regression: Prediction Intervals for Asymmetric Noise | Constructing valid prediction intervals rather than point estimates is a well-established approach for uncertainty quantification in the regression setting. Models equipped with this capacity output an interval of values in which the ground truth target will fall with some prespecified probability. This is an essential requirement in many real-world applications where simple point predictions' inability to convey the magnitude and frequency of errors renders them insufficient for high-stakes decisions. Quantile regression is a leading approach for obtaining such intervals via the empirical estimation of quantiles in the (non-parametric) distribution of outputs. This method is simple, computationally inexpensive, interpretable, assumption-free, and effective. However, it does require that the specific quantiles being learned are chosen a priori. This results in (a) intervals that are arbitrarily symmetric around the median which is sub-optimal for realistic skewed distributions, or (b) learning an excessive number of intervals. In this work, we propose Relaxed Quantile Regression (RQR), a direct alternative to quantile regression based interval construction that removes this arbitrary constraint whilst maintaining its strengths. We demonstrate that this added flexibility results in intervals with an improvement in desirable qualities (e.g. mean width) whilst retaining the essential coverage guarantees of quantile regression. | [
"['Thomas Pouplin' 'Alan Jeffares' 'Nabeel Seedat' 'Mihaela van der Schaar']"
] |
null | null | 2406.03260 | null | null | http://arxiv.org/pdf/2406.03260v1 | 2024-06-05T13:37:42Z | 2024-06-05T13:37:42Z | Feature learning in finite-width Bayesian deep linear networks with
multiple outputs and convolutional layers | Deep linear networks have been extensively studied, as they provide simplified models of deep learning. However, little is known in the case of finite-width architectures with multiple outputs and convolutional layers. In this manuscript, we provide rigorous results for the statistics of functions implemented by the aforementioned class of networks, thus moving closer to a complete characterization of feature learning in the Bayesian setting. Our results include: (i) an exact and elementary non-asymptotic integral representation for the joint prior distribution over the outputs, given in terms of a mixture of Gaussians; (ii) an analytical formula for the posterior distribution in the case of squared error loss function (Gaussian likelihood); (iii) a quantitative description of the feature learning infinite-width regime, using large deviation theory. From a physical perspective, deep architectures with multiple outputs or convolutional layers represent different manifestations of kernel shape renormalization, and our work provides a dictionary that translates this physics intuition and terminology into rigorous Bayesian statistics. | [
"['Federico Bassetti' 'Marco Gherardi' 'Alessandro Ingrosso'\n 'Mauro Pastore' 'Pietro Rotondo']"
] |
null | null | 2406.03263 | null | null | http://arxiv.org/pdf/2406.03263v1 | 2024-06-05T13:41:09Z | 2024-06-05T13:41:09Z | Deep Generative Models for Proton Zero Degree Calorimeter Simulations in
ALICE, CERN | Simulating detector responses is a crucial part of understanding the inner workings of particle collisions in the Large Hadron Collider at CERN. The current reliance on statistical Monte-Carlo simulations strains CERN's computational grid, underscoring the urgency for more efficient alternatives. Addressing these challenges, recent proposals advocate for generative machine learning methods. In this study, we present an innovative deep learning simulation approach tailored for the proton Zero Degree Calorimeter in the ALICE experiment. Leveraging a Generative Adversarial Network model with Selective Diversity Increase loss, we directly simulate calorimeter responses. To enhance its capabilities in modeling a broad range of calorimeter response intensities, we expand the SDI-GAN architecture with additional regularization. Moreover, to improve the spatial fidelity of the generated data, we introduce an auxiliary regressor network. Our method offers a significant speedup compared to traditional Monte-Carlo based approaches. | [
"['Patryk Będkowski' 'Jan Dubiński' 'Kamil Deja' 'Przemysław Rokita']"
] |
null | null | 2406.03264 | null | null | http://arxiv.org/pdf/2406.03264v1 | 2024-06-05T13:41:26Z | 2024-06-05T13:41:26Z | No-Regret Algorithms for Safe Bayesian Optimization with Monotonicity
Constraints | We consider the problem of sequentially maximizing an unknown function $f$ over a set of actions of the form $(s,\mathbf{x})$, where the selected actions must satisfy a safety constraint with respect to an unknown safety function $g$. We model $f$ and $g$ as lying in a reproducing kernel Hilbert space (RKHS), which facilitates the use of Gaussian process methods. While existing works for this setting have provided algorithms that are guaranteed to identify a near-optimal safe action, the problem of attaining low cumulative regret has remained largely unexplored, with a key challenge being that expanding the safe region can incur high regret. To address this challenge, we show that if $g$ is monotone with respect to just the single variable $s$ (with no such constraint on $f$), sublinear regret becomes achievable with our proposed algorithm. In addition, we show that a modified version of our algorithm is able to attain sublinear regret (for suitably defined notions of regret) for the task of finding a near-optimal $s$ corresponding to every $\mathbf{x}$, as opposed to only finding the global safe optimum. Our findings are supported with empirical evaluations on various objective and safety functions. | [
"['Arpan Losalka' 'Jonathan Scarlett']"
] |
null | null | 2406.03272 | null | null | http://arxiv.org/pdf/2406.03272v1 | 2024-06-05T13:50:59Z | 2024-06-05T13:50:59Z | Multi-Microphone Speech Emotion Recognition using the Hierarchical
Token-semantic Audio Transformer Architecture | Most emotion recognition systems fail in real-life situations (in-the-wild scenarios) where the audio is contaminated by reverberation. Our study explores new methods to alleviate the performance degradation of Speech Emotion Recognition (SER) algorithms and develop a more robust system for adverse conditions. We propose processing multi-microphone signals to address these challenges and improve emotion classification accuracy. We adopt a state-of-the-art transformer model, the Hierarchical Token-semantic Audio Transformer (HTS-AT), to handle multi-channel audio inputs. We evaluate two strategies: averaging mel-spectrograms across channels and summing patch-embedded representations. Our multi-microphone model achieves superior performance compared to single-channel baselines when tested on real-world reverberant environments. | [
"['Ohad Cohen' 'Gershon Hazan' 'Sharon Gannot']"
] |
null | null | 2406.03276 | null | null | http://arxiv.org/pdf/2406.03276v2 | 2024-07-03T21:22:00Z | 2024-06-05T13:53:20Z | Revisiting Scalable Hessian Diagonal Approximations for Applications in
Reinforcement Learning | Second-order information is valuable for many applications but challenging to compute. Several works focus on computing or approximating Hessian diagonals, but even this simplification introduces significant additional costs compared to computing a gradient. In the absence of efficient exact computation schemes for Hessian diagonals, we revisit an early approximation scheme proposed by Becker and LeCun (1989, BL89), which has a cost similar to gradients and appears to have been overlooked by the community. We introduce HesScale, an improvement over BL89, which adds negligible extra computation. On small networks, we find that this improvement is of higher quality than all alternatives, even those with theoretical guarantees, such as unbiasedness, while being much cheaper to compute. We use this insight in reinforcement learning problems where small networks are used and demonstrate HesScale in second-order optimization and scaling the step-size parameter. In our experiments, HesScale optimizes faster than existing methods and improves stability through step-size scaling. These findings are promising for scaling second-order methods in larger models in the future. | [
"['Mohamed Elsayed' 'Homayoon Farrahi' 'Felix Dangel' 'A. Rupam Mahmood']"
] |
null | null | 2406.03278 | null | null | http://arxiv.org/pdf/2406.03278v1 | 2024-06-05T13:53:47Z | 2024-06-05T13:53:47Z | Using GNN property predictors as molecule generators | Graph neural networks (GNNs) have emerged as powerful tools to accurately predict materials and molecular properties in computational discovery pipelines. In this article, we exploit the invertible nature of these neural networks to directly generate molecular structures with desired electronic properties. Starting from a random graph or an existing molecule, we perform a gradient ascent while holding the GNN weights fixed in order to optimize its input, the molecular graph, towards the target property. Valence rules are enforced strictly through a judicious graph construction. The method relies entirely on the property predictor; no additional training is required on molecular structures. We demonstrate the application of this method by generating molecules with specific DFT-verified energy gaps and octanol-water partition coefficients (logP). Our approach hits target properties with rates comparable to or better than state-of-the-art generative models while consistently generating more diverse molecules. | [
"['Félix Therrien' 'Edward H. Sargent' 'Oleksandr Voznyy']"
] |
null | null | 2406.03280 | null | null | http://arxiv.org/pdf/2406.03280v3 | 2024-06-14T07:19:51Z | 2024-06-05T13:54:28Z | FusionBench: A Comprehensive Benchmark of Deep Model Fusion | Deep model fusion is an emerging technique that unifies the predictions or parameters of several deep neural networks into a single model in a cost-effective and data-efficient manner. This enables the unified model to take advantage of the original models' strengths, potentially exceeding their performance. Although a variety of deep model fusion techniques have been introduced, their evaluations tend to be inconsistent and often inadequate to validate their effectiveness and robustness against distribution shifts. To address this issue, we introduce FusionBench, which is the first comprehensive benchmark dedicated to deep model fusion. FusionBench covers a wide range of tasks, including open-vocabulary image classification, text classification, and text-to-text generation. Each category includes up to eight tasks with corresponding task-specific models, featuring both full fine-tuning and LoRA fine-tuning, as well as models of different sizes, to ensure fair and balanced comparisons of various multi-task model fusion techniques across different tasks, model scales, and fine-tuning strategies. We implement and evaluate a broad spectrum of deep model fusion techniques. These techniques range from model ensemble methods, which combine the predictions to improve the overall performance, to model merging, which integrates different models into a single one, and model mixing methods, which upscale or recombine the components of the original models. FusionBench now contains 26 distinct tasks, 74 fine-tuned models, and 16 fusion techniques, and we are committed to consistently expanding the benchmark with more tasks, models, and fusion techniques. 
In addition, we offer a well-documented set of resources and guidelines to aid researchers in understanding and replicating the benchmark results. Homepage: https://github.com/tanganke/fusion_bench | [
"['Anke Tang' 'Li Shen' 'Yong Luo' 'Han Hu' 'Bo Du' 'Dacheng Tao']"
] |
null | null | 2406.03287 | null | null | http://arxiv.org/pdf/2406.03287v1 | 2024-06-05T13:59:03Z | 2024-06-05T13:59:03Z | SpikeLM: Towards General Spike-Driven Language Modeling via Elastic
Bi-Spiking Mechanisms | Towards energy-efficient artificial intelligence similar to the human brain, the bio-inspired spiking neural networks (SNNs) have advantages of biological plausibility, event-driven sparsity, and binary activation. Recently, large-scale language models have exhibited promising generalization capability, making it valuable to explore more general spike-driven models. However, the binary spikes in existing SNNs fail to encode adequate semantic information, posing technological challenges for generalization. This work proposes the first fully spiking mechanism for general language tasks, including both discriminative and generative ones. Different from previous spikes with {0,1} levels, we propose a more general spike formulation with bi-directional, elastic amplitude, and elastic frequency encoding, while still maintaining the addition nature of SNNs. In a single time step, the spike is enhanced by direction and amplitude information; in spike frequency, a strategy to control the spike firing rate is well designed. We plug this elastic bi-spiking mechanism into language modeling, yielding SpikeLM. This is the first time general language tasks have been handled with fully spike-driven models, which achieve much higher accuracy than previously possible. SpikeLM also greatly bridges the performance gap between SNNs and ANNs in language modeling. Our code is available at https://github.com/Xingrun-Xing/SpikeLM. | [
"['Xingrun Xing' 'Zheng Zhang' 'Ziyi Ni' 'Shitao Xiao' 'Yiming Ju'\n 'Siqi Fan' 'Yequan Wang' 'Jiajun Zhang' 'Guoqi Li']"
] |
null | null | 2406.03288 | null | null | http://arxiv.org/pdf/2406.03288v1 | 2024-06-05T13:59:05Z | 2024-06-05T13:59:05Z | Embarrassingly Parallel GFlowNets | GFlowNets are a promising alternative to MCMC sampling for discrete compositional random variables. Training GFlowNets requires repeated evaluations of the unnormalized target distribution or reward function. However, for large-scale posterior sampling, this may be prohibitive since it incurs traversing the data several times. Moreover, if the data are distributed across clients, employing standard GFlowNets leads to intensive client-server communication. To alleviate both these issues, we propose embarrassingly parallel GFlowNet (EP-GFlowNet). EP-GFlowNet is a provably correct divide-and-conquer method to sample from product distributions of the form $R(\cdot) \propto R_1(\cdot) \cdots R_N(\cdot)$ -- e.g., in parallel or federated Bayes, where each $R_n$ is a local posterior defined on a data partition. First, in parallel, we train a local GFlowNet targeting each $R_n$ and send the resulting models to the server. Then, the server learns a global GFlowNet by enforcing our newly proposed \emph{aggregating balance} condition, requiring a single communication step. Importantly, EP-GFlowNets can also be applied to multi-objective optimization and model reuse. Our experiments illustrate the EP-GFlowNets' effectiveness on many tasks, including parallel Bayesian phylogenetics, multi-objective multiset, sequence generation, and federated Bayesian structure learning. | [
"['Tiago da Silva' 'Luiz Max Carvalho' 'Amauri Souza' 'Samuel Kaski'\n 'Diego Mesquita']"
] |
null | null | 2406.03314 | null | null | http://arxiv.org/pdf/2406.03314v2 | 2024-06-10T16:09:03Z | 2024-06-05T14:26:45Z | Reproducibility study of FairAC | This work aims to reproduce the findings of the paper "Fair Attribute Completion on Graph with Missing Attributes" written by Guo, Chu, and Li arXiv:2302.12977 by investigating the claims made in the paper. Our investigation suggests that the results of the original paper are reproducible and that its claims therefore hold. However, the claim that FairAC is a generic framework for many downstream tasks is very broad and could therefore only be partially tested. Moreover, we show that FairAC generalizes to various datasets and sensitive attributes, and we present evidence that the improvement in group fairness of the FairAC framework does not come at the expense of individual fairness. Lastly, the codebase of FairAC has been refactored and is now easily applicable to various datasets and models. | [
"['Gijs de Jong' 'Macha J. Meijer' 'Derck W. E. Prinzhorn' 'Harold Ruiter']"
] |
null | null | 2406.03324 | null | null | http://arxiv.org/pdf/2406.03324v1 | 2024-06-05T14:37:42Z | 2024-06-05T14:37:42Z | UDQL: Bridging The Gap between MSE Loss and The Optimal Value Function
in Offline Reinforcement Learning | The Mean Square Error (MSE) is commonly utilized to estimate the solution of the optimal value function in the vast majority of offline reinforcement learning (RL) models and has achieved outstanding performance. However, we find that its principle can lead to an overestimation phenomenon in the value function. In this paper, we first theoretically analyze the overestimation phenomenon caused by MSE and provide a theoretical upper bound on the overestimated error. Furthermore, to address it, we propose a novel underestimated Bellman operator to counteract the overestimation phenomenon and then prove its contraction characteristics. Finally, we propose an offline RL algorithm based on the underestimated operator and a diffusion policy model. Extensive experimental results on D4RL tasks show that our method can outperform state-of-the-art offline RL algorithms, demonstrating that our theoretical analysis and underestimation approach are effective for offline RL tasks. | [
"['Yu Zhang' 'Rui Yu' 'Zhipeng Yao' 'Wenyuan Zhang' 'Jun Wang'\n 'Liming Zhang']"
] |
null | null | 2406.03334 | null | null | http://arxiv.org/pdf/2406.03334v1 | 2024-06-05T14:49:15Z | 2024-06-05T14:49:15Z | Reparameterization invariance in approximate Bayesian inference | Current approximate posteriors in Bayesian neural networks (BNNs) exhibit a crucial limitation: they fail to maintain invariance under reparameterization, i.e. BNNs assign different posterior densities to different parametrizations of identical functions. This creates a fundamental flaw in the application of Bayesian principles as it breaks the correspondence between uncertainty over the parameters with uncertainty over the parametrized function. In this paper, we investigate this issue in the context of the increasingly popular linearized Laplace approximation. Specifically, it has been observed that linearized predictives alleviate the common underfitting problems of the Laplace approximation. We develop a new geometric view of reparametrizations from which we explain the success of linearization. Moreover, we demonstrate that these reparameterization invariance properties can be extended to the original neural network predictive using a Riemannian diffusion process giving a straightforward algorithm for approximate posterior sampling, which empirically improves posterior fit. | [
"['Hrittik Roy' 'Marco Miani' 'Carl Henrik Ek' 'Philipp Hennig'\n 'Marvin Pförtner' 'Lukas Tatzel' 'Søren Hauberg']"
] |
null | null | 2406.03337 | null | null | http://arxiv.org/pdf/2406.03337v2 | 2024-06-06T06:27:57Z | 2024-06-05T14:52:43Z | Identifying latent state transition in non-linear dynamical systems | This work aims to improve generalization and interpretability of dynamical systems by recovering the underlying lower-dimensional latent states and their time evolutions. Previous work on disentangled representation learning within the realm of dynamical systems focused on the latent states, possibly with linear transition approximations. As such, they cannot identify nonlinear transition dynamics, and hence fail to reliably predict complex future behavior. Inspired by the advances in nonlinear ICA, we propose a state-space modeling framework in which we can identify not just the latent states but also the unknown transition function that maps the past states to the present. We introduce a practical algorithm based on variational auto-encoders and empirically demonstrate in realistic synthetic settings that we can (i) recover latent state dynamics with high accuracy, (ii) correspondingly achieve high future prediction accuracy, and (iii) adapt fast to new environments. | [
"['Çağlar Hızlı' 'Çağatay Yıldız' 'Matthias Bethge' 'ST John'\n 'Pekka Marttinen']"
] |
null | null | 2406.03341 | null | null | http://arxiv.org/pdf/2406.03341v1 | 2024-06-05T14:58:32Z | 2024-06-05T14:58:32Z | Tackling GenAI Copyright Issues: Originality Estimation and
Genericization | The rapid progress of generative AI technology has sparked significant copyright concerns, leading to numerous lawsuits filed against AI developers. While some studies explore methods to mitigate copyright risks by steering the outputs of generative models away from those resembling copyrighted data, little attention has been paid to the question of how much of a resemblance is undesirable; more original or unique data are afforded stronger protection, and the threshold level of resemblance for constituting infringement correspondingly lower. Here, leveraging this principle, we propose a genericization method that modifies the outputs of a generative model to make them more generic and less likely to infringe copyright. To achieve this, we introduce a metric for quantifying the level of originality of data in a manner that is consistent with the legal framework. This metric can be practically estimated by drawing samples from a generative model, which is then used for the genericization process. Experiments demonstrate that our genericization method successfully modifies the output of a text-to-image generative model so that it produces more generic, copyright-compliant images. | [
"['Hiroaki Chiba-Okabe' 'Weijie J. Su']"
] |
null | null | 2406.03345 | null | null | http://arxiv.org/pdf/2406.03345v2 | 2024-06-06T09:45:59Z | 2024-06-05T15:04:27Z | Feature Contamination: Neural Networks Learn Uncorrelated Features and
Fail to Generalize | Learning representations that generalize under distribution shifts is critical for building robust machine learning models. However, despite significant efforts in recent years, algorithmic advances in this direction have been limited. In this work, we seek to understand the fundamental difficulty of out-of-distribution generalization with deep neural networks. We first empirically show that perhaps surprisingly, even allowing a neural network to explicitly fit the representations obtained from a teacher network that can generalize out-of-distribution is insufficient for the generalization of the student network. Then, by a theoretical study of two-layer ReLU networks optimized by stochastic gradient descent (SGD) under a structured feature model, we identify a fundamental yet unexplored feature learning proclivity of neural networks, feature contamination: neural networks can learn uncorrelated features together with predictive features, resulting in generalization failure under distribution shifts. Notably, this mechanism essentially differs from the prevailing narrative in the literature that attributes the generalization failure to spurious correlations. Overall, our results offer new insights into the non-linear feature learning dynamics of neural networks and highlight the necessity of considering inductive biases in out-of-distribution generalization. | [
"['Tianren Zhang' 'Chujie Zhao' 'Guanyu Chen' 'Yizhou Jiang' 'Feng Chen']"
] |
null | null | 2406.03346 | null | null | http://arxiv.org/pdf/2406.03346v2 | 2024-06-26T15:55:02Z | 2024-06-05T15:04:28Z | Normalizing Flows for Conformal Regression | Conformal Prediction (CP) algorithms estimate the uncertainty of a prediction model by calibrating its outputs on labeled data. The same calibration scheme usually applies to any model and data without modifications. The obtained prediction intervals are valid by construction but could be inefficient, i.e. unnecessarily big, if the prediction errors are not uniformly distributed over the input space. We present a general scheme to localize the intervals by training the calibration process. The standard prediction error is replaced by an optimized distance metric that depends explicitly on the object attributes. Learning the optimal metric is equivalent to training a Normalizing Flow that acts on the joint distribution of the errors and the inputs. Unlike the Error Reweighting CP algorithm of Papadopoulos et al. (2008), the framework allows estimating the gap between nominal and empirical conditional validity. The approach is compatible with existing locally-adaptive CP strategies based on re-weighting the calibration samples and applies to any point-prediction model without retraining. | [
"['Nicolo Colombo']"
] |
null | null | 2406.03348 | null | null | http://arxiv.org/pdf/2406.03348v1 | 2024-06-05T15:05:24Z | 2024-06-05T15:05:24Z | Position: A Call to Action for a Human-Centered AutoML Paradigm | Automated machine learning (AutoML) was formed around the fundamental objectives of automatically and efficiently configuring machine learning (ML) workflows, aiding the research of new ML algorithms, and contributing to the democratization of ML by making it accessible to a broader audience. Over the past decade, commendable achievements in AutoML have primarily focused on optimizing predictive performance. This focused progress, while substantial, raises questions about how well AutoML has met its broader, original goals. In this position paper, we argue that a key to unlocking AutoML's full potential lies in addressing the currently underexplored aspect of user interaction with AutoML systems, including their diverse roles, expectations, and expertise. We envision a more human-centered approach in future AutoML research, promoting the collaborative design of ML systems that tightly integrates the complementary strengths of human expertise and AutoML methodologies. | [
"['Marius Lindauer' 'Florian Karl' 'Anne Klier' 'Julia Moosbauer'\n 'Alexander Tornede' 'Andreas Mueller' 'Frank Hutter' 'Matthias Feurer'\n 'Bernd Bischl']"
] |
null | null | 2406.03356 | null | null | http://arxiv.org/pdf/2406.03356v1 | 2024-06-05T15:12:29Z | 2024-06-05T15:12:29Z | Cooperative learning of Pl@ntNet's Artificial Intelligence algorithm:
how does it work and how can we improve it? | Deep learning models for plant species identification rely on large annotated datasets. The PlantNet system enables global data collection by allowing users to upload and annotate plant observations, leading to noisy labels due to diverse user skills. Achieving consensus is crucial for training, but the vast scale of collected data makes traditional label aggregation strategies challenging. Existing methods either retain all observations, resulting in noisy training data or selectively keep those with sufficient votes, discarding valuable information. Additionally, as many species are rarely observed, user expertise can not be evaluated as an inter-user agreement: otherwise, botanical experts would have a lower weight in the AI training step than the average user. Our proposed label aggregation strategy aims to cooperatively train plant identification AI models. This strategy estimates user expertise as a trust score per user based on their ability to identify plant species from crowdsourced data. The trust score is recursively estimated from correctly identified species given the current estimated labels. This interpretable score exploits botanical experts' knowledge and the heterogeneity of users. Subsequently, our strategy removes unreliable observations but retains those with limited trusted annotations, unlike other approaches. We evaluate PlantNet's strategy on a released large subset of the PlantNet database focused on European flora, comprising over 6M observations and 800K users. We demonstrate that estimating users' skills based on the diversity of their expertise enhances labeling performance. Our findings emphasize the synergy of human annotation and data filtering in improving AI performance for a refined dataset. We explore incorporating AI-based votes alongside human input. This can further enhance human-AI interactions to detect unreliable observations. | [
"['Tanguy Lefort' 'Antoine Affouard' 'Benjamin Charlier'\n 'Jean-Christophe Lombardo' 'Mathias Chouet' 'Hervé Goëau' 'Joseph Salmon'\n 'Pierre Bonnet' 'Alexis Joly']"
] |
null | null | 2406.03361 | null | null | http://arxiv.org/pdf/2406.03361v1 | 2024-06-05T15:14:58Z | 2024-06-05T15:14:58Z | What Matters in Hierarchical Search for Combinatorial Reasoning
Problems? | Efficiently tackling combinatorial reasoning problems, particularly the notorious NP-hard tasks, remains a significant challenge for AI research. Recent efforts have sought to enhance planning by incorporating hierarchical high-level search strategies, known as subgoal methods. While promising, their performance against traditional low-level planners is inconsistent, raising questions about their application contexts. In this study, we conduct an in-depth exploration of subgoal-planning methods for combinatorial reasoning. We identify the attributes pivotal for leveraging the advantages of high-level search: hard-to-learn value functions, complex action spaces, presence of dead ends in the environment, or using data collected from diverse experts. We propose a consistent evaluation methodology to achieve meaningful comparisons between methods and reevaluate the state-of-the-art algorithms. | [
"['Michał Zawalski' 'Gracjan Góral' 'Michał Tyrolski' 'Emilia Wiśnios'\n 'Franciszek Budrowski' 'Łukasz Kuciński' 'Piotr Miłoś']"
] |
null | null | 2406.03369 | null | null | http://arxiv.org/pdf/2406.03369v1 | 2024-06-05T15:24:20Z | 2024-06-05T15:24:20Z | Posterior and variational inference for deep neural networks with
heavy-tailed weights | We consider deep neural networks in a Bayesian framework with a prior distribution sampling the network weights at random. Following a recent idea of Agapiou and Castillo (2023), who show that heavy-tailed prior distributions achieve automatic adaptation to smoothness, we introduce a simple Bayesian deep learning prior based on heavy-tailed weights and ReLU activation. We show that the corresponding posterior distribution achieves near-optimal minimax contraction rates, simultaneously adaptive to both intrinsic dimension and smoothness of the underlying function, in a variety of contexts including nonparametric regression, geometric data and Besov spaces. While most works so far need a form of model selection built-in within the prior distribution, a key aspect of our approach is that it does not require sampling hyperparameters to learn the architecture of the network. We also provide variational Bayes counterparts of the results, that show that mean-field variational approximations still benefit from near-optimal theoretical support. | [
"['Ismaël Castillo' 'Paul Egels']"
] |
null | null | 2406.03372 | null | null | http://arxiv.org/pdf/2406.03372v1 | 2024-06-05T15:28:04Z | 2024-06-05T15:28:04Z | Training of Physical Neural Networks | Physical neural networks (PNNs) are a class of neural-like networks that leverage the properties of physical systems to perform computation. While PNNs are so far a niche research area with small-scale laboratory demonstrations, they are arguably one of the most underappreciated important opportunities in modern AI. Could we train AI models 1000x larger than current ones? Could we do this and also have them perform inference locally and privately on edge devices, such as smartphones or sensors? Research over the past few years has shown that the answer to all these questions is likely "yes, with enough research": PNNs could one day radically change what is possible and practical for AI systems. To do this will however require rethinking both how AI models work, and how they are trained - primarily by considering the problems through the constraints of the underlying hardware physics. To train PNNs at large scale, many methods including backpropagation-based and backpropagation-free approaches are now being explored. These methods have various trade-offs, and so far no method has been shown to scale to the same scale and performance as the backpropagation algorithm widely used in deep learning today. However, this is rapidly changing, and a diverse ecosystem of training techniques provides clues for how PNNs may one day be utilized to create both more efficient realizations of current-scale AI models, and to enable unprecedented-scale models. | [
"['Ali Momeni' 'Babak Rahmani' 'Benjamin Scellier' 'Logan G. Wright'\n 'Peter L. McMahon' 'Clara C. Wanjura' 'Yuhang Li' 'Anas Skalli'\n 'Natalia G. Berloff' 'Tatsuhiro Onodera' 'Ilker Oguz'\n 'Francesco Morichetti' 'Philipp del Hougne' 'Manuel Le Gallo'\n 'Abu Sebastian' 'Azalia Mirhoseini' 'Cheng Zhang' 'Danijela Marković'\n 'Daniel Brunner' 'Christophe Moser' 'Sylvain Gigan' 'Florian Marquardt'\n 'Aydogan Ozcan' 'Julie Grollier' 'Andrea J. Liu' 'Demetri Psaltis'\n 'Andrea Alù' 'Romain Fleury']"
] |
null | null | 2406.03386 | null | null | http://arxiv.org/pdf/2406.03386v1 | 2024-06-05T15:36:57Z | 2024-06-05T15:36:57Z | Learning Long Range Dependencies on Graphs via Random Walks | Message-passing graph neural networks (GNNs), while excelling at capturing local relationships, often struggle with long-range dependencies on graphs. Conversely, graph transformers (GTs) enable information exchange between all nodes but oversimplify the graph structure by treating them as a set of fixed-length vectors. This work proposes a novel architecture, NeuralWalker, that overcomes the limitations of both methods by combining random walks with message passing. NeuralWalker achieves this by treating random walks as sequences, allowing for the application of recent advances in sequence models in order to capture long-range dependencies within these walks. Based on this concept, we propose a framework that offers (1) more expressive graph representations through random walk sequences, (2) the ability to utilize any sequence model for capturing long-range dependencies, and (3) the flexibility by integrating various GNN and GT architectures. Our experimental evaluations demonstrate that NeuralWalker achieves significant performance improvements on 19 graph and node benchmark datasets, notably outperforming existing methods by up to 13% on the PascalVoc-SP and COCO-SP datasets. Code is available at https://github.com/BorgwardtLab/NeuralWalker. | [
"['Dexiong Chen' 'Till Hendrik Schulz' 'Karsten Borgwardt']"
] |
null | null | 2406.03390 | null | null | http://arxiv.org/pdf/2406.03390v2 | 2024-06-12T15:52:42Z | 2024-06-05T15:41:02Z | What Drives Online Popularity: Author, Content or Sharers? Estimating
Spread Dynamics with Bayesian Mixture Hawkes | The spread of content on social media is shaped by intertwining factors on three levels: the source, the content itself, and the pathways of content spread. At the lowest level, the popularity of the sharing user determines its eventual reach. However, higher-level factors such as the nature of the online item and the credibility of its source also play crucial roles in determining how widely and rapidly the online item spreads. In this work, we propose the Bayesian Mixture Hawkes (BMH) model to jointly learn the influence of source, content and spread. We formulate the BMH model as a hierarchical mixture model of separable Hawkes processes, accommodating different classes of Hawkes dynamics and the influence of feature sets on these classes. We test the BMH model on two learning tasks, cold-start popularity prediction and temporal profile generalization performance, applying to two real-world retweet cascade datasets referencing articles from controversial and traditional media publishers. The BMH model outperforms the state-of-the-art models and predictive baselines on both datasets and utilizes cascade- and item-level information better than the alternatives. Lastly, we perform a counter-factual analysis where we apply the trained publisher-level BMH models to a set of article headlines and show that effectiveness of headline writing style (neutral, clickbait, inflammatory) varies across publishers. The BMH model unveils differences in style effectiveness between controversial and reputable publishers, where we find clickbait to be notably more effective for reputable publishers as opposed to controversial ones, which links to the latter's overuse of clickbait. | [
"['Pio Calderon' 'Marian-Andrei Rizoiu']"
] |
null | null | 2406.03396 | null | null | http://arxiv.org/pdf/2406.03396v1 | 2024-06-05T15:53:25Z | 2024-06-05T15:53:25Z | Noisy Data Visualization using Functional Data Analysis | Data visualization via dimensionality reduction is an important tool in exploratory data analysis. However, when the data are noisy, many existing methods fail to capture the underlying structure of the data. The method called Empirical Intrinsic Geometry (EIG) was previously proposed for performing dimensionality reduction on high dimensional dynamical processes while theoretically eliminating all noise. However, implementing EIG in practice requires the construction of high-dimensional histograms, which suffer from the curse of dimensionality. Here we propose a new data visualization method called Functional Information Geometry (FIG) for dynamical processes that adapts the EIG framework while using approaches from functional data analysis to mitigate the curse of dimensionality. We experimentally demonstrate that the resulting method outperforms a variant of EIG designed for visualization in terms of capturing the true structure, hyperparameter robustness, and computational speed. We then use our method to visualize EEG brain measurements of sleep activity. | [
"['Haozhe Chen' 'Andres Felipe Duque Correa' 'Guy Wolf' 'Kevin R. Moon']"
] |
null | null | 2406.03398 | null | null | http://arxiv.org/pdf/2406.03398v2 | 2024-06-12T02:37:08Z | 2024-06-05T15:55:08Z | Methods for Class-Imbalanced Learning with Support Vector Machines: A
Review and an Empirical Evaluation | This paper presents a review on methods for class-imbalanced learning with the Support Vector Machine (SVM) and its variants. We first explain the structure of SVM and its variants and discuss their inefficiency in learning with class-imbalanced data sets. We introduce a hierarchical categorization of SVM-based models with respect to class-imbalanced learning. Specifically, we categorize SVM-based models into re-sampling, algorithmic, and fusion methods, and discuss the principles of the representative models in each category. In addition, we conduct a series of empirical evaluations to compare the performances of various representative SVM-based models in each category using benchmark imbalanced data sets, ranging from low to high imbalance ratios. Our findings reveal that while algorithmic methods are less time-consuming owing to no data pre-processing requirements, fusion methods, which combine both re-sampling and algorithmic approaches, generally perform the best, but with a higher computational load. A discussion on research gaps and future research directions is provided. | [
"['Salim Rezvani' 'Farhad Pourpanah' 'Chee Peng Lim' 'Q. M. Jonathan Wu']"
] |
null | null | 2406.03402 | null | null | http://arxiv.org/pdf/2406.03402v1 | 2024-06-04T09:07:45Z | 2024-06-04T09:07:45Z | Mixed-Precision Over-The-Air Federated Learning via Approximated
Computing | Over-the-Air Federated Learning (OTA-FL) has been extensively investigated as a privacy-preserving distributed learning mechanism. Realistic systems will see FL clients with diverse size, weight, and power configurations. A critical research gap in existing OTA-FL research is the assumption of homogeneous client computational bit precision. Indeed, many clients may exploit approximate computing (AxC) where bit precisions are adjusted for energy and computational efficiency. The dynamic distribution of bit precision updates amongst FL clients poses an open challenge for OTA-FL, as it is incompatible with the wireless modulation superposition space. Here, we propose an AxC-based OTA-FL framework of clients with multiple precisions, demonstrating the following innovations: (i) optimize the quantization-performance trade-off for both server and clients within the constraints of varying edge computing capabilities and learning accuracy requirements, and (ii) develop heterogeneous gradient resolution OTA-FL modulation schemes to ensure compatibility with physical layer OTA aggregation. Our findings indicate that we can design modulation schemes that enable AxC based OTA-FL, which can achieve 50% faster and smoother server convergence and a performance enhancement for the lowest precision clients compared to a homogeneous precision approach. This demonstrates the great potential of our AxC-based OTA-FL approach in heterogeneous edge computing environments. | [
"['Jinsheng Yuan' 'Zhuangkun Wei' 'Weisi Guo']"
] |
null | null | 2406.03403 | null | null | http://arxiv.org/pdf/2406.03403v1 | 2024-06-04T15:37:14Z | 2024-06-04T15:37:14Z | Structure-based Drug Design Benchmark: Do 3D Methods Really Dominate? | Currently, the field of structure-based drug design is dominated by three main types of algorithms: search-based algorithms, deep generative models, and reinforcement learning. While existing works have typically focused on comparing models within a single algorithmic category, cross-algorithm comparisons remain scarce. In this paper, to fill the gap, we establish a benchmark to evaluate the performance of sixteen models across these different algorithmic foundations by assessing the pharmaceutical properties of the generated molecules and their docking affinities with specified target proteins. We highlight the unique advantages of each algorithmic approach and offer recommendations for the design of future SBDD models. We emphasize that 1D/2D ligand-centric drug design methods can be used in SBDD by treating the docking function as a black-box oracle, which is typically neglected. The empirical results show that 1D/2D methods achieve competitive performance compared with 3D-based methods that use the 3D structure of the target protein explicitly. Also, AutoGrow4, a 2D molecular graph-based genetic algorithm, dominates SBDD in terms of optimization ability. The relevant code is available in https://github.com/zkysfls/2024-sbdd-benchmark. | [
"['Kangyu Zheng' 'Yingzhou Lu' 'Zaixi Zhang' 'Zhongwei Wan' 'Yao Ma'\n 'Marinka Zitnik' 'Tianfan Fu']"
] |
null | null | 2406.03404 | null | null | http://arxiv.org/pdf/2406.03404v1 | 2024-06-04T04:43:54Z | 2024-06-04T04:43:54Z | ST-DPGAN: A Privacy-preserving Framework for Spatiotemporal Data
Generation | Spatiotemporal data is prevalent in a wide range of edge devices, such as those used in personal communication and financial transactions. Recent advancements have sparked a growing interest in integrating spatiotemporal analysis with large-scale language models. However, spatiotemporal data often contains sensitive information, making it unsuitable for open third-party access. To address this challenge, we propose a Graph-GAN-based model for generating privacy-protected spatiotemporal data. Our approach incorporates spatial and temporal attention blocks in the discriminator and a spatiotemporal deconvolution structure in the generator. These enhancements enable efficient training under Gaussian noise to achieve differential privacy. Extensive experiments conducted on three real-world spatiotemporal datasets validate the efficacy of our model. Our method provides a privacy guarantee while maintaining the data utility. The prediction model trained on our generated data maintains a competitive performance compared to the model trained on the original data. | [
"['Wei Shao' 'Rongyi Zhu' 'Cai Yang' 'Chandra Thapa' 'Muhammad Ejaz Ahmed'\n 'Seyit Camtepe' 'Rui Zhang' 'DuYong Kim' 'Hamid Menouar' 'Flora D. Salim']"
] |
null | null | 2406.03405 | null | null | http://arxiv.org/pdf/2406.03405v1 | 2024-06-02T15:54:25Z | 2024-06-02T15:54:25Z | Amalgam: A Framework for Obfuscated Neural Network Training on the Cloud | Training a proprietary Neural Network (NN) model with a proprietary dataset on the cloud comes at the risk of exposing the model architecture and the dataset to the cloud service provider. To tackle this problem, in this paper, we present an NN obfuscation framework, called Amalgam, to train NN models in a privacy-preserving manner in existing cloud-based environments. Amalgam achieves that by augmenting NN models and the datasets to be used for training with well-calibrated noise to "hide" both the original model architectures and training datasets from the cloud. After training, Amalgam extracts the original models from the augmented models and returns them to users. Our evaluation results with different computer vision and natural language processing models and datasets demonstrate that Amalgam: (i) introduces modest overheads into the training process without impacting its correctness, and (ii) does not affect the model's accuracy. | [
"['Sifat Ut Taki' 'Spyridon Mastorakis']"
] |
null | null | 2406.03406 | null | null | http://arxiv.org/pdf/2406.03406v1 | 2024-06-02T06:11:27Z | 2024-06-02T06:11:27Z | LncRNA-disease association prediction method based on heterogeneous
information completion and convolutional neural network | The emerging research shows that lncRNA has crucial research value in a series of complex human diseases. Therefore, the accurate identification of lncRNA-disease associations (LDAs) is very important for the warning and treatment of diseases. However, most of the existing methods have limitations in identifying nonlinear LDAs, and it remains a huge challenge to predict new LDAs. In this paper, a deep learning model based on a heterogeneous network and convolutional neural network (CNN) is proposed for lncRNA-disease association prediction, named HCNNLDA. The heterogeneous network containing the lncRNA, disease, and miRNA nodes, is constructed firstly. The embedding matrix of a lncRNA-disease node pair is constructed according to various biological premises about lncRNAs, diseases, and miRNAs. Then, the low-dimensional feature representation is fully learned by the convolutional neural network. In the end, the XGBoost classifier model is trained to predict the potential LDAs. HCNNLDA obtains a high AUC value of 0.9752 and AUPR of 0.9740 under the 5-fold cross-validation. The experimental results show that the proposed model has better performance than that of several latest prediction models. Meanwhile, the effectiveness of HCNNLDA in identifying novel LDAs is further demonstrated by case studies of three diseases. To sum up, HCNNLDA is a feasible computational model to predict LDAs. | [
"['Wen-Yu Xi' 'Juan Wang' 'Yu-Lin Zhang' 'Jin-Xing Liu' 'Yin-Lian Gao']"
] |
null | null | 2406.03407 | null | null | http://arxiv.org/pdf/2406.03407v1 | 2024-06-02T03:41:52Z | 2024-06-02T03:41:52Z | Physics and geometry informed neural operator network with application
to acoustic scattering | In this paper, we introduce a physics and geometry informed neural operator network with application to the forward simulation of acoustic scattering. The development of geometry informed deep learning models capable of learning a solution operator for different computational domains is a problem of general importance for a variety of engineering applications. To this end, we propose a physics-informed deep operator network (DeepONet) capable of predicting the scattered pressure field for arbitrarily shaped scatterers using a geometric parameterization approach based on non-uniform rational B-splines (NURBS). This approach also results in parsimonious representations of non-trivial scatterer geometries. In contrast to existing physics-based approaches that require model re-evaluation when changing the computational domains, our trained model is capable of learning a solution operator that can approximate the physically-consistent scattered pressure field in just a few seconds for arbitrary rigid scatterer shapes; it follows that the computational time for forward simulations can improve (i.e. be reduced) by orders of magnitude in comparison to the traditional forward solvers. In addition, this approach can evaluate the scattered pressure field without the need for labeled training data. After presenting the theoretical approach, a comprehensive numerical study is also provided to illustrate the remarkable ability of this approach to simulate the acoustic pressure fields resulting from arbitrary combinations of arbitrary scatterer geometries. These results highlight the unique generalization capability of the proposed operator learning approach. | [
"['Siddharth Nair' 'Timothy F. Walsh' 'Greg Pickrell' 'Fabio Semperlotti']"
] |
null | null | 2406.03409 | null | null | http://arxiv.org/pdf/2406.03409v1 | 2024-06-01T11:25:03Z | 2024-06-01T11:25:03Z | Robust Knowledge Distillation Based on Feature Variance Against
Backdoored Teacher Model | Benefiting from well-trained deep neural networks (DNNs), model compression has captured special attention for computing resource limited equipment, especially edge devices. Knowledge distillation (KD) is one of the widely used compression techniques for edge deployment, by obtaining a lightweight student model from a well-trained teacher model released on public platforms. However, it has been empirically noticed that the backdoor in the teacher model will be transferred to the student model during the process of KD. Although numerous KD methods have been proposed, most of them focus on the distillation of a high-performing student model without robustness consideration. Besides, some research adopts KD techniques as effective backdoor mitigation tools, but they fail to perform model compression at the same time. Consequently, it is still an open problem to well achieve two objectives of robust KD, i.e., student model's performance and backdoor mitigation. To address these issues, we propose RobustKD, a robust knowledge distillation that compresses the model while mitigating backdoor based on feature variance. Specifically, RobustKD distinguishes the previous works in three key aspects: (1) effectiveness: by distilling the feature map of the teacher model after detoxification, the main task performance of the student model is comparable to that of the teacher model; (2) robustness: by reducing the characteristic variance between the teacher model and the student model, it mitigates the backdoor of the student model under backdoored teacher model scenario; (3) generic: RobustKD still has good performance in the face of multiple data models (e.g., WRN 28-4, Pyramid-200) and diverse DNNs (e.g., ResNet50, MobileNet). | [
"['Jinyin Chen' 'Xiaoming Zhao' 'Haibin Zheng' 'Xiao Li' 'Sheng Xiang'\n 'Haifeng Guo']"
] |
null | null | 2406.03428 | null | null | http://arxiv.org/pdf/2406.03428v1 | 2024-06-05T16:25:57Z | 2024-06-05T16:25:57Z | HelloFresh: LLM Evaluations on Streams of Real-World Human Editorial
Actions across X Community Notes and Wikipedia edits | Benchmarks have been essential for driving progress in machine learning. A better understanding of LLM capabilities on real world tasks is vital for safe development. Designing adequate LLM benchmarks is challenging: Data from real-world tasks is hard to collect, public availability of static evaluation data results in test data contamination and benchmark overfitting, and periodically generating new evaluation data is tedious and may result in temporally inconsistent results. We introduce HelloFresh, based on continuous streams of real-world data generated by intrinsically motivated human labelers. It covers recent events from X (formerly Twitter) community notes and edits of Wikipedia pages, mitigating the risk of test data contamination and benchmark overfitting. Any X user can propose an X note to add additional context to a misleading post (formerly tweet); if the community classifies it as helpful, it is shown with the post. Similarly, Wikipedia relies on community-based consensus, allowing users to edit articles or revert edits made by other users. Verifying whether an X note is helpful or whether a Wikipedia edit should be accepted are hard tasks that require grounding by querying the web. We backtest state-of-the-art LLMs supplemented with simple web search access and find that HelloFresh yields a temporally consistent ranking. To enable continuous evaluation on HelloFresh, we host a public leaderboard and periodically updated evaluation data at https://tinyurl.com/hello-fresh-LLM. | [
"['Tim Franzmeyer' 'Aleksandar Shtedritski' 'Samuel Albanie' 'Philip Torr'\n 'João F. Henriques' 'Jakob N. Foerster']"
] |
null | null | 2406.03434 | null | null | http://arxiv.org/pdf/2406.03434v1 | 2024-06-05T16:32:14Z | 2024-06-05T16:32:14Z | Unified PAC-Bayesian Study of Pessimism for Offline Policy Learning with
Regularized Importance Sampling | Off-policy learning (OPL) often involves minimizing a risk estimator based on importance weighting to correct bias from the logging policy used to collect data. However, this method can produce an estimator with a high variance. A common solution is to regularize the importance weights and learn the policy by minimizing an estimator with penalties derived from generalization bounds specific to the estimator. This approach, known as pessimism, has gained recent attention but lacks a unified framework for analysis. To address this gap, we introduce a comprehensive PAC-Bayesian framework to examine pessimism with regularized importance weighting. We derive a tractable PAC-Bayesian generalization bound that universally applies to common importance weight regularizations, enabling their comparison within a single framework. Our empirical results challenge common understanding, demonstrating the effectiveness of standard IW regularization techniques. | [
"['Imad Aouali' 'Victor-Emmanuel Brunel' 'David Rohde' 'Anna Korba']"
] |
null | null | 2406.03437 | null | null | http://arxiv.org/pdf/2406.03437v2 | 2024-06-06T16:13:41Z | 2024-06-05T16:33:30Z | Transfer Learning for Latent Variable Network Models | We study transfer learning for estimation in latent variable network models. In our setting, the conditional edge probability matrices given the latent variables are represented by $P$ for the source and $Q$ for the target. We wish to estimate $Q$ given two kinds of data: (1) edge data from a subgraph induced by an $o(1)$ fraction of the nodes of $Q$, and (2) edge data from all of $P$. If the source $P$ has no relation to the target $Q$, the estimation error must be $\Omega(1)$. However, we show that if the latent variables are shared, then vanishing error is possible. We give an efficient algorithm that utilizes the ordering of a suitably defined graph distance. Our algorithm achieves $o(1)$ error and does not assume a parametric form on the source or target networks. Next, for the specific case of Stochastic Block Models we prove a minimax lower bound and show that a simple algorithm achieves this rate. Finally, we empirically demonstrate our algorithm's use on real-world and simulated graph transfer problems. | [
"['Akhil Jalan' 'Arya Mazumdar' 'Soumendu Sundar Mukherjee'\n 'Purnamrita Sarkar']"
] |
null | null | 2406.03441 | null | null | http://arxiv.org/pdf/2406.03441v1 | 2024-06-05T16:35:30Z | 2024-06-05T16:35:30Z | Cycles of Thought: Measuring LLM Confidence through Stable Explanations | In many high-risk machine learning applications it is essential for a model to indicate when it is uncertain about a prediction. While large language models (LLMs) can reach and even surpass human-level accuracy on a variety of benchmarks, their overconfidence in incorrect responses is still a well-documented failure mode. Traditional methods for ML uncertainty quantification can be difficult to directly adapt to LLMs due to the computational cost of implementation and closed-source nature of many models. A variety of black-box methods have recently been proposed, but these often rely on heuristics such as self-verbalized confidence. We instead propose a framework for measuring an LLM's uncertainty with respect to the distribution of generated explanations for an answer. While utilizing explanations is not a new idea in and of itself, by interpreting each possible model+explanation pair as a test-time classifier we can calculate a posterior answer distribution over the most likely of these classifiers. We demonstrate how a specific instance of this framework using explanation entailment as our classifier likelihood improves confidence score metrics (in particular AURC and AUROC) over baselines across five different datasets. We believe these results indicate that our framework is both a well-principled and effective way of quantifying uncertainty in LLMs. | [
"['Evan Becker' 'Stefano Soatto']"
] |
null | null | 2406.03445 | null | null | http://arxiv.org/pdf/2406.03445v1 | 2024-06-05T16:40:53Z | 2024-06-05T16:40:53Z | Pre-trained Large Language Models Use Fourier Features to Compute
Addition | Pre-trained large language models (LLMs) exhibit impressive mathematical reasoning capabilities, yet how they compute basic arithmetic, such as addition, remains unclear. This paper shows that pre-trained LLMs add numbers using Fourier features -- dimensions in the hidden state that represent numbers via a set of features sparse in the frequency domain. Within the model, MLP and attention layers use Fourier features in complementary ways: MLP layers primarily approximate the magnitude of the answer using low-frequency features, while attention layers primarily perform modular addition (e.g., computing whether the answer is even or odd) using high-frequency features. Pre-training is crucial for this mechanism: models trained from scratch to add numbers only exploit low-frequency features, leading to lower accuracy. Introducing pre-trained token embeddings to a randomly initialized model rescues its performance. Overall, our analysis demonstrates that appropriate pre-trained representations (e.g., Fourier features) can unlock the ability of Transformers to learn precise mechanisms for algorithmic tasks. | [
"['Tianyi Zhou' 'Deqing Fu' 'Vatsal Sharan' 'Robin Jia']"
] |
null | null | 2406.03447 | null | null | http://arxiv.org/pdf/2406.03447v1 | 2024-06-05T16:44:06Z | 2024-06-05T16:44:06Z | FILS: Self-Supervised Video Feature Prediction In Semantic Language
Space | This paper demonstrates a self-supervised approach for learning semantic video representations. Recent vision studies show that a masking strategy for vision and natural language supervision has contributed to developing transferable visual pretraining. Our goal is to achieve a more semantic video representation by leveraging the text related to the video content during the pretraining in a fully self-supervised manner. To this end, we present FILS, a novel self-supervised video Feature prediction In semantic Language Space (FILS). The vision model can capture valuable structured information by correctly predicting masked feature semantics in language space. It is learned using a patch-wise video-text contrastive strategy, in which the text representations act as prototypes for transforming vision features into a language space, which are then used as targets for semantically meaningful feature prediction using our masked encoder-decoder structure. FILS demonstrates remarkable transferability on downstream action recognition tasks, achieving state-of-the-art on challenging egocentric datasets, like Epic-Kitchens, Something-SomethingV2, Charades-Ego, and EGTEA, using ViT-Base. Our efficient method requires less computation and smaller batches compared to previous works. | [
"['Mona Ahmadian' 'Frank Guerin' 'Andrew Gilbert']"
] |
null | null | 2406.03458 | null | null | http://arxiv.org/pdf/2406.03458v1 | 2024-06-05T17:03:47Z | 2024-06-05T17:03:47Z | Distributional Adversarial Loss | A major challenge in defending against adversarial attacks is the enormous space of possible attacks that even a simple adversary might perform. To address this, prior work has proposed a variety of defenses that effectively reduce the size of this space. These include randomized smoothing methods that add noise to the input to take away some of the adversary's impact. Another approach is input discretization which limits the adversary's possible number of actions. Motivated by these two approaches, we introduce a new notion of adversarial loss which we call distributional adversarial loss, to unify these two forms of effectively weakening an adversary. In this notion, we assume for each original example, the allowed adversarial perturbation set is a family of distributions (e.g., induced by a smoothing procedure), and the adversarial loss over each example is the maximum loss over all the associated distributions. The goal is to minimize the overall adversarial loss. We show generalization guarantees for our notion of adversarial loss in terms of the VC-dimension of the hypothesis class and the size of the set of allowed adversarial distributions associated with each input. We also investigate the role of randomness in achieving robustness against adversarial attacks in the methods described above. We show a general derandomization technique that preserves the extent of a randomized classifier's robustness against adversarial attacks. We corroborate the procedure experimentally via derandomizing the Random Projection Filters framework of \cite{dong2023adversarial}. Our procedure also improves the robustness of the model against various adversarial attacks. | [
"['Saba Ahmadi' 'Siddharth Bhandari' 'Avrim Blum' 'Chen Dan' 'Prabhav Jain']"
] |
null | null | 2406.03460 | null | null | http://arxiv.org/pdf/2406.03460v1 | 2024-06-05T17:07:39Z | 2024-06-05T17:07:39Z | The PESQetarian: On the Relevance of Goodhart's Law for Speech
Enhancement | To obtain improved speech enhancement models, researchers often focus on increasing performance according to specific instrumental metrics. However, when the same metric is used in a loss function to optimize models, it may be detrimental to aspects that the given metric does not see. The goal of this paper is to illustrate the risk of overfitting a speech enhancement model to the metric used for evaluation. For this, we introduce enhancement models that exploit the widely used PESQ measure. Our "PESQetarian" model achieves 3.82 PESQ on VB-DMD while scoring very poorly in a listening experiment. While the obtained PESQ value of 3.82 would imply "state-of-the-art" PESQ-performance on the VB-DMD benchmark, our examples show that when optimizing w.r.t. a metric, an isolated evaluation on the same metric may be misleading. Instead, other metrics should be included in the evaluation and the resulting performance predictions should be confirmed by listening. | [
"['Danilo de Oliveira' 'Simon Welker' 'Julius Richter' 'Timo Gerkmann']"
] |
null | null | 2406.03464 | null | null | http://arxiv.org/pdf/2406.03464v1 | 2024-06-05T17:12:38Z | 2024-06-05T17:12:38Z | Node-wise Filtering in Graph Neural Networks: A Mixture of Experts
Approach | Graph Neural Networks (GNNs) have proven to be highly effective for node classification tasks across diverse graph structural patterns. Traditionally, GNNs employ a uniform global filter, typically a low-pass filter for homophilic graphs and a high-pass filter for heterophilic graphs. However, real-world graphs often exhibit a complex mix of homophilic and heterophilic patterns, rendering a single global filter approach suboptimal. In this work, we theoretically demonstrate that a global filter optimized for one pattern can adversely affect performance on nodes with differing patterns. To address this, we introduce a novel GNN framework Node-MoE that utilizes a mixture of experts to adaptively select the appropriate filters for different nodes. Extensive experiments demonstrate the effectiveness of Node-MoE on both homophilic and heterophilic graphs. | [
"['Haoyu Han' 'Juanhui Li' 'Wei Huang' 'Xianfeng Tang' 'Hanqing Lu'\n 'Chen Luo' 'Hui Liu' 'Jiliang Tang']"
] |
null | null | 2406.03472 | null | null | http://arxiv.org/pdf/2406.03472v2 | 2024-06-28T17:44:28Z | 2024-06-05T17:25:29Z | Solving Differential Equations using Physics-Informed Deep Equilibrium
Models | This paper introduces Physics-Informed Deep Equilibrium Models (PIDEQs) for solving initial value problems (IVPs) of ordinary differential equations (ODEs). Leveraging recent advancements in deep equilibrium models (DEQs) and physics-informed neural networks (PINNs), PIDEQs combine the implicit output representation of DEQs with physics-informed training techniques. We validate PIDEQs using the Van der Pol oscillator as a benchmark problem, demonstrating their efficiency and effectiveness in solving IVPs. Our analysis includes key hyperparameter considerations for optimizing PIDEQ performance. By bridging deep learning and physics-based modeling, this work advances computational techniques for solving IVPs, with implications for scientific computing and engineering applications. | [
"['Bruno Machado Pacheco' 'Eduardo Camponogara']"
] |
null | null | 2406.03476 | null | null | http://arxiv.org/pdf/2406.03476v1 | 2024-06-05T17:29:15Z | 2024-06-05T17:29:15Z | Does your data spark joy? Performance gains from domain upsampling at
the end of training | Pretraining datasets for large language models (LLMs) have grown to trillions of tokens composed of large amounts of CommonCrawl (CC) web scrape along with smaller, domain-specific datasets. It is expensive to understand the impact of these domain-specific datasets on model capabilities as training at large FLOP scales is required to reveal significant changes to difficult and emergent benchmarks. Given the increasing cost of experimenting with pretraining data, how does one determine the optimal balance between the diversity in general web scrapes and the information density of domain specific data? In this work, we show how to leverage the smaller domain specific datasets by upsampling them relative to CC at the end of training to drive performance improvements on difficult benchmarks. This simple technique allows us to improve up to 6.90 pp on MMLU, 8.26 pp on GSM8K, and 6.17 pp on HumanEval relative to the base data mix for a 7B model trained for 1 trillion (T) tokens, thus rivaling Llama-2 (7B), a model trained for twice as long. We experiment with ablating the duration of domain upsampling from 5% to 30% of training and find that 10% to 20% is optimal for navigating the tradeoff between general language modeling capabilities and targeted benchmarks. We also use domain upsampling to characterize at scale the utility of individual datasets for improving various benchmarks by removing them during this final phase of training. This tool opens up the ability to experiment with the impact of different pretraining datasets at scale, but at an order of magnitude lower cost compared to full pretraining runs. | [
"['Cody Blakeney' 'Mansheej Paul' 'Brett W. Larsen' 'Sean Owen'\n 'Jonathan Frankle']"
] |
null | null | 2406.03478 | null | null | http://arxiv.org/pdf/2406.03478v1 | 2024-06-05T17:32:22Z | 2024-06-05T17:32:22Z | Convolutional Neural Networks and Vision Transformers for Fashion MNIST
Classification: A Literature Review | Our review explores the comparative analysis between Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs) in the domain of image classification, with a particular focus on clothing classification within the e-commerce sector. Utilizing the Fashion MNIST dataset, we delve into the unique attributes of CNNs and ViTs. While CNNs have long been the cornerstone of image classification, ViTs introduce an innovative self-attention mechanism enabling nuanced weighting of different input data components. Historically, transformers have primarily been associated with Natural Language Processing (NLP) tasks. Through a comprehensive examination of existing literature, our aim is to unveil the distinctions between ViTs and CNNs in the context of image classification. Our analysis meticulously scrutinizes state-of-the-art methodologies employing both architectures, striving to identify the factors influencing their performance. These factors encompass dataset characteristics, image dimensions, the number of target classes, hardware infrastructure, and the specific architectures along with their respective top results. Our key goal is to determine the most appropriate architecture between ViT and CNN for classifying images in the Fashion MNIST dataset within the e-commerce industry, while taking into account specific conditions and needs. We highlight the importance of combining these two architectures with different forms to enhance overall performance. By uniting these architectures, we can take advantage of their unique strengths, which may lead to more precise and reliable models for e-commerce applications. CNNs are skilled at recognizing local patterns, while ViTs are effective at grasping overall context, making their combination a promising strategy for boosting image classification performance. | [
"['Sonia Bbouzidi' 'Ghazala Hcini' 'Imen Jdey' 'Fadoua Drira']"
] |
null | null | 2406.03482 | null | null | http://arxiv.org/pdf/2406.03482v1 | 2024-06-05T17:42:05Z | 2024-06-05T17:42:05Z | QJL: 1-Bit Quantized JL Transform for KV Cache Quantization with Zero
Overhead | Serving LLMs requires substantial memory due to the storage requirements of Key-Value (KV) embeddings in the KV cache, which grows with sequence length. An effective approach to compress KV cache is quantization. However, traditional quantization methods face significant memory overhead due to the need to store quantization constants (at least a zero point and a scale) in full precision per data block. Depending on the block size, this overhead can add 1 or 2 bits per quantized number. We introduce QJL, a new quantization approach that consists of a Johnson-Lindenstrauss (JL) transform followed by sign-bit quantization. In contrast to existing methods, QJL eliminates memory overheads by removing the need for storing quantization constants. We propose an asymmetric estimator for the inner product of two vectors and demonstrate that applying QJL to one vector and a standard JL transform without quantization to the other provides an unbiased estimator with minimal distortion. We have developed an efficient implementation of the QJL sketch and its corresponding inner product estimator, incorporating a lightweight CUDA kernel for optimized computation. When applied across various LLMs and NLP tasks to quantize the KV cache to only 3 bits, QJL demonstrates a more than fivefold reduction in KV cache memory usage without compromising accuracy, all while achieving faster runtime. Codes are available at \url{https://github.com/amirzandieh/QJL}. | [
"['Amir Zandieh' 'Majid Daliri' 'Insu Han']"
] |
null | null | 2406.03485 | null | null | http://arxiv.org/pdf/2406.03485v1 | 2024-06-05T17:46:26Z | 2024-06-05T17:46:26Z | Highway Value Iteration Networks | Value iteration networks (VINs) enable end-to-end learning for planning tasks by employing a differentiable "planning module" that approximates the value iteration algorithm. However, long-term planning remains a challenge because training very deep VINs is difficult. To address this problem, we embed highway value iteration -- a recent algorithm designed to facilitate long-term credit assignment -- into the structure of VINs. This improvement augments the "planning module" of the VIN with three additional components: 1) an "aggregate gate," which constructs skip connections to improve information flow across many layers; 2) an "exploration module," crafted to increase the diversity of information and gradient flow in spatial dimensions; 3) a "filter gate" designed to ensure safe exploration. The resulting novel highway VIN can be trained effectively with hundreds of layers using standard backpropagation. In long-term planning tasks requiring hundreds of planning steps, deep highway VINs outperform both traditional VINs and several advanced, very deep NNs. | [
"['Yuhui Wang' 'Weida Li' 'Francesco Faccio' 'Qingyuan Wu'\n 'Jürgen Schmidhuber']"
] |
null | null | 2406.03494 | null | null | http://arxiv.org/pdf/2406.03494v1 | 2024-06-05T17:59:22Z | 2024-06-05T17:59:22Z | Solving Poisson Equations using Neural Walk-on-Spheres | We propose Neural Walk-on-Spheres (NWoS), a novel neural PDE solver for the efficient solution of high-dimensional Poisson equations. Leveraging stochastic representations and Walk-on-Spheres methods, we develop novel losses for neural networks based on the recursive solution of Poisson equations on spheres inside the domain. The resulting method is highly parallelizable and does not require spatial gradients for the loss. We provide a comprehensive comparison against competing methods based on PINNs, the Deep Ritz method, and (backward) stochastic differential equations. In several challenging, high-dimensional numerical examples, we demonstrate the superiority of NWoS in accuracy, speed, and computational costs. Compared to commonly used PINNs, our approach can reduce memory usage and errors by orders of magnitude. Furthermore, we apply NWoS to problems in PDE-constrained optimization and molecular dynamics to show its efficiency in practical applications. | [
"['Hong Chul Nam' 'Julius Berner' 'Anima Anandkumar']"
] |
null | null | 2406.03495 | null | null | http://arxiv.org/pdf/2406.03495v1 | 2024-06-05T17:59:35Z | 2024-06-05T17:59:35Z | Grokking Modular Polynomials | Neural networks readily learn a subset of the modular arithmetic tasks, while failing to generalize on the rest. This limitation remains unmoved by the choice of architecture and training strategies. On the other hand, an analytical solution for the weights of Multi-layer Perceptron (MLP) networks that generalize on the modular addition task is known in the literature. In this work, we (i) extend the class of analytical solutions to include modular multiplication as well as modular addition with many terms. Additionally, we show that real networks trained on these datasets learn similar solutions upon generalization (grokking). (ii) We combine these "expert" solutions to construct networks that generalize on arbitrary modular polynomials. (iii) We hypothesize a classification of modular polynomials into learnable and non-learnable via neural networks training; and provide experimental evidence supporting our claims. | [
"['Darshil Doshi' 'Tianyu He' 'Aritra Das' 'Andrey Gromov']"
] |
null | null | 2406.03496 | null | null | http://arxiv.org/pdf/2406.03496v1 | 2024-06-05T17:59:40Z | 2024-06-05T17:59:40Z | Wings: Learning Multimodal LLMs without Text-only Forgetting | Multimodal large language models (MLLMs), initiated with a trained LLM, first align images with text and then fine-tune on multimodal mixed inputs. However, the MLLM catastrophically forgets the text-only instructions, which do not include images and can be addressed within the initial LLM. In this paper, we present Wings, a novel MLLM that excels in both text-only dialogues and multimodal comprehension. Analyzing MLLM attention in multimodal instructions reveals that text-only forgetting is related to the attention shifts from pre-image to post-image text. From that, we construct extra modules that act as the boosted learner to compensate for the attention shift. The complementary visual and textual learners, like "wings" on either side, are connected in parallel within each layer's attention block. Initially, image and text inputs are aligned with visual learners operating alongside the main attention, balancing focus on visual elements. Textual learners are later collaboratively integrated with attention-based routing to blend the outputs of the visual and textual learners. We design the Low-Rank Residual Attention (LoRRA) to guarantee high efficiency for learners. Our experimental results demonstrate that Wings outperforms equally-scaled MLLMs in both text-only and visual question-answering tasks. On a newly constructed Interleaved Image-Text (IIT) benchmark, Wings exhibits superior performance from text-only-rich to multimodal-rich question-answering tasks. | [
"['Yi-Kai Zhang' 'Shiyin Lu' 'Yang Li' 'Yanqing Ma' 'Qing-Guo Chen'\n 'Zhao Xu' 'Weihua Luo' 'Kaifu Zhang' 'De-Chuan Zhan' 'Han-Jia Ye']"
] |
null | null | 2406.03503 | null | null | http://arxiv.org/pdf/2406.03503v1 | 2024-06-02T16:11:38Z | 2024-06-02T16:11:38Z | Position: Rethinking Post-Hoc Search-Based Neural Approaches for Solving
Large-Scale Traveling Salesman Problems | Recent advancements in solving large-scale traveling salesman problems (TSP) utilize the heatmap-guided Monte Carlo tree search (MCTS) paradigm, where machine learning (ML) models generate heatmaps, indicating the probability distribution of each edge being part of the optimal solution, to guide MCTS in solution finding. However, our theoretical and experimental analysis raises doubts about the effectiveness of ML-based heatmap generation. In support of this, we demonstrate that a simple baseline method can outperform complex ML approaches in heatmap generation. Furthermore, we question the practical value of the heatmap-guided MCTS paradigm. To substantiate this, our findings show its inferiority to the LKH-3 heuristic despite the paradigm's reliance on problem-specific, hand-crafted strategies. For the future, we suggest research directions focused on developing more theoretically sound heatmap generation methods and exploring autonomous, generalizable ML approaches for combinatorial problems. The code is available for review: https://github.com/xyfffff/rethink_mcts_for_tsp. | [
"['Yifan Xia' 'Xianliang Yang' 'Zichuan Liu' 'Zhihao Liu' 'Lei Song'\n 'Jiang Bian']"
] |