categories: string
doi: string
id: string
year: float64
venue: string
link: string
updated: string
published: string
title: string
abstract: string
authors: list
null
null
2403.01673
null
null
http://arxiv.org/pdf/2403.01673v1
2024-03-04T01:52:40Z
2024-03-04T01:52:40Z
CATS: Enhancing Multivariate Time Series Forecasting by Constructing Auxiliary Time Series as Exogenous Variables
For Multivariate Time Series Forecasting (MTSF), recent deep learning applications show that univariate models frequently outperform multivariate ones. To address the deficiency in multivariate models, we introduce a method to Construct Auxiliary Time Series (CATS) that functions like a 2D temporal-contextual attention mechanism, generating Auxiliary Time Series (ATS) from Original Time Series (OTS) to effectively represent and incorporate inter-series relationships for forecasting. Key principles of ATS - continuity, sparsity, and variability - are identified and implemented through different modules. Even with a basic 2-layer MLP as the core predictor, CATS achieves state-of-the-art performance, significantly reducing complexity and parameters compared to previous multivariate models, making it an efficient and transferable MTSF solution.
[ "['Jiecheng Lu' 'Xu Han' 'Yan Sun' 'Shihao Yang']" ]
null
null
2403.01695
null
null
http://arxiv.org/pdf/2403.01695v1
2024-03-04T03:09:28Z
2024-03-04T03:09:28Z
DyCE: Dynamic Configurable Exiting for Deep Learning Compression and Scaling
Modern deep learning (DL) models necessitate the employment of scaling and compression techniques for effective deployment in resource-constrained environments. Most existing techniques, such as pruning and quantization, are generally static. On the other hand, dynamic compression methods, such as early exits, reduce complexity by recognizing the difficulty of input samples and allocating computation as needed. Despite their superior flexibility and potential for co-existing with static methods, dynamic methods pose significant implementation challenges because any change to the dynamic parts influences subsequent processes. Moreover, most current dynamic compression designs are monolithic and tightly integrated with base models, thereby complicating the adaptation to novel base models. This paper introduces DyCE, a dynamically configurable early-exit framework that decouples design considerations from each other and from the base model. Utilizing this framework, various types and positions of exits can be organized according to predefined configurations, which can be switched in real-time to accommodate evolving performance-complexity requirements. We also propose techniques for generating optimized configurations based on any desired trade-off between performance and computational complexity. This empowers future researchers to focus on the improvement of individual exits without latent compromise of overall system performance. The efficacy of this approach is demonstrated through image classification tasks with deep CNNs. DyCE reduces the computational complexity of ResNet152 by 23.5% and of ConvNextv2-tiny by 25.9% on ImageNet, with accuracy reductions of less than 0.5%. Furthermore, DyCE offers advantages over existing dynamic methods in terms of real-time configuration and fine-grained performance tuning.
[ "['Qingyuan Wang' 'Barry Cardiff' 'Antoine Frappé' 'Benoit Larras'\n 'Deepu John']" ]
null
null
2403.01709
null
null
http://arxiv.org/pdf/2403.01709v1
2024-03-04T03:56:14Z
2024-03-04T03:56:14Z
Can LLMs Generate Architectural Design Decisions? An Exploratory Empirical Study
Architectural Knowledge Management (AKM) involves the organized handling of information related to architectural decisions and design within a project or organization. An essential artifact of AKM is the Architecture Decision Record (ADR), which documents key design decisions. ADRs capture the decision context, the decision made, and various aspects related to a design decision, thereby promoting transparency, collaboration, and understanding. Despite their benefits, ADR adoption in software development has been slow due to challenges like time constraints and inconsistent uptake. Recent advancements in Large Language Models (LLMs) may help bridge this adoption gap by facilitating ADR generation. However, the effectiveness of LLMs for ADR generation or understanding has not yet been explored. To this end, we perform an exploratory study that investigates the feasibility of using LLMs to generate ADRs given the decision context. In our exploratory study, we utilize GPT and T5-based models with 0-shot, few-shot, and fine-tuning approaches to generate the Decision of an ADR given its Context. Our results indicate that in a 0-shot setting, state-of-the-art models such as GPT-4 generate relevant and accurate Design Decisions, although they fall short of human-level performance. Additionally, we observe that more cost-effective models like GPT-3.5 can achieve similar outcomes in a few-shot setting, and smaller models such as Flan-T5 can yield comparable results after fine-tuning. To conclude, this exploratory study suggests that LLMs can generate Design Decisions, but further research is required to attain human-level generation and establish standardized, widespread adoption.
[ "['Rudra Dhar' 'Karthik Vaidhyanathan' 'Vasudeva Varma']" ]
null
null
2403.01717
null
null
http://arxiv.org/pdf/2403.01717v2
2024-04-22T17:50:48Z
2024-03-04T04:10:24Z
Soft-constrained Schrödinger Bridge: a Stochastic Control Approach
Schr"{o}dinger bridge can be viewed as a continuous-time stochastic control problem where the goal is to find an optimally controlled diffusion process whose terminal distribution coincides with a pre-specified target distribution. We propose to generalize this problem by allowing the terminal distribution to differ from the target but penalizing the Kullback-Leibler divergence between the two distributions. We call this new control problem soft-constrained Schr"{o}dinger bridge (SSB). The main contribution of this work is a theoretical derivation of the solution to SSB, which shows that the terminal distribution of the optimally controlled process is a geometric mixture of the target and some other distribution. This result is further extended to a time series setting. One application is the development of robust generative diffusion models. We propose a score matching-based algorithm for sampling from geometric mixtures and showcase its use via a numerical example for the MNIST data set.
[ "['Jhanvi Garg' 'Xianyang Zhang' 'Quan Zhou']" ]
null
null
2403.01718
null
null
http://arxiv.org/pdf/2403.01718v1
2024-03-04T04:12:37Z
2024-03-04T04:12:37Z
$L_0$ Regularization of Field-Aware Factorization Machine through Ising Model
We examined the use of the Ising model as an $L_0$ regularization method for field-aware factorization machines (FFM). This approach improves generalization performance and has the advantage of simultaneously determining the best feature combinations for each of several groups. We can deepen the interpretation and understanding of the model from the similarities and differences in the features selected in each group.
[ "['Yasuharu Okamoto']" ]
null
null
2403.01723
null
null
http://arxiv.org/pdf/2403.01723v1
2024-03-04T04:32:28Z
2024-03-04T04:32:28Z
Statistical Mechanics of Dynamical System Identification
Recovering dynamical equations from observed noisy data is the central challenge of system identification. We develop a statistical mechanical approach to analyze sparse equation discovery algorithms, which typically balance data fit and parsimony through a trial-and-error selection of hyperparameters. In this framework, statistical mechanics offers tools to analyze the interplay between complexity and fitness, in analogy to that between entropy and energy. To establish this analogy, we define the optimization procedure as a two-level Bayesian inference problem that separates variable selection from coefficient values and enables the computation of the posterior parameter distribution in closed form. A key advantage of employing statistical mechanical concepts, such as free energy and the partition function, is the quantification of uncertainty, especially in the low-data limit frequently encountered in real-world applications. As the data volume increases, our approach mirrors the thermodynamic limit, leading to distinct sparsity- and noise-induced phase transitions that delineate correct from incorrect identification. This perspective on sparse equation discovery is versatile and can be adapted to various other equation discovery algorithms.
[ "['Andrei A. Klishin' 'Joseph Bakarji' 'J. Nathan Kutz' 'Krithika Manohar']" ]
null
null
2403.01734
null
null
http://arxiv.org/pdf/2403.01734v1
2024-03-04T05:20:57Z
2024-03-04T05:20:57Z
Offline Goal-Conditioned Reinforcement Learning for Safety-Critical Tasks with Recovery Policy
Offline goal-conditioned reinforcement learning (GCRL) aims at solving goal-reaching tasks with sparse rewards from an offline dataset. While prior work has demonstrated various approaches for agents to learn near-optimal policies, these methods encounter limitations when dealing with diverse constraints in complex environments, such as safety constraints. Some of these approaches prioritize goal attainment without considering safety, while others excessively focus on safety at the expense of training efficiency. In this paper, we study the problem of constrained offline GCRL and propose a new method called Recovery-based Supervised Learning (RbSL) to accomplish safety-critical tasks with various goals. To evaluate the method performance, we build a benchmark based on the robot-fetching environment with a randomly positioned obstacle and use expert or random policies to generate an offline dataset. We compare RbSL with three offline GCRL algorithms and one offline safe RL algorithm. As a result, our method outperforms the existing state-of-the-art methods to a large extent. Furthermore, we validate the practicality and effectiveness of RbSL by deploying it on a real Panda manipulator. Code is available at https://github.com/Sunlighted/RbSL.git.
[ "['Chenyang Cao' 'Zichen Yan' 'Renhao Lu' 'Junbo Tan' 'Xueqian Wang']" ]
null
null
2403.01738
null
null
http://arxiv.org/pdf/2403.01738v1
2024-03-04T05:31:29Z
2024-03-04T05:31:29Z
ComS2T: A complementary spatiotemporal learning system for data-adaptive model evolution
Spatiotemporal (ST) learning has become a crucial technique to enable smart cities and sustainable urban development. Current ST learning models capture heterogeneity via various spatial convolution and temporal evolution blocks. However, rapid urbanization leads to fluctuating distributions in urban data and city structures over short periods, so existing methods suffer from generalization and data adaptation issues. Despite efforts, existing methods fail to deal with newly arrived observations, and those with generalization capacity are limited by repeated training. Motivated by complementary learning in neuroscience, we introduce a prompt-based complementary spatiotemporal learning system, termed ComS2T, to empower the evolution of models for data adaptation. ComS2T partitions the neural architecture into a stable neocortex for consolidating historical memory and a dynamic hippocampus for new knowledge updates. We first disentangle two disjoint structures into stable and dynamic weights, and then train spatial and temporal prompts by characterizing the distribution of main observations to make the prompts adaptive to new data. This data-adaptive prompt mechanism, combined with a two-stage training process, facilitates fine-tuning of the neural architecture conditioned on prompts, thereby enabling efficient adaptation during testing. Extensive experiments validate the efficacy of ComS2T in adapting to various spatiotemporal out-of-distribution scenarios while maintaining efficient inference capabilities.
[ "['Zhengyang Zhou' 'Qihe Huang' 'Binwu Wang' 'Jianpeng Hou' 'Kuo Yang'\n 'Yuxuan Liang' 'Yang Wang']" ]
null
null
2403.01742
null
null
http://arxiv.org/pdf/2403.01742v2
2024-03-14T06:35:34Z
2024-03-04T05:39:23Z
Diffusion-TS: Interpretable Diffusion for General Time Series Generation
Denoising diffusion probabilistic models (DDPMs) are becoming the leading paradigm for generative models, with recent breakthroughs in audio synthesis, time series imputation, and forecasting. In this paper, we propose Diffusion-TS, a novel diffusion-based framework that generates multivariate time series samples of high quality by using an encoder-decoder transformer with disentangled temporal representations, in which the decomposition technique guides Diffusion-TS to capture the semantic meaning of time series while transformers mine detailed sequential information from the noisy model input. Different from existing diffusion-based approaches, we train the model to directly reconstruct the sample instead of the noise in each diffusion step, combined with a Fourier-based loss term. Diffusion-TS is expected to generate time series satisfying both interpretability and realness. In addition, we show that the proposed Diffusion-TS can be easily extended to conditional generation tasks, such as forecasting and imputation, without any model changes. This also motivates us to further explore the performance of Diffusion-TS under irregular settings. Finally, through qualitative and quantitative experiments, we show that Diffusion-TS achieves state-of-the-art results on various realistic analyses of time series.
[ "['Xinyu Yuan' 'Yan Qiao']" ]
null
null
2403.01757
null
null
http://arxiv.org/pdf/2403.01757v1
2024-03-04T06:24:21Z
2024-03-04T06:24:21Z
How Multimodal Integration Boosts the Performance of LLM for Optimization: Case Study on Capacitated Vehicle Routing Problems
Recently, large language models (LLMs) have notably positioned themselves as capable tools for addressing complex optimization challenges. Despite this recognition, a predominant limitation of existing LLM-based optimization methods is their struggle to capture the relationships among decision variables when relying exclusively on numerical text prompts, especially in high-dimensional problems. With this in mind, we first propose to enhance optimization performance using a multimodal LLM capable of processing both textual and visual prompts for deeper insights into the optimization problem at hand. This integration allows for a more comprehensive understanding of optimization problems, akin to human cognitive processes. We have developed a multimodal LLM-based optimization framework that simulates human problem-solving workflows, thereby offering a more nuanced and effective analysis. The efficacy of this method is evaluated through extensive empirical studies focused on a well-known combinatorial optimization problem, i.e., the capacitated vehicle routing problem. The results are compared against those obtained from LLM-based optimization algorithms that rely solely on textual prompts, demonstrating the significant advantages of our multimodal approach.
[ "['Yuxiao Huang' 'Wenjie Zhang' 'Liang Feng' 'Xingyu Wu' 'Kay Chen Tan']" ]
null
null
2403.01758
null
null
http://arxiv.org/pdf/2403.01758v1
2024-03-04T06:24:24Z
2024-03-04T06:24:24Z
AFBT GAN: enhanced explainability and diagnostic performance for cognitive decline by counterfactual generative adversarial network
Existing explanation results for functional connectivity (FC) are normally generated using classification result labels and correlation analysis methods such as Pearson's correlation or gradient backpropagation. However, the diagnostic model is still trained as a black box and might lack attention to FCs in important regions during training. To enhance explainability and improve diagnostic performance, a key step is providing the diagnostic model with prior knowledge of the neurodegeneration-related regions involved as healthy subjects (HC) develop subjective cognitive decline (SCD) and mild cognitive impairment (MCI). To better determine the neurodegeneration-related regions, we employ counterfactual reasoning to generate target label FC matrices from source label FC and then subtract the target label FC from the source label FC. The counterfactual reasoning architecture is constructed as an adaptive forward and backward transformer generative adversarial network (AFBT GAN), which is specifically designed around the network property of FC and an inverse patch embedding operation in the transformer. This specific design makes the model focus more on the current network correlation and employ the global insight of the transformer to reconstruct FC, both of which help the generation of high-quality target label FC. Validation experiments are conducted on both clinical and public datasets; the generated attention maps correlate strongly with cognitive function, and the diagnostic performance is also significant. The code is available at https://github.com/SXR3015/AFBT-GAN.
[ "['Xiongri Shen' 'Zhenxi Song' 'Zhiguo Zhang']" ]
null
null
2403.01759
null
null
http://arxiv.org/pdf/2403.01759v2
2024-03-15T02:22:07Z
2024-03-04T06:25:26Z
Open-world Machine Learning: A Review and New Outlooks
Machine learning has achieved remarkable success in many applications. However, existing studies are largely based on the closed-world assumption, which assumes that the environment is stationary, and the model is fixed once deployed. In many real-world applications, this fundamental and rather naive assumption may not hold because an open environment is complex, dynamic, and full of unknowns. In such cases, rejecting unknowns, discovering novelties, and then incrementally learning them, could enable models to be safe and evolve continually as biological systems do. This paper provides a holistic view of open-world machine learning by investigating unknown rejection, novel class discovery, and class-incremental learning in a unified paradigm. The challenges, principles, and limitations of current methodologies are discussed in detail. Finally, we discuss several potential directions for future research. This paper aims to provide a comprehensive introduction to the emerging open-world machine learning paradigm, to help researchers build more powerful AI systems in their respective fields, and to promote the development of artificial general intelligence.
[ "['Fei Zhu' 'Shijie Ma' 'Zhen Cheng' 'Xu-Yao Zhang' 'Zhaoxiang Zhang'\n 'Cheng-Lin Liu']" ]
null
null
2403.01769
null
null
http://arxiv.org/pdf/2403.01769v1
2024-03-04T06:55:57Z
2024-03-04T06:55:57Z
A Safe Screening Rule with Bi-level Optimization of $\nu$ Support Vector Machine
Support vector machine (SVM) has achieved many successes in machine learning, especially for small sample problems. As a famous extension of the traditional SVM, the $\nu$ support vector machine ($\nu$-SVM) has shown outstanding performance due to its great model interpretability. However, it still faces challenges in training overhead for large-scale problems. To address this issue, we propose a safe screening rule with bi-level optimization for $\nu$-SVM (SRBO-$\nu$-SVM) which can screen out inactive samples before training and reduce the computational cost without sacrificing the prediction accuracy. Our SRBO-$\nu$-SVM is strictly deduced by integrating the Karush-Kuhn-Tucker (KKT) conditions, the variational inequalities of convex problems and the $\nu$-property. Furthermore, we develop an efficient dual coordinate descent method (DCDM) to further improve computational speed. Finally, a unified framework for SRBO is proposed to accelerate many SVM-type models, and it is successfully applied to one-class SVM. Experimental results on 6 artificial data sets and 30 benchmark data sets have verified the effectiveness and safety of our proposed methods in supervised and unsupervised tasks.
[ "['Zhiji Yang' 'Wanyi Chen' 'Huan Zhang' 'Yitian Xu' 'Lei Shi'\n 'Jianhua Zhao']" ]
null
null
2403.01773
null
null
http://arxiv.org/pdf/2403.01773v2
2024-06-03T05:05:24Z
2024-03-04T07:03:10Z
Improving out-of-distribution generalization in graphs via hierarchical semantic environments
Out-of-distribution (OOD) generalization in the graph domain is challenging due to complex distribution shifts and a lack of environmental contexts. Recent methods attempt to enhance graph OOD generalization by generating flat environments. However, such flat environments come with inherent limitations to capture more complex data distributions. Considering the DrugOOD dataset, which contains diverse training environments (e.g., scaffold, size, etc.), flat contexts cannot sufficiently address its high heterogeneity. Thus, a new challenge is posed to generate more semantically enriched environments to enhance graph invariant learning for handling distribution shifts. In this paper, we propose a novel approach to generate hierarchical semantic environments for each graph. Firstly, given an input graph, we explicitly extract variant subgraphs from the input graph to generate proxy predictions on local environments. Then, stochastic attention mechanisms are employed to re-extract the subgraphs for regenerating global environments in a hierarchical manner. In addition, we introduce a new learning objective that guides our model to learn the diversity of environments within the same hierarchy while maintaining consistency across different hierarchies. This approach enables our model to consider the relationships between environments and facilitates robust graph invariant learning. Extensive experiments on real-world graph data have demonstrated the effectiveness of our framework. Particularly, in the challenging dataset DrugOOD, our method achieves up to 1.29% and 2.83% improvement over the best baselines on IC50 and EC50 prediction tasks, respectively.
[ "['Yinhua Piao' 'Sangseon Lee' 'Yijingxiu Lu' 'Sun Kim']" ]
null
null
2403.01776
null
null
http://arxiv.org/pdf/2403.01776v1
2024-03-04T07:09:54Z
2024-03-04T07:09:54Z
Hybrid data-driven and physics-informed regularized learning of cyclic plasticity with Neural Networks
An extendable, efficient and explainable Machine Learning approach is proposed to represent cyclic plasticity and replace conventional material models based on the Radial Return Mapping algorithm. High accuracy and stability by means of a limited amount of training data is achieved by implementing physics-informed regularizations and the back stress information. The off-loading of the Neural Network is applied to the maximal extent. The proposed model architecture is simpler and more efficient compared to existing solutions from the literature, while representing a complete three-dimensional material model. The validation of the approach is carried out by means of surrogate data obtained with the Armstrong-Frederick kinematic hardening model. The Mean Squared Error is assumed as the loss function which stipulates several restrictions: deviatoric character of internal variables, compliance with the flow rule, the differentiation of elastic and plastic steps and the associativity of the flow rule. The latter, however, has a minor impact on the accuracy, which implies the generalizability of the model for a broad spectrum of evolution laws for internal variables. Numerical tests simulating several load cases are shown in detail and validated for accuracy and stability.
[ "['Stefan Hildebrand' 'Sandra Klinge']" ]
null
null
2403.01798
null
null
http://arxiv.org/pdf/2403.01798v1
2024-03-04T07:38:31Z
2024-03-04T07:38:31Z
Towards Fair and Efficient Learning-based Congestion Control
Recent years have witnessed a plethora of learning-based solutions for congestion control (CC) that demonstrate better performance over traditional TCP schemes. However, they fail to provide consistently good convergence properties, including \emph{fairness}, \emph{fast convergence} and \emph{stability}, due to the mismatch between their objective functions and these properties. Despite being intuitive, integrating these properties into existing learning-based CC is challenging, because: 1) their training environments are designed for the performance optimization of a single flow but are incapable of cooperative multi-flow optimization, and 2) there is no directly measurable metric to represent these properties in the training objective function. We present Astraea, a new learning-based congestion control that ensures fast convergence to fairness with stability. At the heart of Astraea is a multi-agent deep reinforcement learning framework that explicitly optimizes these convergence properties during the training process by enabling the learning of interactive policies between multiple competing flows, while maintaining high performance. We further build a faithful multi-flow environment that emulates the competing behaviors of concurrent flows, explicitly expressing convergence properties to enable their optimization during training. We have fully implemented Astraea, and our comprehensive experiments show that Astraea can quickly converge to the fairness point and exhibit better stability than its counterparts. For example, Astraea achieves near-optimal bandwidth sharing (i.e., fairness) when multiple flows compete for the same bottleneck, and delivers up to 8.4$\times$ faster convergence speed and 2.8$\times$ smaller throughput deviation, while achieving comparable or even better performance over prior solutions.
[ "['Xudong Liao' 'Han Tian' 'Chaoliang Zeng' 'Xinchen Wan' 'Kai Chen']" ]
null
null
2403.01801
null
null
http://arxiv.org/abs/2403.01801v1
2024-03-04T07:45:29Z
2024-03-04T07:45:29Z
COLA: Cross-city Mobility Transformer for Human Trajectory Simulation
Human trajectory data produced by daily mobile devices has proven its usefulness in various substantial fields such as urban planning and epidemic prevention. Given individual privacy concerns, human trajectory simulation has attracted increasing attention from researchers, aiming to offer numerous realistic mobility data for downstream tasks. Nevertheless, the prevalent issue of data scarcity undoubtedly degrades the reliability of existing deep learning models. In this paper, we are motivated to explore the intriguing problem of mobility transfer across cities, grasping the universal patterns of human trajectories to augment the powerful Transformer with external mobility data. Two crucial challenges arise in knowledge transfer across cities: 1) how to transfer the Transformer to adapt to domain heterogeneity; 2) how to calibrate the Transformer to adapt to the subtly different long-tail frequency distributions of locations. To address these challenges, we have tailored a Cross-city mObiLity trAnsformer (COLA) with a dedicated model-agnostic transfer framework that effectively transfers cross-city knowledge for human trajectory simulation. Firstly, COLA divides the Transformer into private modules for city-specific characteristics and shared modules for city-universal mobility patterns. Secondly, COLA leverages a lightweight yet effective post-hoc adjustment strategy for trajectory simulation, without disturbing the complex bi-level optimization of model-agnostic knowledge transfer. Extensive experiments comparing COLA with state-of-the-art single-city baselines and our implemented cross-city baselines have demonstrated its superiority and effectiveness. The code is available at https://github.com/Star607/Cross-city-Mobility-Transformer.
[ "['Yu Wang' 'Tongya Zheng' 'Yuxuan Liang' 'Shunyu Liu' 'Mingli Song']" ]
null
null
2403.01805
null
null
http://arxiv.org/pdf/2403.01805v1
2024-03-04T07:53:15Z
2024-03-04T07:53:15Z
Tsallis Entropy Regularization for Linearly Solvable MDP and Linear Quadratic Regulator
Shannon entropy regularization is widely adopted in optimal control due to its ability to promote exploration and enhance robustness, e.g., maximum entropy reinforcement learning known as Soft Actor-Critic. In this paper, Tsallis entropy, which is a one-parameter extension of Shannon entropy, is used for the regularization of linearly solvable MDP and linear quadratic regulators. We derive the solution for these problems and demonstrate its usefulness in balancing between exploration and sparsity of the obtained control law.
[ "['Yota Hashizume' 'Koshi Oishi' 'Kenji Kashima']" ]
null
null
2403.01820
null
null
http://arxiv.org/pdf/2403.01820v1
2024-03-04T08:10:42Z
2024-03-04T08:10:42Z
Macroscopic auxiliary asymptotic preserving neural networks for the linear radiative transfer equations
We develop a Macroscopic Auxiliary Asymptotic-Preserving Neural Network (MA-APNN) method to solve the time-dependent linear radiative transfer equations (LRTEs), which have a multi-scale nature and high dimensionality. To achieve this, we utilize the Physics-Informed Neural Networks (PINNs) framework and design a new adaptive exponentially weighted Asymptotic-Preserving (AP) loss function, which incorporates the macroscopic auxiliary equation that is derived from the original transfer equation directly and explicitly contains the information of the diffusion limit equation. Thus, as the scale parameter tends to zero, the loss function gradually transitions from the transport state to the diffusion limit state. In addition, the initial data, boundary conditions, and conservation laws serve as the regularization terms for the loss. We present several numerical examples to demonstrate the effectiveness of MA-APNNs.
[ "['Hongyan Li' 'Song Jiang' 'Wenjun Sun' 'Liwei Xu' 'Guanyu Zhou']" ]
null
null
2403.01841
null
null
http://arxiv.org/pdf/2403.01841v2
2024-03-12T07:34:28Z
2024-03-04T08:38:56Z
Making Pre-trained Language Models Great on Tabular Prediction
The transferability of deep neural networks (DNNs) has made significant progress in image and language processing. However, due to the heterogeneity among tables, such a DNN bonus is still far from being well exploited on tabular data prediction (e.g., regression or classification tasks). Condensing knowledge from diverse domains, language models (LMs) possess the capability to comprehend feature names from various tables, potentially serving as versatile learners in transferring knowledge across distinct tables and diverse prediction tasks, but their discrete text representation space is inherently incompatible with numerical feature values in tables. In this paper, we present TP-BERTa, a specifically pre-trained LM for tabular data prediction. Concretely, a novel relative magnitude tokenization converts scalar numerical feature values to finely discrete, high-dimensional tokens, and an intra-feature attention approach integrates feature values with the corresponding feature names. Comprehensive experiments demonstrate that our pre-trained TP-BERTa leads in performance among tabular DNNs and is competitive with Gradient Boosted Decision Tree models in the typical tabular data regime.
[ "['Jiahuan Yan' 'Bo Zheng' 'Hongxia Xu' 'Yiheng Zhu' 'Danny Z. Chen'\n 'Jimeng Sun' 'Jian Wu' 'Jintai Chen']" ]
null
null
2403.01845
null
null
http://arxiv.org/pdf/2403.01845v2
2024-03-10T05:49:03Z
2024-03-04T08:51:38Z
NASH: Neural Architecture Search for Hardware-Optimized Machine Learning Models
As machine learning (ML) algorithms get deployed in an ever-increasing number of applications, these algorithms need to achieve better trade-offs between high accuracy, high throughput and low latency. This paper introduces NASH, a novel approach that applies neural architecture search to machine learning hardware. Using NASH, hardware designs can achieve not only high throughput and low latency but also superior accuracy performance. We present four versions of the NASH strategy in this paper, all of which show higher accuracy than the original models. The strategy can be applied to various convolutional neural networks, selecting specific model operations among many to guide the training process toward higher accuracy. Experimental results show that applying NASH on ResNet18 or ResNet34 achieves a top-1 accuracy increase of up to 3.1% and a top-5 accuracy increase of up to 2.2% compared to the non-NASH version when tested on the ImageNet data set. We also integrated this approach into the FINN hardware model synthesis tool to automate the application of our approach and the generation of the hardware model. Results show that using FINN can achieve a maximum throughput of 324.5 fps. In addition, NASH models can also result in a better trade-off between accuracy and hardware resource utilization. The accuracy-hardware (HW) Pareto curve shows that the models with the four NASH versions represent the best trade-offs, achieving the highest accuracy for a given HW utilization. The code for our implementation is open-source and publicly available on GitHub at https://github.com/MFJI/NASH.
[ "['Mengfei Ji' 'Yuchun Chang' 'Baolin Zhang' 'Zaid Al-Ars']" ]
null
null
2403.01849
null
null
http://arxiv.org/pdf/2403.01849v1
2024-03-04T08:59:32Z
2024-03-04T08:59:32Z
One Prompt Word is Enough to Boost Adversarial Robustness for Pre-trained Vision-Language Models
Large pre-trained Vision-Language Models (VLMs) like CLIP, despite having remarkable generalization ability, are highly vulnerable to adversarial examples. This work studies the adversarial robustness of VLMs from the novel perspective of the text prompt instead of the extensively studied model weights (frozen in this work). We first show that the effectiveness of both adversarial attack and defense are sensitive to the used text prompt. Inspired by this, we propose a method to improve resilience to adversarial attacks by learning a robust text prompt for VLMs. The proposed method, named Adversarial Prompt Tuning (APT), is effective while being both computationally and data efficient. Extensive experiments are conducted across 15 datasets and 4 data sparsity schemes (from 1-shot to full training data settings) to show APT's superiority over hand-engineered prompts and other state-of-the-art adaptation methods. APT demonstrated excellent abilities in terms of the in-distribution performance and the generalization under input distribution shift and across datasets. Surprisingly, by simply adding one learned word to the prompts, APT can significantly boost the accuracy and robustness ($\epsilon=4/255$) over the hand-engineered prompts by +13% and +8.5% on average respectively. The improvement further increases, in our most effective setting, to +26.4% for accuracy and +16.7% for robustness. Code is available at https://github.com/TreeLLi/APT.
[ "['Lin Li' 'Haoyan Guan' 'Jianing Qiu' 'Michael Spratling']" ]
null
null
2403.01857
null
null
http://arxiv.org/pdf/2403.01857v2
2024-06-05T09:00:36Z
2024-03-04T09:13:14Z
Reward Model Learning vs. Direct Policy Optimization: A Comparative Analysis of Learning from Human Preferences
In this paper, we take a step towards a deeper understanding of learning from human preferences by systematically comparing the paradigm of reinforcement learning from human feedback (RLHF) with the recently proposed paradigm of direct preference optimization (DPO). We focus our attention on the class of loglinear policy parametrization and linear reward functions. In order to compare the two paradigms, we first derive minimax statistical bounds on the suboptimality gap induced by both RLHF and DPO, assuming access to an oracle that exactly solves the optimization problems. We provide a detailed discussion on the relative comparison between the two paradigms, simultaneously taking into account the sample size, policy and reward class dimensions, and the regularization temperature. Moreover, we extend our analysis to the approximate optimization setting and derive exponentially decaying convergence rates for both RLHF and DPO. Next, we analyze the setting where the ground-truth reward is not realizable and find that, while RLHF incurs a constant additional error, DPO retains its asymptotically decaying gap by just tuning the temperature accordingly. Finally, we extend our comparison to the Markov decision process setting, where we generalize our results with exact optimization. To the best of our knowledge, we are the first to provide such a comparative analysis for RLHF and DPO.
[ "['Andi Nika' 'Debmalya Mandal' 'Parameswaran Kamalaruban'\n 'Georgios Tzannetos' 'Goran Radanović' 'Adish Singla']" ]
null
null
2403.01864
null
null
http://arxiv.org/pdf/2403.01864v1
2024-03-04T09:20:05Z
2024-03-04T09:20:05Z
RCoCo: Contrastive Collective Link Prediction across Multiplex Network in Riemannian Space
Link prediction typically studies the probability of future interconnection among nodes with observations in a single social network. More often than not, the real scenario is presented as a multiplex network with common (anchor) users active in multiple social networks. In the literature, most existing works study either intra-link prediction in a single network or inter-link prediction among networks (a.k.a. network alignment), and consider the two learning tasks as independent from each other, which is still far from the truth. On the representation space, the vast majority of existing methods are built upon the traditional Euclidean space, unaware of the inherent geometry of social networks. The third issue is the scarcity of anchor users. Annotating anchor users is laborious and expensive, and thus it is impractical to work with large quantities of anchor users. Herein, in light of the issues above, we propose to study a challenging yet practical problem of Geometry-aware Collective Link Prediction across Multiplex Networks. To address this problem, we present a novel contrastive model, RCoCo, which collaborates intra- and inter-network behaviors in Riemannian spaces. In RCoCo, we design a curvature-aware graph attention network ($\kappa$-GAT), conducting the attention mechanism in a Riemannian manifold whose curvature is estimated by the Ricci curvatures over the network. Thereafter, we formulate intra- and inter-contrastive losses in the manifolds, in which we augment graphs by exploring the high-order structure of communities and information transfer on anchor users. Finally, we conduct extensive experiments with 14 strong baselines on 8 real-world datasets, and show the effectiveness of RCoCo.
[ "['Li Sun' 'Mengjie Li' 'Yong Yang' 'Xiao Li' 'Lin Liu' 'Pengfei Zhang'\n 'Haohua Du']" ]
null
null
2403.01865
null
null
http://arxiv.org/pdf/2403.01865v2
2024-03-11T13:11:51Z
2024-03-04T09:21:10Z
Improving generalisation via anchor multivariate analysis
We introduce a causal regularisation extension to anchor regression (AR) for improved out-of-distribution (OOD) generalisation. We present anchor-compatible losses, aligning with the anchor framework to ensure robustness against distribution shifts. Various multivariate analysis (MVA) algorithms, such as (Orthonormalized) PLS, RRR, and MLR, fall within the anchor framework. We observe that simple regularisation enhances robustness in OOD settings. Estimators for selected algorithms are provided, showcasing consistency and efficacy in synthetic and real-world climate science problems. The empirical validation highlights the versatility of anchor regularisation, emphasizing its compatibility with MVA approaches and its role in enhancing replicability while guarding against distribution shifts. The extended AR framework advances causal inference methodologies, addressing the need for reliable OOD generalisation.
[ "['Homer Durand' 'Gherardo Varando' 'Nathan Mankovich' 'Gustau Camps-Valls']" ]
null
null
2403.01874
null
null
http://arxiv.org/pdf/2403.01874v1
2024-03-04T09:30:35Z
2024-03-04T09:30:35Z
A Survey on Evaluation of Out-of-Distribution Generalization
Machine learning models, while progressively advanced, rely heavily on the IID assumption, which is often unfulfilled in practice due to inevitable distribution shifts. This renders them susceptible and untrustworthy for deployment in risk-sensitive applications. Such a significant problem has consequently spawned various branches of works dedicated to developing algorithms capable of Out-of-Distribution (OOD) generalization. Despite these efforts, much less attention has been paid to the evaluation of OOD generalization, which is also a complex and fundamental problem. Its goal is not only to assess whether a model's OOD generalization capability is strong or not, but also to evaluate where a model generalizes well or poorly. This entails characterizing the types of distribution shifts that a model can effectively address, and identifying the safe and risky input regions given a model. This paper serves as the first effort to conduct a comprehensive review of OOD evaluation. We categorize existing research into three paradigms: OOD performance testing, OOD performance prediction, and OOD intrinsic property characterization, according to the availability of test data. Additionally, we briefly discuss OOD evaluation in the context of pretrained models. In closing, we propose several promising directions for future research in OOD evaluation.
[ "['Han Yu' 'Jiashuo Liu' 'Xingxuan Zhang' 'Jiayun Wu' 'Peng Cui']" ]
null
null
2403.01875
null
null
http://arxiv.org/pdf/2403.01875v1
2024-03-04T09:31:56Z
2024-03-04T09:31:56Z
ICLN: Input Convex Loss Network for Decision Focused Learning
In decision-making problems under uncertainty, predicting the unknown parameters is often considered independent of the optimization part. Decision-focused Learning (DFL) is a task-oriented framework that integrates prediction and optimization by adapting the predictive model to yield better decisions for the corresponding task. Here, an inevitable challenge arises when computing gradients of the optimal decision with respect to the parameters. Existing research copes with this issue by smoothly reformulating the surrogate optimization or constructing surrogate loss functions that mimic the task loss. However, these approaches apply only to restricted optimization domains or build functions in a local manner, leading to long computation times. In this paper, we propose Input Convex Loss Network (ICLN), a novel global surrogate loss which can be implemented in a general DFL paradigm. ICLN learns the task loss via Input Convex Neural Networks, which are guaranteed to be convex for some inputs while keeping the global structure for the other inputs. This enables ICLN to admit general DFL through only a single surrogate loss, without any need to choose appropriate parametric forms. We confirm the effectiveness and flexibility of ICLN by evaluating our proposed model on three stochastic decision-making problems.
[ "['Haeun Jeon' 'Hyunglip Bae' 'Minsu Park' 'Chanyeong Kim' 'Woo Chang Kim']" ]
null
null
2403.01888
null
null
http://arxiv.org/pdf/2403.01888v2
2024-04-18T01:56:05Z
2024-03-04T09:49:35Z
Fast Benchmarking of Asynchronous Multi-Fidelity Optimization on Zero-Cost Benchmarks
While deep learning has celebrated many successes, its results often hinge on the meticulous selection of hyperparameters (HPs). However, the time-consuming nature of deep learning training makes HP optimization (HPO) a costly endeavor, slowing down the development of efficient HPO tools. While zero-cost benchmarks, which provide performance and runtime without actual training, offer a solution for non-parallel setups, they fall short in parallel setups because each worker must communicate its queried runtime to return its evaluation in the exact order. This work addresses this challenge by introducing a user-friendly Python package that facilitates efficient parallel HPO with zero-cost benchmarks. Our approach calculates the exact return order based on the information stored in the file system, eliminating the need for long waiting times and enabling much faster HPO evaluations. We first verify the correctness of our approach through extensive testing, and our experiments with 6 popular HPO libraries show its applicability to diverse libraries and its ability to achieve over 1000x speedup compared to a traditional approach. Our package can be installed via pip install mfhpo-simulator.
[ "['Shuhei Watanabe' 'Neeratyoy Mallik' 'Edward Bergman' 'Frank Hutter']" ]
null
null
2403.01895
null
null
http://arxiv.org/pdf/2403.01895v1
2024-03-04T09:55:16Z
2024-03-04T09:55:16Z
Unsupervised Distance Metric Learning for Anomaly Detection Over Multivariate Time Series
Distance-based time series anomaly detection methods are prevalent due to their relative non-parametric nature and interpretability. However, the commonly used Euclidean distance is sensitive to noise. While existing works have explored dynamic time warping (DTW) for its robustness, they only support supervised tasks over multivariate time series (MTS), leaving a scarcity of unsupervised methods. In this work, we propose FCM-wDTW, an unsupervised distance metric learning method for anomaly detection over MTS, which encodes raw data into latent space and reveals normal dimension relationships through cluster centers. FCM-wDTW introduces locally weighted DTW into fuzzy C-means clustering and learns the optimal latent space efficiently, enabling anomaly identification via data reconstruction. Experiments with 11 different types of benchmarks demonstrate our method's competitive accuracy and efficiency.
[ "['Hanyang Yuan' 'Qinglin Cai' 'Keting Yin']" ]
null
null
2403.01896
null
null
http://arxiv.org/pdf/2403.01896v1
2024-03-04T09:55:43Z
2024-03-04T09:55:43Z
Robustness Bounds on the Successful Adversarial Examples: Theory and Practice
An adversarial example (AE) is an attack method for machine learning, crafted by adding an imperceptible perturbation to the data that induces misclassification. In this paper, we investigate the upper bound of the probability of successful AEs based on Gaussian Process (GP) classification. We prove a new upper bound that depends on the AE's perturbation norm, the kernel function used in GP, and the distance of the closest pair with different labels in the training dataset. Surprisingly, the upper bound is determined regardless of the distribution of the sample dataset. We confirm our theoretical result through experiments using ImageNet. In addition, we show that changing the parameters of the kernel function induces a change in the upper bound of the probability of successful AEs.
[ "['Hiroaki Maeshima' 'Akira Otsuka']" ]
null
null
2403.01900
null
null
http://arxiv.org/pdf/2403.01900v1
2024-03-04T09:59:11Z
2024-03-04T09:59:11Z
Universality of reservoir systems with recurrent neural networks
Approximation capability of reservoir systems whose reservoir is a recurrent neural network (RNN) is discussed. In our problem setting, a reservoir system approximates a set of functions just by adjusting its linear readout while the reservoir is fixed. We will show what we call uniform strong universality of a family of RNN reservoir systems for a certain class of functions to be approximated. This means that, for any positive number, we can construct a sufficiently large RNN reservoir system whose approximation error for each function in the class of functions to be approximated is bounded from above by the positive number. Such RNN reservoir systems are constructed via parallel concatenation of RNN reservoirs.
[ "['Hiroki Yasumoto' 'Toshiyuki Tanaka']" ]
null
null
2403.01907
null
null
http://arxiv.org/pdf/2403.01907v1
2024-03-04T10:10:23Z
2024-03-04T10:10:23Z
Capacity of the Hebbian-Hopfield network associative memory
In \cite{Hop82}, Hopfield introduced a \emph{Hebbian} learning rule based neural network model and suggested how it can efficiently operate as an associative memory. Studying random binary patterns, he also uncovered that, if a small fraction of errors is tolerated in the stored patterns' retrieval, the capacity of the network (the maximal number of memorized patterns, $m$) scales linearly with each pattern's size, $n$. Moreover, he famously predicted $\alpha_c=\lim_{n\rightarrow\infty}\frac{m}{n}\approx 0.14$. We study this very same scenario with two famous patterns' basins of attraction: \emph{(i)} the AGS one from \cite{AmiGutSom85}; and \emph{(ii)} the NLT one from \cite{Newman88,Louk94,Louk94a,Louk97,Tal98}. Relying on the \emph{fully lifted random duality theory} (fl RDT) from \cite{Stojnicflrdt23}, we obtain the following explicit capacity characterizations on the first level of lifting: $\alpha_c^{(AGS,1)} = \left( \max_{\delta\in\left(0,\frac{1}{2}\right)}\frac{1-2\delta}{\sqrt{2}\,\mathrm{erfinv}\left(1-2\delta\right)} - \frac{2}{\sqrt{2\pi}} e^{-\left(\mathrm{erfinv}\left(1-2\delta\right)\right)^2}\right)^2 \approx 0.137906$ and $\alpha_c^{(NLT,1)} = \frac{\mathrm{erf}(x)^2}{2x^2}-1+\mathrm{erf}(x)^2 \approx 0.129490$, where $x$ solves $1-\mathrm{erf}(x)^2- \frac{2\,\mathrm{erf}(x)e^{-x^2}}{\sqrt{\pi}x}+\frac{2e^{-2x^2}}{\pi}=0$. Substantial numerical work gives, on the second level of lifting, $\alpha_c^{(AGS,2)} \approx 0.138186$ and $\alpha_c^{(NLT,2)} \approx 0.12979$, effectively uncovering a remarkably fast lifting convergence. Moreover, the obtained AGS characterizations exactly match the replica symmetry based ones of \cite{AmiGutSom85} and the corresponding symmetry breaking ones of \cite{SteKuh94}.
[ "['Mihailo Stojnic']" ]
null
null
2403.01918
null
null
http://arxiv.org/pdf/2403.01918v1
2024-03-04T10:32:48Z
2024-03-04T10:32:48Z
Towards Continuous Assurance Case Creation for ADS with the Evidential Tool Bus
An assurance case has become an integral component of the certification of safety-critical systems. While manually defining assurance case patterns cannot be avoided, system-specific instantiations of assurance case patterns are both costly and time-consuming. It becomes especially complex to maintain an assurance case for a system when the requirements of the System-Under-Assurance change, or an assurance claim becomes invalid due to, e.g., degradation of a system's component, as is common when deploying learning-enabled components. In this paper, we report on our preliminary experience leveraging the tool integration framework Evidential Tool Bus (ETB) for the construction and continuous maintenance of an assurance case from a predefined assurance case pattern. Specifically, we demonstrate the assurance process on an industrial Automated Valet Parking system from the automotive domain. We present the formalization of the provided assurance case pattern in ETB's processable logical specification language of workflows. Our findings show that ETB is able to create and maintain the evidence required for the construction of an assurance case.
[ "['Lev Sorokin' 'Radouane Bouchekir' 'Tewodros A. Beyene'\n 'Brian Hsuan-Cheng Liao' 'Adam Molin']" ]
null
null
2403.01919
null
null
http://arxiv.org/pdf/2403.01919v2
2024-03-05T10:12:36Z
2024-03-04T10:36:06Z
Matrix Completion with Convex Optimization and Column Subset Selection
We introduce a two-step method for the matrix recovery problem. Our approach combines the theoretical foundations of the Column Subset Selection and Low-rank Matrix Completion problems. The proposed method, in each step, solves a convex optimization task. We present two algorithms that implement our Columns Selected Matrix Completion (CSMC) method, each dedicated to a different problem size. We performed a formal analysis of the presented method, in which we formulated the necessary assumptions and the probability of finding a correct solution. In the second part of the paper, we present the results of the experimental work. Numerical experiments verified the correctness and performance of the algorithms. To study the influence of the matrix size, rank, and the proportion of missing elements on the quality of the solution and the computation time, we performed experiments on synthetic data. The presented method was applied to two real-life problems: prediction of movie ratings in a recommendation system and image inpainting. Our thorough analysis shows that CSMC provides solutions of comparable quality to matrix completion algorithms based on convex optimization. However, CSMC offers notable savings in terms of runtime.
[ "['Antonina Krajewska' 'Ewa Niewiadomska-Szynkiewicz']" ]
null
null
2403.01922
null
null
http://arxiv.org/abs/2403.01922v2
2024-06-20T09:03:17Z
2024-03-04T10:39:58Z
FlowPrecision: Advancing FPGA-Based Real-Time Fluid Flow Estimation with Linear Quantization
In industrial and environmental monitoring, achieving real-time and precise fluid flow measurement remains a critical challenge. This study applies linear quantization in FPGA-based soft sensors for fluid flow estimation, significantly enhancing Neural Network model precision by overcoming the limitations of traditional fixed-point quantization. Our approach achieves up to a 10.10% reduction in Mean Squared Error and a notable 9.39% improvement in inference speed through targeted hardware optimizations. Validated across multiple data sets, our findings demonstrate that the optimized FPGA-based quantized models can provide efficient, accurate real-time inference, offering a viable alternative to cloud-based processing in pervasive autonomous systems.
[ "['Tianheng Ling' 'Julian Hoever' 'Chao Qian' 'Gregor Schiele']" ]
null
null
2403.01942
null
null
http://arxiv.org/pdf/2403.01942v2
2024-06-05T03:04:12Z
2024-03-04T11:24:51Z
Mitigating Label Noise on Graph via Topological Sample Selection
Despite the success of carefully-annotated benchmarks, the effectiveness of existing graph neural networks (GNNs) can be considerably impaired in practice when the real-world graph data is noisily labeled. Previous explorations in sample selection have been demonstrated as an effective way for robust learning with noisy labels; however, the conventional studies focus on i.i.d data, and when moving to non-iid graph data and GNNs, two notable challenges remain: (1) nodes located near topological class boundaries are very informative for classification but cannot be successfully distinguished by heuristic sample selection; (2) there is no available measure that considers the graph topological information to promote sample selection in a graph. To address this dilemma, we propose a \textit{Topological Sample Selection} (TSS) method that boosts the informative sample selection process in a graph by utilising topological information. We theoretically prove that our procedure minimizes an upper bound of the expected risk under the target clean distribution, and experimentally show the superiority of our method compared with state-of-the-art baselines.
[ "['Yuhao Wu' 'Jiangchao Yao' 'Xiaobo Xia' 'Jun Yu' 'Ruxin Wang' 'Bo Han'\n 'Tongliang Liu']" ]
null
null
2403.01944
null
null
http://arxiv.org/pdf/2403.01944v2
2024-03-05T08:43:31Z
2024-03-04T11:30:02Z
Fourier-basis Functions to Bridge Augmentation Gap: Rethinking Frequency Augmentation in Image Classification
Computer vision models normally witness degraded performance when deployed in real-world scenarios, due to unexpected changes in inputs that were not accounted for during training. Data augmentation is commonly used to address this issue, as it aims to increase data variety and reduce the distribution gap between training and test data. However, common visual augmentations might not guarantee extensive robustness of computer vision models. In this paper, we propose Auxiliary Fourier-basis Augmentation (AFA), a complementary technique targeting augmentation in the frequency domain and filling the augmentation gap left by visual augmentations. We demonstrate the utility of augmentation via Fourier-basis additive noise in a straightforward and efficient adversarial setting. Our results show that AFA benefits the robustness of models against common corruptions, OOD generalization, and consistency of performance of models against increasing perturbations, with negligible deficit to the standard performance of models. It can be seamlessly integrated with other augmentation techniques to further boost performance. Code and models can be found at: https://github.com/nis-research/afa-augment
[ "['Puru Vaish' 'Shunxin Wang' 'Nicola Strisciuglio']" ]
null
null
2403.01946
null
null
http://arxiv.org/pdf/2403.01946v2
2024-06-20T21:56:54Z
2024-03-04T11:32:18Z
A Generative Model of Symmetry Transformations
Correctly capturing the symmetry transformations of data can lead to efficient models with strong generalization capabilities, though methods incorporating symmetries often require prior knowledge. While recent advancements have been made in learning those symmetries directly from the dataset, most of this work has focused on the discriminative setting. In this paper, we take inspiration from group theoretic ideas to construct a generative model that explicitly aims to capture the data's approximate symmetries. This results in a model that, given a prespecified broad set of possible symmetries, learns to what extent, if at all, those symmetries are actually present. Our model can be seen as a generative process for data augmentation. We provide a simple algorithm for learning our generative model and empirically demonstrate its ability to capture symmetries under affine and color transformations, in an interpretable way. Combining our symmetry model with standard generative models results in higher marginal test-log-likelihoods and improved data efficiency.
[ "['James Urquhart Allingham' 'Bruno Kacper Mlodozeniec' 'Shreyas Padhy'\n 'Javier Antorán' 'David Krueger' 'Richard E. Turner' 'Eric Nalisnick'\n 'José Miguel Hernández-Lobato']" ]
null
null
2403.01948
null
null
http://arxiv.org/pdf/2403.01948v1
2024-03-04T11:34:12Z
2024-03-04T11:34:12Z
On Fractional Moment Estimation from Polynomial Chaos Expansion
Fractional statistical moments are utilized for various tasks of uncertainty quantification, including the estimation of probability distributions. However, the estimation of fractional statistical moments of costly mathematical models by statistical sampling is challenging, since it is typically not possible to create a large experimental design due to limitations in computing capacity. This paper presents a novel approach for the analytical estimation of fractional moments, directly from polynomial chaos expansions (PCE). Specifically, the first four statistical moments obtained from the deterministic PCE coefficients are used for an estimation of arbitrary fractional moments via H\"{o}lder's inequality. The proposed approach is utilized for the estimation of statistical moments and probability distributions in three numerical examples of increasing complexity. The obtained results show that the proposed approach achieves superior performance in estimating the distribution of the response, in comparison to standard Latin hypercube sampling, in the presented examples.
[ "['Lukáš Novák' 'Marcos Valdebenito' 'Matthias Faes']" ]
null
null
2403.02004
null
null
http://arxiv.org/pdf/2403.02004v2
2024-04-11T07:54:55Z
2024-03-04T12:57:26Z
Error bounds for particle gradient descent, and extensions of the log-Sobolev and Talagrand inequalities
We prove non-asymptotic error bounds for particle gradient descent (PGD) (Kuntz et al., 2023), a recently introduced algorithm for maximum likelihood estimation of large latent variable models obtained by discretizing a gradient flow of the free energy. We begin by showing that, for models satisfying a condition generalizing both the log-Sobolev and the Polyak--\L{}ojasiewicz inequalities (LSI and P\L{}I, respectively), the flow converges exponentially fast to the set of minimizers of the free energy. We achieve this by extending a result well-known in the optimal transport literature (that the LSI implies the Talagrand inequality) and its counterpart in the optimization literature (that the P\L{}I implies the so-called quadratic growth condition), and applying it to our new setting. We also generalize the Bakry--\'Emery Theorem and show that the LSI/P\L{}I generalization holds for models with strongly concave log-likelihoods. For such models, we further control PGD's discretization error, obtaining non-asymptotic error bounds. While we are motivated by the study of PGD, we believe that the inequalities and results we extend may be of independent interest.
[ "['Rocco Caprio' 'Juan Kuntz' 'Samuel Power' 'Adam M. Johansen']" ]
null
null
2403.02011
null
null
http://arxiv.org/pdf/2403.02011v1
2024-03-04T13:12:02Z
2024-03-04T13:12:02Z
Bipartite Graph Variational Auto-Encoder with Fair Latent Representation to Account for Sampling Bias in Ecological Networks
We propose a method to represent bipartite networks using graph embeddings tailored to tackle the challenges of studying ecological networks, such as the ones linking plants and pollinators, where many covariates need to be accounted for, in particular to control for sampling bias. We adapt the variational graph auto-encoder approach to the bipartite case, which enables us to generate embeddings in a latent space where the two sets of nodes are positioned based on their probability of connection. We translate the fairness framework commonly considered in sociology in order to address sampling bias in ecology. By incorporating the Hilbert-Schmidt independence criterion (HSIC) as an additional penalty term in the loss we optimize, we ensure that the structure of the latent space is independent of continuous variables, which are related to the sampling process. Finally, we show how our approach can change our understanding of ecological networks when applied to the Spipoll data set, a citizen science monitoring program of plant-pollinator interactions to which many observers contribute, making it prone to sampling bias.
[ "['Emre Anakok' 'Pierre Barbillon' 'Colin Fontaine' 'Elisa Thebault']" ]
null
null
2403.02019
null
null
http://arxiv.org/pdf/2403.02019v1
2024-03-04T13:20:52Z
2024-03-04T13:20:52Z
Active Learning of Mealy Machines with Timers
We present the first algorithm for query learning of a general class of Mealy machines with timers (MMTs) in a black-box context. Our algorithm is an extension of the L# algorithm of Vaandrager et al. to a timed setting. Like the algorithm for learning timed automata proposed by Waga, our algorithm is inspired by ideas of Maler & Pnueli. Based on their notion of elementary languages, both Waga's algorithm and ours use symbolic queries, which are then implemented using finitely many concrete queries. However, whereas Waga needs exponentially many concrete queries to implement a single symbolic query, we only need a polynomial number. This is because, in order to learn a timed automaton, a learner needs to determine the exact guard and reset for each transition (out of exponentially many possibilities), whereas for learning an MMT a learner only needs to figure out which of the preceding transitions caused a timeout. As shown in our previous work, this can be done efficiently for a subclass of MMTs that are race-avoiding: if a timeout is caused by a preceding input, then a slight change in the timing of this input will induce a corresponding change in the timing of the timeout ("wiggling"). Experiments with a prototype implementation, written in Rust, show that our algorithm is able to efficiently learn realistic benchmarks.
[ "['Véronique Bruyère' 'Bharat Garhewal' 'Guillermo A. Pérez'\n 'Gaëtan Staquet' 'Frits W. Vaandrager']" ]
null
null
2403.02035
null
null
http://arxiv.org/pdf/2403.02035v2
2024-06-14T14:02:12Z
2024-03-04T13:39:22Z
Exponential Expressivity of ReLU$^k$ Neural Networks on Gevrey Classes with Point Singularities
We analyze deep Neural Network emulation rates of smooth functions with point singularities in bounded, polytopal domains $\mathrm{D} \subset \mathbb{R}^d$, $d=2,3$. We prove exponential emulation rates in Sobolev spaces in terms of the number of neurons and in terms of the number of nonzero coefficients for Gevrey-regular solution classes defined in terms of weighted Sobolev scales in $\mathrm{D}$, comprising the countably-normed spaces of I.M. Babu\v{s}ka and B.Q. Guo. As an intermediate result, we prove that continuous, piecewise polynomial high order (``$p$-version'') finite elements with elementwise polynomial degree $p\in\mathbb{N}$ on arbitrary, regular, simplicial partitions of polyhedral domains $\mathrm{D} \subset \mathbb{R}^d$, $d\geq 2$, can be exactly emulated by neural networks combining ReLU and ReLU$^2$ activations. On shape-regular, simplicial partitions of polytopal domains $\mathrm{D}$, both the number of neurons and the number of nonzero parameters are proportional to the number of degrees of freedom of the finite element space, in particular for the $hp$-Finite Element Method of I.M. Babu\v{s}ka and B.Q. Guo.
[ "['Joost A. A. Opschoor' 'Christoph Schwab']" ]
null
null
2403.02042
null
null
http://arxiv.org/pdf/2403.02042v1
2024-03-04T13:47:33Z
2024-03-04T13:47:33Z
Deep Neural Network for Constraint Acquisition through Tailored Loss Function
The significance of learning constraints from data is underscored by its potential applications in real-world problem-solving. While constraints are popular for modeling and solving, approaches to learning constraints from data remain relatively scarce. Furthermore, the intricate task of modeling demands expertise and is prone to errors; constraint acquisition methods offer a solution by automating this process through constraints learnt from examples or from the behaviours of solutions and non-solutions. This work introduces a novel approach grounded in Deep Neural Networks (DNNs) and Symbolic Regression in which, by setting suitable loss functions, constraints can be extracted directly from datasets. Using the present approach, a direct formulation of constraints was achieved. Furthermore, given the broad pre-developed architectures and functionalities of DNNs, connections and extensions with other frameworks can be foreseen.
[ "['Eduardo Vyhmeister' 'Rocio Paez' 'Gabriel Gonzalez']" ]
null
null
2403.02051
null
null
http://arxiv.org/pdf/2403.02051v1
2024-03-04T13:53:41Z
2024-03-04T13:53:41Z
Differential Privacy of Noisy (S)GD under Heavy-Tailed Perturbations
Injecting heavy-tailed noise into the iterates of stochastic gradient descent (SGD) has received increasing attention over the past few years. While various theoretical properties of the resulting algorithm have been analyzed mainly from learning theory and optimization perspectives, their privacy preservation properties have not yet been established. Aiming to bridge this gap, we provide differential privacy (DP) guarantees for noisy SGD, when the injected noise follows an $\alpha$-stable distribution, which includes a spectrum of heavy-tailed distributions (with infinite variance) as well as the Gaussian distribution. Considering the $(\epsilon, \delta)$-DP framework, we show that SGD with heavy-tailed perturbations achieves $(0, \tilde{\mathcal{O}}(1/n))$-DP for a broad class of loss functions which can be non-convex, where $n$ is the number of data points. As a remarkable byproduct, contrary to prior work that necessitates bounded sensitivity for the gradients or clipping the iterates, our theory reveals that under mild assumptions, such a projection step is not actually necessary. We illustrate that the heavy-tailed noising mechanism achieves similar DP guarantees compared to the Gaussian case, which suggests that it can be a viable alternative to its light-tailed counterparts.
[ "['Umut Şimşekli' 'Mert Gürbüzbalaban' 'Sinan Yıldırım' 'Lingjiong Zhu']" ]
null
null
2403.02080
null
null
http://arxiv.org/pdf/2403.02080v1
2024-03-04T14:26:52Z
2024-03-04T14:26:52Z
Hybrid Quantum Neural Network Advantage for Radar-Based Drone Detection and Classification in Low Signal-to-Noise Ratio
In this paper, we investigate the performance of a Hybrid Quantum Neural Network (HQNN) and a comparable classical Convolutional Neural Network (CNN) for a detection and classification problem using radar. Specifically, we use a fairly complex radar time-series model derived from electromagnetic theory, namely the Martin-Mulgrew model, which is used to simulate radar returns of objects with rotating blades, such as drones. We find that when the signal-to-noise ratio (SNR) is high, the CNN outperforms the HQNN for detection and classification. However, in the low-SNR regime (which is of greatest interest in practice), the performance of the HQNN is found to be superior to that of a CNN of similar architecture.
[ "['Aiswariya Sweety Malarvanan']" ]
null
null
2403.02090
null
null
http://arxiv.org/pdf/2403.02090v3
2024-04-29T12:16:04Z
2024-03-04T14:46:58Z
Modeling Multimodal Social Interactions: New Challenges and Baselines with Densely Aligned Representations
Understanding social interactions involving both verbal and non-verbal cues is essential for effectively interpreting social situations. However, most prior works on multimodal social cues focus predominantly on single-person behaviors or rely on holistic visual representations that are not aligned to utterances in multi-party environments. Consequently, they are limited in modeling the intricate dynamics of multi-party interactions. In this paper, we introduce three new challenging tasks to model the fine-grained dynamics between multiple people: speaking target identification, pronoun coreference resolution, and mentioned player prediction. We contribute extensive data annotations to curate these new challenges in social deduction game settings. Furthermore, we propose a novel multimodal baseline that leverages densely aligned language-visual representations by synchronizing visual features with their corresponding utterances. This facilitates concurrently capturing verbal and non-verbal cues pertinent to social reasoning. Experiments demonstrate the effectiveness of the proposed approach with densely aligned multimodal representations in modeling fine-grained social interactions. Project website: https://sangmin-git.github.io/projects/MMSI.
[ "['Sangmin Lee' 'Bolin Lai' 'Fiona Ryan' 'Bikram Boote' 'James M. Rehg']" ]
null
null
2403.02107
null
null
http://arxiv.org/pdf/2403.02107v2
2024-05-25T11:42:15Z
2024-03-04T15:07:33Z
Iterated $Q$-Network: Beyond One-Step Bellman Updates in Deep Reinforcement Learning
The vast majority of Reinforcement Learning methods are largely impacted by the computation effort and data requirements needed to obtain effective estimates of action-value functions, which in turn determine the quality of the overall performance and the sample-efficiency of the learning procedure. Typically, action-value functions are estimated through an iterative scheme that alternates the application of an empirical approximation of the Bellman operator and a subsequent projection step onto a considered function space. It has been observed that this scheme can be potentially generalized to carry out multiple iterations of the Bellman operator at once, benefiting the underlying learning algorithm. However, until now, it has been challenging to effectively implement this idea, especially in high-dimensional problems. In this paper, we introduce iterated $Q$-Network (iQN), a novel principled approach that enables multiple consecutive Bellman updates by learning a tailored sequence of action-value functions where each serves as the target for the next. We show that iQN is theoretically grounded and that it can be seamlessly used in value-based and actor-critic methods. We empirically demonstrate the advantages of iQN in Atari $2600$ games and MuJoCo continuous control problems.
[ "['Théo Vincent' 'Daniel Palenicek' 'Boris Belousov' 'Jan Peters'\n \"Carlo D'Eramo\"]" ]
null
null
2403.02116
null
null
http://arxiv.org/pdf/2403.02116v1
2024-03-04T15:20:19Z
2024-03-04T15:20:19Z
Inf2Guard: An Information-Theoretic Framework for Learning Privacy-Preserving Representations against Inference Attacks
Machine learning (ML) is vulnerable to inference attacks (e.g., membership inference, property inference, and data reconstruction) that aim to infer private information about the training data or dataset. Existing defenses are designed for only one specific type of attack, and they either sacrifice significant utility or are soon broken by adaptive attacks. We address these limitations by proposing an information-theoretic defense framework, called Inf2Guard, against the three major types of inference attacks. Our framework, inspired by the success of representation learning, posits that learning shared representations not only saves time/costs but also benefits numerous downstream tasks. Generally, Inf2Guard involves two mutual information objectives, for privacy protection and utility preservation, respectively. Inf2Guard exhibits many merits: it facilitates the design of customized objectives against a specific inference attack; it provides a general defense framework which can treat certain existing defenses as special cases; and, importantly, it aids in deriving theoretical results, e.g., an inherent utility-privacy tradeoff and guaranteed privacy leakage. Extensive evaluations validate the effectiveness of Inf2Guard for learning privacy-preserving representations against inference attacks and demonstrate its superiority over the baselines.
[ "['Sayedeh Leila Noorbakhsh' 'Binghui Zhang' 'Yuan Hong' 'Binghui Wang']" ]
null
null
2403.02150
null
null
http://arxiv.org/pdf/2403.02150v1
2024-03-04T16:00:35Z
2024-03-04T16:00:35Z
Recency-Weighted Temporally-Segmented Ensemble for Time-Series Modeling
Time-series modeling in process industries faces the challenge of dealing with complex, multi-faceted, and evolving data characteristics. Conventional single model approaches often struggle to capture the interplay of diverse dynamics, resulting in suboptimal forecasts. Addressing this, we introduce the Recency-Weighted Temporally-Segmented (ReWTS, pronounced `roots') ensemble model, a novel chunk-based approach for multi-step forecasting. The key characteristics of the ReWTS model are twofold: 1) It facilitates specialization of models into different dynamics by segmenting the training data into `chunks' of data and training one model per chunk. 2) During inference, an optimization procedure assesses each model on the recent past and selects the active models, such that the appropriate mixture of previously learned dynamics can be recalled to forecast the future. This method not only captures the nuances of each period, but also adapts more effectively to changes over time compared to conventional `global' models trained on all data in one go. We present a comparative analysis, utilizing two years of data from a wastewater treatment plant and a drinking water treatment plant in Norway, demonstrating the ReWTS ensemble's superiority. It consistently outperforms the global model in terms of mean squared forecasting error across various model architectures by 10-70% on both datasets, notably exhibiting greater resilience to outliers. This approach shows promise in developing automatic, adaptable forecasting models for decision-making and control systems in process industries and other complex systems.
[ "['Pål V. Johnsen' 'Eivind Bøhn' 'Sølve Eidnes' 'Filippo Remonato'\n 'Signe Riemer-Sørensen']" ]
null
null
2403.02171
null
null
http://arxiv.org/pdf/2403.02171v1
2024-03-04T16:17:43Z
2024-03-04T16:17:43Z
Predicting large scale cosmological structure evolution with GAN-based autoencoders
Cosmological simulations play a key role in the prediction and understanding of large scale structure formation from initial conditions. We make use of GAN-based Autoencoders (AEs) in an attempt to predict structure evolution within simulations. The AEs are trained on images and cubes issued from respectively 2D and 3D N-body simulations describing the evolution of the dark matter (DM) field. We find that while the AEs can predict structure evolution for 2D simulations of DM fields well, using only the density fields as input, they perform significantly more poorly in similar conditions for 3D simulations. However, additionally providing velocity fields as inputs greatly improves results, with similar predictions regardless of time-difference between input and target.
[ "['Marion Ullmo' 'Nabila Aghnim' 'Aurélien Decelle' 'Miguel Aragon-Calvo']" ]
null
null
2403.02178
null
null
http://arxiv.org/pdf/2403.02178v2
2024-07-10T19:15:24Z
2024-03-04T16:21:54Z
Masked Thought: Simply Masking Partial Reasoning Steps Can Improve Mathematical Reasoning Learning of Language Models
In reasoning tasks, even a minor error can cascade into inaccurate results, leading to suboptimal performance of large language models in such domains. Earlier fine-tuning approaches sought to mitigate this by leveraging more precise supervisory signals from human labeling, larger models, or self-sampling, although at a high cost. Conversely, we develop a method that avoids external resources, relying instead on introducing perturbations to the input. Our training approach randomly masks certain tokens within the chain of thought, a technique we found to be particularly effective for reasoning tasks. When applied to fine-tuning with GSM8K on Llama-2-7B, this method achieved a 5% improvement in GSM8K accuracy and a 10% improvement in GSM-IC accuracy over standard supervised fine-tuning, with only a few lines of code modified. Furthermore, it is complementary to existing methods. When integrated with related explicit data augmentation methods, it leads to improvements across five datasets with various augmentation methods, as well as two different base models. We further investigate the mechanisms behind this improvement through case studies and quantitative analysis, suggesting that our approach may provide superior support for the model in capturing long-distance dependencies, especially those related to questions. This enhancement could deepen understanding of the premises in questions and prior steps. Our code is available on GitHub.
[ "['Changyu Chen' 'Xiting Wang' 'Ting-En Lin' 'Ang Lv' 'Yuchuan Wu'\n 'Xin Gao' 'Ji-Rong Wen' 'Rui Yan' 'Yongbin Li']" ]
null
null
2403.02181
null
null
http://arxiv.org/pdf/2403.02181v3
2024-07-09T11:59:01Z
2024-03-04T16:23:58Z
Not All Layers of LLMs Are Necessary During Inference
Due to the large number of parameters, the inference phase of Large Language Models (LLMs) is resource-intensive. However, not all requests posed to LLMs are equally difficult to handle. Through analysis, we show that for some tasks, LLMs can achieve results comparable to the final output at some intermediate layers. That is, not all layers of LLMs are necessary during inference. If we can predict at which layer the inferred results match the final results (produced by evaluating all layers), we could significantly reduce the inference cost. To this end, we propose a simple yet effective algorithm named AdaInfer to adaptively terminate the inference process for an input instance. AdaInfer relies on easily obtainable statistical features and classic classifiers like SVM. Experiments on well-known LLMs like the Llama2 series and OPT show that AdaInfer can achieve an average pruning ratio of 17.8%, and up to 43% on sentiment tasks, with nearly no performance drop (<1%). Because AdaInfer does not alter LLM parameters, LLMs incorporating AdaInfer maintain generalizability across tasks.
[ "['Siqi Fan' 'Xin Jiang' 'Xiang Li' 'Xuying Meng' 'Peng Han' 'Shuo Shang'\n 'Aixin Sun' 'Yequan Wang' 'Zhongyuan Wang']" ]
null
null
2403.02185
null
null
http://arxiv.org/pdf/2403.02185v1
2024-03-04T16:27:21Z
2024-03-04T16:27:21Z
Distilled ChatGPT Topic & Sentiment Modeling with Applications in Finance
In this study, ChatGPT is utilized to create streamlined models that generate easily interpretable features. These features are then used to evaluate financial outcomes from earnings calls. We detail a training approach that merges knowledge distillation and transfer learning, resulting in lightweight topic and sentiment classification models without significant loss in accuracy. These models are assessed through a dataset annotated by experts. The paper also delves into two practical case studies, highlighting how the generated features can be effectively utilized in quantitative investing scenarios.
[ "['Olivier Gandouet' 'Mouloud Belbahri' 'Armelle Jezequel' 'Yuriy Bodjov']" ]
null
null
2403.02187
null
null
http://arxiv.org/pdf/2403.02187v3
2024-05-25T09:37:21Z
2024-03-04T16:28:04Z
Mutual Information Estimation via Normalizing Flows
We propose a novel approach to the problem of mutual information (MI) estimation by introducing a family of estimators based on normalizing flows. The estimator maps the original data to a target distribution for which MI is easier to estimate. We additionally explore target distributions with known closed-form expressions for MI. Theoretical guarantees are provided to demonstrate that our approach yields MI estimates for the original data. Experiments with high-dimensional data are conducted to highlight the practical advantages of the proposed method.
[ "['Ivan Butakov' 'Alexander Tolmachev' 'Sofia Malanchuk'\n 'Anna Neopryatnaya' 'Alexey Frolov']" ]
null
null
2403.02215
null
null
http://arxiv.org/pdf/2403.02215v3
2024-05-06T22:45:25Z
2024-03-04T17:02:23Z
Joint Parameter and Parameterization Inference with Uncertainty Quantification through Differentiable Programming
Accurate representations of unknown and sub-grid physical processes through parameterizations (or closure) in numerical simulations with quantified uncertainty are critical for resolving the coarse-grained partial differential equations that govern many problems ranging from weather and climate prediction to turbulence simulations. Recent advances have seen machine learning (ML) increasingly applied to model these subgrid processes, resulting in the development of hybrid physics-ML models through the integration with numerical solvers. In this work, we introduce a novel framework for the joint estimation of physical parameters and machine learning parameterizations with uncertainty quantification. Our framework incorporates online training and efficient Bayesian inference within a high-dimensional parameter space, facilitated by differentiable programming. This proof of concept underscores the substantial potential of differentiable programming in synergistically combining machine learning with differential equations, thereby enhancing the capabilities of hybrid physics-ML modeling.
[ "['Yongquan Qu' 'Mohamed Aziz Bhouri' 'Pierre Gentine']" ]
null
null
2403.02221
null
null
http://arxiv.org/pdf/2403.02221v2
2024-03-18T16:01:26Z
2024-03-04T17:08:57Z
TPLLM: A Traffic Prediction Framework Based on Pretrained Large Language Models
Traffic prediction constitutes a pivotal facet within the purview of Intelligent Transportation Systems (ITS), and the attainment of highly precise predictions holds profound significance for efficacious traffic management. The precision of prevailing deep learning-driven traffic prediction models typically sees an upward trend with a rise in the volume of training data. However, the procurement of comprehensive spatiotemporal datasets for traffic is often fraught with challenges, primarily stemming from the substantial costs associated with data collection and retention. Consequently, developing a model that can achieve accurate predictions and good generalization ability in areas with limited historical traffic data is a challenging problem. It is noteworthy that the rapidly advancing pretrained Large Language Models (LLMs) of recent years have demonstrated exceptional proficiency in cross-modality knowledge transfer and few-shot learning. Recognizing the sequential nature of traffic data, similar to language, we introduce TPLLM, a novel traffic prediction framework leveraging LLMs. In this framework, we construct a sequence embedding layer based on Convolutional Neural Networks (CNNs) and a graph embedding layer based on Graph Convolutional Networks (GCNs) to extract sequence features and spatial features, respectively. These are subsequently integrated to form inputs that are suitable for LLMs. A Low-Rank Adaptation (LoRA) fine-tuning approach is applied to TPLLM, thereby facilitating efficient learning and minimizing computational demands. Experiments on two real-world datasets demonstrate that TPLLM exhibits commendable performance in both full-sample and few-shot prediction scenarios, effectively supporting the development of ITS in regions with scarce historical traffic data.
[ "['Yilong Ren' 'Yue Chen' 'Shuai Liu' 'Boyue Wang' 'Haiyang Yu'\n 'Zhiyong Cui']" ]
null
null
2403.02232
null
null
http://arxiv.org/abs/2403.02232v2
2024-03-25T21:33:18Z
2024-03-04T17:22:43Z
Comprehensive evaluation of Mal-API-2019 dataset by machine learning in malware detection
This study conducts a thorough examination of malware detection using machine learning techniques, focusing on the evaluation of various classification models using the Mal-API-2019 dataset. The aim is to advance cybersecurity capabilities by identifying and mitigating threats more effectively. Both ensemble and non-ensemble machine learning methods, such as Random Forest, XGBoost, K Nearest Neighbor (KNN), and Neural Networks, are explored. Special emphasis is placed on the importance of data pre-processing techniques, particularly TF-IDF representation and Principal Component Analysis, in improving model performance. Results indicate that ensemble methods, particularly Random Forest and XGBoost, exhibit superior accuracy, precision, and recall compared to others, highlighting their effectiveness in malware detection. The paper also discusses limitations and potential future directions, emphasizing the need for continuous adaptation to address the evolving nature of malware. This research contributes to ongoing discussions in cybersecurity and provides practical insights for developing more robust malware detection systems in the digital era.
[ "['Zhenglin Li' 'Haibei Zhu' 'Houze Liu' 'Jintong Song' 'Qishuo Cheng']" ]
null
null
2403.02233
null
null
http://arxiv.org/pdf/2403.02233v2
2024-06-05T00:22:56Z
2024-03-04T17:24:03Z
How Transformers Learn Diverse Attention Correlations in Masked Vision Pretraining
Masked reconstruction, which predicts randomly masked patches from unmasked ones, has emerged as an important approach in self-supervised pretraining. However, the theoretical understanding of masked pretraining is rather limited, especially for the foundational architecture of transformers. In this paper, to the best of our knowledge, we provide the first end-to-end theoretical guarantee of learning one-layer transformers in masked reconstruction self-supervised pretraining. On the conceptual side, we posit a mechanism of how transformers trained with masked vision pretraining objectives produce empirically observed local and diverse attention patterns, on data distributions with spatial structures that highlight feature-position correlations. On the technical side, our end-to-end characterization of training dynamics in softmax-attention models simultaneously accounts for input and position embeddings, which is developed based on a careful analysis tracking the interplay between feature-wise and position-wise attention correlations.
[ "['Yu Huang' 'Zixin Wen' 'Yuejie Chi' 'Yingbin Liang']" ]
null
null
2403.02241
null
null
http://arxiv.org/pdf/2403.02241v2
2024-03-05T11:43:24Z
2024-03-04T17:33:20Z
Neural Redshift: Random Networks are not Random Functions
Our understanding of the generalization capabilities of neural networks (NNs) is still incomplete. Prevailing explanations are based on implicit biases of gradient descent (GD), but they cannot account for the capabilities of models from gradient-free methods nor for the simplicity bias recently observed in untrained networks. This paper seeks other sources of generalization in NNs. Findings. To understand the inductive biases provided by architectures independently from GD, we examine untrained, random-weight networks. Even simple MLPs show strong inductive biases: uniform sampling in weight space yields a very biased distribution of functions in terms of complexity. But contrary to common wisdom, NNs do not have an inherent "simplicity bias". This property depends on components such as ReLUs, residual connections, and layer normalizations. Alternative architectures can be built with a bias for any level of complexity. Transformers also inherit all these properties from their building blocks. Implications. We provide a fresh explanation for the success of deep learning independent from gradient-based training. It points at promising avenues for controlling the solutions implemented by trained models.
[ "['Damien Teney' 'Armand Nicolicioiu' 'Valentin Hartmann'\n 'Ehsan Abbasnejad']" ]
null
null
2403.02243
null
null
http://arxiv.org/abs/2403.02243v1
2024-03-04T17:33:39Z
2024-03-04T17:33:39Z
Better Schedules for Low Precision Training of Deep Neural Networks
Low precision training can significantly reduce the computational overhead of training deep neural networks (DNNs). Though many such techniques exist, cyclic precision training (CPT), which dynamically adjusts precision throughout training according to a cyclic schedule, achieves particularly impressive improvements in training efficiency, while actually improving DNN performance. Existing CPT implementations take common learning rate schedules (e.g., cyclical cosine schedules) and use them for low precision training without adequate comparisons to alternative scheduling options. We define a diverse suite of CPT schedules and analyze their performance across a variety of DNN training regimes, some of which are unexplored in the low precision training literature (e.g., node classification with graph neural networks). From these experiments, we discover alternative CPT schedules that offer further improvements in training efficiency and model performance, as well as derive a set of best practices for choosing CPT schedules. Going further, we find that a correlation exists between model performance and training cost, and that changing the underlying CPT schedule can control the tradeoff between these two variables. To explain the direct correlation between model performance and training cost, we draw a connection between quantized training and critical learning periods, suggesting that aggressive quantization is a form of learning impairment that can permanently damage model performance.
[ "['Cameron R. Wolfe' 'Anastasios Kyrillidis']" ]
null
null
2403.02251
null
null
http://arxiv.org/pdf/2403.02251v1
2024-03-04T17:35:30Z
2024-03-04T17:35:30Z
A prediction rigidity formalism for low-cost uncertainties in trained neural networks
Regression methods are fundamental for scientific and technological applications. However, fitted models can be highly unreliable outside of their training domain, and hence the quantification of their uncertainty is crucial in many of their applications. Based on the solution of a constrained optimization problem, we propose "prediction rigidities" as a method to obtain uncertainties of arbitrary pre-trained regressors. We establish a strong connection between our framework and Bayesian inference, and we develop a last-layer approximation that allows the new method to be applied to neural networks. This extension affords cheap uncertainties without any modification to the neural network itself or its training procedure. We show the effectiveness of our method on a wide range of regression tasks, ranging from simple toy models to applications in chemistry and meteorology.
[ "['Filippo Bigi' 'Sanggyu Chong' 'Michele Ceriotti' 'Federico Grasselli']" ]
null
null
2403.02253
null
null
http://arxiv.org/pdf/2403.02253v2
2024-06-15T11:34:45Z
2024-03-04T17:38:32Z
KnowPhish: Large Language Models Meet Multimodal Knowledge Graphs for Enhancing Reference-Based Phishing Detection
Phishing attacks have inflicted substantial losses on individuals and businesses alike, necessitating the development of robust and efficient automated phishing detection approaches. Reference-based phishing detectors (RBPDs), which compare the logos on a target webpage to a known set of logos, have emerged as the state-of-the-art approach. However, a major limitation of existing RBPDs is that they rely on a manually constructed brand knowledge base, making it infeasible to scale to a large number of brands, which results in false negative errors due to the insufficient brand coverage of the knowledge base. To address this issue, we propose an automated knowledge collection pipeline, using which we collect a large-scale multimodal brand knowledge base, KnowPhish, containing 20k brands with rich information about each brand. KnowPhish can be used to boost the performance of existing RBPDs in a plug-and-play manner. A second limitation of existing RBPDs is that they solely rely on the image modality, ignoring useful textual information present in the webpage HTML. To utilize this textual information, we propose a Large Language Model (LLM)-based approach to extract brand information of webpages from text. Our resulting multimodal phishing detection approach, KnowPhish Detector (KPD), can detect phishing webpages with or without logos. We evaluate KnowPhish and KPD on a manually validated dataset, and a field study under Singapore's local context, showing substantial improvements in effectiveness and efficiency compared to state-of-the-art baselines.
[ "['Yuexin Li' 'Chengyu Huang' 'Shumin Deng' 'Mei Lin Lock' 'Tri Cao'\n 'Nay Oo' 'Hoon Wei Lim' 'Bryan Hooi']" ]
null
null
2403.02271
null
null
http://arxiv.org/pdf/2403.02271v2
2024-06-06T14:43:30Z
2024-03-04T17:58:09Z
RIFF: Learning to Rephrase Inputs for Few-shot Fine-tuning of Language Models
Pre-trained Language Models (PLMs) can be accurately fine-tuned for downstream text processing tasks. Recently, researchers have introduced several parameter-efficient fine-tuning methods that optimize input prompts or adjust a small number of model parameters (e.g., LoRA). In this study, we explore the impact of altering the input text of the original task in conjunction with parameter-efficient fine-tuning methods. To most effectively rewrite the input text, we train a few-shot paraphrase model with a Maximum-Marginal Likelihood objective. Using six few-shot text classification datasets, we show that enriching data with paraphrases at train and test time enhances performance beyond what can be achieved with parameter-efficient fine-tuning alone. The code used for our experiments can be found at https://github.com/SaeedNajafi/RIFF.
[ "['Saeed Najafi' 'Alona Fyshe']" ]
null
null
2403.02274
null
null
http://arxiv.org/pdf/2403.02274v1
2024-03-04T18:02:41Z
2024-03-04T18:02:41Z
NatSGD: A Dataset with Speech, Gestures, and Demonstrations for Robot Learning in Natural Human-Robot Interaction
Recent advancements in multimodal Human-Robot Interaction (HRI) datasets have highlighted the fusion of speech and gesture, expanding robots' capabilities to absorb explicit and implicit HRI insights. However, existing speech-gesture HRI datasets often focus on elementary tasks, like object pointing and pushing, revealing limitations in scaling to intricate domains and prioritizing human command data over robot behavior records. To bridge these gaps, we introduce NatSGD, a multimodal HRI dataset encompassing natural human commands through speech and gestures, synchronized with robot behavior demonstrations. NatSGD serves as a foundational resource at the intersection of machine learning and HRI research, and we demonstrate its effectiveness in training robots to understand tasks through multimodal human commands, emphasizing the significance of jointly considering speech and gestures. We have released our dataset, simulator, and code to facilitate future research in human-robot interaction system learning; access these resources at https://www.snehesh.com/natsgd/
[ "['Snehesh Shrestha' 'Yantian Zha' 'Saketh Banagiri' 'Ge Gao'\n 'Yiannis Aloimonos' 'Cornelia Fermuller']" ]
null
null
2403.02289
null
null
http://arxiv.org/abs/2403.02289v1
2024-03-04T18:18:52Z
2024-03-04T18:18:52Z
Physics-Informed Neural Networks with Skip Connections for Modeling and Control of Gas-Lifted Oil Wells
Neural networks, while powerful, often lack interpretability. Physics-Informed Neural Networks (PINNs) address this limitation by incorporating physics laws into the loss function, making them applicable to solving Ordinary Differential Equations (ODEs) and Partial Differential Equations (PDEs). The recently introduced PINC framework extends PINNs to control applications, allowing for open-ended long-range prediction and control of dynamic systems. In this work, we enhance PINC for modeling highly nonlinear systems such as gas-lifted oil wells. By introducing skip connections in the PINC network and refining certain terms in the ODE, we achieve more accurate gradients during training, resulting in an effective modeling process for the oil well system. Our proposed improved PINC demonstrates superior performance, reducing the validation prediction error by an average of 67% in the oil well application and significantly enhancing gradient flow through the network layers, increasing its magnitude by four orders of magnitude compared to the original PINC. Furthermore, experiments showcase the efficacy of Model Predictive Control (MPC) in regulating the bottom-hole pressure of the oil well using the improved PINC model, even in the presence of noisy measurements.
[ "['Jonas Ekeland Kittelsen' 'Eric Aislan Antonelo' 'Eduardo Camponogara'\n 'Lars Struen Imsland']" ]
null
null
2403.02290
null
null
http://arxiv.org/pdf/2403.02290v1
2024-03-04T18:19:48Z
2024-03-04T18:19:48Z
Koopman-Assisted Reinforcement Learning
The Bellman equation and its continuous form, the Hamilton-Jacobi-Bellman (HJB) equation, are ubiquitous in reinforcement learning (RL) and control theory. However, these equations quickly become intractable for systems with high-dimensional states and nonlinearity. This paper explores the connection between the data-driven Koopman operator and Markov Decision Processes (MDPs), resulting in the development of two new RL algorithms to address these limitations. We leverage Koopman operator techniques to lift a nonlinear system into new coordinates where the dynamics become approximately linear, and where HJB-based methods are more tractable. In particular, the Koopman operator is able to capture the expectation of the time evolution of the value function of a given system via linear dynamics in the lifted coordinates. By parameterizing the Koopman operator with the control actions, we construct a ``Koopman tensor'' that facilitates the estimation of the optimal value function. Then, a transformation of Bellman's framework in terms of the Koopman tensor enables us to reformulate two max-entropy RL algorithms: soft value iteration and soft actor-critic (SAC). This highly flexible framework can be used for deterministic or stochastic systems as well as for discrete or continuous-time dynamics. Finally, we show that these Koopman Assisted Reinforcement Learning (KARL) algorithms attain state-of-the-art (SOTA) performance with respect to traditional neural network-based SAC and linear quadratic regulator (LQR) baselines on four controlled dynamical systems: a linear state-space system, the Lorenz system, fluid flow past a cylinder, and a double-well potential with non-isotropic stochastic forcing.
[ "['Preston Rozwood' 'Edward Mehrez' 'Ludger Paehler' 'Wen Sun'\n 'Steven L. Brunton']" ]
null
null
2403.02292
null
null
http://arxiv.org/pdf/2403.02292v3
2024-03-15T18:59:55Z
2024-03-04T18:21:56Z
A Decade of Privacy-Relevant Android App Reviews: Large Scale Trends
We present an analysis of 12 million instances of privacy-relevant reviews publicly visible on the Google Play Store that span a 10 year period. By leveraging state of the art NLP techniques, we examine what users have been writing about privacy along multiple dimensions: time, countries, app types, diverse privacy topics, and even across a spectrum of emotions. We find consistent growth of privacy-relevant reviews, and explore topics that are trending (such as Data Deletion and Data Theft), as well as those on the decline (such as privacy-relevant reviews on sensitive permissions). We find that although privacy reviews come from more than 200 countries, 33 countries provide 90% of privacy reviews. We conduct a comparison across countries by examining the distribution of privacy topics a country's users write about, and find that geographic proximity is not a reliable indicator that nearby countries have similar privacy perspectives. We uncover some countries with unique patterns and explore those herein. Surprisingly, we uncover that it is not uncommon for reviews that discuss privacy to be positive (32%); many users express pleasure about privacy features within apps or privacy-focused apps. We also uncover some unexpected behaviors, such as the use of reviews to deliver privacy disclaimers to developers. Finally, we demonstrate the value of analyzing app reviews with our approach as a complement to existing methods for understanding users' perspectives about privacy.
[ "['Omer Akgul' 'Sai Teja Peddinti' 'Nina Taft' 'Michelle L. Mazurek'\n 'Hamza Harkous' 'Animesh Srivastava' 'Benoit Seguin']" ]
null
null
2403.02300
null
null
http://arxiv.org/pdf/2403.02300v1
2024-03-04T18:30:33Z
2024-03-04T18:30:33Z
Statistical Query Lower Bounds for Learning Truncated Gaussians
We study the problem of estimating the mean of an identity covariance Gaussian in the truncated setting, in the regime when the truncation set comes from a low-complexity family $\mathcal{C}$ of sets. Specifically, for a fixed but unknown truncation set $S \subseteq \mathbb{R}^d$, we are given access to samples from the distribution $\mathcal{N}(\boldsymbol{\mu}, \mathbf{I})$ truncated to the set $S$. The goal is to estimate $\boldsymbol{\mu}$ within accuracy $\epsilon>0$ in $\ell_2$-norm. Our main result is a Statistical Query (SQ) lower bound suggesting a super-polynomial information-computation gap for this task. In more detail, we show that the complexity of any SQ algorithm for this problem is $d^{\mathrm{poly}(1/\epsilon)}$, even when the class $\mathcal{C}$ is simple so that $\mathrm{poly}(d/\epsilon)$ samples information-theoretically suffice. Concretely, our SQ lower bound applies when $\mathcal{C}$ is a union of a bounded number of rectangles whose VC dimension and Gaussian surface area are small. As a corollary of our construction, it also follows that the complexity of the previously known algorithm for this task is qualitatively best possible.
[ "['Ilias Diakonikolas' 'Daniel M. Kane' 'Thanasis Pittas' 'Nikos Zarifis']" ]
null
null
2403.02302
null
null
http://arxiv.org/pdf/2403.02302v2
2024-03-20T20:05:45Z
2024-03-04T18:32:12Z
Beyond Specialization: Assessing the Capabilities of MLLMs in Age and Gender Estimation
Multimodal Large Language Models (MLLMs) have recently gained immense popularity. Powerful commercial models like ChatGPT-4V and Gemini, as well as open-source ones such as LLaVA, are essentially general-purpose models and are applied to solve a wide variety of tasks, including those in computer vision. These neural networks possess such strong general knowledge and reasoning abilities that they have proven capable of working even on tasks for which they were not specifically trained. We compared the capabilities of the most powerful MLLMs to date: ShareGPT4V, ChatGPT, LLaVA-Next in a specialized task of age and gender estimation with our state-of-the-art specialized model, MiVOLO. We also updated MiVOLO and provide details and new metrics in this article. This comparison has yielded some interesting results and insights about the strengths and weaknesses of the participating models. Furthermore, we attempted various ways to fine-tune the ShareGPT4V model for this specific task, aiming to achieve state-of-the-art results in this particular challenge. Although such a model would not be practical in production, as it is incredibly expensive compared to a specialized model like MiVOLO, it could be very useful in some tasks, like data annotation.
[ "['Maksim Kuprashevich' 'Grigorii Alekseenko' 'Irina Tolstykh']" ]
null
null
2403.02310
null
null
http://arxiv.org/pdf/2403.02310v3
2024-06-17T21:10:46Z
2024-03-04T18:47:08Z
Taming Throughput-Latency Tradeoff in LLM Inference with Sarathi-Serve
Each LLM serving request goes through two phases. The first is prefill, which processes the entire input prompt and produces the first output token, and the second is decode, which generates the rest of the output tokens, one at a time. Prefill iterations have high latency but saturate GPU compute due to parallel processing of the input prompt. In contrast, decode iterations have low latency but also low compute utilization because a decode iteration processes only a single token per request. This makes batching highly effective for decodes and consequently for overall throughput. However, batching multiple requests leads to an interleaving of prefill and decode iterations, which makes it challenging to achieve both high throughput and low latency. We introduce an efficient LLM inference scheduler, Sarathi-Serve, to address this throughput-latency tradeoff. Sarathi-Serve introduces chunked-prefills, which splits a prefill request into near-equal-sized chunks, and stall-free scheduling, which adds new requests to a batch without pausing ongoing decodes. Stall-free scheduling unlocks the opportunity to improve throughput with large batch sizes while minimizing the effect of batching on latency. Furthermore, uniform batches in Sarathi-Serve ameliorate the imbalance between iterations, resulting in minimal pipeline bubbles. Our techniques yield significant improvements in inference performance across models and hardware under tail latency constraints. For Mistral-7B on a single A100 GPU, we achieve 2.6x higher serving capacity, and up to 3.7x higher serving capacity for the Yi-34B model on two A100 GPUs, as compared to vLLM. When used with pipeline parallelism on Falcon-180B, Sarathi-Serve provides up to a 5.6x gain in end-to-end serving capacity. The source code for Sarathi-Serve is available at https://github.com/microsoft/sarathi-serve.
[ "['Amey Agrawal' 'Nitin Kedia' 'Ashish Panwar' 'Jayashree Mohan'\n 'Nipun Kwatra' 'Bhargav S. Gulavani' 'Alexey Tumanov'\n 'Ramachandran Ramjee']" ]
null
null
2403.02325
null
null
http://arxiv.org/pdf/2403.02325v1
2024-03-04T18:55:30Z
2024-03-04T18:55:30Z
Contrastive Region Guidance: Improving Grounding in Vision-Language Models without Training
Highlighting particularly relevant regions of an image can improve the performance of vision-language models (VLMs) on various vision-language (VL) tasks by guiding the model to attend more closely to these regions of interest. For example, VLMs can be given a "visual prompt", where visual markers such as bounding boxes delineate key image regions. However, current VLMs that can incorporate visual guidance are either proprietary and expensive or require costly training on curated data that includes visual prompts. We introduce Contrastive Region Guidance (CRG), a training-free guidance method that enables open-source VLMs to respond to visual prompts. CRG contrasts model outputs produced with and without visual prompts, factoring out biases revealed by the model when answering without the information required to produce a correct answer (i.e., the model's prior). CRG achieves substantial improvements in a wide variety of VL tasks: When region annotations are provided, CRG increases absolute accuracy by up to 11.1% on ViP-Bench, a collection of six diverse region-based tasks such as recognition, math, and object relationship reasoning. We also show CRG's applicability to spatial reasoning, with 10% improvement on What'sUp, as well as to compositional generalization -- improving accuracy by 11.5% and 7.5% on two challenging splits from SugarCrepe -- and to image-text alignment for generated images, where we improve by up to 8.4 AUROC and 6.8 F1 points on SeeTRUE. When reference regions are absent, CRG allows us to re-rank proposed regions in referring expression comprehension and phrase grounding benchmarks like RefCOCO/+/g and Flickr30K Entities, with an average gain of 3.2% in accuracy. Our analysis explores alternative masking strategies for CRG, quantifies CRG's probability shift, and evaluates the role of region guidance strength, empirically validating CRG's design choices.
[ "['David Wan' 'Jaemin Cho' 'Elias Stengel-Eskin' 'Mohit Bansal']" ]
null
null
2403.02329
null
null
http://arxiv.org/pdf/2403.02329v1
2024-03-04T18:57:11Z
2024-03-04T18:57:11Z
COMMIT: Certifying Robustness of Multi-Sensor Fusion Systems against Semantic Attacks
Multi-sensor fusion systems (MSFs) play a vital role as the perception module in modern autonomous vehicles (AVs). Therefore, ensuring their robustness against common and realistic adversarial semantic transformations, such as rotation and shifting in the physical world, is crucial for the safety of AVs. While empirical evidence suggests that MSFs exhibit improved robustness compared to single-modal models, they are still vulnerable to adversarial semantic transformations. Despite the proposal of empirical defenses, several works show that these defenses can be attacked again by new adaptive attacks. So far, no certified defense has been proposed for MSFs. In this work, we propose COMMIT, the first robustness certification framework to certify the robustness of multi-sensor fusion systems against semantic attacks. In particular, we propose a practical anisotropic noise mechanism that leverages randomized smoothing with multi-modal data and performs a grid-based splitting method to characterize complex semantic transformations. We also propose efficient algorithms to compute the certification in terms of object detection accuracy and IoU for large-scale MSF models. Empirically, we evaluate the efficacy of COMMIT in different settings and provide a comprehensive benchmark of certified robustness for different MSF models using the CARLA simulation platform. We show that the certification for MSF models is at most 48.39% higher than that of single-modal models, which validates the advantages of MSF models. We believe our certification framework and benchmark will contribute an important step towards certifiably robust AVs in practice.
[ "['Zijian Huang' 'Wenda Chu' 'Linyi Li' 'Chejian Xu' 'Bo Li']" ]
null
null
2403.02334
null
null
http://arxiv.org/pdf/2403.02334v1
2024-03-04T18:58:46Z
2024-03-04T18:58:46Z
Gradient Correlation Subspace Learning against Catastrophic Forgetting
Efficient continual learning techniques have been a topic of significant research over the last few years. A fundamental problem with such learning is the severe degradation of performance on previously learned tasks, also known as catastrophic forgetting. This paper introduces a novel method to reduce catastrophic forgetting in the context of incremental class learning, called Gradient Correlation Subspace Learning (GCSL). The method detects a subspace of the weights that is least affected by previous tasks and projects the weights to train for the new task into said subspace. The method can be applied to one or more layers of a given network architecture, and the size of the subspace used can be altered from layer to layer and task to task. Code will be available at https://github.com/vgthengane/GCSL
[ "['Tammuz Dubnov' 'Vishal Thengane']" ]
null
null
2403.02338
null
null
http://arxiv.org/pdf/2403.02338v1
2024-03-04T18:59:30Z
2024-03-04T18:59:30Z
Twisting Lids Off with Two Hands
Manipulating objects with two multi-fingered hands has been a long-standing challenge in robotics, attributed to the contact-rich nature of many manipulation tasks and the complexity inherent in coordinating a high-dimensional bimanual system. In this work, we consider the problem of twisting lids of various bottle-like objects with two hands, and demonstrate that policies trained in simulation using deep reinforcement learning can be effectively transferred to the real world. With novel engineering insights into physical modeling, real-time perception, and reward design, the policy demonstrates generalization capabilities across a diverse set of unseen objects, showcasing dynamic and dexterous behaviors. Our findings serve as compelling evidence that deep reinforcement learning combined with sim-to-real transfer remains a promising approach for addressing manipulation problems of unprecedented complexity.
[ "['Toru Lin' 'Zhao-Heng Yin' 'Haozhi Qi' 'Pieter Abbeel' 'Jitendra Malik']" ]
null
null
2403.02347
null
null
http://arxiv.org/pdf/2403.02347v2
2024-06-19T12:21:15Z
2024-02-29T23:20:19Z
On the Convergence of Federated Learning Algorithms without Data Similarity
Data similarity assumptions have traditionally been relied upon to understand the convergence behaviors of federated learning methods. Unfortunately, this approach often demands fine-tuning step sizes based on the level of data similarity. When data similarity is low, these small step sizes result in an unacceptably slow convergence speed for federated methods. In this paper, we present a novel and unified framework for analyzing the convergence of federated learning algorithms without the need for data similarity conditions. Our analysis centers on an inequality that captures the influence of step sizes on algorithmic convergence performance. By applying our theorems to well-known federated algorithms, we derive precise expressions for three widely used step size schedules: fixed, diminishing, and step-decay step sizes, which are independent of data similarity conditions. Finally, we conduct comprehensive evaluations of the performance of these federated learning algorithms, employing the proposed step size strategies to train deep neural network models on benchmark datasets under varying data similarity conditions. Our findings demonstrate significant improvements in convergence speed and overall performance, marking a substantial advancement in federated learning research.
[ "['Ali Beikmohammadi' 'Sarit Khirirat' 'Sindri Magnússon']" ]
null
null
2403.02352
null
null
http://arxiv.org/pdf/2403.02352v1
2024-03-01T19:24:37Z
2024-03-01T19:24:37Z
ATP: Enabling Fast LLM Serving via Attention on Top Principal Keys
We propose a new attention mechanism with linear complexity, ATP, that fixates \textbf{A}ttention on \textbf{T}op \textbf{P}rincipal keys, rather than on each individual token. Particularly, ATP is driven by an important observation that input sequences are typically low-rank, i.e., input sequences can be represented by a few principal bases. Therefore, instead of directly iterating over all the input tokens, ATP transforms inputs into an orthogonal space and computes attention only on the top principal bases (keys). Owing to the observed low-rank structure in input sequences, ATP is able to capture semantic relationships in input sequences with a few principal keys. Furthermore, the attention complexity is reduced from \emph{quadratic} to \emph{linear} without incurring a noticeable performance drop. ATP further reduces complexity for other linear layers with low-rank inputs, leading to more speedup compared to prior works that solely target the attention module. Our evaluations on various models (e.g., BERT and Llama) demonstrate that ATP achieves comparable accuracy with much lower computation and memory complexity than the standard attention mechanism. In particular, ATP barely loses accuracy with only $1/2$ principal keys, and only incurs around $2\%$ accuracy drops with $1/4$ principal keys.
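As a rough illustration of attending over principal bases rather than tokens, here is a hedged NumPy sketch of the low-rank idea. It computes the bases via a full SVD for clarity, whereas an efficient implementation would obtain them more cheaply; all names and the exact aggregation of values are assumptions, not the paper's kernel.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def atp_attention(q, k, v, r):
    """q, k, v: (n, d) single-head projections. Attends over r principal
    key bases instead of n tokens, so the attention step is linear in n."""
    u, s, vt = np.linalg.svd(k, full_matrices=False)
    k_r = s[:r, None] * vt[:r]          # (r, d): top principal keys
    v_r = u[:, :r].T @ v                # (r, d): values aggregated per basis
    attn = softmax(q @ k_r.T / np.sqrt(q.shape[-1]))  # (n, r)
    return attn @ v_r                   # (n, d)

rng = np.random.default_rng(0)
out = atp_attention(rng.normal(size=(128, 16)),
                    rng.normal(size=(128, 16)),
                    rng.normal(size=(128, 16)), r=8)
```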
[ "['Yue Niu' 'Saurav Prakash' 'Salman Avestimehr']" ]
null
null
2403.02354
null
null
http://arxiv.org/pdf/2403.02354v3
2024-06-06T04:27:33Z
2024-03-02T10:14:42Z
Spatio-Temporal Field Neural Networks for Air Quality Inference
The air quality inference problem aims to utilize historical data from a limited number of observation sites to infer the air quality index at an unknown location. Considering the sparsity of data due to the high maintenance cost of the stations, good inference algorithms can effectively save costs and refine the data granularity. While spatio-temporal graph neural networks have made excellent progress on this problem, their non-Euclidean and discrete modeling of reality limits their potential. In this work, we make the first attempt to combine two different spatio-temporal perspectives, fields and graphs, by proposing a new model, Spatio-Temporal Field Neural Network, and its corresponding new framework, Pyramidal Inference. Extensive experiments validate that our model achieves state-of-the-art performance in nationwide air quality inference in the Chinese Mainland, demonstrating the superiority of our proposed model and framework.
[ "['Yutong Feng' 'Qiongyan Wang' 'Yutong Xia' 'Junlin Huang' 'Siru Zhong'\n 'Yuxuan Liang']" ]
null
null
2403.02355
null
null
http://arxiv.org/pdf/2403.02355v1
2024-03-02T16:50:48Z
2024-03-02T16:50:48Z
Temporal Knowledge Graph Completion with Time-sensitive Relations in Hypercomplex Space
Temporal knowledge graph completion (TKGC) aims to fill in missing facts within a given temporal knowledge graph at a specific time. Existing methods, operating in real or complex spaces, have demonstrated promising performance in this task. This paper advances beyond conventional approaches by introducing more expressive quaternion representations for TKGC within hypercomplex space. Unlike existing quaternion-based methods, our study focuses on capturing time-sensitive relations rather than time-aware entities. Specifically, we model time-sensitive relations through time-aware rotation and periodic time translation, effectively capturing complex temporal variability. Furthermore, we theoretically demonstrate our method's capability to model symmetric, asymmetric, inverse, compositional, and evolutionary relation patterns. Comprehensive experiments on public datasets validate that our proposed approach achieves state-of-the-art performance in the field of TKGC.
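To make the hypercomplex machinery concrete, here is a small illustrative sketch of the quaternion Hamilton product, together with a hypothetical score in the spirit of "time-aware rotation plus periodic time translation". The scoring form, frequencies, and normalization are assumptions, not the paper's exact model.

```python
import numpy as np

def hamilton(p, q):
    """Hamilton product of quaternions p = (a, b, c, d) and q = (e, f, g, h)."""
    a, b, c, d = p
    e, f, g, h = q
    return np.array([
        a*e - b*f - c*g - d*h,
        a*f + b*e + c*h - d*g,
        a*g - b*h + c*e + d*f,
        a*h + b*g - c*f + d*e,
    ])

def score(head_q, rel_q, tail_q, t, freq=np.array([1.0, 2.0, 0.5, 3.0])):
    """Hypothetical plausibility score for a fact (head, rel, tail, t)."""
    rel_unit = rel_q / np.linalg.norm(rel_q)  # unit quaternion = rotation
    rotated = hamilton(head_q, rel_unit)      # time-aware rotation
    shifted = rotated + np.sin(freq * t)      # periodic time translation
    return -np.linalg.norm(shifted - tail_q)  # higher is more plausible
```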
[ "['Li Cai' 'Xin Mao' 'Zhihong Wang' 'Shangqing Zhao' 'Yuhao Zhou'\n 'Changxu Wu' 'Man Lan']" ]
null
null
2403.02360
null
null
http://arxiv.org/pdf/2403.02360v1
2024-03-04T05:10:28Z
2024-03-04T05:10:28Z
Towards Optimal Customized Architecture for Heterogeneous Federated Learning with Contrastive Cloud-Edge Model Decoupling
Federated learning, as a promising distributed learning paradigm, enables collaborative training of a global model across multiple network edge clients without the need for central data collection. However, the heterogeneity of edge data distributions drags the model towards local minima, which can be distant from the global optimum. Such heterogeneity often leads to slow convergence and substantial communication overhead. To address these issues, we propose a novel federated learning framework called FedCMD, a model-decoupling approach tailored to cloud-edge supported federated learning that separates deep neural networks into a body for capturing shared representations in the cloud and a personalized head for mitigating data heterogeneity. Our motivation is that, through a deep investigation of the performance of selecting different neural network layers as the personalized head, we found that rigidly assigning the last layer as the personalized head, as in current studies, is not always optimal. Instead, it is necessary to dynamically select the personalized layer that maximizes training performance by taking the representation difference between neighboring layers into account. To find the optimal personalized layer, we utilize the low-dimensional representation of each layer to contrast feature distribution transfer and introduce a Wasserstein-based layer selection method aimed at identifying the best-match layer for personalization. Additionally, a weighted global aggregation algorithm is proposed based on the selected personalized layer for the practical application of FedCMD. Extensive experiments on ten benchmarks demonstrate the efficiency and superior performance of our solution compared with nine state-of-the-art solutions. All code and results are available at https://github.com/elegy112138/FedCMD.
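A hedged sketch of the Wasserstein-based layer-selection step, using SciPy's 1-D Wasserstein distance averaged per dimension as a simple stand-in. It assumes all layer representations have been projected to a common low dimension, and it omits the contrastive component of the full method.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def layer_shift(rep_a: np.ndarray, rep_b: np.ndarray) -> float:
    """rep_a, rep_b: (n_samples, d) low-dimensional activations of two
    neighboring layers; returns the average per-dimension W1 distance."""
    return float(np.mean([wasserstein_distance(rep_a[:, j], rep_b[:, j])
                          for j in range(rep_a.shape[1])]))

def select_personalized_layer(reps: list) -> int:
    """reps[i]: (n_samples, d) representation of layer i. Returns the index
    of the layer whose representation shifts most from its predecessor."""
    shifts = [layer_shift(reps[i - 1], reps[i]) for i in range(1, len(reps))]
    return int(np.argmax(shifts)) + 1

rng = np.random.default_rng(0)
reps = [rng.normal(size=(256, 8)) + i for i in range(4)]  # toy activations
print(select_personalized_layer(reps))
```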
[ "['Xingyan Chen' 'Tian Du' 'Mu Wang' 'Tiancheng Gu' 'Yu Zhao' 'Gang Kou'\n 'Changqiao Xu' 'Dapeng Oliver Wu']" ]
null
null
2403.02363
null
null
http://arxiv.org/pdf/2403.02363v1
2024-03-04T08:06:57Z
2024-03-04T08:06:57Z
Addressing Long-Tail Noisy Label Learning Problems: a Two-Stage Solution with Label Refurbishment Considering Label Rarity
Real-world datasets commonly exhibit noisy labels and class imbalance, such as long-tailed distributions. While previous research addresses this issue by differentiating noisy and clean samples, reliance on information from predictions based on noisy long-tailed data introduces potential errors. To overcome the limitations of prior works, we introduce an effective two-stage approach by combining soft-label refurbishing with multi-expert ensemble learning. In the first stage of robust soft label refurbishing, we acquire unbiased features through contrastive learning, making preliminary predictions using a classifier trained with a carefully designed BAlanced Noise-tolerant Cross-entropy (BANC) loss. In the second stage, our label refurbishment method is applied to obtain soft labels for multi-expert ensemble learning, providing a principled solution to the long-tail noisy label problem. Experiments conducted across multiple benchmarks validate the superiority of our approach, Label Refurbishment considering Label Rarity (LR^2), achieving remarkable accuracies of 94.19% and 77.05% on simulated noisy CIFAR-10 and CIFAR-100 long-tail datasets, as well as 77.74% and 81.40% on real-noise long-tail datasets, Food-101N and Animal-10N, surpassing existing state-of-the-art methods.
[ "['Ying-Hsuan Wu' 'Jun-Wei Hsieh' 'Li Xin' 'Shin-You Teng' 'Yi-Kuan Hsieh'\n 'Ming-Ching Chang']" ]
null
null
2403.02368
null
null
http://arxiv.org/abs/2403.02368v1
2024-03-04T13:22:53Z
2024-03-04T13:22:53Z
A Novel Hybrid Feature Importance and Feature Interaction Detection Framework for Predictive Optimization in Industry 4.0 Applications
Advanced machine learning algorithms are increasingly utilized to provide data-based prediction and decision-making support in Industry 4.0. However, the prediction accuracy achieved by existing models is insufficient to warrant practical implementation in real-world applications. This is because not all features present in real-world datasets are directly relevant to the predictive analysis being conducted. Consequently, the careful incorporation of select features has the potential to yield a substantial positive impact on the outcome. To address this research gap, this paper proposes a novel hybrid framework that combines a feature importance detector, local interpretable model-agnostic explanations (LIME), with a feature interaction detector, neural interaction detection (NID), to improve prediction accuracy. By applying the proposed framework, unnecessary features can be eliminated and interactions encoded, generating a dataset more conducive to prediction. Subsequently, the proposed model is deployed to refine the prediction of electricity consumption in foundry processing. The experimental outcomes reveal an improvement of up to 9.56% in the R2 score and a reduction of up to 24.05% in the root mean square error.
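As a rough illustration of the two-detector pipeline, the sketch below uses scikit-learn's permutation importance as a stand-in for LIME and hand-codes a single product term as a stand-in for an NID-detected interaction. The dataset, importance threshold, and interaction pair are all assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(4)
X = rng.normal(size=(500, 8))
y = X[:, 0] * X[:, 1] + X[:, 2] + 0.1 * rng.normal(size=500)

# Stage 1 stand-in: estimate global feature importance, drop weak features.
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
imp = permutation_importance(model, X, y, n_repeats=5, random_state=0)
keep = np.where(imp.importances_mean > 0.01)[0]

# Stage 2 stand-in: encode one assumed detected interaction (features 0, 1).
X_refined = np.column_stack([X[:, keep], X[:, 0] * X[:, 1]])
print(keep, X_refined.shape)
```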
[ "['Zhipeng Ma' 'Bo Nørregaard Jørgensen' 'Zheng Grace Ma']" ]
null
null
2403.02372
null
null
http://arxiv.org/pdf/2403.02372v1
2024-03-04T18:23:55Z
2024-03-04T18:23:55Z
OTClean: Data Cleaning for Conditional Independence Violations using Optimal Transport
Ensuring Conditional Independence (CI) constraints is pivotal for the development of fair and trustworthy machine learning models. In this paper, we introduce OTClean, a framework that harnesses optimal transport theory for data repair under CI constraints. Optimal transport theory provides a rigorous framework for measuring the discrepancy between probability distributions, thereby ensuring control over data utility. We formulate the data repair problem concerning CIs as a Quadratically Constrained Linear Program (QCLP) and propose an alternating method for its solution. However, this approach faces scalability issues due to the computational cost associated with computing optimal transport distances, such as the Wasserstein distance. To overcome these scalability challenges, we reframe our problem as a regularized optimization problem, enabling us to develop an iterative algorithm inspired by Sinkhorn's matrix scaling algorithm, which efficiently addresses high-dimensional and large-scale data. Through extensive experiments, we demonstrate the efficacy and efficiency of our proposed methods, showcasing their practical utility in real-world data cleaning and preprocessing tasks. Furthermore, we provide comparisons with traditional approaches, highlighting the superiority of our techniques in terms of preserving data utility while ensuring adherence to the desired CI constraints.
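For readers unfamiliar with the matrix-scaling routine the iterative algorithm is inspired by, here is a minimal Sinkhorn sketch for entropy-regularized optimal transport between two discrete marginals; the cost matrix and regularization strength are illustrative.

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.1, n_iter=200):
    """a: (n,), b: (m,) marginals; C: (n, m) cost matrix.
    Returns a transport plan whose marginals approximate a and b."""
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)   # alternate the two matrix-scaling updates
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

a = np.ones(4) / 4
b = np.ones(5) / 5
C = np.abs(np.subtract.outer(np.linspace(0, 1, 4), np.linspace(0, 1, 5)))
P = sinkhorn(a, b, C)
print(P.sum(axis=1), P.sum(axis=0))  # both close to a and b
```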
[ "['Alireza Pirhadi' 'Mohammad Hossein Moslemi' 'Alexander Cloninger'\n 'Mostafa Milani' 'Babak Salimi']" ]
null
null
2403.02405
null
null
http://arxiv.org/pdf/2403.02405v1
2024-03-04T19:01:14Z
2024-03-04T19:01:14Z
Classification of the Fashion-MNIST Dataset on a Quantum Computer
The potential impact of quantum machine learning algorithms on industrial applications remains an exciting open question. Conventional methods for encoding classical data into quantum computers are not only too costly for a potential quantum advantage in the algorithms but also severely limit the scale of feasible experiments on current hardware. Therefore, recent works, despite claiming the near-term suitability of their algorithms, do not provide experimental benchmarking on standard machine learning datasets. We attempt to solve the data encoding problem by improving a recently proposed variational algorithm [1] that approximately prepares the encoded data, using asymptotically shallow circuits that fit the native gate set and topology of currently available quantum computers. We apply the improved algorithm to encode the Fashion-MNIST dataset [2], which can be directly used in future empirical studies of quantum machine learning algorithms. We deploy simple quantum variational classifiers trained on the encoded dataset on a current quantum computer ibmq-kolkata [3] and achieve moderate accuracies, providing a proof of concept for the near-term usability of our data encoding method.
[ "['Kevin Shen' 'Bernhard Jobst' 'Elvira Shishenina' 'Frank Pollmann']" ]
null
null
2403.02411
null
null
http://arxiv.org/pdf/2403.02411v4
2024-06-13T19:55:19Z
2024-03-04T19:08:20Z
NiNformer: A Network in Network Transformer with Token Mixing Generated Gating Function
The attention mechanism is the main component of the transformer architecture, and since its introduction, it has led to significant advancements in deep learning that span many domains and multiple tasks. The attention mechanism was brought to computer vision by the Vision Transformer (ViT), and its usage has expanded into many tasks in the vision domain, such as classification, segmentation, object detection, and image generation. While this mechanism is very expressive and capable, it comes with the drawback of being computationally expensive and requiring datasets of considerable size for effective optimization. To address these shortcomings, many designs have been proposed in the literature to reduce the computational burden and alleviate the data size requirements. Examples of such attempts in the vision domain are the MLP-Mixer, the Conv-Mixer, the Perceiver-IO, and many more. This paper introduces a new computational block as an alternative to the standard ViT block that reduces the compute burden by replacing the normal attention layers with a Network-in-Network structure, enhancing the static approach of the MLP-Mixer with a dynamic system that learns an element-wise gating function via a token-mixing process. Extensive experimentation shows that the proposed design provides better performance than the baseline architectures on multiple datasets applied in the image classification task of the vision domain.
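A hedged PyTorch sketch of the general idea: a token-mixing MLP generates an element-wise gate applied inside a residual block. Layer sizes, placement of normalization, and the sigmoid gate are assumptions rather than the paper's exact block.

```python
import torch
import torch.nn as nn

class TokenMixGate(nn.Module):
    """Illustrative Network-in-Network style block: a token-mixing MLP
    produces a dynamic element-wise gate for a channel MLP's output."""
    def __init__(self, n_tokens: int, dim: int, hidden: int = 256):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        # Mixes information across tokens to generate the gate.
        self.token_mix = nn.Sequential(
            nn.Linear(n_tokens, hidden), nn.GELU(), nn.Linear(hidden, n_tokens)
        )
        self.channel_mlp = nn.Sequential(
            nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, N, D)
        h = self.norm(x)
        gate = torch.sigmoid(self.token_mix(h.transpose(1, 2)).transpose(1, 2))
        return x + gate * self.channel_mlp(h)  # element-wise gated residual

y = TokenMixGate(n_tokens=16, dim=32)(torch.randn(2, 16, 32))
```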
[ "['Abdullah Nazhat Abdullah' 'Tarkan Aydin']" ]
null
null
2403.02418
null
null
http://arxiv.org/pdf/2403.02418v1
2024-03-04T19:12:13Z
2024-03-04T19:12:13Z
From Zero to Hero: How local curvature at artless initial conditions leads away from bad minima
We investigate the optimization dynamics of gradient descent in a non-convex and high-dimensional setting, with a focus on the phase retrieval problem as a case study for complex loss landscapes. We first study the high-dimensional limit where both the number $M$ and the dimension $N$ of the data go to infinity at fixed signal-to-noise ratio $\alpha = M/N$. By analyzing how the local curvature changes during optimization, we uncover that for intermediate $\alpha$, the Hessian displays a downward direction pointing towards good minima in the first regime of the descent, before the dynamics become trapped in bad minima at the end. Hence, the local landscape is benign and informative at first, before gradient descent brings the system into an uninformative maze. The transition between the two regimes is associated with a BBP-type threshold in the time-dependent Hessian. Through both theoretical analysis and numerical experiments, we show that in practical cases, i.e. for finite but very large $N$, successful optimization via gradient descent in phase retrieval is achieved by falling towards the good minima before reaching the bad ones. This mechanism explains why successful recovery is obtained well before the algorithmic transition corresponding to the high-dimensional limit. Technically, this is associated with strong logarithmic corrections of the algorithmic transition at large $N$ with respect to the one expected in the $N \to \infty$ limit. Our analysis sheds light on this mechanism, which facilitates gradient descent dynamics in large but finite dimensions, and highlights the importance of good initialization of spectral properties for optimization in complex high-dimensional landscapes.
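For concreteness, a toy NumPy instance of the setting studied: plain gradient descent from an uninformed random initialization on the quartic phase retrieval loss. Dimensions, step size, and iteration count are illustrative, and recovery at a given $\alpha$ is not guaranteed for any single run.

```python
import numpy as np

# Recover x from y_i = (a_i . x)^2 by gradient descent on
# L(w) = (1 / 4M) * sum_i ((a_i . w)^2 - y_i)^2.
rng = np.random.default_rng(1)
N, M = 50, 200                          # alpha = M / N = 4
x = rng.normal(size=N); x /= np.linalg.norm(x)
A = rng.normal(size=(M, N))
y = (A @ x) ** 2

w = 0.1 * rng.normal(size=N)            # "artless" random initialization
lr = 1e-3
for _ in range(2000):
    z = A @ w
    grad = A.T @ ((z**2 - y) * z) / M   # gradient of the quartic loss
    w -= lr * grad

# Up to a global sign, w should correlate with x when alpha is large enough.
print(abs(w @ x) / (np.linalg.norm(w) * np.linalg.norm(x)))
```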
[ "['Tony Bonnaire' 'Giulio Biroli' 'Chiara Cammarota']" ]
null
null
2403.02419
null
null
http://arxiv.org/pdf/2403.02419v2
2024-06-04T21:20:33Z
2024-03-04T19:12:48Z
Are More LLM Calls All You Need? Towards Scaling Laws of Compound Inference Systems
Many recent state-of-the-art results in language tasks were achieved using compound systems that perform multiple Language Model (LM) calls and aggregate their responses. However, there is little understanding of how the number of LM calls - e.g., when asking the LM to answer each question multiple times and taking a majority vote - affects such a compound system's performance. In this paper, we initiate the study of scaling properties of compound inference systems. We analyze, theoretically and empirically, how the number of LM calls affects the performance of Vote and Filter-Vote, two of the simplest compound system designs, which aggregate LM responses via majority voting, optionally applying LM filters. We find, surprisingly, that across multiple language tasks, the performance of both Vote and Filter-Vote can first increase but then decrease as a function of the number of LM calls. Our theoretical results suggest that this non-monotonicity is due to the diversity of query difficulties within a task: more LM calls lead to higher performance on "easy" queries, but lower performance on "hard" queries, and non-monotone behavior can emerge when a task contains both types of queries. This insight then allows us to compute, from a small number of samples, the number of LM calls that maximizes system performance, and define an analytical scaling model for both systems. Experiments show that our scaling model can accurately predict the performance of Vote and Filter-Vote systems and thus find the optimal number of LM calls to make.
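The two compound designs studied are easy to sketch; `call_lm` and `lm_filter` below are hypothetical stand-ins for real LM API calls, not any particular library's interface.

```python
from collections import Counter

def vote(question: str, call_lm, k: int) -> str:
    """Vote: ask the LM k times and return the majority answer."""
    answers = [call_lm(question) for _ in range(k)]
    return Counter(answers).most_common(1)[0][0]

def filter_vote(question: str, call_lm, lm_filter, k: int) -> str:
    """Filter-Vote: majority vote over answers that pass an LM filter."""
    answers = [a for a in (call_lm(question) for _ in range(k))
               if lm_filter(question, a)]
    if not answers:                  # fall back if the filter rejects all
        return call_lm(question)
    return Counter(answers).most_common(1)[0][0]
```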
[ "['Lingjiao Chen' 'Jared Quincy Davis' 'Boris Hanin' 'Peter Bailis'\n 'Ion Stoica' 'Matei Zaharia' 'James Zou']" ]
null
null
2403.02426
null
null
http://arxiv.org/pdf/2403.02426v1
2024-03-04T19:18:53Z
2024-03-04T19:18:53Z
Digital Twins and Civil Engineering Phases: Reorienting Adoption Strategies
Digital twin (DT) technology has received immense attention over the years due to the promises it presents to various stakeholders in science and engineering. As a result, different thematic areas of DT have been explored. This is no different in specific fields such as manufacturing, automation, oil and gas, and civil engineering, leading to fragmented approaches for field-specific applications. The civil engineering industry is further disadvantaged in this regard, as it relies on techniques borrowed from other engineering fields for its DT adoption. A rising consequence of these extensions is a concentrated application of DT to the operations and maintenance phase. At the other end of the spectrum, Building Information Modeling (BIM) is pervasively utilized in the planning/design phase, and the transient nature of the construction phase remains a challenge for its DT adoption. In this paper, we present a phase-based development of DT in the Architecture, Engineering, and Construction industry. We commence by presenting succinct expositions on DT as a concept and as a service, and establish a five-level scale system. Furthermore, we separately present a systematic literature review of the conventional techniques employed at each civil engineering phase. In this regard, we identified enabling technologies such as computer vision for extended sensing and the Internet of Things for reliable integration. Ultimately, we attempt to reveal DT as an important tool across the entire life cycle of civil engineering projects and nudge researchers to think more holistically in their quest for the integration of DT for civil engineering applications.
[ "['Taiwo A. Adebiyi' 'Nafeezat A. Ajenifuja' 'Ruda Zhang']" ]
null
null
2403.02429
null
null
http://arxiv.org/pdf/2403.02429v1
2024-03-04T19:22:09Z
2024-03-04T19:22:09Z
Towards efficient deep autoencoders for multivariate time series anomaly detection
Multivariate time series anomaly detection is a crucial problem in many industrial and research applications. Timely detection of anomalies allows, for instance, preventing defects in manufacturing processes and failures in cyberphysical systems. Deep learning methods are preferred among others for their accuracy and robustness in the analysis of complex multivariate data. However, a key aspect is being able to extract predictions in a timely manner, to accommodate real-time requirements in different applications. In the case of deep learning models, model reduction is extremely important to achieve optimal results in real-time systems with limited time and memory constraints. In this paper, we address this issue by proposing a novel compression method for deep autoencoders that involves two key factors. First, pruning reduces the number of weights, while preventing catastrophic drops in accuracy by means of a fast search process that identifies high sparsity levels. Second, linear and non-linear quantization reduces model complexity by reducing the number of bits for every single weight. The combined contribution of these two aspects allows the model size to be reduced, by removing a subset of the weights (pruning) and decreasing their bit-width (quantization). As a result, the compressed model is faster and easier to adopt in highly constrained hardware environments. Experiments performed on popular multivariate anomaly detection benchmarks show that our method is capable of achieving significant model compression ratios (between 80% and 95%) without a significant reduction in anomaly detection performance.
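A hedged sketch of the two compression steps on a raw weight matrix: magnitude pruning to a target sparsity, followed by uniform linear quantization of the surviving weights. The sparsity level and bit-width are illustrative, and the paper's fast sparsity-search procedure is omitted.

```python
import numpy as np

def prune(w: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude fraction `sparsity` of weights."""
    thresh = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) >= thresh, w, 0.0)

def quantize(w: np.ndarray, bits: int = 8) -> np.ndarray:
    """Uniform (linear) quantization to 2**bits levels over w's range."""
    lo, hi = w.min(), w.max()
    scale = (hi - lo) / (2**bits - 1)
    return np.round((w - lo) / scale) * scale + lo

w = np.random.default_rng(2).normal(size=(128, 64))
w_compressed = quantize(prune(w, sparsity=0.9), bits=4)
print((w_compressed == 0).mean())  # achieved sparsity
```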
[ "['Marcin Pietroń' 'Dominik Żurek' 'Kamil Faber' 'Roberto Corizzo']" ]
null
null
2403.02432
null
null
http://arxiv.org/pdf/2403.02432v1
2024-03-04T19:26:39Z
2024-03-04T19:26:39Z
On the impact of measure pre-conditionings on general parametric ML models and transfer learning via domain adaptation
We study a new technique for understanding the convergence of learning agents under small modifications of data. We show that such convergence can be understood via an analogue of Fatou's lemma which yields $\Gamma$-convergence. We show its relevance and applications in general machine learning tasks and domain-adaptation transfer learning.
[ "['Joaquín Sánchez García']" ]
null
null
2403.02437
null
null
http://arxiv.org/pdf/2403.02437v2
2024-06-05T19:00:03Z
2024-03-04T19:35:08Z
SoK: Challenges and Opportunities in Federated Unlearning
Federated learning (FL), introduced in 2017, facilitates collaborative learning between non-trusting parties with no need for the parties to explicitly share their data among themselves. This allows training models on user data while respecting privacy regulations such as GDPR and CPRA. However, emerging privacy requirements may mandate model owners to be able to \emph{forget} some learned data, e.g., when requested by data owners or law enforcement. This has given birth to an active field of research called \emph{machine unlearning}. In the context of FL, many techniques developed for unlearning in centralized settings are not trivially applicable. This is due to the unique differences between centralized and distributed learning, in particular, interactivity, stochasticity, heterogeneity, and limited accessibility in FL. In response, a recent line of work has focused on developing unlearning mechanisms tailored to FL. This SoK paper aims to take a deep look at the \emph{federated unlearning} literature, with the goal of identifying research trends and challenges in this emerging field. By carefully categorizing papers published on FL unlearning (since 2020), we aim to pinpoint the unique complexities of federated unlearning, highlighting limitations on directly applying centralized unlearning methods. We compare existing federated unlearning methods regarding influence removal and performance recovery, compare their threat models and assumptions, and discuss their implications and limitations. For instance, we analyze the experimental setup of FL unlearning studies from various perspectives, including data heterogeneity and its simulation, the datasets used for demonstration, and evaluation metrics. Our work aims to offer insights and suggestions for future research on federated unlearning.
[ "['Hyejun Jeong' 'Shiqing Ma' 'Amir Houmansadr']" ]
null
null
2403.02439
null
null
http://arxiv.org/pdf/2403.02439v1
2024-03-04T19:38:50Z
2024-03-04T19:38:50Z
Root Causing Prediction Anomalies Using Explainable AI
This paper presents a novel application of explainable AI (XAI) for root-causing performance degradation in machine learning models that learn continuously from user engagement data. In such systems, a single feature corruption can cause cascading feature, label, and concept drifts. We have successfully applied this technique to improve the reliability of models used in personalized advertising. Performance degradation in such systems manifests as prediction anomalies in the models. These models are typically trained continuously using features that are produced by hundreds of real-time data processing pipelines or derived from other upstream models. A failure in any of these pipelines or an instability in any of the upstream models can cause feature corruption, causing the model's predicted output to deviate from the actual output and the training data to become corrupted. The causal relationship between the features and the predicted output is complex, and root-causing is challenging due to the scale and dynamism of the system. We demonstrate how temporal shifts in the global feature importance distribution can effectively isolate the cause of a prediction anomaly, with better recall than model-to-feature correlation methods. The technique appears to be effective even when approximating the local feature importance using a simple perturbation-based method and aggregating over a few thousand examples. We have found this technique to be a model-agnostic, cheap, and effective way to monitor complex data pipelines in production, and have deployed a system for continuously analyzing the global feature importance distribution of continuously trained models.
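A minimal sketch of the monitoring signal: compare normalized global feature-importance distributions from before and after an anomaly and rank features by how much importance mass moved. How the importance vectors are produced (e.g., perturbation-based attribution aggregated over a few thousand examples) is assumed and not shown.

```python
import numpy as np

def importance_shift(imp_before: np.ndarray, imp_after: np.ndarray):
    """imp_*: (n_features,) non-negative global importances.
    Returns feature indices ranked by shift, plus the per-feature shift."""
    p = imp_before / imp_before.sum()
    q = imp_after / imp_after.sum()
    shift = np.abs(p - q)                  # per-feature importance mass moved
    return np.argsort(shift)[::-1], shift  # ranked root-cause suspects

suspects, shift = importance_shift(np.array([0.4, 0.3, 0.2, 0.1]),
                                   np.array([0.1, 0.3, 0.2, 0.4]))
print(suspects[:2])  # features whose importance moved the most
```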
[ "['Ramanathan Vishnampet' 'Rajesh Shenoy' 'Jianhui Chen' 'Anuj Gupta']" ]
null
null
2403.02444
null
null
http://arxiv.org/pdf/2403.02444v1
2024-03-04T19:56:19Z
2024-03-04T19:56:19Z
Anatomically Constrained Tractography of the Fetal Brain
Diffusion-weighted Magnetic Resonance Imaging (dMRI) is increasingly used to study the fetal brain in utero. An important computation enabled by dMRI is streamline tractography, which has unique applications such as tract-specific analysis of the brain white matter and structural connectivity assessment. However, due to the low fetal dMRI data quality and the challenging nature of tractography, existing methods tend to produce highly inaccurate results. They generate many false streamlines while failing to reconstruct streamlines that constitute the major white matter tracts. In this paper, we advocate for anatomically constrained tractography based on an accurate segmentation of the fetal brain tissue directly in the dMRI space. We develop a deep learning method to compute the segmentation automatically. Experiments on independent test data show that this method can accurately segment the fetal brain tissue and drastically improve tractography results. It enables the reconstruction of highly curved tracts such as optic radiations. Importantly, our method infers the tissue segmentation and streamline propagation direction from a diffusion tensor fit to the dMRI data, making it applicable to routine fetal dMRI scans. The proposed method can lead to significant improvements in the accuracy and reproducibility of quantitative assessment of the fetal brain with dMRI.
[ "['Camilo Calixto' 'Camilo Jaimes' 'Matheus D. Soldatelli'\n 'Simon K. Warfield' 'Ali Gholipour' 'Davood Karimi']" ]
null
null
2403.02446
null
null
http://arxiv.org/pdf/2403.02446v1
2024-03-04T19:59:32Z
2024-03-04T19:59:32Z
On Latency Predictors for Neural Architecture Search
Efficient deployment of neural networks (NN) requires the co-optimization of accuracy and latency. For example, hardware-aware neural architecture search has been used to automatically find NN architectures that satisfy a latency constraint on a specific hardware device. Central to these search algorithms is a prediction model that is designed to provide a hardware latency estimate for a candidate NN architecture. Recent research has shown that the sample efficiency of these predictive models can be greatly improved through pre-training on some \textit{training} devices with many samples, and then transferring the predictor to the \textit{test} (target) device. Transfer learning and meta-learning methods have been used for this, but often exhibit significant performance variability. Additionally, the evaluation of existing latency predictors has been largely done on hand-crafted training/test device sets, making it difficult to ascertain design features that compose a robust and general latency predictor. To address these issues, we introduce a comprehensive suite of latency prediction tasks obtained in a principled way through automated partitioning of hardware device sets. We then design a general latency predictor to comprehensively study (1) the predictor architecture, (2) NN sample selection methods, (3) hardware device representations, and (4) NN operation encoding schemes. Building on conclusions from our study, we present an end-to-end latency predictor training strategy that outperforms existing methods on 11 out of 12 difficult latency prediction tasks, improving latency prediction by 22.5% on average, and up to 87.6% on the hardest tasks. Focusing on latency prediction, our HW-Aware NAS reports a $5.8\times$ speedup in wall-clock time. Our code is available at \href{https://github.com/abdelfattah-lab/nasflat_latency}{https://github.com/abdelfattah-lab/nasflat_latency}.
[ "['Yash Akhauri' 'Mohamed S. Abdelfattah']" ]
null
null
2403.02467
null
null
http://arxiv.org/pdf/2403.02467v1
2024-03-04T20:28:28Z
2024-03-04T20:28:28Z
Applied Causal Inference Powered by ML and AI
An introduction to the emerging fusion of machine learning and causal inference. The book presents ideas from classical structural equation models (SEMs) and their modern AI equivalents, directed acyclic graphs (DAGs) and structural causal models (SCMs), and covers Double/Debiased Machine Learning methods for doing inference in such models using modern predictive tools.
[ "['Victor Chernozhukov' 'Christian Hansen' 'Nathan Kallus'\n 'Martin Spindler' 'Vasilis Syrgkanis']" ]
null
null
2403.02469
null
null
http://arxiv.org/pdf/2403.02469v2
2024-04-15T13:51:30Z
2024-03-04T20:29:51Z
Vision-Language Models for Medical Report Generation and Visual Question Answering: A Review
Medical vision-language models (VLMs) combine computer vision (CV) and natural language processing (NLP) to analyze visual and textual medical data. Our paper reviews recent advancements in developing VLMs specialized for healthcare, focusing on models designed for medical report generation and visual question answering (VQA). We provide background on NLP and CV, explaining how techniques from both fields are integrated into VLMs to enable learning from multimodal data. Key areas we address include the exploration of medical vision-language datasets, in-depth analyses of architectures and pre-training strategies employed in recent noteworthy medical VLMs, and comprehensive discussion on evaluation metrics for assessing VLMs' performance in medical report generation and VQA. We also highlight current challenges and propose future directions, including enhancing clinical validity and addressing patient privacy concerns. Overall, our review summarizes recent progress in developing VLMs to harness multimodal medical data for improved healthcare applications.
[ "['Iryna Hartsock' 'Ghulam Rasool']" ]
null
null
2403.02475
null
null
http://arxiv.org/pdf/2403.02475v1
2024-03-04T20:39:24Z
2024-03-04T20:39:24Z
Enhancing LLM Safety via Constrained Direct Preference Optimization
The rapidly increasing capabilities of large language models (LLMs) raise an urgent need to align AI systems with diverse human preferences to simultaneously enhance their usefulness and safety, despite the often conflicting nature of these goals. To address this important problem, a promising approach is to enforce a safety constraint at the fine-tuning stage through a constrained Reinforcement Learning from Human Feedback (RLHF) framework. This approach, however, is computationally expensive and often unstable. In this work, we introduce Constrained DPO (C-DPO), a novel extension of the recently proposed Direct Preference Optimization (DPO) approach for fine-tuning LLMs that is both efficient and lightweight. By integrating dual gradient descent and DPO, our method identifies a nearly optimal trade-off between helpfulness and harmlessness without using reinforcement learning. Empirically, our approach provides a safety guarantee to LLMs that is missing in DPO while achieving significantly higher rewards under the same safety constraint compared to a recently proposed safe RLHF approach. Warning: This paper contains example data that may be offensive or harmful.
[ "['Zixuan Liu' 'Xiaolin Sun' 'Zizhan Zheng']" ]
null
null
2403.02476
null
null
http://arxiv.org/pdf/2403.02476v2
2024-06-25T22:18:09Z
2024-03-04T20:40:02Z
A Simple Finite-Time Analysis of TD Learning with Linear Function Approximation
We study the finite-time convergence of TD learning with linear function approximation under Markovian sampling. Existing proofs for this setting either assume a projection step in the algorithm to simplify the analysis, or require a fairly intricate argument to ensure stability of the iterates. We ask: \textit{Is it possible to retain the simplicity of a projection-based analysis without actually performing a projection step in the algorithm?} Our main contribution is to show this is possible via a novel two-step argument. In the first step, we use induction to prove that under a standard choice of a constant step-size $\alpha$, the iterates generated by TD learning remain uniformly bounded in expectation. In the second step, we establish a recursion that mimics the steady-state dynamics of TD learning up to a bounded perturbation on the order of $O(\alpha^2)$ that captures the effect of Markovian sampling. Combining these pieces leads to an overall approach that considerably simplifies existing proofs. We conjecture that our inductive proof technique will find applications in the analyses of more complex stochastic approximation algorithms, and conclude by providing some examples of such applications.
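For concreteness, a minimal TD(0) loop with linear function approximation under Markovian sampling on a toy chain, matching the projection-free setting analyzed; the chain, features, and constant step size are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n_states, d, gamma, alpha = 5, 3, 0.9, 0.05
P = rng.dirichlet(np.ones(n_states), size=n_states)  # transition matrix
r = rng.normal(size=n_states)                        # per-state rewards
phi = rng.normal(size=(n_states, d))                 # linear state features

theta = np.zeros(d)
s = 0
for _ in range(10_000):                              # Markovian sampling
    s_next = rng.choice(n_states, p=P[s])
    td_error = r[s] + gamma * phi[s_next] @ theta - phi[s] @ theta
    theta += alpha * td_error * phi[s]               # TD(0) update, no projection
    s = s_next
print(theta)
```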
[ "['Aritra Mitra']" ]
null
null
2403.02484
null
null
http://arxiv.org/pdf/2403.02484v1
2024-03-04T21:05:52Z
2024-03-04T21:05:52Z
Encodings for Prediction-based Neural Architecture Search
Predictor-based methods have substantially enhanced Neural Architecture Search (NAS) optimization. The efficacy of these predictors is largely influenced by the method of encoding neural network architectures. While traditional encodings used an adjacency matrix describing the graph structure of a neural network, novel encodings embrace a variety of approaches from unsupervised pretraining of latent representations to vectors of zero-cost proxies. In this paper, we categorize and investigate neural encodings from three main types: structural, learned, and score-based. Furthermore, we extend these encodings and introduce \textit{unified encodings}, which extend NAS predictors to multiple search spaces. Our analysis draws from experiments conducted on over 1.5 million neural network architectures on NAS spaces such as NASBench-101 (NB101), NB201, NB301, Network Design Spaces (NDS), and TransNASBench-101. Building on our study, we present our predictor \textbf{FLAN}: \textbf{Fl}ow \textbf{A}ttention for \textbf{N}AS. FLAN integrates critical insights on predictor design, transfer learning, and \textit{unified encodings} to enable more than an order of magnitude cost reduction for training NAS accuracy predictors. Our implementation and encodings for all neural networks are open-sourced at \href{https://github.com/abdelfattah-lab/flan_nas}{https://github.com/abdelfattah-lab/flan_nas}.
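A hedged sketch of the simplest structural encoding discussed: flattening a cell's DAG adjacency matrix together with one-hot operation labels into a single predictor input, in the spirit of NASBench-101-style encodings. The operation vocabulary below is an assumption.

```python
import numpy as np

OPS = ["conv3x3", "conv1x1", "maxpool", "skip"]  # assumed op vocabulary

def encode_cell(adj: np.ndarray, ops: list) -> np.ndarray:
    """adj: (n, n) upper-triangular DAG adjacency; ops: list of n op names.
    Returns a flat vector usable as input to a NAS accuracy predictor."""
    one_hot = np.zeros((len(ops), len(OPS)))
    for i, op in enumerate(ops):
        one_hot[i, OPS.index(op)] = 1.0
    iu = np.triu_indices(adj.shape[0], k=1)      # strict upper triangle
    return np.concatenate([adj[iu], one_hot.ravel()])

adj = np.array([[0, 1, 1],
                [0, 0, 1],
                [0, 0, 0]])
vec = encode_cell(adj, ["conv3x3", "maxpool", "skip"])
print(vec.shape)
```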
[ "['Yash Akhauri' 'Mohamed S. Abdelfattah']" ]
null
null
2403.02500
null
null
http://arxiv.org/pdf/2403.02500v1
2024-03-04T21:48:32Z
2024-03-04T21:48:32Z
RVRAE: A Dynamic Factor Model Based on Variational Recurrent Autoencoder for Stock Returns Prediction
In recent years, the dynamic factor model has emerged as a dominant tool in economics and finance, particularly for investment strategies. This model offers improved handling of complex, nonlinear, and noisy market conditions compared to traditional static factor models. The advancement of machine learning, especially in dealing with nonlinear data, has further enhanced asset pricing methodologies. This paper introduces a groundbreaking dynamic factor model named RVRAE. This model is a probabilistic approach that addresses the temporal dependencies and noise in market data. RVRAE ingeniously combines the principles of dynamic factor modeling with the variational recurrent autoencoder (VRAE) from deep learning. A key feature of RVRAE is its use of a prior-posterior learning method. This method fine-tunes the model's learning process by seeking an optimal posterior factor model informed by future data. Notably, RVRAE is adept at risk modeling in volatile stock markets, estimating variances from latent space distributions while also predicting returns. Our empirical tests with real stock market data underscore RVRAE's superior performance compared to various established baseline methods.
[ "['Yilun Wang' 'Shengjie Guo']" ]